diff --git a/doc/administration/database_load_balancing.md b/doc/administration/database_load_balancing.md index fe30d98b92d..a373a15eccd 100644 --- a/doc/administration/database_load_balancing.md +++ b/doc/administration/database_load_balancing.md @@ -161,11 +161,11 @@ records, eventually dropping this hostname from rotation if it can't resolve its The `interval` value specifies the _minimum_ time between checks. If the A record has a TTL greater than this value, then service discovery will honor said TTL. For example, if the TTL of the A record is 90 seconds, then service -discovery will wait at least 90 seconds before checking the A record again. +discovery waits at least 90 seconds before checking the A record again. When the list of hosts is updated, it might take a while for the old connections to be terminated. The `disconnect_timeout` setting can be used to enforce an -upper limit on the time it will take to terminate all old database connections. +upper limit on the time it takes to terminate all old database connections. Some nameservers (like [Consul](https://www.consul.io/docs/discovery/dns#udp-based-dns-queries)) can return a truncated list of hosts when queried over UDP. To overcome this issue, you can use TCP for querying by setting @@ -179,7 +179,7 @@ all-in-one package based installations as well as GitLab Helm chart deployments. If you use an application server that forks, such as Unicorn, you _have to_ update your Unicorn configuration to start service discovery _after_ a fork. -Failure to do so will lead to service discovery only running in the parent +Failure to do so leads to service discovery only running in the parent process. 
If you are using Unicorn, then you can add the following to your Unicorn configuration file: @@ -190,13 +190,13 @@ after_fork do |server, worker| end ``` -This will ensure that service discovery is started in both the parent and all +This ensures that service discovery is started in both the parent and all child processes. ## Balancing queries -Read-only `SELECT` queries will be balanced among all the secondary hosts. -Everything else (including transactions) will be executed on the primary. +Read-only `SELECT` queries are balanced among all the secondary hosts. +Everything else (including transactions) is executed on the primary. Queries such as `SELECT ... FOR UPDATE` are also executed on the primary. ## Prepared statements @@ -207,19 +207,19 @@ response timings. ## Primary sticking -After a write has been performed, GitLab will stick to using the primary for a -certain period of time, scoped to the user that performed the write. GitLab will -revert back to using secondaries when they have either caught up, or after 30 +After a write has been performed, GitLab sticks to using the primary for a +certain period of time, scoped to the user that performed the write. GitLab +reverts to using secondaries when they have either caught up, or after 30 seconds. ## Failover handling -In the event of a failover or an unresponsive database, the load balancer will -try to use the next available host. If no secondaries are available the +In the event of a failover or an unresponsive database, the load balancer +tries to use the next available host. If no secondaries are available the operation is performed on the primary instead. -In the event of a connection error being produced when writing data, the -operation will be retried up to 3 times using an exponential back-off. +If a connection error occurs while writing data, the +operation is retried up to 3 times using an exponential back-off. 
When using load balancing, you should be able to safely restart a database server without it immediately leading to errors being presented to the users. @@ -251,9 +251,9 @@ For example: > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/3526) in [GitLab Premium](https://about.gitlab.com/pricing/) 10.3. -To prevent reading from an outdated secondary the load balancer will check if it +To prevent reading from an outdated secondary the load balancer checks if it is in sync with the primary. If the data is determined to be recent enough the -secondary can be used, otherwise it will be ignored. To reduce the overhead of +secondary is used, otherwise it is ignored. To reduce the overhead of these checks we only perform these checks at certain intervals. There are three configuration options that influence this behavior: diff --git a/doc/administration/nfs.md b/doc/administration/nfs.md index 52948732766..602b7ec220e 100644 --- a/doc/administration/nfs.md +++ b/doc/administration/nfs.md @@ -22,8 +22,8 @@ file system performance, see ## Gitaly and NFS deprecation WARNING: -From GitLab 14.0, enhancements and bug fixes for NFS for Git repositories will no longer be -considered and customer technical support will be considered out of scope. +From GitLab 14.0, enhancements and bug fixes for NFS for Git repositories are no longer +considered and customer technical support is considered out of scope. [Read more about Gitaly and NFS](gitaly/index.md#nfs-deprecation-notice) and [the correct mount options to use](#upgrade-to-gitaly-cluster-or-disable-caching-if-experiencing-data-loss). @@ -81,8 +81,8 @@ When you define your NFS exports, we recommend you also add the following options: - `no_root_squash` - NFS normally changes the `root` user to `nobody`. This is - a good security measure when NFS shares will be accessed by many different - users. 
However, in this case only GitLab will use the NFS share so it + a good security measure when NFS shares are accessed by many different + users. However, in this case only GitLab uses the NFS share so it is safe. GitLab recommends the `no_root_squash` setting because we need to manage file permissions automatically. Without the setting you may receive errors when the Omnibus package tries to alter permissions. Note that GitLab @@ -137,15 +137,15 @@ NFS performance with GitLab can in some cases be improved with [Rugged](https://github.com/libgit2/rugged). NOTE: -From GitLab 12.1, it will automatically be detected if Rugged can and should be used per storage. +From GitLab 12.1, GitLab automatically detects whether Rugged can and should be used per storage. -If you previously enabled Rugged using the feature flag, you will need to unset the feature flag by using: +If you previously enabled Rugged using the feature flag, you need to unset the feature flag by using: ```shell sudo gitlab-rake gitlab:features:unset_rugged ``` -If the Rugged feature flag is explicitly set to either `true` or `false`, GitLab will use the value explicitly set. +If the Rugged feature flag is explicitly set to either `true` or `false`, GitLab uses the value explicitly set. #### Improving NFS performance with Puma @@ -190,7 +190,7 @@ Note there are several options that you should consider using: | `lookupcache=positive` | Tells the NFS client to honor `positive` cache results but invalidates any `negative` cache results. Negative cache results cause problems with Git. Specifically, a `git push` can fail to register uniformly across all NFS clients. The negative cache causes the clients to 'remember' that the files did not exist previously. | `hard` | Instead of `soft`. [Further details](#soft-mount-option). | `cto` | `cto` is the default option, which you should use. Do not use `nocto`. [Further details](#nocto-mount-option). -| `_netdev` | Wait to mount filesystem until network is online. 
See also the [`high_availability['mountpoint']`](https://docs.gitlab.com/omnibus/settings/configuration.html#only-start-omnibus-gitlab-services-after-a-given-file-system-is-mounted) option. +| `_netdev` | Wait to mount file system until network is online. See also the [`high_availability['mountpoint']`](https://docs.gitlab.com/omnibus/settings/configuration.html#only-start-omnibus-gitlab-services-after-a-given-file-system-is-mounted) option. #### `soft` mount option @@ -222,7 +222,7 @@ they highlight that if the NFS client driver caches data, `soft` means there is writes by GitLab are actually on disk. Mount points set with the option `hard` may not perform as well, and if the -NFS server goes down, `hard` will cause processes to hang when interacting with +NFS server goes down, `hard` causes processes to hang when interacting with the mount point. Use `SIGKILL` (`kill -9`) to deal with hung processes. The `intr` option [stopped working in the 2.6 kernel](https://access.redhat.com/solutions/157873). @@ -260,7 +260,7 @@ mountpoint └── uploads ``` -To do so, we'll need to configure Omnibus with the paths to each directory nested +To do so, configure Omnibus with the paths to each directory nested in the mount point as follows: Mount `/gitlab-nfs` then use the following Omnibus @@ -274,7 +274,7 @@ gitlab_ci['builds_directory'] = '/gitlab-nfs/gitlab-data/builds' ``` Run `sudo gitlab-ctl reconfigure` to start using the central location. Be aware -that if you had existing data, you'll need to manually copy or rsync it to +that if you had existing data, you need to manually copy or rsync it to these new locations, and then restart GitLab. ### Bind mounts @@ -296,21 +296,21 @@ NFS mount point is `/gitlab-nfs`. 
Then, add the following bind mounts in /gitlab-nfs/gitlab-data/builds /var/opt/gitlab/gitlab-ci/builds none bind 0 0 ``` -Using bind mounts will require manually making sure the data directories +Using bind mounts requires you to manually make sure the data directories are empty before attempting a restore. Read more about the [restore prerequisites](../raketasks/backup_restore.md). ### Multiple NFS mounts -When using default Omnibus configuration you will need to share 4 data locations +When using default Omnibus configuration you need to share 4 data locations between all GitLab cluster nodes. No other locations should be shared. The following are the 4 locations need to be shared: | Location | Description | Default configuration | | -------- | ----------- | --------------------- | -| `/var/opt/gitlab/git-data` | Git repository data. This will account for a large portion of your data | `git_data_dirs({"default" => { "path" => "/var/opt/gitlab/git-data"} })` +| `/var/opt/gitlab/git-data` | Git repository data. This accounts for a large portion of your data | `git_data_dirs({"default" => { "path" => "/var/opt/gitlab/git-data"} })` | `/var/opt/gitlab/gitlab-rails/uploads` | User uploaded attachments | `gitlab_rails['uploads_directory'] = '/var/opt/gitlab/gitlab-rails/uploads'` -| `/var/opt/gitlab/gitlab-rails/shared` | Build artifacts, GitLab Pages, LFS objects, temp files, etc. If you're using LFS this may also account for a large portion of your data | `gitlab_rails['shared_path'] = '/var/opt/gitlab/gitlab-rails/shared'` +| `/var/opt/gitlab/gitlab-rails/shared` | Build artifacts, GitLab Pages, LFS objects, temp files, and so on. 
If you're using LFS this may also account for a large portion of your data | `gitlab_rails['shared_path'] = '/var/opt/gitlab/gitlab-rails/shared'` | `/var/opt/gitlab/gitlab-ci/builds` | GitLab CI/CD build traces | `gitlab_ci['builds_directory'] = '/var/opt/gitlab/gitlab-ci/builds'` Other GitLab directories should not be shared between nodes. They contain @@ -318,7 +318,7 @@ node-specific files and GitLab code that does not need to be shared. To ship logs to a central location consider using remote syslog. Omnibus GitLab packages provide configuration for [UDP log shipping](https://docs.gitlab.com/omnibus/settings/logs.html#udp-log-shipping-gitlab-enterprise-edition-only). -Having multiple NFS mounts will require manually making sure the data directories +Having multiple NFS mounts requires you to manually make sure the data directories are empty before attempting a restore. Read more about the [restore prerequisites](../raketasks/backup_restore.md). @@ -348,7 +348,7 @@ Any `Operation not permitted` errors means you should investigate your NFS serve ## NFS in a Firewalled Environment If the traffic between your NFS server and NFS client(s) is subject to port filtering -by a firewall, then you will need to reconfigure that firewall to allow NFS communication. +by a firewall, then you need to reconfigure that firewall to allow NFS communication. [This guide from TDLP](https://tldp.org/HOWTO/NFS-HOWTO/security.html#FIREWALLS) covers the basics of using NFS in a firewalled environment. Additionally, we encourage you to @@ -370,7 +370,7 @@ sudo ufw allow from to any port nfs WARNING: From GitLab 13.0, using NFS for Git repositories is deprecated. -As of GitLab 14.0, NFS-related issues with Gitaly will no longer be addressed. Read +As of GitLab 14.0, NFS-related issues with Gitaly are no longer addressed. Read more about [Gitaly and NFS deprecation](gitaly/index.md#nfs-deprecation-notice). 
Customers and users have reported data loss on high-traffic repositories when using NFS for Git repositories. @@ -391,7 +391,7 @@ The problem may be partially mitigated by adjusting caching using the following WARNING: The `actimeo=0` and `noac` options both result in a significant reduction in performance, possibly leading to timeouts. You may be able to avoid timeouts and data loss using `actimeo=0` and `lookupcache=positive` _without_ `noac`, however -we expect the performance reduction will still be significant. Upgrade to +we expect the performance reduction to still be significant. Upgrade to [Gitaly Cluster](gitaly/praefect.md) as soon as possible. ### Avoid using cloud-based file systems diff --git a/doc/administration/pages/index.md b/doc/administration/pages/index.md index 13d91c83041..b1c19ef7526 100644 --- a/doc/administration/pages/index.md +++ b/doc/administration/pages/index.md @@ -259,7 +259,7 @@ control over how the Pages daemon runs and serves content in your environment. | `log_directory` | Absolute path to a log directory. | | `log_format` | The log output format: `text` or `json`. | | `log_verbose` | Verbose logging, true/false. | -| `propagate_correlation_id` | Set to true (false by default) to re-use existing Correlation ID from the incoming request header `X-Request-ID` if present. If a reverse proxy sets this header, the value will be propagated in the request chain. | +| `propagate_correlation_id` | Set to true (false by default) to re-use existing Correlation ID from the incoming request header `X-Request-ID` if present. If a reverse proxy sets this header, the value is propagated in the request chain. | | `max_connections` | Limit on the number of concurrent connections to the HTTP, HTTPS or proxy listeners. | | `metrics_address` | The address to listen on for metrics requests. | | `redirect_http` | Redirect pages from HTTP to HTTPS, true/false. | @@ -355,7 +355,7 @@ world. Custom domains and TLS are supported. listens on. 
If you don't have IPv6, you can omit the IPv6 address. 1. If you haven't named your certificate and key `example.io.crt` and `example.io.key` respectively, -then you'll need to also add the full paths as shown below: +then you need to also add the full paths as shown below: ```ruby gitlab_pages['cert'] = "/etc/gitlab/ssl/example.io.crt" @@ -468,7 +468,7 @@ To do that: WARNING: For self-managed installations, all public websites remain private until they are -redeployed. This issue will be resolved by +redeployed. Resolve this issue by [sourcing domain configuration from the GitLab API](https://gitlab.com/gitlab-org/gitlab/-/issues/218357). ### Running behind a proxy @@ -555,9 +555,9 @@ Follow the steps below to configure verbose logging of GitLab Pages daemon. > [Introduced](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/438) in GitLab 13.10. -Setting the `propagate_correlation_id` to true will allow installations behind a reverse proxy generate +Setting the `propagate_correlation_id` to true allows installations behind a reverse proxy to generate and set a correlation ID to requests sent to GitLab Pages. When a reverse proxy sets the header value `X-Request-ID`, -the value will be propagated in the request chain. +the value propagates in the request chain. Users [can find the correlation ID in the logs](../troubleshooting/tracing_correlation_id.md#identify-the-correlation-id-for-a-request). To enable the propagation of the correlation ID: @@ -648,7 +648,7 @@ The default is 100MB. > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/16610) in GitLab 12.7. NOTE: -Only GitLab admin users will be able to view and override the **Maximum size of Pages** setting. +Only GitLab admin users are able to view and override the **Maximum size of Pages** setting. To override the global maximum pages size for a specific project: @@ -771,7 +771,7 @@ The default value for Omnibus installations is `nil`. 
If left unchanged, GitLab Pages tries to use any available source (either `gitlab` or `disk`). The preferred source is `gitlab`, which uses [API-based configuration](#gitlab-api-based-configuration). -On large GitLab instances, using the API-based configuration will significantly improve the pages daemon startup time, as there is no need to load all custom domains configuration into memory. +On large GitLab instances, using the API-based configuration significantly improves the pages daemon startup time, as there is no need to load all custom domains configuration into memory. For more details see this [blog post](https://about.gitlab.com/blog/2020/08/03/how-gitlab-pages-uses-the-gitlab-api-to-serve-content/). @@ -839,25 +839,25 @@ Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Examples: -- Increasing `gitlab_cache_expiry` will allow items to exist in the cache longer. +- Increasing `gitlab_cache_expiry` allows items to exist in the cache longer. This setting might be useful if the communication between GitLab Pages and GitLab Rails is not stable. -- Increasing `gitlab_cache_refresh` will reduce the frequency at which GitLab Pages +- Increasing `gitlab_cache_refresh` reduces the frequency at which GitLab Pages requests a domain's configuration from GitLab Rails. This setting might be useful GitLab Pages generates too many requests to GitLab API and content does not change frequently. -- Decreasing `gitlab_cache_cleanup` will remove expired items from the cache more frequently, +- Decreasing `gitlab_cache_cleanup` removes expired items from the cache more frequently, reducing the memory usage of your Pages node. - Decreasing `gitlab_retrieval_timeout` allows you to stop the request to GitLab Rails -more quickly. Increasing it will allow more time to receive a response from the API, +more quickly. Increasing it allows more time to receive a response from the API, useful in slow networking environments. 
-- Decreasing `gitlab_retrieval_interval` will make requests to the API more frequently, +- Decreasing `gitlab_retrieval_interval` makes requests to the API more frequently, only when there is an error response from the API, for example a connection timeout. -- Decreasing `gitlab_retrieval_retries` will reduce the number of times a domain's +- Decreasing `gitlab_retrieval_retries` reduces the number of times a domain's configuration is tried to be resolved automatically before reporting an error. ## Using object storage @@ -951,7 +951,9 @@ These ZIP archives can be stored either locally on disk storage or on the [objec > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/59003) in GitLab 13.11. -GitLab will [try to automatically migrate](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/54578) the old storage format to the new ZIP-based one when you upgrade to GitLab 13.11 or further. +GitLab tries to +[automatically migrate](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/54578) +the old storage format to the new ZIP-based one when you upgrade to GitLab 13.11 or further. However, some projects may fail to be migrated for different reasons. To verify that all projects have been migrated successfully, you can manually run the migration: @@ -996,7 +998,7 @@ If you find that migrated data is invalid, you can remove all migrated data by r sudo gitlab-rake gitlab:pages:clean_migrated_zip_storage ``` -This will not remove any data from the legacy disk storage and the GitLab Pages daemon will automatically fallback +This does not remove any data from the legacy disk storage and the GitLab Pages daemon automatically falls back to using that. ### Migrate Pages deployments to object storage @@ -1199,7 +1201,7 @@ If the wildcard DNS [prerequisite](#prerequisites) can't be met, you can still u all projects you need to use Pages with into a single group namespace, for example `pages`. 1. 
Configure a [DNS entry](#dns-configuration) without the `*.`-wildcard, for example `pages.example.io`. 1. Configure `pages_external_url http://example.io/` in your `gitlab.rb` file. - Omit the group namespace here, because it will automatically be prepended by GitLab. + Omit the group namespace here, because it is automatically prepended by GitLab. ### Pages daemon fails with permission denied errors diff --git a/doc/administration/reply_by_email_postfix_setup.md b/doc/administration/reply_by_email_postfix_setup.md index f310ef04295..f0b9daead69 100644 --- a/doc/administration/reply_by_email_postfix_setup.md +++ b/doc/administration/reply_by_email_postfix_setup.md @@ -9,7 +9,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w This document will take you through the steps of setting up a basic Postfix mail server with IMAP authentication on Ubuntu, to be used with [incoming email](incoming_email.md). -The instructions make the assumption that you will be using the email address `incoming@gitlab.example.com`, that is, username `incoming` on host `gitlab.example.com`. Don't forget to change it to your actual host when executing the example code snippets. +The instructions make the assumption that you are using the email address `incoming@gitlab.example.com`, that is, username `incoming` on host `gitlab.example.com`. Don't forget to change it to your actual host when executing the example code snippets. ## Configure your server firewall @@ -127,7 +127,7 @@ The instructions make the assumption that you will be using the email address `i ## Configure Postfix to use Maildir-style mailboxes -Courier, which we will install later to add IMAP authentication, requires mailboxes to have the Maildir format, rather than mbox. +Courier, which we install later to add IMAP authentication, requires mailboxes to have the Maildir format, rather than mbox. 1. 
Configure Postfix to use Maildir-style mailboxes: @@ -191,7 +191,7 @@ Courier, which we will install later to add IMAP authentication, requires mailbo imapd start ``` -1. The `courier-authdaemon` isn't started after installation. Without it, IMAP authentication will fail: +1. The `courier-authdaemon` isn't started after installation. Without it, IMAP authentication fails: ```shell sudo service courier-authdaemon start @@ -213,7 +213,7 @@ Courier, which we will install later to add IMAP authentication, requires mailbo 1. Let Postfix know about the IPs that it should consider part of the LAN: - We'll assume `192.168.1.0/24` is your local LAN. You can safely skip this step if you don't have other machines in the same local network. + Let's assume `192.168.1.0/24` is your local LAN. You can safely skip this step if you don't have other machines in the same local network. ```shell sudo postconf -e "mynetworks = 127.0.0.0/8, 192.168.1.0/24" diff --git a/doc/administration/restart_gitlab.md b/doc/administration/restart_gitlab.md index f4cc98ca145..5f7f08f4ecf 100644 --- a/doc/administration/restart_gitlab.md +++ b/doc/administration/restart_gitlab.md @@ -27,7 +27,7 @@ GitLab Rails application (Puma) as well as the other components, like: ### Omnibus GitLab restart -There may be times in the documentation where you will be asked to _restart_ +There may be times in the documentation where you are asked to _restart_ GitLab. In that case, you need to run the following command: ```shell @@ -73,7 +73,7 @@ As a last resort, you can try to ### Omnibus GitLab reconfigure -There may be times in the documentation where you will be asked to _reconfigure_ +There may be times in the documentation where you are asked to _reconfigure_ GitLab. Remember that this method applies only for the Omnibus packages. Reconfigure Omnibus GitLab with: @@ -86,15 +86,15 @@ Reconfiguring GitLab should occur in the event that something in its configuration (`/etc/gitlab/gitlab.rb`) has changed. 
When you run this command, [Chef](https://www.chef.io/products/chef-infra/), the underlying configuration management -application that powers Omnibus GitLab, will make sure that all things like directories, +application that powers Omnibus GitLab, makes sure that all things like directories, permissions, and services are in place and in the same shape that they were initially shipped. -It will also restart GitLab components where needed, if any of their +It also restarts GitLab components where needed, if any of their configuration files have changed. If you manually edit any files in `/var/opt/gitlab` that are managed by Chef, -running reconfigure will revert the changes AND restart the services that +running reconfigure reverts the changes AND restarts the services that depend on those files. ## Installations from source diff --git a/doc/administration/troubleshooting/elasticsearch.md b/doc/administration/troubleshooting/elasticsearch.md index 11425d464b9..d04ce23188f 100644 --- a/doc/administration/troubleshooting/elasticsearch.md +++ b/doc/administration/troubleshooting/elasticsearch.md @@ -251,13 +251,13 @@ can be made. If the indices: - Can be made, Escalate this to GitLab support. If the issue is not with creating an empty index, the next step is to check for errors -during the indexing of projects. If errors do occur, they will either stem from the indexing: +during the indexing of projects. If errors do occur, they stem from either the indexing: - On the GitLab side. You need to rectify those. If they are not something you are familiar with, contact GitLab support for guidance. - Within the Elasticsearch instance itself. See if the error is [documented and has a fix](../../integration/elasticsearch.md#troubleshooting). If not, speak with your Elasticsearch administrator. -If the indexing process does not present errors, you will want to check the status of the indexed projects. 
You can do this via the following Rake tasks: +If the indexing process does not present errors, check the status of the indexed projects. You can do this via the following Rake tasks: - [`sudo gitlab-rake gitlab:elastic:index_projects_status`](../../integration/elasticsearch.md#gitlab-advanced-search-rake-tasks) (shows the overall status) - [`sudo gitlab-rake gitlab:elastic:projects_not_indexed`](../../integration/elasticsearch.md#gitlab-advanced-search-rake-tasks) (shows specific projects that are not indexed) @@ -290,11 +290,11 @@ If the issue is: regarding the error(s) you are seeing. If you are unsure here, it never hurts to reach out to GitLab support. -Beyond that, you will want to review the error. If it is: +Beyond that, review the error. If it is: - Specifically from the indexer, this could be a bug/issue and should be escalated to GitLab support. -- An OS issue, you will want to reach out to your systems administrator. +- An OS issue, you should reach out to your systems administrator. - A `Faraday::TimeoutError (execution expired)` error **and** you're using a proxy, [set a custom `gitlab_rails['env']` environment variable, called `no_proxy`](https://docs.gitlab.com/omnibus/settings/environment-variables.html) with the IP address of your Elasticsearch host. @@ -365,7 +365,7 @@ require contacting an Elasticsearch administrator or GitLab Support. The best place to start while debugging issues with an Advanced Search migration is the [`elasticsearch.log` file](../logs.md#elasticsearchlog). -Migrations will log information while a migration is in progress and any +Migrations log information while a migration is in progress and any errors encountered. Apply fixes for any errors found in the log and retry the migration. 
diff --git a/doc/administration/troubleshooting/sidekiq.md b/doc/administration/troubleshooting/sidekiq.md index 8d19a8a163b..297a8355036 100644 --- a/doc/administration/troubleshooting/sidekiq.md +++ b/doc/administration/troubleshooting/sidekiq.md @@ -11,7 +11,7 @@ tasks. When things go wrong it can be difficult to troubleshoot. These situations also tend to be high-pressure because a production system job queue may be filling up. Users will notice when this happens because new branches may not show up and merge requests may not be updated. The following are some -troubleshooting steps that will help you diagnose the bottleneck. +troubleshooting steps to help you diagnose the bottleneck. GitLab administrators/users should consider working through these debug steps with GitLab Support so the backtraces can be analyzed by our team. @@ -42,7 +42,7 @@ Example log output: When using [Sidekiq JSON logging](../logs.md#sidekiqlog), arguments logs are limited to a maximum size of 10 kilobytes of text; -any arguments after this limit will be discarded and replaced with a +any arguments after this limit are discarded and replaced with a single argument containing the string `"..."`. You can set `SIDEKIQ_LOG_ARGUMENTS` [environment variable](https://docs.gitlab.com/omnibus/settings/environment-variables.html) @@ -58,7 +58,7 @@ In GitLab 13.5 and earlier, set `SIDEKIQ_LOG_ARGUMENTS` to `1` to start logging ## Thread dump -Send the Sidekiq process ID the `TTIN` signal and it will output thread +Send the Sidekiq process ID the `TTIN` signal to output thread backtraces in the log file. ```shell @@ -66,7 +66,7 @@ kill -TTIN ``` Check in `/var/log/gitlab/sidekiq/current` or `$GITLAB_HOME/log/sidekiq.log` for -the backtrace output. The backtraces will be lengthy and generally start with +the backtrace output. The backtraces are lengthy and generally start with several `WARN` level messages. 
Here's an example of a single thread's backtrace: ```plaintext @@ -88,8 +88,8 @@ Move on to other troubleshooting methods if this happens. ## Process profiling with `perf` Linux has a process profiling tool called `perf` that is helpful when a certain -process is eating up a lot of CPU. If you see high CPU usage and Sidekiq won't -respond to the `TTIN` signal, this is a good next step. +process is eating up a lot of CPU. If you see high CPU usage and Sidekiq isn't +responding to the `TTIN` signal, this is a good next step. If `perf` is not installed on your system, install it with `apt-get` or `yum`: @@ -134,8 +134,8 @@ corresponding Ruby code where this is happening. `gdb` can be another effective tool for debugging Sidekiq. It gives you a little more interactive way to look at each thread and see what's causing problems. -Attaching to a process with `gdb` will suspends the normal operation -of the process (Sidekiq will not process jobs while `gdb` is attached). +Attaching to a process with `gdb` suspends the normal operation +of the process (Sidekiq does not process jobs while `gdb` is attached). Start by attaching to the Sidekiq PID: @@ -285,7 +285,7 @@ end ### Remove Sidekiq jobs for given parameters (destructive) The general method to kill jobs conditionally is the following command, which -will remove jobs that are queued but not started. Running jobs will not be killed. +removes jobs that are queued but not started. Running jobs are not killed. ```ruby queue = Sidekiq::Queue.new('') @@ -294,7 +294,7 @@ queue.each { |job| job.delete if } Have a look at the section below for cancelling running jobs. -In the method above, `` is the name of the queue that contains the job(s) you want to delete and `` will decide which jobs get deleted. +In the method above, `` is the name of the queue that contains the job(s) you want to delete and `` decides which jobs get deleted. Commonly, `` references the job arguments, which depend on the type of job in question. 
To find the arguments for a specific queue, you can have a look at the `perform` function of the related worker file, commonly found at `/app/workers/_worker.rb`. diff --git a/doc/api/oauth2.md b/doc/api/oauth2.md index 2bcf86a031c..dfb91283b50 100644 --- a/doc/api/oauth2.md +++ b/doc/api/oauth2.md @@ -41,7 +41,7 @@ how all those flows work and pick the right one for your use case. Both **authorization code** (with or without PKCE) and **implicit grant** flows require `application` to be registered first via the `/profile/applications` page in your user's account. During registration, by enabling proper scopes, you can limit the range of -resources which the `application` can access. Upon creation, you'll obtain the +resources which the `application` can access. Upon creation, you obtain the `application` credentials: _Application ID_ and _Client Secret_ - **keep them secure**. ### Prevent CSRF attacks @@ -63,7 +63,7 @@ and the [OAuth 2.0 Threat Model RFC](https://tools.ietf.org/html/rfc6819#section These factors are particularly important when using the [Implicit grant flow](#implicit-grant-flow), where actual credentials are included in the `redirect_uri`. -In the following sections you will find detailed instructions on how to obtain +In the following sections you can find detailed instructions on how to obtain authorization with each flow. ### Authorization code with Proof Key for Code Exchange (PKCE) @@ -213,12 +213,12 @@ To request the access token, you should redirect the user to the https://gitlab.example.com/oauth/authorize?client_id=APP_ID&redirect_uri=REDIRECT_URI&response_type=token&state=YOUR_UNIQUE_STATE_HASH&scope=REQUESTED_SCOPES ``` -This will ask the user to approve the applications access to their account +This prompts the user to approve the application's access to their account based on the scopes specified in `REQUESTED_SCOPES` and then redirect back to the `REDIRECT_URI` you provided.
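For illustration, such an authorization URL can be assembled with the Ruby standard library. This is only a sketch; the client ID, redirect URI, and scopes are placeholders:

```ruby
require "uri"
require "securerandom"

# Placeholder values - substitute your application's real ID and redirect URI.
params = {
  client_id:     "APP_ID",
  redirect_uri:  "https://example.com/oauth/callback",
  response_type: "token",
  state:         SecureRandom.hex(16), # verify this value on the redirect back
  scope:         "read_user profile"
}

# encode_www_form uses form encoding, so the space in the scope list becomes
# `+`, matching the `scope=read_user+profile` form shown in this document.
authorize_url = "https://gitlab.example.com/oauth/authorize?" +
                URI.encode_www_form(params)
puts authorize_url
```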
The [scope parameter](https://github.com/doorkeeper-gem/doorkeeper/wiki/Using-Scopes#requesting-particular-scopes) is a space separated list of scopes you want to have access to (e.g. `scope=read_user+profile` would request `read_user` and `profile` scopes). The redirect -will include a fragment with `access_token` as well as token details in GET +includes a fragment with `access_token` as well as token details in GET parameters, for example: ```plaintext @@ -285,7 +285,7 @@ echo 'grant_type=password&username=&password=' > a curl --data "@auth.txt" --user client_id:client_secret --request POST "https://gitlab.example.com/oauth/token" ``` -Then, you'll receive the access token back in the response: +Then, you receive a response containing the access token: ```json { @@ -358,7 +358,7 @@ The fields `scopes` and `expires_in_seconds` are included in the response. These are aliases for `scope` and `expires_in` respectively, and have been included to prevent breaking changes introduced in [doorkeeper 5.0.2](https://github.com/doorkeeper-gem/doorkeeper/wiki/Migration-from-old-versions#from-4x-to-5x). -Don't rely on these fields as they will be removed in a later release. +Don't rely on these fields as they are slated for removal in a later release. ## OAuth2 tokens and GitLab registries diff --git a/doc/architecture/blueprints/database_testing/index.md b/doc/architecture/blueprints/database_testing/index.md index a333ac12ef3..162b112732c 100644 --- a/doc/architecture/blueprints/database_testing/index.md +++ b/doc/architecture/blueprints/database_testing/index.md @@ -79,7 +79,7 @@ Database Lab provides an API we can interact with to manage thin clones. In orde The short-term focus is on testing regular migrations (typically schema changes) and using the existing Database Lab instance from postgres.ai for it. 
-In order to secure this process and meet compliance goals, the runner environment will be treated as a *production* environment and similarly locked down, monitored and audited. Only Database Maintainers will have access to the CI pipeline and its job output. Everyone else will only be able to see the results and statistics posted back on the merge request. +In order to secure this process and meet compliance goals, the runner environment is treated as a *production* environment and similarly locked down, monitored and audited. Only Database Maintainers have access to the CI pipeline and its job output. Everyone else can only see the results and statistics posted back on the merge request. We implement a secured CI pipeline on that adds the execution steps outlined above. The goal is to secure this pipeline in order to solve the following problem: @@ -117,7 +117,7 @@ An alternative approach we have discussed and abandoned is to "scrub" and anonym - Anonymization is complex by nature - it is a hard problem to call a "scrubbed clone" actually safe to work with in public. Different data types may require different anonymization techniques (e.g. anonymizing sensitive information inside a JSON field) and only focusing on one attribute at a time does not guarantee that a dataset is fully anonymized (for example join attacks or using timestamps in conjunction with public profiles/projects to de-anonymize users by their activity). - Anonymization requires an additional process to keep track and update the set of attributes considered as sensitive, ongoing maintenance and security reviews every time the database schema changes. - Annotating data as "sensitive" is error prone, with the wrong anonymization approach used for a data type or one sensitive attribute accidentally not marked as such possibly leading to a data breach. -- Scrubbing not only removes sensitive data, but also changes data distribution, which greatly affects performance of migrations and queries.
+- Scrubbing not only removes sensitive data, but it also changes data distribution, which greatly affects performance of migrations and queries. - Scrubbing heavily changes the database contents, potentially updating a lot of data, which leads to different data storage details (think MVC bloat), affecting performance of migrations and queries. ## Who diff --git a/doc/install/google_cloud_platform/index.md b/doc/install/google_cloud_platform/index.md index 766788da061..06adde16f80 100644 --- a/doc/install/google_cloud_platform/index.md +++ b/doc/install/google_cloud_platform/index.md @@ -44,17 +44,17 @@ To deploy GitLab on GCP you first need to create a virtual machine: 1. To select the size, type, and desired [operating system](../requirements.md#supported-linux-distributions), click **Change** under `Boot disk`. Click **Select** when finished. -1. As a last step allow HTTP and HTTPS traffic, then click **Create**. The process will finish in a few seconds. +1. As a last step, allow HTTP and HTTPS traffic, then click **Create**. The process finishes in a few seconds. ## Installing GitLab -After a few seconds, the instance will be created and available to log in. The next step is to install GitLab onto the instance. +After a few seconds, the instance is created and available to log in. The next step is to install GitLab onto the instance. ![Deploy settings](img/vm_created.png) -1. Make a note of the IP address of the instance, as you will need that in a later step. +1. Make a note of the IP address of the instance, as you will need that in a later step. 1. Click on the SSH button to connect to the instance. -1. A new window will appear, with you logged into the instance. +1. A new window appears, with you logged into the instance. ![GitLab first sign in](img/ssh_terminal.png) @@ -72,8 +72,8 @@ the first time. ### Assigning a static IP By default, Google assigns an ephemeral IP to your instance.
It is strongly -recommended to assign a static IP if you are going to use GitLab in production -and use a domain name as we'll see below. +recommended to assign a static IP if you are using GitLab in production +and use a domain name as shown below. Read Google's documentation on how to [promote an ephemeral IP address](https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#promote_ephemeral_ip). @@ -84,7 +84,7 @@ set up DNS to point to the static IP you configured in the previous step, here's how you configure GitLab to be aware of the change: 1. SSH into the VM. You can easily use the **SSH** button in the Google console - and a new window will pop up. + and a new window pops up. ![SSH button](img/vm_created.png) @@ -104,7 +104,7 @@ here's how you configure GitLab to be aware of the change: external_url 'http://gitlab.example.com' ``` - We will set up HTTPS in the next step, no need to do this now. + We will set up HTTPS in the next step, no need to do this now. 1. Reconfigure GitLab for the changes to take effect: @@ -121,8 +121,7 @@ certificate. Follow the steps in the [Omnibus documentation](https://docs.gitlab ### Configuring the email SMTP settings -You need to configure the email SMTP settings correctly otherwise GitLab will -not be able to send notification emails, like comments, and password changes. +You need to configure the email SMTP settings correctly; otherwise, GitLab cannot send notification emails, such as comments and password changes. Check the [Omnibus documentation](https://docs.gitlab.com/omnibus/settings/smtp.html#smtp-settings) for how to do so. ## Further reading
In this tutorial, we will see how to deploy GitLab in OpenShift using the GitLab official Docker image while getting familiar with the web interface and CLI -tools that will help us achieve our goal. +tools that help us achieve our goal. For a video demonstration on installing GitLab on OpenShift, check the article [In 13 minutes from Kubernetes to a complete application development tool](https://about.gitlab.com/blog/2016/11/14/idea-to-production/). @@ -32,7 +32,7 @@ This information is no longer up to date, as the current versions have changed and products have been renamed. OpenShift 3 is not yet deployed on RedHat's offered [Online platform](https://www.openshift.com/), -so in order to test it, we will use an [all-in-one VirtualBox image](https://www.okd.io/minishift/) that is +so in order to test it, we use an [all-in-one VirtualBox image](https://www.okd.io/minishift/) that is offered by the OpenShift developers and managed by Vagrant. If you haven't done so already, go ahead and install the following components as they are essential to test OpenShift easily: @@ -65,7 +65,7 @@ the tools needed pre-installed, including Docker, Kubernetes, and OpenShift. ### Test OpenShift using Vagrant As of this writing, the all-in-one VM is at version 1.3, and that's -what we will use in this tutorial. +what we use in this tutorial. In short: @@ -75,7 +75,7 @@ In short: vagrant init openshift/origin-all-in-one ``` -1. This will generate a Vagrantfile based on the all-in-one VM image +1. This generates a Vagrantfile based on the all-in-one VM image 1. In the same directory where you generated the Vagrantfile enter: @@ -83,7 +83,7 @@ In short: vagrant up ``` -This will download the VirtualBox image and fire up the VM with some preconfigured +This downloads the VirtualBox image and fires up the VM with some preconfigured values as you can see in the Vagrantfile. As you may have noticed, you need plenty of RAM (5GB in our example), so make sure you have enough.
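Since the VM needs that much memory, a quick host-side sanity check can help before booting it. A Linux-only Ruby sketch — the 5 GB threshold simply follows the example above:

```ruby
# Linux-only sketch: read total memory from /proc/meminfo and compare it
# against the ~5 GB the all-in-one VM wants. Threshold is illustrative.
total_kb  = File.read("/proc/meminfo")[/MemTotal:\s+(\d+)/, 1].to_i
needed_kb = 5 * 1024 * 1024

puts total_kb >= needed_kb ? "enough RAM for the VM" : "not enough RAM for the VM"
```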
@@ -91,7 +91,7 @@ Now that OpenShift is set up, let's see what the web console looks like. ### Explore the OpenShift web console -Once Vagrant finishes its thing with the VM, you will be presented with a +Once Vagrant finishes its thing with the VM, you are presented with a message which has some important information. One of them is the IP address of the deployed OpenShift platform and in particular `https://10.2.2.2:8443/console/`. Open this link with your browser and accept the self-signed certificate in @@ -109,7 +109,7 @@ respective pods are there to explore. ![OpenShift web console](img/openshift-infra-project.png) -We are not going to explore the whole interface, but if you want to learn about +We are not exploring the whole interface, but if you want to learn about the key concepts of OpenShift, read the [core concepts reference](https://docs.okd.io/3.11/architecture/core_concepts/index.html) in the official documentation. @@ -193,7 +193,7 @@ The connection to the server 10.2.2.2:8443 was refused - did you specify the rig In that case, the OpenShift service might not be running, so in order to fix it: -1. SSH into the VM by going to the directory where the Vagrantfile is and then +1. SSH into the VM by changing to the directory where the Vagrantfile is and then run: ```shell @@ -201,7 +201,7 @@ In that case, the OpenShift service might not be running, so in order to fix it: ``` 1. Run `systemctl` and verify by the output that the `openshift` service is not - running (it will be in red color). If that's the case start the service with: + running (it appears in red). If that's the case, start the service with: ```shell sudo systemctl start openshift @@ -221,7 +221,7 @@ Now that you got a taste of what OpenShift looks like, let's deploy GitLab! ### Create a new project -First, we will create a new project to host our application. You can do this +First, create a new project to host our application.
You can do this either by running the CLI client: ```shell diff --git a/doc/integration/kerberos.md b/doc/integration/kerberos.md index 5be076464d8..1984d275794 100644 --- a/doc/integration/kerberos.md +++ b/doc/integration/kerberos.md @@ -87,7 +87,7 @@ For source installations, make sure the `kerberos` gem group 1. [Reconfigure GitLab](../administration/restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect. -GitLab will now offer the `negotiate` authentication method for signing in and +GitLab now offers the `negotiate` authentication method for signing in and HTTP Git access, enabling Git clients that support this authentication protocol to authenticate with Kerberos tokens. @@ -159,7 +159,7 @@ knowledge can be a security risk. ## Link Kerberos and LDAP accounts together If your users sign in with Kerberos, but you also have [LDAP integration](../administration/auth/ldap/index.md) -enabled, your users will be linked to their LDAP accounts on their first sign-in. +enabled, your users are linked to their LDAP accounts on their first sign-in. For this to work, some prerequisites must be met: The Kerberos username must match the LDAP user's UID. You can choose which LDAP @@ -170,7 +170,7 @@ The Kerberos realm must match the domain part of the LDAP user's Distinguished Name. For instance, if the Kerberos realm is `AD.EXAMPLE.COM`, then the LDAP user's Distinguished Name should end in `dc=ad,dc=example,dc=com`. -Taken together, these rules mean that linking will only work if your users' +Taken together, these rules mean that linking only works if your users' Kerberos usernames are of the form `foo@AD.EXAMPLE.COM` and their LDAP Distinguished Names look like `sAMAccountName=foo,dc=ad,dc=example,dc=com`. @@ -180,7 +180,7 @@ LDAP Distinguished Names look like `sAMAccountName=foo,dc=ad,dc=example,dc=com`. You can configure custom allowed realms when the user's Kerberos realm doesn't match the domain from the user's LDAP DN. 
The configuration value must specify -all domains that users may be expected to have. Any other domains will be +all domains that users may be expected to have. Any other domains are ignored and an LDAP identity won't be linked. **For Omnibus installations** @@ -236,8 +236,8 @@ authentication fails. For GitLab users to be able to use either `basic` or `negotiate` authentication with older Git versions, it is possible to offer Kerberos ticket-based -authentication on a different port (e.g. 8443) while the standard port will -keep offering only `basic` authentication. +authentication on a different port (e.g. 8443) while the standard port offers +only `basic` authentication. **For source installations with HTTPS** @@ -280,14 +280,14 @@ keep offering only `basic` authentication. 1. [Reconfigure GitLab](../administration/restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect. -After this change, all Git remote URLs will have to be updated to +After this change, Git remote URLs have to be updated to `https://gitlab.example.com:8443/mygroup/myproject.git` in order to use Kerberos ticket-based authentication. ## Upgrading from password-based to ticket-based Kerberos sign-ins Prior to GitLab 8.10 Enterprise Edition, users had to submit their -Kerberos username and password to GitLab when signing in. We will +Kerberos username and password to GitLab when signing in. We plan to remove support for password-based Kerberos sign-ins in a future release, so we recommend that you upgrade to ticket-based sign-ins. @@ -343,14 +343,14 @@ to a larger value in [the NGINX configuration](http://nginx.org/en/docs/http/ngx With Kerberos SPNEGO authentication, the browser is expected to send a list of mechanisms it supports to GitLab. 
If it doesn't support any of the mechanisms -GitLab supports, authentication will fail with a message like this in the log: +GitLab supports, authentication fails with a message like this in the log: ```plaintext OmniauthKerberosSpnegoController: failed to process Negotiate/Kerberos authentication: gss_accept_sec_context did not return GSS_S_COMPLETE: An unsupported mechanism was requested Unknown error ``` This is usually seen when the browser is unable to contact the Kerberos server -directly. It will fall back to an unsupported mechanism known as +directly. It falls back to an unsupported mechanism known as [`IAKERB`](https://k5wiki.kerberos.org/wiki/Projects/IAKERB), which tries to use the GitLab server as an intermediary to the Kerberos server. @@ -359,10 +359,10 @@ client machine and the Kerberos server - this is a prerequisite! Traffic may be blocked by a firewall, or the DNS records may be incorrect. Another failure mode occurs when the forward and reverse DNS records for the -GitLab server do not match. Often, Windows clients will work in this case, while -Linux clients will fail. They use reverse DNS while detecting the Kerberos -realm. If they get the wrong realm, then ordinary Kerberos mechanisms will fail, -so the client will fall back to attempting to negotiate `IAKERB`, leading to the +GitLab server do not match. Often, Windows clients work in this case, while +Linux clients fail. They use reverse DNS while detecting the Kerberos +realm. If they get the wrong realm, then ordinary Kerberos mechanisms fail, +so the client falls back to attempting to negotiate `IAKERB`, leading to the above error message.
To fix this, ensure that the forward and reverse DNS for your GitLab server diff --git a/doc/security/ssh_keys_restrictions.md b/doc/security/ssh_keys_restrictions.md index 102ba1fc370..0875ce82e61 100644 --- a/doc/security/ssh_keys_restrictions.md +++ b/doc/security/ssh_keys_restrictions.md @@ -25,13 +25,13 @@ In **Admin Area > Settings** (`/admin/application_settings/general`), expand the ![SSH keys restriction admin settings](img/ssh_keys_restrictions_settings.png) -If a restriction is imposed on any key type, users will be unable to upload new SSH keys that don't meet the requirement. Any existing keys that don't meet it will be disabled but not removed and users will be unable to pull or push code using them. +If a restriction is imposed on any key type, users cannot upload new SSH keys that don't meet the requirement. Any existing keys that don't meet it are disabled but not removed, and users cannot pull or push code using them. -An icon will be visible to the user of a restricted key in the SSH keys section of their profile: +An icon is visible to the user of a restricted key in the SSH keys section of their profile: ![Restricted SSH key icon](img/ssh_keys_restricted_key_icon.png) -Hovering over this icon will tell you why the key is restricted. +Hovering over this icon tells you why the key is restricted.
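If you want to check locally whether a key clears a bit-length minimum before uploading it, the idea can be sketched in Ruby with the OpenSSL standard library — the 2048-bit threshold here is an arbitrary example, not a GitLab default:

```ruby
require "openssl"

# Example only: generate a throwaway 3072-bit RSA key and test it against a
# hypothetical 2048-bit minimum, as an admin-imposed restriction might.
minimum_bits = 2048
key = OpenSSL::PKey::RSA.new(3072)

bits = key.n.num_bits # bit length of the RSA modulus
puts bits >= minimum_bits ? "key meets the restriction (#{bits} bits)" : "key is restricted"
```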