Add latest changes from gitlab-org/gitlab@master

This commit is contained in:
GitLab Bot 2022-10-12 00:10:06 +00:00
parent 3ecbefc581
commit 50de2638aa
68 changed files with 305 additions and 160 deletions

View File

@ -56,8 +56,8 @@ OPTIONAL_REVIEW_TEMPLATE = '%{role} review is optional for %{category}'
NOT_AVAILABLE_TEMPLATES = {
default: 'No %{role} available',
product_intelligence: group_not_available_template('#g_product_intelligence', '@gitlab-org/analytics-section/product-intelligence/engineers'),
integrations_be: group_not_available_template('#g_ecosystem_integrations', '@gitlab-org/ecosystem-stage/integrations'),
integrations_fe: group_not_available_template('#g_ecosystem_integrations', '@gitlab-org/ecosystem-stage/integrations')
integrations_be: group_not_available_template('#g_manage_integrations', '@gitlab-org/manage/integrations'),
integrations_fe: group_not_available_template('#g_manage_integrations', '@gitlab-org/manage/integrations')
}.freeze
def note_for_spin_role(spin, role, category)

View File

@ -261,7 +261,7 @@ To upgrade to a later version [using your own web-server](#self-host-the-product
If you self-host the product documentation:
- The version dropdown displays additional versions that don't exist. Selecting
- The version dropdown list displays additional versions that don't exist. Selecting
these versions displays a `404 Not Found` page.
- The search displays results from `docs.gitlab.com` and not the local site.
- By default, the landing page redirects to the

View File

@ -1320,7 +1320,7 @@ Deletions are disabled by default due to a race condition with repository rename
deletions. This is especially prominent in Geo instances as Geo performs more renames than instances without Geo.
You should enable deletions only if the [`gitaly_praefect_generated_replica_paths` feature flag](index.md#praefect-generated-replica-paths-gitlab-150-and-later) is enabled.
By default, the worker does not delete invalid metadata records but simply logs them and outputs Prometheus
By default, the worker does not delete invalid metadata records but logs them and outputs Prometheus
metrics for them.
You can enable deleting invalid metadata records with:
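A minimal sketch of the Omnibus setting this refers to; the key name follows the Praefect background verification options and should be checked against your GitLab version:

```ruby
# /etc/gitlab/gitlab.rb — enable deletion of invalid metadata records
# found by the background verification worker (assumed key name).
praefect['background_verification_delete_invalid_records'] = true
```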

View File

@ -187,7 +187,7 @@ You can then view the database details for this request:
![Paste request ID into progress bar](img/paste-request-id-into-progress-bar_v14_3.png)
1. A new request is inserted into the `Request Selector` dropdown on the right-hand side of the Performance Bar. Select the new request to view the metrics of the API request:
1. A new request is inserted into the `Request Selector` dropdown list on the right-hand side of the Performance Bar. Select the new request to view the metrics of the API request:
![Select request ID from request selector drop down menu](img/select-request-id-from-request-selector-drop-down-menu_v14_3.png)

View File

@ -353,7 +353,7 @@ To add a Prometheus dashboard for a single server GitLab setup:
1. Create a new data source in Grafana.
1. Name your data source (such as GitLab).
1. Select `Prometheus` in the type dropdown box.
1. Select `Prometheus` in the type dropdown list.
1. Add your Prometheus listen address as the URL, and set access to `Browser`.
1. Set the HTTP method to `GET`.
1. Save and test your configuration to verify that it works.

View File

@ -160,7 +160,7 @@ If your certificate provider provides the CA Bundle certificates, append them to
An administrator may want the container registry listening on an arbitrary port such as `5678`.
However, the registry and application server are behind an AWS application load balancer that only
listens on ports `80` and `443`. The administrator may simply remove the port number for
listens on ports `80` and `443`. The administrator may remove the port number for
`registry_external_url`, so HTTP or HTTPS is assumed. Then, the rules apply that map the load
balancer to the registry from ports `80` or `443` to the arbitrary port. This is important if users
rely on the `docker login` example in the container registry. Here's an example:
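A hedged sketch of the configuration described above, assuming an Omnibus install; the hostname is illustrative:

```ruby
# /etc/gitlab/gitlab.rb — no port on the external URL, so HTTPS is assumed,
# while the registry itself listens on the arbitrary port 5678 behind the
# load balancer.
registry_external_url 'https://registry-gitlab.example.com'
registry_nginx['listen_port'] = 5678
```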

View File

@ -354,7 +354,7 @@ listed in the descriptions of the relevant settings.
| `gravatar_enabled` | boolean | no | Enable Gravatar. |
| `hashed_storage_enabled` | boolean | no | Create new projects using hashed storage paths: Enable immutable, hash-based paths and repository names to store repositories on disk. This prevents repositories from having to be moved or renamed when the Project URL changes and may improve disk I/O performance. (Always enabled in GitLab versions 13.0 and later, configuration is scheduled for removal in 14.0) |
| `help_page_hide_commercial_content` | boolean | no | Hide marketing-related entries from help. |
| `help_page_support_url` | string | no | Alternate support URL for help page and help dropdown. |
| `help_page_support_url` | string | no | Alternate support URL for help page and help dropdown list. |
| `help_page_text` | string | no | Custom text displayed on the help page. |
| `help_text` **(PREMIUM)** | string | no | GitLab server administrator information. |
| `hide_third_party_offers` | boolean | no | Do not display offers from third parties in GitLab. |

View File

@ -177,7 +177,7 @@ everyone to understand the vision described in this architectural blueprint.
### Removing pipeline data
While it might be tempting to simply remove old or archived data from our
While it might be tempting to remove old or archived data from our
databases this should be avoided. It is usually not desired to permanently
remove user data unless consent is given to do so. We can, however, move data
to a different data store, like object storage.

View File

@ -60,7 +60,7 @@ out of a database to a different place when data is no longer relevant or
needed. Our dataset is extremely large (tens of terabytes), so moving such a
high volume of data is challenging. When time-decay is implemented using
partitioning, we can archive the entire partition (or set of partitions) by
simply updating a single record in one of our database tables. It is one of the
updating a single record in one of our database tables. It is one of the
least expensive ways to implement time-decay patterns at a database level.
![decomposition_partitioning_comparison.png](decomposition_partitioning_comparison.png)
@ -259,7 +259,7 @@ smart enough to move rows between partitions on its own.
A partitioned table is called a **routing** table and it will use the `p_`
prefix which should help us with building automated tooling for query analysis.
A table partition will be simply called **partition** and it can use the a
A table partition will be called **partition** and it can use a
physical partition ID as a suffix, preceded by the letter `p`, for example
`ci_builds_p101`. Existing CI tables will become **zero partitions** of the
new routing tables. Depending on the chosen
@ -304,7 +304,7 @@ of storing archived data in PostgreSQL will be reduced significantly this way.
There are some technical details here that are out of the scope of this
description, but by using this strategy we can "archive" data, and make it much
less expensive to reside in our PostgreSQL cluster by simply toggling a boolean
less expensive to reside in our PostgreSQL cluster by toggling a boolean
column value.
## Accessing partitioned data

View File

@ -178,24 +178,24 @@ There are several concerns represented in the current architecture. They are
coupled in the current implementation so we will break them out here to consider
them each separately.
1. **Virtual Machine (VM) shape**. The underlying provider of a VM requires configuration to
know what kind of machine to create. E.g. Cores, memory, failure domain,
etc... This information is very provider specific.
1. **VM lifecycle management**. Multiple machines will be created and a
system must keep track of which machines belong to this executor. Typically
a cloud provider will have a way to manage a set of homogenous machines.
E.g. GCE Instance Group. The basic operations are increase, decrease and
usually delete a specific machine.
1. **VM autoscaling**. In addition to low-level lifecycle management,
job-aware capacity decisions must be made to the set of machines to provide
capacity when it is needed but not maintain excess capacity for cost reasons.
1. **Job to VM mapping (routing)**. Currently the system assigns only one job to a
given a machine. A machine may be reused based on the specific executor
configuration.
1. **In-VM job execution**. Within each VM a job must be driven through
various pre-defined stages and results and trace information returned
to the Runner system. These details are highly dependent on the VM
architecture and operating system as well as Executor type.
- **Virtual Machine (VM) shape**. The underlying provider of a VM requires configuration to
know what kind of machine to create. For example: cores, memory, failure domain,
etc. This information is very provider-specific.
- **VM lifecycle management**. Multiple machines will be created and a
system must keep track of which machines belong to this executor. Typically
a cloud provider will have a way to manage a set of homogeneous machines,
for example a GCE Instance Group. The basic operations are increase, decrease, and
usually delete a specific machine.
- **VM autoscaling**. In addition to low-level lifecycle management,
job-aware capacity decisions must be made to the set of machines to provide
capacity when it is needed but not maintain excess capacity for cost reasons.
- **Job to VM mapping (routing)**. Currently the system assigns only one job to a
given machine. A machine may be reused based on the specific executor
configuration.
- **In-VM job execution**. Within each VM a job must be driven through
various pre-defined stages and results and trace information returned
to the Runner system. These details are highly dependent on the VM
architecture and operating system as well as Executor type.
The current architecture has several points of coupling between concerns.
Coupling reduces opportunities for abstraction (e.g. community supported
@ -243,37 +243,37 @@ abstraction.
#### General high-level principles
1. Design the new auto-scaling architecture aiming for having more choices and
flexibility in the future, instead of imposing new constraints.
1. Design the new auto-scaling architecture to experiment with running multiple
jobs in parallel, on a single machine.
1. Design the new provisioning architecture to replace Docker Machine in a way
that the wider community can easily build on top of the new abstractions.
- Design the new auto-scaling architecture aiming for having more choices and
flexibility in the future, instead of imposing new constraints.
- Design the new auto-scaling architecture to experiment with running multiple
jobs in parallel, on a single machine.
- Design the new provisioning architecture to replace Docker Machine in a way
that the wider community can easily build on top of the new abstractions.
#### Principles for the new plugin system
1. Make the entry barrier for writing a new plugin low.
1. Developing a new plugin should be simple and require only basic knowledge of
a programming language and a cloud provider's API.
1. Strive for a balance between the plugin system's simplicity and flexibility.
These are not mutually exclusive.
1. Abstract away as many technical details as possible but do not hide them completely.
1. Build an abstraction that serves our community well but allows us to ship it quickly.
1. Invest in a flexible solution, avoid one-way-door decisions, foster iteration.
1. When in doubts err on the side of making things more simple for the wider community.
1. Limit coupling between concerns to make the system more simple and extensible.
1. Concerns should live on one side of the plug or the other--not both, which
duplicates effort and increases coupling.
- Make the entry barrier for writing a new plugin low.
- Developing a new plugin should be simple and require only basic knowledge of
a programming language and a cloud provider's API.
- Strive for a balance between the plugin system's simplicity and flexibility.
These are not mutually exclusive.
- Abstract away as many technical details as possible but do not hide them completely.
- Build an abstraction that serves our community well but allows us to ship it quickly.
- Invest in a flexible solution, avoid one-way-door decisions, foster iteration.
- When in doubt, err on the side of making things simpler for the wider community.
- Limit coupling between concerns to make the system simpler and more extensible.
- Concerns should live on one side of the plug or the other--not both, which
duplicates effort and increases coupling.
#### The most important technical details
1. Favor gRPC communication between a plugin and GitLab Runner.
1. Make it possible to version communication interface and support many versions.
1. Make Go a primary language for writing plugins but accept other languages too.
1. Prefer a GitLab job-aware autoscaler to provider specific autoscalers. Cloud provider
autoscalers don't know which VM to delete when scaling down so they make sub-optimal
decisions. Rather than teaching all autoscalers about GitLab jobs, we prefer to
have one, GitLab-owned autoscaler (not in the plugin).
- Favor gRPC communication between a plugin and GitLab Runner.
- Make it possible to version the communication interface and support many versions.
- Make Go a primary language for writing plugins but accept other languages too.
- Prefer a GitLab job-aware autoscaler to provider-specific autoscalers. Cloud provider
autoscalers don't know which VM to delete when scaling down so they make sub-optimal
decisions. Rather than teaching all autoscalers about GitLab jobs, we prefer to
have one, GitLab-owned autoscaler (not in the plugin).
## Plugin boundary proposals

View File

@ -87,7 +87,7 @@ Currently, different entities like issues, epics, merge requests etc share many
### Flexibility
With existing implementation, we have a rigid structure for issuables, merge requests, epics etc. This structure is defined on both backend and frontend, so any change requires a coordinated effort. Also, it would be very hard to make this structure customizable for the user without introducing a set of flags to enable/disable any existing feature. Work Item architecture allows frontend to display Work Item widgets in a flexible way: whatever is present in Work Item widgets, will be rendered on the page. This allows us to make changes fast and makes the structure way more flexible. For example, if we want to stop displaying labels on the Incident page, we simply remove labels widget from Incident Work Item type on the backend. Also, in the future this will allow users to define the set of widgets they want to see on custom Work Item types.
With the existing implementation, we have a rigid structure for issuables, merge requests, epics, etc. This structure is defined on both backend and frontend, so any change requires a coordinated effort. Also, it would be very hard to make this structure customizable for the user without introducing a set of flags to enable/disable any existing feature. Work Item architecture allows the frontend to display Work Item widgets in a flexible way: whatever is present in Work Item widgets will be rendered on the page. This allows us to make changes fast and makes the structure far more flexible. For example, if we want to stop displaying labels on the Incident page, we remove the labels widget from the Incident Work Item type on the backend. Also, in the future this will allow users to define the set of widgets they want to see on custom Work Item types.
### A consistent experience

View File

@ -10,9 +10,6 @@ description: Require approvals prior to deploying to a Protected Environment
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/343864) in GitLab 14.7 with a flag named `deployment_approvals`. Disabled by default.
> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/347342) in GitLab 14.8.
WARNING:
This feature is in a [Beta](../../policy/alpha-beta-support.md#beta-features) stage and subject to change without prior notice.
It may be useful to require additional approvals before deploying to certain protected environments (for example, production). This pre-deployment approval requirement is useful to accommodate testing, security, or compliance processes that must happen before each deployment.
When a protected environment requires one or more approvals, all deployments to that environment become blocked and wait for the required approvals from the `Allowed to Deploy` list before running.

View File

@ -133,7 +133,7 @@ they have the following privileges:
Users granted access to a protected environment, but not push or merge access
to the branch deployed to it, are only granted access to deploy the environment.
[Invited groups](../../user/project/members/share_project_with_groups.md#share-a-project-with-a-group-of-users) added
to the project with [Reporter role](../../user/permissions.md#project-members-permissions), appear in the dropdown menu for deployment-only access.
to the project with the [Reporter role](../../user/permissions.md#project-members-permissions) appear in the dropdown list for deployment-only access.
To add deployment-only access:
@ -146,7 +146,7 @@ To add deployment-only access:
Maintainers can:
- Update existing protected environments at any time by changing the access in the
**Allowed to Deploy** dropdown menu.
**Allowed to Deploy** dropdown list.
- Unprotect a protected environment by clicking the **Unprotect** button for that environment.
After an environment is unprotected, all access entries are deleted and must

View File

@ -36,7 +36,7 @@ apt-get install ruby-dev
```
Dpl provides support for a vast number of services, including: Heroku, Cloud Foundry, AWS/S3, and more.
To use it simply define provider and any additional parameters required by the provider.
To use it, define the provider and any additional parameters required by that provider.
For example, to deploy your application to Heroku, specify `heroku` as the provider, and specify the `api_key` and `app` parameters.
All possible parameters can be found in the [Heroku API section](https://github.com/travis-ci/dpl#heroku-api).

View File

@ -86,7 +86,7 @@ steering the browser. In this case, we can use
[`browser.url`](http://v4.webdriver.io/api/protocol/url.html) to visit `/page-that-does-not-exist` to
hit our 404 page. We can then use [`browser.getUrl`](http://v4.webdriver.io/api/property/getUrl.html)
to verify that the current page is indeed at the location we specified. To interact with the page,
we can simply pass CSS selectors to
we can pass CSS selectors to
[`browser.element`](http://v4.webdriver.io/api/protocol/element.html) to get access to elements on the
page and to interact with them - for example, to click on the link back to the home page.

View File

@ -189,7 +189,7 @@ phpenv config-add my_config.ini
Since this is a pretty bare installation of the PHP environment, you may need
some extensions that are not currently present on the build machine.
To install additional extensions simply execute:
To install additional extensions, execute:
```shell
pecl install <extension>
@ -272,5 +272,5 @@ We have set up an [Example PHP Project](https://gitlab.com/gitlab-examples/php)
that runs on [GitLab.com](https://gitlab.com) using our publicly available
[shared runners](../runners/index.md).
Want to hack on it? Simply fork it, commit, and push your changes. Within a few
Want to hack on it? Fork it, commit, and push your changes. Within a few
moments the changes are picked up by a public runner and the job begins.

View File

@ -211,7 +211,7 @@ For advanced CI/CD teams, project templates can enable the reuse of pipeline con
as well as encourage inner sourcing.
In self-managed GitLab instances, you can build an [Instance Template Repository](../../user/admin_area/settings/instance_template_repository.md).
Development teams across the whole organization can select templates from a dropdown menu.
Development teams across the whole organization can select templates from a dropdown list.
A group maintainer or a group owner is able to set a group to use as the source for the
[custom project templates](../../user/admin_area/custom_project_templates.md), which can
be used by all projects in the group. An instance administrator can set a group as

View File

@ -329,7 +329,7 @@ you can view a graph or download a CSV file with this data.
1. On the top bar, select **Main menu > Projects** and find your project.
1. On the left sidebar, select **Analytics > Repository**.
The historic data for each job is listed in the dropdown above the graph.
The historic data for each job is listed in the dropdown list above the graph.
To view a CSV file of the data, select **Download raw data (`.csv`)**.

View File

@ -69,5 +69,5 @@ We have set up an [Example Redis Project](https://gitlab.com/gitlab-examples/red
that runs on [GitLab.com](https://gitlab.com) using our publicly available
[shared runners](../runners/index.md).
Want to hack on it? Simply fork it, commit and push your changes. Within a few
Want to hack on it? Fork it, commit and push your changes. Within a few
moments the changes are picked up by a public runner and the job begins.

View File

@ -10,13 +10,13 @@ type: tutorial
GitLab currently doesn't have built-in support for managing SSH keys in a build
environment (where the GitLab Runner runs).
Use SSH keys when:
Use SSH keys when you want to:
1. You want to check out internal submodules
1. You want to download private packages using your package manager (for example, Bundler)
1. You want to deploy your application to your own server, or, for example, Heroku
1. You want to execute SSH commands from the build environment to a remote server
1. You want to rsync files from the build environment to a remote server
- Check out internal submodules.
- Download private packages using your package manager. For example, Bundler.
- Deploy your application to your own server or, for example, Heroku.
- Execute SSH commands from the build environment to a remote server.
- Rsync files from the build environment to a remote server.
If any of the above rings a bell, then you most likely need an SSH key.

View File

@ -114,7 +114,7 @@ making it both concise and descriptive, err on the side of descriptive.
- **Bad:** Go to a project order.
- **Good:** Show a user's starred projects at the top of the "Go to project"
dropdown.
dropdown list.
The first example provides no context of where the change was made, or why, or
how it benefits the user.
@ -126,9 +126,9 @@ how it benefits the user.
Again, the first example is too vague and provides no context.
- **Bad:** Fixes and Improves CSS and HTML problems in mini pipeline graph and
builds dropdown.
builds dropdown list.
- **Good:** Fix tooltips and hover states in mini pipeline graph and builds
dropdown.
dropdown list.
The first example is too focused on implementation details. The user doesn't
care that we changed CSS and HTML, they care about the _end result_ of those

View File

@ -391,7 +391,7 @@ This is useful information for reviewers to make sure the template is safe to be
### Make sure the new template can be selected in UI
Templates located under some directories are also [selectable in the **New file** UI](#template-directories).
When you add a template into one of those directories, make sure that it correctly appears in the dropdown:
When you add a template into one of those directories, make sure that it correctly appears in the dropdown list:
![CI/CD template selection](img/ci_template_selection_v13_1.png)

View File

@ -57,7 +57,7 @@ We make the following assumption with regards to automatically being considered
- Team members working on a specific feature (for example, search) are considered domain experts for that feature.
We default to assigning reviews to team members with domain expertise.
When a suitable [domain expert](#domain-experts) isn't available, you can choose any team member to review the MR, or simply follow the [Reviewer roulette](#reviewer-roulette) recommendation.
When a suitable [domain expert](#domain-experts) isn't available, you can choose any team member to review the MR, or follow the [Reviewer roulette](#reviewer-roulette) recommendation.
To find a domain expert:

View File

@ -373,7 +373,7 @@ below will make it easy to manage this, without unnecessary overhead.
which might lead to many hard problems to solve. Changing some text in GitLab
is probably 1, adding a new Git Hook maybe 4 or 5, big features 7-9.
1. If something is very large, it should probably be split up into multiple
issues or chunks. You can simply not set the weight of a parent issue and set
issues or chunks. You can leave the weight of a parent issue unset and set
weights on the child issues.
## Regression issues
@ -432,7 +432,7 @@ original merge request - or not tracked at all!
The overheads of scheduling, and rate of change in the GitLab codebase, mean
that the cost of a trivial technical debt issue can quickly exceed the value of
tracking it. This generally means we should resolve these in the original merge
request - or simply not create a follow-up issue at all.
request - or not create a follow-up issue at all.
For example, a typo in a comment that is being copied between files is worth
fixing in the same MR, but not worth creating a follow-up issue for. Renaming a

View File

@ -13,7 +13,7 @@ Danger is a gem that runs in the CI environment, like any other analysis tool.
What sets it apart from (for example, RuboCop) is that it's designed to allow you to
easily write arbitrary code to test properties of your code or changes. To this
end, it provides a set of common helpers and access to information about what
has actually changed in your environment, then simply runs your code!
has actually changed in your environment, then runs your code!
If Danger is asking you to change something about your merge request, it's best
just to make the change. If you want to learn how Danger works, or make changes

View File

@ -86,7 +86,7 @@ migration classes must be defined in the namespace
Scheduling a background migration should be done in a post-deployment
migration that includes `Gitlab::Database::MigrationHelpers`
To do so, simply use the following code while
To do so, use the following code while
replacing the class name and arguments with whatever values are necessary for
your migration:
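For illustration, a minimal sketch of such a post-deployment migration; the migration class, table, and interval are placeholders, not the document's exact example:

```ruby
# Post-deployment migration that schedules a background migration in batches.
class ScheduleExampleBackgroundMigration < ActiveRecord::Migration[6.1]
  include Gitlab::Database::MigrationHelpers

  MIGRATION = 'ExampleBackgroundMigration' # class under lib/gitlab/background_migration/
  DELAY_INTERVAL = 2.minutes

  disable_ddl_transaction!

  def up
    queue_background_migration_jobs_by_range_at_intervals(
      define_batchable_model('routes'), # table to iterate over, in id batches
      MIGRATION,
      DELAY_INTERVAL,
      track_jobs: true
    )
  end

  def down
    # no-op: scheduled background jobs are not unscheduled automatically
  end
end
```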
@ -110,7 +110,7 @@ You also need to make sure that newly created data is either migrated, or
saved in both the old and new version upon creation. For complex and time
consuming migrations it's best to schedule a background job using an
`after_create` hook so this doesn't affect response timings. The same applies to
updates. Removals in turn can be handled by simply defining foreign keys with
updates. Removals in turn can be handled by defining foreign keys with
cascading deletes.
### Rescheduling background migrations

View File

@ -103,7 +103,7 @@ This looks working as a workaround, however, this approach has some downsides th
Therefore, you need to be careful that the offset doesn't exceed the maximum value of a 2-byte integer.
In conclusion, you should define all of the key/value pairs in FOSS.
For example, you can simply write the following code in the above case:
For example, you can write the following code in the above case:
```ruby
class Pipeline < ApplicationRecord
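  # (Illustrative continuation, not the document's exact example.)
  # Define every key/value pair in FOSS, including values only used in EE,
  # so their integer offsets can never collide:
  enum failure_reason: {
    unknown_failure: 0,
    config_error: 1,
    activity_limit_exceeded: 20 # used only in EE, but defined here in FOSS
  }
end
```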

View File

@ -181,7 +181,7 @@ end
```
You can still save relations that are not `BulkInsertSafe` in this block; they
simply are treated as if you had invoked `save` from outside the block.
are treated as if you had invoked `save` from outside the block.
## Known limitations

View File

@ -311,7 +311,7 @@ end
The "`it has loose foreign keys`" shared example can be used to test the presence of the `ON DELETE` trigger and the
loose foreign key definitions.
Simply add to the model test file:
Add to the model test file:
```ruby
it_behaves_like 'it has loose foreign keys' do
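  # (Illustrative body, assuming the shared example takes a factory and
  # table name; the model here is a placeholder.)
  let(:factory_name) { :ci_pipeline }
  let(:table_name) { 'ci_pipelines' }
end
```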

View File

@ -50,9 +50,9 @@ When we list records on the page we often provide additional filters and differe
For the MVC version, consider the following:
- Reduce the number of sort options to the minimum.
- Reduce the number of filters (dropdown, search bar) to the minimum.
- Reduce the number of filters (dropdown list, search bar) to the minimum.
To make sorting and pagination efficient, for each sort option we need at least two database indexes (ascending, descending order). If we add filter options (by state or by author), we might need more indexes to maintain good performance. Note that indexes are not free, they can significantly affect the `UPDATE` query timings.
To make sorting and pagination efficient, for each sort option we need at least two database indexes (ascending, descending order). If we add filter options (by state or by author), we might need more indexes to maintain good performance. Indexes are not free; they can significantly affect the `UPDATE` query timings.
It's not possible to make all filter and sort combinations performant, so we should try to optimize performance for the most common usage patterns.
@ -154,7 +154,7 @@ Here we're leveraging the ordered property of the b-tree database index. Values
##### `COUNT(*)` on a large dataset
Kaminari by default executes a count query to determine the number of pages for rendering the page links. Count queries can be quite expensive for a large table. In an unfortunate scenario the queries simply time out.
Kaminari by default executes a count query to determine the number of pages for rendering the page links. Count queries can be quite expensive for a large table. In an unfortunate scenario the queries time out.
To work around this, we can run Kaminari without invoking the count SQL query.
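One way to do that is Kaminari's `without_count` mode, sketched here with an illustrative model; the page links then degrade to next/previous only:

```ruby
# No COUNT(*) is issued; Kaminari fetches one extra record to know
# whether a next page exists.
Project.order(:id).page(params[:page]).without_count
```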
@ -264,7 +264,7 @@ Looking at the query execution plan, we can see that this query read only 5 rows
##### No page numbers
Offset pagination provides an easy way to request a specific page. We can simply edit the URL and modify the `page=` URL parameter. Keyset pagination cannot provide page numbers because the paging logic might depend on different columns.
Offset pagination provides an easy way to request a specific page. We can edit the URL and modify the `page=` URL parameter. Keyset pagination cannot provide page numbers because the paging logic might depend on different columns.
In the previous example, the column is the `id`, so we might see something like this in the `URL`:

View File

@ -465,7 +465,7 @@ RSpec.describe '<What I am taking screenshots of>', :js do
#### Full page screenshots
To take a full page screenshot simply `visit the page` and perform any expectation on real content (to have capybara wait till the page is ready and not take a white screenshot).
To take a full page screenshot, `visit the page` and perform any expectation on real content (to have Capybara wait until the page is ready and not take a white screenshot).
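A minimal sketch of such a spec; the path and expectation text are illustrative, and the content assertion is what makes Capybara wait for the render:

```ruby
RSpec.describe 'My page', :js do
  it 'my_page_screenshot' do
    visit 'my/page/path' # hypothetical path
    expect(page).to have_content 'Real page content' # waits until rendered
  end
end
```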
#### Element screenshot

View File

@ -824,7 +824,7 @@ end
Sometimes we need EE-specific behavior in some of the APIs. Normally we could
use EE methods to override CE methods, however API routes are not methods and
therefore can't be simply overridden. We need to extract them into a standalone
therefore cannot be overridden. We need to extract them into a standalone
method, or introduce some "hooks" where we could inject behavior in the CE
route. Something like this:
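A hedged sketch of the hook approach; the endpoint and helper names are illustrative, not the document's exact example:

```ruby
module API
  class MergeRequests < ::API::Base
    helpers do
      # CE no-op; an EE module overrides this helper to inject EE behavior.
      def update_merge_request_ee(merge_request); end
    end

    put ':id/merge_requests/:merge_request_iid/merge' do
      merge_request = find_merge_request!(params[:merge_request_iid]) # hypothetical finder
      update_merge_request_ee(merge_request)
      # ... the rest of the CE route stays unchanged
    end
  end
end
```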
@ -875,8 +875,8 @@ end
#### EE `route_setting`
It's very hard to extend this in an EE module, and this is simply storing
some meta-data for a particular route. Given that, we could simply leave the
It's very hard to extend this in an EE module, and this only stores
some metadata for a particular route. Given that, we could leave the
EE `route_setting` in CE as it doesn't hurt and we don't use
that metadata in CE.
@ -1416,5 +1416,5 @@ to avoid conflicts during CE to EE merge.
### GitLab-svgs
Conflicts in `app/assets/images/icons.json` or `app/assets/images/icons.svg` can
be resolved simply by regenerating those assets with
be resolved by regenerating those assets with
[`yarn run svg`](https://gitlab.com/gitlab-org/gitlab-svgs).

View File

@ -461,8 +461,8 @@ _from "Disk-based Shard Allocation | Elasticsearch Reference" [5.6](https://www.
The use of Elasticsearch in GitLab is only ever as a secondary data store.
This means that all of the data stored in Elasticsearch can always be derived
again from other data sources, specifically PostgreSQL and Gitaly. Therefore if
the Elasticsearch data store is ever corrupted for whatever reason you can
simply reindex everything from scratch.
the Elasticsearch data store is ever corrupted for whatever reason, you can reindex
everything from scratch.
If your Elasticsearch index is incredibly large, it may be too time-consuming or
cause too much downtime to reindex from scratch. There aren't any built-in

View File

@ -146,7 +146,7 @@ Even in these scenarios, consider avoiding the Singleton pattern.
#### Utility Functions
When no state needs to be managed, we can simply export utility functions from a module without
When no state needs to be managed, we can export utility functions from a module without
messing with any class instantiation.
```javascript

View File

@ -330,7 +330,7 @@ Along with creating local data, we can also extend existing GraphQL types with `
##### Mocking API response with local Apollo cache
Using local Apollo Cache is helpful when we have a need to mock some GraphQL API responses, queries, or mutations locally (such as when they're still not added to our actual API).
Using local Apollo Cache is helpful when we have a reason to mock some GraphQL API responses, queries, or mutations locally (such as when they're still not added to our actual API).
For example, we have a [fragment](#fragments) on `DesignVersion` used in our queries:
@ -341,7 +341,7 @@ fragment VersionListItem on DesignVersion {
}
```
We also need to fetch the version author and the `created at` property to display in the versions dropdown. But, these changes are still not implemented in our API. We can change the existing fragment to get a mocked response for these new fields:
We must also fetch the version author and the `created at` property to display in the versions dropdown list. But these changes are still not implemented in our API. We can change the existing fragment to get a mocked response for these new fields:
```javascript
fragment VersionListItem on DesignVersion {
@ -627,7 +627,7 @@ GraphQL entities are not yet part of the schema, or if they are feature-flagged
### Manually triggering queries
Queries on a component's `apollo` property are made automatically when the component is created.
Some components instead want the network request made on-demand, for example a dropdown with lazy-loaded items.
Some components instead want the network request made on-demand, for example a dropdown list with lazy-loaded items.
There are two ways to do this:

View File

@ -176,7 +176,7 @@ For project level features:
Feature.enabled?(:feature_ice_cold_projects, project)
```
If you are not certain what percentages to use, simply use the following steps:
If you are not certain what percentages to use, use the following steps:
1. 25%
1. 50%

View File

@ -357,7 +357,7 @@ Every binary ideally must have structured (JSON) logging in place as it helps
with searching and filtering the logs. At GitLab we use structured logging in
JSON format, as all our infrastructure assumes that. When using
[Logrus](https://github.com/sirupsen/logrus) you can turn on structured
logging simply by using the build in [JSON formatter](https://github.com/sirupsen/logrus#formatters). This follows the
logging by using the built-in [JSON formatter](https://github.com/sirupsen/logrus#formatters). This follows the
same logging type we use in our [Ruby applications](../logging.md#use-structured-json-logging).
#### How to use Logrus

View File

@ -134,7 +134,7 @@ z.sync
```
NOTE:
There is no dependency analysis in the use of batch-loading. There is simply
There is no dependency analysis in the use of batch-loading. There is
a pending queue of requests, and as soon as any one result is needed, all pending
requests are evaluated.
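For example (a sketch using the batch-loader gem; the model is illustrative):

```ruby
# Both loads only enqueue requests; nothing hits the database yet.
x = BatchLoader.for(1).batch { |ids, loader| Project.where(id: ids).each { |p| loader.call(p.id, p) } }
y = BatchLoader.for(2).batch { |ids, loader| Project.where(id: ids).each { |p| loader.call(p.id, p) } }

x.sync # the first result that is needed evaluates ALL pending requests, including y's
```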

View File

@ -79,7 +79,7 @@ recreate it with the following steps:
## Manually update the translation levels
There's no automated way to pull the translation levels from Crowdin, to display
this information in the language selection dropdown. Therefore, the translation
this information in the language selection dropdown list. Therefore, the translation
levels are hard-coded in the `TRANSLATION_LEVELS` constant in [`i18n.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/i18n.rb),
and must be regularly updated.
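The constant looks roughly like this (a sketch; the locales and percentages are illustrative, not current values):

```ruby
# lib/gitlab/i18n.rb — percentage of strings translated per locale,
# shown in the language selection dropdown list.
TRANSLATION_LEVELS = {
  'de' => 14,
  'es' => 34,
  'uk' => 54
}.freeze
```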

View File

@ -68,7 +68,7 @@ controller mixin. Upon receiving a request coming from a client through Workhors
it should trigger the image scaler as per the criteria mentioned above, and if so, render a special response
header field (`Gitlab-Workhorse-Send-Data`) with the necessary parameters for Workhorse to carry
out the scaling request. If Rails decides the request does not constitute a valid image scaling request,
we simply follow the path we take to serve any ordinary upload.
we follow the path we take to serve any ordinary upload.
### Workhorse

View File

@ -16,7 +16,7 @@ There are several ways to import a project.
### Importing via UI
The first option is to simply [import the Project tarball file via the GitLab UI](../user/project/settings/import_export.md#import-a-project-and-its-data):
The first option is to [import the Project tarball file via the GitLab UI](../user/project/settings/import_export.md#import-a-project-and-its-data):
1. Create the group `qa-perf-testing`
1. Import the [GitLab FOSS project tarball](https://gitlab.com/gitlab-org/quality/performance-data/-/blob/master/projects_export/gitlabhq_export.tar.gz) into the Group.
@ -59,7 +59,7 @@ This script was introduced in GitLab 12.6 for importing large GitLab project exp
As part of this script we also disable direct and background upload to avoid situations where a huge archive is being uploaded to GCS (while being inside a transaction, which can cause idle transaction timeouts).
We can simply run this script from the terminal:
We can run this script from the terminal:
Parameters:

View File

@ -44,7 +44,7 @@ and running.
Can the queries used potentially take down any critical services and result in
engineers being woken up in the night? Can a malicious user abuse the code to
take down a GitLab instance? Do my changes simply make loading a certain page
take down a GitLab instance? Do my changes make loading a certain page
slower? Does execution time grow exponentially given enough load or data in the
database?

View File

@ -612,7 +612,7 @@ Note that it is not necessary to check if the index exists prior to
removing it, however it is required to specify the name of the
index that is being removed. This can be done either by passing the name
as an option to the appropriate form of `remove_index` or `remove_concurrent_index`,
or more simply by using the `remove_concurrent_index_by_name` method. Explicitly
or by using the `remove_concurrent_index_by_name` method. Explicitly
specifying the name is important to ensure the correct index is removed.
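A minimal sketch of the by-name form; the table, column, and index name are placeholders:

```ruby
class RemoveUnusedUsersIndex < Gitlab::Database::Migration[2.0]
  INDEX_NAME = 'index_users_on_unused_column'

  disable_ddl_transaction!

  def up
    # No existence check is needed, but the name must be explicit.
    remove_concurrent_index_by_name :users, INDEX_NAME
  end

  def down
    add_concurrent_index :users, :unused_column, name: INDEX_NAME
  end
end
```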
For a small table (such as an empty one or one with less than `1,000` records),

View File

@ -77,7 +77,7 @@ expect(cleanForSnapshot(wrapper.element)).toMatchSnapshot();
- [Pinning test in a Haml to Vue refactor](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/27691#pinning-tests)
- [Pinning test in isolating a bug](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/32198#note_212736225)
- [Pinning test in refactoring dropdown](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/28173)
- [Pinning test in refactoring dropdown list](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/28173)
- [Pinning test in refactoring vulnerability_details.vue](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25830/commits)
- [Pinning test in refactoring notes_award_list.vue](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/29528#pinning-test)
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [Video of pair programming session on pinning tests](https://youtu.be/LrakPcspBK4)

View File

@ -63,7 +63,7 @@ do so, but then we'd need as many options as we have features. Every option adds
two code paths, which means that for four features we have to cover 8 different
code paths.
A much more reliable (and pleasant) way of dealing with this, is to simply use
A much more reliable (and pleasant) way of dealing with this is to use
the underlying bits that make up `GroupProjectsFinder` directly. This means we
may need a little bit more code in `IssuableFinder`, but it also gives us much
more control and certainty. This means we might end up with something like this:
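A hedged sketch of the idea (not the document's exact code): compose the relation from the underlying scopes instead of delegating to `GroupProjectsFinder`; the scope names are illustrative:

```ruby
class IssuableFinder
  def projects
    return @projects if defined?(@projects)

    # Build the relation directly from the model's scopes, so this finder
    # controls exactly which options apply.
    relation = Project.public_or_visible_to_user(current_user)
    relation = relation.in_namespace(params[:group_id]) if params[:group_id]

    @projects = relation
  end
end
```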
@ -122,7 +122,7 @@ the various abstractions and what they can (not) reuse:
Everything in `app/controllers`.
Controllers should not do much work on their own, instead they simply pass input
Controllers should not do much work on their own; instead they pass input
to other classes and present the results.
### API endpoints

View File

@ -216,7 +216,7 @@ core. It does not support multi-threading.
Dumb secondaries: Redis secondaries (also known as replicas) don't actually
handle any load. Unlike PostgreSQL secondaries, they don't even serve
read queries. They simply replicate data from the primary and take over
read queries. They replicate data from the primary and take over
only when the primary fails.
### Redis Sentinels

View File

@ -1071,7 +1071,7 @@ Symlink attacks makes it possible for an attacker to read the contents of arbitr
#### Ruby
For zip files, the [`rubyzip`](https://rubygems.org/gems/rubyzip) Ruby gem is already patched against symlink attacks as it simply ignores symbolic links, so for this vulnerable example we will extract a `tar.gz` file with `Gem::Package::TarReader`:
For zip files, the [`rubyzip`](https://rubygems.org/gems/rubyzip) Ruby gem is already patched against symlink attacks as it ignores symbolic links, so for this vulnerable example we will extract a `tar.gz` file with `Gem::Package::TarReader`:
```ruby
# Vulnerable tar.gz extraction example!
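# (Hedged continuation, not the document's exact code; the archive name is
# illustrative.) Writing symlink entries verbatim, then writing file entries
# through them, lets a crafted archive escape the extraction directory.
require 'rubygems/package'
require 'zlib'

Zlib::GzipReader.open('archive.tar.gz') do |gz|
  Gem::Package::TarReader.new(gz).each do |entry|
    if entry.symlink?
      File.symlink(entry.header.linkname, entry.full_name) # attacker-controlled target
    elsif entry.file?
      File.binwrite(entry.full_name, entry.read) # may write through the symlink
    end
  end
end
```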

View File

@ -605,7 +605,7 @@ alt_usage_data(value = nil, fallback: -1, &block)
Arguments:
- `value`: a simple static value in which case the value is simply returned.
- `value`: a static value, in which case the value is returned.
- or a `block`: which is evaluated
- `fallback: -1`: the common value used for any metrics that are failing.
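For example (both forms from the description above; the block expression is illustrative):

```ruby
alt_usage_data(999)                                           # static value, returned as-is
alt_usage_data(fallback: -1) { Gitlab::CurrentSettings.uuid } # block, evaluated when the ping is generated
```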
@ -714,7 +714,7 @@ We also use `#database-lab` and [explain.depesz.com](https://explain.depesz.com/
- [Example 2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26445)
- Use defined `start` and `finish`, and simple queries.
These values can be memoized and reused, as in this [example merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37155).
- Avoid joins and write the queries as simply as possible,
- Avoid joins and write the queries as clearly as possible,
as in this [example merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36316).
- Set a custom `batch_size` for `distinct_count`, as in this [example merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38000).

View File

@ -90,7 +90,7 @@ Each click event provides attributes that describe the event.
| Attribute | Type | Required | Description |
| --------- | ------- | -------- | ----------- |
| category | text | true | The page or backend section of the application. Unless infeasible, use the Rails page attribute by default in the frontend, and namespace + class name on the backend, for example, `Notes::CreateService`. |
| action | text | true | The action the user takes, or aspect that's being instrumented. The first word must describe the action or aspect. For example, clicks must be `click`, activations must be `activate`, creations must be `create`. Use underscores to describe what was acted on. For example, activating a form field is `activate_form_input`, an interface action like clicking on a dropdown is `click_dropdown`, a behavior like creating a project record from the backend is `create_project`. |
| action | text | true | The action the user takes, or aspect that's being instrumented. The first word must describe the action or aspect. For example, clicks must be `click`, activations must be `activate`, creations must be `create`. Use underscores to describe what was acted on. For example, activating a form field is `activate_form_input`, an interface action like clicking on a dropdown list is `click_dropdown`, a behavior like creating a project record from the backend is `create_project`. |
| label | text | false | The specific element or object to act on. This can be one of the following: the label of the element, for example, a tab labeled 'Create from template' for `create_from_template`; a unique identifier if no text is available, for example, `groups_dropdown_close` for closing the Groups dropdown in the top bar; or the name or title attribute of a record being created. For Service Ping metrics adapted to Snowplow events, this should be the full metric [key path](../service_ping/metrics_dictionary.md#metric-key_path) taken from its definition file. |
| property | text | false | Any additional property of the element, or object being acted on. For Service Ping metrics adapted to Snowplow events, this should be additional information or context that can help analyze the event. For example, in the case of `usage_activity_by_stage_monthly.create.merge_requests_users`, there are four different possible merge request actions: "create", "merge", "comment", and "close". Each of these would be a possible property value. |
| value | decimal | false | Describes a numeric value (decimal) directly related to the event. This could be the value of an input. For example, `10` when clicking `internal` visibility. |
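On the backend, an event with these attributes might be fired roughly like this (a sketch; the attribute values are illustrative):

```ruby
Gitlab::Tracking.event(
  'Notes::CreateService', # category: namespace + class name
  'create_note',          # action: verb first, underscores for the object
  label: 'merge_request_note',
  property: 'comment',
  value: 1
)
```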

View File

@ -54,8 +54,8 @@ description, note the following:
precision is 2. In some extremely low panels, you can see `0.00`, even though there is still some
real traffic.
To inspect the raw data of the panel for further calculation, select **Inspect** from the dropdown
list of a panel. Queries, raw data, and panel JSON structure are available.
To inspect the raw data of the panel for further calculation, select **Inspect** from the dropdown list of a panel.
Queries, raw data, and panel JSON structure are available.
Read more at [Grafana panel inspection](http://grafana.com/docs/grafana/next/panels/query-a-data-source/).
All the dashboards are powered by [Grafana](https://grafana.com/), a frontend for displaying metrics.

View File

@ -66,7 +66,7 @@ As mentioned in the [folder structure section](#consumer-tests), consumer tests
#### Provider naming
These are the API endpoints that provides the data to the consumer so they are simply named according to the API endpoint they pertain to. Be mindful that this name is as descriptive as possible. For example, if we're writing a test for the `GET /groups/:id/projects` endpoint, we don't want to simply name it "Projects endpoint" as there is a `GET /projects` endpoint as well that also fetches a list of projects the user has access to across all of GitLab. An easy way to name them is by checking out our [API documentation](../../../api/api_resources.md) and naming it the same way it is named in there. So the [`GET /groups/:id/projects`](../../../api/groups.md#list-a-groups-projects) would be called `List a groups projects` and [`GET /projects`](../../../api/projects.md#list-all-projects) would be called `List all projects`. Subsequently, the test files are named `list_a_groups_projects_helper.rb` and `list_all_projects_helper.rb` respectively.
These are the API endpoints that provide the data to the consumer, so they are named according to the API endpoint they pertain to. Make sure this name is as descriptive as possible. For example, if we're writing a test for the `GET /groups/:id/projects` endpoint, we don't want to name it "Projects endpoint" as there is a `GET /projects` endpoint as well that also fetches a list of projects the user has access to across all of GitLab. An easy way to name them is by checking out our [API documentation](../../../api/api_resources.md) and naming it the same way it is named in there. So the [`GET /groups/:id/projects`](../../../api/groups.md#list-a-groups-projects) would be called `List a groups projects` and [`GET /projects`](../../../api/projects.md#list-all-projects) would be called `List all projects`. Subsequently, the test files are named `list_a_groups_projects_helper.rb` and `list_all_projects_helper.rb`, respectively.
There are some cases where the provider being tested may not be documented so, in those cases, fall back to choosing a name that is as descriptive as possible to ensure it's easy to tell what the provider is for.

View File

@ -10,7 +10,7 @@ This is a tailored extension of the Best Practices [found in the testing guide](
## Class and module naming
The QA framework uses [Zeitwerk](https://github.com/fxn/zeitwerk) for class and module autoloading. The default Zeitwerk [inflector](https://github.com/fxn/zeitwerk#zeitwerkinflector) simply converts snake_cased file names to PascalCased module or class names. It is advised to stick to this pattern to avoid manual maintenance of inflections.
The QA framework uses [Zeitwerk](https://github.com/fxn/zeitwerk) for class and module autoloading. The default Zeitwerk [inflector](https://github.com/fxn/zeitwerk#zeitwerkinflector) converts snake_cased file names to PascalCased module or class names. It is advised to stick to this pattern to avoid manual maintenance of inflections.
In case custom inflection logic is needed, custom inflectors are added in the [qa.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/qa/qa.rb) file in the `loader.inflector.inflect` method invocation.
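A custom inflection follows the Zeitwerk API and might look like this (the acronym is illustrative):

```ruby
# qa/qa.rb — teach the inflector acronyms that the default
# snake_case -> PascalCase rule would get wrong.
loader.inflector.inflect('ssh_key' => 'SSHKey')
```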

View File

@ -48,7 +48,7 @@ click_element(:my_element, Some::Page)
First it is important to define what a "required element" is.
Simply put, a required element is a visible HTML element that appears on a UI component without any user input.
A required element is a visible HTML element that appears on a UI component without any user input.
"Visible" can be defined as

View File

@ -290,7 +290,7 @@ it('tests a promise rejection', async () => {
});
```
You can also simply return a promise from the test function.
You can also return a promise from the test function.
Using the `done` and `done.fail` callbacks is discouraged when working with
promises. They should not be used.

View File

@ -101,7 +101,7 @@ the GitLab handbook information for the [shared 1Password account](https://about
1. [Filter Workloads by your Review App slug](https://console.cloud.google.com/kubernetes/workload?project=gitlab-review-apps). For example, `review-qa-raise-e-12chm0`.
1. Find and open the `toolbox` Deployment. For example, `review-qa-raise-e-12chm0-toolbox`.
1. Select the Pod in the "Managed pods" section. For example, `review-qa-raise-e-12chm0-toolbox-d5455cc8-2lsvz`.
1. Select the `KUBECTL` dropdown, then `Exec` -> `toolbox`.
1. Select the `KUBECTL` dropdown list, then `Exec` -> `toolbox`.
1. Replace `-c toolbox -- ls` with `-it -- gitlab-rails console` from the
default command or
- Run `kubectl exec --namespace review-qa-raise-e-12chm0 review-qa-raise-e-12chm0-toolbox-d5455cc8-2lsvz -it -- gitlab-rails console` and

View File

@ -88,7 +88,7 @@ To maximize component reusability, widgets should be field wrappers owning the
work item query and mutation of the attribute it's responsible for.
A field component is a generic and simple component. It has no knowledge of the
attribute or work item details, such as input field, date selector, or dropdown.
attribute or work item details, such as input field, date selector, or dropdown list.
Widgets must be configurable to support various use cases, depending on work items.
When building widgets, use slots to provide extra context while minimizing
@ -96,18 +96,18 @@ the use of props and injected attributes.
### Examples
We have a [dropdown field component](https://gitlab.com/gitlab-org/gitlab/-/blob/eea9ad536fa2d28ee6c09ed7d9207f803142eed7/app/assets/javascripts/vue_shared/components/dropdown/dropdown_widget/dropdown_widget.vue)
We have a [dropdown list component](https://gitlab.com/gitlab-org/gitlab/-/blob/eea9ad536fa2d28ee6c09ed7d9207f803142eed7/app/assets/javascripts/vue_shared/components/dropdown/dropdown_widget/dropdown_widget.vue)
for use as reference.
Any work item widget can wrap the dropdown component. The widget has knowledge of
Any work item widget can wrap the dropdown list. The widget has knowledge of
the attribute it mutates, and owns the mutation for it. Multiple widgets can use
the same field component. For example:
- Title and description widgets use the input field component.
- Start and end date use the date selector component.
- Labels, milestones, and assignees selectors use the dropdown component.
- Labels, milestones, and assignees selectors use the dropdown list.
Some frontend widgets already use the dropdown component. Use them as a reference
Some frontend widgets already use the dropdown list. Use them as a reference
for work items widgets development:
- `ee/app/assets/javascripts/boards/components/assignee_select.vue`

View File

@ -670,7 +670,7 @@ Depending on how you installed GitLab and if you did not change the password by
- Your instance ID if you used the official GitLab AMI.
- A randomly generated password stored for 24 hours in `/etc/gitlab/initial_root_password`.
To change the default password, sign in as the `root` user with the default password and [change it in the user profile](../../user/profile#change-your-password).
To change the default password, log in as the `root` user with the default password and [change it in the user profile](../../user/profile/user_passwords.md#change-your-password).
When our [auto scaling group](#create-an-auto-scaling-group) spins up new instances, we are able to sign in with username `root` and the newly created password.

View File

@ -236,7 +236,7 @@ The credentials are:
- Password: the password is automatically created, and there are
[two ways to find it](https://docs.bitnami.com/azure/faq/get-started/find-credentials/).
After signing in, be sure to immediately [change the password](../../user/profile/index.md#change-your-password).
After signing in, be sure to immediately [change the password](../../user/profile/user_passwords.md#change-your-password).
## Maintain your GitLab instance

View File

@ -14,6 +14,8 @@ You can reset user passwords by using a Rake task, a Rails console, or the
To reset a user password, you must be an administrator of a self-managed GitLab instance.
The user's new password must meet all [password requirements](../user/profile/user_passwords.md#password-requirements).
## Use a Rake task
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/52347) in GitLab 13.9.
@ -120,6 +122,11 @@ To reset the root password, follow the steps listed previously.
## Troubleshooting
Use the following information to troubleshoot issues when resetting a
user's password.
### Email confirmation issues
If the new password doesn't work, it might be [an email confirmation issue](../user/upgrade_email_bypass.md). You can
attempt to fix this issue in a Rails console. For example, if a new `root` password isn't working:
@ -132,3 +139,9 @@ attempt to fix this issue in a Rails console. For example, if a new `root` passw
```
1. Attempt to sign in again.
### Unmet password requirements
The password might be too short, too weak, or not meet complexity
requirements. Ensure the password you are attempting to set meets all
[password requirements](../user/profile/user_passwords.md#password-requirements).

View File

@ -15,7 +15,7 @@ instead of the method documented below.
Using Git LFS can help you to reduce the size of your Git
repository and improve its performance.
However, simply adding the large files that are already in your repository to Git LFS
However, adding the large files that are already in your repository to Git LFS
doesn't actually reduce the size of your repository because
the files are still referenced by previous commits.

View File

@ -1093,6 +1093,7 @@ profile increases as the number of tests increases.
| `SECURE_ANALYZERS_PREFIX` | Specify the Docker registry base address from which to download the analyzer. |
| `FUZZAPI_VERSION` | Specify the API Fuzzing container version. Defaults to `2`. |
| `FUZZAPI_IMAGE_SUFFIX` | Specify a container image suffix. Defaults to none. |
| `FUZZAPI_API_PORT` | Specify the communication port number used by the API Fuzzing engine. Defaults to `5500`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367734) in GitLab 15.5. |
| `FUZZAPI_TARGET_URL` | Base URL of API testing target. |
| `FUZZAPI_CONFIG` | [Deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/276395) in GitLab 13.12, replaced with default `.gitlab/gitlab-api-fuzzing-config.yml`. API Fuzzing configuration file. |
|[`FUZZAPI_PROFILE`](#api-fuzzing-profiles) | Configuration profile to use during testing. Defaults to `Quick-10`. |
@ -2225,6 +2226,43 @@ If the issue is occurring with versions v1.6.196 or greater, contact Support and
1. The `gl-api-security-scanner.log` file available as a job artifact. In the right-hand panel of the job details page, select the **Browse** button.
1. The `apifuzzer_fuzz` job definition from your `.gitlab-ci.yml` file.
### `Failed to start session with scanner. Please retry, and if the problem persists reach out to support.`
The API Fuzzing engine outputs an error message when it cannot establish a connection with the scanner application component. The error message is shown in the job output window of the `apifuzzer_fuzz` job. A common cause of this issue is that the background component cannot use the selected port because it's already in use. This error can occur intermittently if timing plays a part (race condition). This issue occurs most often in Kubernetes environments when other services are mapped into the container, causing port conflicts.
Before proceeding with a solution, it is important to confirm that the error message was produced because the port was already taken. To confirm this was the cause:
1. Go to the job console.
1. Look for the artifact `gl-api-security-scanner.log`. You can either download all artifacts by selecting **Download** and then searching for the file, or search the artifacts directly by selecting **Browse**.
1. Open the file `gl-api-security-scanner.log` in a text editor.
1. If the error message was produced because the port was already taken, you should see in the file a message like the following:
- In [GitLab 15.5 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/367734):
```log
Failed to bind to address http://127.0.0.1:5500: address already in use.
```
- In GitLab 15.4 and earlier:
```log
Failed to bind to address http://[::]:5000: address already in use.
```
The text `http://[::]:5000` in the previous message could be different in your case. For instance, it could be `http://[::]:5500` or `http://127.0.0.1:5500`. As long as the remaining parts of the error message are the same, it is safe to assume the port was already taken.
If you did not find evidence that the port was already taken, check other troubleshooting sections that also address the same error message shown in the job console output. If there are no more options, feel free to [get support or request an improvement](#get-support-or-request-an-improvement) through the proper channels.
After you have confirmed that the port was already taken, use the configuration variable `FUZZAPI_API_PORT` ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367734) in GitLab 15.5) to set a fixed port number for the scanner background component.
**Solution**
1. Ensure your `.gitlab-ci.yml` file defines the configuration variable `FUZZAPI_API_PORT`.
1. Update the value of `FUZZAPI_API_PORT` to any available port number greater than 1024. We recommend checking that the new value is not in use by GitLab. See the full list of ports used by GitLab in [Package defaults](../../../administration/package_information/defaults.md#ports).
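A minimal sketch of such a job definition follows; the port number `5600` is an arbitrary placeholder:
```yaml
apifuzzer_fuzz:
  variables:
    # Any free port above 1024 that is not already used by GitLab.
    FUZZAPI_API_PORT: "5600"
```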
### `Error, the OpenAPI document is not valid. Errors were found during validation of the document using the published OpenAPI schema`
At the start of an API Fuzzing job, the OpenAPI Specification is validated against the [published schema](https://github.com/OAI/OpenAPI-Specification/tree/master/schemas). This error is shown when the provided OpenAPI Specification has validation errors. Errors can be introduced when an OpenAPI Specification is created manually or when the schema is generated.

View File

@ -1039,6 +1039,7 @@ can be added, removed, and modified by creating a custom configuration.
| `SECURE_ANALYZERS_PREFIX` | Specify the Docker registry base address from which to download the analyzer. |
| `DAST_API_VERSION` | Specify DAST API container version. Defaults to `2`. |
| `DAST_API_IMAGE_SUFFIX` | Specify a container image suffix. Defaults to none. |
| `DAST_API_API_PORT` | Specify the communication port number used by DAST API engine. Defaults to `5500`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367734) in GitLab 15.5. |
| `DAST_API_TARGET_URL` | Base URL of API testing target. |
|[`DAST_API_CONFIG`](#configuration-files) | DAST API configuration file. Defaults to `.gitlab-dast-api.yml`. |
|[`DAST_API_PROFILE`](#configuration-files) | Configuration profile to use during testing. Defaults to `Quick`. |
@ -2085,6 +2086,43 @@ The DAST API engine outputs an error message when it cannot establish a connecti
- Remove the `DAST_API_API` variable from the `.gitlab-ci.yml` file. The value is inherited from the DAST API CI/CD template. We recommend this method instead of manually setting a value.
- If removing the variable is not possible, check to see if this value has changed in the latest version of the [DAST API CI/CD template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/DAST-API.gitlab-ci.yml). If so, update the value in the `.gitlab-ci.yml` file.
### `Failed to start session with scanner. Please retry, and if the problem persists reach out to support.`
The DAST API engine outputs an error message when it cannot establish a connection with the scanner application component. The error message is shown in the job output window of the `dast_api` job. A common cause of this issue is that the background component cannot use the selected port because it's already in use. This error can occur intermittently if timing plays a part (race condition). This issue occurs most often in Kubernetes environments when other services are mapped into the container, causing port conflicts.
Before proceeding with a solution, it is important to confirm that the error message was produced because the port was already taken. To confirm this was the cause:
1. Go to the job console.
1. Look for the artifact `gl-api-security-scanner.log`. You can either download all artifacts by selecting **Download** and then searching for the file, or search the artifacts directly by selecting **Browse**.
1. Open the file `gl-api-security-scanner.log` in a text editor.
1. If the error message was produced because the port was already taken, you should see in the file a message like the following:
- In [GitLab 15.5 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/367734):
```log
Failed to bind to address http://127.0.0.1:5500: address already in use.
```
- In GitLab 15.4 and earlier:
```log
Failed to bind to address http://[::]:5000: address already in use.
```
The text `http://[::]:5000` in the previous message could be different in your case. For instance, it could be `http://[::]:5500` or `http://127.0.0.1:5500`. As long as the remaining parts of the error message are the same, it is safe to assume the port was already taken.
If you did not find evidence that the port was already taken, check other troubleshooting sections that also address the same error message shown in the job console output. If there are no more options, feel free to [get support or request an improvement](#get-support-or-request-an-improvement) through the proper channels.
After you have confirmed that the port was already taken, use the configuration variable `DAST_API_API_PORT` ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367734) in GitLab 15.5) to set a fixed port number for the scanner background component.
**Solution**
1. Ensure your `.gitlab-ci.yml` file defines the configuration variable `DAST_API_API_PORT`.
1. Update the value of `DAST_API_API_PORT` to any available port number greater than 1024. We recommend checking that the new value is not in use by GitLab. See the full list of ports used by GitLab in [Package defaults](../../../administration/package_information/defaults.md#ports).
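A minimal sketch of such a job definition follows; the port number `5600` is an arbitrary placeholder:
```yaml
dast_api:
  variables:
    # Any free port above 1024 that is not already used by GitLab.
    DAST_API_API_PORT: "5600"
```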
### `Application cannot determine the base URL for the target API`
The DAST API engine outputs an error message when it cannot determine the target API after inspecting the OpenAPI document. This error message is shown when the target API has not been set in the `.gitlab-ci.yml` file, it is not available in the `environment_url.txt` file, and it could not be computed using the OpenAPI document.

View File

@ -172,7 +172,8 @@ When the `staging` job runs, it will connect to the cluster via the agent named
## Restrict project and group access by using impersonation **(PREMIUM)**
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/345014) in GitLab 14.5.
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/345014) in GitLab 14.5.
> - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/357934) in GitLab 15.5 to add impersonation support for environment tiers.
By default, your CI/CD job inherits all the permissions from the service account used to install the
agent in the cluster.
@ -205,16 +206,17 @@ impersonation credentials in the following way:
- `gitlab:ci_job` to identify all requests coming from CI jobs.
- The list of IDs of groups the project is in.
- The project ID.
- The slug of the environment this job belongs to.
- The slug and tier of the environment this job belongs to.
Example: for a CI job in `group1/group1-1/project1` where:
- Group `group1` has ID 23.
- Group `group1/group1-1` has ID 25.
- Project `group1/group1-1/project1` has ID 150.
- Job running in a prod environment.
- Job running in the `prod` environment, which has the `production` environment tier.
Group list would be `[gitlab:ci_job, gitlab:group:23, gitlab:group:25, gitlab:project:150, gitlab:project_env:150:prod]`.
Group list would be `[gitlab:ci_job, gitlab:group:23, gitlab:group_env_tier:23:production, gitlab:group:25,
gitlab:group_env_tier:25:production, gitlab:project:150, gitlab:project_env:150:prod, gitlab:project_env_tier:150:production]`.
- `Extra` carries extra information about the request. The following properties are set on the impersonated identity:
@ -227,6 +229,7 @@ impersonation credentials in the following way:
| `agent.gitlab.com/ci_job_id` | Contains the CI job ID. |
| `agent.gitlab.com/username` | Contains the username of the user the CI job is running as. |
| `agent.gitlab.com/environment_slug` | Contains the slug of the environment. Only set if running in an environment. |
| `agent.gitlab.com/environment_tier` | Contains the tier of the environment. Only set if running in an environment. |
Example `config.yaml` to restrict access by the CI/CD job's identity:
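A minimal sketch of what that configuration might look like, assuming the agent's `ci_access` and `access_as` keys; `path/to/project` is a placeholder. Kubernetes RBAC can then grant or restrict permissions for the group names shown above, such as `gitlab:project_env:150:prod`:
```yaml
ci_access:
  projects:
    - id: path/to/project
      access_as:
        ci_job: {} # requests from this project's CI jobs use the job's impersonated identity
```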

View File

@ -40,7 +40,7 @@ and by doing one of the following:
If you follow the instructions, you can publish `MyProject` by running `npm publish` from the root
directory.
Publishing `Foo` is almost exactly the same. Simply follow the same steps while in the `Foo`
Publishing `Foo` is almost exactly the same. Follow the same steps while in the `Foo`
directory. `Foo` needs its own `package.json` file, which you can add manually by using `npm init`.
`Foo` also needs its own configuration settings. Since you are publishing to the same place, if you
used `npm config set` to set the registry for the parent project, then no additional setup is

View File

@ -25,25 +25,6 @@ To access your user settings:
1. On the top bar, in the top-right corner, select your avatar.
1. Select **Edit profile**.
## Change your password
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/23610) in GitLab 15.4 [with a flag](../../administration/feature_flags.md) named `block_weak_passwords`, weak passwords aren't accepted. Disabled by default.
FLAG:
On self-managed GitLab, by default blocking weak passwords is not available. To make it available, ask an administrator to [enable the feature flag](../../administration/feature_flags.md) named `block_weak_passwords`. On GitLab.com, this feature is available but can be configured by GitLab.com administrators only.
The feature is not ready for production use.
To change your password:
1. On the top bar, in the top-right corner, select your avatar.
1. Select **Edit profile**.
1. On the left sidebar, select **Password**.
1. In the **Current password** text box, enter your current password.
1. In the **New password** and **Password confirmation** text box, enter your new password.
1. Select **Save password**.
If you don't know your current password, select the **I forgot my password** link. A password reset email is sent to the account's **primary** email address.
## Change your username
Your username has a unique [namespace](../namespace/index.md),
@ -478,6 +459,7 @@ Without the `config.extend_remember_period` flag, you would be forced to sign in
- [Create users](account/create_accounts.md)
- [Sign in to your GitLab account](../../topics/authentication/index.md)
- [Change your password](user_passwords.md)
- [Receive emails for sign-ins from unknown IP addresses or devices](unknown_sign_in_notification.md)
- [Receive emails for attempted sign-ins using a wrong two-factor authentication code](wrong_two_factor_authentication_code_notification.md)
- Manage applications that can [use GitLab as an OAuth provider](../../integration/oauth_provider.md#introduction-to-oauth)

View File

@ -0,0 +1,74 @@
---
stage: Manage
group: Authentication and Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# User passwords **(FREE)**
If you use a password to sign in to GitLab, a strong password is very important. A weak or guessable password makes it easier
for unauthorized people to sign in to your account.
Some organizations require you to meet certain requirements when choosing a password.
NOTE:
Improve the security of your account with [two-factor authentication](account/two_factor_authentication.md).
## Choose your password
You can choose a password when you [create a user account](account/create_accounts.md).
If you register your account using an external authentication and
authorization provider, you do not need to choose a password. GitLab
[sets a random, unique, and secure password for you](../../security/passwords_for_integrated_authentication_methods.md).
## Change your password
You can change your password. GitLab enforces [password requirements](#password-requirements) when you choose your new password.
1. On the top bar, in the top-right corner, select your avatar.
1. Select **Edit profile**.
1. On the left sidebar, select **Password**.
1. In the **Current password** text box, enter your current password.
1. In the **New password** and **Password confirmation** text boxes, enter your new password.
1. Select **Save password**.
If you don't know your current password, select the **I forgot my password** link. A password reset email is sent to the account's **primary** email address.
## Password requirements
Your password must meet a set of requirements when:
- You choose a password during registration.
- You choose a new password using the forgotten password reset flow.
- You change your password proactively.
- You change your password after it expires.
- An administrator creates your account.
- An administrator updates your account.
By default, GitLab enforces the following password requirements:
- Minimum and maximum password lengths. For example,
see [the settings for GitLab.com](../gitlab_com/index.md#password-requirements).
- Disallowing [weak passwords](#block-weak-passwords).
Self-managed installations can configure the following additional password requirements:
- [Password minimum and maximum length limits](../../security/password_length_limits.md).
- [Password complexity requirements](../admin_area/settings/sign_up_restrictions.md#password-complexity-requirements).
## Block weak passwords
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/23610) in GitLab 15.4 [with a flag](../../administration/feature_flags.md) named `block_weak_passwords`, which rejects weak passwords. Disabled by default.
FLAG:
On self-managed GitLab, by default blocking weak passwords is not available. To make it available, ask an administrator to [enable the feature flag](../../administration/feature_flags.md) named `block_weak_passwords`.
On GitLab.com, this feature is available but can be configured by GitLab.com administrators only.
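For instance, a self-managed administrator could enable the flag from the Rails console (a sketch, assuming console access):
```ruby
# Enables the flag instance-wide; Feature.disable(:block_weak_passwords) reverts it.
Feature.enable(:block_weak_passwords)
```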
GitLab disallows weak passwords. Your password is considered weak when it:
- Matches one of 4500+ known, breached passwords.
- Contains part of your name, username, or email address.
- Contains a predictable word (for example, `gitlab` or `devops`).
Weak passwords are rejected with the error message: **Password must not contain commonly used combinations of words and letters**.

View File

@ -39,6 +39,6 @@ Suggested Reviewers is off by default and requires a Project Owner or Admin to e
Suggested Reviewers operates completely within the GitLab.com infrastructure, providing the same level of [privacy](https://about.gitlab.com/privacy/) and [security](https://about.gitlab.com/security/) as any other feature of GitLab.com.
No new additional data is collected to enable this feature, simply GitLab is inferencing your merge request against a trained machine learning model. The content of your source code is not used as training data. Your data also never leaves GitLab.com, all training and inference is done within GitLab.com infrastructure.
No additional data is collected to enable this feature. GitLab runs inference on your merge request against a trained machine learning model. The content of your source code is not used as training data. Your data also never leaves GitLab.com; all training and inference are done within GitLab.com infrastructure.
[Read more about the security of GitLab.com](https://about.gitlab.com/security/faq/)

View File

@ -108,7 +108,7 @@ Suppose your repository contained the following files:
└── main.js
```
Then the `.gitlab-ci.yml` example below simply moves all files from the root
Then the `.gitlab-ci.yml` example below moves all files from the root
directory of the project to the `public/` directory. The `.public` workaround
is so `cp` doesn't also copy `public/` to itself in an infinite loop:
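A sketch of that job, following the conventional `pages` job shape (your script may differ):
```yaml
pages:
  script:
    - mkdir .public
    - cp -r * .public # `*` doesn't match the hidden `.public`, so `cp` doesn't recurse into it
    - mv .public public
  artifacts:
    paths:
      - public
```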