Add latest changes from gitlab-org/gitlab@master

GitLab Bot 2022-10-11 03:09:00 +00:00
parent 67d19cc004
commit 574ed32358
87 changed files with 274 additions and 294 deletions

View File

@ -166,7 +166,7 @@ class Issue < ApplicationRecord
scope :with_api_entity_associations, -> {
preload(:timelogs, :closed_by, :assignees, :author, :labels, :issuable_severity,
milestone: { project: [:route, { namespace: :route }] },
project: [:route, { namespace: :route }],
project: [:project_feature, :route, { namespace: :route }],
duplicated_to: { project: [:project_feature] })
}
scope :with_issue_type, ->(types) { where(issue_type: types) }
@ -622,7 +622,9 @@ class Issue < ApplicationRecord
# for performance reasons, check commit: 002ad215818450d2cbbc5fa065850a953dc7ada8
# Make sure to sync this method with issue_policy.rb
def readable_by?(user)
if user.can_read_all_resources?
if !project.issues_enabled?
false
elsif user.can_read_all_resources?
true
elsif project.personal? && project.team.owner?(user)
true

View File

@ -10,6 +10,7 @@ value_type: number
status: active
time_frame: all
data_source: database
instrumentation_class: DistinctCountProjectsWithExpirationPolicyDisabledMetric
distribution:
- ee
- ce

View File

@ -53,7 +53,7 @@ Documentation on package signatures can be found at [Signed Packages](signed_pac
Configuration file in `/etc/gitlab/gitlab.rb` is created on initial installation
of the Omnibus GitLab package. On subsequent package upgrades, the configuration
file is not updated with new configuration. This is done in order to avoid
file is not updated with new configuration. This is done to avoid
accidental overwrite of user configuration provided in `/etc/gitlab/gitlab.rb`.
New configuration options are noted in the
@ -76,7 +76,7 @@ characters on each line.
## Init system detection
Omnibus GitLab attempts to query the underlying system in order to
Omnibus GitLab attempts to query the underlying system to
check which init system it uses.
This manifests itself as a `WARNING` during the `sudo gitlab-ctl reconfigure`
run.

View File

@ -1225,7 +1225,7 @@ and signed with the private key.
The Registry then verifies that the signature matches the registry certificate
specified in its configuration and allows the operation.
GitLab background jobs processing (through Sidekiq) also interacts with Registry.
These jobs talk directly to Registry in order to handle image deletion.
These jobs talk directly to Registry to handle image deletion.
## Troubleshooting

View File

@ -9,7 +9,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
## Get current application statistics
List the current statistics of the GitLab instance. You have to be an
administrator in order to perform this action.
administrator to perform this action.
NOTE:
These statistics are approximate.

View File

@ -20,7 +20,7 @@ company behind the project.
This effort is described in more detail
[in the infrastructure team handbook page](https://about.gitlab.com/handbook/engineering/infrastructure/production/kubernetes/gitlab-com/).
GitLab Pages is tightly coupled with NFS and in order to unblock Kubernetes
GitLab Pages is tightly coupled with NFS and to unblock Kubernetes
migration a significant change to GitLab Pages' architecture is required. This
is ongoing work that we started more than a year ago. This blueprint
might be useful to understand why it is important, and what is the roadmap.

View File

@ -50,7 +50,7 @@ codebase without clear boundaries results in a number of problems and inefficien
we usually need to run a whole test suite to confidently know which parts are affected. This to
some extent can be improved by building a heuristic to aid this process, but it is prone to errors and hard
to keep accurate at all times
- All components need to be loaded at all times in order to run only parts of the application
- All components need to be loaded at all times to run only parts of the application
- Increased resource usage, as we load parts of the application that are rarely used in a given context
- The high memory usage results in slowing the whole application as it increases GC cycles duration
creating significantly longer latency for processing requests or worse cache usage of CPUs
@ -208,7 +208,7 @@ graph LR
### Application Layers on GitLab.com
Due to its scale, GitLab.com requires much more attention to run. This is needed in order to better manage resources
Due to its scale, GitLab.com requires much more attention to run. This is needed to better manage resources
and provide SLAs for different functional parts. The chart below provides a simplistic view of GitLab.com application layers.
It does not include all components, like Object Storage nor Gitaly nodes, but shows the GitLab Rails dependencies between
different components and how they are configured on GitLab.com today:
@ -543,7 +543,7 @@ Controllers, Serializers, some presenters and some of the Grape:Entities are als
Potential challenges with moving Controllers:
- We needed to extend `Gitlab::Patch::DrawRoute` in order to support `engines/web_engine/config/routes` and `engines/web_engine/ee/config/routes` in case when `web_engine` is loaded. Here is potential [solution](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/53720#note_506957398).
- We needed to extend `Gitlab::Patch::DrawRoute` to support `engines/web_engine/config/routes` and `engines/web_engine/ee/config/routes` in case when `web_engine` is loaded. Here is potential [solution](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/53720#note_506957398).
- `Gitlab::Routing.url_helpers` paths are used in models and services, that could be used by Sidekiq (for example `Gitlab::Routing.url_helpers.project_pipelines_path` is used by [ExpirePipelineCacheService](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/ci/expire_pipeline_cache_service.rb#L20) in [ExpirePipelineCacheWorker](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/workers/expire_pipeline_cache_worker.rb#L18)))
### Packwerk

View File

@ -84,7 +84,7 @@ The short-term focus is on testing regular migrations (typically schema changes)
In order to secure this process and meet compliance goals, the runner environment is treated as a *production* environment and similarly locked down, monitored and audited. Only Database Maintainers have access to the CI pipeline and its job output. Everyone else can only see the results and statistics posted back on the merge request.
We implement a secured CI pipeline on <https://ops.gitlab.net> that adds the execution steps outlined above. The goal is to secure this pipeline in order to solve the following problem:
We implement a secured CI pipeline on <https://ops.gitlab.net> that adds the execution steps outlined above. The goal is to secure this pipeline to solve the following problem:
Make sure we strongly protect production data, even though we allow everyone (GitLab team/developers) to execute arbitrary code on the thin-clone which contains production data.

View File

@ -261,7 +261,7 @@ abstraction.
1. Build an abstraction that serves our community well but allows us to ship it quickly.
1. Invest in a flexible solution, avoid one-way-door decisions, foster iteration.
1. When in doubts err on the side of making things more simple for the wider community.
1. Limit coupling between concerns in order to make the system more simple and extensible.
1. Limit coupling between concerns to make the system more simple and extensible.
1. Concerns should live on one side of the plug or the other--not both, which
duplicates effort and increases coupling.

View File

@ -74,10 +74,10 @@ Task is a special Work Item type. Tasks can be added to issues as child items an
## Motivation
Work Items main goal is to enhance the planning toolset in order to become the most popular collaboration tool for knowledge workers in any industry.
Work Items main goal is to enhance the planning toolset to become the most popular collaboration tool for knowledge workers in any industry.
- Puts all like-items (issues, incidents, epics, test cases etc.) on a standard platform in order to simplify maintenance and increase consistency in experience
- Enables first-class support of common planning concepts in order to lower complexity and allow users to plan without learning GitLab-specific nuances.
- Puts all like-items (issues, incidents, epics, test cases etc.) on a standard platform to simplify maintenance and increase consistency in experience
- Enables first-class support of common planning concepts to lower complexity and allow users to plan without learning GitLab-specific nuances.
## Goals

View File

@ -12,7 +12,7 @@ environment (where the GitLab Runner runs).
Use SSH keys when:
1. You want to checkout internal submodules
1. You want to check out internal submodules
1. You want to download private packages using your package manager (for example, Bundler)
1. You want to deploy your application to your own server, or, for example, Heroku
1. You want to execute SSH commands from the build environment to a remote server

View File

@ -436,7 +436,7 @@ By default, fields add `1` to a query's complexity score. This can be overridden
[providing a custom `complexity`](https://graphql-ruby.org/queries/complexity_and_depth.html) value for a field.
Developers should specify higher complexity for fields that cause more _work_ to be performed
by the server in order to return data. Fields that represent data that can be returned
by the server to return data. Fields that represent data that can be returned
with little-to-no _work_, for example in most cases; `id` or `title`, can be given a complexity of `0`.
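For illustration (the type and fields below are hypothetical, not part of the GitLab schema), a field can declare its cost explicitly:

```ruby
module Types
  # Hypothetical type used only to illustrate the `complexity` option.
  class ExampleType < BaseObject
    graphql_name 'Example'

    # Little to no work: the value comes straight from the already-loaded object.
    field :title, GraphQL::Types::String, null: true, complexity: 0

    # More expensive: resolving this field performs additional queries.
    field :related_items, [GraphQL::Types::String], null: true, complexity: 5
  end
end
```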
### `calls_gitaly`

View File

@ -14,7 +14,7 @@ label, but you are free to contribute to any issue you want.
If an issue is marked for the current milestone at any time, even
when you are working on it, a GitLab Inc. team member may take over the merge request
in order to ensure the work is finished before the release date.
to ensure the work is finished before the release date.
If you want to add a new feature that is not labeled, it is best to first create
an issue (if there isn't one already) and leave a comment asking for it

View File

@ -183,7 +183,7 @@ To fix this issue:
When creating [node patterns](https://docs.rubocop.org/rubocop-ast/node_pattern.html) to match
Ruby's AST, you can use [`scripts/rubocop-parse`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/rubocop-parse)
to display the AST of a Ruby expression, in order to help you create the matcher.
to display the AST of a Ruby expression, to help you create the matcher.
See also [!97024](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/97024).
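For instance (a hypothetical matcher, not an existing cop), once the script shows that `Rails.logger.debug('x')` parses to `(send (send (const nil :Rails) :logger) :debug (str "x"))`, the node pattern follows directly:

```ruby
# Hypothetical cop illustrating def_node_matcher with a node pattern.
class AvoidRailsLoggerDebug < RuboCop::Cop::Base
  MSG = 'Do not call Rails.logger.debug directly.'

  # Matches `Rails.logger.debug(...)` and `::Rails.logger.debug(...)`.
  def_node_matcher :rails_logger_debug?, <<~PATTERN
    (send (send (const {nil? cbase} :Rails) :logger) :debug ...)
  PATTERN

  def on_send(node)
    add_offense(node) if rails_logger_debug?(node)
  end
end
```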
### Resolving RuboCop exceptions

View File

@ -1052,6 +1052,6 @@ Performance comparison for the `gitlab-org` group:
| Optimized `IN` query | 9783 | 450ms | 22ms |
NOTE:
Before taking measurements, the group lookup query was executed separately in order to make
Before taking measurements, the group lookup query was executed separately to make
the group data available in the buffer cache. Since it's a frequently called query, it
hits many shared buffers during the query execution in the production environment.

View File

@ -196,7 +196,7 @@ value is "excluded". The query looks at the index to get the location of the fiv
rows on the disk and read the rows from the table. The returned array is processed in Ruby.
The first iteration is done. For the next iteration, the last `id` value is reused from the
previous iteration in order to find out the next end `id` value.
previous iteration to find out the next end `id` value.
```sql
SELECT "users"."id" FROM "users" WHERE "users"."id" >= 302 ORDER BY "users"."id" ASC LIMIT 1 OFFSET 5

View File

@ -719,7 +719,7 @@ and through its [web interface](https://console.postgres.ai/gitlab/joe-instances
With Joe Bot you can execute DDL statements (like creating indexes, tables, and columns) and get query plans for `SELECT`, `UPDATE`, and `DELETE` statements.
For example, in order to test new index on a column that is not existing on production yet, you can do the following:
For example, to test new index on a column that is not existing on production yet, you can do the following:
Create the column:

View File

@ -148,7 +148,7 @@ This limit is hardcoded and only applied on GitLab.
Diff Viewers, which can be found on `models/diff_viewer/*` are classes used to map metadata about each type of Diff File. It has information
whether it's a binary, which partial should be used to render it or which File extensions this class accounts for.
`DiffViewer::Base` validates _blobs_ (old and new versions) content, extension and file type in order to check if it can be rendered.
`DiffViewer::Base` validates _blobs_ (old and new versions) content, extension and file type to check if it can be rendered.
## Merge request diffs against the `HEAD` of the target branch
@ -169,7 +169,7 @@ In order to display an up-to-date diff, in GitLab 12.9 we
[introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/27008) merge request
diffs compared against `HEAD` of the target branch: the
target branch is artificially merged into the source branch, then the resulting
merge ref is compared to the source branch in order to calculate an accurate
merge ref is compared to the source branch to calculate an accurate
diff.
Until we complete the epics ["use merge refs for diffs"](https://gitlab.com/groups/gitlab-org/-/epics/854)

View File

@ -64,7 +64,7 @@ Please see the `sha_tokenizer` explanation later below for an example.
Used when indexing a blob's filename and content. Uses the `whitespace` tokenizer and the filters: [`code`](#code), `lowercase`, and `asciifolding`
The `whitespace` tokenizer was selected in order to have more control over how tokens are split. For example the string `Foo::bar(4)` needs to generate tokens like `Foo` and `bar(4)` in order to be properly searched.
The `whitespace` tokenizer was selected to have more control over how tokens are split. For example the string `Foo::bar(4)` needs to generate tokens like `Foo` and `bar(4)` to be properly searched.
Please see the `code` filter for an explanation on how tokens are split.
@ -94,7 +94,7 @@ Example:
#### `path_tokenizer`
This is a custom tokenizer that uses the [`path_hierarchy` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-pathhierarchy-tokenizer.html) with `reverse: true` in order to allow searches to find paths no matter how much or how little of the path is given as input.
This is a custom tokenizer that uses the [`path_hierarchy` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-pathhierarchy-tokenizer.html) with `reverse: true` to allow searches to find paths no matter how much or how little of the path is given as input.
Example:

View File

@ -26,7 +26,7 @@ Use your best judgment when to use it and contribute new points through merge re
- [ ] Check the current set weight of the issue, does it fit your estimate?
- [ ] Are all [departments](https://about.gitlab.com/handbook/engineering/#engineering-teams) that are needed from your perspective already involved in the issue? (For example is UX missing?)
- [ ] Is the specification complete? Are you missing decisions? How about error handling/defaults/edge cases? Take your time to understand the needed implementation and go through its flow.
- [ ] Are all necessary UX specifications available that you will need in order to implement? Are there new UX components/patterns in the designs? Then contact the UI component team early on. How should error messages or validation be handled?
- [ ] Are all necessary UX specifications available that you will need to implement? Are there new UX components/patterns in the designs? Then contact the UI component team early on. How should error messages or validation be handled?
- [ ] **Library usage** Use Vuex as soon as you have even a medium state to manage, use Vue router if you need to have different views internally and want to link from the outside. Check what libraries we already have for which occasions.
- [ ] **Plan your implementation:**
- [ ] **Architecture plan:** Create a plan aligned with GitLab's architecture, how you are going to do the implementation, for example Vue application setup and its components (through [onion skinning](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/35873#note_39994091)), Store structure and data flow, which existing Vue components can you reuse. It's a good idea to go through your plan with another engineer to refine it.

View File

@ -1318,7 +1318,7 @@ automatically find and index the schema.
#### Testing Apollo components
If we use `ApolloQuery` or `ApolloMutation` in our components, in order to test their functionality we need to add a stub first:
If we use `ApolloQuery` or `ApolloMutation` in our components, to test their functionality we need to add a stub first:
```javascript
import { ApolloMutation } from 'vue-apollo';

View File

@ -52,7 +52,7 @@ When the feature implementation is delivered among multiple merge requests:
1. When the feature is ready to be announced, create a merge request that adds
documentation about the feature, including [documentation for the feature flag itself](../documentation/feature_flags.md),
and a [changelog entry](#changelog). In the same merge request either flip the feature flag to
be **on by default** or remove it entirely in order to enable the new behavior.
be **on by default** or remove it entirely to enable the new behavior.
One might be tempted to think that feature flags will delay the release of a
feature by at least one month (= one release). This is not the case. A feature

View File

@ -67,7 +67,7 @@ listed here that also do not work properly in FIPS mode:
- [Static Application Security Testing (SAST)](../user/application_security/sast/index.md)
supports a reduced set of [analyzers](../user/application_security/sast/index.md#fips-enabled-images)
when operating in FIPS-compliant mode.
- Advanced Search is currently not included in FIPS mode. It must not be enabled in order to be FIPS-compliant.
- Advanced Search is currently not included in FIPS mode. It must not be enabled to be FIPS-compliant.
- [Gravatar or Libravatar-based profile images](../administration/libravatar.md) are not FIPS-compliant.
Additionally, these package repositories are disabled in FIPS mode:

View File

@ -199,7 +199,7 @@ GitHub has a rate limit of 5,000 API calls per hour. The number of requests
necessary to import a project is largely dominated by the number of unique users
involved in a project (for example, issue authors). Other data such as issue pages
and comments typically only requires a few dozen requests to import. This is
because we need the Email address of users in order to map them to GitLab users.
because we need the Email address of users to map them to GitLab users.
We handle this by doing the following:

View File

@ -146,7 +146,7 @@ def publisher
::Gitlab::Graphql::Loaders::BatchModelLoader.new(::Publisher, object.publisher_id).find
end
# Here we need the publisher in order to generate the catalog URL
# Here we need the publisher to generate the catalog URL
def catalog_url
::Gitlab::Graphql::Lazy.with_value(publisher) do |p|
UrlHelpers.book_catalog_url(publisher, object.isbn)

View File

@ -13,7 +13,7 @@ For a general introduction to the history of image scaling at GitLab, you might
## Why image scaling?
Since version 13.6, GitLab scales down images on demand in order to reduce the page data footprint.
Since version 13.6, GitLab scales down images on demand to reduce the page data footprint.
This both reduces the amount of data "on the wire", but also helps with rendering performance,
since the browser has less work to do.

View File

@ -452,7 +452,7 @@ The `scan.primary_identifiers` field is an optional field containing an array of
This is an exhaustive list of all rulesets for which the analyzer performed the scan.
Even when the [`Vulnerabilities`](#vulnerabilities) array for a given scan may be empty, this optional field
should contain the complete list of potential identifiers in order to inform the Rails application of which
should contain the complete list of potential identifiers to inform the Rails application of which
rules were executed.
When populated, the Rails application will automatically resolve previously detected vulnerabilities as no

View File

@ -665,8 +665,7 @@ Example response:
## Subscriptions
The subscriptions endpoint is used by [CustomersDot](https://gitlab.com/gitlab-org/customers-gitlab-com) (`customers.gitlab.com`)
in order to apply subscriptions including trials, and add-on purchases, for personal namespaces or top-level groups within GitLab.com.
The subscriptions endpoint is used by [CustomersDot](https://gitlab.com/gitlab-org/customers-gitlab-com) (`customers.gitlab.com`) to apply subscriptions including trials, and add-on purchases, for personal namespaces or top-level groups within GitLab.com.
### Creating a subscription

View File

@ -18,7 +18,7 @@ Some gems may not include their license information in their `gemspec` file, and
### License Finder commands
There are a few basic commands License Finder provides that you need in order to manage license detection.
There are a few basic commands License Finder provides that you need to manage license detection.
To verify that the checks are passing, and/or to see what dependencies are causing the checks to fail:

View File

@ -281,7 +281,7 @@ be clearly mentioned in the merge request description.
## Batch process
**Summary:** Iterating a single process to external services (for example, PostgreSQL, Redis, Object Storage)
should be executed in a **batch-style** in order to reduce connection overheads.
should be executed in a **batch-style** to reduce connection overheads.
For fetching rows from various tables in a batch-style, please see [Eager Loading](#eager-loading) section.
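As a sketch of that batch style (the model, scope, and Redis key below are illustrative, not taken from the codebase):

```ruby
# One external call per batch of rows instead of one call per row.
Project.where(pending_delete: true).in_batches(of: 100) do |batch|
  ids = batch.pluck(:id)

  # A single Redis round-trip handles the whole batch at once.
  Gitlab::Redis::SharedState.with { |redis| redis.srem('pending_delete_projects', ids) }
end
```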
@ -488,7 +488,7 @@ We can consider the following types of storages:
- **Object-based persistent storage** (long term storage) this type of storage uses external
services like [AWS S3](https://en.wikipedia.org/wiki/Amazon_S3). The Object Storage
can be treated as infinitely scalable and redundant. Accessing this storage usually requires
downloading the file in order to manipulate it. The Object Storage can be considered as an ultimate
downloading the file to manipulate it. The Object Storage can be considered as an ultimate
solution, as by definition it can be assumed that it can handle unlimited concurrent uploads
and downloads of files. This is also the ultimate solution required to ensure that the application can
run in containerized deployments (Kubernetes) with ease.

View File

@ -1281,7 +1281,7 @@ in a previous migration.
### Example: Add a column `my_column` to the users table
It is important not to leave out the `User.reset_column_information` command, in order to ensure that the old schema is dropped from the cache and ActiveRecord loads the updated schema information.
It is important not to leave out the `User.reset_column_information` command, to ensure that the old schema is dropped from the cache and ActiveRecord loads the updated schema information.
```ruby
class AddAndSeedMyColumn < Gitlab::Database::Migration[2.0]

View File

@ -925,7 +925,7 @@ SOME_CONSTANT = 'bar'
## How to seed a database with millions of rows
You might want millions of project rows in your local database, for example,
in order to compare relative query performance, or to reproduce a bug. You could
to compare relative query performance, or to reproduce a bug. You could
do this by hand with SQL commands or using [Mass Inserting Rails Models](mass_insert.md) functionality.
Assuming you are working with ActiveRecord models, you might also find these links helpful:

View File

@ -394,8 +394,7 @@ In general, pipelines for an MR fall into one of the following types (from short
A "pipeline type" is an abstract term that mostly describes the "critical path" (for example, the chain of jobs for which the sum
of individual duration equals the pipeline's duration).
We use these "pipeline types" in [metrics dashboards](https://app.periscopedata.com/app/gitlab/858266/GitLab-Pipeline-Durations)
in order to detect what types and jobs need to be optimized first.
We use these "pipeline types" in [metrics dashboards](https://app.periscopedata.com/app/gitlab/858266/GitLab-Pipeline-Durations) to detect what types and jobs need to be optimized first.
An MR that touches multiple areas would be associated with the longest type applicable. For instance, an MR that touches backend
and frontend would fall into the "Frontend" pipeline type since this type takes longer to finish than the "Backend" pipeline type.
@ -751,7 +750,7 @@ This works well for the following reasons:
### Artifacts strategy
We limit the artifacts that are saved and retrieved by jobs to the minimum in order to reduce the upload/download time and costs, as well as the artifacts storage.
We limit the artifacts that are saved and retrieved by jobs to the minimum to reduce the upload/download time and costs, as well as the artifacts storage.
### Components caching

View File

@ -286,7 +286,7 @@ The following table lists these variables along with their default values.
([Source](https://github.com/ruby/ruby/blob/45b29754cfba8435bc4980a87cd0d32c648f8a2e/gc.c#L254-L308))
GitLab may decide to change these settings in order to speed up application performance, lower memory requirements, or both.
GitLab may decide to change these settings to speed up application performance, lower memory requirements, or both.
You can see how each of these settings affect GC performance, memory use and application start-up time for an idle instance of
GitLab by running the `scripts/perf/gc/collect_gc_stats.rb` script. It will output GC stats and general timing data to standard

View File

@ -50,7 +50,7 @@ Before GitLab can implement the project template, you must [create a merge reque
1. [Export the project](../user/project/settings/import_export.md#export-a-project-and-its-data)
and save the file as `<name>.tar.gz`, where `<name>` is the short name of the project.
Move this file to the root directory of `gitlab-org/gitlab`.
1. In `gitlab-org/gitlab`, create and checkout a new branch.
1. In `gitlab-org/gitlab`, create and check out a new branch.
1. Edit the following files to include the project template:
- For **non-Enterprise** project templates:
- In `lib/gitlab/project_template.rb`, add details about the template

View File

@ -19,7 +19,7 @@ out using the instructions below.
## Reuse an existing WebSocket connection
Features reusing an existing connection incur minimal risk. Feature flag rollout
is recommended in order to give more control to self-hosting customers. However,
is recommended to give more control to self-hosting customers. However,
it is not necessary to roll out in percentages, or to estimate new connections for
GitLab.com.

View File

@ -18,14 +18,14 @@ Pinning tests help you ensure that you don't unintentionally change the output o
1. For each possible input, identify the significant possible values.
1. Create a test to save a full detailed snapshot for each helpful combination values per input. This should guarantee that we have "pinned down" the current behavior. The snapshot could be literally a screenshot, a dump of HTML, or even an ordered list of debugging statements.
1. Run all the pinning tests against the code, before you start refactoring (Oracle)
1. Perform the refactor (or checkout the commit with the work done)
1. Perform the refactor (or check out the commit with the work done)
1. Run again all the pinning test against the post refactor code (Pin)
1. Compare the Oracle with the Pin. If the Pin is different, you know the refactoring doesn't preserve existing behavior.
1. Repeat the previous three steps as necessary until the refactoring is complete.
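A minimal sketch of what one such pinning spec could look like (the class, input, and fixture path are all hypothetical):

```ruby
# Hypothetical pinning spec: captures today's output so a refactor can be diffed against it.
RSpec.describe FunkyFoo do
  it 'produces the same HTML as the snapshot taken before the refactor' do
    output = described_class.new(mode: :compact).render

    # The "Oracle": a snapshot saved before the refactoring started.
    expect(output).to eq(File.read('spec/fixtures/pins/funky_foo_compact.html'))
  end
end
```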
### Example commit history
Leaving in the commits for adding and removing pins helps others checkout and verify the result of the test.
Leaving in the commits for adding and removing pins helps others check out and verify the result of the test.
```shell
AAAAAA Add pinning tests to funky_foo

View File

@ -53,7 +53,7 @@ Each time you implement a new feature/endpoint, whether it is at UI, API or Grap
Be careful to **also test [visibility levels](https://gitlab.com/gitlab-org/gitlab-foss/-/blob/master/doc/development/permissions.md#feature-specific-permissions)** and not only project access rights.
The HTTP status code returned when an authorization check fails should generally be `404 Not Found` in order to avoid revealing information
The HTTP status code returned when an authorization check fails should generally be `404 Not Found` to avoid revealing information
about whether or not the requested resource exists. `403 Forbidden` may be appropriate if you need to display a specific message to the user
about why they cannot access the resource. If you are displaying a generic message such as "access denied", consider returning `404 Not Found` instead.
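A sketch of that guideline in a controller action (the action, finder, and policy names are illustrative):

```ruby
# Illustrative only; not an existing GitLab controller action.
def show
  issue = project.issues.find_by(iid: params[:iid])

  # Return 404, not 403, so the response does not reveal whether the issue exists.
  return render_404 unless issue && can?(current_user, :read_issue, issue)

  render json: issue
end
```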

View File

@ -6,7 +6,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# GitLab Developers Guide to service measurement
You can enable service measurement in order to debug any slow service's execution time, number of SQL calls, garbage collection stats, memory usage, etc.
You can enable service measurement to debug any slow service's execution time, number of SQL calls, garbage collection stats, memory usage, etc.
## Measuring module
@ -43,7 +43,7 @@ DummyService.prepend(Measurable)
When you are prepending a module from the `EE` namespace with EE features, you need to prepend `Measurable` after prepending the `EE` module.
This way, `Measurable` is at the bottom of the ancestor chain, in order to measure execution of `EE` features as well:
This way, `Measurable` is at the bottom of the ancestor chain, to measure execution of `EE` features as well:
```ruby
class DummyService

View File

@ -120,7 +120,7 @@ To remove a metric:
Do not remove the metric's YAML definition altogether. Some self-managed
instances might not immediately update to the latest version of GitLab, and
therefore continue to report the removed metric. The Product Intelligence team
requires a record of all removed metrics in order to identify and filter them.
requires a record of all removed metrics to identify and filter them.
For example please take a look at this [merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/60149/diffs#b01f429a54843feb22265100c0e4fec1b7da1240_10_10).

View File

@ -60,7 +60,7 @@ that branch, but be told in the UI that the branch does not exist. We deem these
jobs to be `urgency :high`.
Extra effort is made to ensure that these jobs are started within a very short
period of time after being scheduled. However, in order to ensure throughput,
period of time after being scheduled. However, to ensure throughput,
these jobs also have very strict execution duration requirements:
1. The median job execution time should be less than 1 second.
@ -117,7 +117,7 @@ Most background jobs in the GitLab application communicate with other GitLab
services. For example, PostgreSQL, Redis, Gitaly, and Object Storage. These are considered
to be "internal" dependencies for a job.
However, some jobs are dependent on external services in order to complete
However, some jobs are dependent on external services to complete
successfully. Some examples include:
1. Jobs which call web-hooks configured by a user.
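A sketch of such a worker (the class name is hypothetical, and this assumes the `urgency` and `worker_has_external_dependencies!` attributes provided by `ApplicationWorker`):

```ruby
# Hypothetical worker illustrating a job with an external dependency.
class WebHookDeliveryWorker
  include ApplicationWorker

  # Latency depends on a user-configured endpoint, so keep urgency low and
  # mark the external dependency explicitly.
  urgency :low
  worker_has_external_dependencies!

  def perform(hook_id)
    # Delivers the payload to the externally hosted webhook endpoint.
  end
end
```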

View File

Enrichers are not scaling well for the volume of events we receive.
See the [dashboard](https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#dashboards:name=SnowPlow).
Could we get assistance in order to fix the delay?
Could we get assistance to fix the delay?
Thank you!
```

View File

@ -303,8 +303,7 @@ To receive emails from GitLab you have to configure the
have an SMTP server installed. You may also be interested in
[enabling HTTPS](https://docs.gitlab.com/omnibus/settings/ssl.html).
After you make all the changes you want, you will need to restart the container
in order to reconfigure GitLab:
After you make all the changes you want, you will need to restart the container to reconfigure GitLab:
```shell
sudo docker restart gitlab

View File

@ -742,7 +742,7 @@ Make sure to prepare for this task by having a
### Deleted documents
Whenever a change or deletion is made to an indexed GitLab object (a merge request description is changed, a file is deleted from the default branch in a repository, a project is deleted, etc), a document in the index is deleted. However, since these are "soft" deletes, the overall number of "deleted documents", and therefore wasted space, increases. Elasticsearch does intelligent merging of segments in order to remove these deleted documents. However, depending on the amount and type of activity in your GitLab installation, it's possible to see as much as 50% wasted space in the index.
Whenever a change or deletion is made to an indexed GitLab object (a merge request description is changed, a file is deleted from the default branch in a repository, a project is deleted, etc), a document in the index is deleted. However, since these are "soft" deletes, the overall number of "deleted documents", and therefore wasted space, increases. Elasticsearch does intelligent merging of segments to remove these deleted documents. However, depending on the amount and type of activity in your GitLab installation, it's possible to see as much as 50% wasted space in the index.
In general, we recommend letting Elasticsearch merge and reclaim space automatically, with the default settings. From [Lucene's Handling of Deleted Documents](https://www.elastic.co/blog/lucenes-handling-of-deleted-documents "Lucene's Handling of Deleted Documents"), _"Overall, besides perhaps decreasing the maximum segment size, it is best to leave Lucene's defaults as-is and not fret too much about when deletes are reclaimed."_

View File

@ -295,7 +295,7 @@ this can happen in GitLab CI/CD jobs that [authenticate with the CI/CD job token
1. [Reconfigure GitLab](../administration/restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
After this change, Git remote URLs have to be updated to
`https://gitlab.example.com:8443/mygroup/myproject.git` in order to use
`https://gitlab.example.com:8443/mygroup/myproject.git` to use
Kerberos ticket-based authentication.
## Upgrading from password-based to ticket-based Kerberos sign-ins

View File

@ -11,7 +11,7 @@ the ability to steal a user's IP address by referencing images in issues and com
For example, adding `![Example image](http://example.com/example.png)` to
an issue description causes the image to be loaded from the external
server in order to be displayed. However, this also allows the external server
server to be displayed. However, this also allows the external server
to log the IP address of the user.
One way to mitigate this is by proxying any external images to a server you

View File

@ -84,8 +84,7 @@ which brings you to the new project page so you can review the change.
## Libravatar
[Libravatar](https://www.libravatar.org) is supported by GitLab for avatar images, but you must
[manually enable Libravatar support on the GitLab instance](../../administration/libravatar.md)
in order to use the service.
[manually enable Libravatar support on the GitLab instance](../../administration/libravatar.md) to use the service.
<!-- ## Troubleshooting

View File

@ -1385,7 +1385,7 @@ variables:
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/334578) in GitLab 14.8.
By default the output of the overrides command is hidden. If the overrides command returns a non zero exit code, the command is displayed as part of your job output. Optionally, you can set the variable `FUZZAPI_OVERRIDES_CMD_VERBOSE` to any value in order to display overrides command output as it is generated. This is useful when testing your overrides script, but should be disabled afterwards as it slows down testing.
By default the output of the overrides command is hidden. If the overrides command returns a non zero exit code, the command is displayed as part of your job output. Optionally, you can set the variable `FUZZAPI_OVERRIDES_CMD_VERBOSE` to any value to display overrides command output as it is generated. This is useful when testing your overrides script, but should be disabled afterwards as it slows down testing.
It is also possible to write messages from your script to a log file that is collected when the job completes or fails. The log file must be created in a specific location and follow a naming convention.

View File

@ -1333,7 +1333,7 @@ variables:
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/334578) in GitLab 14.8.
By default the output of the overrides command is hidden. If the overrides command returns a non zero exit code, the command is displayed as part of your job output. Optionally, you can set the variable `DAST_API_OVERRIDES_CMD_VERBOSE` to any value in order to display overrides command output as it is generated. This is useful when testing your overrides script, but should be disabled afterwards as it slows down testing.
By default the output of the overrides command is hidden. If the overrides command returns a non zero exit code, the command is displayed as part of your job output. Optionally, you can set the variable `DAST_API_OVERRIDES_CMD_VERBOSE` to any value to display overrides command output as it is generated. This is useful when testing your overrides script, but should be disabled afterwards as it slows down testing.
It is also possible to write messages from your script to a log file that is collected when the job completes or fails. The log file must be created in a specific location and follow a naming convention.

View File

@ -166,7 +166,7 @@ Configure a passthrough these parameters:
| `type` | One of `file`, `raw`, `git` or `url`. |
| `target` | The target file that contains the data written by the passthrough evaluation. If no value is provided, a random target file is generated. |
| `mode` | `overwrite`: if `target` exists, overwrites the file; `append`: append to file instead. The default is `overwrite`. |
| `ref` | This option only applies to the `git` passthrough type and contains the name of the branch or the SHA to be used. |
| `ref` | This option only applies to the `git` passthrough type and contains the name of the branch or the SHA to be used. When using a branch name, specify it in the form `refs/heads/<branch>`, not `refs/remotes/<remote_name>/<branch>`. |
| `subdir` | This option only applies to the `git` passthrough type and can be used to only consider a certain subdirectory of the source Git repository. |
| `value` | For the `file` `url` and `git` types, `value` defines the source location of the file/Git repository; for the `raw` type, `value` carries the raw content to be passed through. |
| `validator` | Can be used to explicitly invoke validators (`xml`, `yaml`, `json`, `toml`) on the target files after the application of a passthrough. Per default, no validator is set. |
@ -237,7 +237,7 @@ target directory with a total `timeout` of 60 seconds.
Several passthrough types generate a configuration for the target analyzer:
- Two `git` passthrough sections pull the head of branch
`refs/remotes/origin/test` from the `myrules` Git repository, and revision
`refs/heads/test` from the `myrules` Git repository, and revision
`97f7686` from the `sast-rules` Git repository. From the `sast-rules` Git
repository, only data from the `go` subdirectory is considered.
- The `sast-rules` entry has a higher precedence because it appears later in
@ -262,7 +262,7 @@ Afterwards, Semgrep is invoked with the final configuration located under
[[semgrep.passthrough]]
type = "git"
value = "https://gitlab.com/user/myrules.git"
ref = "refs/remotes/origin/test"
ref = "refs/heads/test"
[[semgrep.passthrough]]
type = "git"
@ -309,7 +309,7 @@ It does not explicitly store credentials in the configuration file. To reduce th
[[semgrep.passthrough]]
type = "git"
value = "$GITURL"
ref = "refs/remotes/origin/main"
ref = "refs/heads/main"
```
### Configure the append mode for passthroughs

View File

@ -357,7 +357,7 @@ Support for more languages and analyzers is tracked in [this epic](https://gitla
### Using CI/CD variables to pass credentials for private repositories
Some analyzers require downloading the project's dependencies in order to
Some analyzers require downloading the project's dependencies to
perform the analysis. In turn, such dependencies may live in private Git
repositories and thus require credentials like username and password to download them.
Depending on the analyzer, such credentials can be provided to

View File

@ -32,7 +32,7 @@ If required, you can find [a glossary of common terms](../../../integration/saml
1. Configure the SAML response to include a [NameID](#nameid) that uniquely identifies each user.
1. Configure the required [user attributes](#user-attributes), ensuring you include the user's email address.
1. While the default is enabled for most SAML providers, please ensure the app is set to have service provider
initiated calls in order to link existing GitLab accounts.
initiated calls to link existing GitLab accounts.
1. Once the identity provider is set up, move on to [configuring GitLab](#configure-gitlab).
![Issuer and callback for configuring SAML identity provider with GitLab.com](img/group_saml_configuration_information.png)

View File

@ -191,7 +191,7 @@ For self-managed, administrators can use the [users API](../../../api/users.md)
When using SAML for groups, group members of a role with the appropriate permissions can make use of the [members API](../../../api/members.md) to view group SAML identity information for members of the group.
This can then be compared to the NameID being sent by the identity provider by decoding the message with a [SAML debugging tool](#saml-debugging-tools). We require that these match in order to identify users.
This can then be compared to the NameID being sent by the identity provider by decoding the message with a [SAML debugging tool](#saml-debugging-tools). We require that these match to identify users.
### Stuck in a login "loop"

View File

@ -88,7 +88,7 @@ server:
After you have successfully installed Vault, you must
[initialize the Vault](https://learn.hashicorp.com/tutorials/vault/getting-started-deploy#initializing-the-vault)
and obtain the initial root token. You need access to your Kubernetes cluster that
Vault has been deployed into in order to do this. To initialize the Vault, get a
Vault has been deployed into to do this. To initialize the Vault, get a
shell to one of the Vault pods running inside Kubernetes (typically this is done
by using the `kubectl` command line tool). After you have a shell into the pod,
run the `vault operator init` command:

View File

@ -130,7 +130,7 @@ Once all of the above are set up and the pipeline has run at least once,
navigate to the environments page under **Deployments > Environments**.
Deploy boards are visible by default. You can explicitly select
the triangle next to their respective environment name in order to hide them.
the triangle next to their respective environment name to hide them.
### Example manifest file

View File

@ -35,7 +35,7 @@ integration services must be enabled.
## Configuring Prometheus to monitor for Kubernetes metrics
Prometheus needs to be deployed into the cluster and configured properly in order to gather Kubernetes metrics. GitLab supports two methods for doing so:
Prometheus needs to be deployed into the cluster and configured properly to gather Kubernetes metrics. GitLab supports two methods for doing so:
- GitLab [integrates with Kubernetes](../../clusters/index.md), and can [query a Prometheus in a connected cluster](../../../clusters/integrations.md#prometheus-cluster-integration). The in-cluster Prometheus can be configured to automatically collect application metrics from your cluster.
- To configure your own Prometheus server, you can follow the [Prometheus documentation](https://prometheus.io/docs/introduction/overview/).

View File

@ -254,7 +254,7 @@ after merging does not retarget open merge requests. This improvement is
For a software developer working in a team:
1. You checkout a new branch, and submit your changes through a merge request.
1. You check out a new branch, and submit your changes through a merge request.
1. You gather feedback from your team.
1. You work on the implementation optimizing code with [Code Quality reports](../../../ci/testing/code_quality.md).
1. You verify your changes with [Unit test reports](../../../ci/testing/unit_test_reports.md) in GitLab CI/CD.
@ -269,7 +269,7 @@ For a software developer working in a team:
For a web developer writing a webpage for your company's website:
1. You checkout a new branch and submit a new page through a merge request.
1. You check out a new branch and submit a new page through a merge request.
1. You gather feedback from your reviewers.
1. You preview your changes with [Review Apps](../../../ci/review_apps/index.md).
1. You request your web designers for their implementation.

View File

@ -266,7 +266,7 @@ The merge request sidebar contains the branch reference for the source branch
used to contribute changes for this merge request.
To copy the branch reference into your clipboard, select the **Copy branch name** button
(**{copy-to-clipboard}**) in the right sidebar. Use it to checkout the branch locally
(**{copy-to-clipboard}**) in the right sidebar. Use it to check out the branch locally
from the command line by running `git checkout <branch-name>`.
### Checkout merge requests locally through the `head` ref

View File

@ -54,7 +54,7 @@ Set a merge request that looks ready to merge to
If you configured [Review Apps](https://about.gitlab.com/stages-devops-lifecycle/review-apps/) for your project,
you can preview the changes submitted to a feature branch through a merge request
on a per-branch basis. You don't need to checkout the branch, install, and preview locally.
on a per-branch basis. You don't need to check out the branch, install, and preview locally.
All your changes are available to preview by anyone with the Review Apps link.
With GitLab [Route Maps](../../../ci/review_apps/index.md#route-maps) set, the

View File

@ -299,8 +299,7 @@ To fix this, verify that the user is a member of the project.
### Cannot play media content on Safari
Safari requires the web server to support the [Range request header](https://developer.apple.com/library/archive/documentation/AppleApplications/Reference/SafariWebContent/CreatingVideoforSafarioniPhone/CreatingVideoforSafarioniPhone.html#//apple_ref/doc/uid/TP40006514-SW6)
in order to play your media content. For GitLab Pages to serve
Safari requires the web server to support the [Range request header](https://developer.apple.com/library/archive/documentation/AppleApplications/Reference/SafariWebContent/CreatingVideoforSafarioniPhone/CreatingVideoforSafarioniPhone.html#//apple_ref/doc/uid/TP40006514-SW6) to play your media content. For GitLab Pages to serve
HTTP Range requests, you should use the following two variables in your `.gitlab-ci.yml` file:
```yaml

View File

@ -31,6 +31,7 @@ module API
end
params do
use :pagination
optional :name, type: String, desc: 'Name for the badge'
end
get ":id/badges", urgency: :low do
source = find_source(source_type, params[:id])

View File

@ -27,8 +27,7 @@ module Gitlab
@extractor.issues.reject do |issue|
@extractor.project.forked_from?(issue.project) ||
!issue.project.autoclose_referenced_issues ||
!issue.project.issues_enabled?
!issue.project.autoclose_referenced_issues
end
end
end

View File

@ -0,0 +1,20 @@
# frozen_string_literal: true
module Gitlab
module Usage
module Metrics
module Instrumentations
class DistinctCountProjectsWithExpirationPolicyDisabledMetric < DatabaseMetric
operation :distinct_count, column: :project_id
start { Project.minimum(:id) }
finish { Project.maximum(:id) }
cache_start_and_finish_as :project_id
relation { ::ContainerExpirationPolicy.where(enabled: false) }
end
end
end
end
end
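# Sketched usage, assuming the standard Service Ping instrumentation flow
# (the exact invocation is not part of this commit):
#
#   Gitlab::Usage::Metrics::Instrumentations::DistinctCountProjectsWithExpirationPolicyDisabledMetric
#     .new(time_frame: 'all')
#     .value
#   # => distinct count of project_id over ContainerExpirationPolicy rows where enabled is false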

View File

@ -346,7 +346,6 @@ module Gitlab
start = minimum_id(Project)
finish = maximum_id(Project)
results[:projects_with_expiration_policy_disabled] = distinct_count(::ContainerExpirationPolicy.where(enabled: false), :project_id, start: start, finish: finish)
# rubocop: disable UsageData/LargeTable
base = ::ContainerExpirationPolicy.active
# rubocop: enable UsageData/LargeTable

View File

@ -20093,6 +20093,15 @@ msgstr ""
msgid "IdentityVerification|Something went wrong. Please try again."
msgstr ""
msgid "IdentityVerification|Step %{stepNumber}: Verify a payment method"
msgstr ""
msgid "IdentityVerification|Step %{stepNumber}: Verify email address"
msgstr ""
msgid "IdentityVerification|Step %{stepNumber}: Verify phone number"
msgstr ""
msgid "IdentityVerification|Step 1: Verify phone number"
msgstr ""

View File

@ -5,8 +5,8 @@
<modelVersion>4.0.0</modelVersion>
<repositories>
<repository>
<id><%= package_project.name %></id>
<url><%= gitlab_address_with_port %>/api/v4/groups/<%= package_project.group.id %>/-/packages/maven</url>
<id><%= client_project.name %></id>
<url><%= gitlab_address_with_port %>/api/v4/groups/<%= client_project.group.id %>/-/packages/maven</url>
</repository>
</repositories>
<dependencies>

View File

@ -0,0 +1,16 @@
<settings xmlns="http://maven.apache.org/SETTINGS/1.1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd">
<servers>
<server>
<id><%= client_project.name %></id>
<configuration>
<httpHeaders>
<property>
<name><%= maven_header_name %></name>
<value><%= token %></value>
</property>
</httpHeaders>
</configuration>
</server>
</servers>
</settings>

View File

@ -0,0 +1,9 @@
deploy-and-install:
image: maven:3.6-jdk-11
script:
- 'mvn deploy -s settings.xml'
- 'mvn install -s settings.xml'
only:
- "<%= package_project.default_branch %>"
tags:
- "runner-for-<%= package_project.name %>"

View File

@ -0,0 +1,22 @@
<project>
<groupId><%= group_id %></groupId>
<artifactId><%= artifact_id %></artifactId>
<version><%= package_version %></version>
<modelVersion>4.0.0</modelVersion>
<repositories>
<repository>
<id><%= package_project.name %></id>
<url><%= gitlab_address_with_port %>/api/v4/groups/<%= package_project.group.id %>/-/packages/maven</url>
</repository>
</repositories>
<distributionManagement>
<repository>
<id><%= package_project.name %></id>
<url><%= gitlab_address_with_port %>/api/v4/projects/<%= package_project.id %>/packages/maven</url>
</repository>
<snapshotRepository>
<id><%= package_project.name %></id>
<url><%= gitlab_address_with_port %>/api/v4/projects/<%= package_project.id %>/packages/maven</url>
</snapshotRepository>
</distributionManagement>
</project>

View File

@ -0,0 +1,16 @@
<settings xmlns="http://maven.apache.org/SETTINGS/1.1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd">
<servers>
<server>
<id><%= package_project.name %></id>
<configuration>
<httpHeaders>
<property>
<name><%= maven_header_name %></name>
<value><%= token %></value>
</property>
</httpHeaders>
</configuration>
</server>
</servers>
</settings>

View File

@ -48,16 +48,16 @@ module QA
it 'pushes and pulls a maven package', testcase: params[:testcase] do
Support::Retrier.retry_on_exception(max_attempts: 3, sleep_interval: 2) do
Resource::Repository::Commit.fabricate_via_api! do |commit|
maven_upload_package_yaml = ERB.new(read_fixture('package_managers/maven', 'maven_upload_package.yaml.erb')).result(binding)
package_pom_xml = ERB.new(read_fixture('package_managers/maven', 'package_pom.xml.erb')).result(binding)
settings_xml = ERB.new(read_fixture('package_managers/maven', 'settings.xml.erb')).result(binding)
gitlab_ci_yaml = ERB.new(read_fixture('package_managers/maven/group/producer', 'gitlab_ci.yaml.erb')).result(binding)
pom_xml = ERB.new(read_fixture('package_managers/maven/group/producer', 'pom.xml.erb')).result(binding)
settings_xml = ERB.new(read_fixture('package_managers/maven/group/producer', 'settings.xml.erb')).result(binding)
commit.project = package_project
commit.commit_message = 'Add files'
commit.add_files(
[
{ file_path: '.gitlab-ci.yml', content: maven_upload_package_yaml },
{ file_path: 'pom.xml', content: package_pom_xml },
{ file_path: '.gitlab-ci.yml', content: gitlab_ci_yaml },
{ file_path: 'pom.xml', content: pom_xml },
{ file_path: 'settings.xml', content: settings_xml }
])
end
@ -89,16 +89,16 @@ module QA
Support::Retrier.retry_on_exception(max_attempts: 3, sleep_interval: 2) do
Resource::Repository::Commit.fabricate_via_api! do |commit|
maven_install_package_yaml = ERB.new(read_fixture('package_managers/maven', 'maven_install_package.yaml.erb')).result(binding)
client_pom_xml = ERB.new(read_fixture('package_managers/maven', 'client_pom.xml.erb')).result(binding)
settings_xml = ERB.new(read_fixture('package_managers/maven', 'settings.xml.erb')).result(binding)
gitlab_ci_yaml = ERB.new(read_fixture('package_managers/maven/group/consumer', 'gitlab_ci.yaml.erb')).result(binding)
pom_xml = ERB.new(read_fixture('package_managers/maven/group/consumer', 'pom.xml.erb')).result(binding)
settings_xml = ERB.new(read_fixture('package_managers/maven/group/consumer', 'settings.xml.erb')).result(binding)
commit.project = client_project
commit.commit_message = 'Add files'
commit.add_files(
[
{ file_path: '.gitlab-ci.yml', content: maven_install_package_yaml },
{ file_path: 'pom.xml', content: client_pom_xml },
{ file_path: '.gitlab-ci.yml', content: gitlab_ci_yaml },
{ file_path: 'pom.xml', content: pom_xml },
{ file_path: 'settings.xml', content: settings_xml }
])
end
@ -127,117 +127,51 @@ module QA
end
context 'when disabled' do
where do
{
'using a personal access token' => {
authentication_token_type: :personal_access_token,
maven_header_name: 'Private-Token',
testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347581'
},
'using a project deploy token' => {
authentication_token_type: :project_deploy_token,
maven_header_name: 'Deploy-Token',
testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347584'
},
'using a ci job token' => {
authentication_token_type: :ci_job_token,
maven_header_name: 'Job-Token',
testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347578'
}
}
before do
Page::Group::Settings::PackageRegistries.perform(&:set_allow_duplicates_disabled)
end
with_them do
let(:token) do
case authentication_token_type
when :personal_access_token
personal_access_token
when :ci_job_token
'${env.CI_JOB_TOKEN}'
when :project_deploy_token
project_deploy_token.token
end
end
it 'prevents users from publishing duplicates' do
create_duplicated_package
before do
Page::Group::Settings::PackageRegistries.perform(&:set_allow_duplicates_disabled)
end
push_duplicated_package
it 'prevents users from publishing group level Maven packages duplicates', testcase: params[:testcase] do
create_duplicated_package
client_project.visit!
push_duplicated_package
show_latest_deploy_job
client_project.visit!
show_latest_deploy_job
Page::Project::Job::Show.perform do |job|
expect(job).not_to be_successful(timeout: 800)
end
Page::Project::Job::Show.perform do |job|
expect(job).not_to be_successful(timeout: 800)
end
end
end
context 'when enabled' do
where do
{
'using a personal access token' => {
authentication_token_type: :personal_access_token,
maven_header_name: 'Private-Token',
testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347580'
},
'using a project deploy token' => {
authentication_token_type: :project_deploy_token,
maven_header_name: 'Deploy-Token',
testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347583'
},
'using a ci job token' => {
authentication_token_type: :ci_job_token,
maven_header_name: 'Job-Token',
testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347577'
}
}
before do
Page::Group::Settings::PackageRegistries.perform(&:set_allow_duplicates_enabled)
end
with_them do
let(:token) do
case authentication_token_type
when :personal_access_token
personal_access_token
when :ci_job_token
'${env.CI_JOB_TOKEN}'
when :project_deploy_token
project_deploy_token.token
end
end
it 'allows users to publish duplicates' do
create_duplicated_package
before do
Page::Group::Settings::PackageRegistries.perform(&:set_allow_duplicates_enabled)
end
push_duplicated_package
it 'allows users to publish group level Maven packages duplicates', testcase: params[:testcase] do
create_duplicated_package
show_latest_deploy_job
push_duplicated_package
show_latest_deploy_job
Page::Project::Job::Show.perform do |job|
expect(job).to be_successful(timeout: 800)
end
Page::Project::Job::Show.perform do |job|
expect(job).to be_successful(timeout: 800)
end
end
end
def create_duplicated_package
settings_xml_with_pat = ERB.new(read_fixture('package_managers/maven', 'settings_with_pat.xml.erb')).result(binding)
package_pom_xml = ERB.new(read_fixture('package_managers/maven', 'package_pom.xml.erb')).result(binding)
settings_xml_with_pat = ERB.new(read_fixture('package_managers/maven/group', 'settings_with_pat.xml.erb')).result(binding)
pom_xml = ERB.new(read_fixture('package_managers/maven/group/producer', 'pom.xml.erb')).result(binding)
with_fixtures([
{
file_path: 'pom.xml',
content: package_pom_xml
content: pom_xml
},
{
file_path: 'settings.xml',
@ -259,17 +193,17 @@ module QA
def push_duplicated_package
Support::Retrier.retry_on_exception(max_attempts: 3, sleep_interval: 2) do
Resource::Repository::Commit.fabricate_via_api! do |commit|
maven_upload_package_yaml = ERB.new(read_fixture('package_managers/maven', 'maven_upload_package.yaml.erb')).result(binding)
package_pom_xml = ERB.new(read_fixture('package_managers/maven', 'package_pom.xml.erb')).result(binding)
settings_xml = ERB.new(read_fixture('package_managers/maven', 'settings.xml.erb')).result(binding)
gitlab_ci_yaml = ERB.new(read_fixture('package_managers/maven/group/producer', 'gitlab_ci.yaml.erb')).result(binding)
pom_xml = ERB.new(read_fixture('package_managers/maven/group/producer', 'pom.xml.erb')).result(binding)
settings_xml_with_pat = ERB.new(read_fixture('package_managers/maven/group', 'settings_with_pat.xml.erb')).result(binding)
commit.project = client_project
commit.commit_message = 'Add .gitlab-ci.yml'
commit.add_files(
[
{ file_path: '.gitlab-ci.yml', content: maven_upload_package_yaml },
{ file_path: 'pom.xml', content: package_pom_xml },
{ file_path: 'settings.xml', content: settings_xml }
{ file_path: '.gitlab-ci.yml', content: gitlab_ci_yaml },
{ file_path: 'pom.xml', content: pom_xml },
{ file_path: 'settings.xml', content: settings_xml_with_pat }
])
end
end
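
Both of the helpers above resolve their templates through read_fixture (provided by Runtime::Fixtures, which these specs include) and render them with the spec's local binding, so the ERB files can interpolate project names, tokens and registry URLs. As a rough, hypothetical sketch only — the real helper's location and details may differ — the pattern amounts to:

module QA
  module Runtime
    module Fixtures
      # Hypothetical sketch, not part of this commit: read an ERB template
      # from the QA fixtures tree; callers render it themselves with
      # ERB.new(template).result(binding) so spec-local variables resolve.
      def read_fixture(fixture_path, file_name)
        # the fixtures directory shown here is an assumption
        File.read(File.join('qa', 'fixtures', fixture_path, file_name))
      end
    end
  end
end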

View File

@ -3,6 +3,8 @@
module QA
RSpec.describe 'Package', :orchestrated, :packages, :object_storage, :reliable do
describe 'Maven project level endpoint' do
include Runtime::Fixtures
let(:group_id) { 'com.gitlab.qa' }
let(:artifact_id) { "maven-#{SecureRandom.hex(8)}" }
let(:package_name) { "#{group_id}/#{artifact_id}".tr('.', '/') }
@ -51,54 +53,6 @@ module QA
end
end
let(:gitlab_ci_file) do
{
file_path: '.gitlab-ci.yml',
content:
<<~YAML
deploy-and-install:
image: maven:3.6-jdk-11
script:
- 'mvn deploy -s settings.xml'
- 'mvn install -s settings.xml'
only:
- "#{package_project.default_branch}"
tags:
- "runner-for-#{package_project.name}"
YAML
}
end
let(:pom_file) do
{
file_path: 'pom.xml',
content: <<~XML
<project>
<groupId>#{group_id}</groupId>
<artifactId>#{artifact_id}</artifactId>
<version>#{package_version}</version>
<modelVersion>4.0.0</modelVersion>
<repositories>
<repository>
<id>#{package_project.name}</id>
<url>#{gitlab_address_with_port}/api/v4/projects/#{package_project.id}/-/packages/maven</url>
</repository>
</repositories>
<distributionManagement>
<repository>
<id>#{package_project.name}</id>
<url>#{gitlab_address_with_port}/api/v4/projects/#{package_project.id}/packages/maven</url>
</repository>
<snapshotRepository>
<id>#{package_project.name}</id>
<url>#{gitlab_address_with_port}/api/v4/projects/#{package_project.id}/packages/maven</url>
</snapshotRepository>
</distributionManagement>
</project>
XML
}
end
before do
Flow::Login.sign_in_unless_signed_in
runner
@ -142,40 +96,23 @@ module QA
end
end
let(:settings_xml) do
{
file_path: 'settings.xml',
content: <<~XML
<settings xmlns="http://maven.apache.org/SETTINGS/1.1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd">
<servers>
<server>
<id>#{package_project.name}</id>
<configuration>
<httpHeaders>
<property>
<name>#{maven_header_name}</name>
<value>#{token}</value>
</property>
</httpHeaders>
</configuration>
</server>
</servers>
</settings>
XML
}
end
it 'pushes and pulls a maven package via maven', testcase: params[:testcase] do
Support::Retrier.retry_on_exception(max_attempts: 3, sleep_interval: 2) do
Resource::Repository::Commit.fabricate_via_api! do |commit|
gitlab_ci_yaml = ERB.new(read_fixture('package_managers/maven/project', 'gitlab_ci.yaml.erb'))
.result(binding)
pom_xml = ERB.new(read_fixture('package_managers/maven/project', 'pom.xml.erb'))
.result(binding)
settings_xml = ERB.new(read_fixture('package_managers/maven/project', 'settings.xml.erb'))
.result(binding)
commit.project = package_project
commit.commit_message = 'Add .gitlab-ci.yml'
commit.commit_message = 'Add files'
commit.add_files(
[
gitlab_ci_file,
pom_file,
settings_xml
{ file_path: '.gitlab-ci.yml', content: gitlab_ci_yaml },
{ file_path: 'pom.xml', content: pom_xml },
{ file_path: 'settings.xml', content: settings_xml }
])
end
end
@ -185,17 +122,7 @@ module QA
Flow::Pipeline.visit_latest_pipeline
Page::Project::Pipeline::Show.perform do |pipeline|
pipeline.click_job('deploy')
end
Page::Project::Job::Show.perform do |job|
expect(job).to be_successful(timeout: 800)
job.click_element(:pipeline_path)
end
Page::Project::Pipeline::Show.perform do |pipeline|
pipeline.click_job('install')
pipeline.click_job('deploy-and-install')
end
Page::Project::Job::Show.perform do |job|

View File

@ -34,8 +34,8 @@ module QA
it 'pushes and pulls a maven package via gradle', testcase: params[:testcase] do
Support::Retrier.retry_on_exception(max_attempts: 3, sleep_interval: 2) do
Resource::Repository::Commit.fabricate_via_api! do |commit|
gradle_upload_yaml = ERB.new(read_fixture('package_managers/maven', 'gradle_upload_package.yaml.erb')).result(binding)
build_upload_gradle = ERB.new(read_fixture('package_managers/maven', 'build_upload.gradle.erb')).result(binding)
gradle_upload_yaml = ERB.new(read_fixture('package_managers/maven/gradle', 'gradle_upload_package.yaml.erb')).result(binding)
build_upload_gradle = ERB.new(read_fixture('package_managers/maven/gradle', 'build_upload.gradle.erb')).result(binding)
commit.project = package_project
commit.commit_message = 'Add .gitlab-ci.yml'
@ -73,8 +73,8 @@ module QA
Support::Retrier.retry_on_exception(max_attempts: 3, sleep_interval: 2) do
Resource::Repository::Commit.fabricate_via_api! do |commit|
gradle_install_yaml = ERB.new(read_fixture('package_managers/maven', 'gradle_install_package.yaml.erb')).result(binding)
build_install_gradle = ERB.new(read_fixture('package_managers/maven', 'build_install.gradle.erb')).result(binding)
gradle_install_yaml = ERB.new(read_fixture('package_managers/maven/gradle', 'gradle_install_package.yaml.erb')).result(binding)
build_install_gradle = ERB.new(read_fixture('package_managers/maven/gradle', 'build_install.gradle.erb')).result(binding)
commit.project = client_project
commit.commit_message = 'Add files'

View File

@ -297,14 +297,21 @@ RSpec.describe Gitlab::ClosingIssueExtractor do
end
context 'with an external issue tracker reference' do
it 'extracts the referenced issue' do
jira_project = create(:project, :with_jira_integration, name: 'JIRA_EXT1')
jira_project.add_maintainer(jira_project.creator)
jira_issue = ExternalIssue.new("#{jira_project.name}-1", project: jira_project)
closing_issue_extractor = described_class.new(jira_project, jira_project.creator)
message = "Resolve #{jira_issue.to_reference}"
let_it_be_with_reload(:jira_project) { create(:project, :with_jira_integration, name: 'JIRA_EXT1') }
expect(closing_issue_extractor.closed_by_message(message)).to eq([jira_issue])
let(:jira_issue) { ExternalIssue.new("#{jira_project.name}-1", project: jira_project) }
let(:message) { "Resolve #{jira_issue.to_reference}" }
subject { described_class.new(jira_project, jira_project.creator) }
it 'extracts the referenced issue' do
expect(subject.closed_by_message(message)).to eq([jira_issue])
end
it 'extracts the referenced issue even if GitLab issues are disabled for the project' do
jira_project.update!(issues_enabled: false)
expect(subject.closed_by_message(message)).to eq([jira_issue])
end
end
end
@ -346,6 +353,17 @@ RSpec.describe Gitlab::ClosingIssueExtractor do
end
end
context 'when target project has issues disabled' do
before do
project2.update!(issues_enabled: false)
end
it 'omits the issue reference' do
message = "Closes #{cross_reference}"
expect(subject.closed_by_message(message)).to be_empty
end
end
context "with an invalid URL" do
it do
message = "Closes https://google.com#{urls.project_issue_path(issue2.project, issue2)}"

View File

@ -0,0 +1,19 @@
# frozen_string_literal: true
require 'spec_helper'
RSpec.describe Gitlab::Usage::Metrics::Instrumentations::DistinctCountProjectsWithExpirationPolicyDisabledMetric do
before_all do
create(:container_expiration_policy, enabled: false)
create(:container_expiration_policy, enabled: false, created_at: 29.days.ago)
create(:container_expiration_policy, enabled: true)
end
it_behaves_like 'a correct instrumented metric value', { time_frame: '28d' } do
let(:expected_value) { 1 }
end
it_behaves_like 'a correct instrumented metric value', { time_frame: 'all' } do
let(:expected_value) { 2 }
end
end
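
The class exercised by this spec is referenced by name only. A plausible implementation — assuming the usual DatabaseMetric base class and a distinct count of project_id over disabled container expiration policies, which would give 1 for the 28-day window and 2 for all time with the records created above — might look roughly like this sketch (not the committed code):

# frozen_string_literal: true

module Gitlab
  module Usage
    module Metrics
      module Instrumentations
        # Hypothetical sketch, not part of this commit.
        class DistinctCountProjectsWithExpirationPolicyDisabledMetric < DatabaseMetric
          # Distinct projects with a disabled container expiration policy;
          # the base class is assumed to apply the created_at constraint for
          # the '28d' time frame, so the policy created 29 days ago counts
          # only toward 'all'.
          operation :distinct_count, column: :project_id

          relation { ContainerExpirationPolicy.where(enabled: false) }
        end
      end
    end
  end
end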

View File

@ -624,7 +624,6 @@ RSpec.describe Gitlab::UsageData, :aggregate_failures do
it 'gathers usage data' do
expect(subject[:projects_with_expiration_policy_enabled]).to eq 19
expect(subject[:projects_with_expiration_policy_disabled]).to eq 5
expect(subject[:projects_with_expiration_policy_enabled_with_keep_n_unset]).to eq 1
expect(subject[:projects_with_expiration_policy_enabled_with_keep_n_set_to_1]).to eq 1

View File

@ -971,16 +971,11 @@ RSpec.describe Issue do
context 'with a project' do
it 'returns false when feature is disabled' do
project.add_developer(user)
project.project_feature.update_attribute(:issues_access_level, ProjectFeature::DISABLED)
is_expected.to eq(false)
end
it 'returns false when restricted for members' do
project.project_feature.update_attribute(:issues_access_level, ProjectFeature::PRIVATE)
is_expected.to eq(false)
end
end
context 'without a user' do

View File

@ -68,7 +68,6 @@ module UsageDataHelpers
projects_with_error_tracking_enabled
projects_with_enabled_alert_integrations
projects_with_expiration_policy_enabled
projects_with_expiration_policy_disabled
projects_with_expiration_policy_enabled_with_keep_n_unset
projects_with_expiration_policy_enabled_with_keep_n_set_to_1
projects_with_expiration_policy_enabled_with_keep_n_set_to_5