gitlab-org--gitlab-foss/doc/development/pipelines.md


---
stage: none
group: Engineering Productivity
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---

Pipelines for the GitLab project

Pipelines for gitlab-org/gitlab (as well as the dev instance's) are configured in the usual .gitlab-ci.yml, which itself includes files under .gitlab/ci/ for easier maintenance.

We're striving to dogfood GitLab CI/CD features and best practices as much as possible.

Minimal test jobs before a merge request is approved

To reduce the pipeline cost and shorten the job duration, before a merge request is approved, the pipeline runs a minimal set of RSpec & Jest tests that are related to the merge request changes.

After a merge request has been approved, the pipeline contains the full RSpec & Jest test suites. This ensures that all tests have been run before a merge request is merged.

Overview of the GitLab project test dependency

To understand how the minimal test jobs are executed, we need to understand the dependency between GitLab code (frontend and backend) and the respective tests (Jest and RSpec). This dependency can be visualized in the following diagram:

```mermaid
flowchart LR
    subgraph frontend
    fe["Frontend code"]--tested with-->jest
    end
    subgraph backend
    be["Backend code"]--tested with-->rspec
    end

    be--generates-->fixtures["frontend fixtures"]
    fixtures--used in-->jest
```

In summary:

  • RSpec tests are dependent on the backend code.
  • Jest tests are dependent on both frontend and backend code, the latter through the frontend fixtures.

RSpec minimal jobs

To identify the minimal set of tests needed, we use the test_file_finder gem, with two strategies:

The test mappings contain a map from each source file to the list of test files that depend on it.

In the detect-tests job, we use this mapping to identify the minimal tests needed for the current merge request.

Later on in the rspec fail-fast job, we run the minimal tests needed for the current merge request.
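
A condensed sketch of how this selection could be wired up, assuming an illustrative tff invocation (the CLI shipped with the test_file_finder gem) and illustrative mapping and artifact file names:

```yaml
detect-tests:
  stage: prepare
  script:
    # List the files changed by the MR, then map them to the specs that cover them.
    - git diff --name-only ${CI_MERGE_REQUEST_DIFF_BASE_SHA}...HEAD > changed_files.txt
    - tff --mapping-file tests_mapping.yml $(cat changed_files.txt) > matching_tests.txt
  artifacts:
    paths:
      - matching_tests.txt

rspec fail-fast:
  stage: test
  needs: ["detect-tests"]
  script:
    # Run only the specs related to the MR changes.
    - bundle exec rspec $(cat matching_tests.txt)
```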

Exceptional cases

In addition, there are a few circumstances where we would always run the full RSpec tests:

  • when the pipeline:run-all-rspec label is set on the merge request
  • when the merge request is created by an automation (for example, Gitaly update or MR targeting a stable branch)
  • when the merge request is created in a security mirror
  • when any CI configuration file is changed (for example, .gitlab-ci.yml or .gitlab/ci/**/*)

Jest minimal jobs

To identify the minimal set of tests needed, we pass a list of all the changed files to Jest using the --findRelatedTests option. In this mode, Jest resolves all the dependencies related to the changed files, which include the test files that have these files in their dependency chain.
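
A minimal sketch of this mode, assuming the list of changed files is available in an illustrative changed_files.txt file:

```yaml
jest minimal:
  stage: test
  script:
    # Jest walks the dependency chain of each changed file and runs only the
    # test files that depend on one of them.
    - yarn jest --findRelatedTests $(cat changed_files.txt) --passWithNoTests
```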

Exceptional cases

In addition, there are a few circumstances where we would always run the full Jest tests:

  • when the pipeline:run-all-jest label is set on the merge request
  • when the merge request is created by an automation (for example, Gitaly update or MR targeting a stable branch)
  • when the merge request is created in a security mirror
  • when any CI configuration file is changed (for example, .gitlab-ci.yml or .gitlab/ci/**/*)
  • when any frontend "core" file is changed (for example, package.json, yarn.lock, babel.config.js, jest.config.*.js, config/helpers/**/*.js)
  • when any vendored JavaScript file is changed (for example, vendor/assets/javascripts/**/*)
  • when any backend file is changed (see the patterns list for details)

Fork pipelines

We run only the minimal RSpec & Jest jobs for fork pipelines, unless the pipeline:run-all-rspec label is set on the MR. The goal is to reduce the CI/CD minutes consumed by fork pipelines.

See the experiment issue.

Fail-fast job in merge request pipelines

To provide faster feedback when a merge request breaks existing tests, we are experimenting with a fail-fast mechanism.

An rspec fail-fast job is added in parallel to all other rspec jobs in a merge request pipeline. This job runs the tests that are directly related to the changes in the merge request.

If any of these tests fail, the rspec fail-fast job fails, triggering a fail-pipeline-early job to run. The fail-pipeline-early job:

  • Cancels the currently running pipeline and all in-progress jobs.
  • Sets the pipeline status to failed.

For example:

```mermaid
graph LR
    subgraph "prepare stage";
        A["detect-tests"]
    end

    subgraph "test stage";
        B["jest"];
        C["rspec migration"];
        D["rspec unit"];
        E["rspec integration"];
        F["rspec system"];
        G["rspec fail-fast"];
    end

    subgraph "post-test stage";
        Z["fail-pipeline-early"];
    end

    A --"artifact: list of test files"--> G
    G --"on failure"--> Z
```

The rspec fail-fast job is a no-op if there are more than 10 test files related to the merge request. This prevents the rspec fail-fast duration from exceeding the average rspec job duration, which would defeat its purpose.

This number can be overridden by setting a CI/CD variable named RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD.
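
For example, the threshold could be raised by setting the variable in the CI/CD configuration or in the project's CI/CD settings (the value below is illustrative):

```yaml
variables:
  RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD: "20"
```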

Faster feedback when reverting merge requests

When you need to revert a merge request, to get accelerated feedback, you can add the ~pipeline:revert label to your merge request.

When this label is assigned, the following steps of the CI/CD pipeline are skipped:

Apply the label to the merge request, and run a new pipeline for the MR.

Test jobs

We have dedicated jobs for each testing level and each job runs depending on the changes made in your merge request. If you want to force all the RSpec jobs to run regardless of your changes, you can add the pipeline:run-all-rspec label to the merge request.

WARNING: Forcing all jobs to run on a docs-only MR doesn't create the prerequisite jobs those jobs depend on, which leads to errors.

Test suite parallelization

Our current RSpec tests parallelization setup is as follows:

  1. The retrieve-tests-metadata job in the prepare stage ensures we have a knapsack/report-master.json file:
    • The knapsack/report-master.json file is fetched from the latest main pipeline which runs update-tests-metadata (for now, it's the 2-hourly maintenance scheduled master pipeline); if it's not there, we initialize the file with {}.
  2. Each [rspec|rspec-ee] [migration|unit|integration|system|geo] n m job is run with knapsack rspec and should have an evenly distributed share of tests:
    • It works because the jobs have access to the knapsack/report-master.json since the "artifacts from all previous stages are passed by default".
    • The jobs set their own report path to "knapsack/${TEST_TOOL}_${TEST_LEVEL}_${DATABASE}_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json".
    • If Knapsack is doing its job, test files that are run should be listed under Report specs, not under Leftover specs.
  3. The update-tests-metadata job (which only runs on scheduled pipelines for the canonical project) takes all the knapsack/rspec*.json files and merges them into a single knapsack/report-master.json file that is saved as an artifact.

After that, the next pipeline uses the up-to-date knapsack/report-master.json file.
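
A condensed sketch of one parallelized RSpec job, assuming the standard Knapsack environment variables and Rake task; the job name and report paths are illustrative:

```yaml
rspec unit pg12:
  stage: test
  parallel: 10
  variables:
    KNAPSACK_RSPEC_SUITE_REPORT_PATH: knapsack/report-master.json
    KNAPSACK_REPORT_PATH: knapsack/rspec_unit_pg12_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json
  script:
    # Knapsack reads CI_NODE_INDEX/CI_NODE_TOTAL and assigns this node an evenly
    # distributed share of the spec files, based on the report-master.json timings.
    - bundle exec rake knapsack:rspec
  artifacts:
    paths:
      - knapsack/
```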

Flaky tests

Automatic skipping of flaky tests

Tests that are known to be flaky are skipped unless the $SKIP_FLAKY_TESTS_AUTOMATICALLY variable is set to false or if the ~"pipeline:run-flaky-tests" label is set on the MR.

See the experiment issue.

Automatic retry of failing tests in a separate process

Unless the $RETRY_FAILED_TESTS_IN_NEW_PROCESS variable is set to false (it's true by default), RSpec tests that failed are automatically retried once in a separate RSpec process. The goal is to get rid of most side effects from previous tests that may lead to a subsequent test failure.

We keep track of retried tests in the $RETRIED_TESTS_REPORT_FILE file saved as an artifact by the rspec:flaky-tests-report job.

See the experiment issue.

Single database testing

By default, all tests run with multiple databases.

We also run tests with a single database in nightly scheduled pipelines, and in merge requests that touch database-related files.

If you want to force tests to run with a single database, you can add the pipeline:run-single-db label to the merge request.

Monitoring

The GitLab test suite is monitored for the main branch, and any branch that includes rspec-profile in its name.

Logging

  • Rails logging to log/test.log is disabled by default in CI for performance reasons. To override this setting, provide the RAILS_ENABLE_TEST_LOG environment variable.

Review app jobs

Consult the Review Apps dedicated page for more information.

If you want to force a Review App to be deployed regardless of your changes, you can add the pipeline:run-review-app label to the merge request.

As-if-FOSS jobs

The * as-if-foss jobs run the GitLab test suite "as if FOSS", meaning as if the jobs would run in the context of gitlab-org/gitlab-foss. These jobs are only created in the following cases:

  • when the pipeline:run-as-if-foss label is set on the merge request
  • when the merge request is created in the gitlab-org/security/gitlab project
  • when any CI configuration file is changed (for example, .gitlab-ci.yml or .gitlab/ci/**/*)

The * as-if-foss jobs are run in addition to the regular EE-context jobs. They have the FOSS_ONLY='1' variable set and get the ee/ folder removed before the tests start running.

The intent is to ensure that a change doesn't introduce a failure after gitlab-org/gitlab is synced to gitlab-org/gitlab-foss.
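
A minimal sketch of the pattern, with an illustrative job and base definition:

```yaml
rspec unit as-if-foss:
  extends:
    - .rspec-base    # illustrative base job
    - .as-if-foss    # sets FOSS_ONLY to '1' (see "Common job definitions" below)
  script:
    # Drop the EE-only code so the suite behaves as if it ran in gitlab-org/gitlab-foss.
    - rm -rf ee/
    - bundle exec rspec
```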

As-if-JH jobs

NOTE: This is disabled for now.

The * as-if-jh jobs run the GitLab test suite "as if JiHu", meaning as if the jobs would run in the context of GitLab JH. These jobs are only created in the following cases:

  • when the pipeline:run-as-if-jh label is set on the merge request
  • when the pipeline:run-all-rspec label is set on the merge request
  • when any code or backstage file is changed
  • when any startup CSS file is changed

The * as-if-jh jobs are run in addition to the regular EE-context jobs. The jh/ folder is added before the tests start running.

The intent is to ensure that a change doesn't introduce a failure after gitlab-org/gitlab is synced to GitLab JH.

When to consider applying pipeline:run-as-if-jh label

NOTE: This is disabled for now.

If a Ruby file is renamed and there's a corresponding prepend_mod line, it's likely that GitLab JH is relying on it and requires a corresponding change to rename the module or class it's prepending.

Corresponding JH branch

NOTE: This is disabled for now.

You can create a corresponding JH branch on GitLab JH by appending -jh to the branch name. If a corresponding JH branch is found, * as-if-jh jobs grab the jh folder from the respective branch, rather than from the default branch main-jh.

NOTE: For now, CI will try to fetch the branch on the GitLab JH mirror, so it might take some time for the new JH branch to propagate to the mirror.

Ruby 3.0 jobs

You can add the pipeline:run-in-ruby3 label to the merge request to switch the Ruby version used for running the whole test suite to 3.0. When you do this, the test suite no longer runs in Ruby 2.7 (the default), and an additional verify-ruby-2.7 job runs and always fails, to remind us to remove the label and run in Ruby 2.7 before merging the merge request.

This should let us:

  • Test changes for Ruby 3.0
  • Make sure it will not break anything when it's merged into the default branch

undercover RSpec test

Introduced in GitLab 14.6.

The rspec:undercoverage job runs undercover to detect, and fail the job if, any changes introduced in the merge request have zero test coverage.

The rspec:undercoverage job obtains coverage data from the rspec:coverage job.

In the event of an emergency, or false positive from this job, add the pipeline:skip-undercoverage label to the merge request to allow this job to fail.

Troubleshooting rspec:undercoverage failures

The rspec:undercoverage job has known bugs that can cause false positive failures. You can test coverage locally to determine if it's safe to apply ~"pipeline:skip-undercoverage". For example, using <spec> as the name of the test causing the failure:

  1. Run SIMPLECOV=1 bundle exec rspec <spec>.
  2. Run scripts/undercoverage.

If these commands return undercover: ✅ No coverage is missing in latest changes then you can apply ~"pipeline:skip-undercoverage" to bypass pipeline failures.

Ruby versions testing

Our test suite runs against Ruby 2 in merge requests and default branch pipelines.

We also run our test suite against Ruby 3 in separate 2-hourly scheduled pipelines, as GitLab.com will soon run on Ruby 3.

PostgreSQL versions testing

Our test suite runs against PG12 as GitLab.com runs on PG12 and Omnibus defaults to PG12 for new installs and upgrades.

We do run our test suite against PG11 and PG13 on nightly scheduled pipelines.

We also run our test suite against PG11 upon specific database library changes in MRs and main pipelines (with the rspec db-library-code pg11 job).

Current versions testing

| Where? | PostgreSQL version | Ruby version |
| ------ | ------------------ | ------------ |
| Merge requests | 12 (default version), 11 for DB library changes | 2.7 (default version) |
| master branch commits | 12 (default version), 11 for DB library changes | 2.7 (default version) |
| maintenance scheduled pipelines for the master branch (every even-numbered hour) | 12 (default version), 11 for DB library changes | 2.7 (default version) |
| maintenance scheduled pipelines for the ruby3 branch (every odd-numbered hour), see below. | 12 (default version), 11 for DB library changes | 3.0 (coded in the branch) |
| nightly scheduled pipelines for the master branch | 12 (default version), 11, 13 | 2.7 (default version) |

The pipeline configuration for the scheduled pipeline testing Ruby 3 is stored in the ruby3-sync branch. The pipeline updates the ruby3 branch with latest master, and then it triggers a regular branch pipeline for ruby3. Any changes in ruby3 are only for running the pipeline. It should never be merged back to master. Any other Ruby 3 changes should go into master directly, which should be compatible with Ruby 2.7.

Previously, ruby3-sync used a project token stored in RUBY3_SYNC_TOKEN (now backed up in RUBY3_SYNC_TOKEN_NOT_USED). However, due to various permissions issues, we ended up using an access token from gitlab-bot, so RUBY3_SYNC_TOKEN is now actually an access token from gitlab-bot.

Long-term plan

We follow the PostgreSQL versions shipped with Omnibus GitLab:

| PostgreSQL version | 14.1 (July 2021) | 14.2 (August 2021) | 14.3 (September 2021) | 14.4 (October 2021) | 14.5 (November 2021) | 14.6 (December 2021) |
| ------------------ | ---------------- | ------------------ | --------------------- | ------------------- | -------------------- | -------------------- |
| PG12 | MRs/2-hour/nightly | MRs/2-hour/nightly | MRs/2-hour/nightly | MRs/2-hour/nightly | MRs/2-hour/nightly | MRs/2-hour/nightly |
| PG11 | nightly | nightly | nightly | nightly | nightly | nightly |
| PG13 | nightly | nightly | nightly | nightly | nightly | nightly |

Redis versions testing

Our test suite runs against Redis 6 as GitLab.com runs on Redis 6 and Omnibus defaults to Redis 6 for new installs and upgrades.

We do run our test suite against Redis 5 on nightly scheduled pipelines, specifically when running backward-compatible and forward-compatible PostgreSQL jobs.

Current versions testing

| Where? | Redis version |
| ------ | ------------- |
| MRs | 6 |
| default branch (non-scheduled pipelines) | 6 |
| nightly scheduled pipelines | 5 |

Pipelines types for merge requests

In general, pipelines for an MR fall into one of the following types (from shorter to longer), depending on the changes made in the MR:

A "pipeline type" is an abstract term that mostly describes the "critical path" (for example, the chain of jobs for which the sum of individual duration equals the pipeline's duration). We use these "pipeline types" in metrics dashboards to detect what types and jobs need to be optimized first.

An MR that touches multiple areas would be associated with the longest type applicable. For instance, an MR that touches backend and frontend would fall into the "Frontend" pipeline type since this type takes longer to finish than the "Backend" pipeline type.

We use the rules: and needs: keywords extensively to determine the jobs that need to be run in a pipeline. Note that an MR that includes multiple types of changes would have a pipeline that includes jobs from multiple types (for example, a combination of docs-only and code-only pipelines).

Following are graphs of the critical paths for each pipeline type. Jobs that aren't part of the critical path are omitted.

Documentation pipeline

Reference pipeline.

```mermaid
graph LR
  classDef criticalPath fill:#f66;

  1-3["docs-lint links (5 minutes)"];
  class 1-3 criticalPath;
  click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356757&udv=0"
```

Backend pipeline

Reference pipeline.

```mermaid
graph RL;
  classDef criticalPath fill:#f66;

  1-3["compile-test-assets (6 minutes)"];
  class 1-3 criticalPath;
  click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914317&udv=0"
  1-6["setup-test-env (4 minutes)"];
  click 1-6 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914315&udv=0"
  1-14["retrieve-tests-metadata"];
  click 1-14 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356697&udv=0"
  1-15["detect-tests"];
  click 1-15 "https://app.periscopedata.com/app/gitlab/652085/EP---Jobs-Durations?widget=10113603&udv=1005715"

  2_5-1["rspec & db jobs (24 minutes)"];
  class 2_5-1 criticalPath;
  click 2_5-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations"
  2_5-1 --> 1-3 & 1-6 & 1-14 & 1-15;

  3_2-1["rspec:coverage (5.35 minutes)"];
  class 3_2-1 criticalPath;
  click 3_2-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7248745&udv=0"
  3_2-1 -.->|"(don't use needs<br/>because of limitations)"| 2_5-1;

  4_3-1["rspec:undercoverage (3.5 minutes)"];
  class 4_3-1 criticalPath;
  click 4_3-1 "https://app.periscopedata.com/app/gitlab/652085/EP---Jobs-Durations?widget=13446492&udv=1005715"
  4_3-1 --> 3_2-1;
```

Frontend pipeline

Reference pipeline.

```mermaid
graph RL;
  classDef criticalPath fill:#f66;

  1-2["build-qa-image (2 minutes)"];
  click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914325&udv=0"
  1-5["compile-production-assets (16 minutes)"];
  class 1-5 criticalPath;
  click 1-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914312&udv=0"

  2_3-1["build-assets-image (1.3 minutes)"];
  class 2_3-1 criticalPath;
  2_3-1 --> 1-5

  2_6-1["start-review-app-pipeline (49 minutes)"];
  class 2_6-1 criticalPath;
  click 2_6-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations"
  2_6-1 --> 2_3-1 & 1-2;
```

End-to-end pipeline

Reference pipeline.

```mermaid
graph RL;
  classDef criticalPath fill:#f66;

  1-2["build-qa-image (2 minutes)"];
  click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914325&udv=0"
  1-5["compile-production-assets (16 minutes)"];
  class 1-5 criticalPath;
  click 1-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914312&udv=0"
  1-15["detect-tests"];
  click 1-15 "https://app.periscopedata.com/app/gitlab/652085/EP---Jobs-Durations?widget=10113603&udv=1005715"

  2_3-1["build-assets-image (1.3 minutes)"];
  class 2_3-1 criticalPath;
  2_3-1 --> 1-5

  2_4-1["e2e:package-and-test (102 minutes)"];
  class 2_4-1 criticalPath;
  click 2_4-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914305&udv=0"
  2_4-1 --> 1-2 & 2_3-1 & 1-15;
```

CI configuration internals

Workflow rules

Pipelines for the GitLab project are created using the workflow:rules keyword feature of GitLab CI/CD.

Pipelines are always created for the following scenarios:

  • main branch, including on schedules, pushes, merges, and so on.
  • Merge requests.
  • Tags.
  • Stable, auto-deploy, and security branches.

Pipeline creation is also affected by the following CI/CD variables:

  • If $FORCE_GITLAB_CI is set, pipelines are created.
  • If $GITLAB_INTERNAL is not set, pipelines are not created.

No pipeline is created in any other case (for example, when pushing a branch with no MR for it).

The source of truth for these workflow rules is defined in .gitlab-ci.yml.
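
A condensed sketch of the shape of these rules (the actual rules in .gitlab-ci.yml are more detailed, and the stable, auto-deploy, and security branch conditions are omitted here):

```yaml
workflow:
  rules:
    # If $GITLAB_INTERNAL is not set, never create a pipeline.
    - if: '$GITLAB_INTERNAL == null'
      when: never
    # If $FORCE_GITLAB_CI is set, create a pipeline.
    - if: '$FORCE_GITLAB_CI'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
    - if: '$CI_COMMIT_TAG'
```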

Default image

The default image is defined in .gitlab-ci.yml.

It includes Ruby, Go, Git, Git LFS, Chrome, Node, Yarn, PostgreSQL, and Graphics Magick.

The images used in our pipelines are configured in the gitlab-org/gitlab-build-images project, which is push-mirrored to gitlab/gitlab-build-images for redundancy.

The current version of the build images can be found in the "Used by GitLab" section.

Default variables

In addition to the predefined CI/CD variables, each pipeline includes default variables defined in .gitlab-ci.yml.

Stages

The current stages are:

  • sync: This stage is used to synchronize changes from gitlab-org/gitlab to gitlab-org/gitlab-foss.
  • prepare: This stage includes jobs that prepare artifacts that are needed by jobs in subsequent stages.
  • build-images: This stage includes jobs that prepare Docker images that are needed by jobs in subsequent stages or downstream pipelines.
  • fixtures: This stage includes jobs that prepare fixtures needed by frontend tests.
  • lint: This stage includes linting and static analysis jobs.
  • test: This stage includes most of the tests, and DB/migration jobs.
  • post-test: This stage includes jobs that build reports or gather data from the test stage's jobs (for example, coverage, Knapsack metadata, and so on).
  • review: This stage includes jobs that build the CNG images, deploy them, and run end-to-end tests against Review Apps (see Review Apps for details). It also includes Docs Review App jobs.
  • qa: This stage includes jobs that perform QA tasks against the Review App that is deployed in stage review.
  • post-qa: This stage includes jobs that build reports or gather data from the qa stage's jobs (for example, Review App performance report).
  • pages: This stage includes a job that deploys the various reports as GitLab Pages (for example, coverage-ruby, and webpack-report (found at https://gitlab-org.gitlab.io/gitlab/webpack-report/, but there is an issue with the deployment)).
  • notify: This stage includes jobs that notify various failures to Slack.

Dependency Proxy

Some of the jobs are using images from Docker Hub, where we also use ${GITLAB_DEPENDENCY_PROXY_ADDRESS} as a prefix to the image path, so that we pull images from our Dependency Proxy. By default, this variable is set from the value of ${GITLAB_DEPENDENCY_PROXY}.

${GITLAB_DEPENDENCY_PROXY} is a group CI/CD variable defined in gitlab-org as ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/. This means when we use an image defined as:

```yaml
image: ${GITLAB_DEPENDENCY_PROXY_ADDRESS}alpine:edge
```

Projects in the gitlab-org group pull from the Dependency Proxy, while forks that reside on any other personal namespaces or groups fall back to Docker Hub unless ${GITLAB_DEPENDENCY_PROXY} is also defined there.
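
In other words, the wiring is roughly equivalent to the following (illustrative) configuration, where an empty ${GITLAB_DEPENDENCY_PROXY_ADDRESS} makes the image resolve to plain alpine:edge on Docker Hub:

```yaml
variables:
  GITLAB_DEPENDENCY_PROXY_ADDRESS: "${GITLAB_DEPENDENCY_PROXY}"

some-job:
  image: ${GITLAB_DEPENDENCY_PROXY_ADDRESS}alpine:edge
  script:
    # Pulled through the Dependency Proxy when the prefix is set, from Docker Hub otherwise.
    - cat /etc/os-release
```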

Workaround for when a pipeline is started by a Project access token user

When a pipeline is started by a Project access token user (for example, the release-tools approver bot user which automatically updates the Gitaly version used in the main project), the Dependency Proxy isn't accessible and the job fails at the Preparing the "docker+machine" executor step. To work around that, we have a special workflow rule that overrides the ${GITLAB_DEPENDENCY_PROXY_ADDRESS} variable so that the Dependency Proxy isn't used in that case:

```yaml
- if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $GITLAB_USER_LOGIN =~ /project_\d+_bot\d*/'
  variables:
    GITLAB_DEPENDENCY_PROXY_ADDRESS: ""
```

NOTE: We don't directly override the ${GITLAB_DEPENDENCY_PROXY} variable because group-level variables have higher precedence than .gitlab-ci.yml variables.

Common job definitions

Most of the jobs extend from a few CI definitions defined in .gitlab/ci/global.gitlab-ci.yml that are scoped to a single configuration keyword.

| Job definitions | Description |
| --------------- | ----------- |
| .default-retry | Allows a job to retry upon unknown_failure, api_failure, runner_system_failure, job_execution_timeout, or stuck_or_timeout_failure. |
| .default-before_script | Allows a job to use a default before_script definition suitable for Ruby/Rails tasks that may need a database running (for example, tests). |
| .setup-test-env-cache | Allows a job to use a default cache definition suitable for setting up test environment for subsequent Ruby/Rails tasks. |
| .rails-cache | Allows a job to use a default cache definition suitable for Ruby/Rails tasks. |
| .static-analysis-cache | Allows a job to use a default cache definition suitable for static analysis tasks. |
| .coverage-cache | Allows a job to use a default cache definition suitable for coverage tasks. |
| .qa-cache | Allows a job to use a default cache definition suitable for QA tasks. |
| .yarn-cache | Allows a job to use a default cache definition suitable for frontend jobs that do a yarn install. |
| .assets-compile-cache | Allows a job to use a default cache definition suitable for frontend jobs that compile assets. |
| .use-pg11 | Allows a job to run the postgres 11 and redis services (see .gitlab/ci/global.gitlab-ci.yml for the specific versions of the services). |
| .use-pg11-ee | Same as .use-pg11 but also use an elasticsearch service (see .gitlab/ci/global.gitlab-ci.yml for the specific version of the service). |
| .use-pg12 | Allows a job to use the postgres 12 and redis services (see .gitlab/ci/global.gitlab-ci.yml for the specific versions of the services). |
| .use-pg12-ee | Same as .use-pg12 but also use an elasticsearch service (see .gitlab/ci/global.gitlab-ci.yml for the specific version of the service). |
| .use-pg13 | Allows a job to use the postgres 13 and redis services (see .gitlab/ci/global.gitlab-ci.yml for the specific versions of the services). |
| .use-pg13-ee | Same as .use-pg13 but also use an elasticsearch service (see .gitlab/ci/global.gitlab-ci.yml for the specific version of the service). |
| .use-kaniko | Allows a job to use the kaniko tool to build Docker images. |
| .as-if-foss | Simulate the FOSS project by setting the FOSS_ONLY='1' CI/CD variable. |
| .use-docker-in-docker | Allows a job to use Docker in Docker. |
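
An illustrative job composing several of these definitions via extends (the job name and script are hypothetical):

```yaml
rspec migration pg12:
  extends:
    - .default-retry
    - .default-before_script
    - .rails-cache
    - .use-pg12
  stage: test
  script:
    - bundle exec rspec spec/migrations
```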

rules, if: conditions and changes: patterns

We're using the rules keyword extensively.

All rules definitions are defined in rules.gitlab-ci.yml, then included in individual jobs via extends.

The rules definitions are composed of if: conditions and changes: patterns, which are also defined in rules.gitlab-ci.yml and included in rules definitions via YAML anchors.
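
A condensed sketch of this composition pattern, with illustrative condition, pattern, rule, and job names:

```yaml
.if-merge-request: &if-merge-request
  if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

.backend-patterns: &backend-patterns
  - "{,ee/,jh/}{app,lib,spec}/**/*.rb"   # illustrative glob

.rails:rules:unit:
  rules:
    # Combine an if: condition and a changes: pattern into a single rule.
    - <<: *if-merge-request
      changes: *backend-patterns

rspec unit:
  extends: .rails:rules:unit
  script:
    - bundle exec rspec
```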

if: conditions

| if: conditions | Description | Notes |
| -------------- | ----------- | ----- |
| if-not-canonical-namespace | Matches if the project isn't in the canonical (gitlab-org/) or security (gitlab-org/security) namespace. | Use to create a job for forks (by using `when: on_success |
| if-not-ee | Matches if the project isn't EE (that is, project name isn't gitlab or gitlab-ee). | Use to create a job only in the FOSS project (by using `when: on_success |
| if-not-foss | Matches if the project isn't FOSS (that is, project name isn't gitlab-foss, gitlab-ce, or gitlabhq). | Use to create a job only in the EE project (by using `when: on_success |
| if-default-refs | Matches if the pipeline is for master, main, /^[\d-]+-stable(-ee)?$/ (stable branches), /^\d+-\d+-auto-deploy-\d+$/ (auto-deploy branches), /^security\// (security branches), merge requests, and tags. | Note that jobs aren't created for branches with this default configuration. |
| if-master-refs | Matches if the current branch is master or main. | |
| if-master-push | Matches if the current branch is master or main and pipeline source is push. | |
| if-master-schedule-maintenance | Matches if the current branch is master or main and pipeline runs on a 2-hourly schedule. | |
| if-master-schedule-nightly | Matches if the current branch is master or main and pipeline runs on a nightly schedule. | |
| if-auto-deploy-branches | Matches if the current branch is an auto-deploy one. | |
| if-master-or-tag | Matches if the pipeline is for the master or main branch or for a tag. | |
| if-merge-request | Matches if the pipeline is for a merge request. | |
| if-merge-request-title-as-if-foss | Matches if the pipeline is for a merge request and the MR has label ~"pipeline:run-as-if-foss". | |
| if-merge-request-title-update-caches | Matches if the pipeline is for a merge request and the MR has label ~"pipeline:update-cache". | |
| if-merge-request-title-run-all-rspec | Matches if the pipeline is for a merge request and the MR has label ~"pipeline:run-all-rspec". | |
| if-security-merge-request | Matches if the pipeline is for a security merge request. | |
| if-security-schedule | Matches if the pipeline is for a security scheduled pipeline. | |
| if-nightly-master-schedule | Matches if the pipeline is for a master scheduled pipeline with $NIGHTLY set. | |
| if-dot-com-gitlab-org-schedule | Limits jobs creation to scheduled pipelines for the gitlab-org group on GitLab.com. | |
| if-dot-com-gitlab-org-master | Limits jobs creation to the master or main branch for the gitlab-org group on GitLab.com. | |
| if-dot-com-gitlab-org-merge-request | Limits jobs creation to merge requests for the gitlab-org group on GitLab.com. | |
| if-dot-com-gitlab-org-and-security-merge-request | Limit jobs creation to merge requests for the gitlab-org and gitlab-org/security groups on GitLab.com. | |
| if-dot-com-gitlab-org-and-security-tag | Limits job creation to tags for the gitlab-org and gitlab-org/security groups on GitLab.com. | |
| if-dot-com-ee-schedule | Limits jobs to scheduled pipelines for the gitlab-org/gitlab project on GitLab.com. | |
| if-security-pipeline-merge-result | Matches if the pipeline is for a security merge request triggered by @gitlab-release-tools-bot. | |

changes: patterns

| changes: patterns | Description |
| ----------------- | ----------- |
| ci-patterns | Only create job for CI configuration-related changes. |
| ci-build-images-patterns | Only create job for CI configuration-related changes related to the build-images stage. |
| ci-review-patterns | Only create job for CI configuration-related changes related to the review stage. |
| ci-qa-patterns | Only create job for CI configuration-related changes related to the qa stage. |
| yaml-lint-patterns | Only create job for YAML-related changes. |
| docs-patterns | Only create job for docs-related changes. |
| frontend-dependency-patterns | Only create job when frontend dependencies are updated (that is, package.json, and yarn.lock). |
| frontend-patterns | Only create job for frontend-related changes. |
| backend-patterns | Only create job for backend-related changes. |
| db-patterns | Only create job for DB-related changes. |
| backstage-patterns | Only create job for backstage-related changes (that is, Danger, fixtures, RuboCop, specs). |
| code-patterns | Only create job for code-related changes. |
| qa-patterns | Only create job for QA-related changes. |
| code-backstage-patterns | Combination of code-patterns and backstage-patterns. |
| code-qa-patterns | Combination of code-patterns and qa-patterns. |
| code-backstage-qa-patterns | Combination of code-patterns, backstage-patterns, and qa-patterns. |
| static-analysis-patterns | Only create jobs for Static Analysis configuration-related changes. |

Performance

Interruptible pipelines

By default, all jobs are interruptible, except the dont-interrupt-me job which runs automatically on main, and is manual otherwise.

If you want a running pipeline to finish even if you push new commits to a merge request, be sure to start the dont-interrupt-me job before pushing.
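
A minimal sketch of this setup (the stage and rules are illustrative):

```yaml
default:
  interruptible: true

dont-interrupt-me:
  stage: sync
  interruptible: false
  rules:
    # Runs automatically on the default branch, manual everywhere else.
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
    - when: manual
      allow_failure: true
  script:
    - echo "Once started, this pipeline won't be canceled by newer commits."
```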

Git fetch caching

Because GitLab.com uses the pack-objects cache, concurrent Git fetches of the same pipeline ref are deduplicated on the Gitaly server (always) and served from cache (when available).

This works well for the following reasons:

  • The pack-objects cache is enabled on all Gitaly servers on GitLab.com.
  • The CI/CD Git strategy setting for gitlab-org/gitlab is Git clone, causing all jobs to fetch the same data, which maximizes the cache hit ratio.
  • We use shallow clone to avoid downloading the full Git history for every job.

Caching strategy

  1. All jobs must only pull caches by default.
  2. All jobs must be able to pass with an empty cache. In other words, caches are only there to speed up jobs.
  3. We currently have several different cache definitions defined in .gitlab/ci/global.gitlab-ci.yml, with fixed keys:
    • .setup-test-env-cache
    • .ruby-cache
    • .rails-cache
    • .static-analysis-cache
    • .rubocop-cache
    • .coverage-cache
    • .ruby-node-cache
    • .qa-cache
    • .yarn-cache
    • .assets-compile-cache (the key includes ${NODE_ENV} so it's actually two different caches).
  4. These cache definitions are composed of multiple atomic caches.
  5. Only the following jobs, running in 2-hourly maintenance scheduled pipelines, are pushing (that is, updating) to the caches:
  6. These jobs can also be forced to run in merge requests with the pipeline:update-cache label (this can be useful to warm the caches in an MR that updates the cache keys); see the sketch after this list.
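
A condensed sketch of the pull-only versus push behavior, with illustrative job and key names (.rails-cache is one of the definitions listed above):

```yaml
.rails-cache:
  cache:
    key: rails-v1            # fixed key shared by every job using this cache
    paths:
      - vendor/ruby/
    policy: pull             # regular jobs only download the cache

update-rails-cache:
  extends: .rails-cache
  cache:
    policy: pull-push        # only the scheduled/update jobs upload the cache
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
    - if: '$CI_MERGE_REQUEST_LABELS =~ /pipeline:update-cache/'
  script:
    # Warm the cache (illustrative command).
    - bundle install --jobs=$(nproc)
```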

Artifacts strategy

We limit the artifacts that are saved and retrieved by jobs to the minimum to reduce the upload/download time and costs, as well as the artifacts storage.

Components caching

Some external components (GitLab Workhorse and frontend assets) of GitLab need to be built from source as a preliminary step for running tests.

cache-workhorse

In this MR, and then this MR, we introduced a new cache-workhorse job that:

  • runs automatically for all GitLab.com gitlab-org/gitlab scheduled pipelines
  • runs automatically for any master commit that touches the workhorse/ folder
  • is manual for GitLab.com's gitlab-org's MRs that touch caching-related files

This job tries to download a generic package that contains GitLab Workhorse binaries needed in the GitLab test suite (under tmp/tests/gitlab-workhorse).

  • If the package URL returns a 404:
    1. It runs scripts/setup-test-env, so that the GitLab Workhorse binaries are built.
    2. It then creates an archive which contains the binaries and uploads it as a generic package.
  • Otherwise, if the package already exists, it exits the job successfully.

We also changed the setup-test-env job to:

  1. First download the GitLab Workhorse generic package built and uploaded by cache-workhorse.
  2. If the package is retrieved successfully, its content is placed in the right folder (for example, tmp/tests/gitlab-workhorse), preventing the building of the binaries when scripts/setup-test-env is run later on.
  3. If the package URL returns a 404, the behavior doesn't change compared to the current one: the GitLab Workhorse binaries are built as part of scripts/setup-test-env.

NOTE: The version of the package is the workhorse tree SHA (for example, git rev-parse HEAD:workhorse).
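
A condensed sketch of the download-or-build logic, assuming the generic packages API and illustrative package and archive names:

```yaml
cache-workhorse:
  stage: prepare
  script:
    # The package version is the workhorse tree SHA.
    - WORKHORSE_TREE=$(git rev-parse HEAD:workhorse)
    - PACKAGE_URL="${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/gitlab-workhorse/${WORKHORSE_TREE}/gitlab-workhorse.tar.gz"
    - |
      # If the package already exists, there is nothing to do.
      if curl --fail --silent --head --header "JOB-TOKEN: ${CI_JOB_TOKEN}" "${PACKAGE_URL}"; then
        echo "Package already exists, skipping build"
        exit 0
      fi
      # Otherwise, build the binaries and upload them as a generic package.
      scripts/setup-test-env
      tar czf gitlab-workhorse.tar.gz -C tmp/tests gitlab-workhorse
      curl --fail --header "JOB-TOKEN: ${CI_JOB_TOKEN}" --upload-file gitlab-workhorse.tar.gz "${PACKAGE_URL}"
```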

cache-assets

In this MR, we introduced three new cache-assets:test, cache-assets:test as-if-foss, and cache-assets:production jobs that:

  • never run unless $CACHE_ASSETS_AS_PACKAGE == "true"
  • run automatically for all GitLab.com gitlab-org/gitlab scheduled pipelines
  • run automatically for any master commit that touches the assets-related folders
  • are manual for GitLab.com's gitlab-org's MRs that touch caching-related files

These jobs try to download a generic package that contains the compiled GitLab assets needed in the GitLab test suite (under app/assets/javascripts/locale/**/app.js, and public/assets).

  • If the package URL returns a 404:
    1. It runs bin/rake gitlab:assets:compile, so that the GitLab assets are compiled.
    2. It then creates an archive which contains the assets and uploads it as a generic package. The package version is set to the assets folders' hash sum.
  • Otherwise, if the package already exists, it exits the job successfully.

compile-*-assets

We also changed the compile-test-assets, compile-test-assets as-if-foss, and compile-production-assets jobs to:

  1. First download the "native" cache assets, which contain:

  2. We then compute the SHA256 hexdigest of all the source files the assets depend on, for the currently checked out branch. We store the hexdigest in the GITLAB_ASSETS_HASH variable.

  3. If $CACHE_ASSETS_AS_PACKAGE == "true", we download the generic package built and uploaded by cache-assets:*.

    • If the cache is up-to-date for the checked out branch, we download the native cache and the cache package. We could optimize that by not downloading the generic package, but the native cache is actually very often outdated because it's rebuilt only every 2 hours.
  4. We run the assets_compile_script function, which itself runs the assets:compile Rake task.

    This task is responsible for deciding if assets need to be compiled or not. It compares the HEAD SHA256 hexdigest from $GITLAB_ASSETS_HASH with the master hexdigest from cached-assets-hash.txt.

  5. If the hashes are the same, we don't compile anything. If they're different, we compile the assets.
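
A condensed sketch of the hash comparison, with an illustrative set of source files and the cached-assets-hash.txt file mentioned above:

```yaml
compile-test-assets:
  stage: prepare
  script:
    - |
      # Hash every source file the compiled assets depend on (illustrative file set).
      export GITLAB_ASSETS_HASH=$(find app/assets config -type f -exec sha256sum {} + | sort | sha256sum | cut -d ' ' -f 1)
      # cached-assets-hash.txt holds the hexdigest the cached assets were built from (on master).
      if [ "${GITLAB_ASSETS_HASH}" = "$(cat cached-assets-hash.txt 2>/dev/null)" ]; then
        echo "Assets are up to date, skipping compilation"
      else
        bin/rake gitlab:assets:compile
      fi
  artifacts:
    paths:
      - public/assets/
```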

Pre-clone step

NOTE: We no longer use this optimization for gitlab-org/gitlab because the pack-objects cache allows Gitaly to serve the full CI/CD fetch traffic now. See Git fetch caching.

The pre-clone step works by using the CI_PRE_CLONE_SCRIPT variable defined by GitLab.com shared runners.


Return to Development documentation