# GitLab CI/CD pipeline configuration reference

GitLab CI/CD pipelines are configured using a YAML file called `.gitlab-ci.yml` within each project.

The `.gitlab-ci.yml` file defines the structure and order of the pipelines and determines:

- What to execute using GitLab Runner.
- What decisions to make when specific conditions are encountered. For example, when a process succeeds or fails.
This topic covers CI/CD pipeline configuration. For other CI/CD configuration information, see:
- GitLab CI/CD Variables, for configuring the environment the pipelines run in.
- GitLab Runner advanced configuration, for configuring GitLab Runner.
We have complete examples of configuring pipelines:
- For a quick introduction to GitLab CI/CD, follow our quick start guide.
- For a collection of examples, see GitLab CI/CD Examples.
- To see a large `.gitlab-ci.yml` file used in an enterprise, see the `.gitlab-ci.yml` file for `gitlab`.
For some additional information about GitLab CI/CD:
- Watch the CI/CD Ease of configuration video.
- Watch the Making the case for CI/CD in your organization webcast to learn the benefits of CI/CD and how to measure the results of CI/CD automation.
- Learn how Verizon reduced rebuilds from 30 days to under 8 hours with GitLab.
NOTE: **Note:**
If you have a mirrored repository that GitLab pulls from, you may need to enable pipeline triggering in your project's **Settings > Repository > Pull from a remote repository > Trigger pipelines for mirror updates**.
## Introduction

Pipeline configuration begins with jobs. Jobs are the most fundamental element of a `.gitlab-ci.yml` file.

Jobs are:

- Defined with constraints stating under what conditions they should be executed.
- Top-level elements with an arbitrary name that must contain at least the `script` clause.
- Not limited in how many can be defined.
For example:

```yaml
job1:
  script: "execute-script-for-job1"

job2:
  script: "execute-script-for-job2"
```
The above example is the simplest possible CI/CD configuration with two separate
jobs, where each of the jobs executes a different command.
Of course a command can execute code directly (`./configure;make;make install`) or run a script (`test.sh`) in the repository.
Jobs are picked up by Runners and executed within the environment of the Runner. What is important is that each job runs independently of the others.
### Validate the `.gitlab-ci.yml`

Each instance of GitLab CI/CD has an embedded debug tool called Lint, which validates the content of your `.gitlab-ci.yml` files. You can find the Lint under the page `ci/lint` of your project namespace. For example, `https://gitlab.example.com/gitlab-org/project-123/-/ci/lint`.

### Unavailable names for jobs

Each job must have a unique name, but there are a few reserved keywords that can't be used as job names:

- `image`
- `services`
- `stages`
- `types`
- `before_script`
- `after_script`
- `variables`
- `cache`
- `include`
### Using reserved keywords

If you get a validation error when using specific values (for example, `true` or `false`), try to:

- Quote them.
- Change them to a different form. For example, `/bin/true`.
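As an illustration, here is a minimal sketch of both workarounds (the job name `check` is hypothetical):

```yaml
check:
  script:
    - "true"      # quoted, so the YAML parser reads it as a string, not a boolean
    - /bin/true   # an equivalent form that is unambiguous without quoting
```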
## Configuration parameters
A job is defined as a list of parameters that define the job's behavior.
The following table lists available parameters for jobs:
| Keyword | Description |
|:--------|:------------|
| `script` | Shell script that is executed by the Runner. |
| `image` | Use Docker images. Also available: `image:name` and `image:entrypoint`. |
| `services` | Use Docker services images. Also available: `services:name`, `services:alias`, `services:entrypoint`, and `services:command`. |
| `before_script` | Override a set of commands that are executed before the job. |
| `after_script` | Override a set of commands that are executed after the job. |
| `stage` | Defines a job stage (default: `test`). |
| `only` | Limit when jobs are created. Also available: `only:refs`, `only:kubernetes`, `only:variables`, and `only:changes`. |
| `except` | Limit when jobs are not created. Also available: `except:refs`, `except:kubernetes`, `except:variables`, and `except:changes`. |
| `rules` | List of conditions to evaluate and determine selected attributes of a job, and whether or not it's created. May not be used alongside `only`/`except`. |
| `tags` | List of tags that are used to select a Runner. |
| `allow_failure` | Allow the job to fail. A failed job does not contribute to the commit status. |
| `when` | When to run the job. Also available: `when:manual` and `when:delayed`. |
| `environment` | Name of an environment to which the job deploys. Also available: `environment:name`, `environment:url`, `environment:on_stop`, `environment:auto_stop_in`, and `environment:action`. |
| `cache` | List of files that should be cached between subsequent runs. Also available: `cache:paths`, `cache:key`, `cache:untracked`, and `cache:policy`. |
| `artifacts` | List of files and directories to attach to a job on success. Also available: `artifacts:paths`, `artifacts:exclude`, `artifacts:expose_as`, `artifacts:name`, `artifacts:untracked`, `artifacts:when`, `artifacts:expire_in`, `artifacts:reports`, `artifacts:reports:codequality`, `artifacts:reports:junit`, `artifacts:reports:cobertura`, and `artifacts:reports:terraform`. In GitLab Enterprise Edition, these are available: `artifacts:reports:sast`, `artifacts:reports:dependency_scanning`, `artifacts:reports:container_scanning`, `artifacts:reports:dast`, `artifacts:reports:license_scanning`, `artifacts:reports:license_management` (removed in GitLab 13.0), `artifacts:reports:performance`, `artifacts:reports:load_performance`, and `artifacts:reports:metrics`. |
| `dependencies` | Restrict which artifacts are passed to a specific job by providing a list of jobs to fetch artifacts from. |
| `coverage` | Code coverage settings for a given job. |
| `retry` | When and how many times a job can be auto-retried in case of a failure. |
| `timeout` | Define a custom job-level timeout that takes precedence over the project-wide setting. |
| `parallel` | How many instances of a job should be run in parallel. |
| `trigger` | Defines a downstream pipeline trigger. |
| `include` | Allows this job to include external YAML files. Also available: `include:local`, `include:file`, `include:template`, and `include:remote`. |
| `extends` | Configuration entries that this job inherits from. |
| `pages` | Upload the result of a job to use with GitLab Pages. |
| `variables` | Define job variables on a job level. |
| `interruptible` | Defines if a job can be canceled when made redundant by a newer run. |
| `resource_group` | Limit job concurrency. |
| `release` | Instructs the Runner to generate a Release object. |
NOTE: **Note:**
Parameters `types` and `type` are deprecated.

## Global parameters

Some parameters must be defined at a global level, affecting all jobs in the pipeline.
### Global defaults

Some parameters can be set globally as the default for all jobs using the `default:` keyword. Default parameters can then be overridden by job-specific configuration.

The following job parameters can be defined inside a `default:` block:

- `image`
- `services`
- `before_script`
- `after_script`
- `tags`
- `cache`
- `artifacts`
- `retry`
- `timeout`
- `interruptible`
In the following example, the `ruby:2.5` image is set as the default for all jobs except the `rspec 2.6` job, which uses the `ruby:2.6` image:

```yaml
default:
  image: ruby:2.5

rspec:
  script: bundle exec rspec

rspec 2.6:
  image: ruby:2.6
  script: bundle exec rspec
```
### `inherit`

> Introduced in GitLab 12.9.

You can disable inheritance of globally defined defaults and variables with the `inherit:` parameter.

To enable or disable the inheritance of all `variables:` or `default:` parameters, use the following format:

- `default: true` or `default: false`
- `variables: true` or `variables: false`
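For instance, a minimal sketch of the boolean form (the job name and script are illustrative):

```yaml
job1:
  inherit:
    default: false    # do not inherit any global default parameters
    variables: false  # do not inherit any global variables
  script: echo "This job inherits nothing."
```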
To inherit only a subset of `default:` parameters or `variables:`, specify what you wish to inherit. Anything not listed will not be inherited. Use one of the following formats:

```yaml
inherit:
  default: [parameter1, parameter2]
  variables: [VARIABLE1, VARIABLE2]
```

Or:

```yaml
inherit:
  default:
    - parameter1
    - parameter2
  variables:
    - VARIABLE1
    - VARIABLE2
```
In the example below:

- `rubocop`:
  - will inherit: Nothing.
- `rspec`:
  - will inherit: the default `image` and the `WEBHOOK_URL` variable.
  - will **not** inherit: the default `before_script` and the `DOMAIN` variable.
- `capybara`:
  - will inherit: the default `before_script` and `image`.
  - will **not** inherit: the `DOMAIN` and `WEBHOOK_URL` variables.
- `karma`:
  - will inherit: the default `image` and `before_script`, and the `DOMAIN` variable.
  - will **not** inherit: the `WEBHOOK_URL` variable.
```yaml
default:
  image: 'ruby:2.4'
  before_script:
    - echo Hello World

variables:
  DOMAIN: example.com
  WEBHOOK_URL: https://my-webhook.example.com

rubocop:
  inherit:
    default: false
    variables: false
  script: bundle exec rubocop

rspec:
  inherit:
    default: [image]
    variables: [WEBHOOK_URL]
  script: bundle exec rspec

capybara:
  inherit:
    variables: false
  script: bundle exec capybara

karma:
  inherit:
    default: true
    variables: [DOMAIN]
  script: karma
```
### `stages`

`stages` is used to define stages that contain jobs and is defined globally for the pipeline.

The specification of `stages` allows for having flexible multi-stage pipelines. The ordering of elements in `stages` defines the ordering of jobs' execution:

- Jobs of the same stage are run in parallel.
- Jobs of the next stage are run after the jobs from the previous stage complete successfully.

Let's consider the following example, which defines 3 stages:

```yaml
stages:
  - build
  - test
  - deploy
```
1. First, all jobs of `build` are executed in parallel.
1. If all jobs of `build` succeed, the `test` jobs are executed in parallel.
1. If all jobs of `test` succeed, the `deploy` jobs are executed in parallel.
1. If all jobs of `deploy` succeed, the commit is marked as `passed`.
1. If any of the previous jobs fails, the commit is marked as `failed` and no jobs of further stages are executed.

There are also two edge cases worth mentioning:

1. If no `stages` are defined in `.gitlab-ci.yml`, then `build`, `test` and `deploy` are allowed to be used as a job's stage by default.
1. If a job does not specify a `stage`, the job is assigned the `test` stage.
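To make the second edge case concrete, here is a minimal sketch (the job name is illustrative): with no `stages:` defined, the default `build`, `test`, and `deploy` stages are available, and a job without a `stage:` lands in `test`:

```yaml
# No global `stages:` block, so build, test, and deploy exist by default.
run-tests:
  script: echo "This job runs in the test stage by default."
```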
### `workflow:rules`

> Introduced in GitLab 12.5

The top-level `workflow:` key applies to the entirety of a pipeline, and will determine whether or not a pipeline is created. It currently accepts a single `rules:` key that operates similarly to `rules:` defined within jobs, enabling dynamic configuration of the pipeline.

If you are new to GitLab CI/CD and `workflow: rules`, you may find the `workflow:rules` templates useful.

To define your own `workflow: rules`, the configuration options currently available are:

- `if`: Define a rule.
- `when`: May be set to `always` or `never` only. If not provided, the default value is `always`.
If a pipeline attempts to run but matches no rule, it's dropped and doesn't run.

For example, with the following configuration, pipelines run for all `push` events (changes to branches and new tags) as long as the branch or tag name doesn't end in `-wip`. Scheduled pipelines and merge request pipelines don't run, as there's no rule allowing them.

```yaml
workflow:
  rules:
    - if: $CI_COMMIT_REF_NAME =~ /-wip$/
      when: never
    - if: '$CI_PIPELINE_SOURCE == "push"'
```
This example has strict rules, and no other pipelines can run.

Alternatively, you can have loose rules by using only `when: never` rules, followed by a final `when: always` rule. This allows all types of pipelines, except for any that match the `when: never` rules:

```yaml
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "push"'
      when: never
    - when: always
```

This example never allows pipelines for schedules or `push` (branches and tags) pipelines, but does allow pipelines in all other cases, including merge request pipelines.

As with `rules` defined in jobs, be careful not to use a configuration that allows merge request pipelines and branch pipelines to run at the same time, or you could have duplicate pipelines.
Useful `workflow: rules` clauses:

| Clause | Details |
|:-------|:--------|
| `if: '$CI_PIPELINE_SOURCE == "merge_request_event"'` | Allow or block merge request pipelines. |
| `if: '$CI_PIPELINE_SOURCE == "push"'` | Allow or block both branch pipelines and tag pipelines. |
| `if: $CI_COMMIT_BEFORE_SHA == '0000000000000000000000000000000000000000'` | Allow or block pipeline creation when new branches are created or pushed with no commits. |
#### `workflow:rules` templates

> Introduced in GitLab 13.0.

We provide pre-made templates for use with your pipelines that set up `workflow: rules` for common scenarios. Using these templates makes things easier and prevents duplicate pipelines from running.

The `Branch-Pipelines` template makes your pipelines run for branches and tags.

Branch pipeline status is displayed within merge requests that use the branch as a source, but this pipeline type does not support any features offered by Merge Request Pipelines, like Pipelines for Merge Results or Merge Trains. Use this template if you are intentionally avoiding those features.

It is included as follows:

```yaml
include:
  - template: 'Workflows/Branch-Pipelines.gitlab-ci.yml'
```

The `MergeRequest-Pipelines` template makes your pipelines run for the default branch (usually `master`), tags, and all types of merge request pipelines. Use this template if you use any of the Pipelines for Merge Requests features, as mentioned above.

It is included as follows:

```yaml
include:
  - template: 'Workflows/MergeRequest-Pipelines.gitlab-ci.yml'
```
### `include`

> - Introduced in GitLab Premium 10.5.
> - Available for Starter, Premium and Ultimate since 10.6.
> - Moved to GitLab Core in 11.4.

Using the `include` keyword allows the inclusion of external YAML files. This helps to break down the CI/CD configuration into multiple files and increases readability for long configuration files. It's also possible to have template files stored in a central repository and projects include their configuration files. This helps avoid duplicated configuration, for example, global default variables for all projects.

`include` requires the external YAML file to have the extension `.yml` or `.yaml`, otherwise the external file won't be included.

`include` supports the following inclusion methods:

| Method | Description |
|:-------|:------------|
| `local` | Include a file from the local project repository. |
| `file` | Include a file from a different project repository. |
| `remote` | Include a file from a remote URL. Must be publicly accessible. |
| `template` | Include templates that are provided by GitLab. |

The `include` methods do not support variable expansion.

NOTE: **Note:**
`.gitlab-ci.yml` configuration included by all methods is evaluated at pipeline creation. The configuration is a snapshot in time and persisted in the database. Any changes to the referenced `.gitlab-ci.yml` configuration won't be reflected in GitLab until the next pipeline is created.

The files defined by `include` are:

- Deep merged with those in `.gitlab-ci.yml`.
- Always evaluated first and merged with the content of `.gitlab-ci.yml`, regardless of the position of the `include` keyword.

TIP: **Tip:**
Use merging to customize and override included CI/CD configurations with local definitions. Local definitions in `.gitlab-ci.yml` will override included definitions.

NOTE: **Note:**
Using YAML anchors across different YAML files sourced by `include` is not supported. You must only refer to anchors in the same file. Instead of using YAML anchors, you can use the `extends` keyword.
#### `include:local`

`include:local` includes a file from the same repository as `.gitlab-ci.yml`. It's referenced using full paths relative to the root directory (`/`).

You can only use files that are currently tracked by Git on the same branch your configuration file is on. In other words, when using `include:local`, make sure that both `.gitlab-ci.yml` and the local file are on the same branch.

All nested includes will be executed in the scope of the same project, so it's possible to use local, project, remote, or template includes.

NOTE: **Note:**
Including local files through Git submodule paths is not supported.
Example:

```yaml
include:
  - local: '/templates/.gitlab-ci-template.yml'
```

TIP: **Tip:**
Local includes can be used as a replacement for symbolic links, which are not followed.

This can be defined as a short local include:

```yaml
include: '.gitlab-ci-production.yml'
```

#### `include:file`

> Introduced in GitLab 11.7.

To include files from another private project under the same GitLab instance, use `include:file`. This file is referenced using full paths relative to the root directory (`/`). For example:

```yaml
include:
  - project: 'my-group/my-project'
    file: '/templates/.gitlab-ci-template.yml'
```

You can also specify a `ref`, with the default being the `HEAD` of the project:

```yaml
include:
  - project: 'my-group/my-project'
    ref: master
    file: '/templates/.gitlab-ci-template.yml'

  - project: 'my-group/my-project'
    ref: v1.0.0
    file: '/templates/.gitlab-ci-template.yml'

  - project: 'my-group/my-project'
    ref: 787123b47f14b552955ca2786bc9542ae66fee5b  # Git SHA
    file: '/templates/.gitlab-ci-template.yml'
```
All nested includes will be executed in the scope of the target project, so it's possible to use local (relative to target project), project, remote or template includes.
#### `include:remote`

`include:remote` can be used to include a file from a different location, using HTTP/HTTPS, referenced by the full URL. The remote file must be publicly accessible through a simple GET request, as authentication schemas in the remote URL are not supported. For example:

```yaml
include:
  - remote: 'https://gitlab.com/awesome-project/raw/master/.gitlab-ci-template.yml'
```

All nested includes will be executed without context, as a public user, so only another remote or public project, or a template, is allowed.
#### `include:template`

> Introduced in GitLab 11.7.

`include:template` can be used to include `.gitlab-ci.yml` templates that are shipped with GitLab.

For example:

```yaml
# File sourced from GitLab's template collection
include:
  - template: Auto-DevOps.gitlab-ci.yml
```

Multiple `include:template` files:

```yaml
include:
  - template: Android-Fastlane.gitlab-ci.yml
  - template: Auto-DevOps.gitlab-ci.yml
```
All nested includes will be executed only with the permission of the user, so it's possible to use project, remote or template includes.
#### Nested includes

> Introduced in GitLab 11.9.

Nested includes allow you to compose a set of includes.

A total of 100 includes is allowed, but duplicate includes are considered a configuration error.

Since GitLab 12.4, the time limit for resolving all files is 30 seconds.
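As a minimal sketch of nesting (the file name `/templates/base.yml` is hypothetical), `.gitlab-ci.yml` can include a local file that itself includes a template:

```yaml
# .gitlab-ci.yml
include:
  - local: '/templates/base.yml'
```

```yaml
# /templates/base.yml
include:
  - template: Auto-DevOps.gitlab-ci.yml
```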
#### Additional `includes` examples

There is a list of additional `includes` examples available.
## Parameter details

The following are detailed explanations for parameters used to configure CI/CD pipelines.

### `image`

Used to specify a Docker image to use for the job.

For:

- Simple definition examples, see Define `image` and `services` from `.gitlab-ci.yml`.
- Detailed usage information, refer to the Docker integration documentation.
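As a minimal sketch, `image` takes an image name, optionally with a tag (the job name and image tag here are illustrative):

```yaml
job:
  image: ruby:2.6          # run this job in a container based on the ruby:2.6 image
  script: ruby --version
```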
#### `image:name`

An extended Docker configuration option.

For more information, see Available settings for `image`.

#### `image:entrypoint`

An extended Docker configuration option.

For more information, see Available settings for `image`.
### `services`

Used to specify a service Docker image, linked to a base image specified in `image`.

For:

- Simple definition examples, see Define `image` and `services` from `.gitlab-ci.yml`.
- Detailed usage information, refer to the Docker integration documentation.
- Example services, see GitLab CI/CD Services.
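For instance, a minimal sketch of a job that links a PostgreSQL service container to the base image (the image tags and script are illustrative):

```yaml
job:
  image: ruby:2.6
  services:
    - postgres:11.7        # started alongside the job container and linked to it
  script: bundle exec rake db:migrate
```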
#### `services:name`

An extended Docker configuration option.

For more information, see Available settings for `services`.

#### `services:alias`

An extended Docker configuration option.

For more information, see Available settings for `services`.

#### `services:entrypoint`

An extended Docker configuration option.

For more information, see Available settings for `services`.

#### `services:command`

An extended Docker configuration option.

For more information, see Available settings for `services`.
### `script`

`script` is the only required keyword that a job needs. It's a shell script which is executed by the Runner. For example:

```yaml
job:
  script: "bundle exec rspec"
```

YAML anchors for scripts are available.

This parameter can also contain several commands using an array:

```yaml
job:
  script:
    - uname -a
    - bundle exec rspec
```
NOTE: **Note:**
Sometimes, `script` commands will need to be wrapped in single or double quotes. For example, commands that contain a colon (`:`) need to be wrapped in quotes so that the YAML parser knows to interpret the whole thing as a string rather than a "key: value" pair. Be careful when using special characters: `:`, `{`, `}`, `[`, `]`, `,`, `&`, `*`, `#`, `?`, `|`, `-`, `<`, `>`, `=`, `!`, `%`, `@`, `` ` ``.
If any of the script commands return an exit code different from zero, the job fails and further commands are not executed. This behavior can be avoided by storing the exit code in a variable:

```yaml
job:
  script:
    - false || exit_code=$?
    - if [ "${exit_code:-0}" -ne 0 ]; then echo "Previous command failed"; fi;  # defaults to 0 if the command succeeded
```
### `before_script` and `after_script`

> Introduced in GitLab 8.7 and requires GitLab Runner v1.2.

`before_script` is used to define a command that should be run before each job, including deploy jobs, but after the restoration of any artifacts. This must be an array.

Scripts specified in `before_script` are concatenated with any scripts specified in the main `script`, and executed together in a single shell.

`after_script` is used to define the command that will be run after each job, including failed ones. This must be an array.

Scripts specified in `after_script` are executed in a new shell, separate from any `before_script` or `script` scripts. As a result, they:

- Have a current working directory set back to the default.
- Have no access to changes done by scripts defined in `before_script` or `script`, including:
  - Command aliases and variables exported in `script` scripts.
  - Changes outside of the working tree (depending on the Runner executor), like software installed by a `before_script` or `script` script.
- Have a separate timeout, which is hard coded to 5 minutes. See the related issue for details.
- Don't affect the job's exit code. If the `script` section succeeds and the `after_script` times out or fails, the job will exit with code `0` (`Job Succeeded`).
It's possible to overwrite a globally defined `before_script` or `after_script` if you set it per-job:

```yaml
default:
  before_script:
    - global before script

job:
  before_script:
    - execute this instead of global before script
  script:
    - my command
  after_script:
    - execute this after my script
```

YAML anchors for `before_script` and `after_script` are available.
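As a minimal sketch of such an anchor (the anchor name and commands are illustrative), a shared `before_script` can be defined once and referenced in several jobs:

```yaml
.default-before-script: &default-before-script
  - echo "Preparing the environment"

job1:
  before_script:
    - *default-before-script   # reuse the anchored list of commands
  script: echo "Running job1"

job2:
  before_script:
    - *default-before-script
  script: echo "Running job2"
```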
#### Coloring script output

Script output can be colored using ANSI escape codes, or by running commands or programs that output ANSI escape codes.

For example, using Bash with color codes:

```yaml
job:
  script:
    - echo -e "\e[31mThis text is red,\e[0m but this text isn't\e[31m however this text is red again."
```

You can define the color codes in Shell variables, or even custom environment variables, which makes the commands easier to read and reusable.

For example, using the same example as above and variables defined in a `before_script`:

```yaml
job:
  before_script:
    - TXT_RED="\e[31m" && TXT_CLEAR="\e[0m"
  script:
    - echo -e "${TXT_RED}This text is red,${TXT_CLEAR} but this part isn't${TXT_RED} however this part is again."
    - echo "This text is not colored"
```

Or with PowerShell color codes:

```yaml
job:
  before_script:
    - $esc="$([char]27)"; $TXT_RED="$esc[31m"; $TXT_CLEAR="$esc[0m"
  script:
    - Write-Host $TXT_RED"This text is red,"$TXT_CLEAR" but this text isn't"$TXT_RED" however this text is red again."
    - Write-Host "This text is not colored"
```
#### Multiline commands

You can split long commands into multiline commands to improve readability using `|` (literal) and `>` (folded) YAML multiline block scalar indicators.

CAUTION: **Warning:**
If multiple commands are combined into one command string, only the last command's failure or success will be reported, incorrectly ignoring failures from earlier commands due to a bug. If the success of the job depends on the success or failure of these commands, you can run the commands as separate `script:` items, or add `exit 1` commands as appropriate to the command string where needed.

You can use the `|` (literal) YAML multiline block scalar indicator to write commands over multiple lines in the `script` section of a job description. Each line is treated as a separate command. Only the first command is repeated in the job log, but additional commands are still executed:

```yaml
job:
  script:
    - |
      echo "First command line."
      echo "Second command line."
      echo "Third command line."
```

The example above renders in the job log as:

```shell
$ echo "First command line." # collapsed multi-line command
First command line.
Second command line.
Third command line.
```

The `>` (folded) YAML multiline block scalar indicator treats empty lines between sections as the start of a new command:

```yaml
job:
  script:
    - >
      echo "First command line
      is split over two lines."

      echo "Second command line."
```

This behaves similarly to writing multiline commands without the `>` or `|` block scalar indicators:

```yaml
job:
  script:
    - echo "First command line
      is split over two lines."

      echo "Second command line."
```

Both examples above render in the job log as:

```shell
$ echo "First command line is split over two lines." # collapsed multi-line command
First command line is split over two lines.
Second command line.
```

When the `>` or `|` block scalar indicators are omitted, GitLab will form the command by concatenating non-empty lines, so make sure the lines can run when concatenated.

Shell here documents work with the `|` and `>` operators as well. The example below transliterates the lower case letters to upper case:

```yaml
job:
  script:
    - |
      tr a-z A-Z << END_TEXT
        one two three
        four five six
      END_TEXT
```

Results in:

```shell
$ tr a-z A-Z << END_TEXT # collapsed multi-line command
  ONE TWO THREE
  FOUR FIVE SIX
```
### `stage`

`stage` is defined per-job and relies on `stages`, which is defined globally. It allows jobs to be grouped into different stages, and jobs of the same `stage` are executed in parallel (subject to certain conditions). For example:
```yaml
stages:
  - build
  - test
  - deploy

job 0:
  stage: .pre
  script: make something useful before build stage

job 1:
  stage: build
  script: make build dependencies

job 2:
  stage: build
  script: make build artifacts

job 3:
  stage: test
  script: make test

job 4:
  stage: deploy
  script: make deploy

job 5:
  stage: .post
  script: make something useful at the end of pipeline
```
#### Using your own Runners

When using your own Runners, GitLab Runner runs only one job at a time by default (see the `concurrent` flag in Runner global settings for more information).

Jobs will run on your own Runners in parallel only if:

- They run on different Runners.
- The Runner's `concurrent` setting has been changed.
#### `.pre` and `.post`

> Introduced in GitLab 12.4.

The following stages are available to every pipeline:

- `.pre`, which is guaranteed to always be the first stage in a pipeline.
- `.post`, which is guaranteed to always be the last stage in a pipeline.

User-defined stages are executed after `.pre` and before `.post`.

The order of `.pre` and `.post` can't be changed, even if defined out of order in `.gitlab-ci.yml`. For example, the following are equivalent configurations:

- Configured in order:

  ```yaml
  stages:
    - .pre
    - a
    - b
    - .post
  ```

- Configured out of order:

  ```yaml
  stages:
    - a
    - .pre
    - b
    - .post
  ```

- Not explicitly configured:

  ```yaml
  stages:
    - a
    - b
  ```

NOTE: **Note:**
A pipeline won't be created if it only contains jobs in `.pre` or `.post` stages.
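For instance, under that rule, a sketch like the following (the job name is illustrative) defines a valid job but produces no pipeline, because the only job sits in `.pre`:

```yaml
prepare:
  stage: .pre
  script: echo "No other jobs exist, so no pipeline is created."
```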
### `extends`

> Introduced in GitLab 11.3.

`extends` defines entry names that a job that uses `extends` is going to inherit from.

It's an alternative to using YAML anchors and is a little more flexible and readable:

```yaml
.tests:
  script: rake test
  stage: test
  only:
    refs:
      - branches

rspec:
  extends: .tests
  script: rake rspec
  only:
    variables:
      - $RSPEC
```

In the example above, the `rspec` job inherits from the `.tests` template job. GitLab will perform a reverse deep merge based on the keys. GitLab will:

- Merge the `rspec` contents into `.tests` recursively.
- Not merge the values of the keys.

This results in the following `rspec` job:

```yaml
rspec:
  script: rake rspec
  stage: test
  only:
    refs:
      - branches
    variables:
      - $RSPEC
```

NOTE: **Note:**
Note that `script: rake test` has been overwritten by `script: rake rspec`. If you do want to include the `rake test`, see `before_script` and `after_script`.

`.tests` in this example is a hidden job, but it's possible to inherit from regular jobs as well.
`extends` supports multi-level inheritance, however it's not recommended to use more than three levels. The maximum nesting level that is supported is 10. The following example has two levels of inheritance:

```yaml
.tests:
  only:
    - pushes

.rspec:
  extends: .tests
  script: rake rspec

rspec 1:
  variables:
    RSPEC_SUITE: '1'
  extends: .rspec

rspec 2:
  variables:
    RSPEC_SUITE: '2'
  extends: .rspec

spinach:
  extends: .tests
  script: rake spinach
```

In GitLab 12.0 and later, it's also possible to use multiple parents for `extends`.
#### Merge details

`extends` is able to merge hashes but not arrays. The algorithm used for merge is "closest scope wins", so keys from the last member will always override anything defined on other levels. For example:

```yaml
.only-important:
  variables:
    URL: "http://my-url.internal"
    IMPORTANT_VAR: "the details"
  only:
    - master
    - stable
  tags:
    - production
  script:
    - echo "Hello world!"

.in-docker:
  variables:
    URL: "http://docker-url.internal"
  tags:
    - docker
  image: alpine

rspec:
  variables:
    GITLAB: "is-awesome"
  extends:
    - .only-important
    - .in-docker
  script:
    - rake rspec
```

This results in the following `rspec` job:

```yaml
rspec:
  variables:
    URL: "http://docker-url.internal"
    IMPORTANT_VAR: "the details"
    GITLAB: "is-awesome"
  only:
    - master
    - stable
  tags:
    - docker
  image: alpine
  script:
    - rake rspec
```

Note that in the example above:

- `variables` sections have been merged, but `URL: "http://my-url.internal"` has been overwritten by `URL: "http://docker-url.internal"`.
- `tags: ['production']` has been overwritten by `tags: ['docker']`.
- `script` has not been merged, but rather `script: ['echo "Hello world!"']` has been overwritten by `script: ['rake rspec']`. Arrays can be merged using YAML anchors, as shown in the sketch below.
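A minimal sketch of merging arrays with a YAML anchor (the anchor and script names are illustrative), since `extends` alone can only overwrite them:

```yaml
.default-scripts: &default-scripts
  - ./default-script1.sh
  - ./default-script2.sh

job1:
  script:
    - *default-scripts       # expands to the two anchored commands
    - ./job-specific-script.sh
```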
#### Using `extends` and `include` together

`extends` works across configuration files combined with `include`.

For example, if you have a local `included.yml` file:

```yaml
.template:
  script:
    - echo Hello!
```

Then, in `.gitlab-ci.yml` you can use it like this:

```yaml
include: included.yml

useTemplate:
  image: alpine
  extends: .template
```

This will run a job called `useTemplate` that runs `echo Hello!` as defined in the `.template` job, and uses the `alpine` Docker image as defined in the local job.
### `rules`

> Introduced in GitLab 12.3.

The `rules` keyword can be used to include or exclude jobs in pipelines.

Rules are evaluated in order until the first match. When matched, the job is either included in or excluded from the pipeline, depending on the configuration. If included, the job also has certain attributes added to it.

CAUTION: **Caution:**
`rules` can't be used in combination with `only/except` because it is a replacement for that functionality. If you attempt to do this, the linter returns a `key may not be used with rules` error.

#### Rules attributes

The job attributes allowed by `rules` are:

- `when`: If not defined, defaults to `when: on_success`.
  - If used as `when: delayed`, `start_in` is also required.
- `allow_failure`: If not defined, defaults to `allow_failure: false`.

If a rule evaluates to true, and `when` has any value except `never`, the job is included in the pipeline.

For example:

```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: delayed
      start_in: '3 hours'
      allow_failure: true
```
Additional job configuration may be added to rules in the future. If something useful is not available, please open an issue.
#### Rules clauses

Available rule clauses are:

| Clause | Description |
|:-------|:------------|
| `if` | Add or exclude jobs from a pipeline by evaluating an `if` statement. Similar to `only:variables`. |
| `changes` | Add or exclude jobs from a pipeline based on what files are changed. Same as `only:changes`. |
| `exists` | Add or exclude jobs from a pipeline based on the presence of specific files. |

Rules are evaluated in order until a match is found. If a match is found, the attributes are checked to see if the job should be added to the pipeline. If no attributes are defined, the defaults are:

- `when: on_success`
- `allow_failure: false`

The job is added to the pipeline:

- If a rule matches and has `when: on_success`, `when: delayed` or `when: always`.
- If no rules match, but the last clause is `when: on_success`, `when: delayed` or `when: always` (with no rule).

The job is not added to the pipeline:

- If no rules match, and there is no standalone `when: on_success`, `when: delayed` or `when: always`.
- If a rule matches, and has `when: never` as the attribute.
For example, using `if` clauses to strictly limit when jobs run:

```yaml
job:
  script: "echo Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual
      allow_failure: true
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```

In this example:

- If the pipeline is for a merge request, the first rule matches, and the job is added to the merge request pipeline with attributes of:
  - `when: manual` (manual job)
  - `allow_failure: true` (allows the pipeline to continue running even if the manual job is not run)
- If the pipeline is not for a merge request, the first rule doesn't match, and the second rule is evaluated.
- If the pipeline is a scheduled pipeline, the second rule matches, and the job is added to the scheduled pipeline. Since no attributes were defined, it is added with:
  - `when: on_success` (default)
  - `allow_failure: false` (default)
- In all other cases, no rules match, so the job is not added to any other pipeline.

Alternatively, you can define a set of rules to exclude jobs in a few cases, but run them in all other cases:

```yaml
job:
  script: "echo Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never
    - when: on_success
```
- If the pipeline is for a merge request, the job is not added to the pipeline.
- If the pipeline is a scheduled pipeline, the job is not added to the pipeline.
- In all other cases, the job is added to the pipeline, with `when: on_success`.

CAUTION: **Caution:**
If you use `when: on_success`, `always`, or `delayed` as the final rule, two simultaneous pipelines may start. Both push pipelines and merge request pipelines can be triggered by the same event (a push to the source branch for an open merge request). See the important differences between `rules` and `only`/`except` for more details.
#### Differences between `rules` and `only`/`except`

Jobs defined with `only/except` do not trigger merge request pipelines by default. You must explicitly add `only: merge_requests`.

Jobs defined with `rules` can trigger all types of pipelines. You do not have to explicitly configure each type.

For example:

```yaml
job:
  script: "echo This creates double pipelines!"
  rules:
    - if: '$CUSTOM_VARIABLE == "false"'
      when: never
    - when: always
```

This job does not run when `$CUSTOM_VARIABLE` is false, but it does run in all other pipelines, including both push (branch) and merge request pipelines. With this configuration, every push to an open merge request's source branch causes duplicated pipelines. Explicitly allowing both push and merge request pipelines in the same job could have the same effect.

We recommend using `workflow: rules` to limit which types of pipelines are permitted. Allowing only merge request pipelines, or only branch pipelines, eliminates duplicated pipelines. Alternatively, you can rewrite the rules to be stricter, or avoid using a final `when` (`always`, `on_success` or `delayed`).

Also, we don't recommend mixing `only/except` jobs with `rules` jobs in the same pipeline. It may not cause YAML errors, but debugging the exact execution behavior can be complex due to the different default behaviors of `only/except` and `rules`.
#### `rules:if`

`rules:if` clauses determine whether or not jobs are added to a pipeline by evaluating a simple `if` statement. If the `if` statement is true, the job is either included in or excluded from a pipeline. In plain English, `if` rules can be interpreted as one of:

- "If this rule evaluates to true, add the job" (default).
- "If this rule evaluates to true, do not add the job" (by adding `when: never`).

`rules:if` differs slightly from `only:variables` by accepting only a single expression string per rule, rather than an array of them. Any set of expressions to be evaluated can be conjoined into a single expression by using `&&` or `||`, and the variable matching syntax can be used.

`if:` clauses are evaluated based on the values of predefined environment variables or custom environment variables.
For example:

```yaml
job:
  script: "echo Hello, Rules!"
  rules:
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/ && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
      when: always
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/'
      when: manual
      allow_failure: true
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME'  # Checking for the presence of a variable is possible
```

Some details regarding the logic that determines the `when` for the job:

- If none of the provided rules match, the job is set to `when: never` and is not included in the pipeline.
- A rule without any conditional clause, such as a `when` or `allow_failure` rule without `if` or `changes`, always matches, and is always used if reached.
- If a rule matches and has no `when` defined, the rule uses the `when` defined for the job, which defaults to `on_success` if not defined.
- You can define `when` once per rule, or once at the job-level, which applies to all rules. You can't mix `when` at the job-level with `when` in rules.

For behavior similar to the `only`/`except` keywords, you can check the value of the `$CI_PIPELINE_SOURCE` variable:

| Value | Description |
|:------|:------------|
| `push` | For pipelines triggered by a `git push` event, including for branches and tags. |
| `web` | For pipelines created by using the **Run pipeline** button in the GitLab UI, from the project's **CI/CD > Pipelines** section. |
| `trigger` | For pipelines created by using a trigger token. |
| `schedule` | For scheduled pipelines. |
| `api` | For pipelines triggered by the pipelines API. |
| `external` | When using CI services other than GitLab. |
| `pipeline` | For multi-project pipelines created by using the API with `CI_JOB_TOKEN`. |
| `chat` | For pipelines created by using a GitLab ChatOps command. |
| `webide` | For pipelines created by using the WebIDE. |
| `merge_request_event` | For pipelines created when a merge request is created or updated. Required to enable merge request pipelines, merged results pipelines, and merge trains. |
| `external_pull_request_event` | When an external pull request on GitHub is created or updated. See Pipelines for external pull requests. |
| `parent_pipeline` | For pipelines triggered by a parent/child pipeline with `rules`. Use this in the child pipeline configuration so that it can be triggered by the parent pipeline. |

For example:

```yaml
job:
  script: "echo Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: manual
      allow_failure: true
    - if: '$CI_PIPELINE_SOURCE == "push"'
```

This example runs the job as a manual job in scheduled pipelines or in push pipelines (to branches or tags), with `when: on_success` (default). It does not add the job to any other pipeline type.

Another example:

```yaml
job:
  script: "echo Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```

This example runs the job as a `when: on_success` job in merge request pipelines and scheduled pipelines. It does not run in any other pipeline type.

Other commonly used variables for `if` clauses:

- `if: $CI_COMMIT_TAG`: If changes are pushed for a tag.
- `if: $CI_COMMIT_BRANCH`: If changes are pushed to any branch.
- `if: '$CI_COMMIT_BRANCH == "master"'`: If changes are pushed to `master`.
- `if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'`: If changes are pushed to the default branch (usually `master`). Useful if reusing the same configuration in multiple projects with potentially different default branches.
- `if: '$CI_COMMIT_BRANCH =~ /regex-expression/'`: If the commit branch matches a regular expression.
- `if: '$CUSTOM_VARIABLE !~ /regex-expression/'`: If the custom variable `CUSTOM_VARIABLE` does **not** match a regular expression.
- `if: '$CUSTOM_VARIABLE == "value1"'`: If the custom variable `CUSTOM_VARIABLE` is exactly `value1`.
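For instance, a minimal sketch combining two of these conditions (the job name and script are illustrative): run a job on the default branch or on any tag, with the default `when: on_success`:

```yaml
release-build:
  script: echo "Building a release"
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
    - if: '$CI_COMMIT_TAG'
```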
#### `rules:changes`

To determine if jobs should be added to a pipeline, `rules: changes` clauses check the files changed by Git push events.

`rules: changes` works exactly the same way as `only: changes` and `except: changes`, accepting an array of paths. Similarly, it always returns true if there is no Git push event. It should only be used for branch pipelines or merge request pipelines.

For example:

```yaml
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - changes:
        - Dockerfile
      when: manual
      allow_failure: true
```

In this example:

- `workflow: rules` allows only pipelines for merge requests for all jobs.
- If `Dockerfile` has changed, add the job to the pipeline as a manual job, and allow the pipeline to continue running even if the job is not triggered (`allow_failure: true`).
- If `Dockerfile` has not changed, do not add the job to any pipeline (same as `when: never`).
#### `rules:exists`

> Introduced in GitLab 12.4.

`exists` accepts an array of paths and will match if any of these paths exist as files in the repository.

For example:

```yaml
job:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - exists:
        - Dockerfile
```

You can also use glob patterns to match multiple files in any directory within the repository.

For example:

```yaml
job:
  script: bundle exec rspec
  rules:
    - exists:
        - spec/**.rb
```

NOTE: **Note:**
For performance reasons, using `exists` with patterns is limited to 10000 checks. After the 10000th check, rules with patterned globs will always match.
#### `rules:allow_failure`

> Introduced in GitLab 12.8.

You can use `allow_failure: true` within `rules:` to allow a job to fail, or a manual job to wait for action, without stopping the pipeline itself. All jobs using `rules:` default to `allow_failure: false` if `allow_failure:` is not defined.

The rule-level `rules:allow_failure` option overrides the job-level `allow_failure` option, and is only applied when the job is triggered by the particular rule.

```yaml
job:
  script: "echo Hello, Rules!"
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
      when: manual
      allow_failure: true
```

In this example, if the first rule matches, then the job will have `when: manual` and `allow_failure: true`.
#### Complex rule clauses

To conjoin `if`, `changes`, and `exists` clauses with an AND, use them in the same rule.

In the following example:

- We run the job manually if `Dockerfile` or any file in `docker/scripts/` has changed AND `$VAR == "string value"`.
- Otherwise, the job won't be included in the pipeline.

```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: '$VAR == "string value"'
      changes:  # Will include the job and set to when:manual if any of the following paths match a modified file.
        - Dockerfile
        - docker/scripts/*
      when: manual
      # - when: never would be redundant here, it is implied any time rules are listed.
```

Keywords such as `branches` or `refs` that are currently available for `only`/`except` are not yet available in `rules` as they are being individually considered for their usage and behavior in this context. Future keyword improvements are being discussed in our epic for improving `rules`, where anyone can add suggestions or requests.
### `only`/`except` (basic)

NOTE: **Note:**
The `rules` syntax is an improved, more powerful solution for defining when jobs should run or not. Consider using `rules` instead of `only/except` to get the most out of your pipelines.

`only` and `except` are two parameters that set a job policy to limit when jobs are created:

- `only` defines the names of branches and tags for which the job will run.
- `except` defines the names of branches and tags for which the job will **not** run.

There are a few rules that apply to the usage of job policy:

- `only` and `except` are inclusive. If both `only` and `except` are defined in a job specification, the ref is filtered by `only` and `except`.
- `only` and `except` allow the use of regular expressions (supported regexp syntax).
- `only` and `except` allow specifying a repository path to filter jobs for forks.
In addition, `only` and `except` allow the use of special keywords:

| Value | Description |
|:------|:------------|
| `branches` | When the Git reference for a pipeline is a branch. |
| `tags` | When the Git reference for a pipeline is a tag. |
| `api` | For pipelines triggered by the pipelines API. |
| `external` | When using CI services other than GitLab. |
| `pipelines` | For multi-project pipelines created by using the API with `CI_JOB_TOKEN`. |
| `pushes` | For pipelines triggered by a `git push` event, including for branches and tags. |
| `schedules` | For scheduled pipelines. |
| `triggers` | For pipelines created by using a trigger token. |
| `web` | For pipelines created by using the **Run pipeline** button in the GitLab UI, from the project's **CI/CD > Pipelines** section. |
| `merge_requests` | For pipelines created when a merge request is created or updated. Enables merge request pipelines, merged results pipelines, and merge trains. |
| `external_pull_requests` | When an external pull request on GitHub is created or updated (see Pipelines for external pull requests). |
| `chat` | For pipelines created by using a GitLab ChatOps command. |

In the example below, `job` will run only for refs that start with `issue-`, whereas all branches will be skipped:

```yaml
job:
  # use regexp
  only:
    - /^issue-.*$/
  # use special keyword
  except:
    - branches
```
Pattern matching is case-sensitive by default. Use the `i` flag modifier, like `/pattern/i`, to make a pattern case-insensitive:

```yaml
job:
  # use regexp
  only:
    - /^issue-.*$/i
  # use special keyword
  except:
    - branches
```

In this example, `job` will run only for refs that are tagged, or if a build is explicitly requested via an API trigger or a Pipeline Schedule:

```yaml
job:
  # use special keywords
  only:
    - tags
    - triggers
    - schedules
```
The repository path can be used to have jobs executed only for the parent repository and not forks:

```yaml
job:
  only:
    - branches@gitlab-org/gitlab
  except:
    - master@gitlab-org/gitlab
    - /^release/.*$/@gitlab-org/gitlab
```

The above example will run `job` for all branches on `gitlab-org/gitlab`, except `master` and those with names prefixed with `release/`.

If a job does not have an `only` rule, `only: ['branches', 'tags']` is set by default. If it does not have an `except` rule, it's empty.

For example,

```yaml
job:
  script: echo 'test'
```

is translated to:

```yaml
job:
  script: echo 'test'
  only: ['branches', 'tags']
```
#### Regular expressions

Because `@` is used to denote the beginning of a ref's repository path, matching a ref name containing the `@` character in a regular expression requires the use of the hex character code match `\x40`.

Only the tag or branch name can be matched by a regular expression. The repository path, if given, is always matched literally.

To match the tag or branch name with a regular expression, the entire ref name part of the pattern must be a regular expression surrounded by `/` (with regular expression flags appended after the closing `/`). So `issue-/.*/` won't work to match all tag names or branch names that begin with `issue-`.

TIP: **Tip:**
Use anchors `^` and `$` to avoid the regular expression matching only a substring of the tag name or branch name. For example, `/^issue-.*$/` is equivalent to `/^issue-/`, while just `/issue/` would also match a branch called `severe-issues`.
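Putting these points together, a minimal sketch (the patterns are illustrative): the whole ref part is a `/`-delimited regular expression, and `\x40` stands in for a literal `@` in a ref name:

```yaml
job:
  only:
    - /^issue-.*$/             # anchored: refs that begin with issue-
    - /^release\x40special$/   # matches the ref name "release@special"
```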
#### Supported `only`/`except` regexp syntax

CAUTION: **Warning:**
This is a breaking change that was introduced with GitLab 11.9.4.

In GitLab 11.9.4, GitLab began internally converting the regexp used in `only` and `except` parameters to RE2. This means that only a subset of features provided by Ruby Regexp is supported. RE2 limits the set of features provided due to computational complexity, which means some features, such as negative lookaheads, became unavailable in GitLab 11.9.4.

For GitLab versions from 11.9.7 up to GitLab 12.0, GitLab provides a feature flag that administrators can enable to allow users to use unsafe regexp syntax. This brings compatibility with the previously allowed syntax and allows users to gracefully migrate to the new syntax.

```ruby
Feature.enable(:allow_unsafe_ruby_regexp)
```
### `only`/`except` (advanced)

CAUTION: **Warning:**
This is an alpha feature, and is subject to change at any time without prior notice!

GitLab supports both simple and complex strategies, so it's possible to use an array and a hash configuration scheme.

Four keys are available:

- `refs`
- `variables`
- `changes`
- `kubernetes`

If you use multiple keys under `only` or `except`, the keys will be evaluated as a single conjoined expression. That is:

- `only:` means "include this job if all of the conditions match".
- `except:` means "exclude this job if any of the conditions match".

With `only`, individual keys are logically joined by an AND:

> (any of refs) AND (any of variables) AND (any of changes) AND (if Kubernetes is active)

In the example below, the `test` job will `only` be created when **all** of the following are true:

- The pipeline has been scheduled **or** runs for `master`.
- The `variables` keyword matches.
- The `kubernetes` service is active on the project.

```yaml
test:
  script: npm run test
  only:
    refs:
      - master
      - schedules
    variables:
      - $CI_COMMIT_MESSAGE =~ /run-end-to-end-tests/
    kubernetes: active
```
`except` is implemented as a negation of this complete expression:

> NOT((any of refs) AND (any of variables) AND (any of changes) AND (if Kubernetes is active))

This means the keys are treated as if joined by an OR. This relationship could be described as:

> (any of refs) OR (any of variables) OR (any of changes) OR (if Kubernetes is active)

In the example below, the `test` job will **not** be created when **any** of the following are true:

- The pipeline runs for `master`.
- There are changes to the `README.md` file in the root directory of the repository.

```yaml
test:
  script: npm run test
  except:
    refs:
      - master
    changes:
      - "README.md"
```
#### `only:refs`/`except:refs`

> `refs` policy introduced in GitLab 10.0.

The `refs` strategy can take the same values as the simplified only/except configuration.

In the example below, the `deploy` job is going to be created only when the pipeline has been scheduled or runs for the `master` branch:

```yaml
deploy:
  only:
    refs:
      - master
      - schedules
```

#### `only:kubernetes`/`except:kubernetes`

> `kubernetes` policy introduced in GitLab 10.0.

The `kubernetes` strategy accepts only the `active` keyword.

In the example below, the `deploy` job is going to be created only when the Kubernetes service is active in the project:

```yaml
deploy:
  only:
    kubernetes: active
```

#### `only:variables`/`except:variables`

> `variables` policy introduced in GitLab 10.7.

The `variables` keyword is used to define variables expressions. In other words, you can use predefined variables / project / group or environment-scoped variables to define an expression that GitLab is going to evaluate in order to decide whether a job should be created or not.

Examples of using variables expressions:

```yaml
deploy:
  script: cap staging deploy
  only:
    refs:
      - branches
    variables:
      - $RELEASE == "staging"
      - $STAGING
```

Another use case is excluding jobs depending on a commit message:

```yaml
end-to-end:
  script: rake test:end-to-end
  except:
    variables:
      - $CI_COMMIT_MESSAGE =~ /skip-end-to-end-tests/
```

Learn more about variables expressions.
#### `only:changes`/`except:changes`

> `changes` policy introduced in GitLab 11.4.

Using the `changes` keyword with `only` or `except` makes it possible to define if a job should be created based on files modified by a Git push event.

This means the `only:changes` policy is useful for pipelines where:

- `$CI_PIPELINE_SOURCE == 'push'`
- `$CI_PIPELINE_SOURCE == 'merge_request_event'`
- `$CI_PIPELINE_SOURCE == 'external_pull_request_event'`

If there is no Git push event, such as for pipelines with sources other than the three above, `changes` can't determine if a given file is new or old, and will always return true.

A basic example of using `only: changes`:

```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  only:
    changes:
      - Dockerfile
      - docker/scripts/*
      - dockerfiles/**/*
      - more_scripts/*.{rb,py,sh}
```

In the scenario above, when pushing commits to an existing branch in GitLab, it creates and triggers the `docker build` job, provided that one of the commits contains changes to any of the following:

- The `Dockerfile` file.
- Any of the files inside the `docker/scripts/` directory.
- Any of the files and subdirectories inside the `dockerfiles` directory.
- Any of the files with `rb`, `py`, `sh` extensions inside the `more_scripts` directory.
CAUTION: **Warning:**
If using `only:changes` with the **only allow merge requests to be merged if the pipeline succeeds** setting, undesired behavior could result if you don't also use `only:merge_requests`.
You can also use glob patterns to match multiple files in either the root directory of the repository, or in any directory within the repository, but they must be wrapped in double quotes or GitLab will fail to parse the `.gitlab-ci.yml`. For example:

```yaml
test:
  script: npm run test
  only:
    changes:
      - "*.json"
      - "**/*.sql"
```

The following example will skip the `build` job if a change is detected in any file in the root directory of the repository with a `.md` extension:

```yaml
build:
  script: npm run build
  except:
    changes:
      - "*.md"
```

CAUTION: **Warning:**
There are some points to be aware of when using this feature with new branches or tags without pipelines for merge requests.

CAUTION: **Warning:**
There are some points to be aware of when using this feature with scheduled pipelines.
##### Using `only:changes` with pipelines for merge requests

With pipelines for merge requests, it's possible to define a job to be created based on files modified in a merge request.

In order to deduce the correct base SHA of the source branch, we recommend combining this keyword with `only: [merge_requests]`. This way, file differences are correctly calculated from any further commits, and all changes in the merge requests are properly tested in pipelines.

For example:

```yaml
docker build service one:
  script: docker build -t my-service-one-image:$CI_COMMIT_REF_SLUG .
  only:
    refs:
      - merge_requests
    changes:
      - Dockerfile
      - service-one/**/*
```
In the scenario above, if a merge request is created or updated that changes either files in the `service-one` directory or the `Dockerfile`, GitLab creates and triggers the `docker build service one` job.

Note that if pipelines for merge requests are combined with `only: [changes]`, but `only: [merge_requests]` is omitted, there could be unwanted behavior.

For example:

```yaml
docker build service one:
  script: docker build -t my-service-one-image:$CI_COMMIT_REF_SLUG .
  only:
    changes:
      - Dockerfile
      - service-one/**/*
```

In the example above, a pipeline could fail due to changes to a file in `service-one/**/*`. A later commit could then be pushed that does not include any changes to this file, but includes changes to the `Dockerfile`, and this pipeline could pass because it's only testing the changes to the `Dockerfile`. GitLab checks the most recent pipeline that passed, and will show the merge request as mergeable, despite the earlier failed pipeline caused by a change that was not yet corrected.

With this configuration, care must be taken to check that the most recent pipeline properly corrected any failures from previous pipelines.
##### Using `only:changes` without pipelines for merge requests

Without pipelines for merge requests, pipelines run on branches or tags that don't have an explicit association with a merge request. In this case, a previous SHA is used to calculate the diff, which is equivalent to `git diff HEAD~`. This could result in some unexpected behavior, including:

- When pushing a new branch or a new tag to GitLab, the policy always evaluates to true.
- When pushing a new commit, the changed files are calculated using the previous commit as the base SHA.

##### Using `only:changes` with scheduled pipelines

`only:changes` always evaluates as "true" in scheduled pipelines. All files are considered to have "changed" when a scheduled pipeline runs.
### `needs`

> - Introduced in GitLab 12.2.
> - In GitLab 12.3, the maximum number of jobs in the `needs` array was raised from five to 50.
> - Introduced in GitLab 12.8, `needs: []` lets jobs start immediately.

The `needs:` keyword enables executing jobs out-of-order, allowing you to implement a directed acyclic graph in your `.gitlab-ci.yml`.

This lets you run some jobs without waiting for other ones, disregarding stage ordering, so you can have multiple stages running concurrently.
Let's consider the following example:

```yaml
linux:build:
  stage: build

mac:build:
  stage: build

lint:
  stage: test
  needs: []

linux:rspec:
  stage: test
  needs: ["linux:build"]

linux:rubocop:
  stage: test
  needs: ["linux:build"]

mac:rspec:
  stage: test
  needs: ["mac:build"]

mac:rubocop:
  stage: test
  needs: ["mac:build"]

production:
  stage: deploy
```

This example creates four paths of execution:

- Linter: the `lint` job will run immediately without waiting for the `build` stage to complete because it has no needs (`needs: []`).
- Linux path: the `linux:rspec` and `linux:rubocop` jobs will run as soon as the `linux:build` job finishes, without waiting for `mac:build` to finish.
- macOS path: the `mac:rspec` and `mac:rubocop` jobs will run as soon as the `mac:build` job finishes, without waiting for `linux:build` to finish.
- The `production` job will be executed as soon as all previous jobs finish; in this case: `linux:build`, `linux:rspec`, `linux:rubocop`, `mac:build`, `mac:rspec`, `mac:rubocop`.
Requirements and limitations
- If `needs:` is set to point to a job that is not instantiated because of `only/except` rules or otherwise does not exist, the pipeline is created with a YAML error.
- The maximum number of jobs that a single job can need in the `needs:` array is limited:
  - For GitLab.com, the limit is ten. For more information, see our infrastructure issue.
  - For self-managed instances, the limit is:
    - 10, if the `ci_dag_limit_needs` feature flag is enabled (default).
    - 50, if the `ci_dag_limit_needs` feature flag is disabled.
- If `needs:` refers to a job that is marked as `parallel:`, the current job will depend on all parallel jobs created.
- `needs:` is similar to `dependencies:` in that it must use jobs from prior stages, meaning it's impossible to create circular dependencies. Depending on jobs in the current stage is not possible either, but support is planned.
- Related to the above, stages must be explicitly defined for all jobs that have the keyword `needs:` or are referred to by one.
Changing the needs:
job limit
The maximum number of jobs that can be defined within needs:
defaults to 10, but
can be changed to 50 via a feature flag. To change the limit to 50,
start a Rails console session
and run:
Feature::disable(:ci_dag_limit_needs)
To set it back to 10, run the opposite command:
Feature::enable(:ci_dag_limit_needs)
Artifact downloads with needs
Introduced in GitLab v12.6.
When using needs
, artifact downloads are controlled with artifacts: true
(default) or artifacts: false
.
Since GitLab 12.6, you can't combine the dependencies
keyword
with needs
to control artifact downloads in jobs. dependencies
is still valid
in jobs that do not use needs
.
In the example below, the rspec
job will download the build_job
artifacts, while the
rubocop
job won't:
build_job:
stage: build
artifacts:
paths:
- binaries/
rspec:
stage: test
needs:
- job: build_job
artifacts: true
rubocop:
stage: test
needs:
- job: build_job
artifacts: false
Additionally, in the three syntax examples below, the rspec
job will download the artifacts
from all three build_jobs
, as artifacts
is true for build_job_1
, and will
default to true for both build_job_2
and build_job_3
.
rspec:
needs:
- job: build_job_1
artifacts: true
- job: build_job_2
- build_job_3
Cross project artifact downloads with needs
(PREMIUM)
Introduced in GitLab v12.7.
needs
can be used to download artifacts from up to five jobs in pipelines on
other refs in the same project,
or pipelines in different projects:
build_job:
stage: build
script:
- ls -lhR
needs:
- project: group/project-name
job: build-1
ref: master
artifacts: true
build_job
will download the artifacts from the latest successful build-1
job
on the master
branch in the group/project-name
project.
Artifact downloads between pipelines in the same project
needs
can be used to download artifacts from different pipelines in the current project
by setting the project
keyword as the current project's name, and specifying a ref.
In the example below, build_job
will download the artifacts for the latest successful
build-1
job with the other-ref
ref:
build_job:
stage: build
script:
- ls -lhR
needs:
- project: group/same-project-name
job: build-1
ref: other-ref
artifacts: true
NOTE: Note:
Downloading artifacts from jobs that are run in parallel:
is not supported.
tags
tags
is used to select specific Runners from the list of all Runners that are
allowed to run this project.
During the registration of a Runner, you can specify the Runner's tags, for
example ruby
, postgres
, development
.
tags
allow you to run jobs with Runners that have the specified tags
assigned to them:
job:
tags:
- ruby
- postgres
The specification above will make sure that `job`
is built by a Runner that
has both ruby
AND postgres
tags defined.
Tags are also a great way to run different jobs on different platforms, for
example, given an OS X Runner with tag osx
and Windows Runner with tag
windows
, the following jobs run on respective platforms:
windows job:
  stage: build
tags:
- windows
script:
- echo Hello, %USERNAME%!
osx job:
  stage: build
tags:
- osx
script:
- echo "Hello, $USER!"
allow_failure
allow_failure
allows a job to fail without impacting the rest of the CI
suite.
The default value is `false`, except for manual jobs using the `when: manual` syntax, which default to `allow_failure: true`. However, when jobs are defined with the `rules:` keyword, all jobs default to `allow_failure: false`, including `when: manual` jobs.
When enabled and the job fails, the job will show an orange warning in the UI. However, the logical flow of the pipeline considers the job a success, so the pipeline is not blocked.
Assuming all other jobs are successful, the job's stage and its pipeline will show the same orange warning. However, the associated commit will be marked "passed", without warnings.
In the example below, job1
and job2
will run in parallel, but if job1
fails, it won't stop the next stage from running, since it's marked with
allow_failure: true
:
job1:
stage: test
script:
- execute_script_that_will_fail
allow_failure: true
job2:
stage: test
script:
- execute_script_that_will_succeed
job3:
stage: deploy
script:
- deploy_to_staging
when
when
is used to implement jobs that are run in case of failure or despite the
failure.
when
can be set to one of the following values:
- `on_success` - execute job only when all jobs from prior stages succeed (or are considered succeeding because they are marked `allow_failure`). This is the default.
- `on_failure` - execute job only when at least one job from prior stages fails.
- `always` - execute job regardless of the status of jobs from prior stages.
- `manual` - execute job manually (added in GitLab 8.10). Read about manual actions below.
- `delayed` - execute job after a certain period (added in GitLab 11.4). Read about delayed actions below.
For example:
stages:
- build
- cleanup_build
- test
- deploy
- cleanup
build_job:
stage: build
script:
- make build
cleanup_build_job:
stage: cleanup_build
script:
- cleanup build when failed
when: on_failure
test_job:
stage: test
script:
- make test
deploy_job:
stage: deploy
script:
- make deploy
when: manual
cleanup_job:
stage: cleanup
script:
- cleanup after jobs
when: always
The above script will:
- Execute `cleanup_build_job` only when `build_job` fails.
- Always execute `cleanup_job` as the last step in the pipeline, regardless of success or failure.
- Allow you to manually execute `deploy_job` from GitLab's UI.
when:manual
- Introduced in GitLab 8.10.
- Blocking manual actions were introduced in GitLab 9.0.
- Protected actions were introduced in GitLab 9.2.
Manual actions are a special type of job that are not executed automatically, they need to be explicitly started by a user. An example usage of manual actions would be a deployment to a production environment. Manual actions can be started from the pipeline, job, environment, and deployment views. Read more at the environments documentation.
Manual actions can be either optional or blocking. Blocking manual actions will block the execution of the pipeline at the stage this action is defined in. It's possible to resume execution of the pipeline when someone executes a blocking manual action by clicking a play button.
When a pipeline is blocked, it won't be merged if Merge When Pipeline Succeeds
is set. Blocked pipelines also have a special status, called manual.
When the when:manual
syntax is used, manual actions are non-blocking by
default. If you want to make a manual action blocking, it's necessary to add
allow_failure: false
to the job's definition in .gitlab-ci.yml
.
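For example, a minimal sketch of a blocking manual deployment job (the job name and script here are illustrative):

```yaml
deploy_production:
  stage: deploy
  script:
    - ./deploy-production.sh   # hypothetical deployment script
  when: manual
  allow_failure: false         # makes this manual action blocking
```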
Optional manual actions have allow_failure: true
set by default and their
statuses don't contribute to the overall pipeline status. So, if a manual
action fails, the pipeline will eventually succeed.
NOTE: Note:
When using rules:
, allow_failure
defaults to false
, including for manual jobs.
Manual actions are considered to be write actions, so permissions for protected branches are used when a user wants to trigger an action. In other words, in order to trigger a manual action assigned to a branch that the pipeline is running for, the user needs to have the ability to merge to this branch. It's possible to use protected environments to more strictly protect manual deployments from being run by unauthorized users.
NOTE: Note:
Using when:manual
and trigger
together results in the error jobs:#{job-name} when should be on_success, on_failure or always
, because when:manual
prevents triggers
being used.
Protecting manual jobs (PREMIUM)
It's possible to use protected environments to define a precise list of users authorized to run a manual job. By allowing only users associated with a protected environment to trigger manual jobs, it's possible to implement some special use cases, such as:
- More precisely limiting who can deploy to an environment.
- Enabling a pipeline to be blocked until an approved user "approves" it.
To do this, you must:
1. Add an `environment` to the job. For example:

   ```yaml
   deploy_prod:
     stage: deploy
     script:
       - echo "Deploy to production server"
     environment:
       name: production
       url: https://example.com
     when: manual
     only:
       - master
   ```
1. In the protected environments settings, select the environment (`production` in the example above) and add the users, roles or groups that are authorized to trigger the manual job to the Allowed to Deploy list. Only those in this list will be able to trigger this manual job, as well as GitLab administrators, who are always able to use protected environments.
Additionally, if a manual job is defined as blocking by adding allow_failure: false
,
the next stages of the pipeline won't run until the manual job is triggered. This
can be used as a way to have a defined list of users allowed to "approve" later pipeline
stages by triggering the blocking manual job.
when:delayed
Introduced in GitLab 11.4.
Delayed jobs are for executing scripts after a certain period.
This is useful if you want to avoid jobs entering pending
state immediately.
You can set the period with the `start_in` keyword. The value of `start_in` is an elapsed time in seconds, unless a unit is provided. `start_in` must be less than or equal to one week. Examples of valid values include:
- `'5'`
- `10 seconds`
- `30 minutes`
- `1 day`
- `1 week`
When there is a delayed job in a stage, the pipeline won't progress until the delayed job has finished. This means this keyword can also be used for inserting delays between different stages.
The timer of a delayed job starts immediately after the previous stage has completed. Similar to other types of jobs, a delayed job's timer won't start unless the previous stage passed.
The following example creates a job named timed rollout 10%
that is executed 30 minutes after the previous stage has completed:
timed rollout 10%:
stage: deploy
script: echo 'Rolling out 10% ...'
when: delayed
start_in: 30 minutes
You can stop the active timer of a delayed job by clicking the {time-out} (Unschedule) button. This job will never be executed in the future unless you execute the job manually.
You can start a delayed job immediately by clicking the Play button. GitLab Runner will pick up your job soon and start the job.
environment
- Introduced in GitLab 8.9.
- You can read more about environments and find more examples in the documentation about environments.
environment
is used to define that a job deploys to a specific environment.
If environment
is specified and no environment under that name exists, a new
one will be created automatically.
In its simplest form, the environment
keyword can be defined like:
deploy to production:
stage: deploy
script: git push production HEAD:master
environment: production
In the above example, the deploy to production
job will be marked as doing a
deployment to the production
environment.
environment:name
- Introduced in GitLab 8.11.
- Before GitLab 8.11, the name of an environment could be defined as a string like `environment: production`. The recommended way now is to define it under the `name` keyword.
- The `name` parameter can use any of the defined CI variables, including predefined, secure variables and `.gitlab-ci.yml` variables. You however can't use variables defined under `script`.
The environment
name can contain:
- letters
- digits
- spaces
- `-`
- `_`
- `/`
- `$`
- `{`
- `}`
Common names are qa
, staging
, and production
, but you can use whatever
name works with your workflow.
Instead of defining the name of the environment right after the environment
keyword, it's also possible to define it as a separate value. For that, use
the name
keyword under environment
:
deploy to production:
stage: deploy
script: git push production HEAD:master
environment:
name: production
environment:url
- Introduced in GitLab 8.11.
- Before GitLab 8.11, the URL could be added only in GitLab's UI. The recommended way now is to define it in `.gitlab-ci.yml`.
- The `url` parameter can use any of the defined CI variables, including predefined, secure variables and `.gitlab-ci.yml` variables. You however can't use variables defined under `script`.
This optional value exposes buttons in various places in GitLab which, when clicked, take you to the defined URL.
In the example below, if the job finishes successfully, it will create buttons
in the merge requests and in the environments/deployments pages which will point
to https://prod.example.com
.
deploy to production:
stage: deploy
script: git push production HEAD:master
environment:
name: production
url: https://prod.example.com
environment:on_stop
- Introduced in GitLab 8.13.
- Starting with GitLab 8.14, when you have an environment that has a stop action defined, GitLab will automatically trigger a stop action when the associated branch is deleted.
Closing (stopping) environments can be achieved with the on_stop
keyword defined under
environment
. It declares a different job that runs in order to close
the environment.
Read the environment:action
section for an example.
environment:action
Introduced in GitLab 8.13.
The action
keyword can be used to specify jobs that prepare, start, or stop environments.
| Value | Description |
|---|---|
| `start` | Default value. Indicates that the job starts the environment. The deployment is created after the job starts. |
| `prepare` | Indicates that the job is only preparing the environment. It does not affect deployments. Read more about environments. |
| `stop` | Indicates that the job stops a deployment. See the example below. |
Take for instance:
review_app:
stage: deploy
script: make deploy-app
environment:
name: review/$CI_COMMIT_REF_NAME
url: https://$CI_ENVIRONMENT_SLUG.example.com
on_stop: stop_review_app
stop_review_app:
stage: deploy
variables:
GIT_STRATEGY: none
script: make delete-app
when: manual
environment:
name: review/$CI_COMMIT_REF_NAME
action: stop
In the above example we set up the review_app
job to deploy to the review
environment, and we also defined a new stop_review_app
job under on_stop
.
Once the review_app
job is successfully finished, it will trigger the
stop_review_app
job based on what is defined under when
. In this case we
set it up to manual
so it will need a manual action via
GitLab's web interface in order to run.
Also in the example, GIT_STRATEGY
is set to none
so that GitLab Runner won’t
try to check out the code after the branch is deleted when the stop_review_app
job is automatically triggered.
NOTE: Note:
The above example overwrites global variables. If your stop environment job depends
on global variables, you can use anchor variables when setting the GIT_STRATEGY
to change it without overriding the global variables.
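A minimal sketch of that approach with a YAML anchor (the global variable name here is illustrative):

```yaml
variables: &global-variables
  SAMPLE_VARIABLE: sample_value   # hypothetical global variable

stop_review_app:
  stage: deploy
  variables:
    <<: *global-variables   # keep the global variables...
    GIT_STRATEGY: none      # ...while overriding only GIT_STRATEGY
  script: make delete-app
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
```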
The stop_review_app
job is required to have the following keywords defined:
- `when` - reference
- `environment:name`
- `environment:action`
Additionally, both jobs should have matching rules
or only/except
configuration. In the example
above, if the configuration is not identical, the stop_review_app
job might not be
included in all pipelines that include the review_app
job, and it will not be
possible to trigger the action: stop
to stop the environment automatically.
environment:auto_stop_in
Introduced in GitLab 12.8.
The `auto_stop_in` keyword specifies the lifetime of the environment. When the period expires, GitLab automatically stops the environment.
For example,
review_app:
script: deploy-review-app
environment:
name: review/$CI_COMMIT_REF_NAME
auto_stop_in: 1 day
When the `review_app` job is executed and a review app is created, the lifetime of the environment is set to `1 day`.
For more information, see the environments auto-stop documentation.
environment:kubernetes
Introduced in GitLab 12.6.
The kubernetes
block is used to configure deployments to a
Kubernetes cluster that is associated with your project.
For example:
deploy:
stage: deploy
script: make deploy-app
environment:
name: production
kubernetes:
namespace: production
This will set up the deploy
job to deploy to the production
environment, using the production
Kubernetes namespace.
For more information, see
Available settings for kubernetes
.
NOTE: Note: Kubernetes configuration is not supported for Kubernetes clusters that are managed by GitLab. To follow progress on support for GitLab-managed clusters, see the relevant issue.
Dynamic environments
- Introduced in GitLab 8.12 and GitLab Runner 1.6.
- The `$CI_ENVIRONMENT_SLUG` was introduced in GitLab 8.15.
- The `name` and `url` parameters can use any of the defined CI variables, including predefined, secure variables and `.gitlab-ci.yml` variables. You however can't use variables defined under `script`.
For example:
deploy as review app:
stage: deploy
script: make deploy
environment:
name: review/$CI_COMMIT_REF_NAME
url: https://$CI_ENVIRONMENT_SLUG.example.com/
The deploy as review app
job will be marked as deployment to dynamically
create the review/$CI_COMMIT_REF_NAME
environment, where $CI_COMMIT_REF_NAME
is an environment variable set by the Runner. The
$CI_ENVIRONMENT_SLUG
variable is based on the environment name, but suitable
for inclusion in URLs. In this case, if the deploy as review app
job was run
in a branch named pow
, this environment would be accessible with an URL like
https://review-pow.example.com/
.
This of course implies that the underlying server which hosts the application is properly configured.
The common use case is to create dynamic environments for branches and use them as Review Apps. You can see a simple example using Review Apps at https://gitlab.com/gitlab-examples/review-apps-nginx/.
cache
- Introduced in GitLab Runner v0.7.0.
- `cache` can be set globally and per-job.
- From GitLab 9.0, caching is enabled and shared between pipelines and jobs by default.
- From GitLab 9.2, caches are restored before artifacts.
TIP: Learn more: Read how caching works and find out some good practices in the caching dependencies documentation.
cache
is used to specify a list of files and directories which should be
cached between jobs. You can only use paths that are within the local working
copy.
If cache
is defined outside the scope of jobs, it means it's set
globally and all jobs will use that definition.
cache:paths
Use the paths
directive to choose which files or directories will be cached. Paths
are relative to the project directory ($CI_PROJECT_DIR
) and can't directly link outside it.
Wildcards can be used that follow the glob
patterns and:
- In GitLab Runner 13.0 and later,
doublestar.Glob
. - In GitLab Runner 12.10 and earlier,
filepath.Match
.
Cache all files in binaries
that end in .apk
and the .config
file:
rspec:
script: test
cache:
paths:
- binaries/*.apk
- .config
Locally defined cache overrides globally defined options. The following rspec
job will cache only binaries/
:
cache:
paths:
- my/files
rspec:
script: test
cache:
key: rspec
paths:
- binaries/
Note that since cache is shared between jobs, if you're using different paths for different jobs, you should also set a different `cache:key`, otherwise cache content can be overwritten.
cache:key
Introduced in GitLab Runner v1.0.0.
Since the cache is shared between jobs, if you're using different
paths for different jobs, you should also set a different cache:key
otherwise cache content can be overwritten.
The key
directive allows you to define the affinity of caching between jobs,
allowing to have a single cache for all jobs, cache per-job, cache per-branch
or any other way that fits your workflow. This way, you can fine tune caching,
allowing you to cache data between different jobs or even different branches.
The cache:key
variable can use any of the
predefined variables, and the default key, if not
set, is just literal default
which means everything is shared between
pipelines and jobs by default, starting from GitLab 9.0.
NOTE: Note:
The cache:key
variable can't contain the /
character, or the equivalent
URI-encoded %2F
; a value made only of dots (.
, %2E
) is also forbidden.
For example, to enable per-branch caching:
cache:
key: "$CI_COMMIT_REF_SLUG"
paths:
- binaries/
If you use Windows Batch to run your shell scripts you need to replace
$
with %
:
cache:
key: "%CI_COMMIT_REF_SLUG%"
paths:
- binaries/
cache:key:files
Introduced in GitLab v12.5.
The cache:key:files
keyword extends the cache:key
functionality by making it easier
to reuse some caches, and rebuild them less often, which will speed up subsequent pipeline
runs.
When you include cache:key:files
, you must also list the project files that will be used to generate the key, up to a maximum of two files.
The cache key
will be a SHA checksum computed from the most recent commits (up to two, if two files are listed)
that changed the given files. If neither file was changed in any commits,
the fallback key will be default
.
cache:
key:
files:
- Gemfile.lock
- package.json
paths:
- vendor/ruby
- node_modules
In this example we're creating a cache for Ruby and Node.js dependencies that
is tied to current versions of the Gemfile.lock
and package.json
files. Whenever one of
these files changes, a new cache key is computed and a new cache is created. Any future
job runs using the same Gemfile.lock
and package.json
with cache:key:files
will
use the new cache, instead of rebuilding the dependencies.
cache:key:prefix
Introduced in GitLab v12.5.
The prefix
parameter adds extra functionality to key:files
by allowing the key to
be composed of the given prefix
combined with the SHA computed for cache:key:files
.
For example, adding a prefix
of test
, will cause keys to look like: test-feef9576d21ee9b6a32e30c5c79d0a0ceb68d1e5
.
If neither file was changed in any commits, the prefix is added to default
, so the
key in the example would be test-default
.
Like cache:key
, prefix
can use any of the predefined variables,
but the following are not allowed:
- the
/
character (or the equivalent URI-encoded%2F
) - a value made only of
.
(or the equivalent URI-encoded%2E
)
cache:
key:
files:
- Gemfile.lock
prefix: ${CI_JOB_NAME}
paths:
- vendor/ruby
rspec:
script:
- bundle exec rspec
For example, adding a prefix
of $CI_JOB_NAME
will
cause the key to look like: rspec-feef9576d21ee9b6a32e30c5c79d0a0ceb68d1e5
and
the job cache is shared across different branches. If a branch changes
Gemfile.lock
, that branch will have a new SHA checksum for cache:key:files
. A new cache key
will be generated, and a new cache will be created for that key.
If Gemfile.lock
is not found, the prefix is added to
default
, so the key in the example would be rspec-default
.
cache:untracked
Set untracked: true
to cache all files that are untracked in your Git
repository:
rspec:
script: test
cache:
untracked: true
Cache all Git untracked files and files in binaries
:
rspec:
script: test
cache:
untracked: true
paths:
- binaries/
cache:policy
Introduced in GitLab 9.4.
The default behavior of a caching job is to download the files at the start of
execution, and to re-upload them at the end. This allows any changes made by the
job to be persisted for future runs, and is known as the pull-push
cache
policy.
If you know the job does not alter the cached files, you can skip the upload step
by setting policy: pull
in the job specification. Typically, this would be
twinned with an ordinary cache job at an earlier stage to ensure the cache
is updated from time to time:
stages:
- setup
- test
prepare:
stage: setup
cache:
key: gems
paths:
- vendor/bundle
script:
- bundle install --deployment
rspec:
stage: test
cache:
key: gems
paths:
- vendor/bundle
policy: pull
script:
- bundle exec rspec ...
This helps to speed up job execution and reduce load on the cache server, especially when you have a large number of cache-using jobs executing in parallel.
Additionally, if you have a job that unconditionally recreates the cache without
reference to its previous contents, you can use policy: push
in that job to
skip the download step.
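For example, a minimal sketch of a job that rebuilds the cache from scratch (the job name and paths here are illustrative):

```yaml
rebuild-cache:
  stage: setup
  cache:
    key: gems
    paths:
      - vendor/bundle
    policy: push   # upload the resulting cache, without downloading the old one first
  script:
    - bundle install --deployment
```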
artifacts
- Introduced in GitLab Runner v0.7.0 for non-Windows platforms.
- Windows support was added in GitLab Runner v1.0.0.
- From GitLab 9.2, caches are restored before artifacts.
- Not all executors are supported.
- Job artifacts are only collected for successful jobs by default.
artifacts
is used to specify a list of files and directories which should be
attached to the job when it succeeds, fails, or always.
The artifacts will be sent to GitLab after the job finishes and will be available for download in the GitLab UI.
artifacts:paths
Paths are relative to the project directory ($CI_PROJECT_DIR
) and can't directly
link outside it. Wildcards can be used that follow the glob
patterns and:
- In GitLab Runner 13.0 and later,
doublestar.Glob
. - In GitLab Runner 12.10 and earlier,
filepath.Match
.
To restrict which jobs a specific job will fetch artifacts from, see dependencies.
Send all files in binaries
and .config
:
artifacts:
paths:
- binaries/
- .config
To disable artifact passing, define the job with empty dependencies:
job:
stage: build
script: make build
dependencies: []
You may want to create artifacts only for tagged releases to avoid filling the build server storage with temporary build artifacts.
Create artifacts only for tags (default-job
won't create artifacts):
default-job:
script:
- mvn test -U
except:
- tags
release-job:
script:
- mvn package -U
artifacts:
paths:
- target/*.war
only:
- tags
You can use wildcards for directories too. For example, if you want to get all the files inside the directories that end with xyz
:
job:
artifacts:
paths:
- path/*xyz/*
artifacts:exclude
- Introduced in GitLab 13.1
- Requires GitLab Runner 13.1
exclude
makes it possible to prevent files from being added to an artifacts
archive.
Similar to artifacts:paths
, exclude
paths are relative
to the project directory. Wildcards can be used that follow the
glob patterns and
filepath.Match
.
For example, to store all files in binaries/
, but not *.o
files located in
subdirectories of binaries/
:
artifacts:
paths:
- binaries/
exclude:
- binaries/**/*.o
Files matched by artifacts:untracked
can be excluded using
artifacts:exclude
too.
artifacts:expose_as
Introduced in GitLab 12.5.
The expose_as
keyword can be used to expose job artifacts
in the merge request UI.
For example, to match a single file:
test:
script: [ "echo 'test' > file.txt" ]
artifacts:
expose_as: 'artifact 1'
paths: ['file.txt']
With this configuration, GitLab will add a link artifact 1 to the relevant merge request
that points to `file.txt`.
An example that will match an entire directory:
test:
script: [ "mkdir test && echo 'test' > test/file.txt" ]
artifacts:
expose_as: 'artifact 1'
paths: ['test/']
Note the following:
- Artifacts do not display in the merge request UI when using variables to define the
artifacts:paths
. - A maximum of 10 job artifacts per merge request can be exposed.
- Glob patterns are unsupported.
- If a directory is specified, the link will be to the job artifacts browser if there is more than one file in the directory.
- For exposed single file artifacts with `.html`, `.htm`, `.txt`, `.json`, `.xml`, and `.log` extensions, if GitLab Pages is:
  - Enabled, GitLab will automatically render the artifact.
  - Not enabled, you will see the file in the artifacts browser.
artifacts:name
Introduced in GitLab 8.6 and GitLab Runner v1.1.0.
The name
directive allows you to define the name of the created artifacts
archive. That way, you can have a unique name for every archive which could be
useful when you'd like to download the archive from GitLab. The artifacts:name
variable can make use of any of the predefined variables.
The default name is artifacts
, which becomes artifacts.zip
when downloaded.
NOTE: Note:
If your branch name contains forward slashes
(for example feature/my-feature
) it's advised to use $CI_COMMIT_REF_SLUG
instead of $CI_COMMIT_REF_NAME
for proper naming of the artifact.
To create an archive with a name of the current job:
job:
artifacts:
name: "$CI_JOB_NAME"
paths:
- binaries/
To create an archive with a name of the current branch or tag including only the binaries directory:
job:
artifacts:
name: "$CI_COMMIT_REF_NAME"
paths:
- binaries/
To create an archive with a name of the current job and the current branch or tag including only the binaries directory:
job:
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME"
paths:
- binaries/
To create an archive with a name of the current stage and branch name:
job:
artifacts:
name: "$CI_JOB_STAGE-$CI_COMMIT_REF_NAME"
paths:
- binaries/
If you use Windows Batch to run your shell scripts you need to replace
$
with %
:
job:
artifacts:
name: "%CI_JOB_STAGE%-%CI_COMMIT_REF_NAME%"
paths:
- binaries/
If you use Windows PowerShell to run your shell scripts you need to replace
$
with $env:
:
job:
artifacts:
name: "$env:CI_JOB_STAGE-$env:CI_COMMIT_REF_NAME"
paths:
- binaries/
artifacts:untracked
artifacts:untracked
is used to add all Git untracked files as artifacts (along with the paths defined in `artifacts:paths`
).
NOTE: Note:
artifacts:untracked
ignores configuration in the repository's .gitignore
file.
Send all Git untracked files:
artifacts:
untracked: true
Send all Git untracked files and files in binaries
:
artifacts:
untracked: true
paths:
- binaries/
Send all untracked files but exclude *.txt
:
artifacts:
untracked: true
exclude:
    - "*.txt"
artifacts:when
Introduced in GitLab 8.9 and GitLab Runner v1.3.0.
artifacts:when
is used to upload artifacts on job failure or despite the
failure.
artifacts:when
can be set to one of the following values:
- `on_success` - upload artifacts only when the job succeeds. This is the default.
- `on_failure` - upload artifacts only when the job fails.
- `always` - upload artifacts regardless of the job status.
To upload artifacts only when the job fails:
job:
artifacts:
when: on_failure
artifacts:expire_in
Introduced in GitLab 8.9 and GitLab Runner v1.3.0.
expire_in
allows you to specify how long artifacts should live before they
expire and are therefore deleted, counting from the time they are uploaded and
stored on GitLab. If the expiry time is not defined, it defaults to the
instance wide setting
(30 days by default).
You can use the Keep button on the job page to override expiration and keep artifacts forever.
After their expiry, artifacts are deleted hourly by default (via a cron job), and are not accessible anymore.
The value of expire_in
is an elapsed time in seconds, unless a unit is
provided. Examples of valid values:
- `42`
- `3 mins 4 sec`
- `2 hrs 20 min`
- `2h20min`
- `6 mos 1 day`
- `47 yrs 6 mos and 4d`
- `3 weeks and 2 days`
To expire artifacts 1 week after being uploaded:
job:
artifacts:
expire_in: 1 week
NOTE: Note:
Since GitLab 13.0, the latest
artifacts for refs can be locked against deletion, and kept regardless of the expiry time. This feature is disabled
by default and is not ready for production use. It can be enabled for testing by
enabling the :keep_latest_artifact_for_ref
and :destroy_only_unlocked_expired_artifacts
feature flags.
artifacts:reports
The artifacts:reports
keyword
is used for collecting test reports, code quality reports, and security reports from jobs.
It also exposes these reports in GitLab's UI (merge requests, pipeline views, and security dashboards).
These are the available report types:
| Parameter | Description |
|---|---|
| `artifacts:reports:junit` | The `junit` report collects JUnit XML files. |
| `artifacts:reports:dotenv` | The `dotenv` report collects a set of environment variables. |
| `artifacts:reports:cobertura` | The `cobertura` report collects Cobertura coverage XML files. |
| `artifacts:reports:terraform` | The `terraform` report collects Terraform `tfplan.json` files. |
| `artifacts:reports:codequality` | The `codequality` report collects CodeQuality issues. |
| `artifacts:reports:sast` (ULTIMATE) | The `sast` report collects Static Application Security Testing vulnerabilities. |
| `artifacts:reports:dependency_scanning` (ULTIMATE) | The `dependency_scanning` report collects Dependency Scanning vulnerabilities. |
| `artifacts:reports:container_scanning` (ULTIMATE) | The `container_scanning` report collects Container Scanning vulnerabilities. |
| `artifacts:reports:dast` (ULTIMATE) | The `dast` report collects Dynamic Application Security Testing vulnerabilities. |
| `artifacts:reports:license_management` (ULTIMATE) | The `license_management` report collects Licenses (removed from GitLab 13.0). |
| `artifacts:reports:license_scanning` (ULTIMATE) | The `license_scanning` report collects Licenses. |
| `artifacts:reports:performance` (PREMIUM) | The `performance` report collects Browser Performance metrics. |
| `artifacts:reports:load_performance` (PREMIUM) | The `load_performance` report collects load performance metrics. |
| `artifacts:reports:metrics` (PREMIUM) | The `metrics` report collects Metrics. |
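For example, a minimal sketch that collects a JUnit report from an RSpec job (the formatter flags assume the `rspec_junit_formatter` gem is in your Gemfile):

```yaml
rspec:
  stage: test
  script:
    - bundle exec rspec --format RspecJunitFormatter --out rspec.xml
  artifacts:
    reports:
      junit: rspec.xml   # collected by GitLab and shown in the merge request UI
```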
dependencies
Introduced in GitLab 8.6 and GitLab Runner v1.1.1.
By default, all artifacts
from all previous stages
are passed, but you can use the dependencies
parameter to define a limited
list of jobs (or no jobs) to fetch artifacts from.
To use this feature, define dependencies
in context of the job and pass
a list of all previous jobs from which the artifacts should be downloaded.
You can only define jobs from stages that are executed before the current one.
An error will be shown if you define jobs from the current stage or next ones.
Defining an empty array will skip downloading any artifacts for that job.
The status of the previous job is not considered when using dependencies
, so
if it failed or it's a manual job that was not run, no error occurs.
In the following example, we define two jobs with artifacts, build:osx
and
build:linux
. When the test:osx
is executed, the artifacts from build:osx
will be downloaded and extracted in the context of the build. The same happens
for test:linux
and artifacts from build:linux
.
The job deploy
will download artifacts from all previous jobs because of
the stage precedence:
build:osx:
stage: build
script: make build:osx
artifacts:
paths:
- binaries/
build:linux:
stage: build
script: make build:linux
artifacts:
paths:
- binaries/
test:osx:
stage: test
script: make test:osx
dependencies:
- build:osx
test:linux:
stage: test
script: make test:linux
dependencies:
- build:linux
deploy:
stage: deploy
script: make deploy
When a dependent job will fail
Introduced in GitLab 10.3.
If the artifacts of the job that is set as a dependency have expired or been erased, then the dependent job fails.
NOTE: Note: You can ask your administrator to flip this switch and bring back the old behavior.
coverage
Introduced in GitLab 8.17.
coverage
allows you to configure how code coverage will be extracted from the
job output.
Regular expressions are the only valid kind of value expected here. So, using
surrounding /
is mandatory in order to consistently and explicitly represent
a regular expression string. You must escape special characters if you want to
match them literally.
A simple example:
job1:
script: rspec
coverage: '/Code coverage: \d+\.\d+/'
retry
- Introduced in GitLab 9.5.
- Behavior expanded in GitLab 11.5 to control on which failures to retry.
retry
allows you to configure how many times a job is going to be retried in
case of a failure.
When a job fails and has retry
configured, it's going to be processed again
up to the amount of times specified by the retry
keyword.
If retry
is set to 2, and a job succeeds in a second run (first retry), it won't be retried
again. The `retry` value must be an integer from 0 to 2, inclusive (two retries maximum, three runs in total).
A simple example to retry in all failure cases:
test:
script: rspec
retry: 2
By default, a job will be retried on all failure cases. To have a better control
on which failures to retry, retry
can be a hash with the following keys:
- `max`: The maximum number of retries.
- `when`: The failure cases to retry.
To retry only runner system failures at maximum two times:
test:
script: rspec
retry:
max: 2
when: runner_system_failure
If there is another failure, other than a runner system failure, the job will not be retried.
To retry on multiple failure cases, when
can also be an array of failures:
test:
script: rspec
retry:
max: 2
when:
- runner_system_failure
- stuck_or_timeout_failure
Possible values for when
are:
- `always`: Retry on any failure (default).
- `unknown_failure`: Retry when the failure reason is unknown.
- `script_failure`: Retry when the script failed.
- `api_failure`: Retry on API failure.
- `stuck_or_timeout_failure`: Retry when the job got stuck or timed out.
- `runner_system_failure`: Retry if there was a runner system failure (for example, job setup failed).
- `missing_dependency_failure`: Retry if a dependency was missing.
- `runner_unsupported`: Retry if the runner was unsupported.
- `stale_schedule`: Retry if a delayed job could not be executed.
- `job_execution_timeout`: Retry if the script exceeded the maximum execution time set for the job.
- `archived_failure`: Retry if the job is archived and can't be run.
- `unmet_prerequisites`: Retry if the job failed to complete prerequisite tasks.
- `scheduler_failure`: Retry if the scheduler failed to assign the job to a runner.
- `data_integrity_failure`: Retry if there was a structural integrity problem detected.
You can specify the number of retry attempts for certain stages of job execution using variables.
timeout
Introduced in GitLab 12.3.
timeout
allows you to configure a timeout for a specific job. For example:
build:
script: build.sh
timeout: 3 hours 30 minutes
test:
script: rspec
timeout: 3h 30m
The job-level timeout can exceed the project-level timeout but can't exceed the Runner-specific timeout.
parallel
Introduced in GitLab 11.5.
parallel
allows you to configure how many instances of a job to run in
parallel. This value has to be greater than or equal to two (2) and less than or equal to 50.
This creates N instances of the same job that run in parallel. They are named
sequentially from job_name 1/N
to job_name N/N
.
For every job, CI_NODE_INDEX
and CI_NODE_TOTAL
environment variables are set.
Marking a job to be run in parallel requires adding parallel
to your configuration
file. For example:
test:
script: rspec
parallel: 5
TIP: Tip: Parallelize test suites across parallel jobs. Different languages have different tools to facilitate this.
A simple example using Semaphore Test Boosters and RSpec to run some Ruby tests:
# Gemfile
source 'https://rubygems.org'
gem 'rspec'
gem 'semaphore_test_boosters'
test:
parallel: 3
script:
- bundle
- bundle exec rspec_booster --job $CI_NODE_INDEX/$CI_NODE_TOTAL
CAUTION: Caution: Please be aware that semaphore_test_boosters reports usage statistics to its author.
You can then navigate to the Jobs tab of a new pipeline build and see your RSpec job split into three separate jobs.
Parallel matrix
jobs
Introduced in GitLab 13.2.
matrix:
allows you to configure different variables for jobs that are running in parallel.
There can be from 2 to 50 jobs.
Every job gets the same CI_NODE_TOTAL
environment variable value, and a unique CI_NODE_INDEX
value.
deploystacks:
stage: deploy
script:
- bin/deploy
parallel:
matrix:
- PROVIDER: aws
STACK:
- monitoring
- app1
- app2
- PROVIDER: ovh
STACK: [monitoring, backup, app]
- PROVIDER: [gcp, vultr]
STACK: [data, processing]
This generates 10 parallel deploystacks
jobs, each with different values for PROVIDER
and STACK
:
deploystacks 1/10 with PROVIDER=aws and STACK=monitoring
deploystacks 2/10 with PROVIDER=aws and STACK=app1
deploystacks 3/10 with PROVIDER=aws and STACK=app2
deploystacks 4/10 with PROVIDER=ovh and STACK=monitoring
deploystacks 5/10 with PROVIDER=ovh and STACK=backup
deploystacks 6/10 with PROVIDER=ovh and STACK=app
deploystacks 7/10 with PROVIDER=gcp and STACK=data
deploystacks 8/10 with PROVIDER=gcp and STACK=processing
deploystacks 9/10 with PROVIDER=vultr and STACK=data
deploystacks 10/10 with PROVIDER=vultr and STACK=processing
trigger
- Introduced in GitLab Premium 11.8.
- Moved to GitLab Core in 12.8.
trigger
allows you to define a downstream pipeline trigger. When a job created
from trigger
definition is started by GitLab, a downstream pipeline gets
created.
This keyword allows the creation of two different types of downstream pipelines:
- Multi-project pipelines
- Child pipelines
Since GitLab 13.2, you can see which job triggered a downstream pipeline by hovering your mouse cursor over the downstream pipeline job in the pipeline graph.
NOTE: Note:
Using a trigger
with when:manual
together results in the error jobs:#{job-name} when should be on_success, on_failure or always
, because when:manual
prevents
triggers being used.
Simple trigger
syntax for multi-project pipelines
The simplest way to configure a downstream trigger is to use trigger
keyword
with a full path to a downstream project:
rspec:
stage: test
script: bundle exec rspec
staging:
stage: deploy
trigger: my/deployment
Complex trigger
syntax for multi-project pipelines
It's possible to configure a branch name that GitLab will use to create a downstream pipeline with:
rspec:
stage: test
script: bundle exec rspec
staging:
stage: deploy
trigger:
project: my/deployment
branch: stable
It's possible to mirror the status from a triggered pipeline:
trigger_job:
trigger:
project: my/project
strategy: depend
It's possible to mirror the status from an upstream pipeline:
upstream_bridge:
stage: test
needs:
pipeline: other/project
trigger
syntax for child pipeline
Introduced in GitLab 12.7.
To create a child pipeline, specify the path to the YAML file containing the CI config of the child pipeline:
trigger_job:
trigger:
include: path/to/child-pipeline.yml
Similar to multi-project pipelines, it's possible to mirror the status from a triggered pipeline:
trigger_job:
trigger:
include:
- local: path/to/child-pipeline.yml
strategy: depend
Trigger child pipeline with generated configuration file
Introduced in GitLab 12.9.
You can also trigger a child pipeline from a dynamically generated configuration file:
generate-config:
stage: build
script: generate-ci-config > generated-config.yml
artifacts:
paths:
- generated-config.yml
child-pipeline:
stage: test
trigger:
include:
- artifact: generated-config.yml
job: generate-config
The generated-config.yml
is extracted from the artifacts and used as the configuration
for triggering the child pipeline.
Linking pipelines with trigger:strategy
By default, the trigger
job completes with the success
status
as soon as the downstream pipeline is created.
To force the trigger
job to wait for the downstream (multi-project or child) pipeline to complete, use
strategy: depend
. This will make the trigger job wait with a "running" status until the triggered
pipeline completes. At that point, the trigger
job will complete and display the same status as
the downstream job.
trigger_job:
trigger:
include: path/to/child-pipeline.yml
strategy: depend
This can help keep your pipeline execution linear. In the example above, jobs from subsequent stages will wait for the triggered pipeline to successfully complete before starting, at the cost of reduced parallelization.
Trigger a pipeline by API call
Triggers can be used to force a rebuild of a specific branch, tag or commit, with an API call when a pipeline gets created using a trigger token.
Not to be confused with the trigger
parameter.
Read more in the triggers documentation.
interruptible
Introduced in GitLab 12.3.
interruptible
is used to indicate that a job should be canceled if made redundant by a newer pipeline run. Defaults to false
.
This value will only be used if the automatic cancellation of redundant pipelines feature
is enabled.
When enabled, a pipeline on the same branch will be canceled when:
- It's made redundant by a newer pipeline run.
- Either all jobs are set as interruptible, or any uninterruptible jobs haven't started.
Pending jobs are always considered interruptible.
TIP: Tip: Set jobs as interruptible that can be safely canceled once started (for instance, a build job).
Here is a simple example:
stages:
- stage1
- stage2
- stage3
step-1:
stage: stage1
script:
- echo "Can be canceled."
interruptible: true
step-2:
stage: stage2
script:
- echo "Can not be canceled."
step-3:
stage: stage3
script:
- echo "Because step-2 can not be canceled, this step will never be canceled, even though set as interruptible."
interruptible: true
In the example above, a new pipeline run will cause an existing running pipeline to be:
- Canceled, if only
step-1
is running or pending. - Not canceled, once
step-2
starts running.
NOTE: Note: Once an uninterruptible job is running, the pipeline will never be canceled, regardless of the final job's state.
resource_group
Introduced in GitLab 12.7.
Sometimes running multiple jobs or pipelines at the same time in an environment can lead to errors during the deployment.
To avoid these errors, the resource_group
attribute can be used to ensure that
the Runner won't run certain jobs simultaneously.
When the resource_group
key is defined for a job in .gitlab-ci.yml
,
job executions are mutually exclusive across different pipelines for the same project.
If multiple jobs belonging to the same resource group are enqueued simultaneously,
only one of the jobs will be picked by the Runner, and the other jobs will wait until the
resource_group
is free.
Here is a simple example:
deploy-to-production:
script: deploy
resource_group: production
In this case, if a deploy-to-production
job is running in a pipeline, and a new
deploy-to-production
job is created in a different pipeline, it won't run until
the currently running/pending deploy-to-production
job is finished. As a result,
you can ensure that concurrent deployments will never happen to the production environment.
There can be multiple `resource_group`s defined per environment. A good use case for this
is when deploying to physical devices. You may have more than one physical device, and each
one can be deployed to, but there can be only one deployment per device at any given time.
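For example, a minimal sketch with one resource group per device (the job names and deployment script here are illustrative):

```yaml
deploy-to-device-1:
  script: ./deploy device1   # hypothetical deployment script
  resource_group: device-1

deploy-to-device-2:
  script: ./deploy device2
  resource_group: device-2
```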
NOTE: Note:
This key can only contain letters, digits, `-`, `_`, `/`, `$`, `{`, `}`, `.`, and spaces. It can't start or end with `/`.
For more information, see Deployments Safety.
release
Introduced in GitLab 13.2.
release
indicates that the job creates a Release,
and optionally includes URLs for Release assets.
These keywords are supported:
- `tag_name`
- `name` (optional)
- `description` (optional)
- `ref` (optional)
- `milestones` (optional)
- `released_at` (optional)
The Release is created only if the job processes without error. If the Rails API
returns an error during Release creation, the release
job fails.
release-cli
Docker image
The Docker image to use for the release-cli
must be specified, using the following directive:
image: registry.gitlab.com/gitlab-org/release-cli:latest
Script
All jobs require a `script` keyword, at a minimum. A `release` job can use the output of a `script` entry, but if this is not necessary, a placeholder script can be used, for example:
script:
- echo 'release job'
An issue exists to remove this requirement in an upcoming version of GitLab.
A pipeline can have multiple release
jobs, for example:
ios-release:
script:
- echo 'iOS release job'
release:
tag_name: v1.0.0-ios
description: 'iOS release v1.0.0'
android-release:
script:
- echo 'Android release job'
release:
tag_name: v1.0.0-android
description: 'Android release v1.0.0'
release:tag_name
The tag_name
must be specified. It can refer to an existing Git tag or can be specified by the user.
When the specified tag doesn't exist in the repository, a new tag is created from the associated SHA of the pipeline.
For example, when creating a Release from a Git tag:
job:
release:
tag_name: $CI_COMMIT_TAG
description: changelog.txt
It is also possible to create any unique tag, in which case only: tags
is not mandatory.
A semantic versioning example:
job:
release:
tag_name: ${MAJOR}_${MINOR}_${REVISION}
description: changelog.txt
- The Release is created only if the job's main script succeeds.
- If the Release already exists, it is not updated and the job with the
release
keyword fails. - The
release
section executes after thescript
tag and before theafter_script
.
release:name
The Release name. If omitted, it is populated with the value of release: tag_name
.
release:description
Specifies the longer description of the Release.
release:ref
If the release: tag_name
doesn’t exist yet, the release is created from ref
.
ref
can be a commit SHA, another tag name, or a branch name.
release:milestones
The title of each milestone the release is associated with.
release:released_at
The date and time when the release is ready. Defaults to the current date and time if not defined. Expected in ISO 8601 format (2019-03-15T08:00:00Z).
Complete example for release
Combining the individual examples given above for release
results in the following
code snippets. There are two options, depending on how you generate the
tags. These options cannot be used together, so choose one:
- To create a release when you push a Git tag, or when you add a Git tag in the UI by going to Repository > Tags:

  ```yaml
  release_job:
    stage: release
    image: registry.gitlab.com/gitlab-org/release-cli:latest
    rules:
      - if: $CI_COMMIT_TAG                  # Run this job when a tag is created manually
    script:
      - echo 'running release_job'
    release:
      name: 'Release $CI_COMMIT_TAG'
      description: 'Created using the release-cli $EXTRA_DESCRIPTION'  # $EXTRA_DESCRIPTION must be defined
      tag_name: '$CI_COMMIT_TAG'                                       # elsewhere in the pipeline.
      ref: '$CI_COMMIT_TAG'
      milestones:
        - 'm1'
        - 'm2'
        - 'm3'
      released_at: '2020-07-15T08:00:00Z'  # Optional, will auto generate if not defined,
                                           # or can use a variable.
  ```
- To create a release automatically when changes are pushed to the default branch, using a new Git tag that is defined with variables:

  ```yaml
  release_job:
    stage: release
    image: registry.gitlab.com/gitlab-org/release-cli:latest
    rules:
      - if: $CI_COMMIT_TAG
        when: never                                  # Do not run this job when a tag is created manually
      - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH  # Run this job when the default branch changes
    script:
      - echo 'running release_job'
    release:
      name: 'Release $CI_COMMIT_SHA'
      description: 'Created using the release-cli $EXTRA_DESCRIPTION'  # $EXTRA_DESCRIPTION and the tag_name
      tag_name: 'v${MAJOR}.${MINOR}.${REVISION}'                       # variables must be defined elsewhere
      ref: '$CI_COMMIT_SHA'                                            # in the pipeline.
      milestones:
        - 'm1'
        - 'm2'
        - 'm3'
      released_at: '2020-07-15T08:00:00Z'  # Optional, will auto generate if not defined,
                                           # or can use a variable.
  ```
release-cli command line
The entries under the :release
node are transformed into a bash
command line and sent
to the Docker container, which contains the release-cli.
You can also call the release-cli
directly from a script
entry.
The YAML described above would be translated into a CLI command like this:
release-cli create --name "Release $CI_COMMIT_SHA" --description "Created using the release-cli $EXTRA_DESCRIPTION" --tag-name "v${MAJOR}.${MINOR}.${REVISION}" --ref "$CI_COMMIT_SHA" --released-at "2020-07-15T08:00:00Z" --milestone "m1" --milestone "m2" --milestone "m3"
pages
pages
is a special job that is used to upload static content to GitLab that
can be used to serve your website. It has a special syntax, so the two
requirements below must be met:
- Any static content must be placed under a `public/` directory.
- `artifacts` with a path to the `public/` directory must be defined.
The example below simply moves all files from the root of the project to the
public/
directory. The .public
workaround is so cp
does not also copy
public/
to itself in an infinite loop:
pages:
stage: deploy
script:
- mkdir .public
- cp -r * .public
- mv .public public
artifacts:
paths:
- public
only:
- master
Read more on GitLab Pages user documentation.
variables
Introduced in GitLab Runner v0.5.0.
NOTE: Note: Integers (as well as strings) are legal both for a variable's name and value. Floats are not legal and can't be used.
GitLab CI/CD allows you to define variables inside .gitlab-ci.yml
that are
then passed in the job environment. They can be set globally and per-job.
When the variables
keyword is used on a job level, it will override the global
YAML variables and predefined ones of the same name.
They are stored in the Git repository and are meant to store non-sensitive project configuration, for example:
variables:
DATABASE_URL: "postgres://postgres@postgres/my_database"
These variables can be later used in all executed commands and scripts. The YAML-defined variables are also set to all created service containers, allowing you to fine-tune them.
Except for the user-defined variables, there are also variables set up by the Runner itself. One example would be `CI_COMMIT_REF_NAME`, which has the value of the branch or tag name for which the project is built. Apart from the variables
you can set in .gitlab-ci.yml
, there are also the so called
Variables
which can be set in GitLab's UI.
YAML anchors for variables are available.
Learn more about variables and their priority.
Git strategy
- Introduced in GitLab 8.9 as an experimental feature.
GIT_STRATEGY=none
requires GitLab Runner v1.7+.
CAUTION: Caution: May change or be removed completely in future releases.
You can set the GIT_STRATEGY
used for getting recent application code, either
globally or per-job in the variables
section. If left
unspecified, the default from project settings will be used.
There are three possible values: clone
, fetch
, and none
.
clone
is the slowest option. It clones the repository from scratch for every
job, ensuring that the local working copy is always pristine.
variables:
GIT_STRATEGY: clone
fetch
is faster as it re-uses the local working copy (falling back to clone
if it does not exist). git clean
is used to undo any changes made by the last
job, and git fetch
is used to retrieve commits made since the last job ran.
variables:
GIT_STRATEGY: fetch
none
also re-uses the local working copy, but skips all Git operations
(including GitLab Runner's pre-clone script, if present). It's mostly useful
for jobs that operate exclusively on artifacts (for example, deploy
). Git repository
data may be present, but it's certain to be out of date, so you should only
rely on files brought into the local working copy from cache or artifacts.
variables:
GIT_STRATEGY: none
NOTE: Note:
GIT_STRATEGY
is not supported for
Kubernetes executor,
but may be in the future. See the support Git strategy with Kubernetes executor feature proposal
for updates.
Git submodule strategy
Requires GitLab Runner v1.10+.
The GIT_SUBMODULE_STRATEGY
variable is used to control if / how Git
submodules are included when fetching the code before a build. You can set them
globally or per-job in the variables
section.
There are three possible values: none
, normal
, and recursive
:
- `none` means that submodules won't be included when fetching the project code. This is the default, which matches the pre-v1.10 behavior.
- `normal` means that only the top-level submodules will be included. It's equivalent to:

  ```shell
  git submodule sync
  git submodule update --init
  ```

- `recursive` means that all submodules (including submodules of submodules) will be included. This feature needs Git v1.8.1 and later. When using a GitLab Runner with an executor not based on Docker, make sure the Git version meets that requirement. It's equivalent to:

  ```shell
  git submodule sync --recursive
  git submodule update --init --recursive
  ```
Note that for this feature to work correctly, the submodules must be configured
(in .gitmodules
) with either:
- the HTTP(S) URL of a publicly-accessible repository, or
- a relative path to another repository on the same GitLab server. See the Git submodules documentation.
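For example, a minimal sketch that enables recursive submodule fetching for all jobs:

```yaml
variables:
  GIT_SUBMODULE_STRATEGY: recursive
```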
Git checkout
Introduced in GitLab Runner 9.3.
The GIT_CHECKOUT
variable can be used when the GIT_STRATEGY
is set to either
clone
or fetch
to specify whether a git checkout
should be run. If not
specified, it defaults to true. You can set them globally or per-job in the
variables
section.
If set to false
, the Runner will:
- when doing `fetch` - update the repository and leave the working copy on the current revision,
- when doing `clone` - clone the repository and leave the working copy on the default branch.
Having this setting set to true
will mean that for both clone
and fetch
strategies the Runner will check out the working copy to a revision related
to the CI pipeline:
```yaml
variables:
  GIT_STRATEGY: clone
  GIT_CHECKOUT: "false"
script:
  - git checkout -B master origin/master
  - git merge $CI_COMMIT_SHA
```
Git clean flags
Introduced in GitLab Runner 11.10.
The `GIT_CLEAN_FLAGS` variable is used to control the default behavior of
`git clean` after checking out the sources. You can set it globally or per-job
in the `variables` section.

`GIT_CLEAN_FLAGS` accepts all possible options of the `git clean` command.

`git clean` is disabled if `GIT_CHECKOUT: "false"` is specified.
If `GIT_CLEAN_FLAGS` is:

- Not specified, `git clean` flags default to `-ffdx`.
- Given the value `none`, `git clean` is not executed.
For example:
```yaml
variables:
  GIT_CLEAN_FLAGS: -ffdx -e cache/
script:
  - ls -al cache/
```
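Conversely, based on the `none` value described above, a sketch that skips
`git clean` entirely (for example, to keep build outputs between jobs on the
same Runner) could be:

```yaml
variables:
  GIT_CLEAN_FLAGS: none  # git clean is not executed after checkout
```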
Git fetch extra flags
Introduced in GitLab Runner 13.1.
The `GIT_FETCH_EXTRA_FLAGS` variable is used to control the behavior of
`git fetch`. You can set it globally or per-job in the `variables` section.

`GIT_FETCH_EXTRA_FLAGS` accepts all possible options of the `git fetch`
command, but note that `GIT_FETCH_EXTRA_FLAGS` flags are appended after the
default flags, which can't be modified.
The default flags are:

- The `GIT_DEPTH` value (see the Shallow cloning section below).
- The list of refspecs.
- A remote called `origin`.
If `GIT_FETCH_EXTRA_FLAGS` is:

- Not specified, `git fetch` flags default to `--prune --quiet` along with the
  default flags.
- Given the value `none`, `git fetch` is executed only with the default flags.
For example, the default value of `GIT_FETCH_EXTRA_FLAGS` is `--prune --quiet`,
so you can make `git fetch` more verbose by overriding this with just
`--prune`:
```yaml
variables:
  GIT_FETCH_EXTRA_FLAGS: --prune
script:
  - ls -al cache/
```
The configuration above results in `git fetch` being called this way:

```shell
git fetch origin $REFSPECS --depth 50 --prune
```

Where `$REFSPECS` is a value provided to the Runner internally by GitLab.
Job stages attempts
Introduced in GitLab; requires GitLab Runner v1.9+.
You can set the number of attempts that a running job makes for each of the
following stages:

| Variable | Description |
|----------|-------------|
| `GET_SOURCES_ATTEMPTS` | Number of attempts to fetch sources running a job |
| `ARTIFACT_DOWNLOAD_ATTEMPTS` | Number of attempts to download artifacts running a job |
| `RESTORE_CACHE_ATTEMPTS` | Number of attempts to restore the cache running a job |
| `EXECUTOR_JOB_SECTION_ATTEMPTS` | Since GitLab 12.10, the number of attempts to run a section in a job after a `No Such Container` error (Docker executor only). |

The default is one attempt.
Example:
```yaml
variables:
  GET_SOURCES_ATTEMPTS: 3
```
You can set them globally or per-job in the `variables` section.
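As a per-job sketch (the job name and script command are illustrative, not from
this document), several attempt variables can be combined and scoped to a
single job:

```yaml
test:
  variables:
    GET_SOURCES_ATTEMPTS: 3     # retry fetching sources up to 3 times
    RESTORE_CACHE_ATTEMPTS: 2   # retry restoring the cache up to 2 times
  script:
    - make test
```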
Shallow cloning
Introduced in GitLab 8.9 as an experimental feature.
NOTE: Note:
As of GitLab 12.0, newly created projects automatically have a default
`git depth` value of `50`.
You can specify the depth of fetching and cloning using `GIT_DEPTH`. This
allows shallow cloning of the repository, which can significantly speed up
cloning for repositories with a large number of commits or old, large binaries.
The value is passed to `git fetch` and `git clone`.
NOTE: Note: If you use a depth of 1 and have a queue of jobs or retry jobs,
jobs may fail.

Since Git fetching and cloning is based on a ref, such as a branch name,
Runners can't clone a specific commit SHA. If there are multiple jobs in the
queue, or you're retrying an old job, the commit to be tested needs to be
within the Git history that is cloned. Setting too small a value for
`GIT_DEPTH` can make it impossible to run these old commits, and
`unresolved reference` appears in job logs. You should then consider changing
`GIT_DEPTH` to a higher value.

Jobs that rely on `git describe` may not work correctly when `GIT_DEPTH` is
set, since only part of the Git history is present.
To fetch or clone only the last 3 commits:
```yaml
variables:
  GIT_DEPTH: "3"
```
You can set it globally or per-job in the `variables` section.
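Building on the `git describe` caveat above, a hedged per-job sketch (the job
name, depth value, and command are illustrative) could give one job more
history than the rest of the pipeline:

```yaml
release:
  variables:
    GIT_DEPTH: "1000"  # deeper history so git describe can reach an older tag
  script:
    - git describe --tags
```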
Custom build directories
Introduced in GitLab Runner 11.10
NOTE: Note:
This can only be used when `custom_build_dir` is enabled in the Runner's
configuration. This is the default configuration for the `docker` and
`kubernetes` executors.
By default, GitLab Runner clones the repository in a unique subpath of the
`$CI_BUILDS_DIR` directory. However, your project might require the code in a
specific directory (Go projects, for example). In that case, you can specify
the `GIT_CLONE_PATH` variable to tell the Runner in which directory to clone
the repository:
```yaml
variables:
  GIT_CLONE_PATH: $CI_BUILDS_DIR/project-name

test:
  script:
    - pwd
```
The `GIT_CLONE_PATH` always has to be within `$CI_BUILDS_DIR`. The directory
set in `$CI_BUILDS_DIR` depends on the executor and on the configuration of the
`runners.builds_dir` setting.
Handling concurrency
An executor using a concurrency greater than `1` might lead to failures,
because multiple jobs might be working on the same directory if the
`builds_dir` is shared between jobs. GitLab Runner does not try to prevent this
situation. It's up to the administrator and developers to comply with the
requirements of the Runner configuration.
To avoid this scenario, you can use a unique path within `$CI_BUILDS_DIR`,
because the Runner exposes two additional variables that provide a unique
concurrency ID:

- `$CI_CONCURRENT_ID`: Unique ID for all jobs running within the given
  executor.
- `$CI_CONCURRENT_PROJECT_ID`: Unique ID for all jobs running within the given
  executor and project.
The most stable configuration that should work well in any scenario and on any
executor is to use `$CI_CONCURRENT_ID` in the `GIT_CLONE_PATH`. For example:
```yaml
variables:
  GIT_CLONE_PATH: $CI_BUILDS_DIR/$CI_CONCURRENT_ID/project-name

test:
  script:
    - pwd
```
The `$CI_CONCURRENT_PROJECT_ID` should be used in conjunction with
`$CI_PROJECT_PATH`, as `$CI_PROJECT_PATH` provides the path of the repository,
that is, `group/subgroup/project`. For example:
```yaml
variables:
  GIT_CLONE_PATH: $CI_BUILDS_DIR/$CI_CONCURRENT_ID/$CI_PROJECT_PATH

test:
  script:
    - pwd
```
Nested paths
The value of `GIT_CLONE_PATH` is expanded once, and nesting variables within it
is not supported.

For example, suppose you define both of the variables below in your
`.gitlab-ci.yml` file:
```yaml
variables:
  GOPATH: $CI_BUILDS_DIR/go
  GIT_CLONE_PATH: $GOPATH/src/namespace/project
```
The value of `GIT_CLONE_PATH` is expanded once into
`$CI_BUILDS_DIR/go/src/namespace/project`, and results in failure because
`$CI_BUILDS_DIR` is not expanded.
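A possible workaround, assuming the nesting is the only problem, is to write
the expanded prefix out in full instead of referencing another variable:

```yaml
variables:
  GOPATH: $CI_BUILDS_DIR/go
  GIT_CLONE_PATH: $CI_BUILDS_DIR/go/src/namespace/project  # no nested variable, so it expands correctly
```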
Special YAML features
It's possible to use special YAML features like anchors (`&`), aliases (`*`),
and map merging (`<<`), which allow you to greatly reduce the complexity of
`.gitlab-ci.yml`.

Read more about the various YAML features.
In most cases, the `extends` keyword is more user friendly and should be used
over these special YAML features. YAML anchors may still need to be used to
merge arrays.
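As a brief illustration (the job names are hypothetical), the anchor-based
template shown in the next section could also be expressed with `extends`:

```yaml
.job_template:        # hidden job used as a template
  image: ruby:2.6
  services:
    - postgres
    - redis

test1:
  extends: .job_template  # inherit image and services from the template
  script:
    - test1 project
```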
Anchors
Introduced in GitLab 8.6 and GitLab Runner v1.1.1.
YAML has a handy feature called 'anchors', which lets you easily duplicate content across your document. Anchors can be used to duplicate or inherit properties, and are a perfect fit for use with hidden jobs to provide templates for your jobs. When there are duplicate keys, GitLab performs a reverse deep merge based on the keys.
The following example uses anchors and map merging. It creates two jobs,
`test1` and `test2`, that inherit the parameters of `.job_template`, each with
its own custom `script` defined:
```yaml
.job_template: &job_definition  # Hidden key that defines an anchor named 'job_definition'
  image: ruby:2.6
  services:
    - postgres
    - redis

test1:
  <<: *job_definition           # Merge the contents of the 'job_definition' alias
  script:
    - test1 project

test2:
  <<: *job_definition           # Merge the contents of the 'job_definition' alias
  script:
    - test2 project
```
`&` sets up the name of the anchor (`job_definition`), `<<` means "merge the
given hash into the current one", and `*` includes the named anchor
(`job_definition` again). The expanded version looks like this:
```yaml
.job_template:
  image: ruby:2.6
  services:
    - postgres
    - redis

test1:
  image: ruby:2.6
  services:
    - postgres
    - redis
  script:
    - test1 project

test2:
  image: ruby:2.6
  services:
    - postgres
    - redis
  script:
    - test2 project
```
Let's look at another example. This time we use anchors to define two sets of
services. This creates two jobs, `test:postgres` and `test:mysql`, that share
the `script` directive defined in `.job_template`, and the `services` directive
defined in `.postgres_services` and `.mysql_services` respectively:
```yaml
.job_template: &job_definition
  script:
    - test project
  tags:
    - dev

.postgres_services:
  services: &postgres_definition
    - postgres
    - ruby

.mysql_services:
  services: &mysql_definition
    - mysql
    - ruby

test:postgres:
  <<: *job_definition
  services: *postgres_definition
  tags:
    - postgres

test:mysql:
  <<: *job_definition
  services: *mysql_definition
```
The expanded version looks like this:
```yaml
.job_template:
  script:
    - test project
  tags:
    - dev

.postgres_services:
  services:
    - postgres
    - ruby

.mysql_services:
  services:
    - mysql
    - ruby

test:postgres:
  script:
    - test project
  services:
    - postgres
    - ruby
  tags:
    - postgres

test:mysql:
  script:
    - test project
  services:
    - mysql
    - ruby
  tags:
    - dev
```
You can see that the hidden jobs are conveniently used as templates.
NOTE: Note:
`tags: [dev]` has been overwritten by `tags: [postgres]`.
NOTE: Note:
You can't use YAML anchors across multiple files when leveraging the `include`
feature. Anchors are only valid within the file they were defined in. Instead
of using YAML anchors, you can use the `extends` keyword.
YAML anchors for `before_script` and `after_script`
Introduced in GitLab 12.5.
You can use YAML anchors with `before_script` and `after_script`, which makes
it possible to include a predefined list of commands in multiple jobs.
Example:
```yaml
.something_before: &something_before
  - echo 'something before'

.something_after: &something_after
  - echo 'something after'
  - echo 'another thing after'

job_name:
  before_script:
    - *something_before
  script:
    - echo 'this is the script'
  after_script:
    - *something_after
```
YAML anchors for `script`
Introduced in GitLab 12.5.
You can use YAML anchors with scripts, which makes it possible to include a predefined list of commands in multiple jobs.
For example:
```yaml
.something: &something
  - echo 'something'

job_name:
  script:
    - *something
    - echo 'this is the script'
```
YAML anchors for variables
YAML anchors can be used with `variables`, to easily repeat assignment of
variables across multiple jobs. They can also enable more flexibility when a
job requires a specific `variables` block that would otherwise override the
global variables.

In the example below, we override the `GIT_STRATEGY` variable without affecting
the use of the `SAMPLE_VARIABLE` variable:
```yaml
# global variables
variables: &global-variables
  SAMPLE_VARIABLE: sample_variable_value
  ANOTHER_SAMPLE_VARIABLE: another_sample_variable_value

# a job that needs to set the GIT_STRATEGY variable, yet depend on global variables
job_no_git_strategy:
  stage: cleanup
  variables:
    <<: *global-variables
    GIT_STRATEGY: none
  script: echo $SAMPLE_VARIABLE
```
Hide jobs
Introduced in GitLab 8.6 and GitLab Runner v1.1.1.
If you want to temporarily 'disable' a job, rather than commenting out all the lines where the job is defined:
```yaml
#hidden_job:
#  script:
#    - run test
```
You can instead start its name with a dot (`.`) and it won't be processed by
GitLab CI/CD. In the following example, `.hidden_job` is ignored:
```yaml
.hidden_job:
  script:
    - run test
```
Use this feature to ignore jobs, or use the special YAML features and transform the hidden jobs into templates.
Skip Pipeline
If your commit message contains `[ci skip]` or `[skip ci]`, using any
capitalization, the commit is created but the pipeline is skipped.

Alternatively, if you are using Git 2.10 or newer, you can pass the `ci.skip`
Git push option.
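For example, `git push -o ci.skip` pushes the commit without creating a
pipeline.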
Processing Git pushes
GitLab creates at most 4 branch and tag pipelines when pushing multiple changes
in a single `git push` invocation.

This limitation does not affect any of the updated Merge Request pipelines. All
updated Merge Requests have a pipeline created when using pipelines for merge
requests.
Deprecated parameters
The following parameters are deprecated.
Globally-defined types
CAUTION: Deprecated:
`types` is deprecated, and could be removed in a future release. Use `stages`
instead.
Job-defined type
CAUTION: Deprecated:
`type` is deprecated, and could be removed in a future release. Use `stage`
instead.
Globally-defined `image`, `services`, `cache`, `before_script`, `after_script`
Defining `image`, `services`, `cache`, `before_script`, and `after_script`
globally is deprecated. Support could be removed in a future release.

Use `default:` instead. For example:
```yaml
default:
  image: ruby:2.5
  services:
    - docker:dind
  cache:
    paths: [vendor/]
  before_script:
    - bundle install --path vendor/
  after_script:
    - rm -rf tmp/
```