Configuration of your jobs with .gitlab-ci.yml

This document describes the usage of `.gitlab-ci.yml`, the file that is used by
GitLab Runner to manage your project's jobs.

From version 7.12, GitLab CI uses a YAML file (`.gitlab-ci.yml`) for the project
configuration. It is placed in the root of your repository and contains
definitions of how your project should be built.

If you want a quick introduction to GitLab CI, follow our quick start guide.
NOTE: Note: If you have a mirrored repository where GitLab pulls from, you may need to enable pipeline triggering in your project's Settings > Repository > Pull from a remote repository > Trigger pipelines for mirror updates.
Jobs
The YAML file defines a set of jobs with constraints stating when they should
be run. You can specify an unlimited number of jobs which are defined as
top-level elements with an arbitrary name and always have to contain at least
the `script` clause.

```yaml
job1:
  script: "execute-script-for-job1"

job2:
  script: "execute-script-for-job2"
```

The above example is the simplest possible CI/CD configuration with two separate
jobs, where each of the jobs executes a different command.

Of course a command can execute code directly (`./configure; make; make install`)
or run a script (`test.sh`) in the repository.
Jobs are picked up by Runners and executed within the environment of the Runner.
Importantly, each job runs independently of the others.
Each job must have a unique name, but there are a few reserved keywords that
cannot be used as job names:

- `image`
- `services`
- `stages`
- `types`
- `before_script`
- `after_script`
- `variables`
- `cache`
A job is defined by a list of parameters that define the job behavior.
| Keyword | Required | Description |
|---------|----------|-------------|
| `script` | yes | Defines a shell script which is executed by the Runner |
| `extends` | no | Defines a configuration entry that this job is going to inherit from |
| `image` | no | Use a Docker image, covered in Using Docker Images |
| `services` | no | Use Docker services, covered in Using Docker Images |
| `stage` | no | Defines a job stage (default: `test`) |
| `type` | no | Alias for `stage` |
| `variables` | no | Define job variables on a job level |
| `only` | no | Defines a list of Git refs for which the job is created |
| `except` | no | Defines a list of Git refs for which the job is not created |
| `tags` | no | Defines a list of tags which are used to select a Runner |
| `allow_failure` | no | Allow the job to fail. A failed job doesn't contribute to the commit status |
| `when` | no | Define when to run the job. Can be `on_success`, `on_failure`, `always` or `manual` |
| `dependencies` | no | Define other jobs that a job depends on so that you can pass artifacts between them |
| `artifacts` | no | Define a list of job artifacts |
| `cache` | no | Define a list of files that should be cached between subsequent runs |
| `before_script` | no | Override a set of commands that are executed before the job |
| `after_script` | no | Override a set of commands that are executed after the job |
| `environment` | no | Defines the name of the environment to which this job deploys |
| `coverage` | no | Define code coverage settings for a given job |
| `retry` | no | Define when and how many times a job can be auto-retried in case of a failure |
| `parallel` | no | Defines how many instances of a job should be run in parallel |
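For orientation, here is a minimal sketch (not taken from the sections below; the job name, image, tag, commands and paths are all placeholders) that combines several of the keywords from the table in a single job:

```yaml
# Hypothetical job combining several of the keywords above.
compile:
  stage: build              # run in the build stage
  image: alpine:latest      # placeholder Docker image
  tags:
    - docker                # pick a Runner tagged "docker"
  script:
    - echo "compiling..."   # placeholder command
  artifacts:
    paths:
      - dist/               # placeholder output directory
  only:
    - master
```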
extends
Introduced in GitLab 11.3.
`extends` defines an entry name that a job that uses `extends` is going to
inherit from.

It is an alternative to using YAML anchors and is a little more flexible and
readable:
```yaml
.tests:
  script: rake test
  stage: test
  only:
    refs:
      - branches

rspec:
  extends: .tests
  script: rake rspec
  only:
    variables:
      - $RSPEC
```
In the example above, the `rspec` job inherits from the `.tests` template job.
GitLab will perform a reverse deep merge based on the keys. GitLab will:

- Merge the `rspec` contents into `.tests` recursively.
- Not merge the values of the keys.
This results in the following `rspec` job:

```yaml
rspec:
  script: rake rspec
  stage: test
  only:
    refs:
      - branches
    variables:
      - $RSPEC
```
NOTE: Note:
Note that `script: rake test` has been overwritten by `script: rake rspec`.
If you do want to include the `rake test`, have a look at
before_script and after_script.

`.tests` in this example is a hidden key, but it's
possible to inherit from regular jobs as well.
`extends` supports multi-level inheritance; however, it is not recommended to
use more than three levels. The maximum supported nesting level is 10.
The following example has two levels of inheritance:
```yaml
.tests:
  only:
    - pushes

.rspec:
  extends: .tests
  script: rake rspec

rspec 1:
  variables:
    RSPEC_SUITE: '1'
  extends: .rspec

rspec 2:
  variables:
    RSPEC_SUITE: '2'
  extends: .rspec

spinach:
  extends: .tests
  script: rake spinach
```
`extends` works across configuration files combined with `include`.
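As a hedged sketch (the included file path and job names below are hypothetical, not from the official examples), combining `extends` with `include` could look like this:

```yaml
# Content of a hypothetical included file: /ci/templates.yml
.deploy-template:
  stage: deploy
  script: echo "deploying..."

# Content of .gitlab-ci.yml
include: '/ci/templates.yml'

deploy-staging:
  extends: .deploy-template   # inherits stage and script from the included template
  only:
    - master
```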
pages
`pages` is a special job that is used to upload static content to GitLab that
can be used to serve your website. It has a special syntax, so the two
requirements below must be met:

- Any static content must be placed under a `public/` directory.
- `artifacts` with a path to the `public/` directory must be defined.

The example below simply moves all files from the root of the project to the
`public/` directory. The `.public` workaround is so `cp` doesn't also copy
`public/` to itself in an infinite loop:
```yaml
pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master
```
Read more on GitLab Pages user documentation.
image and services

These allow you to specify a custom Docker image and a list of services that
can be used for the duration of the job. The configuration of this feature is
covered in a separate document.
before_script and after_script

Introduced in GitLab 8.7 and requires GitLab Runner v1.2

`before_script` is used to define the command that should be run before all
jobs, including deploy jobs, but after the restoration of artifacts.
This can be an array or a multi-line string.

`after_script` is used to define the command that will be run after all
jobs, including failed ones. This has to be an array or a multi-line string.

The `before_script` and the main `script` are concatenated and run in a single
context/container. The `after_script` is run separately, so depending on the
executor, changes done outside of the working tree might not be visible, e.g.
software installed in the `before_script`.

It's possible to overwrite the globally defined `before_script` and
`after_script` if you set it per-job:
```yaml
before_script:
  - global before script

job:
  before_script:
    - execute this instead of global before script
  script:
    - my command
  after_script:
    - execute this after my script
```
stages

`stages` is used to define stages that can be used by jobs and is defined
globally.

The specification of `stages` allows for having flexible multi-stage pipelines.
The ordering of elements in `stages` defines the ordering of jobs' execution:

- Jobs of the same stage are run in parallel.
- Jobs of the next stage are run after the jobs from the previous stage complete successfully.

Let's consider the following example, which defines 3 stages:
```yaml
stages:
  - build
  - test
  - deploy
```
- First, all jobs of `build` are executed in parallel.
- If all jobs of `build` succeed, the `test` jobs are executed in parallel.
- If all jobs of `test` succeed, the `deploy` jobs are executed in parallel.
- If all jobs of `deploy` succeed, the commit is marked as `passed`.
- If any of the previous jobs fails, the commit is marked as `failed` and no
  jobs of further stages are executed.

There are also two edge cases worth mentioning:

- If no `stages` are defined in `.gitlab-ci.yml`, then `build`, `test` and
  `deploy` are allowed to be used as a job's stage by default.
- If a job doesn't specify a `stage`, the job is assigned the `test` stage
  (see the sketch below).
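As a small illustrative sketch (the job name and command are made up), a job that omits `stage` ends up in the default `test` stage:

```yaml
# No global `stages` defined, so build, test and deploy are available by default.
lint:
  script: echo "running lint checks"   # no `stage` given, so this job runs in the `test` stage
```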
stage

`stage` is defined per-job and relies on `stages` which is defined globally.
It allows grouping jobs into different stages, and jobs of the same `stage`
are executed in parallel. For example:
```yaml
stages:
  - build
  - test
  - deploy

job 1:
  stage: build
  script: make build dependencies

job 2:
  stage: build
  script: make build artifacts

job 3:
  stage: test
  script: make test

job 4:
  stage: deploy
  script: make deploy
```
types
CAUTION: Deprecated:
`types` is deprecated, and could be removed in one of the future releases.
Use stages instead.
script
`script` is the only required keyword that a job needs. It's a shell script
which is executed by the Runner. For example:

```yaml
job:
  script: "bundle exec rspec"
```

This parameter can also contain several commands using an array:

```yaml
job:
  script:
    - uname -a
    - bundle exec rspec
```
Sometimes, `script` commands will need to be wrapped in single or double quotes.
For example, commands that contain a colon (`:`) need to be wrapped in quotes so
that the YAML parser knows to interpret the whole thing as a string rather than
a "key: value" pair. Be careful when using special characters:
`:`, `{`, `}`, `[`, `]`, `,`, `&`, `*`, `#`, `?`, `|`, `-`, `<`, `>`, `=`, `!`, `%`, `@`, `` ` ``.
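For instance (the command below is purely illustrative), quoting keeps a colon followed by a space from being parsed as a YAML key/value pair:

```yaml
job:
  script:
    # Without the surrounding single quotes, `echo "deploy` would be read as a key
    # and `done"` as its value, breaking the YAML.
    - 'echo "deploy: done"'
```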
only and except (simplified)

`only` and `except` are two parameters that set a job policy to limit when
jobs are created:

- `only` defines the names of branches and tags for which the job will run.
- `except` defines the names of branches and tags for which the job will not run.

There are a few rules that apply to the usage of job policy:

- `only` and `except` are inclusive. If both `only` and `except` are defined
  in a job specification, the ref is filtered by `only` and `except`.
- `only` and `except` allow the use of regular expressions (using Ruby regexp syntax).
- `only` and `except` allow specifying a repository path to filter jobs for forks.

In addition, `only` and `except` allow the use of special keywords:
| Value | Description |
|-------|-------------|
| `branches` | When a Git reference of a pipeline is a branch. |
| `tags` | When a Git reference of a pipeline is a tag. |
| `api` | When the pipeline has been triggered by a second pipelines API (not triggers API). |
| `external` | When using CI services other than GitLab. |
| `pipelines` | For multi-project triggers, created using the API with `CI_JOB_TOKEN`. |
| `pushes` | Pipeline is triggered by a `git push` by the user. |
| `schedules` | For scheduled pipelines. |
| `triggers` | For pipelines created using a trigger token. |
| `web` | For pipelines created using the Run pipeline button in the GitLab UI (under your project's Pipelines). |
| `merge_requests` | When a merge request is created or updated (see pipelines for merge requests). |
In the example below, `job` will run only for refs that start with `issue-`,
whereas all branches will be skipped:

```yaml
job:
  # use regexp
  only:
    - /^issue-.*$/
  # use special keyword
  except:
    - branches
```
In this example, `job` will run only for refs that are tagged, or if a build is
explicitly requested via an API trigger or a Pipeline Schedule:

```yaml
job:
  # use special keywords
  only:
    - tags
    - triggers
    - schedules
```
The repository path can be used to have jobs executed only for the parent
repository and not forks:

```yaml
job:
  only:
    - branches@gitlab-org/gitlab-ce
  except:
    - master@gitlab-org/gitlab-ce
```
The above example will run `job` for all branches on `gitlab-org/gitlab-ce`,
except `master`.

If a job has neither an `only` nor an `except` rule, `only: ['branches', 'tags']`
is set by default.

For example,

```yaml
job:
  script: echo 'test'
```

is translated to:

```yaml
job:
  script: echo 'test'
  only: ['branches', 'tags']
```
only and except (complex)

- `refs` and `kubernetes` policies introduced in GitLab 10.0.
- `variables` policy introduced in GitLab 10.7.
- `changes` policy introduced in GitLab 11.4.

CAUTION: Warning: This is an alpha feature, and it is subject to change at any
time without prior notice!

GitLab supports both simple and complex strategies, so it's possible to use an
array and a hash configuration scheme.

Four keys are available:

- `refs`
- `variables`
- `changes`
- `kubernetes`

If you use multiple keys under `only` or `except`, they act as an AND. The logic is:

(any of refs) AND (any of variables) AND (any of changes) AND (if kubernetes is active)
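As a hedged sketch of how the keys combine (the job name, ref and file path are placeholders), a job with several keys under `only` is created only when every key matches:

```yaml
deploy:
  script: echo "deploying..."
  only:
    refs:
      - master              # the ref must be master
    kubernetes: active      # AND a Kubernetes service must be active in the project
    changes:
      - Dockerfile          # AND the push must modify the Dockerfile
```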
only:refs and except:refs

The `refs` strategy can take the same values as the simplified only/except
configuration.

In the example below, the `deploy` job is going to be created only when the
pipeline has been scheduled or runs for the `master` branch:

```yaml
deploy:
  only:
    refs:
      - master
      - schedules
```
only:kubernetes and except:kubernetes

The `kubernetes` strategy accepts only the `active` keyword.

In the example below, the `deploy` job is going to be created only when the
Kubernetes service is active in the project:

```yaml
deploy:
  only:
    kubernetes: active
```
only:variables and except:variables

The `variables` keyword is used to define variables expressions. In other words,
you can use predefined variables, project/group variables, or
environment-scoped variables to define an expression that GitLab evaluates
in order to decide whether a job should be created or not.

Examples of using variables expressions:

```yaml
deploy:
  script: cap staging deploy
  only:
    refs:
      - branches
    variables:
      - $RELEASE == "staging"
      - $STAGING
```
Another use case is excluding jobs depending on a commit message:

```yaml
end-to-end:
  script: rake test:end-to-end
  except:
    variables:
      - $CI_COMMIT_MESSAGE =~ /skip-end-to-end-tests/
```

Learn more about variables expressions.
only:changes and except:changes

Using the `changes` keyword with `only` or `except` makes it possible to define
whether a job should be created based on files modified by a Git push event.

For example:

```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  only:
    changes:
      - Dockerfile
      - docker/scripts/*
      - dockerfiles/**/*
      - more_scripts/*.{rb,py,sh}
```
In the scenario above, if you are pushing multiple commits to GitLab to an
existing branch, GitLab creates and triggers the `docker build` job, provided
that one of the commits contains changes to either:

- The `Dockerfile` file.
- Any of the files inside the `docker/scripts/` directory.
- Any of the files and subdirectories inside the `dockerfiles` directory.
- Any of the files with `rb`, `py`, `sh` extensions inside the `more_scripts` directory.
CAUTION: Warning: There are some caveats when using this feature with new branches and tags. See the section below.
Using changes with new branches and tags

If you are pushing a new branch or a new tag to GitLab, the policy always
evaluates to true and GitLab will create a job. This feature is not connected
with merge requests yet, and because GitLab creates pipelines before a user can
create a merge request, the target branch is unknown at that point.

Without a target branch, it is not possible to know what the common ancestor is,
thus we always create a job in that case. This feature works best for stable
branches like `master` because in that case GitLab uses the previous commit
that is present in the branch to compare against the latest SHA that was pushed.
tags

`tags` is used to select specific Runners from the list of all Runners that are
allowed to run this project.

During the registration of a Runner, you can specify the Runner's tags, for
example `ruby`, `postgres`, `development`.

`tags` allow you to run jobs with Runners that have the specified tags
assigned to them:

```yaml
job:
  tags:
    - ruby
    - postgres
```

The specification above will make sure that `job` is built by a Runner that
has both `ruby` AND `postgres` tags defined.
Tags are also a great way to run different jobs on different platforms. For
example, given an OS X Runner with tag `osx` and a Windows Runner with tag
`windows`, the following jobs run on the respective platforms:

```yaml
windows job:
  stage: build
  tags:
    - windows
  script:
    - echo Hello, %USERNAME%!

osx job:
  stage: build
  tags:
    - osx
  script:
    - echo "Hello, $USER!"
```
allow_failure
`allow_failure` is used when you want to allow a job to fail without impacting
the rest of the CI suite. Failed jobs don't contribute to the commit status.
The default value is `false`, except for manual jobs.

When enabled and the job fails, the pipeline will be successful/green for all
intents and purposes, but a "CI build passed with warnings" message will be
displayed on the merge request, commit, or job page. This is to be used by jobs
that are allowed to fail, but where failure indicates that some other (manual)
steps should be taken elsewhere.
In the example below, `job1` and `job2` will run in parallel, but if `job1`
fails, it will not stop the next stage from running, since it's marked with
`allow_failure: true`:

```yaml
job1:
  stage: test
  script:
    - execute_script_that_will_fail
  allow_failure: true

job2:
  stage: test
  script:
    - execute_script_that_will_succeed

job3:
  stage: deploy
  script:
    - deploy_to_staging
```
when
`when` is used to implement jobs that are run in case of failure or despite the
failure.

`when` can be set to one of the following values:

- `on_success` - execute job only when all jobs from prior stages succeed. This is the default.
- `on_failure` - execute job only when at least one job from prior stages fails.
- `always` - execute job regardless of the status of jobs from prior stages.
- `manual` - execute job manually (added in GitLab 8.10). Read about manual actions below.
For example:

```yaml
stages:
  - build
  - cleanup_build
  - test
  - deploy
  - cleanup

build_job:
  stage: build
  script:
    - make build

cleanup_build_job:
  stage: cleanup_build
  script:
    - cleanup build when failed
  when: on_failure

test_job:
  stage: test
  script:
    - make test

deploy_job:
  stage: deploy
  script:
    - make deploy
  when: manual

cleanup_job:
  stage: cleanup
  script:
    - cleanup after jobs
  when: always
```
The above script will:

- Execute `cleanup_build_job` only when `build_job` fails.
- Always execute `cleanup_job` as the last step in the pipeline regardless of
  success or failure.
- Allow you to manually execute `deploy_job` from GitLab's UI.
when:manual
Notes:
- Introduced in GitLab 8.10.
- Blocking manual actions were introduced in GitLab 9.0.
- Protected actions were introduced in GitLab 9.2.
Manual actions are a special type of job that are not executed automatically, they need to be explicitly started by a user. An example usage of manual actions would be a deployment to a production environment. Manual actions can be started from the pipeline, job, environment, and deployment views. Read more at the environments documentation.
Manual actions can be either optional or blocking. Blocking manual actions will block the execution of the pipeline at the stage this action is defined in. It's possible to resume execution of the pipeline when someone executes a blocking manual action by clicking a play button.
When a pipeline is blocked, it will not be merged if Merge When Pipeline Succeeds
is set. Blocked pipelines also have a special status, called manual.

Manual actions are non-blocking by default. If you want to make a manual action
blocking, it is necessary to add `allow_failure: false` to the job's definition
in `.gitlab-ci.yml`.

Optional manual actions have `allow_failure: true` set by default and their
statuses do not contribute to the overall pipeline status. So, if a manual
action fails, the pipeline will eventually succeed.

Manual actions are considered to be write actions, so permissions for protected
branches are used when a user wants to trigger an action. In other words, in
order to trigger a manual action assigned to a branch that the pipeline is
running for, the user needs to have the ability to merge to this branch.
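For illustration (the job name and command are made up), a blocking manual deployment gate could be defined like this:

```yaml
deploy to production:
  stage: deploy
  script: make deploy-production   # placeholder deployment command
  when: manual                     # must be started by a user
  allow_failure: false             # makes this manual action blocking
```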
when:delayed
Introduced in GitLab 11.4.
Delayed jobs are for executing scripts after a certain period.
This is useful if you want to avoid jobs entering the `pending` state immediately.

You can set the period with the `start_in` key. The value of `start_in` is an
elapsed time in seconds, unless a unit is provided. `start_in` must be less than
or equal to one hour. Examples of valid values include:

- `10 seconds`
- `30 minutes`
- `1 hour`
When there is a delayed job in a stage, the pipeline will not progress until the delayed job has finished. This means this keyword can also be used for inserting delays between different stages.
The timer of a delayed job starts immediately after the previous stage has completed. Similar to other types of jobs, a delayed job's timer will not start unless the previous stage passed.
The following example creates a job named `timed rollout 10%` that is executed
30 minutes after the previous stage has completed:

```yaml
timed rollout 10%:
  stage: deploy
  script: echo 'Rolling out 10% ...'
  when: delayed
  start_in: 30 minutes
```

You can stop the active timer of a delayed job by clicking the Unschedule
button. The job will then never be executed unless you start it manually.

You can start a delayed job immediately by clicking the Play button. GitLab
Runner will then pick up and start the job shortly afterwards.
environment
Notes:
- Introduced in GitLab 8.9.
- You can read more about environments and find more examples in the documentation about environments.
`environment` is used to define that a job deploys to a specific environment.
If `environment` is specified and no environment under that name exists, a new
one will be created automatically.

In its simplest form, the `environment` keyword can be defined like:

```yaml
deploy to production:
  stage: deploy
  script: git push production HEAD:master
  environment:
    name: production
```

In the above example, the `deploy to production` job will be marked as doing a
deployment to the `production` environment.
environment:name
Notes:

- Introduced in GitLab 8.11.
- Before GitLab 8.11, the name of an environment could be defined as a string
  like `environment: production`. The recommended way now is to define it under
  the `name` keyword.
- The `name` parameter can use any of the defined CI variables, including
  predefined, secure variables and `.gitlab-ci.yml` variables. You however
  cannot use variables defined under `script`.

The environment name can contain:

- letters
- digits
- spaces
- `-`
- `_`
- `/`
- `$`
- `{`
- `}`

Common names are `qa`, `staging`, and `production`, but you can use whatever
name works with your workflow.
Instead of defining the name of the environment right after the `environment`
keyword, it is also possible to define it as a separate value. For that, use
the `name` keyword under `environment`:

```yaml
deploy to production:
  stage: deploy
  script: git push production HEAD:master
  environment:
    name: production
```
environment:url
Notes:

- Introduced in GitLab 8.11.
- Before GitLab 8.11, the URL could be added only in GitLab's UI. The
  recommended way now is to define it in `.gitlab-ci.yml`.
- The `url` parameter can use any of the defined CI variables, including
  predefined, secure variables and `.gitlab-ci.yml` variables. You however
  cannot use variables defined under `script`.

This is an optional value that, when set, exposes buttons in various places in
GitLab which, when clicked, take you to the defined URL.

In the example below, if the job finishes successfully, it will create buttons
in the merge requests and in the environments/deployments pages which will
point to `https://prod.example.com`.

```yaml
deploy to production:
  stage: deploy
  script: git push production HEAD:master
  environment:
    name: production
    url: https://prod.example.com
```
environment:on_stop
Notes:
- Introduced in GitLab 8.13.
- Starting with GitLab 8.14, when you have an environment that has a stop action defined, GitLab will automatically trigger a stop action when the associated branch is deleted.
Closing (stopping) environments can be achieved with the `on_stop` keyword
defined under `environment`. It declares a different job that runs in order to
close the environment.

Read the `environment:action` section for an example.
environment:action
Introduced in GitLab 8.13.
The `action` keyword is to be used in conjunction with `on_stop` and is defined
in the job that is called to close the environment.

Take for instance:

```yaml
review_app:
  stage: deploy
  script: make deploy-app
  environment:
    name: review
    on_stop: stop_review_app

stop_review_app:
  stage: deploy
  script: make delete-app
  when: manual
  environment:
    name: review
    action: stop
```
In the above example we set up the `review_app` job to deploy to the `review`
environment, and we also defined a new `stop_review_app` job under `on_stop`.
Once the `review_app` job is successfully finished, it will trigger the
`stop_review_app` job based on what is defined under `when`. In this case we
set it up to `manual` so it will need a manual action via GitLab's web
interface in order to run.

The `stop_review_app` job is required to have the following keywords defined:

- `when` - reference
- `environment:name`
- `environment:action`
- `stage` should be the same as the `review_app` in order for the environment
  to stop automatically when the branch is deleted
Dynamic environments
Notes:

- Introduced in GitLab 8.12 and GitLab Runner 1.6.
- The `$CI_ENVIRONMENT_SLUG` was introduced in GitLab 8.15.
- The `name` and `url` parameters can use any of the defined CI variables,
  including predefined, secure variables and `.gitlab-ci.yml` variables. You
  however cannot use variables defined under `script`.

For example:

```yaml
deploy as review app:
  stage: deploy
  script: make deploy
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_ENVIRONMENT_SLUG.example.com/
```

The `deploy as review app` job will be marked as a deployment to dynamically
create the `review/$CI_COMMIT_REF_NAME` environment, where `$CI_COMMIT_REF_NAME`
is an environment variable set by the Runner. The `$CI_ENVIRONMENT_SLUG`
variable is based on the environment name, but suitable for inclusion in URLs.
In this case, if the `deploy as review app` job was run in a branch named
`pow`, this environment would be accessible with a URL like
`https://review-pow.example.com/`.

This of course implies that the underlying server which hosts the application
is properly configured.

The common use case is to create dynamic environments for branches and use them
as Review Apps. You can see a simple example using Review Apps at
https://gitlab.com/gitlab-examples/review-apps-nginx/.
cache
Notes:

- Introduced in GitLab Runner v0.7.0.
- `cache` can be set globally and per-job.
- From GitLab 9.0, caching is enabled and shared between pipelines and jobs by default.
- From GitLab 9.2, caches are restored before artifacts.

TIP: Learn more: Read how caching works and find out some good practices in the
caching dependencies documentation.

`cache` is used to specify a list of files and directories which should be
cached between jobs. You can only use paths that are within the project
workspace.

If `cache` is defined outside the scope of jobs, it means it is set globally
and all jobs will use that definition.
cache:paths
Use the `paths` directive to choose which files or directories will be cached.
Wildcards can be used as well.

Cache all files in `binaries` that end in `.apk` and the `.config` file:

```yaml
rspec:
  script: test
  cache:
    paths:
      - binaries/*.apk
      - .config
```

Locally defined cache overrides globally defined options. The following `rspec`
job will cache only `binaries/`:

```yaml
cache:
  paths:
    - my/files

rspec:
  script: test
  cache:
    key: rspec
    paths:
      - binaries/
```

Note that since cache is shared between jobs, if you're using different paths
for different jobs, you should also set a different `cache:key`, otherwise the
cache content can be overwritten.
cache:key
Introduced in GitLab Runner v1.0.0.
Since the cache is shared between jobs, if you're using different paths for
different jobs, you should also set a different `cache:key`, otherwise the
cache content can be overwritten.

The `key` directive allows you to define the affinity of caching between jobs,
allowing you to have a single cache for all jobs, a cache per-job, a cache
per-branch, or any other way that fits your workflow. This way, you can fine
tune caching, allowing you to cache data between different jobs or even
different branches.

The `cache:key` variable can use any of the predefined variables. The default
key, if not set, is the literal `default`, which means everything is shared
between pipelines and jobs by default, starting from GitLab 9.0.

NOTE: Note:
The `cache:key` variable cannot contain the `/` character, or the equivalent
URI-encoded `%2F`; a value made only of dots (`.`, `%2E`) is also forbidden.
For example, to enable per-branch caching:

```yaml
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - binaries/
```

If you use Windows Batch to run your shell scripts you need to replace `$`
with `%`:

```yaml
cache:
  key: "%CI_COMMIT_REF_SLUG%"
  paths:
    - binaries/
```
cache:untracked
Set `untracked: true` to cache all files that are untracked in your Git
repository:

```yaml
rspec:
  script: test
  cache:
    untracked: true
```

Cache all Git untracked files and files in `binaries`:

```yaml
rspec:
  script: test
  cache:
    untracked: true
    paths:
      - binaries/
```
cache:policy
Introduced in GitLab 9.4.
The default behaviour of a caching job is to download the files at the start of
execution, and to re-upload them at the end. This allows any changes made by the
job to be persisted for future runs, and is known as the `pull-push` cache
policy.

If you know the job doesn't alter the cached files, you can skip the upload step
by setting `policy: pull` in the job specification. Typically, this would be
twinned with an ordinary cache job at an earlier stage to ensure the cache
is updated from time to time:

```yaml
stages:
  - setup
  - test

prepare:
  stage: setup
  cache:
    key: gems
    paths:
      - vendor/bundle
  script:
    - bundle install --deployment

rspec:
  stage: test
  cache:
    key: gems
    paths:
      - vendor/bundle
    policy: pull
  script:
    - bundle exec rspec ...
```

This helps to speed up job execution and reduce load on the cache server,
especially when you have a large number of cache-using jobs executing in
parallel.

Additionally, if you have a job that unconditionally recreates the cache without
reference to its previous contents, you can use `policy: push` in that job to
skip the download step.
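A minimal sketch of that `policy: push` case, reusing the cache key and paths from the example above (the job name is made up):

```yaml
rebuild cache:
  stage: setup
  cache:
    key: gems
    paths:
      - vendor/bundle
    policy: push                   # upload the freshly built cache; skip downloading the old one
  script:
    - bundle install --deployment
```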
artifacts
Notes:

- Introduced in GitLab Runner v0.7.0 for non-Windows platforms.
- Windows support was added in GitLab Runner v1.0.0.
- From GitLab 9.2, caches are restored before artifacts.
- Not all executors are supported.
- Job artifacts are only collected for successful jobs by default.

`artifacts` is used to specify a list of files and directories which should be
attached to the job after success.

The artifacts will be sent to GitLab after the job finishes successfully and
will be available for download in the GitLab UI.
artifacts:paths
You can only use paths that are within the project workspace. To pass artifacts
between different jobs, see dependencies.

Send all files in `binaries` and `.config`:

```yaml
artifacts:
  paths:
    - binaries/
    - .config
```

To disable artifact passing, define the job with empty dependencies:

```yaml
job:
  stage: build
  script: make build
  dependencies: []
```

You may want to create artifacts only for tagged releases to avoid filling the
build server storage with temporary build artifacts.

Create artifacts only for tags (`default-job` will not create artifacts):

```yaml
default-job:
  script:
    - mvn test -U
  except:
    - tags

release-job:
  script:
    - mvn package -U
  artifacts:
    paths:
      - target/*.war
  only:
    - tags
```
artifacts:name
Introduced in GitLab 8.6 and GitLab Runner v1.1.0.
The `name` directive allows you to define the name of the created artifacts
archive. That way, you can have a unique name for every archive, which could be
useful when you'd like to download the archive from GitLab. The `artifacts:name`
variable can make use of any of the predefined variables.
The default name is `artifacts`, which becomes `artifacts.zip` when downloaded.

NOTE: Note:
If your branch name contains forward slashes (e.g. `feature/my-feature`) it is
advised to use `$CI_COMMIT_REF_SLUG` instead of `$CI_COMMIT_REF_NAME` for
proper naming of the artifact.
To create an archive with a name of the current job:

```yaml
job:
  artifacts:
    name: "$CI_JOB_NAME"
    paths:
      - binaries/
```

To create an archive with a name of the current branch or tag including only
the binaries directory:

```yaml
job:
  artifacts:
    name: "$CI_COMMIT_REF_NAME"
    paths:
      - binaries/
```

To create an archive with a name of the current job and the current branch or
tag including only the binaries directory:

```yaml
job:
  artifacts:
    name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME"
    paths:
      - binaries/
```

To create an archive with a name of the current stage and branch name:

```yaml
job:
  artifacts:
    name: "$CI_JOB_STAGE-$CI_COMMIT_REF_NAME"
    paths:
      - binaries/
```

If you use Windows Batch to run your shell scripts you need to replace `$`
with `%`:

```yaml
job:
  artifacts:
    name: "%CI_JOB_STAGE%-%CI_COMMIT_REF_NAME%"
    paths:
      - binaries/
```

If you use Windows PowerShell to run your shell scripts you need to replace `$`
with `$env:`:

```yaml
job:
  artifacts:
    name: "$env:CI_JOB_STAGE-$env:CI_COMMIT_REF_NAME"
    paths:
      - binaries/
```
artifacts:untracked
`artifacts:untracked` is used to add all Git untracked files as artifacts (along
with the paths defined in `artifacts:paths`).

NOTE: Note:
To exclude the folders/files which should not be a part of `untracked`, just
add them to `.gitignore`.

Send all Git untracked files:

```yaml
artifacts:
  untracked: true
```

Send all Git untracked files and files in `binaries`:

```yaml
artifacts:
  untracked: true
  paths:
    - binaries/
```
artifacts:when
Introduced in GitLab 8.9 and GitLab Runner v1.3.0.
`artifacts:when` is used to upload artifacts on job failure or despite the
failure.

`artifacts:when` can be set to one of the following values:

- `on_success` - upload artifacts only when the job succeeds. This is the default.
- `on_failure` - upload artifacts only when the job fails.
- `always` - upload artifacts regardless of the job status.

To upload artifacts only when the job fails:

```yaml
job:
  artifacts:
    when: on_failure
```
artifacts:expire_in
Introduced in GitLab 8.9 and GitLab Runner v1.3.0.
`expire_in` allows you to specify how long artifacts should live before they
expire and are therefore deleted, counting from the time they are uploaded and
stored on GitLab. If the expiry time is not defined, it defaults to the
instance-wide setting (30 days by default, forever on GitLab.com).

You can use the Keep button on the job page to override expiration and keep
artifacts forever.

After their expiry, artifacts are deleted hourly by default (via a cron job),
and are not accessible anymore.

The value of `expire_in` is an elapsed time in seconds, unless a unit is
provided. Examples of parsable values:

- `42`
- `3 mins 4 sec`
- `2 hrs 20 min`
- `2h20min`
- `6 mos 1 day`
- `47 yrs 6 mos and 4d`
- `3 weeks and 2 days`

To expire artifacts 1 week after being uploaded:

```yaml
job:
  artifacts:
    expire_in: 1 week
```
artifacts:reports
Introduced in GitLab 11.2. Requires GitLab Runner 11.2 and above.
The `reports` keyword is used for collecting test reports from jobs and
exposing them in GitLab's UI (merge requests, pipeline views). Read how to use
this with JUnit reports.

NOTE: Note:
The test reports are collected regardless of the job results (success or
failure). You can use `artifacts:expire_in` to set up an expiration date for
their artifacts.

NOTE: Note:
If you also want the ability to browse the report output files, include the
`artifacts:paths` keyword.
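As a sketch of that combination (the file name mirrors the RSpec example in the next section), the same report file can both feed the test report and remain browsable as a regular artifact:

```yaml
rspec:
  stage: test
  script:
    - bundle install
    - rspec --format RspecJunitFormatter --out rspec.xml
  artifacts:
    paths:
      - rspec.xml        # keep the raw XML browsable/downloadable
    reports:
      junit: rspec.xml   # also parse it as a JUnit test report
```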
artifacts:reports:junit
Introduced in GitLab 11.2. Requires GitLab Runner 11.2 and above.
The `junit` report collects JUnit XML files as artifacts. Although JUnit was
originally developed in Java, there are many third-party ports for other
languages like JavaScript, Python, Ruby, etc.

See JUnit test reports for more details and examples. Below is an example of
collecting a JUnit XML file from Ruby's RSpec test tool:

```yaml
rspec:
  stage: test
  script:
    - bundle install
    - rspec --format RspecJunitFormatter --out rspec.xml
  artifacts:
    reports:
      junit: rspec.xml
```

The collected JUnit reports will be uploaded to GitLab as an artifact and will
be automatically shown in merge requests.

NOTE: Note:
In case the JUnit tool you use exports to multiple XML files, you can specify
multiple test report paths within a single job and they will be automatically
concatenated into a single file. Use a filename pattern (`junit: rspec-*.xml`),
an array of filenames (`junit: [rspec-1.xml, rspec-2.xml, rspec-3.xml]`), or a
combination thereof (`junit: [rspec.xml, test-results/TEST-*.xml]`).
artifacts:reports:codequality
[STARTER]
Introduced in GitLab 11.5. Requires GitLab Runner 11.5 and above.
The codequality
report collects CodeQuality issues
as artifacts.
The collected Code Quality report will be uploaded to GitLab as an artifact and will be automatically shown in merge requests.
artifacts:reports:sast
[ULTIMATE]
Introduced in GitLab 11.5. Requires GitLab Runner 11.5 and above.
The sast
report collects SAST vulnerabilities
as artifacts.
The collected SAST report will be uploaded to GitLab as an artifact and will be automatically shown in merge requests, pipeline view and provide data for security dashboards.
artifacts:reports:dependency_scanning
[ULTIMATE]
Introduced in GitLab 11.5. Requires GitLab Runner 11.5 and above.
The dependency_scanning
report collects Dependency Scanning vulnerabilities
as artifacts.
The collected Dependency Scanning report will be uploaded to GitLab as an artifact and will be automatically shown in merge requests, pipeline view and provide data for security dashboards.
artifacts:reports:container_scanning
[ULTIMATE]
Introduced in GitLab 11.5. Requires GitLab Runner 11.5 and above.
The container_scanning
report collects Container Scanning vulnerabilities
as artifacts.
The collected Container Scanning report will be uploaded to GitLab as an artifact and will be automatically shown in merge requests, pipeline view and provide data for security dashboards.
artifacts:reports:dast
[ULTIMATE]
Introduced in GitLab 11.5. Requires GitLab Runner 11.5 and above.
The dast
report collects DAST vulnerabilities
as artifacts.
The collected DAST report will be uploaded to GitLab as an artifact and will be automatically shown in merge requests, pipeline view and provide data for security dashboards.
artifacts:reports:license_management
[ULTIMATE]
Introduced in GitLab 11.5. Requires GitLab Runner 11.5 and above.
The license_management
report collects Licenses
as artifacts.
The collected License Management report will be uploaded to GitLab as an artifact and will be automatically shown in merge requests, pipeline view and provide data for security dashboards.
artifacts:reports:performance
[PREMIUM]
Introduced in GitLab 11.5. Requires GitLab Runner 11.5 and above.
The performance
report collects Performance metrics
as artifacts.
The collected Performance report will be uploaded to GitLab as an artifact and will be automatically shown in merge requests.
dependencies
Introduced in GitLab 8.6 and GitLab Runner v1.1.1.
This feature should be used in conjunction with `artifacts` and allows you to
define the artifacts to pass between different jobs.

Note that `artifacts` from all previous stages are passed by default.

To use this feature, define `dependencies` in the context of the job and pass
a list of all previous jobs from which the artifacts should be downloaded.
You can only define jobs from stages that are executed before the current one.
An error will be shown if you define jobs from the current stage or the next
ones. Defining an empty array will skip downloading any artifacts for that job.
The status of the previous job is not considered when using `dependencies`, so
if it failed or it is a manual job that was not run, no error occurs.

In the following example, we define two jobs with artifacts, `build:osx` and
`build:linux`. When `test:osx` is executed, the artifacts from `build:osx` will
be downloaded and extracted in the context of the build. The same happens
for `test:linux` and artifacts from `build:linux`.

The job `deploy` will download artifacts from all previous jobs because of
the stage precedence:
```yaml
build:osx:
  stage: build
  script: make build:osx
  artifacts:
    paths:
      - binaries/

build:linux:
  stage: build
  script: make build:linux
  artifacts:
    paths:
      - binaries/

test:osx:
  stage: test
  script: make test:osx
  dependencies:
    - build:osx

test:linux:
  stage: test
  script: make test:linux
  dependencies:
    - build:linux

deploy:
  stage: deploy
  script: make deploy
```
When a dependent job fails

Introduced in GitLab 10.3.

If the artifacts of a job that is set as a dependency have expired or been
erased, then the dependent job will fail.

NOTE: Note: You can ask your administrator to flip this switch and bring back
the old behavior.
coverage
Introduced in GitLab 8.17.
`coverage` allows you to configure how code coverage will be extracted from the
job output.

Regular expressions are the only valid kind of value expected here. So, using
surrounding `/` is mandatory in order to consistently and explicitly represent
a regular expression string. You must escape special characters if you want to
match them literally.

A simple example:

```yaml
job1:
  script: rspec
  coverage: '/Code coverage: \d+\.\d+/'
```
retry
Introduced in GitLab 9.5. Behaviour expanded in GitLab 11.5 to control on which failures to retry.
`retry` allows you to configure how many times a job is going to be retried in
case of a failure.

When a job fails and has `retry` configured, it is going to be processed again
up to the number of times specified by the `retry` keyword.

If `retry` is set to 2, and a job succeeds in a second run (first retry), it
won't be retried again. The `retry` value has to be a positive integer, equal
to or larger than 0, but lower than or equal to 2 (two retries maximum, three
runs in total).

A simple example to retry in all failure cases:

```yaml
test:
  script: rspec
  retry: 2
```

By default, a job will be retried on all failure cases. To have better control
over which failures to retry, `retry` can be a hash with the following keys:

- `max`: The maximum number of retries.
- `when`: The failure cases to retry.
To retry only runner system failures at maximum two times:

```yaml
test:
  script: rspec
  retry:
    max: 2
    when: runner_system_failure
```

If there is another failure, other than a runner system failure, the job will
not be retried.

To retry on multiple failure cases, `when` can also be an array of failures:

```yaml
test:
  script: rspec
  retry:
    max: 2
    when:
      - runner_system_failure
      - stuck_or_timeout_failure
```
Possible values for `when` are:

- `always`: Retry on any failure (default).
- `unknown_failure`: Retry when the failure reason is unknown.
- `script_failure`: Retry when the script failed.
- `api_failure`: Retry on API failure.
- `stuck_or_timeout_failure`: Retry when the job got stuck or timed out.
- `runner_system_failure`: Retry if there was a runner system failure (e.g. setting up the job failed).
- `missing_dependency_failure`: Retry if a dependency was missing.
- `runner_unsupported`: Retry if the runner was unsupported.
parallel
Introduced in GitLab 11.5.
`parallel` allows you to configure how many instances of a job to run in
parallel. This value has to be greater than or equal to two (2) and less than
or equal to 50.

This creates N instances of the same job that run in parallel. They're named
sequentially from `job_name 1/N` to `job_name N/N`.

For every job, the `CI_NODE_INDEX` and `CI_NODE_TOTAL` environment variables
are set.

A simple example:

```yaml
test:
  script: rspec
  parallel: 5
```
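The two Runner-provided variables can then be used to split the work between the instances; the helper script below is purely hypothetical and only illustrates the idea:

```yaml
test:
  parallel: 5
  script:
    # Each of the 5 instances sees its own CI_NODE_INDEX (1..5) and CI_NODE_TOTAL (5).
    - echo "Running slice $CI_NODE_INDEX of $CI_NODE_TOTAL"
    - ./run-test-slice.sh "$CI_NODE_INDEX" "$CI_NODE_TOTAL"   # hypothetical helper script
```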
include
Introduced in GitLab Premium 10.5. Available for Starter, Premium and Ultimate since 10.6. Behaviour expanded in GitLab 10.8 to allow more flexible overriding. Moved to GitLab Core in 11.4
Using the `include` keyword, you can allow the inclusion of external YAML files.

In the following example, the content of `.before-script-template.yml` will be
automatically fetched and evaluated along with the content of `.gitlab-ci.yml`:

```yaml
# Content of https://gitlab.com/awesome-project/raw/master/.before-script-template.yml

before_script:
  - apt-get update -qq && apt-get install -y -qq sqlite3 libsqlite3-dev nodejs
  - gem install bundler --no-ri --no-rdoc
  - bundle install --jobs $(nproc) "${FLAGS[@]}"
```

```yaml
# Content of .gitlab-ci.yml

include: 'https://gitlab.com/awesome-project/raw/master/.before-script-template.yml'

rspec:
  script:
    - bundle exec rspec
```

NOTE: Note:
`include` requires the external YAML files to have the extensions `.yml` or
`.yaml`. The external file will not be included if the extension is missing.
You can define it either as a single string, or, in case you want to include
more than one file, as an array of different values. The following examples are
both valid cases:

```yaml
# Single string
include: '/templates/.after-script-template.yml'
```

```yaml
# Array
include:
  - 'https://gitlab.com/awesome-project/raw/master/.before-script-template.yml'
  - '/templates/.after-script-template.yml'
```
`include` supports two types of files:

- local to the same repository, referenced by using full paths in the same
  repository, with `/` being the root directory. For example:

  ```yaml
  # Within the repository
  include: '/templates/.gitlab-ci-template.yml'
  ```

  NOTE: Note: You can only use files that are currently tracked by Git on the
  same branch your configuration file is on. In other words, when using a local
  file, make sure that both `.gitlab-ci.yml` and the local file are on the same
  branch.

  NOTE: Note: We don't support the inclusion of local files through Git
  submodule paths.

- remote in a different location, accessed using HTTP/HTTPS, referenced using
  the full URL. For example:

  ```yaml
  include: 'https://gitlab.com/awesome-project/raw/master/.gitlab-ci-template.yml'
  ```

  NOTE: Note: The remote file must be publicly accessible through a simple GET
  request, as we don't support authentication schemas in the remote URL.

NOTE: Note: In order to include files from another repository inside your local
network, you may need to enable the Allow requests to the local network from
hooks and services checkbox located in the Settings > Network > Outbound
requests section within the Admin area.
Since GitLab 10.8 we are now deep merging the files defined in `include` with
those in `.gitlab-ci.yml`. Files defined by `include` are always evaluated
first and merged with the content of `.gitlab-ci.yml`, no matter the position
of the `include` keyword. You can take advantage of merging to customize and
override details in included CI configurations with local definitions.

NOTE: Note:
Recursive includes are not supported, meaning your external files should not
use the `include` keyword, as it will be ignored.
The following example shows specific YAML-defined variables and details of the
`production` job from an include file being customized in `.gitlab-ci.yml`.

```yaml
# Content of https://company.com/autodevops-template.yml

variables:
  POSTGRES_USER: user
  POSTGRES_PASSWORD: testing_password
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG

production:
  stage: production
  script:
    - install_dependencies
    - deploy
  environment:
    name: production
    url: https://$CI_PROJECT_PATH_SLUG.$AUTO_DEVOPS_DOMAIN
  only:
    - master
```

```yaml
# Content of .gitlab-ci.yml

include: 'https://company.com/autodevops-template.yml'

image: alpine:latest

variables:
  POSTGRES_USER: root
  POSTGRES_PASSWORD: secure_password

stages:
  - build
  - test
  - production

production:
  environment:
    url: https://domain.com
```
In this case, the variables `POSTGRES_USER` and `POSTGRES_PASSWORD`, along
with the environment URL of the `production` job defined in
`autodevops-template.yml`, have been overridden by new values defined in
`.gitlab-ci.yml`.

The merging lets you extend and override dictionary mappings, but you cannot
add or modify items in an included array. For example, to add an additional
item to the production job script, you must repeat the existing script items.
```yaml
# Content of https://company.com/autodevops-template.yml

production:
  stage: production
  script:
    - install_dependencies
    - deploy
```

```yaml
# Content of .gitlab-ci.yml

include: 'https://company.com/autodevops-template.yml'

stages:
  - production

production:
  script:
    - install_dependencies
    - deploy
    - notify_owner
```
In this case, if `install_dependencies` and `deploy` were not repeated in
`.gitlab-ci.yml`, they would not be part of the script for the `production`
job in the combined CI configuration.

NOTE: Note:
We currently do not support using YAML aliases across different YAML files
sourced by `include`. You must only refer to aliases in the same file. Instead
of using YAML anchors, you can use the `extends` keyword.
variables
Introduced in GitLab Runner v0.5.0.
NOTE: Note: Integers (as well as strings) are legal both for a variable's name
and value. Floats are not legal and cannot be used.

GitLab CI/CD allows you to define variables inside `.gitlab-ci.yml` that are
then passed in the job environment. They can be set globally and per-job.
When the `variables` keyword is used on a job level, it overrides the global
YAML variables and predefined ones.

They are stored in the Git repository and are meant to store non-sensitive
project configuration, for example:

```yaml
variables:
  DATABASE_URL: "postgres://postgres@postgres/my_database"
```

These variables can be later used in all executed commands and scripts. The
YAML-defined variables are also set to all created service containers, thus
allowing to fine tune them.

Apart from the user-defined variables, there are also the ones set up by the
Runner itself. One example would be `CI_COMMIT_REF_NAME`, which has the value
of the branch or tag name for which the project is built. Apart from the
variables you can set in `.gitlab-ci.yml`, there are also the so-called
Variables which can be set in GitLab's UI.

Learn more about variables and their priority.
Git strategy
Introduced in GitLab 8.9 as an experimental feature. May change or be removed completely in future releases.
`GIT_STRATEGY=none` requires GitLab Runner v1.7+.

You can set the `GIT_STRATEGY` used for getting recent application code, either
globally or per-job in the `variables` section. If left unspecified, the
default from the project settings will be used.

There are three possible values: `clone`, `fetch`, and `none`.

`clone` is the slowest option. It clones the repository from scratch for every
job, ensuring that the project workspace is always pristine.

```yaml
variables:
  GIT_STRATEGY: clone
```

`fetch` is faster as it re-uses the project workspace (falling back to `clone`
if it doesn't exist). `git clean` is used to undo any changes made by the last
job, and `git fetch` is used to retrieve commits made since the last job ran.

```yaml
variables:
  GIT_STRATEGY: fetch
```

`none` also re-uses the project workspace, but skips all Git operations
(including GitLab Runner's pre-clone script, if present). It is mostly useful
for jobs that operate exclusively on artifacts (e.g., `deploy`). Git repository
data may be present, but it is certain to be out of date, so you should only
rely on files brought into the project workspace from cache or artifacts.

```yaml
variables:
  GIT_STRATEGY: none
```
Git submodule strategy
Requires GitLab Runner v1.10+.
The `GIT_SUBMODULE_STRATEGY` variable is used to control if / how Git
submodules are included when fetching the code before a build. You can set it
globally or per-job in the `variables` section.

There are three possible values: `none`, `normal`, and `recursive`:

- `none` means that submodules will not be included when fetching the project
  code. This is the default, which matches the pre-v1.10 behavior.

- `normal` means that only the top-level submodules will be included. It is
  equivalent to:

  ```
  git submodule sync
  git submodule update --init
  ```

- `recursive` means that all submodules (including submodules of submodules)
  will be included. This feature needs Git v1.8.1 and later. When using a
  GitLab Runner with an executor not based on Docker, make sure the Git version
  meets that requirement. It is equivalent to:

  ```
  git submodule sync --recursive
  git submodule update --init --recursive
  ```
Note that for this feature to work correctly, the submodules must be configured
(in `.gitmodules`) with either:
- the HTTP(S) URL of a publicly-accessible repository, or
- a relative path to another repository on the same GitLab server. See the Git submodules documentation.
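For instance (a minimal sketch), fetching all nested submodules in every job could be requested with a global variable:

```yaml
variables:
  GIT_SUBMODULE_STRATEGY: recursive   # include submodules of submodules when fetching the code
```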
Git checkout
Introduced in GitLab Runner 9.3
The `GIT_CHECKOUT` variable can be used when `GIT_STRATEGY` is set to either
`clone` or `fetch` to specify whether a `git checkout` should be run. If not
specified, it defaults to true. You can set it globally or per-job in the
`variables` section.

If set to `false`, the Runner will:

- when doing `fetch` - update the repository and leave the working copy on the
  current revision,
- when doing `clone` - clone the repository and leave the working copy on the
  default branch.

Having this setting set to `true` means that for both the `clone` and `fetch`
strategies the Runner will check out the working copy to a revision related
to the CI pipeline:

```yaml
variables:
  GIT_STRATEGY: clone
  GIT_CHECKOUT: "false"
script:
  - git checkout master
  - git merge $CI_BUILD_REF_NAME
```
Job stages attempts

Introduced in GitLab, it requires GitLab Runner v1.9+.

You can set the number of attempts the running job will make to execute each
of the following stages:

| Variable | Description |
|----------|-------------|
| `GET_SOURCES_ATTEMPTS` | Number of attempts to fetch sources running a job |
| `ARTIFACT_DOWNLOAD_ATTEMPTS` | Number of attempts to download artifacts running a job |
| `RESTORE_CACHE_ATTEMPTS` | Number of attempts to restore the cache running a job |

The default is one single attempt.

Example:

```yaml
variables:
  GET_SOURCES_ATTEMPTS: 3
```

You can set them globally or per-job in the `variables` section.
Shallow cloning
Introduced in GitLab 8.9 as an experimental feature. May change in future releases or be removed completely.
You can specify the depth of fetching and cloning using `GIT_DEPTH`. This allows
shallow cloning of the repository, which can significantly speed up cloning for
repositories with a large number of commits or old, large binaries. The value is
passed to `git fetch` and `git clone`.

Note: If you use a depth of 1 and have a queue of jobs or retry jobs, jobs may
fail.

Since Git fetching and cloning is based on a ref, such as a branch name, Runners
can't clone a specific commit SHA. If there are multiple jobs in the queue, or
you are retrying an old job, the commit to be tested needs to be within the
Git history that is cloned. Setting too small a value for `GIT_DEPTH` can make
it impossible to run these old commits. You will see `unresolved reference` in
job logs. You should then consider changing `GIT_DEPTH` to a higher value.

Jobs that rely on `git describe` may not work correctly when `GIT_DEPTH` is
set, since only part of the Git history is present.

To fetch or clone only the last 3 commits:

```yaml
variables:
  GIT_DEPTH: "3"
```

You can set it globally or per-job in the `variables` section.
Special YAML features
It's possible to use special YAML features like anchors (`&`), aliases (`*`)
and map merging (`<<`), which will allow you to greatly reduce the complexity
of `.gitlab-ci.yml`.

Read more about the various YAML features.
Hidden keys (jobs)
Introduced in GitLab 8.6 and GitLab Runner v1.1.1.
If you want to temporarily 'disable' a job, rather than commenting out all the
lines where the job is defined:

```yaml
#hidden_job:
#  script:
#    - run test
```

you can instead start its name with a dot (`.`) and it will not be processed by
GitLab CI. In the following example, `.hidden_job` will be ignored:

```yaml
.hidden_job:
  script:
    - run test
```

Use this feature to ignore jobs, or use the special YAML features and transform
the hidden keys into templates.
Anchors
Introduced in GitLab 8.6 and GitLab Runner v1.1.1.
YAML has a handy feature called 'anchors', which lets you easily duplicate content across your document. Anchors can be used to duplicate/inherit properties, and is a perfect example to be used with hidden keys to provide templates for your jobs.
The following example uses anchors and map merging. It will create two jobs,
`test1` and `test2`, that will inherit the parameters of `.job_template`, each
having their own custom `script` defined:

```yaml
.job_template: &job_definition  # Hidden key that defines an anchor named 'job_definition'
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  <<: *job_definition           # Merge the contents of the 'job_definition' alias
  script:
    - test1 project

test2:
  <<: *job_definition           # Merge the contents of the 'job_definition' alias
  script:
    - test2 project
```

`&` sets up the name of the anchor (`job_definition`), `<<` means "merge the
given hash into the current one", and `*` includes the named anchor
(`job_definition` again). The expanded version looks like this:

```yaml
.job_template:
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test1 project

test2:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test2 project
```
Let's look at another example. This time we will use anchors to define two sets
of services. This will create two jobs, `test:postgres` and `test:mysql`, that
will share the `script` directive defined in `.job_template`, and the `services`
directive defined in `.postgres_services` and `.mysql_services` respectively:

```yaml
.job_template: &job_definition
  script:
    - test project

.postgres_services:
  services: &postgres_definition
    - postgres
    - ruby

.mysql_services:
  services: &mysql_definition
    - mysql
    - ruby

test:postgres:
  <<: *job_definition
  services: *postgres_definition

test:mysql:
  <<: *job_definition
  services: *mysql_definition
```

The expanded version looks like this:

```yaml
.job_template:
  script:
    - test project

.postgres_services:
  services:
    - postgres
    - ruby

.mysql_services:
  services:
    - mysql
    - ruby

test:postgres:
  script:
    - test project
  services:
    - postgres
    - ruby

test:mysql:
  script:
    - test project
  services:
    - mysql
    - ruby
```
You can see that the hidden keys are conveniently used as templates.
Triggers
Triggers can be used to force a rebuild of a specific branch, tag or commit, with an API call.
Read more in the triggers documentation.
Skipping jobs

If your commit message contains `[ci skip]` or `[skip ci]`, using any
capitalization, the commit will be created but the pipeline will be skipped.

Validate the .gitlab-ci.yml

Each instance of GitLab CI has an embedded debug tool called Lint, which
validates the content of your `.gitlab-ci.yml` files. You can find the Lint
under the page `ci/lint` of your project namespace
(e.g., `http://gitlab-example.com/gitlab-org/project-123/-/ci/lint`).

Using reserved keywords

If you get a validation error when using specific values (e.g., `true` or
`false`), try to quote them, or change them to a different form
(e.g., `/bin/true`).
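As a small sketch (the variable names are made up), quoting keeps the YAML parser from interpreting such values as booleans:

```yaml
job:
  variables:
    # Quoted so they stay strings instead of being parsed as YAML booleans.
    DEPLOY_ENABLED: "true"
    SKIP_CLEANUP: "false"
  script: echo "deploy enabled = $DEPLOY_ENABLED"
```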
Examples
See a list of examples for using GitLab CI/CD with various languages.