Review apps
We currently have review apps available as a manual job in EE pipelines. Here is the first implementation.
That said, the Quality team is working on making Review Apps automatically deployed by each pipeline, both in CE and EE.
How does it work?
- On every EE pipeline during the `test` stage, you can start the `review` job
- The `review` job triggers a pipeline in the `CNG-mirror`[^1] project
- The `CNG-mirror` pipeline creates the Docker images of each component (e.g. `gitlab-rails-ee`, `gitlab-shell`, `gitaly` etc.) based on the commit from the GitLab pipeline and stores them in its registry
- Once all images are built, the review app is deployed using the official GitLab Helm chart[^2] to the `review-apps-ee` Kubernetes cluster on GCP
- The actual scripts used to deploy the review app can be found at `scripts/review_apps/review-apps.sh`
- These scripts are basically our official Auto DevOps scripts where the default CNG images are overridden with the images built and stored in the `CNG-mirror` project's registry (see the sketch after this list)
- Once the `review` job succeeds, you should be able to use your review app thanks to the direct link to it from the MR widget. The default username is `root` and its password can be found in the 1Password secure note named "gitlab-{ce,ee} review app's root password".
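To make the image override concrete, here is a minimal sketch of the kind of `helm upgrade` command the deploy script ends up running. The release name, namespace, chart value paths, and the `REVIEW_APPS_DOMAIN` / `CNG_MIRROR_REGISTRY` variables are assumptions for illustration, not the exact contents of `scripts/review_apps/review-apps.sh`.

```shell
# Sketch only: deploy a review app from the official GitLab Helm chart while
# pointing one component's image at the CNG-mirror registry. Value names and
# environment variables are illustrative, not the script's actual parameters.
helm upgrade --install "review-${CI_COMMIT_REF_SLUG}" gitlab/gitlab \
  --namespace "review-${CI_COMMIT_REF_SLUG}" \
  --set global.hosts.domain="${REVIEW_APPS_DOMAIN}" \
  --set gitlab.unicorn.image.repository="${CNG_MIRROR_REGISTRY}/gitlab-rails-ee" \
  --set gitlab.unicorn.image.tag="${CI_COMMIT_SHA}"
```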
Additional notes:
- The Kubernetes cluster is connected to the `gitlab-ee` project using GitLab's Kubernetes integration. This basically allows us to have a link to the review app directly from the merge request widget.
- The manual `stop_review` job in the `post-cleanup` stage can be used to stop a review app manually, and is also started by GitLab once a branch is deleted (see the sketch after this list)
- [TBD] Review apps are cleaned up regularly using a pipeline schedule that runs the `scripts/review_apps/automated_cleanup.rb` script
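Conceptually, stopping or cleaning up a review app comes down to removing its Helm release and namespace, roughly as sketched below. The release and namespace naming is an assumption; the real logic lives in the `stop_review` job and `scripts/review_apps/automated_cleanup.rb`.

```shell
# Sketch only (Helm 2 era): delete the review app's Helm release and namespace.
# The naming convention here is assumed, not what the scripts actually use.
helm delete --purge "review-${CI_COMMIT_REF_SLUG}"
kubectl delete namespace "review-${CI_COMMIT_REF_SLUG}" --wait=false
```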
Frequently Asked Questions
Will it be too much to trigger CNG image builds on every test run? This could create thousands of unused Docker images.
We have to start somewhere and improve later. If we see this getting out of hand, we will revisit.
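For context, triggering a downstream `CNG-mirror` pipeline boils down to a call to GitLab's pipeline triggers API, roughly like the sketch below. The trigger token variable, the passed variable name, and the project ID placeholder are assumptions, not the actual `review` job definition.

```shell
# Sketch only: trigger a CNG-mirror pipeline via the pipeline triggers API.
# The token variable, ref, passed variables, and project ID are placeholders.
curl --request POST \
  --form "token=${CNG_MIRROR_TRIGGER_TOKEN}" \
  --form "ref=master" \
  --form "variables[GITLAB_VERSION]=${CI_COMMIT_SHA}" \
  "https://gitlab.com/api/v4/projects/<cng-mirror-project-id>/trigger/pipeline"
```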
How big is the Kubernetes cluster?
The cluster is currently set up with a single pool of preemptible nodes, with a minimum of 1 node and a maximum of 30 nodes.
What machines are running on the cluster?
We're currently using `n1-standard-4` (4 vCPUs, 15 GB memory) machines.
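If you want to double-check the node pool configuration (machine type, autoscaling bounds, preemptibility), something like the following `gcloud` call works; the cluster name, zone, and project shown here are assumptions.

```shell
# Sketch only: inspect the node pools backing the review apps cluster.
# Cluster name, zone, and project are assumed values.
gcloud container node-pools list \
  --cluster review-apps-ee \
  --zone us-central1-a \
  --project gitlab-review-apps
```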
How do we secure this from abuse? Apps are open to the world, so we need to find a way to limit access to only us.
This won't work for forks. We will add a root password to the shared 1Password vault.
Return to Testing documentation
[^1]: We use the `CNG-mirror` project so that the `CNG` (Cloud Native GitLab) project's registry is not overloaded with a lot of transient Docker images.
[^2]: Since we're using the official GitLab Helm chart, this means you get a dedicated environment for your branch that's very close to what it would look like in production.