Improve the 'Testing levels' documentation

Signed-off-by: Rémy Coutable <remy@rymai.me>
Rémy Coutable 2018-09-04 19:20:30 +02:00
parent 173c6ca1e8
commit 35616708ff
3 changed files with 119 additions and 52 deletions


@@ -1,32 +1,37 @@
# End-to-end Testing
## What is end-to-end testing?
End-to-end testing is a strategy used to check whether your application works
as expected across the entire software stack and architecture, including
integration of all micro-services and components that are supposed to work
together.
## How do we test GitLab?
We use [Omnibus GitLab][omnibus-gitlab] to build GitLab packages and then we
test these packages using the [GitLab QA orchestrator][gitlab-qa] tool, which is
a black-box testing framework for the API and the UI.
### Testing nightly builds
We run a scheduled pipeline each night to test nightly builds created by Omnibus.
You can find these nightly pipelines at [gitlab-org/quality/nightly/pipelines][quality-nightly-pipelines].
### Testing staging
We run a scheduled pipeline each night to test staging.
You can find these nightly pipelines at [gitlab-org/quality/staging/pipelines][quality-staging-pipelines].
### Testing code in merge requests
It is possible to run end-to-end tests (ultimately run within a
[GitLab QA pipeline][gitlab-qa-pipelines]) for a merge request by triggering
the `package-and-qa` manual action in the `test` stage, which should be present
in the merge request widget (unless the merge request is from a fork).
The manual action that starts end-to-end tests is also available in merge requests
in [Omnibus GitLab][omnibus-gitlab].
Below you can read more about how to use it and how it works.
@@ -35,46 +40,56 @@ Below you can read more about how to use it and how it works.
Currently, we are using a _multi-project pipeline_-like approach to run QA
pipelines.
1. A developer triggers a manual action, which can be found in CE / EE merge
requests. This starts a chain of pipelines in multiple projects.
1. The script being executed triggers a pipeline in [Omnibus GitLab][omnibus-gitlab]
and waits for the resulting status. We call this a _status attribution_.
1. GitLab packages are built in the [Omnibus GitLab][omnibus-gitlab]
pipeline. Packages are then pushed to its Container Registry.
1. When packages are ready and available in the registry, a final step in the
[Omnibus GitLab][omnibus-gitlab] pipeline triggers a new
[GitLab QA pipeline][gitlab-qa-pipelines]. It also waits for a resulting status.
1. GitLab QA pulls images from the registry, spins up containers, and runs tests
against a test environment that has just been orchestrated by the `gitlab-qa`
tool.
1. The result of the [GitLab QA pipeline][gitlab-qa-pipelines] is
propagated upstream, through Omnibus, back to the CE / EE merge request.
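
The _status attribution_ mentioned above boils down to triggering a downstream
pipeline and then polling its status through the GitLab API. Here is a minimal,
illustrative sketch of that idea only, not the actual `package-and-qa` script;
the project ID, trigger token variable, branch, and polling interval are
placeholders:

```ruby
# Illustrative sketch of "status attribution": trigger a downstream pipeline
# via the GitLab pipeline triggers API and poll it until it finishes.
# PROJECT_ID, TRIGGER_TOKEN, and the ref are placeholders, not real values.
require 'net/http'
require 'json'
require 'uri'

API = 'https://gitlab.com/api/v4'
PROJECT_ID = '12345' # hypothetical downstream project ID
TRIGGER_TOKEN = ENV.fetch('TRIGGER_TOKEN')

# Trigger a pipeline on the downstream project's master branch.
trigger_uri = URI("#{API}/projects/#{PROJECT_ID}/trigger/pipeline")
response = Net::HTTP.post_form(trigger_uri, 'token' => TRIGGER_TOKEN, 'ref' => 'master')
pipeline_id = JSON.parse(response.body).fetch('id')

# Poll the downstream pipeline until it reaches a terminal status.
status = nil
loop do
  pipeline_uri = URI("#{API}/projects/#{PROJECT_ID}/pipelines/#{pipeline_id}")
  status = JSON.parse(Net::HTTP.get(pipeline_uri)).fetch('status')
  break if %w[success failed canceled skipped].include?(status)

  sleep 60
end

# Propagate the downstream status to the current job's exit code.
exit(status == 'success' ? 0 : 1)
```
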
#### How do I write tests?
In order to write new tests, you first need to learn more about GitLab QA
architecture. See the [documentation about it][gitlab-qa-architecture].
Once you've decided where to put [test environment orchestration scenarios] and
[instance-level scenarios], take a look at the [GitLab QA README][instance-qa-readme],
the [GitLab QA orchestrator README][gitlab-qa-readme], and [the already existing
instance-level scenarios][instance-level scenarios].
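
For orientation, an instance-level scenario is an RSpec spec that drives the GUI
through the QA framework's page objects and resources. The sketch below only
shows the general shape; the module, page-object, and resource names are
hypothetical stand-ins, and the real ones live under the `qa/` directory:

```ruby
# General shape of an instance-level scenario (class and method names are
# hypothetical). Real scenarios live under qa/qa/specs/features/ and use the
# QA framework's own page objects and resources, not raw Capybara selectors.
module QA
  context 'Plan' do
    describe 'Issue creation' do
      it 'user creates an issue in a project' do
        # Sign in through the GUI, as a black-box test would.
        Runtime::Browser.visit(:gitlab, Page::Main::Login)
        Page::Main::Login.perform(&:sign_in_using_credentials)

        # Create the data needed for the test through the GUI or the API,
        # never by touching the database directly.
        issue_title = 'A QA-created issue'
        Resource::Issue.fabricate! do |issue|
          issue.title = issue_title
        end

        # Expectations are made against the rendered page only.
        expect(page).to have_content(issue_title)
      end
    end
  end
end
```
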
## Where can I ask for help?
You can ask questions in the `#quality` channel on Slack (GitLab internal) or
you can find an issue you would like to work on in
[the `gitlab-ce` issue tracker][gitlab-ce-issues],
[the `gitlab-ee` issue tracker][gitlab-ee-issues], or
[the `gitlab-qa` issue tracker][gitlab-qa-issues].
[omnibus-gitlab]: https://gitlab.com/gitlab-org/omnibus-gitlab
[gitlab-qa]: https://gitlab.com/gitlab-org/gitlab-qa
[gitlab-qa-readme]: https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md
[gitlab-qa-pipelines]: https://gitlab.com/gitlab-org/gitlab-qa/pipelines
[quality-nightly-pipelines]: https://gitlab.com/gitlab-org/quality/nightly/pipelines
[quality-staging-pipelines]: https://gitlab.com/gitlab-org/quality/staging/pipelines
[gitlab-qa-architecture]: https://gitlab.com/gitlab-org/gitlab-qa/blob/master/docs/architecture.md
[gitlab-qa-issues]: https://gitlab.com/gitlab-org/gitlab-qa/issues?label_name%5B%5D=new+scenario
[gitlab-ce-issues]: https://gitlab.com/gitlab-org/gitlab-ce/issues?label_name[]=QA&label_name[]=test
[gitlab-ee-issues]: https://gitlab.com/gitlab-org/gitlab-ee/issues?label_name[]=QA&label_name[]=test
[test environment orchestration scenarios]: https://gitlab.com/gitlab-org/gitlab-qa/tree/master/lib/gitlab/qa/scenario
[instance-level scenarios]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa/qa/specs/features
[Page objects documentation]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa/qa/page/README.md
[instance-qa-readme]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa/README.md
[instance-qa-examples]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa/qa


@@ -1,8 +1,9 @@
# Smoke Tests
It is imperative in any testing suite that we have Smoke Tests. In short, smoke
tests run quick sanity end-to-end functional tests from GitLab QA and are
designed to run against the specified environment to ensure that basic
functionality is working.
Currently, our suite consists of this basic functionality coverage:
@ -11,6 +12,8 @@ Currently, our suite consists of this basic functionality coverage:
- Issue Creation
- Merge Request Creation
Smoke tests have the `:smoke` RSpec metadata.
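
Since `:smoke` is plain RSpec metadata, a scenario opts into the smoke suite
simply by tagging itself, and the suite can then be selected with RSpec's tag
filter. A minimal sketch (the names and test body are illustrative):

```ruby
# Tagging a scenario as a smoke test: the :smoke RSpec metadata is all that is
# needed for it to be picked up by a tag-filtered run such as `rspec --tag smoke`.
# The describe/it names below are illustrative.
describe 'Project creation', :smoke do
  it 'user creates a basic project' do
    # ...GUI-driven steps go here...
  end
end
```
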
---
[Return to Testing documentation](index.md)


@@ -34,7 +34,11 @@ records should use stubs/doubles as much as possible.
Formal definition: https://en.wikipedia.org/wiki/Integration_testing
These kinds of tests ensure that individual parts of the application work well
together, without the overhead of the actual app environment (i.e. the browser).
These tests should assert at the request/response level: status code, headers,
body.
They're useful to test permissions, redirections, what view is rendered, etc.
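
As an illustration of asserting at the request/response level only, here is a
minimal controller-level sketch; the controller, route parameters, factories,
and helpers (Devise's `sign_in`, FactoryBot's `create`) are illustrative
assumptions rather than excerpts from the codebase:

```ruby
# Minimal sketch of an integration test that asserts at the request/response
# level only. Controller, route parameters, and factory names are illustrative.
require 'spec_helper'

describe ProjectsController do
  let(:user) { create(:user) }
  let(:project) { create(:project, namespace: user.namespace) }

  before do
    sign_in(user)
  end

  it 'renders the project page for an authorized user' do
    get :show, params: { namespace_id: project.namespace, id: project }

    # Assert on the response only: status and rendered view,
    # not on the page's contents or the database state.
    expect(response).to have_http_status(:ok)
    expect(response).to render_template(:show)
  end
end
```
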
| Code path | Tests path | Testing engine | Notes |
| --------- | ---------- | -------------- | ----- |
@@ -67,20 +71,40 @@ run JavaScript tests, so you can either run unit tests (e.g. test a single
JavaScript method), or integration tests (e.g. test a component that is composed
of multiple components).
## White-box tests at the system level (formerly known as System / Feature tests)
Formal definitions:
- https://en.wikipedia.org/wiki/System_testing
- https://en.wikipedia.org/wiki/White-box_testing
These kinds of tests ensure the GitLab *Rails* application (i.e.
`gitlab-ce`/`gitlab-ee`) works as expected from a *browser* point of view.
Note that:
- knowledge of the internals of the application is still required
- data needed for the tests are usually created directly using RSpec factories
- expectations are often set on the database or objects' state
These tests should only be used when:
- the functionality/component being tested is small
- the internal state of the objects/database *needs* to be tested
- it cannot be tested at a lower level
For instance, to test the breadcrumbs on a given page, writing a system test
makes sense since it's a small component, which cannot be tested at the unit or
controller level.
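
Following the breadcrumbs example, such a test would be a Capybara feature spec.
A minimal sketch, with illustrative factory, sign-in helper, path helper, and
CSS selector:

```ruby
# Minimal sketch of a white-box system test for the breadcrumbs example.
# The factory, sign-in helper, path helper, and CSS selector are illustrative.
require 'spec_helper'

describe 'Project breadcrumbs', :js do
  let(:user) { create(:user) }
  let(:project) { create(:project, namespace: user.namespace) }

  before do
    # Data is created directly with factories and the user is signed in
    # through a test helper, which is what makes this a white-box test.
    sign_in(user)
    visit project_path(project)
  end

  it 'shows the project name in the breadcrumbs' do
    page.within('.breadcrumbs') do
      expect(page).to have_content(project.name)
    end
  end
end
```
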
Only test the happy path, but make sure to add a test case for any regression
that couldn't have been caught at lower levels with better tests (i.e. if a
regression is found, regression tests should be added at the lowest level
possible).
| Tests path | Testing engine | Notes |
| ---------- | -------------- | ----- |
| `spec/features/` | [Capybara] + [RSpec] | If your test has the `:js` metadata, the browser driver will be [Poltergeist], otherwise it's using [RackTest]. |
### Consider **not** writing a system test!
@@ -89,7 +113,7 @@ we have enough Unit & Integration tests), we shouldn't need to duplicate their
thorough testing at the System test level.
It's very easy to add tests, but a lot harder to remove or improve tests, so one
should take care of not introducing too many (slow and duplicated) tests.
The reasons why we should follow these best practices are as follows:
@@ -107,29 +131,33 @@ The reasons why we should follow these best practices are as follows:
[Poltergeist]: https://github.com/teamcapybara/capybara#poltergeist
[RackTest]: https://github.com/teamcapybara/capybara#racktest
## Black-box tests at the system level, aka end-to-end tests
Formal definitions:
- https://en.wikipedia.org/wiki/System_testing
- https://en.wikipedia.org/wiki/Black-box_testing
GitLab consists of [multiple pieces] such as [GitLab Shell], [GitLab Workhorse],
[Gitaly], [GitLab Pages], [GitLab Runner], and GitLab Rails. All these pieces
are configured and packaged by [GitLab Omnibus].
The QA framework and instance-level scenarios are [part of GitLab Rails] so that
they're always in-sync with the codebase (especially the views).
Note that:
- knowledge of the internals of the application is not required
- data needed for the tests can only be created using the GUI or the API
- expectations can only be made against the browser page and API responses

Every new feature should come with a [test plan].

| Tests path | Testing engine | Notes |
| ---------- | -------------- | ----- |
| `qa/qa/specs/features/` | [Capybara] + [RSpec] + Custom QA framework | Tests should be placed under their corresponding [Product category] |
> See [end-to-end tests](end_to_end_tests.md) for more information.
[multiple pieces]: ../architecture.md#components
[GitLab Shell]: https://gitlab.com/gitlab-org/gitlab-shell
@@ -138,8 +166,29 @@ learn more.
[GitLab Pages]: https://gitlab.com/gitlab-org/gitlab-pages
[GitLab Runner]: https://gitlab.com/gitlab-org/gitlab-runner
[GitLab Omnibus]: https://gitlab.com/gitlab-org/omnibus-gitlab
[part of GitLab Rails]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa
[test plan]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/.gitlab/issue_templates/Test%20plan.md
[Product category]: https://about.gitlab.com/handbook/product/categories/
### Smoke tests
Smoke tests are quick tests that may be run at any time (especially after the
pre-deployment migrations).
These tests run against the UI and ensure that basic functionality is working.
> See [Smoke Tests](smoke.md) for more information.
### GitLab QA orchestrator
[GitLab QA orchestrator] makes it possible to test that all these pieces
integrate well together by building a Docker image for a given version of GitLab
Rails and running end-to-end tests (i.e. using Capybara) against it.
Learn more in the [GitLab QA orchestrator README][gitlab-qa-readme].
[GitLab QA orchestrator]: https://gitlab.com/gitlab-org/gitlab-qa
[gitlab-qa-readme]: https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md
## EE-specific tests