Add latest changes from gitlab-org/gitlab@master

This commit is contained in:
GitLab Bot 2022-08-09 21:09:21 +00:00
parent 283318c205
commit c03dce2dc9
50 changed files with 909 additions and 502 deletions

View File

@ -0,0 +1,77 @@
<!-- A majority of the work designers do will be on themes in the (Now) Next 1-3 milestone column. These themes are composed of high-confidence outcomes and validated needs. The UX theme issue is where collaboration should occur, including plans and discussion on subthemes, research, and design feedback. Related issues for design exploration and solution validation should stem from the theme issue.
One of the advantages of working with UX themes is that they allow us to think and design holistically, designing the theme as a whole rather than piecing together one issue at a time. For more details, please refer to this section of the handbook when creating UX Themes: https://about.gitlab.com/handbook/engineering/ux/product-design/ux-roadmaps/#theme-structure -->
### UX Theme
<!-- A theme is written as a statement that combines the beneficiary, their need, and the expected outcome when the work is delivered. Well-defined statements are concise without sacrificing the substance of the theme, so that anyone can understand it at a glance. (For instance: Reduce the effort for security teams to identify and escalate business-critical risks.)
!!Note: The theme statement is the de facto title that will be used to reference the theme and serve as the theme issue title.!!
-->
----
### Problem to solve
<!-- In a brief statement, summarize the problem we intend to address with this theme. For instance: users are unable to complete [task], or users struggle with the number of steps required to complete [task] -->
### Beneficiary
<!-- Who receives the value this theme provides: a customer, end-user, or buyer? Who benefits from this theme being executed? This can be a role, a team, or a persona. For instance: "Development teams, [or] Developers, [or] Sasha the Software Engineer". -->
- **[Direct beneficiary]**
#### Need & Primary JTBD
<!-- What is the JTBD and what are the needs related to the beneficiary and theme?
- JTBD = The JTBD statement, for instance: (When I am triaging vulnerabilities, I want to address business-critical risks, So I can ensure there is no unattended risk in my org's assets.)
- Need = Abstracted from the JTBD, for instance: (Identify and escalate business-critical risks detected in my org's assets.)
-->
- **JTBD:**
- **Need:**
#### Expected outcome
<!-- What will the user be able to achieve when this theme is executed? For instance: (Users will be able to effectively triage vulnerabilities at scale across all their org's assets.) -->
#### Business objective
<!-- What business objective will result from delivering this theme? This answers why we are working on this theme from a business perspective. Examples of objectives include, but are not limited to: sales rate / conversion rate, success rate / completion rate, traffic / visitor count, engagement, or other business-oriented goals. -->
#### Confidence
<!-- How well do we understand the user's problem and their need? Refer to https://about.gitlab.com/handbook/engineering/ux/product-design/ux-roadmaps/#confidence to assess confidence -->
| Confidence | Research |
| --- | --- |
| [High/Medium/Low] | [research/insight issue](Link) |
### Subthemes & Requirements
<!-- Subthemes are more granular validated needs, goals, and additional details that the theme encompasses. These are typically reserved for themes in the next (1-3 milestones) column. Subthemes may also consist of existing feature or design issues that exist in GitLab and directly relate to the theme. Subthemes answer “how” we are going to solve the user need, while the theme itself answers “what” the need is and “who” benefits from the solution.
Note: This is not a backlog. If the subthemes cannot be delivered in the theme timeframe, then the theme is too big and needs to be broken down into multiple themes. -->
#### Feature/solution subthemes
<!-- Use this table to track feature issues related to this theme (if applicable). Not all themes require subthemes, as subthemes are typically discovered while working on the theme itself. Think of subthemes as the result of design breaking down the issue into discrete work items.
Note: If feature issues already exist, you can add them to this table. Keep in mind that subthemes require validation if they are assumptive.
Refer to https://about.gitlab.com/handbook/engineering/ux/product-designer/#ux-issue-weights for calculating UX weights.
-->
| Issue | UX Weight |
| ---------- | --------- |
| [Issue](link) | `0 - 10` |
| [Issue](link) | `0 - 10` |
| [Issue](link) | `0 - 10` |
#### Research subthemes
<!-- Use this table to track UX research related to this theme. This may include, problem validation and/or solution validation activities.
-->
| Issue | Research type | Research status |
| ---------- | --------- | --------- |
| [Issue]() | <!-- Solution validation, Problem validation, etc. --> | <!-- Planned, In Progress, Complete, etc. --> |
| [Issue]() | <!-- Solution validation, Problem validation, etc. --> | <!-- Planned, In Progress, Complete, etc. --> |
/label ~"UX" ~"UX Theme"

View File

@ -536,7 +536,7 @@ gem 'valid_email', '~> 0.1'
# JSON
gem 'json', '~> 2.5.1'
gem 'json_schemer', '~> 0.2.18'
gem 'oj', '~> 3.13.19'
gem 'oj', '~> 3.13.20'
gem 'multi_json', '~> 1.14.1'
gem 'yajl-ruby', '~> 1.4.3', require: 'yajl'

View File

@ -879,7 +879,7 @@ GEM
plist (~> 3.1)
train-core
wmi-lite (~> 1.0)
oj (3.13.19)
oj (3.13.20)
omniauth (1.9.1)
hashie (>= 3.4.6)
rack (>= 1.6.2, < 3)
@ -1646,7 +1646,7 @@ DEPENDENCIES
oauth2 (~> 2.0)
octokit (~> 4.15)
ohai (~> 16.10)
oj (~> 3.13.19)
oj (~> 3.13.20)
omniauth (~> 1.8)
omniauth-alicloud (~> 1.0.1)
omniauth-atlassian-oauth2 (~> 0.2.0)

View File

@ -221,6 +221,7 @@ export default (resolvers = {}, config = {}) => {
ac = new ApolloClient({
typeDefs,
link: appLink,
connectToDevTools: process.env.NODE_ENV !== 'production',
cache: new InMemoryCache({
...cacheConfig,
typePolicies: {

View File

@ -1,5 +1,5 @@
<script>
import { debounce } from 'lodash';
import { debounce, isEmpty } from 'lodash';
import { CONTENT_UPDATE_DEBOUNCE, EDITOR_READY_EVENT } from '~/editor/constants';
import Editor from '~/editor/source_editor';
@ -37,9 +37,9 @@ export default {
default: '',
},
extensions: {
type: [String, Array],
type: [Object, Array],
required: false,
default: () => null,
default: () => ({}),
},
editorOptions: {
type: Object,
@ -74,11 +74,13 @@ export default {
blobPath: this.fileName,
blobContent: this.value,
blobGlobalId: this.fileGlobalId,
extensions: this.extensions,
...this.editorOptions,
});
this.editor.onDidChangeModelContent(debounce(this.onFileChange.bind(this), this.debounceValue));
if (!isEmpty(this.extensions)) {
this.editor.use(this.extensions);
}
},
beforeDestroy() {
this.editor.dispose();

View File

@ -1,4 +1,4 @@
= form_for [@group, @milestone], html: { class: 'milestone-form common-note-form js-quick-submit js-requires-input' } do |f|
= gitlab_ui_form_for [@group, @milestone], html: { class: 'milestone-form common-note-form js-quick-submit js-requires-input' } do |f|
= form_errors(@milestone, pajamas_alert: true)
.form-group.row
.col-form-label.col-sm-2

View File

@ -1,7 +1,11 @@
%tr
%td.text-content
%p
#{content_tag :span, @invite_email, class: :highlight}
has #{content_tag :span, 'declined', class: :highlight} your invitation to join the
#{link_to member_source.human_name, member_source.web_url, class: :highlight} #{member_source.model_name.singular}.
- invited_user = content_tag :span, @invite_email, class: :highlight
- target_link = link_to member_source.human_name, strip_tags(member_source.web_url), class: :highlight
- target_name = sanitize_name(member_source.model_name.singular)
= sanitize(html_escape(s_('Notify|%{invited_user} has %{highlight_start}declined%{highlight_end} your invitation to join the %{target_link} %{target_name}.')) % { invited_user: invited_user,
highlight_start: '<span class="highlight">'.html_safe,
highlight_end: '</span>'.html_safe,
target_link: target_link,
target_name: target_name })
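The escaping pattern in the new template above (escape the translated string first, then interpolate pre-built, trusted fragments) can be sketched outside Rails with the stdlib's `ERB::Util`. The email address and tag strings here are illustrative; the real view uses Rails' `html_escape`, `sanitize`, and `html_safe` helpers:

```ruby
require 'erb'
include ERB::Util # provides html_escape

# Hypothetical values standing in for @invite_email and the highlight markup.
invited_user = %(<span class="highlight">#{html_escape('user@example.com')}</span>)
template     = '%{invited_user} has %{hs}declined%{he} your invitation.'

# Escape the template first, then interpolate fragments we already trust,
# so only the known-safe markup survives unescaped.
message = html_escape(template) % {
  invited_user: invited_user,
  hs: '<span class="highlight">',
  he: '</span>'
}
```

The order matters: escaping after interpolation would neutralize the intentional `<span>` tags along with any untrusted input.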

View File

@ -1,4 +1,4 @@
= form_for [@project, @milestone],
= gitlab_ui_form_for [@project, @milestone],
html: { class: 'milestone-form common-note-form js-quick-submit js-requires-input' } do |f|
= form_errors(@milestone, pajamas_alert: true)
.form-group.row

View File

@ -27,7 +27,7 @@
.row
.col
.js-access-tokens-expires-at{ data: expires_at_field_data }
= f.text_field :expires_at, class: 'datepicker gl-datepicker-input form-control gl-form-input', placeholder: 'YYYY-MM-DD', autocomplete: 'off', data: { js_name: 'expiresAt' }
= f.text_field :expires_at, class: 'gl-datepicker-input form-control gl-form-input', placeholder: 'YYYY-MM-DD', autocomplete: 'off', data: { js_name: 'expiresAt' }
- if resource
.row

View File

@ -12,7 +12,7 @@
.form-group
= f.label :expires_at, _('Expiration date (optional)'), class: 'label-bold'
= f.text_field :expires_at, class: 'datepicker form-control', data: { qa_selector: 'deploy_token_expires_at_field' }, value: f.object.expires_at
= f.gitlab_ui_datepicker :expires_at, data: { qa_selector: 'deploy_token_expires_at_field' }, value: f.object.expires_at
.text-secondary= s_('DeployTokens|Enter an expiration date for your token. Defaults to never expire.')
.form-group

View File

@ -53,4 +53,4 @@
= form.label :due_date, _('Due date'), class: "col-12"
.col-12
.issuable-form-select-holder
= form.text_field :due_date, id: "issuable-due-date", class: "datepicker form-control", placeholder: _('Select due date'), autocomplete: 'off'
= form.gitlab_ui_datepicker :due_date, placeholder: _('Select due date'), autocomplete: 'off', id: "issuable-due-date"

View File

@ -2,10 +2,10 @@
.col-form-label.col-sm-2
= f.label :start_date, _('Start Date')
.col-sm-4
= f.text_field :start_date, class: "datepicker form-control gl-form-input", data: { qa_selector: "start_date_field" }, placeholder: _('Select start date'), autocomplete: 'off'
= f.gitlab_ui_datepicker :start_date, data: { qa_selector: "start_date_field" }, placeholder: _('Select start date'), autocomplete: 'off'
%a.inline.float-right.gl-mt-2.js-clear-start-date{ href: "#" }= _('Clear start date')
.col-form-label.col-sm-2
= f.label :due_date, _('Due Date')
.col-sm-4
= f.text_field :due_date, class: "datepicker form-control gl-form-input", data: { qa_selector: "due_date_field" }, placeholder: _('Select due date'), autocomplete: 'off'
= f.gitlab_ui_datepicker :due_date, data: { qa_selector: "due_date_field" }, placeholder: _('Select due date'), autocomplete: 'off'
%a.inline.float-right.gl-mt-2.js-clear-due-date{ href: "#" }= _('Clear due date')

View File

@ -1,8 +0,0 @@
---
name: delete_deployments_api
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/92378
rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/367885
milestone: '15.3'
type: development
group: group::release
default_enabled: false

View File

@ -0,0 +1,15 @@
# frozen_string_literal: true
class RemoveDescriptionHtmlLimit < Gitlab::Database::Migration[2.0]
disable_ddl_transaction!
def up
remove_text_limit :namespace_details, :description_html
remove_text_limit :namespace_details, :description
end
def down
add_text_limit :namespace_details, :description_html, 255
add_text_limit :namespace_details, :description, 255
end
end

View File

@ -0,0 +1,13 @@
# frozen_string_literal: true
class PrepareIndexRemovalSecurityFindings < Gitlab::Database::Migration[2.0]
INDEX_NAME = :index_on_security_findings_uuid_and_id_order_desc
def up
prepare_async_index_removal :security_findings, [:uuid, :id], name: INDEX_NAME
end
def down
unprepare_async_index_by_name :security_findings, INDEX_NAME
end
end

View File

@ -0,0 +1 @@
5e489655875408b2879f44f006b420a62554e6523ca687cfa64485e0123fc25c

View File

@ -0,0 +1 @@
12e5d5c0cb73c8c2fdde4f640a57ab9c70d2e41382bd6eb2e2d36c1018f299ef

View File

@ -17642,9 +17642,7 @@ CREATE TABLE namespace_details (
updated_at timestamp with time zone,
cached_markdown_version integer,
description text,
description_html text,
CONSTRAINT check_2df620eaf6 CHECK ((char_length(description_html) <= 255)),
CONSTRAINT check_2f563eec0f CHECK ((char_length(description) <= 255))
description_html text
);
CREATE TABLE namespace_limits (

View File

@ -178,11 +178,15 @@ GitLab 13.9 through GitLab 14.3 are affected by a bug in which enabling [GitLab
## Upgrading to GitLab 13.7
We've detected an issue with the `FetchRemove` call used by Geo secondaries.
This causes performance issues as we execute reference transaction hooks for
each upgraded reference. Delay any upgrade attempts until this is in the
[13.7.5 patch release](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/3002).
More details are available [in this issue](https://gitlab.com/gitlab-org/git/-/issues/79).
- We've detected an issue with the `FetchRemove` call used by Geo secondaries.
This causes performance issues as we execute reference transaction hooks for
each upgraded reference. Delay any upgrade attempts until this is in the
[13.7.5 patch release](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/3002).
More details are available [in this issue](https://gitlab.com/gitlab-org/git/-/issues/79).
- A new secret is generated in `/etc/gitlab/gitlab-secrets.json`.
In an HA GitLab or GitLab Geo environment, secrets need to be the same on all nodes.
Ensure this new secret is also accounted for if you are manually syncing the file across
nodes, or manually specifying secrets in `/etc/gitlab/gitlab.rb`.
## Upgrading to GitLab 13.5

View File

@ -753,7 +753,7 @@ PUT /user/status
| Attribute | Type | Required | Description |
| -------------------- | ------ | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `emoji` | string | no | Name of the emoji to use as status. If omitted `speech_balloon` is used. Emoji name can be one of the specified names in the [Gemojione index](https://github.com/bonusly/gemojione/blob/master/config/index.json). |
| `message` | string | no | Message to set as a status. It can also contain emoji codes. |
| `message` | string | no | Message to set as a status. It can also contain emoji codes. Cannot exceed 100 characters. |
| `clear_status_after` | string | no | Automatically clean up the status after a given time interval. Allowed values: `30_minutes`, `3_hours`, `8_hours`, `1_day`, `3_days`, `7_days`, `30_days` |
When both parameters `emoji` and `message` are empty, the status is cleared. When the `clear_status_after` parameter is missing from the request, the previously set value for `clear_status_after` is cleared.
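A minimal sketch of calling this endpoint with Ruby's stdlib `Net::HTTP`; the host and token are placeholders, and a real call needs a valid personal access token:

```ruby
require 'net/http'
require 'uri'

uri = URI('https://gitlab.example.com/api/v4/user/status')
req = Net::HTTP::Put.new(uri)
req['PRIVATE-TOKEN'] = '<your_access_token>' # placeholder

req.set_form_data(
  'emoji' => 'coffee',
  'message' => 'Reviewing merge requests', # must not exceed 100 characters
  'clear_status_after' => '8_hours'
)

# Uncomment to actually send the request:
# res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```

Sending the same request with both `emoji` and `message` empty clears the status, per the note above.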

View File

@ -85,7 +85,7 @@ place for it.
Do not include the same information in multiple places.
[Link to a single source of truth instead.](../styleguide/index.md#link-instead-of-repeating-text)
For example, if you have code in a repository other than the [primary repositories](index.md#architecture),
For example, if you have code in a repository other than the [primary repositories](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/architecture.md),
and documentation in the same repository, you can keep the documentation in that repository.
Then you can either:

View File

@ -299,7 +299,7 @@ The [layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/layouts/global_
is fed by the [data file](#data-file), builds the global nav, and is rendered by the
[default](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/layouts/default.html) layout.
The global nav contains links from all [four upstream projects](index.md#architecture).
The global nav contains links from all [four upstream projects](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/architecture.md).
The [global nav URL](#urls) has a different prefix depending on the documentation file you change.
| Repository | Link prefix | Final URL |

View File

@ -11,57 +11,12 @@ the repository which is used to generate the GitLab documentation website and
is deployed to <https://docs.gitlab.com>. It uses the [Nanoc](https://nanoc.app/)
static site generator.
## Architecture
View the [`gitlab-docs` architecture page](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/architecture.md)
for more information.
While the source of the documentation content is stored in the repositories for
each GitLab product, the source that is used to build the documentation
site _from that content_ is located at <https://gitlab.com/gitlab-org/gitlab-docs>.
## Documentation in other repositories
The following diagram illustrates the relationship between the repositories
from where content is sourced, the `gitlab-docs` project, and the published output.
```mermaid
graph LR
A[gitlab-org/gitlab/doc]
B[gitlab-org/gitlab-runner/docs]
C[gitlab-org/omnibus-gitlab/doc]
D[gitlab-org/charts/gitlab/doc]
E[gitlab-org/cloud-native/gitlab-operator/doc]
Y[gitlab-org/gitlab-docs]
A --> Y
B --> Y
C --> Y
D --> Y
E --> Y
Y -- Build pipeline --> Z
Z[docs.gitlab.com]
M[//ee/]
N[//runner/]
O[//omnibus/]
P[//charts/]
Q[//operator/]
Z --> M
Z --> N
Z --> O
Z --> P
Z --> Q
```
GitLab docs content isn't kept in the `gitlab-docs` repository.
All documentation files are hosted in the respective repository of each
product, and all together are pulled to generate the docs website:
- [GitLab](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc)
- [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/-/tree/master/doc)
- [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs)
- [GitLab Chart](https://gitlab.com/gitlab-org/charts/gitlab/-/tree/master/doc)
- [GitLab Operator](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc)
Learn more about [the docs folder structure](folder_structure.md).
### Documentation in other repositories
If you have code and documentation in a repository other than the [primary repositories](#architecture),
If you have code and documentation in a repository other than the [primary repositories](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/architecture.md),
you should keep the documentation with the code in that repository.
Then you can use one of these approaches:
@ -81,187 +36,6 @@ Then you can use one of these approaches:
We do not encourage the use of [pages with lists of links](../structure.md#topics-and-resources-pages),
so only use this option if the recommended options are not feasible.
## Assets
To provide an optimized site structure and design, a search-engine-friendly website, and discoverable documentation, we use a few assets for the GitLab Documentation website.
### External libraries
GitLab Docs is built with a combination of external:
- [JavaScript libraries](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/package.json).
- [Ruby libraries](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/Gemfile).
### SEO
- [Schema.org](https://schema.org/)
- [Google Analytics](https://marketingplatform.google.com/about/analytics/)
- [Google Tag Manager](https://developers.google.com/tag-platform/tag-manager)
## Global navigation
Read through [the global navigation documentation](global_nav.md) to understand:
- How the global navigation is built.
- How to add new navigation items.
<!--
## Helpers
TBA
-->
## Pipelines
The pipeline in the `gitlab-docs` project:
- Tests changes to the docs site code.
- Builds the Docker images used in various pipeline jobs.
- Builds and deploys the docs site itself.
- Generates the review apps when the `review-docs-deploy` job is triggered.
### Rebuild the docs site Docker images
Once a week on Mondays, a scheduled pipeline runs and rebuilds the Docker images
used in various pipeline jobs, like `docs-lint`. The Docker image configuration files are
located in the [Dockerfiles directory](https://gitlab.com/gitlab-org/gitlab-docs/-/tree/main/dockerfiles).
If you need to rebuild the Docker images immediately (you must have maintainer-level permissions):
WARNING:
If you change the Dockerfile configuration and rebuild the images, you can break the main
pipeline in the main `gitlab` repository as well as in `gitlab-docs`. Create an image with
a different name first and test it to ensure you do not break the pipelines.
1. In [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs), go to **{rocket}** **CI/CD > Pipelines**.
1. Select **Run pipeline**.
1. See that a new pipeline is running. The jobs that build the images are in the first
stage, `build-images`. You can select the pipeline number to see the larger pipeline
graph, or select the first (`build-images`) stage in the mini pipeline graph to
expose the jobs that build the images.
1. Select the **play** (**{play}**) button next to the images you want to rebuild.
- Normally, you do not need to rebuild the `image:gitlab-docs-base` image, as it
rarely changes. If it does need to be rebuilt, be sure to only run `image:docs-lint`
after it is finished rebuilding.
### Deploy the docs site
Every four hours a scheduled pipeline builds and deploys the docs site. The pipeline
fetches the current docs from the main project's main branch, builds it with Nanoc
and deploys it to <https://docs.gitlab.com>.
To build and deploy the site immediately (must have the Maintainer role):
1. In [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs), go to **{rocket}** **CI/CD > Schedules**.
1. For the `Build docs.gitlab.com every 4 hours` scheduled pipeline, select the **play** (**{play}**) button.
Read more about [documentation deployments](deployment_process.md).
## Using YAML data files
The easiest way to achieve something similar to
[Jekyll's data files](https://jekyllrb.com/docs/datafiles/) in Nanoc is by
using the [`@items`](https://nanoc.app/doc/reference/variables/#items-and-layouts)
variable.
The data file must be placed inside the `content/` directory and then it can
be referenced in an ERB template.
Suppose we have the `content/_data/versions.yaml` file with the content:
```yaml
versions:
- 10.6
- 10.5
- 10.4
```
We can then loop over the `versions` array with something like:
```erb
<% @items['/_data/versions.yaml'][:versions].each do | version | %>
<h3><%= version %></h3>
<% end %>
```
Note that the data file must have the `yaml` extension (not `yml`) and that
we reference the array with a symbol (`:versions`).
## Archived documentation banner
A banner is displayed on archived documentation pages with the text `This is archived documentation for
GitLab. Go to the latest.` when either:
- The version of the documentation displayed is not the first version entry in `online` in
`content/_data/versions.yaml`.
- The documentation was built from the default branch (`main`).
For example, if the `online` entries for `content/_data/versions.yaml` are:
```yaml
online:
- "14.4"
- "14.3"
- "14.2"
```
In this case, the archived documentation banner isn't displayed:
- For 14.4, the docs built from the `14.4` branch. The branch name is the first entry in `online`.
- For 14.5-pre, the docs built from the default project branch (`main`).
The archived documentation banner is displayed:
- For 14.3.
- For 14.2.
- For any other version.
## Bumping versions of CSS and JavaScript
Whenever the custom CSS and JavaScript files under `content/assets/` change,
make sure to bump their version in the front matter. This method guarantees that
your changes take effect by clearing the cache of previous files.
Always use Nanoc's way of including those files, do not hardcode them in the
layouts. For example use:
```erb
<script async type="application/javascript" src="<%= @items['/assets/javascripts/badges.*'].path %>"></script>
<link rel="stylesheet" href="<%= @items['/assets/stylesheets/toc.*'].path %>">
```
The links pointing to the files should be similar to:
```erb
<%= @items['/path/to/assets/file.*'].path %>
```
Nanoc then builds and renders those links correctly according to what's
defined in [`Rules`](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/Rules).
## Linking to source files
A helper called [`edit_on_gitlab`](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/lib/helpers/edit_on_gitlab.rb) can be used
to link to a page's source file. We can link to both the simple editor and the
web IDE. Here's how you can use it in a Nanoc layout:
- Default editor: `<a href="<%= edit_on_gitlab(@item, editor: :simple) %>">Simple editor</a>`
- Web IDE: `<a href="<%= edit_on_gitlab(@item, editor: :webide) %>">Web IDE</a>`
If you don't specify `editor:`, the simple one is used by default.
## Algolia search engine
The docs site uses [Algolia DocSearch](https://docsearch.algolia.com/)
for its search function.
Learn more in <https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/docsearch.md>.
## Monthly release process (versions)
The docs website supports versions and each month we add the latest one to the list.
@ -269,5 +43,5 @@ For more information, read about the [monthly release process](https://gitlab.co
## Review Apps for documentation merge requests
If you are contributing to GitLab docs, read how to
[create a Review App with each merge request](../index.md#previewing-the-changes-live).

View File

@ -896,6 +896,51 @@ export default new VueApollo({
This is similar to the `DesignCollection` example above as new page results are appended to the
previous ones.
For some cases, it's hard to define the correct `keyArgs` for the field because all
the fields are updated. In this case, we can set `keyArgs` to `false`. This instructs
Apollo Client to not perform any automatic merge, and fully rely on the logic we
put into the `merge` function.
For example, we have a query like this:
```javascript
query searchGroupsWhereUserCanTransfer {
currentUser {
id
groups {
nodes {
id
fullName
}
pageInfo {
...PageInfo
}
}
}
}
```
Here, the `groups` field doesn't have a good candidate for `keyArgs`: both
`nodes` and `pageInfo` will be updated when we're fetching a second page.
Setting `keyArgs` to `false` makes the update work as intended:
```javascript
typePolicies: {
UserCore: {
fields: {
groups: {
keyArgs: false,
},
},
},
GroupConnection: {
fields: {
nodes: concatPagination(),
},
},
}
```
#### Using a recursive query in components
When it is necessary to fetch all paginated data initially an Apollo query can do the trick for us.

View File

@ -6,92 +6,295 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Uploads guide: Adding new uploads
Here, we describe how to add a new upload route [accelerated](index.md#workhorse-assisted-uploads) by Workhorse.
## Recommendations
Upload routes belong to one of these categories:
- When creating an uploader, [make it a subclass](#where-should-i-store-my-files) of `AttachmentUploader`
- Add your uploader to the [tables](#tables) in this document
- Do not add [new object storage buckets](#where-should-i-store-my-files)
- Implement [direct upload](#implementing-direct-upload-support)
- If you need to process your uploads, decide [where to do that](#processing-uploads)
1. Rails controllers: uploads handled by Rails controllers.
1. Grape API: uploads handled by a Grape API endpoint.
1. GraphQL API: uploads handled by a GraphQL resolve function.
## Background information
WARNING:
GraphQL uploads do not support [direct upload](index.md#direct-upload). Depending on the use case, the feature may not work on installations without NFS (like GitLab.com or Kubernetes installations). Uploading to object storage inside the GraphQL resolve function may result in timeout errors. For more details, follow [issue #280819](https://gitlab.com/gitlab-org/gitlab/-/issues/280819).
- [CarrierWave Uploaders](#carrierwave-uploaders)
- [GitLab modifications to CarrierWave](#gitlab-modifications-to-carrierwave)
## Update Workhorse for the new route
## Where should I store my files?
For both the Rails controller and Grape API uploads, Workhorse must be updated to get the
support for the new upload route.
CarrierWave Uploaders determine where files get
stored. When you create a new Uploader class you are deciding where to store the files of your new
feature.
1. Open a new issue in the [Workhorse tracker](https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/new) describing precisely the new upload route:
- The route's URL.
- The upload encoding.
- If possible, provide a dump of the upload request.
1. Implement the change and get the merge request for the issue above merged.
1. Ask the Maintainers of [Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse) to create a new release. You can do that in the merge request
directly during the maintainer review, or ask for it in the `#workhorse` Slack channel.
1. Bump the [Workhorse version file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/GITLAB_WORKHORSE_VERSION)
to the version you have from the previous points, or bump it in the same merge request that contains
the Rails changes. Refer to [Implementing the new route with a Rails controller](#implementing-the-new-route-with-a-rails-controller) or [Implementing the new route with a Grape API endpoint](#implementing-the-new-route-with-a-grape-api-endpoint) below.
First of all, ask yourself if you need a new Uploader class. It is OK
to use the same Uploader class for different mountpoints or different
models.
## Implementing the new route with a Rails controller
If you do want or need your own Uploader class then you should make it
a **subclass of `AttachmentUploader`**. You then inherit the storage
location and directory scheme from that class. The directory scheme
is:
For a Rails controller upload, we usually have a `multipart/form-data` upload and there are a
few things to do:
```ruby
File.join(model.class.underscore, mounted_as.to_s, model.id.to_s)
```
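A stand-alone illustration of that scheme. The `MergeRequest`/`:attachment` names are hypothetical, and the Rails `underscore` inflection is stubbed here (GitLab gets it from ActiveSupport):

```ruby
# Simplified stub of ActiveSupport's String#underscore, good enough for
# plain (non-namespaced) class names.
def underscore(class_name)
  class_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
end

record_class = 'MergeRequest' # hypothetical model class
mounted_as   = :attachment    # hypothetical mountpoint
record_id    = 42

# Mirrors File.join(model.class.underscore, mounted_as.to_s, model.id.to_s)
dir = File.join(underscore(record_class), mounted_as.to_s, record_id.to_s)
# dir is "merge_request/attachment/42"
```

Inheriting this scheme from `AttachmentUploader` is what keeps new uploads inside the already-configured `Gitlab.config.uploads` storage location.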
1. The upload is available under the parameter name you're using. For example, it could be an `artifact`
or a nested parameter such as `user[avatar]`. If you have the upload under the
`file` parameter, reading `params[:file]` should get you an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) instance.
1. Generally speaking, it's a good idea to check if the instance is from the [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) class. For example, see how we checked
[that the parameter is indeed an `UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/commit/ea30fe8a71bf16ba07f1050ab4820607b5658719#51c0cc7a17b7f12c32bc41cfab3649ff2739b0eb_79_77).
If you look around in the GitLab code base you will find quite a few
Uploaders that have their own storage location. For object storage,
this means Uploaders have their own buckets. We now **discourage**
adding new buckets for the following reasons:
WARNING:
**Do not** call `UploadedFile#from_params` directly! Do not build an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb)
instance using `UploadedFile#from_params`! This method can be unsafe to use depending on the `params`
passed. Instead, use the [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb)
instance that [`multipart.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/middleware/multipart.rb)
builds automatically for you.
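The guard described above can be sketched as a plain parameter check. The `UploadedFile` struct below is a stand-in for GitLab's real `lib/uploaded_file.rb` class, which the multipart middleware instantiates for you:

```ruby
# Stand-in for GitLab's UploadedFile; real code uses the class from
# lib/uploaded_file.rb rather than defining its own.
UploadedFile = Struct.new(:path, :original_filename)

def extract_upload!(params)
  file = params[:file]
  # Reject anything the multipart middleware did not rebuild as an
  # UploadedFile (e.g. a raw string path smuggled in by a client).
  raise ArgumentError, 'expected an UploadedFile' unless file.is_a?(UploadedFile)
  file
end

upload = extract_upload!(file: UploadedFile.new('/tmp/upload123', 'report.pdf'))
```

The type check is the whole point: trusting `params[:file]` without it would let a client pass an arbitrary server-side path instead of an uploaded file.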
- Using a new bucket adds to development time because you need to make downstream changes in [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit), [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) and [CNG](https://gitlab.com/gitlab-org/build/CNG).
- Using a new bucket requires GitLab.com infrastructure changes, which slows down the roll-out of your new feature.
- Using a new bucket slows down adoption of your new feature for self-managed GitLab installations: people cannot start using your new feature until their local GitLab administrator has configured the new bucket.
## Implementing the new route with a Grape API endpoint
By using an existing bucket you avoid all this extra work
and friction. The `Gitlab.config.uploads` storage location, which is what
`AttachmentUploader` uses, is guaranteed to already be configured.
For a Grape API upload, we can have a body or multipart upload. Things are slightly more complicated: two endpoints are needed. One for the
Workhorse pre-upload authorization and one for accepting the upload metadata from Workhorse:
1. Implement an endpoint with the URL + `/authorize` suffix that will:
   - Check that the request is coming from Workhorse with the `require_gitlab_workhorse!` helper from the [API helpers](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/helpers.rb).
   - Check user permissions.
   - Set the status to `200` with `status 200`.
   - Set the content type with `content_type Gitlab::Workhorse::INTERNAL_API_CONTENT_TYPE`.
   - Use your dedicated `Uploader` class (let's say that it's `FileUploader`) to build the response with `FileUploader.workhorse_authorize(params)`.
1. Implement the endpoint for the upload request that will:
   - Require all the `UploadedFile` objects as parameters.
     - For example, if we expect a single parameter `file` to be an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) instance,
       use `requires :file, type: ::API::Validations::Types::WorkhorseFile`.
     - Body upload requests have their upload available under the parameter `file`.
   - Check that the request is coming from Workhorse with the `require_gitlab_workhorse!` helper from the
     [API helpers](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/helpers.rb).
   - Check the user permissions.
   - Process the upload. In this step, the code must read the parameter; for
     our example, it would be `params[:file]`.

## Implementing Direct Upload support

Below we outline how to implement [direct upload](#direct-upload-via-workhorse) support.
WARNING:
**Do not** build an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb)
object by calling `UploadedFile#from_params` directly! This method can be unsafe to use depending on the `params`
passed. Instead, use the [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb)
object that [`multipart.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/middleware/multipart.rb)
builds automatically for you.
Using direct upload is not always necessary but it is usually a good
idea. Unless the uploads handled by your feature are both infrequent
and small, you probably want to implement direct upload. An example of
a feature with small and infrequent uploads is project avatars: these
rarely change and the application imposes strict size limits on them.

If your feature handles uploads that are not both infrequent and small,
then not implementing direct upload support means that you are taking on
technical debt. At the very least, you should make sure that you _can_
add direct upload support later.

To support Direct Upload you need two things:

1. A pre-authorization endpoint in Rails
1. A Workhorse routing rule

Workhorse does not know where to store your upload. To find out it
makes a pre-authorization request. It also does not know whether or
where to make a pre-authorization request. For that you need the
routing rule.

For those of us who remember when
[Workhorse was a separate project](https://gitlab.com/groups/gitlab-org/-/epics/4826):
it is no longer necessary to break these two steps into separate merge
requests. In fact, it is probably easier to do both in one merge
request.
### Adding a Workhorse routing rule
Routing rules are defined in
[workhorse/internal/upstream/routes.go](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/workhorse/internal/upstream/routes.go).
They consist of:
- An HTTP verb (usually "POST" or "PUT")
- A path regular expression
- An upload type: MIME multipart or "full request body"
- Optionally, you can also match on HTTP headers like `Content-Type`
Example:
```golang
u.route("PUT", apiProjectPattern+`packages/nuget/`, mimeMultipartUploader),
```
You should add a test for your routing rule to `TestAcceleratedUpload`
in
[workhorse/upload_test.go](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/workhorse/upload_test.go).
You should also manually verify that when you perform an upload
request for your new feature, Workhorse makes a pre-authorization
request. You can check this by looking at the Rails access logs. This
is necessary because if you make a mistake in your routing rule you
won't get a hard failure: you just end up using the less efficient
default path.
### Adding a pre-authorization endpoint
We distinguish three cases: Rails controllers, Grape API endpoints and
GraphQL resources.
To start with the bad news: direct upload for GraphQL is currently not
supported. The reason for this is that Workhorse does not parse
GraphQL queries. Also see [issue #280819](https://gitlab.com/gitlab-org/gitlab/-/issues/280819).
Consider accepting your file upload via Grape instead.
For Grape pre-authorization endpoints, look for existing examples that
implement `/authorize` routes. One example is the
[POST `:id/uploads/authorize` endpoint](https://gitlab.com/gitlab-org/gitlab/-/blob/9ad53d623eecebb799ce89eada951e4f4a59c116/lib/api/projects.rb#L642-651).
Note that this particular example uses `FileUploader`, which means
that the upload is stored in the storage location (bucket) of
that Uploader class.
For Rails endpoints you can use the
[WorkhorseAuthorization concern](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/app/controllers/concerns/workhorse_authorization.rb).
## Processing uploads
Some features require us to process uploads, for example to extract
metadata from the uploaded file. There are a couple of different ways
you can implement this. The main choice is _where_ to implement the
processing, or "who is the processor".
|Processor|Direct Upload possible?|Can reject HTTP request?|Implementation|
|---|---|---|---|
|Sidekiq|yes|no|Straightforward|
|Workhorse|yes|yes|Complex|
|Rails|no|yes|Easy|
Processing in Rails looks appealing but it tends to lead to scaling
problems down the road because you cannot use direct upload. You are
then forced to rebuild your feature with processing in Workhorse. So
if the requirements of your feature allow it, processing in
Sidekiq strikes a good balance between complexity and the ability to
scale.
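The Sidekiq approach can be sketched as follows. The worker below uses hypothetical names and plain Ruby (a real worker would include `Sidekiq::Worker` and be enqueued with `perform_async` after the upload is stored); the point is that the HTTP request finishes quickly and metadata extraction happens later, out of band:

```ruby
require 'tempfile'

# Hypothetical post-upload processor in the Sidekiq style.
# Because it runs after the HTTP request has completed, it can
# process the file but can no longer reject the request.
class ProcessUploadWorker
  def perform(upload_path)
    # Extract simple metadata from the stored upload.
    {
      name: File.basename(upload_path),
      size: File.size(upload_path)
    }
  end
end
```

This matches the table above: Sidekiq processing is compatible with direct upload, but cannot reject the HTTP request.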
## CarrierWave Uploaders
GitLab uses a modified version of
[CarrierWave](https://github.com/carrierwaveuploader/carrierwave) to
manage uploads. Below we will describe how we use CarrierWave and how
we modified it.
The central concept of CarrierWave is the **Uploader** class. The
Uploader defines where files get stored, and optionally contains
validation and processing logic. To use an Uploader you must associate
it with a text column on an ActiveRecord model. This is called "mounting",
and the column is called the "mountpoint". For example:
```ruby
class Project < ApplicationRecord
mount_uploader :avatar, AttachmentUploader
end
```
Now if I upload an avatar called `tanuki.png` the idea is that in the
`projects.avatar` column for my project, CarrierWave stores the string
`tanuki.png`, and that the AttachmentUploader class contains the
configuration data and directory schema. For example if the project ID
is 123, the actual file may be in
`/var/opt/gitlab/gitlab-rails/uploads/-/system/project/avatar/123/tanuki.png`.
The directory
`/var/opt/gitlab/gitlab-rails/uploads/-/system/project/avatar/123/`
was chosen by the Uploader based on, among other things, the configuration
(`/var/opt/gitlab/gitlab-rails/uploads`), the model name (`project`),
the model ID (`123`), and the mountpoint (`avatar`).
> The Uploader determines the individual storage directory of your
> upload. The mountpoint column in your model contains the filename.
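The composition of that directory can be restated in plain Ruby. This is only a sketch of the path layout from the example above, not an actual Uploader API:

```ruby
# Sketch: how an Uploader-style storage path is assembled from its inputs.
def upload_store_path(base:, model_name:, mountpoint:, model_id:, filename:)
  File.join(base, '-', 'system', model_name, mountpoint, model_id.to_s, filename)
end

upload_store_path(
  base: '/var/opt/gitlab/gitlab-rails/uploads',
  model_name: 'project',
  mountpoint: 'avatar',
  model_id: 123,
  filename: 'tanuki.png'
)
# => "/var/opt/gitlab/gitlab-rails/uploads/-/system/project/avatar/123/tanuki.png"
```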
You never access the mountpoint column directly because CarrierWave
defines a getter and setter on your model that operates on file handle
objects.
### Optional Uploader behaviors
Besides determining the storage directory for your upload, a
CarrierWave Uploader can implement several other behaviors via
callbacks. Not all of these behaviors are usable in GitLab. In
particular, you currently cannot use the `version` mechanism of
CarrierWave. Things you can do include:
- Filename validation
- **Incompatible with direct upload:** One time pre-processing of file contents, e.g. image resizing
- **Incompatible with direct upload:** Encryption at rest
Note that CarrierWave pre-processing behaviors such as image resizing
or encryption require local access to the uploaded file. This forces
you to upload the processed file from Ruby, which runs counter to direct
upload, which is all about _not_ doing the upload in Ruby. If you use
direct upload with an Uploader that has pre-processing behaviors, those
behaviors are silently skipped.
### CarrierWave Storage engines
CarrierWave has two storage engines:
|CarrierWave class|GitLab name|Description|
|---|---|---|
|`CarrierWave::Storage::File`|`ObjectStorage::Store::LOCAL` |Local files, accessed through the Ruby stdlib|
| `CarrierWave::Storage::Fog`|`ObjectStorage::Store::REMOTE`|Cloud files, accessed through the [Fog gem](https://github.com/fog/fog)|
GitLab uses both of these engines, depending on configuration.
The normal way to choose a storage engine in CarrierWave is to use the
`Uploader.storage` class method. In GitLab we do not do this; we have
overridden `Uploader#storage` instead. This allows us to vary the
storage engine file by file.
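The effect of that override can be sketched like this. The names below are illustrative only; GitLab's real implementation lives in `app/uploaders/object_storage.rb`:

```ruby
# Store identifiers as GitLab names them (see the table above).
module ObjectStorage
  module Store
    LOCAL = 1
    REMOTE = 2
  end
end

# Sketch of an uploader that picks its storage engine per file, based on
# a value persisted alongside the upload (like uploads.store).
class SketchUploader
  def initialize(object_store)
    @object_store = object_store
  end

  def storage
    @object_store == ObjectStorage::Store::REMOTE ? :fog_storage : :file_storage
  end
end
```

With this shape, two records mounted on the same Uploader class can live on different storage engines at the same time.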
### CarrierWave file lifecycle
An Uploader is associated with two storage areas: regular storage and
cache storage. Each has its own storage engine. If you assign a file
to a mountpoint setter (`project.avatar =
File.open('/tmp/tanuki.png')`) you will copy/move the file to cache
storage as a side effect via the `cache!` method. To persist the file
you must somehow call the `store!` method. This either happens via
[ActiveRecord callbacks](https://github.com/carrierwaveuploader/carrierwave/blob/v1.3.2/lib/carrierwave/orm/activerecord.rb#L55)
or by calling `store!` on an Uploader instance.
Normally you do not need to interact with `cache!` and `store!` but if
you need to debug GitLab CarrierWave modifications it is useful to
know that they are there and that they always get called.
Specifically, it is good to know that CarrierWave pre-processing
behaviors (`process` etc.) are implemented as `before :cache` hooks,
and in the case of direct upload, these hooks are ignored and do not
run.
> Direct upload skips all CarrierWave `before :cache` hooks.
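Why this matters can be illustrated with a generic callback sketch (this mimics the spirit of CarrierWave's `process` callbacks, not its internals): if hooks only run inside `cache!`, any code path that bypasses `cache!` silently skips them.

```ruby
# Generic sketch of a before-:cache hook registry. Not CarrierWave's code.
class HookedUploader
  def self.before_cache_hooks
    @before_cache_hooks ||= []
  end

  def self.process(&block)
    before_cache_hooks << block
  end

  def initialize
    @processed = []
  end

  attr_reader :processed

  # Normal upload path: hooks run, then the file is cached.
  def cache!(file)
    self.class.before_cache_hooks.each { |hook| @processed << hook.call(file) }
    file
  end

  # Direct-upload-style path: cache! is bypassed, so hooks never run.
  def link_remote_file!(file)
    file
  end
end

class ResizingUploader < HookedUploader
  process { |file| "resized #{file}" }
end
```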
## GitLab modifications to CarrierWave
GitLab uses a modified version of CarrierWave to make a number of things possible.
### Migrating data between storage engines
In
[app/uploaders/object_storage.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/app/uploaders/object_storage.rb)
there is code for migrating user data between local storage and object
storage. This code exists because for a long time, GitLab.com stored
uploads on local storage via NFS. This changed when, as part of an infrastructure
migration, we had to move the uploads to object storage.
This is why the CarrierWave `storage` varies from upload to upload in
GitLab, and why we have database columns like `uploads.store` or
`ci_job_artifacts.file_store`.
### Direct Upload via Workhorse
Workhorse direct upload is a mechanism that lets us accept large
uploads without spending a lot of Ruby CPU time. Workhorse is written
in Go and goroutines have a much lower resource footprint than Ruby
threads.
Direct upload works as follows.
1. Workhorse accepts a user upload request
1. Workhorse pre-authenticates the request with Rails, and receives a temporary upload location
1. Workhorse stores the file upload in the user's request to the temporary upload location
1. Workhorse propagates the request to Rails
1. Rails issues a remote copy operation to copy the uploaded file from its temporary location to the final location
1. Rails deletes the temporary upload
1. Workhorse deletes the temporary upload a second time in case Rails timed out
Normally, `cache!` returns an instance of
`CarrierWave::SanitizedFile`, and `store!` then
[uploads that file using Fog](https://github.com/carrierwaveuploader/carrierwave/blob/v1.3.2/lib/carrierwave/storage/fog.rb#L327-L335).
In the case of object storage, with the modifications specific to GitLab, the
copying from the temporary location to the final location is
implemented by Rails fooling CarrierWave. When CarrierWave tries to
`cache!` the upload, we
[return](https://gitlab.com/gitlab-org/gitlab/-/blob/59b441d578e41cb177406a9799639e7a5aa9c7e1/app/uploaders/object_storage.rb#L367)
a `CarrierWave::Storage::Fog::File` file handle which points to the
temporary file. During the `store!` phase, CarrierWave then
[copies](https://github.com/carrierwaveuploader/carrierwave/blob/v1.3.2/lib/carrierwave/storage/fog.rb#L325)
this file to its intended location.
## Document Object Storage buckets and CarrierWave integration

When using Object Storage, GitLab expects each kind of upload to maintain its own bucket in the respective
Object Storage destination. Moreover, the integration with CarrierWave is not used all the time.
The [Object Storage Working Group](https://about.gitlab.com/company/team/structure/working-groups/object-storage/)
is investigating an approach that unifies Object Storage buckets into a single one and removes CarrierWave
to simplify the implementation and administration of uploads.

The Scalability::Frameworks team is going to make object storage and uploads easier to use and more robust. If you add or change uploaders, it helps us if you update the following tables too, so we can keep an overview of where and how uploaders are used. Document new uploads here by slotting them into:

- [Feature bucket details](#feature-bucket-details)
- [CarrierWave integration](#carrierwave-integration)
### Feature bucket details
@ -107,6 +107,65 @@ To change these settings:
After configuring these settings, you can configure
your chosen [provider](#supported-providers).
### Per-provider configuration
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/89379) in GitLab 15.3.
You can create GitLab configuration on a per-provider basis, which is supplied to the [provider](#supported-providers) using `args`. If you set the `gitlab_username_claim`
variable in `args` for a provider, you can select another claim to use for the GitLab username. The chosen claim must be unique to avoid collisions.

If `allow_single_sign_on` is set, GitLab uses one of the following fields returned in the OmniAuth `auth_hash` to establish a username in GitLab for the user signing in,
choosing the first that exists:

- `username`.
- `nickname`.
- `email`.
- **For Omnibus installations**
```ruby
gitlab_rails['omniauth_providers'] = [
  # The generic pattern for configuring a provider with name PROVIDER_NAME
  {
    name: "PROVIDER_NAME",
    # ...
    args: { gitlab_username_claim: 'sub' } # For users signing in with the provider you configure, the GitLab username will be set to the "sub" received from the provider
  },

  # Here are examples using GitHub and Crowd
  {
    name: "github",
    # ...
    args: { gitlab_username_claim: 'name' } # For users signing in with GitHub, the GitLab username will be set to the "name" received from GitHub
  },
  {
    name: "crowd",
    # ...
    args: { gitlab_username_claim: 'uid' } # For users signing in with Crowd, the GitLab username will be set to the "uid" received from Crowd
  },
]
```
- **For installations from source**
```yaml
- { name: 'PROVIDER_NAME',
    # ...
    args: { gitlab_username_claim: 'sub' }
  }
- { name: 'github',
    # ...
    args: { gitlab_username_claim: 'name' }
  }
- { name: 'crowd',
    # ...
    args: { gitlab_username_claim: 'uid' }
  }
```
### Passwords for users created via OmniAuth
The [Generated passwords for users created through integrated authentication](../security/passwords_for_integrated_authentication_methods.md)
@ -221,7 +221,7 @@ The [Cohorts](user_cohorts.md) tab displays the monthly cohorts of new users and
### Prevent a user from creating groups
By default, users can create groups. To prevent a user from creating a top level group:
1. On the top bar, select **Menu > Admin**.
1. On the left sidebar, select **Overview > Users** (`/admin/users`).
@ -230,6 +230,8 @@ By default, users can create groups. To prevent a user from creating groups:
1. Clear the **Can create group** checkbox.
1. Select **Save changes**.
It is also possible to [limit which roles can create a subgroup within a group](../group/subgroups/index.md#change-who-can-create-subgroups).
### Administering Groups
You can administer all groups in the GitLab instance from the Admin Area's Groups page.
@ -47,7 +47,7 @@ You can use GitLab to review analytics at the project level. Some of these featu
The following analytics features are available for users to create personalized views:
- [Application Security](../application_security/security_dashboard/index.md#security-center)
Be sure to review the documentation page for this feature for GitLab tier requirements.
@ -209,7 +209,12 @@ monthlyBugsCreated:
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/725) in GitLab 15.3.
The `data_source` parameter was introduced to allow visualizing data from different data sources.
Supported values are:
- `issuables`: Exposes merge request or issue data.
- `dora`: Exposes DORA metrics data.
#### `Issuable` query parameters
@ -259,7 +264,7 @@ monthlyBugsCreated:
- regression
```
##### `query.params.collection_labels`
Group "issuable" by the configured labels.
@ -286,7 +291,7 @@ weeklyBugsBySeverity:
- S4
```
##### `query.group_by`
Define the X-axis of your chart.
@ -296,7 +301,7 @@ Supported values are:
- `week`: Group data per week.
- `month`: Group data per month.
##### `query.period_limit`
Define how far "issuables" are queried in the past (using the `query.period_field`).
@ -333,6 +338,68 @@ NOTE:
Until [this bug](https://gitlab.com/gitlab-org/gitlab/-/issues/26911) is resolved,
you may see `created_at` in place of `merged_at`; `created_at` is used instead.
#### `DORA` query parameters
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367248) in GitLab 15.3.
An example DORA chart definition:
```yaml
dora:
title: "DORA charts"
charts:
- title: "DORA deployment frequency"
type: bar
query:
data_source: dora
params:
metric: deployment_frequency
group_by: day
period_limit: 10
projects:
only:
- 38
- title: "DORA lead time for changes"
description: "DORA lead time for changes"
type: bar
query:
data_source: dora
params:
metric: lead_time_for_changes
group_by: day
environment_tiers:
- staging
period_limit: 30
```
##### `query.metric`
Defines which DORA metric to query. The available values are:
- `deployment_frequency` (default)
- `lead_time_for_changes`
- `time_to_restore_service`
- `change_failure_rate`
The metrics are described on the [DORA API](../../../api/dora/metrics.md#the-value-field) page.
##### `query.group_by`
Define the X-axis of your chart.
Supported values are:
- `day` (default): Group data per day.
- `month`: Group data per month.
##### `query.period_limit`
Define how far the metrics are queried in the past (default: 15). Maximum lookback period is 180 days or 6 months.
##### `query.environment_tiers`
An array of environment tiers to include in the calculation (default: `production`).
### `projects`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/10904) in GitLab 12.4.
@ -129,95 +129,6 @@ Methods for creating a release using a CI/CD job include:
- Create a release when a Git tag is created.
- Create a release when a commit is merged to the default branch.
### Use a custom SSL CA certificate authority
You can use the `ADDITIONAL_CA_CERT_BUNDLE` CI/CD variable to configure a custom SSL CA certificate authority,
@ -0,0 +1,100 @@
---
stage: Release
group: Release
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Release CI/CD examples
GitLab release functionality is flexible and can be configured to match your workflow. This page
features example CI/CD release jobs. Each example demonstrates a method of creating a release in a
CI/CD pipeline.
## Create a release when a Git tag is created
In this CI/CD example, pushing a Git tag to the repository, or creating a Git tag in the UI triggers
the release. You can use this method if you prefer to create the Git tag manually, and create a
release as a result.
NOTE:
Do not provide Release notes when you create the Git tag in the UI. Providing release notes
creates a release, resulting in the pipeline failing.
Key points in the following _extract_ of an example `.gitlab-ci.yml` file:
- The `rules` stanza defines when the job is added to the pipeline.
- The Git tag is used in the release's name and description.
```yaml
release_job:
stage: release
image: registry.gitlab.com/gitlab-org/release-cli:latest
rules:
- if: $CI_COMMIT_TAG # Run this job when a tag is created
script:
- echo "running release_job"
release: # See https://docs.gitlab.com/ee/ci/yaml/#release for available properties
tag_name: '$CI_COMMIT_TAG'
description: '$CI_COMMIT_TAG'
```
## Create a release when a commit is merged to the default branch
In this CI/CD example, merging a commit to the default branch triggers the pipeline. You can use
this method if your release workflow does not create a tag manually.
Key points in the following _extract_ of an example `.gitlab-ci.yml` file:
- The Git tag, description, and reference are created automatically in the pipeline.
- If you manually create a tag, the `release_job` job does not run.
```yaml
release_job:
stage: release
image: registry.gitlab.com/gitlab-org/release-cli:latest
rules:
- if: $CI_COMMIT_TAG
when: never # Do not run this job when a tag is created manually
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch
script:
- echo "running release_job for $TAG"
release: # See https://docs.gitlab.com/ee/ci/yaml/#release for available properties
tag_name: 'v0.$CI_PIPELINE_IID' # The version is incremented per pipeline.
description: 'v0.$CI_PIPELINE_IID'
ref: '$CI_COMMIT_SHA' # The tag is created from the pipeline SHA.
```
NOTE:
Environment variables set in `before_script` or `script` are not available for expanding
in the same job. Read more about
[potentially making variables available for expanding](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/6400).
## Skip multiple pipelines when creating a release
Creating a release using a CI/CD job could potentially trigger multiple pipelines if the associated tag does not exist already. To understand how this might happen, consider the following workflows:
- Tag first, release second:
1. A tag is created via UI or pushed.
1. A tag pipeline is triggered, and runs `release` job.
1. A release is created.
- Release first, tag second:
1. A pipeline is triggered when commits are pushed or merged to default branch. The pipeline runs `release` job.
1. A release is created.
1. A tag is created.
1. A tag pipeline is triggered. The pipeline also runs `release` job.
In the second workflow, the `release` job runs in multiple pipelines. To prevent this, you can use the [`workflow:rules` keyword](../../../ci/yaml/index.md#workflowrules) to determine if a release job should run in a tag pipeline:
```yaml
release_job:
rules:
- if: $CI_COMMIT_TAG
when: never # Do not run this job in a tag pipeline
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch
script:
- echo "Create release"
release:
name: 'My awesome release'
tag_name: '$CI_COMMIT_TAG'
```
@ -6,6 +6,9 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Wiki **(FREE)**
> - Page loading [changed](https://gitlab.com/gitlab-org/gitlab/-/issues/336792) to asynchronous in GitLab 14.9.
> - Page slug encoding method [changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/71753) to `ERB::Util.url_encode` in GitLab 14.9.
If you don't want to keep your documentation in your repository, but you want
to keep it in the same project as your code, you can use the wiki GitLab provides
in each GitLab project. Every wiki is a separate Git repository, so you can create
@ -370,3 +373,12 @@ For the status of the ongoing development for CommonMark and GitLab Flavored Mar
- [Group repository storage moves API](../../../api/group_repository_storage_moves.md)
- [Group wikis API](../../../api/group_wikis.md)
- [Wiki keyboard shortcuts](../../shortcuts.md#wiki-pages)
## Troubleshooting
### Page slug rendering with Apache reverse proxy
In GitLab 14.9 and later, page slugs are encoded using the
[`ERB::Util.url_encode`](https://www.rubydoc.info/stdlib/erb/ERB%2FUtil.url_encode) method.
If you use an Apache reverse proxy, you can add a `nocanon` argument to the `ProxyPass`
line of your Apache configuration to ensure your page slugs render correctly.
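For example, the resulting directive might look like the following (the path and upstream URL are illustrative; adjust them to your deployment):

```plaintext
ProxyPass / http://127.0.0.1:8080/ nocanon
```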
@ -151,8 +151,6 @@ module API
requires :deployment_id, type: Integer, desc: 'The deployment ID'
end
delete ':id/deployments/:deployment_id' do
deployment = user_project.deployments.find(params[:deployment_id])
authorize!(:destroy_deployment, deployment)
@ -107,8 +107,8 @@ module Gitlab
event = AuthenticationEvent.new(authentication_event_payload)
event.save!
rescue ActiveRecord::RecordInvalid => e
::Gitlab::ErrorTracking.track_exception(e, audit_operation: @name)
end
def authentication_event_payload
@ -146,8 +146,8 @@ module Gitlab
def log_to_database(events)
AuditEvent.bulk_insert!(events)
rescue ActiveRecord::RecordInvalid => e
::Gitlab::ErrorTracking.track_exception(e, audit_operation: @name)
end
def log_to_file(events)
@ -59,14 +59,43 @@ module Gitlab
auth_hash['info']
end
def coerce_utf8(value)
value.is_a?(String) ? Gitlab::Utils.force_utf8(value) : value
end
def get_info(key)
coerce_utf8(info[key])
end
def provider_config
Gitlab::Auth::OAuth::Provider.config_for(@provider) || {}
end
def provider_args
@provider_args ||= provider_config['args'].presence || {}
end
def get_from_auth_hash_or_info(key)
coerce_utf8(auth_hash[key]) || get_info(key)
end
# Allow for configuring a custom username claim per provider from
# the auth hash or use the canonical username or nickname fields
def gitlab_username_claim
provider_args.dig('gitlab_username_claim')&.to_sym
end
def username_claims
[gitlab_username_claim, :username, :nickname].compact
end
def get_username
username_claims.map { |claim| get_from_auth_hash_or_info(claim) }.find { |name| name.presence }
end
def username_and_email
@username_and_email ||= begin
username = get_username
email = get_info(:email).presence
username ||= generate_username(email) if email
@ -39,4 +39,7 @@ kics-iac-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /kics/
when: never
- if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
- if: $CI_OPEN_MERGE_REQUESTS # Don't add it to a *branch* pipeline if it's already in a merge request pipeline.
when: never
- if: $CI_COMMIT_BRANCH # If there's no open merge request, add it to a *branch* pipeline instead.


@@ -47,7 +47,7 @@ bandit-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /bandit/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/*.py'
- if: $CI_OPEN_MERGE_REQUESTS # Don't add it to a *branch* pipeline if it's already in a merge request pipeline.
@@ -68,7 +68,7 @@ brakeman-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /brakeman/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/*.rb'
- '**/Gemfile'
@@ -91,7 +91,7 @@ eslint-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /eslint/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/*.html'
- '**/*.js'
@@ -120,7 +120,7 @@ flawfinder-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /flawfinder/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/*.c'
- '**/*.cc'
@@ -152,7 +152,7 @@ kubesec-sast:
- if: $SAST_EXCLUDED_ANALYZERS =~ /kubesec/
when: never
# Add the job to merge request pipelines if there's an open merge request.
- - if: $CI_MERGE_REQUEST_IID &&
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" &&
$SCAN_KUBERNETES_MANIFESTS == 'true'
- if: $CI_OPEN_MERGE_REQUESTS # Don't add it to a *branch* pipeline if it's already in a merge request pipeline.
when: never
@@ -172,7 +172,7 @@ gosec-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /gosec/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/*.go'
- if: $CI_OPEN_MERGE_REQUESTS # Don't add it to a *branch* pipeline if it's already in a merge request pipeline.
@@ -197,7 +197,7 @@ mobsf-android-sast:
- if: $SAST_EXCLUDED_ANALYZERS =~ /mobsf/
when: never
# Add the job to merge request pipelines if there's an open merge request.
- - if: $CI_MERGE_REQUEST_IID &&
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" &&
$SAST_EXPERIMENTAL_FEATURES == 'true'
exists:
- '**/*.apk'
@@ -219,7 +219,7 @@ mobsf-ios-sast:
- if: $SAST_EXCLUDED_ANALYZERS =~ /mobsf/
when: never
# Add the job to merge request pipelines if there's an open merge request.
- - if: $CI_MERGE_REQUEST_IID &&
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" &&
$SAST_EXPERIMENTAL_FEATURES == 'true'
exists:
- '**/*.ipa'
@@ -245,7 +245,7 @@ nodejs-scan-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /nodejs-scan/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/package.json'
- if: $CI_OPEN_MERGE_REQUESTS # Don't add it to a *branch* pipeline if it's already in a merge request pipeline.
@@ -266,7 +266,7 @@ phpcs-security-audit-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /phpcs-security-audit/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/*.php'
- if: $CI_OPEN_MERGE_REQUESTS # Don't add it to a *branch* pipeline if it's already in a merge request pipeline.
@@ -287,7 +287,7 @@ pmd-apex-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /pmd-apex/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/*.cls'
- if: $CI_OPEN_MERGE_REQUESTS # Don't add it to a *branch* pipeline if it's already in a merge request pipeline.
@@ -308,7 +308,7 @@ security-code-scan-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /security-code-scan/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/*.csproj'
- '**/*.vbproj'
@@ -332,7 +332,7 @@ semgrep-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /semgrep/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/*.py'
- '**/*.js'
@@ -367,7 +367,7 @@ sobelow-sast:
when: never
- if: $SAST_EXCLUDED_ANALYZERS =~ /sobelow/
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- 'mix.exs'
- if: $CI_OPEN_MERGE_REQUESTS # Don't add it to a *branch* pipeline if it's already in a merge request pipeline.
@@ -392,7 +392,7 @@ spotbugs-sast:
when: never
- if: $SAST_DISABLED
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
exists:
- '**/*.groovy'
- '**/*.java'


@@ -8,6 +8,7 @@ variables:
TEMPLATE_REGISTRY_HOST: 'registry.gitlab.com'
SECURE_ANALYZERS_PREFIX: "$TEMPLATE_REGISTRY_HOST/security-products"
SECRET_DETECTION_IMAGE_SUFFIX: ""
SECRETS_ANALYZER_VERSION: "4"
SECRET_DETECTION_EXCLUDED_PATHS: ""
@@ -29,7 +30,7 @@ secret_detection:
rules:
- if: $SECRET_DETECTION_DISABLED
when: never
- - if: $CI_MERGE_REQUEST_IID # Add the job to merge request pipelines if there's an open merge request.
+ - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Add the job to merge request pipelines if there's an open merge request.
- if: $CI_OPEN_MERGE_REQUESTS # Don't add it to a *branch* pipeline if it's already in a merge request pipeline.
when: never
- if: $CI_COMMIT_BRANCH # If there's no open merge request, add it to a *branch* pipeline instead.


@@ -59,6 +59,10 @@ module Gitlab
).render_in(@template, &block)
end
def gitlab_ui_datepicker(method, options = {})
@template.text_field @object_name, method, options.merge(class: "datepicker form-control gl-form-input")
end
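A note on the `options.merge` call above: `Hash#merge` prefers the argument's value on key collision, so any `:class` the caller passes is replaced by the datepicker classes rather than appended, while unrelated options pass through untouched. A quick plain-Ruby check, independent of Rails:

```ruby
# Hash#merge keeps the argument hash's value when keys collide, so the
# caller's :class is overwritten; other keys survive unchanged.
caller_options = { class: 'my-custom-class', data: { action: 'throw' } }
merged = caller_options.merge(class: 'datepicker form-control gl-form-input')
```

This matches the datepicker spec below, where the caller-supplied `id`, `data`, and `value` appear in the rendered input alongside the fixed datepicker classes.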
private
def format_options(options)


@@ -6315,11 +6315,6 @@ msgstr ""
msgid "Billing|You can begin moving members in %{namespaceName} now. A member loses access to the group when you turn off %{strongStart}In a seat%{strongEnd}. If over 5 members have %{strongStart}In a seat%{strongEnd} enabled after October 19, 2022, we'll select the 5 members who maintain access. We'll first count members that have Owner and Maintainer roles, then the most recently active members until we reach 5 members. The remaining members will get a status of Over limit and lose access to the group."
msgstr ""
msgid "Billing|Your free group is now limited to %d member"
msgid_plural "Billing|Your free group is now limited to %d members"
msgstr[0] ""
msgstr[1] ""
msgid "Billing|Your group recently changed to use the Free plan. %{over_limit_message} You can free up space for new members by removing those who no longer need access or toggling them to over-limit. To get an unlimited number of members, you can %{link_start}upgrade%{link_end} to a paid tier."
msgstr ""
@@ -11629,6 +11624,9 @@ msgstr ""
msgid "DORA4Metrics|Change failure rate"
msgstr ""
msgid "DORA4Metrics|Change failure rate (%%)"
msgstr ""
msgid "DORA4Metrics|Date"
msgstr ""
@@ -11644,6 +11642,9 @@ msgstr ""
msgid "DORA4Metrics|Lead time for changes"
msgstr ""
msgid "DORA4Metrics|Lead time for changes (median days)"
msgstr ""
msgid "DORA4Metrics|Median (last %{days}d)"
msgstr ""
@@ -11689,6 +11690,9 @@ msgstr ""
msgid "DORA4Metrics|Time to restore service"
msgstr ""
msgid "DORA4Metrics|Time to restore service (median days)"
msgstr ""
msgid "DSN"
msgstr ""
@@ -26741,6 +26745,9 @@ msgstr ""
msgid "Notify|%{invite_email}, now known as %{user_name}, has accepted your invitation to join the %{target_name} %{target_model_name}."
msgstr ""
msgid "Notify|%{invited_user} has %{highlight_start}declined%{highlight_end} your invitation to join the %{target_link} %{target_name}."
msgstr ""
msgid "Notify|%{member_link} requested %{member_role} access to the %{target_source_link} %{target_type}."
msgstr ""
@@ -45293,6 +45300,11 @@ msgstr ""
msgid "Your first project"
msgstr ""
msgid "Your free group is now limited to %d member"
msgid_plural "Your free group is now limited to %d members"
msgstr[0] ""
msgstr[1] ""
msgid "Your group, %{strong_start}%{namespace_name}%{strong_end} has more than %{free_user_limit} member. From October 19, 2022, the %{free_user_limit} most recently active member will remain active, and the remaining members will have the %{link_start}Over limit status%{link_end} and lose access to the group. You can go to the Usage Quotas page to manage which %{free_user_limit} member will remain in your group."
msgid_plural "Your group, %{strong_start}%{namespace_name}%{strong_end} has more than %{free_user_limit} members. From October 19, 2022, the %{free_user_limit} most recently active members will remain active, and the remaining members will have the %{link_start}Over limit status%{link_end} and lose access to the group. You can go to the Usage Quotas page to manage which %{free_user_limit} members will remain in your group."
msgstr[0] ""


@@ -16,10 +16,6 @@ RSpec.describe 'CI Lint', :js do
visit project_ci_lint_path(project)
editor_set_value(yaml_content)
wait_for('YAML content') do
find(content_selector).text.present?
end
end
describe 'YAML parsing' do


@@ -26,23 +26,23 @@ RSpec.describe Groups::AcceptingProjectTransfersFinder do
group_where_direct_developer.add_developer(user)
create(:group_group_link, :owner,
shared_with_group: group_where_direct_owner,
shared_group: shared_with_group_where_direct_owner_as_owner
)
create(:group_group_link, :guest,
shared_with_group: group_where_direct_owner,
shared_group: shared_with_group_where_direct_owner_as_guest
)
create(:group_group_link, :maintainer,
shared_with_group: group_where_direct_owner,
shared_group: shared_with_group_where_direct_owner_as_maintainer
)
create(:group_group_link, :owner,
shared_with_group: group_where_direct_developer,
shared_group: shared_with_group_where_direct_developer_as_owner
)
end
@@ -51,13 +51,13 @@
it 'only returns groups where the user has access to transfer projects to' do
expect(result).to match_array([
group_where_direct_owner,
subgroup_of_group_where_direct_owner,
group_where_direct_maintainer,
shared_with_group_where_direct_owner_as_owner,
shared_with_group_where_direct_owner_as_maintainer,
subgroup_of_shared_with_group_where_direct_owner_as_maintainer
])
end
end
end


@@ -16,6 +16,7 @@ exports[`Snippet Blob Edit component with loaded blob matches snapshot 1`] = `
<source-editor-stub
debouncevalue="250"
editoroptions="[object Object]"
extensions="[object Object]"
fileglobalid="blob_local_7"
filename="foo/bar/test.md"
value="Lorem ipsum dolar sit amet,


@@ -3,6 +3,7 @@ import { nextTick } from 'vue';
import { EDITOR_READY_EVENT } from '~/editor/constants';
import Editor from '~/editor/source_editor';
import SourceEditor from '~/vue_shared/components/source_editor.vue';
import * as helpers from 'jest/editor/helpers';
jest.mock('~/editor/source_editor');
@@ -13,6 +14,7 @@ describe('Source Editor component', () => {
const value = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.';
const fileName = 'lorem.txt';
const fileGlobalId = 'snippet_777';
const useSpy = jest.fn();
const createInstanceMock = jest.fn().mockImplementation(() => {
mockInstance = {
onDidChangeModelContent: jest.fn(),
@@ -20,6 +22,7 @@
getValue: jest.fn(),
setValue: jest.fn(),
dispose: jest.fn(),
use: useSpy,
};
return mockInstance;
});
@@ -83,10 +86,27 @@
blobPath: fileName,
blobGlobalId: fileGlobalId,
blobContent: value,
extensions: null,
});
});
it.each`
description | extensions | toBeCalled
${'no extension when `undefined` is'} | ${undefined} | ${false}
${'no extension when {} is'} | ${{}} | ${false}
${'no extension when [] is'} | ${[]} | ${false}
${'single extension'} | ${{ definition: helpers.SEClassExtension }} | ${true}
${'single extension with options'} | ${{ definition: helpers.SEWithSetupExt, setupOptions: { foo: 'bar' } }} | ${true}
${'multiple extensions'} | ${[{ definition: helpers.SEClassExtension }, { definition: helpers.SEWithSetupExt }]} | ${true}
${'multiple extensions with options'} | ${[{ definition: helpers.SEClassExtension }, { definition: helpers.SEWithSetupExt, setupOptions: { foo: 'bar' } }]} | ${true}
`('installs $description passed as a prop', ({ extensions, toBeCalled }) => {
createComponent({ extensions });
if (toBeCalled) {
expect(useSpy).toHaveBeenCalledWith(extensions);
} else {
expect(useSpy).not.toHaveBeenCalled();
}
});
it('reacts to the changes in fileName', () => {
const newFileName = 'ipsum.txt';


@@ -14,6 +14,7 @@ RSpec.describe Gitlab::Auth::OAuth::AuthHash do
)
end
let(:provider_config) { { 'args' => { 'gitlab_username_claim' => 'first_name' } } }
let(:uid_raw) do
+"CN=Onur K\xC3\xBC\xC3\xA7\xC3\xBCk,OU=Test,DC=example,DC=net"
end
@@ -35,6 +36,7 @@ RSpec.describe Gitlab::Auth::OAuth::AuthHash do
let(:email_utf8) { email_ascii.force_encoding(Encoding::UTF_8) }
let(:nickname_utf8) { nickname_ascii.force_encoding(Encoding::UTF_8) }
let(:name_utf8) { name_ascii.force_encoding(Encoding::UTF_8) }
let(:first_name_utf8) { first_name_ascii.force_encoding(Encoding::UTF_8) }
let(:info_hash) do
{
@@ -91,6 +93,34 @@ RSpec.describe Gitlab::Auth::OAuth::AuthHash do
end
end
context 'custom username field provided' do
before do
allow(Gitlab::Auth::OAuth::Provider).to receive(:config_for).and_return(provider_config)
end
it 'uses the custom field for the username' do
expect(auth_hash.username).to eql first_name_utf8
end
it 'uses the default claim for the username when the custom claim is not found' do
provider_config['args']['gitlab_username_claim'] = 'nonexistent'
expect(auth_hash.username).to eql nickname_utf8
end
it 'uses the default claim for the username when the custom claim is empty' do
info_hash[:first_name] = ''
expect(auth_hash.username).to eql nickname_utf8
end
it 'uses the default claim for the username when the custom claim is nil' do
info_hash[:first_name] = nil
expect(auth_hash.username).to eql nickname_utf8
end
end
context 'auth_hash constructed with ASCII-8BIT encoding' do
it 'forces utf8 encoding on uid' do
expect(auth_hash.uid.encoding).to eql Encoding::UTF_8


@@ -36,9 +36,10 @@ RSpec.describe 'Jobs/SAST-IaC.latest.gitlab-ci.yml' do
let(:merge_request) { create(:merge_request, :simple, source_project: project) }
let(:pipeline) { service.execute(merge_request).payload }
- it 'has no jobs' do
+ it 'creates a pipeline with the expected jobs' do
expect(pipeline).to be_merge_request_event
- expect(build_names).to be_empty
+ expect(pipeline.errors.full_messages).to be_empty
+ expect(build_names).to match_array(%w(kics-iac-sast))
end
end


@@ -229,6 +229,45 @@ RSpec.describe Gitlab::FormBuilders::GitlabUiFormBuilder do
end
end
describe '#gitlab_ui_datepicker' do
subject(:datepicker_html) do
form_builder.gitlab_ui_datepicker(
:expires_at,
**optional_args
)
end
let(:optional_args) { {} }
context 'without optional arguments' do
it 'renders correct html' do
expected_html = <<~EOS
<input class="datepicker form-control gl-form-input" type="text" name="user[expires_at]" id="user_expires_at" />
EOS
expect(html_strip_whitespace(datepicker_html)).to eq(html_strip_whitespace(expected_html))
end
end
context 'with optional arguments' do
let(:optional_args) do
{
id: 'milk_gone_bad',
data: { action: 'throw' },
value: '2022-08-01'
}
end
it 'renders correct html' do
expected_html = <<~EOS
<input id="milk_gone_bad" data-action="throw" value="2022-08-01" class="datepicker form-control gl-form-input" type="text" name="user[expires_at]" />
EOS
expect(html_strip_whitespace(datepicker_html)).to eq(html_strip_whitespace(expected_html))
end
end
end
private
def html_strip_whitespace(html)


@@ -24,8 +24,8 @@ RSpec.describe GroupGroupLink do
it 'returns all records which are greater than Guests access' do
expect(described_class.non_guests).to match_array([
group_group_link_reporter, group_group_link,
group_group_link_maintainer, group_group_link_owner
])
end
end
@@ -38,13 +38,13 @@
it 'returns all records which have OWNER or MAINTAINER access' do
expect(described_class.with_owner_or_maintainer_access).to match_array([
group_group_link_maintainer,
group_group_link_owner
])
end
end
- context 'access via group shares' do
+ context 'for access via group shares' do
let_it_be(:shared_with_group_1) { create(:group) }
let_it_be(:shared_with_group_2) { create(:group) }
let_it_be(:shared_with_group_3) { create(:group) }


@@ -483,18 +483,6 @@ RSpec.describe API::Deployments do
end
end
context 'when feature flag is disabled' do
before do
stub_feature_flags(delete_deployments_api: false)
end
it 'is not found' do
delete api("/projects/#{project.id}/deployments/#{old_deploy.id}", user)
expect(response).to have_gitlab_http_status(:not_found)
end
end
context 'as a developer' do
let(:developer) { create(:user) }


@@ -13,6 +13,10 @@ module Spec
editor = find('.monaco-editor')
uri = editor['data-uri']
execute_script("localMonaco.getModel('#{uri}').setValue('#{escape_javascript(value)}')")
# We only check that the first line is present because when the content is long,
# only a part of the text will be rendered in the DOM due to scrolling
page.has_selector?('.gl-source-editor .view-lines', text: value.lines.first)
end
end
end