Add latest changes from gitlab-org/gitlab@master

GitLab Bot 2022-11-02 18:10:05 +00:00
parent e36443c1d6
commit 983f6954d1
78 changed files with 1693 additions and 274 deletions

View File

@@ -2,6 +2,22 @@
documentation](doc/development/changelog.md) for instructions on adding your own
entry.
## 15.5.2 (2022-11-02)
### Security (11 changes)
- [Redact confidential references in Jira issue descriptions](gitlab-org/security/gitlab@b6df9d1e4e0c996655a41831fbfae8f457fe1e6b) ([merge request](gitlab-org/security/gitlab!2870))
- [Forbid reading emojis on internal notes](gitlab-org/security/gitlab@0015523a32c38c184ffef9067d9952d0ef54e3f2) ([merge request](gitlab-org/security/gitlab!2854))
- [Same-site redirect vulnerability](gitlab-org/security/gitlab@7fd87a5f0b8317d45171fb565c198cda4e65fa34) ([merge request](gitlab-org/security/gitlab!2878))
- [BYPASS: Stored-XSS with CSP-bypass via scoped labels' color](gitlab-org/security/gitlab@2f1777b305d632b3256076967a798dab65fe6bf4) ([merge request](gitlab-org/security/gitlab!2860))
- [Fix Running Upstream Pipelines Jobs Without Permission](gitlab-org/security/gitlab@9b3f469da7c0295eb12120027a45ac04f76cdad5) ([merge request](gitlab-org/security/gitlab!2881))
- [Add length limit to addressable URLs](gitlab-org/security/gitlab@82ffc5825c9a7761d787c66b8c4a1593b3330c50) ([merge request](gitlab-org/security/gitlab!2856))
- [Add a redirect wall before artifact redirect to pages](gitlab-org/security/gitlab@41a4480b3302ba8a67e94de5420d41298d258585) ([merge request](gitlab-org/security/gitlab!2875))
- [Sandbox swagger-ui to prevent injection attacks](gitlab-org/security/gitlab@432913f802a093b67f2e5d46cc51b5f13bb16590) ([merge request](gitlab-org/security/gitlab!2857))
- [Fix external project permission when using CI prefill variables](gitlab-org/security/gitlab@ec872da0ab949f447aec35d64d1db45b5d25b7fd) ([merge request](gitlab-org/security/gitlab!2853))
- [Resolve users can view audit events from other members](gitlab-org/security/gitlab@34ffe2e88fa462b055f22d6af84fdb93a62fa575) ([merge request](gitlab-org/security/gitlab!2855))
- [Path traversal fix for Secure Files](gitlab-org/security/gitlab@568c36b34a884cc877b6292b340de9da66537bc8) ([merge request](gitlab-org/security/gitlab!2858))
## 15.5.1 (2022-10-24)
### Fixed (2 changes)
@@ -668,6 +684,23 @@ entry.
- [Add environment keyword to pages job](gitlab-org/gitlab@73af406f9101da0a2f076ac023de5dfd60c85445) by @edith007 ([merge request](gitlab-org/gitlab!98283))
- [Remove feature flag ci_variables_refactoring_to_variable](gitlab-org/gitlab@f5d1e8277fb8c326082e58536aeae21ab3fd289c) ([merge request](gitlab-org/gitlab!97967))
## 15.4.4 (2022-11-02)
### Security (12 changes)
- [Datadog API key leak by changing integration URL](gitlab-org/security/gitlab@15e90bacccbc5146411c7a4ac3400470d3985c18) ([merge request](gitlab-org/security/gitlab!2865))
- [Redact confidential references in Jira issue descriptions](gitlab-org/security/gitlab@8b60fe8c3150348973a9d3ad79d781614db47103) ([merge request](gitlab-org/security/gitlab!2871))
- [Forbid reading emojis on internal notes](gitlab-org/security/gitlab@ec9b40a9e9d3c91dc690d74d76187e41b5884ff9) ([merge request](gitlab-org/security/gitlab!2836))
- [Same-site redirect vulnerability](gitlab-org/security/gitlab@de86e0b20c34d1475ab7535bc3ed0d7a21727a20) ([merge request](gitlab-org/security/gitlab!2879))
- [BYPASS: Stored-XSS with CSP-bypass via scoped labels' color](gitlab-org/security/gitlab@52e8105445cdba63be5c4e866cb289d13b6d6e7c) ([merge request](gitlab-org/security/gitlab!2861))
- [Fix Running Upstream Pipelines Jobs Without Permission](gitlab-org/security/gitlab@ad2fe7b8555fd568944718f5fb087de8de7b2425) ([merge request](gitlab-org/security/gitlab!2882))
- [Add length limit to addressable URLs](gitlab-org/security/gitlab@8c44b371bd120979b339e4e5dbb7208fb048eadd) ([merge request](gitlab-org/security/gitlab!2829))
- [Add a redirect wall before artifact redirect to pages](gitlab-org/security/gitlab@837c0f2245847c43d6cfd8f1d7860e46ffacfe18) ([merge request](gitlab-org/security/gitlab!2812))
- [Sandbox swagger-ui to prevent injection attacks](gitlab-org/security/gitlab@90567191fae7f7d84d001e52f0adc11155dc564f) ([merge request](gitlab-org/security/gitlab!2849))
- [Fix external project permission when using CI prefill variables](gitlab-org/security/gitlab@ae39f2b5a5a27b6e85ef642b768963b7ed018a14) ([merge request](gitlab-org/security/gitlab!2822))
- [Resolve users can view audit events from other members](gitlab-org/security/gitlab@f819d033a190b2b5f7d635395575e5472b1fe8e7) ([merge request](gitlab-org/security/gitlab!2842))
- [Path traversal fix for Secure Files](gitlab-org/security/gitlab@bd138464ee5fa71755f2b4d9b9aaaa3c8017a165) ([merge request](gitlab-org/security/gitlab!2848))
## 15.4.3 (2022-10-19)
### Fixed (4 changes)
@@ -1335,6 +1368,23 @@ entry.
- [Improve specs with shared examples](gitlab-org/gitlab@dd3f2ecd882e89511eaa927102fc4101f684a38f) ([merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/95539)) **GitLab Enterprise Edition**
- [Fix Style/Next offenses](gitlab-org/gitlab@bdf877063ba1d8d4df1216f7875905343d9e5e33) ([merge request](gitlab-org/gitlab!93329))
## 15.3.5 (2022-11-02)
### Security (12 changes)
- [Datadog API key leak by changing integration URL](gitlab-org/security/gitlab@3a8868210a210f07e08324a328e937fd818e34d3) ([merge request](gitlab-org/security/gitlab!2864))
- [Redact confidential references in Jira issue descriptions](gitlab-org/security/gitlab@97178d0d8e1af412b949e81b5f53d8d9cf07371b) ([merge request](gitlab-org/security/gitlab!2872))
- [Forbid reading emojis on internal notes](gitlab-org/security/gitlab@7684247b947b946e2865ec61a2f7eddd9a25daf7) ([merge request](gitlab-org/security/gitlab!2838))
- [Same-site redirect vulnerability](gitlab-org/security/gitlab@4c0737d57b9d9f5f86ffdd5b0c25f9805d05d5f7) ([merge request](gitlab-org/security/gitlab!2880))
- [BYPASS: Stored-XSS with CSP-bypass via scoped labels' color](gitlab-org/security/gitlab@a72e2384e95ed083a139252d51b6638fe4128c14) ([merge request](gitlab-org/security/gitlab!2862))
- [Fix Running Upstream Pipelines Jobs Without Permission](gitlab-org/security/gitlab@f6545466e181f3688d5ed67023cd0f1bd6220a7c) ([merge request](gitlab-org/security/gitlab!2883))
- [Add length limit to addressable URLs](gitlab-org/security/gitlab@411bba8ac053211906d40d24b9fdb2c565d33f62) ([merge request](gitlab-org/security/gitlab!2830))
- [Add a redirect wall before artifact redirect to pages](gitlab-org/security/gitlab@2b9a6ccddb77cab46217ef0fd633af2f32548313) ([merge request](gitlab-org/security/gitlab!2813))
- [Sandbox swagger-ui to prevent injection attacks](gitlab-org/security/gitlab@3b8771478b8615d24794fc49195b5f2f8257df0c) ([merge request](gitlab-org/security/gitlab!2850))
- [Fix external project permission when using CI prefill variables](gitlab-org/security/gitlab@107e583c97b39951c08728fdff1b44e8c6fa7f6f) ([merge request](gitlab-org/security/gitlab!2823))
- [Resolve users can view audit events from other members](gitlab-org/security/gitlab@cdcd5ed72312dfddcd3e91ae824188a0dd88e745) ([merge request](gitlab-org/security/gitlab!2843))
- [Path traversal fix for Secure Files](gitlab-org/security/gitlab@90bbaaa1ce2d0978cf89fabf1f302da1f9f938df) ([merge request](gitlab-org/security/gitlab!2847))
## 15.3.4 (2022-09-29)
### Security (15 changes)

View File

@@ -1 +1 @@
aaf6f16ca40fdb671858e38662a72d96d51987df
165e1fbd149fd4baec8ce3957e728d3ce3412a15

View File

@@ -175,8 +175,7 @@ gem 'typhoeus', '~> 1.4.0' # Used with Elasticsearch to support http keep-alive
# Markdown and HTML processing
gem 'html-pipeline', '~> 2.14.3'
gem 'deckar01-task_list', '2.3.2'
gem 'gitlab-markup', '~> 1.8.0'
gem 'github-markup', '~> 1.7.0', require: 'github/markup'
gem 'gitlab-markup', '~> 1.8.0', require: 'github/markup'
gem 'commonmarker', '~> 0.23.6'
gem 'kramdown', '~> 2.3.1'
gem 'RedCloth', '~> 4.3.2'

View File

@@ -199,7 +199,6 @@
{"name":"gettext_i18n_rails_js","version":"1.3.0","platform":"ruby","checksum":"5d10afe4be3639bff78c50a56768c20f39aecdabc580c08aa45573911c2bd687"},
{"name":"git","version":"1.11.0","platform":"ruby","checksum":"7e95ba4da8298a0373ef1a6862aa22007d761f3c8274b675aa787966fecea0f1"},
{"name":"gitaly","version":"15.5.0","platform":"ruby","checksum":"d85dd4890a1f0fd95f935c848bcedf03f19b78872f20f04b9811e602bea4ef42"},
{"name":"github-markup","version":"1.7.0","platform":"ruby","checksum":"97eb27c70662d9cc1d5997cd6c99832026fae5d4913b5dce1ce6c9f65078e69d"},
{"name":"gitlab","version":"4.16.1","platform":"ruby","checksum":"13fd7059cbdad5a1a21b15fa2cf9070b97d92e27f8c688581fe3d84dc038074f"},
{"name":"gitlab-chronic","version":"0.10.5","platform":"ruby","checksum":"f80f18dc699b708870a80685243331290bc10cfeedb6b99c92219722f729c875"},
{"name":"gitlab-dangerfiles","version":"3.6.1","platform":"ruby","checksum":"f7b69b093d52acb89095d411cb7b8849f5f3b9e76f8baa4c99b5671f1564865f"},

View File

@@ -549,7 +549,6 @@ GEM
rchardet (~> 1.8)
gitaly (15.5.0)
grpc (~> 1.0)
github-markup (1.7.0)
gitlab (4.16.1)
httparty (~> 0.14, >= 0.14.0)
terminal-table (~> 1.5, >= 1.5.1)
@@ -1626,7 +1625,6 @@ DEPENDENCIES
gettext_i18n_rails (~> 1.8.0)
gettext_i18n_rails_js (~> 1.3)
gitaly (~> 15.5.0)
github-markup (~> 1.7.0)
gitlab-chronic (~> 0.10.5)
gitlab-dangerfiles (~> 3.6.1)
gitlab-experiment (~> 0.7.1)

View File

@@ -1,23 +1,29 @@
import { SwaggerUIBundle } from 'swagger-ui-dist';
import { createAlert } from '~/flash';
import { __ } from '~/locale';
import { setAttributes } from '~/lib/utils/dom_utils';
import axios from '~/lib/utils/axios_utils';
export default () => {
  const el = document.getElementById('js-openapi-viewer');

  Promise.all([import(/* webpackChunkName: 'openapi' */ 'swagger-ui-dist/swagger-ui.css')])
    .then(() => {
      SwaggerUIBundle({
        url: el.dataset.endpoint,
        dom_id: '#js-openapi-viewer',
        deepLinking: true,
        displayOperationId: true,
      });
    })
    .catch((error) => {
      createAlert({
        message: __('Something went wrong while initializing the OpenAPI viewer'),
      });
      throw error;
    });
};
const createSandbox = () => {
  const iframeEl = document.createElement('iframe');

  setAttributes(iframeEl, {
    src: '/-/sandbox/swagger',
    sandbox: 'allow-scripts',
    frameBorder: 0,
    width: '100%',
    // The height will be adjusted dynamically.
    // Follow-up issue: https://gitlab.com/gitlab-org/gitlab/-/issues/377969
    height: '1000',
  });

  return iframeEl;
};

export default async () => {
  const wrapperEl = document.getElementById('js-openapi-viewer');
  const sandboxEl = createSandbox();

  const { data } = await axios.get(wrapperEl.dataset.endpoint);

  wrapperEl.appendChild(sandboxEl);

  sandboxEl.addEventListener('load', () => {
    sandboxEl.contentWindow.postMessage(data, '*');
  });
};

View File

@@ -0,0 +1,43 @@
import { SwaggerUIBundle } from 'swagger-ui-dist';
import { safeLoad } from 'js-yaml';
import { isObject } from '~/lib/utils/type_utility';
const renderSwaggerUI = (value) => {
  /* SwaggerUIBundle accepts the OpenAPI definition
   * only in JSON format, so we convert the YAML
   * config to JSON if the value is not already an object
   */
  let spec = value;
  if (!isObject(spec)) {
    spec = safeLoad(spec, { json: true });
  }

  Promise.all([import(/* webpackChunkName: 'openapi' */ 'swagger-ui-dist/swagger-ui.css')])
    .then(() => {
      SwaggerUIBundle({
        spec,
        dom_id: '#swagger-ui',
        deepLinking: true,
        displayOperationId: true,
      });
    })
    .catch((error) => {
      throw error;
    });
};

const addInitHook = () => {
  window.addEventListener(
    'message',
    (event) => {
      if (event.origin !== window.location.origin) {
        return;
      }
      renderSwaggerUI(event.data);
    },
    false,
  );
};

addInitHook();

export default {};

View File

@@ -14,7 +14,7 @@ class Projects::ArtifactsController < Projects::ApplicationController
before_action :authorize_destroy_artifacts!, only: [:destroy]
before_action :extract_ref_name_and_path
before_action :validate_artifacts!, except: [:index, :download, :raw, :destroy]
before_action :entry, only: [:file]
before_action :entry, only: [:external_file, :file]
MAX_PER_PAGE = 20
@@ -58,12 +58,19 @@ class Projects::ArtifactsController < Projects::ApplicationController
render_404 unless @entry.exists?
end
# External files are redirected to GitLab Pages and might contain insecure content.
# To warn the user about the possibly insecure content, we show a warning page
# before redirecting the user.
def external_file
@blob = @entry.blob
end
def file
blob = @entry.blob
conditionally_expand_blob(blob)
if blob.external_link?(build)
redirect_to blob.external_url(@project, build)
redirect_to external_file_project_job_artifacts_path(@project, @build, path: params[:path])
else
respond_to do |format|
format.html do

View File

@@ -239,8 +239,7 @@ class Projects::PipelinesController < Projects::ApplicationController
def config_variables
respond_to do |format|
format.json do
project = @project.uses_external_project_ci_config? ? @project.ci_config_external_project : @project
result = Ci::ListConfigVariablesService.new(project, current_user).execute(params[:sha])
result = Ci::ListConfigVariablesService.new(@project, current_user).execute(params[:sha])
result.nil? ? head(:no_content) : render(json: result)
end

View File

@@ -8,4 +8,8 @@ class SandboxController < ApplicationController # rubocop:disable Gitlab/Namespa
def mermaid
render layout: false
end
def swagger
render layout: false
end
end

View File

@@ -1338,7 +1338,9 @@ module Ci
end
def reset_source_bridge!(current_user)
# break recursion when no source_pipeline bridge (first upstream pipeline)
return unless bridge_waiting?
return unless current_user.can?(:update_pipeline, source_bridge.pipeline)
source_bridge.pending!
Ci::AfterRequeueJobService.new(project, current_user).execute(source_bridge) # rubocop:disable CodeReuse/ServiceClass
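The added guard can be sketched with stubbed objects. The structs below stand in for GitLab's models and are purely illustrative: requeueing the upstream (source) bridge now requires `update_pipeline` on the source pipeline, not just on the downstream pipeline being retried.

```ruby
# Stubbed sketch of the permission check added above.
User = Struct.new(:allowed) do
  def can?(_ability, _subject)
    allowed
  end
end

Bridge = Struct.new(:pipeline, :status) do
  def pending!
    self.status = :pending
  end
end

def reset_source_bridge!(current_user, source_bridge)
  return :skipped unless source_bridge # first upstream pipeline: no bridge
  return :forbidden unless current_user.can?(:update_pipeline, source_bridge.pipeline)

  source_bridge.pending!
  :requeued
end

reset_source_bridge!(User.new(false), Bridge.new(:some_pipeline, :created)) # => :forbidden
reset_source_bridge!(User.new(true), Bridge.new(:some_pipeline, :created))  # => :requeued
```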

View File

@@ -21,7 +21,12 @@ class CommitStatus < Ci::ApplicationRecord
has_many :needs, class_name: 'Ci::BuildNeed', foreign_key: :build_id, inverse_of: :build
attribute :retried, default: false
enum scheduling_type: { stage: 0, dag: 1 }, _prefix: true
# We use `Enums::Ci::CommitStatus.failure_reasons` here so that EE can more easily
# extend this `Hash` with new values.
enum_with_nil failure_reason: Enums::Ci::CommitStatus.failure_reasons
delegate :commit, to: :pipeline
delegate :sha, :short_sha, :before_sha, to: :pipeline
@@ -96,12 +101,6 @@ class CommitStatus < Ci::ApplicationRecord
merge(or_conditions)
end
# We use `Enums::Ci::CommitStatus.failure_reasons` here so that EE can more easily
# extend this `Hash` with new values.
enum_with_nil failure_reason: Enums::Ci::CommitStatus.failure_reasons
default_value_for :retried, false
##
# We still create some CommitStatuses outside of CreatePipelineService.
#
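The comment about `Enums::Ci::CommitStatus.failure_reasons` points at a common CE/EE pattern: keeping enum values in a module method so EE can extend the `Hash`. A standalone sketch, with invented values (the real module has different contents):

```ruby
# CE defines the enum values in a module method instead of inline, so that
# an EE module can override the method and merge in extra values.
module Enums
  module CommitStatus
    def self.failure_reasons
      { unknown_failure: nil, script_failure: 1, api_failure: 2 }
    end
  end
end

# EE-style extension: prepend onto the singleton class so `super` still
# reaches the CE definition.
module EE
  module CommitStatusEnums
    def failure_reasons
      super.merge(protected_environment_failure: 1_000)
    end
  end
end

Enums::CommitStatus.singleton_class.prepend(EE::CommitStatusEnums)

Enums::CommitStatus.failure_reasons[:protected_environment_failure] # => 1000
```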

View File

@@ -13,7 +13,7 @@ class DeployToken < ApplicationRecord
GITLAB_DEPLOY_TOKEN_NAME = 'gitlab-deploy-token'
REQUIRED_DEPENDENCY_PROXY_SCOPES = %i[read_registry write_registry].freeze
default_value_for(:expires_at) { Forever.date }
attribute :expires_at, default: -> { Forever.date }
# Do NOT use this `user` for authentication/authorization of deploy tokens.
# It's for auditing purposes on the Credential Inventory only.

View File

@@ -17,8 +17,8 @@ class ProjectCiCdSetting < ApplicationRecord
},
allow_nil: true
default_value_for :forward_deployment_enabled, true
default_value_for :separated_caches, true
attribute :forward_deployment_enabled, default: true
attribute :separated_caches, default: true
chronic_duration_attr :runner_token_expiration_interval_human_readable, :runner_token_expiration_interval

View File

@@ -22,12 +22,13 @@ module Ci
end
def calculate_reactive_cache(sha)
config = project.ci_config_for(sha)
return {} unless config
config = ::Gitlab::Ci::ProjectConfig.new(project: project, sha: sha)
result = Gitlab::Ci::YamlProcessor.new(config, project: project,
user: current_user,
sha: sha).execute
return {} unless config.exists?
result = Gitlab::Ci::YamlProcessor.new(config.content, project: project,
user: current_user,
sha: sha).execute
result.valid? ? result.root_variables_with_prefill_data : {}
end

View File

@@ -0,0 +1,12 @@
.nav-block
  %ul.breadcrumb.repo-breadcrumb
    %li.breadcrumb-item
      = link_to _('Artifacts'), browse_project_job_artifacts_path(project, build)
    - path_breadcrumbs do |title, breadcrumb|
      - title = truncate(title, length: 40)
      %li.breadcrumb-item
        - if path == breadcrumb
          = link_to file_project_job_artifacts_path(project, build, breadcrumb) do
            %strong= title
        - else
          = link_to title, browse_project_job_artifacts_path(project, build, breadcrumb)

View File

@@ -1,13 +1,15 @@
- blob = file.blob
- path_to_file = file_project_job_artifacts_path(@project, @build, path: file.path)
- external_link = blob.external_link?(@build)
- if external_link
- path_to_file = external_file_project_job_artifacts_path(@project, @build, path: file.path)
- else
- path_to_file = file_project_job_artifacts_path(@project, @build, path: file.path)
%tr.tree-item.js-artifact-tree-row{ data: { link: path_to_file, external_link: "#{external_link}" } }
%td.tree-item-file-name
= tree_icon('file', blob.mode, blob.name)
- if external_link
= link_to path_to_file, class: 'tree-item-file-external-link js-artifact-tree-tooltip str-truncated',
target: '_blank', rel: 'noopener noreferrer', title: _('Opens in a new window') do
= link_to path_to_file, class: 'tree-item-file-external-link js-artifact-tree-tooltip str-truncated' do
%span>= blob.name
= sprite_icon('external-link', css_class: 'js-artifact-tree-external-icon')
- else

View File

@@ -0,0 +1,15 @@
- page_title @path, _('Artifacts'), "#{@build.name} (##{@build.id})", _('Jobs')

= render "projects/jobs/header"

.tree-holder
  = render 'projects/artifacts/file_navigation', project: @project, build: @build, path: @path

  %h2= _("You are being redirected away from GitLab")
  %p= _("This page is hosted on GitLab pages but contains user-generated content and may contain malicious code. Do not accept unless you trust the author and source.")
  = link_to @blob.external_url(@project, @build),
    @blob.external_url(@project, @build),
    target: '_blank',
    title: _('Opens in a new window'),
    rel: 'noopener noreferrer'

View File

@@ -4,19 +4,7 @@
= render "projects/jobs/header"
.tree-holder
.nav-block
%ul.breadcrumb.repo-breadcrumb
%li.breadcrumb-item
= link_to 'Artifacts', browse_project_job_artifacts_path(@project, @build)
- path_breadcrumbs do |title, path|
- title = truncate(title, length: 40)
%li.breadcrumb-item
- if path == @path
= link_to file_project_job_artifacts_path(@project, @build, path) do
%strong= title
- else
= link_to title, browse_project_job_artifacts_path(@project, @build, path)
= render 'projects/artifacts/file_navigation', project: @project, build: @build, path: @path
%article.file-holder
- blob = @entry.blob

View File

@@ -0,0 +1,9 @@
<!DOCTYPE html>
<html>
<head>
  <%= webpack_bundle_tag("sandboxed_swagger") %>
</head>
<body>
  <div id="swagger-ui"></div>
</body>
</html>

View File

@@ -0,0 +1,8 @@
---
name: ci_partitioning_analyze_queries_partition_id_check
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/100804
rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/378282
milestone: '15.6'
type: ops
group: group::pipeline execution
default_enabled: false

View File

@@ -579,7 +579,7 @@ Settings.cron_jobs['cleanup_dependency_proxy_worker'] ||= Settingslogic.new({})
Settings.cron_jobs['cleanup_dependency_proxy_worker']['cron'] ||= '20 3,15 * * *'
Settings.cron_jobs['cleanup_dependency_proxy_worker']['job_class'] = 'DependencyProxy::CleanupDependencyProxyWorker'
Settings.cron_jobs['cleanup_package_registry_worker'] ||= Settingslogic.new({})
Settings.cron_jobs['cleanup_package_registry_worker']['cron'] ||= '20 0,12 * * *'
Settings.cron_jobs['cleanup_package_registry_worker']['cron'] ||= '20 * * * *'
Settings.cron_jobs['cleanup_package_registry_worker']['job_class'] = 'Packages::CleanupPackageRegistryWorker'
Settings.cron_jobs['x509_issuer_crl_check_worker'] ||= Settingslogic.new({})
Settings.cron_jobs['x509_issuer_crl_check_worker']['cron'] ||= '30 1 * * *'

View File

@@ -0,0 +1,3 @@
# frozen_string_literal: true
URI.singleton_class.prepend(Gitlab::Patch::Uri::ClassMethods)
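The initializer above prepends a patch module onto `URI`'s singleton class. The body of `Gitlab::Patch::Uri::ClassMethods` is not shown in this diff; the sketch below assumes it enforces a length limit (the module name, limit, and check are illustrative, not GitLab's actual patch):

```ruby
require 'uri'

# Illustrative stand-in for Gitlab::Patch::Uri::ClassMethods: override
# URI.parse via prepend so `super` still runs the stock parser.
module UriLengthGuard
  MAX_URI_LENGTH = 15_000 # assumed limit, for illustration only

  def parse(uri)
    raise URI::InvalidURIError, 'URI is too long' if uri.to_s.length > MAX_URI_LENGTH

    super
  end
end

URI.singleton_class.prepend(UriLengthGuard)

URI.parse('https://example.com/path').host # => "example.com"
```

Prepending onto the singleton class (rather than reopening `URI`) keeps the original `parse` reachable through `super`, so the guard composes with other patches.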

View File

@@ -118,6 +118,7 @@ InitializerConnections.with_disabled_database_connections do
# sandbox
get '/sandbox/mermaid' => 'sandbox#mermaid'
get '/sandbox/swagger' => 'sandbox#swagger'
get '/whats_new' => 'whats_new#index'

View File

@@ -86,6 +86,7 @@ constraints(::Constraints::ProjectUrlConstrainer.new) do
get :download
get :browse, path: 'browse(/*path)', format: false
get :file, path: 'file/*path', format: false
get :external_file, path: 'external_file/*path', format: false
get :raw, path: 'raw/*path', format: false
post :keep
end

View File

@@ -18,8 +18,12 @@ scope format: false do
constraints: { id: Gitlab::PathRegex.git_reference_regex }
get '/refs/:id/logs_tree/*path',
to: redirect('%{namespace_id}/%{project_id}/-/refs/%{id}/logs_tree/%{path}'),
constraints: { id: /.*/, path: /[^\0]*/ }
constraints: { id: /.*/, path: /[^\0]*/ },
to: redirect { |params, _request|
path = params[:path]
path.gsub!('@', '-/')
Addressable::URI.escape("#{params[:namespace_id]}/#{params[:project_id]}/-/refs/#{params[:id]}/logs_tree/#{path}")
}
scope constraints: { id: /[^\0]+/ } do
# Deprecated. Keep for compatibility.
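The dynamic `logs_tree` redirect above rewrites the `@` ref separator and escapes the rebuilt path. A pure-Ruby sketch of that transformation, with stdlib escaping standing in for `Addressable::URI.escape`:

```ruby
require 'uri'

# Sketch of the redirect target computation: replace the `@` separator
# with `-/` and escape the reconstructed path.
def logs_tree_redirect_path(namespace_id:, project_id:, id:, path:)
  path = path.gsub('@', '-/')
  URI::RFC2396_Parser.new.escape("#{namespace_id}/#{project_id}/-/refs/#{id}/logs_tree/#{path}")
end

logs_tree_redirect_path(namespace_id: 'group', project_id: 'project', id: 'main', path: 'dir@file')
# => "group/project/-/refs/main/logs_tree/dir-/file"
```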

View File

@@ -165,6 +165,7 @@ function generateEntries() {
jira_connect_app: './jira_connect/subscriptions/index.js',
sandboxed_mermaid: './lib/mermaid.js',
redirect_listbox: './entrypoints/behaviors/redirect_listbox.js',
sandboxed_swagger: './lib/swagger.js',
};
return Object.assign(manualEntries, incrementalCompiler.filterEntryPoints(autoEntries));

View File

@@ -0,0 +1,15 @@
---
# Suggestion: gitlab.Units
#
# Recommends a space between a number and a unit of measure.
#
# For a list of all options, see https://vale.sh/docs/topics/styles/
extends: existence
message: "Add a space between the number and the unit in '%s'."
link: 'https://docs.gitlab.com/ee/development/documentation/styleguide/'
nonword: true
level: suggestion
ignorecase: true
tokens:
- \d+(?:B|kB|KiB|MB|MiB|GB|GiB|TB|TiB)
- \d+(?:ns|ms|μs|s|min|h|d)
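The `tokens` above are plain regular expressions. A quick Ruby check of what the size-unit token matches (a number glued directly to its unit, which Vale would flag):

```ruby
# Same pattern as the first Vale token above: digits immediately followed
# by a size unit, with no separating space.
UNIT_PATTERN = /\d+(?:B|kB|KiB|MB|MiB|GB|GiB|TB|TiB)/

UNIT_PATTERN.match?('a 100MB artifact')  # => true  (Vale suggests "100 MB")
UNIT_PATTERN.match?('a 100 MB artifact') # => false (already spaced)
```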

View File

@@ -376,6 +376,8 @@ Some basic Ruby runtime metrics are available:
| `ruby_process_cpu_seconds_total` | Gauge | 12.0 | Total amount of CPU time per process |
| `ruby_process_max_fds` | Gauge | 12.0 | Maximum number of open file descriptors per process |
| `ruby_process_resident_memory_bytes` | Gauge | 12.0 | Memory usage by process (RSS/Resident Set Size) |
| `ruby_process_resident_anon_memory_bytes`| Gauge | 15.6 | Anonymous memory usage by process (RSS/Resident Set Size) |
| `ruby_process_resident_file_memory_bytes`| Gauge | 15.6 | File-backed memory usage by process (RSS/Resident Set Size) |
| `ruby_process_unique_memory_bytes` | Gauge | 13.0 | Memory usage by process (USS/Unique Set Size) |
| `ruby_process_proportional_memory_bytes` | Gauge | 13.0 | Memory usage by process (PSS/Proportional Set Size) |
| `ruby_process_start_time_seconds` | Gauge | 12.0 | UNIX timestamp of process start time |

View File

@@ -42,14 +42,34 @@ Example response:
"name": "myfile.jks",
"checksum": "16630b189ab34b2e3504f4758e1054d2e478deda510b2b08cc0ef38d12e80aac",
"checksum_algorithm": "sha256",
"created_at": "2022-02-22T22:22:22.222Z"
"created_at": "2022-02-22T22:22:22.222Z",
"expires_at": null,
"metadata": null
},
{
"id": 2,
"name": "myotherfile.jks",
"name": "myfile.cer",
"checksum": "16630b189ab34b2e3504f4758e1054d2e478deda510b2b08cc0ef38d12e80aa2",
"checksum_algorithm": "sha256",
"created_at": "2022-02-22T22:22:22.222Z"
"created_at": "2022-02-22T22:22:22.222Z",
"expires_at": "2022-09-21T14:56:00.000Z",
"metadata": {
"id": "75949910542696343243264405377658443914",
"issuer": {
"C": "US",
"O": "Apple Inc.",
"CN": "Apple Worldwide Developer Relations Certification Authority",
"OU": "G3"
},
"subject": {
"C": "US",
"O": "Organization Name",
"CN": "Apple Distribution: Organization Name (ABC123XYZ)",
"OU": "ABC123XYZ",
"UID": "ABC123XYZ"
},
"expires_at": "2022-09-21T14:56:00.000Z"
}
}
]
```
@@ -83,7 +103,9 @@
"name": "myfile.jks",
"checksum": "16630b189ab34b2e3504f4758e1054d2e478deda510b2b08cc0ef38d12e80aac",
"checksum_algorithm": "sha256",
"created_at": "2022-02-22T22:22:22.222Z"
"created_at": "2022-02-22T22:22:22.222Z",
"expires_at": null,
"metadata": null
}
```
@@ -118,7 +140,9 @@ Example response:
"name": "myfile.jks",
"checksum": "16630b189ab34b2e3504f4758e1054d2e478deda510b2b08cc0ef38d12e80aac",
"checksum_algorithm": "sha256",
"created_at": "2022-02-22T22:22:22.222Z"
"created_at": "2022-02-22T22:22:22.222Z",
"expires_at": null,
"metadata": null
}
```

View File

@@ -309,7 +309,9 @@ Based on user research, we may want to change certain features to work across or
The Pods architecture does have long-lasting implications for data processing, location, scalability, and the GitLab architecture.
This section links the different technical proposals that are being evaluated.
- [Stateless Router That Uses a Cache to Pick Pods and Is Redirected When Wrong Pod Is Reached](proposal-stateless-router.md)
- [Stateless Router That Uses a Cache to Pick Pod and Is Redirected When Wrong Pod Is Reached](proposal-stateless-router-with-buffering-requests.md)
- [Stateless Router That Uses a Cache to Pick Pod and pre-flight `/api/v4/pods/learn`](proposal-stateless-router-with-routes-learning.md)
## Links

View File

@@ -6,7 +6,6 @@ description: 'Pods Stateless Router Proposal'
---
DISCLAIMER:
This page may contain information related to upcoming products, features and
functionality. It is important to note that the information presented is for
informational purposes only, so please do not rely on the information for
@@ -41,6 +40,17 @@ different data dependent on their currently chosen "organization".
application and allow us to decide which pod to route each request to, since an
organization can only be on a single pod.
## Differences
The main difference between this proposal and the one [with learning routes](proposal-stateless-router-with-routes-learning.md)
is that this proposal always sends requests to any of the Pods. If a request cannot be processed,
it is bounced back with relevant headers. This requires the request to be buffered,
and it allows Rails to decode the request from either the URI or the request body.
This means that each request might be sent more than once and, as a result, processed more than once.
The [learning routes proposal](proposal-stateless-router-with-routes-learning.md) instead requires that
routable information is always encoded in the URI, and that the router sends a pre-flight request.
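The bounce-back flow can be sketched as follows. Everything here is hypothetical (the header name, the routes table, and the Pod objects are invented for illustration, written in Ruby rather than real router code):

```ruby
# Hypothetical sketch: every Pod can read a shared routes replica, so a Pod
# that does not own the organization answers with a redirect header instead
# of processing the request.
ROUTES = { 'org-a' => 'pod_us0', 'org-b' => 'pod_eu0' } # shared routes replica

Pod = Struct.new(:name) do
  def call(request)
    owner = ROUTES[request[:org]]
    return { status: 200, body: "served by #{name}" } if owner == name

    { status: 409, headers: { 'X-Gitlab-Pod-Redirect' => owner } }
  end
end

def route_with_bounce(request, pods)
  response = pods.first.call(request) # try any Pod first
  if (target = response.dig(:headers, 'X-Gitlab-Pod-Redirect'))
    # The request was buffered, so it can be replayed to the right Pod.
    response = pods.find { |p| p.name == target }.call(request)
  end
  response
end

pods = [Pod.new('pod_us0'), Pod.new('pod_eu0')]
route_with_bounce({ org: 'org-b' }, pods)[:body] # => "served by pod_eu0"
```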
## Summary in diagrams
This shows how a user request routes via DNS to the nearest router and the router chooses a pod to send the request to.
@@ -255,6 +265,11 @@ keeping settings in sync for all pods.
1. This architecture implies that implemented endpoints can only access data
that is readily accessible on a given Pod, but they are unlikely
to aggregate information from many Pods.
1. All unknown routes are sent to the latest deployment, which we assume to be `Pod US0`.
This is required because newly added endpoints will only be decodable by the latest Pod.
This Pod could later redirect to the correct one that can serve the given request.
Since request processing might be heavy, some Pods might receive a significant amount
of traffic as a result.
## Example database configuration

View File

@@ -0,0 +1,689 @@
---
stage: enablement
group: pods
comments: false
description: 'Pods Stateless Router Proposal'
---
DISCLAIMER:
This page may contain information related to upcoming products, features and
functionality. It is important to note that the information presented is for
informational purposes only, so please do not rely on the information for
purchasing or planning purposes. Just like with all projects, the items
mentioned on the page are subject to change or delay, and the development,
release, and timing of any products, features, or functionality remain at the
sole discretion of GitLab Inc.
This document is a work-in-progress and represents a very early state of the
Pods design. Significant aspects are not documented, though we expect to add
them in the future. This is one possible architecture for Pods, and we intend to
contrast this with alternatives before deciding which approach to implement.
This documentation will be kept even if we decide not to implement this so that
we can document the reasons for not choosing this approach.
# Proposal: Stateless Router
We will decompose `gitlab_users`, `gitlab_routes` and `gitlab_admin` related
tables so that they can be shared between all pods and allow any pod to
authenticate a user and route requests to the correct pod. Pods may receive
requests for the resources they don't own, but they know how to redirect back
to the correct pod.
The router is stateless and does not read from the `routes` database which
means that all interactions with the database still happen from the Rails
monolith. This architecture also supports regions by allowing low-traffic
databases to be replicated across regions.
Users are not directly exposed to the concept of Pods but instead they see
different data dependent on their currently chosen "organization".
[Organizations](index.md#organizations) will be a new model introduced to enforce isolation in the
application and allow us to decide which pod to route each request to, since an
organization can only be on a single pod.
## Differences
The main difference between this proposal and the one [with buffering requests](proposal-stateless-router-with-buffering-requests.md)
is that this proposal uses a pre-flight API request (`/api/v4/pods/learn`) to determine the correct Pod before forwarding the request body.
This means that each request is sent exactly once to be processed, and the URI alone is used to decode which Pod it should be directed to.
## Summary in diagrams
This shows how a user request routes via DNS to the nearest router and the router chooses a pod to send the request to.
```mermaid
graph TD;
user((User));
dns[DNS];
router_us(Router);
router_eu(Router);
pod_us0{Pod US0};
pod_us1{Pod US1};
pod_eu0{Pod EU0};
pod_eu1{Pod EU1};
user-->dns;
dns-->router_us;
dns-->router_eu;
subgraph Europe
router_eu-->pod_eu0;
router_eu-->pod_eu1;
end
subgraph United States
router_us-->pod_us0;
router_us-->pod_us1;
end
```
<details><summary>More detail</summary>
This shows that the router can actually send requests to any pod. The user will
get the closest router to them geographically.
```mermaid
graph TD;
user((User));
dns[DNS];
router_us(Router);
router_eu(Router);
pod_us0{Pod US0};
pod_us1{Pod US1};
pod_eu0{Pod EU0};
pod_eu1{Pod EU1};
user-->dns;
dns-->router_us;
dns-->router_eu;
subgraph Europe
router_eu-->pod_eu0;
router_eu-->pod_eu1;
end
subgraph United States
router_us-->pod_us0;
router_us-->pod_us1;
end
router_eu-.->pod_us0;
router_eu-.->pod_us1;
router_us-.->pod_eu0;
router_us-.->pod_eu1;
```
</details>
<details><summary>Even more detail</summary>
This shows the databases. `gitlab_users` and `gitlab_routes` exist only in the
US region but are replicated to other regions. Replication is not shown with an
arrow because it would make the diagram too hard to read.
```mermaid
graph TD;
user((User));
dns[DNS];
router_us(Router);
router_eu(Router);
pod_us0{Pod US0};
pod_us1{Pod US1};
pod_eu0{Pod EU0};
pod_eu1{Pod EU1};
db_gitlab_users[(gitlab_users Primary)];
db_gitlab_routes[(gitlab_routes Primary)];
db_gitlab_users_replica[(gitlab_users Replica)];
db_gitlab_routes_replica[(gitlab_routes Replica)];
db_pod_us0[(gitlab_main/gitlab_ci Pod US0)];
db_pod_us1[(gitlab_main/gitlab_ci Pod US1)];
db_pod_eu0[(gitlab_main/gitlab_ci Pod EU0)];
db_pod_eu1[(gitlab_main/gitlab_ci Pod EU1)];
user-->dns;
dns-->router_us;
dns-->router_eu;
subgraph Europe
router_eu-->pod_eu0;
router_eu-->pod_eu1;
pod_eu0-->db_pod_eu0;
pod_eu0-->db_gitlab_users_replica;
pod_eu0-->db_gitlab_routes_replica;
pod_eu1-->db_gitlab_users_replica;
pod_eu1-->db_gitlab_routes_replica;
pod_eu1-->db_pod_eu1;
end
subgraph United States
router_us-->pod_us0;
router_us-->pod_us1;
pod_us0-->db_pod_us0;
pod_us0-->db_gitlab_users;
pod_us0-->db_gitlab_routes;
pod_us1-->db_gitlab_users;
pod_us1-->db_gitlab_routes;
pod_us1-->db_pod_us1;
end
router_eu-.->pod_us0;
router_eu-.->pod_us1;
router_us-.->pod_eu0;
router_us-.->pod_eu1;
```
</details>
## Summary of changes
1. Tables related to User data (including profile settings, authentication credentials, personal access tokens) are decomposed into a `gitlab_users` schema
1. The `routes` table is decomposed into `gitlab_routes` schema
1. The `application_settings` (and probably a few other instance level tables) are decomposed into `gitlab_admin` schema
1. A new column `routes.pod_id` is added to `routes` table
1. A new Router service exists to choose which pod to route a request to.
1. If a router receives a new request it will send `/api/v4/pods/learn?method=GET&path_info=/group-org/project` to learn which Pod can process it
1. A new concept will be introduced in GitLab called an organization
1. We require all existing endpoints to be routable by URI, or be fixed to a specific Pod for processing. This requires changing ambiguous endpoints like `/dashboard` to be scoped like `/organizations/my-organization/-/dashboard`
1. Endpoints like `/admin` would be routed always to the specific Pod, like `pod_0`
1. Each Pod can respond to `/api/v4/pods/learn` and classify each endpoint
1. Writes to `gitlab_users` and `gitlab_routes` are sent to a primary PostgreSQL server in our `US` region but reads can come from replicas in the same region. This will add latency for these writes but we expect they are infrequent relative to the rest of GitLab.
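The routing loop implied by the list above can be sketched as follows. This is a minimal illustration with hypothetical class and method names, not the actual Router implementation:

```ruby
# Hypothetical sketch of the Router's request handling: check a
# longest-prefix cache, fall back to a pre-flight /pods/learn call
# against a random pod, then cache the learned prefix.
class PodRouter
  def initialize(pods:, learner:)
    @pods = pods       # e.g. ["pod_us0", "pod_us1"]
    @learner = learner # callable: (pod, method, path) -> {path:, pod:, ...}
    @cache = {}        # learned prefix => pod name
  end

  def route(method, path)
    cached = longest_prefix_match(path)
    return cached if cached

    # Pre-flight: ask a random pod which pod owns this path.
    answer = @learner.call(@pods.sample, method, path)
    # Exact (fixed) answers are not cached as prefixes.
    @cache[answer[:path]] = answer[:pod] unless answer[:exact]
    answer[:pod]
  end

  private

  def longest_prefix_match(path)
    prefix = @cache.keys
      .select { |p| path == p || path.start_with?("#{p}/") }
      .max_by(&:length)
    prefix && @cache[prefix]
  end
end
```

A cache hit avoids the pre-flight round trip entirely, which is what makes the latency of the learn call acceptable.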
## Pre-flight request learning
While processing a request the URI will be decoded and a pre-flight request
will be sent for each non-cached endpoint.
When asked about an endpoint, GitLab Rails returns information about
the routable path: it decodes `path_info`, matches it to
an existing endpoint, and finds a routable entity (like a project). The router
treats this as short-lived cache information.
1. Prefix match: `/api/v4/pods/learn?method=GET&path_info=/gitlab-org/gitlab-test/-/issues`
```json
{
"path": "/gitlab-org/gitlab-test",
"pod": "pod_0",
"source": "routable"
}
```
1. Some endpoints might require an exact match: `/api/v4/pods/learn?method=GET&path_info=/-/profile`
```json
{
"path": "/-/profile",
"pod": "pod_0",
"source": "fixed",
"exact": true
}
```
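The two response shapes above could come from classification logic along these lines. This is an illustrative sketch only; the real endpoint would derive the answer from Rails routing and the `routes` table, and nested group paths would need more than a fixed two-segment prefix:

```ruby
# Hypothetical sketch of /api/v4/pods/learn classification:
# fixed paths match exactly; routable paths match on a project-path
# prefix looked up in the routes table.
FIXED_PATHS = { "/-/profile" => "pod_0" }.freeze

def classify(path_info, routes_table)
  if (pod = FIXED_PATHS[path_info])
    return { path: path_info, pod: pod, source: "fixed", exact: true }
  end

  # "/gitlab-org/gitlab-test/-/issues" -> "/gitlab-org/gitlab-test"
  segments = path_info.split("/").reject(&:empty?)
  prefix = "/" + segments.first(2).join("/")
  pod = routes_table[prefix]
  pod && { path: prefix, pod: pod, source: "routable" }
end
```

Returning `nil` for an unknown path corresponds to the "unknown routes" case discussed later, where the router falls back to a default pod.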
## Detailed explanation of default organization in the first iteration
All users will get a new column `users.default_organization` which they can
control in user settings. We will introduce a concept of the
`GitLab.com Public` organization. This will be set as the default organization for all existing
users. This organization will allow the user to see data from all namespaces in
`Pod US0` (that is, our original GitLab.com instance). This behavior can be invisible to
existing users, who are never told that a global page like `/dashboard` is
scoped to an organization at all.
Any new users with a default organization other than `GitLab.com Public` will have
a distinct user experience and will be fully aware that every page they load is
only ever scoped to a single organization. These users can never
load any global pages like `/dashboard` and will end up being redirected to
`/organizations/<DEFAULT_ORGANIZATION>/-/dashboard`. This may also be the case
for legacy APIs, and such users may only ever be able to use APIs scoped to an
organization.
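The split between legacy and non-legacy users described above could be expressed roughly like this. The function and value names are hypothetical; the real behavior would live in the Rails controllers:

```ruby
# Hypothetical sketch: users whose default organization is the
# "GitLab.com Public" meta-organization may load global pages;
# everyone else is redirected to the organization-scoped equivalent.
LEGACY_ORG = "GitLab.com Public"

def resolve_global_route(default_organization, path)
  return path if default_organization == LEGACY_ORG

  { redirect: "/organizations/#{default_organization}/-#{path}" }
end
```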
## Detailed explanation of Admin Area settings
We believe that maintaining and synchronizing Admin Area settings across pods would be
frustrating and painful, so to avoid this we will decompose all Admin Area
settings into the shared `gitlab_admin` schema. This should be safe (similar to other
shared schemas) because these receive very little write traffic.
In cases where different pods need different settings (for example, the
Elasticsearch URL), we will either use a templated
format in the relevant `application_settings` row, which allows the value to vary
per pod, or, if that proves difficult, introduce a new table
called `per_pod_application_settings` with one row per pod. That table
would still be part of the shared `gitlab_admin`
schema, which allows us to manage it centrally and keep
settings in sync across all pods.
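Resolving a templated setting per pod could look like this. The placeholder syntax and setting value are illustrative assumptions, not a decided format:

```ruby
# Hypothetical sketch: a shared setting stored once in gitlab_admin,
# with a %{pod} placeholder expanded on each pod at read time.
def resolve_setting(template, pod_name)
  format(template, pod: pod_name)
end
```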
## Pros
1. Router is stateless and can live in many regions. We use Anycast DNS to resolve to nearest region for the user.
1. Pods can receive requests for namespaces that live on another pod and the
   user still gets the right response. The router caches the result so
   subsequent requests go directly to the correct pod.
1. The majority of the code still lives in `gitlab` rails codebase. The Router doesn't actually need to understand how GitLab URLs are composed.
1. Since the responsibility to read and write `gitlab_users`,
`gitlab_routes` and `gitlab_admin` still lives in Rails it means minimal
changes will be needed to the Rails application compared to extracting
services that need to isolate the domain models and build new interfaces.
1. Compared to a separate routing service this allows the Rails application
to encode more complex rules around how to map URLs to the correct pod
and may work for some existing API endpoints.
1. All the new infrastructure (just a router) is optional and a single-pod
self-managed installation does not even need to run the Router and there are
no other new services.
## Cons
1. `gitlab_users`, `gitlab_routes` and `gitlab_admin` databases may need to be
replicated across regions and writes need to go across regions. We need to
do an analysis on write TPS for the relevant tables to determine if this is
feasible.
1. Sharing database access between many Pods couples them all at the Postgres
   schema level, so schema changes need to be made carefully and in sync with
   the deployment of all Pods. This forces Pods to run closely similar
   versions, unlike an architecture of shared services behind an API
   we control.
1. Although most data is stored in the right region there can be requests
proxied from another region which may be an issue for certain types
of compliance.
1. Data in `gitlab_users` and `gitlab_routes` databases must be replicated in
all regions which may be an issue for certain types of compliance.
1. The router cache may need to be very large if we get a wide variety of URLs
(ie. long tail). In such a case we may need to implement a 2nd level of
caching in user cookies so their frequently accessed pages always go to the
right pod the first time.
1. Having shared database access for `gitlab_users` and `gitlab_routes`
from multiple pods is an unusual architecture decision compared to
extracting services that are called from multiple pods.
1. It is very likely we won't be able to find cacheable elements of a
   GraphQL URL, and existing GraphQL endpoints often depend heavily on
   IDs that won't be in the `routes` table, so pods won't necessarily know
   which pod has the data. As such we'll probably have to update our GraphQL
   calls to include an organization context in the path, like
   `/api/organizations/<organization>/graphql`.
1. This architecture implies that endpoints can only access data
   that is readily accessible on a given Pod, and are unlikely
   to aggregate information from many Pods.
1. All unknown routes are sent to the latest deployment, which we assume to be `Pod US0`.
   This is required because newly added endpoints are only decodable by the latest pod.
   This is likely not a problem for `/pods/learn` as it is lightweight
   to process and should not cause a performance impact.
## Example database configuration
Handling shared `gitlab_users`, `gitlab_routes` and `gitlab_admin` databases, while having dedicated `gitlab_main` and `gitlab_ci` databases, should already be handled by the way we use `config/database.yml`. We should also already be able to handle the dedicated EU replicas while having a single US primary for `gitlab_users` and `gitlab_routes`. Below is a snippet of part of the database configuration for the Pod architecture described above.
<details><summary>Pod US0</summary>
```yaml
# config/database.yml
production:
main:
host: postgres-main.pod-us0.primary.consul
load_balancing:
discovery: postgres-main.pod-us0.replicas.consul
ci:
host: postgres-ci.pod-us0.primary.consul
load_balancing:
discovery: postgres-ci.pod-us0.replicas.consul
users:
host: postgres-users-primary.consul
load_balancing:
discovery: postgres-users-replicas.us.consul
routes:
host: postgres-routes-primary.consul
load_balancing:
discovery: postgres-routes-replicas.us.consul
admin:
host: postgres-admin-primary.consul
load_balancing:
discovery: postgres-admin-replicas.us.consul
```
</details>
<details><summary>Pod EU0</summary>
```yaml
# config/database.yml
production:
main:
host: postgres-main.pod-eu0.primary.consul
load_balancing:
discovery: postgres-main.pod-eu0.replicas.consul
ci:
host: postgres-ci.pod-eu0.primary.consul
load_balancing:
discovery: postgres-ci.pod-eu0.replicas.consul
users:
host: postgres-users-primary.consul
load_balancing:
discovery: postgres-users-replicas.eu.consul
routes:
host: postgres-routes-primary.consul
load_balancing:
discovery: postgres-routes-replicas.eu.consul
admin:
host: postgres-admin-primary.consul
load_balancing:
discovery: postgres-admin-replicas.eu.consul
```
</details>
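One property of the configurations above is that the `users`, `routes` and `admin` primaries are the same hosts on every pod, while `main` and `ci` stay pod-local. A quick sanity check of that shape, using the EU snippet above and Ruby's standard YAML library:

```ruby
require "yaml"

# Parse a trimmed copy of a pod's database configuration and check
# that shared schemas point at the global primaries while main/ci
# remain pod-local.
config = YAML.safe_load(<<~YAML)
  production:
    main:
      host: postgres-main.pod-eu0.primary.consul
    ci:
      host: postgres-ci.pod-eu0.primary.consul
    users:
      host: postgres-users-primary.consul
    routes:
      host: postgres-routes-primary.consul
    admin:
      host: postgres-admin-primary.consul
YAML

dbs = config["production"]
shared = %w[users routes admin].map { |db| dbs[db]["host"] }
local  = %w[main ci].map { |db| dbs[db]["host"] }

# Shared primaries carry no pod name in their hostnames; pod-local ones do.
raise unless shared.none? { |host| host.include?("pod-") }
raise unless local.all? { |host| host.include?("pod-eu0") }
```

The same check with `pod-us0` holds for the US configuration.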
## Request flows
1. `gitlab-org` is a top level namespace and lives in `Pod US0` in the `GitLab.com Public` organization
1. `my-company` is a top level namespace and lives in `Pod EU0` in the `my-organization` organization
### Experience for a paying user who is part of `my-organization`
Such a user will have a default organization set to `/my-organization` and will be
unable to load any global routes outside of this organization. They may load other
projects/namespaces but their MR/Todo/Issue counts at the top of the page will
not be correctly populated in the first iteration. The user will be aware of
this limitation.
#### Navigates to `/my-company/my-project` while logged in
1. User is in Europe so DNS resolves to the router in Europe
1. They request `/my-company/my-project`; the router has no cache entry, so it randomly chooses `Pod EU1`
1. A `/pods/learn` request is sent to `Pod EU1`, which responds that the resource lives on `Pod EU0`
1. `Pod EU0` returns the correct response
1. The router now caches and remembers any request paths matching `/my-company/*` should go to `Pod EU0`
```mermaid
sequenceDiagram
participant user as User
participant router_eu as Router EU
participant pod_eu0 as Pod EU0
participant pod_eu1 as Pod EU1
user->>router_eu: GET /my-company/my-project
router_eu->>pod_eu1: /api/v4/pods/learn?method=GET&path_info=/my-company/my-project
pod_eu1->>router_eu: {path: "/my-company", pod: "pod_eu0", source: "routable"}
router_eu->>pod_eu0: GET /my-company/my-project
pod_eu0->>user: <h1>My Project...
```
#### Navigates to `/my-company/my-project` while not logged in
1. User is in Europe so DNS resolves to the router in Europe
1. The router does not have `/my-company/*` cached yet, so it randomly chooses `Pod EU1`
1. A `/pods/learn` request is sent to `Pod EU1`, which responds that the resource lives on `Pod EU0`
1. `Pod EU0` redirects the user through a login flow
1. The user requests `/users/sign_in`; the router runs `/pods/learn` against a random pod
1. `Pod EU1` responds with `pod_eu0` as a fixed route
1. After login the user requests `/my-company/my-project`, which is now cached and routed to `Pod EU0`
1. `Pod EU0` returns the correct response
```mermaid
sequenceDiagram
participant user as User
participant router_eu as Router EU
participant pod_eu0 as Pod EU0
participant pod_eu1 as Pod EU1
user->>router_eu: GET /my-company/my-project
router_eu->>pod_eu1: /api/v4/pods/learn?method=GET&path_info=/my-company/my-project
pod_eu1->>router_eu: {path: "/my-company", pod: "pod_eu0", source: "routable"}
router_eu->>pod_eu0: GET /my-company/my-project
pod_eu0->>user: 302 /users/sign_in?redirect=/my-company/my-project
user->>router_eu: GET /users/sign_in?redirect=/my-company/my-project
router_eu->>pod_eu1: /api/v4/pods/learn?method=GET&path_info=/users/sign_in
pod_eu1->>router_eu: {path: "/users", pod: "pod_eu0", source: "fixed"}
router_eu->>pod_eu0: GET /users/sign_in?redirect=/my-company/my-project
pod_eu0-->>user: <h1>Sign in...
user->>router_eu: POST /users/sign_in?redirect=/my-company/my-project
router_eu->>pod_eu0: POST /users/sign_in?redirect=/my-company/my-project
pod_eu0->>user: 302 /my-company/my-project
user->>router_eu: GET /my-company/my-project
router_eu->>pod_eu0: GET /my-company/my-project
pod_eu0->>user: <h1>My Project...
```
#### Navigates to `/my-company/my-other-project` after last step
1. User is in Europe so DNS resolves to the router in Europe
1. The router cache now has `/my-company/* => Pod EU0`, so the router chooses `Pod EU0`
1. `Pod EU0` returns the correct response as well as the cache header again
```mermaid
sequenceDiagram
participant user as User
participant router_eu as Router EU
participant pod_eu0 as Pod EU0
participant pod_eu1 as Pod EU1
user->>router_eu: GET /my-company/my-project
router_eu->>pod_eu0: GET /my-company/my-project
pod_eu0->>user: <h1>My Project...
```
#### Navigates to `/gitlab-org/gitlab` after last step
1. User is in Europe so DNS resolves to the router in Europe
1. The router has no cached value for this URL so it randomly chooses `Pod EU0`
1. `Pod EU0` responds to `/pods/learn` that the resource lives on `Pod US0`
1. `Pod US0` returns the correct response as well as the cache header again
```mermaid
sequenceDiagram
participant user as User
participant router_eu as Router EU
participant pod_eu0 as Pod EU0
participant pod_us0 as Pod US0
user->>router_eu: GET /gitlab-org/gitlab
router_eu->>pod_eu0: /api/v4/pods/learn?method=GET&path_info=/gitlab-org/gitlab
pod_eu0->>router_eu: {path: "/gitlab-org", pod: "pod_us0", source: "routable"}
router_eu->>pod_us0: GET /gitlab-org/gitlab
pod_us0->>user: <h1>GitLab.org...
```
In this case the user is not on their "default organization" so their TODO
counter will not include their normal todos. We may choose to highlight this in
the UI somewhere. A future iteration may be able to fetch that for them from
their default organization.
#### Navigates to `/`
1. User is in Europe so DNS resolves to the router in Europe
1. The router does not have a cache for the `/` route (specifically, Rails never tells it to cache this route)
1. The router randomly chooses `Pod EU0`
1. The Rails application knows the user's default organization is `/my-organization`, so
   it redirects the user to `/organizations/my-organization/-/dashboard`
1. The router has a cached value for `/organizations/my-organization/*` so it sends the
   request to `Pod EU0`
1. `Pod EU0` serves up a new page `/organizations/my-organization/-/dashboard` which is the same
dashboard view we have today but scoped to an organization clearly in the UI
1. The user is (optionally) presented with a message saying that data on this page is only
from their default organization and that they can change their default
organization if it's not right.
```mermaid
sequenceDiagram
participant user as User
participant router_eu as Router EU
participant pod_eu0 as Pod EU0
user->>router_eu: GET /
router_eu->>pod_eu0: GET /
pod_eu0->>user: 302 /organizations/my-organization/-/dashboard
user->>router_eu: GET /organizations/my-organization/-/dashboard
router_eu->>pod_eu0: GET /organizations/my-organization/-/dashboard
pod_eu0->>user: <h1>My Company Dashboard... X-Gitlab-Pod-Cache={path_prefix:/organizations/my-organization/}
```
#### Navigates to `/dashboard`
As above, they will end up on `/organizations/my-organization/-/dashboard` as
the Rails application already redirects `/` to the dashboard page.
#### Navigates to `/not-my-company/not-my-project` while logged in (but they don't have access since this project/group is private)
1. User is in Europe so DNS resolves to the router in Europe
1. The router knows that `/not-my-company` lives in `Pod US1` so sends the request to this
1. The user does not have access so `Pod US1` returns 404
```mermaid
sequenceDiagram
participant user as User
participant router_eu as Router EU
participant pod_us1 as Pod US1
user->>router_eu: GET /not-my-company/not-my-project
router_eu->>pod_us1: GET /not-my-company/not-my-project
pod_us1->>user: 404
```
#### Creates a new top level namespace
The user will be asked which organization they want the namespace to belong to.
If they select `my-organization` then it will end up on the same pod as all
other namespaces in `my-organization`. If they select nothing we default to
`GitLab.com Public` and it is clear to the user that this is isolated from
their existing organization such that they won't be able to see data from both
on a single page.
### Experience for a GitLab team member who is part of `/gitlab-org`
Such a user is considered a legacy user and has their default organization set to
`GitLab.com Public`. This is a "meta" organization that does not really exist but
the Rails application knows to interpret this organization to mean that they are
allowed to use legacy global functionality like `/dashboard` to see data across
namespaces located on `Pod US0`. The Rails backend also knows that the default pod for rendering
ambiguous routes like `/dashboard` is `Pod US0`. Lastly, the user is allowed to
navigate to organizations on another pod like `/my-organization`, but when they do, the
user sees a message indicating that some data may be missing (for example, the
MR/issue/to-do counts).
#### Navigates to `/gitlab-org/gitlab` while not logged in
1. User is in the US so DNS resolves to the US router
1. The router knows that `/gitlab-org` lives in `Pod US0` so sends the request
to this pod
1. `Pod US0` serves up the response
```mermaid
sequenceDiagram
participant user as User
participant router_us as Router US
participant pod_us0 as Pod US0
user->>router_us: GET /gitlab-org/gitlab
router_us->>pod_us0: GET /gitlab-org/gitlab
pod_us0->>user: <h1>GitLab.org...
```
#### Navigates to `/`
1. User is in the US so DNS resolves to the US router
1. The router does not have a cache for the `/` route (specifically, Rails never tells it to cache this route)
1. The router randomly chooses `Pod US1`
1. The Rails application knows the user's default organization is `GitLab.com Public`, so
   it redirects the user to `/dashboard` (only legacy users can see the
   `/dashboard` global view)
1. The router does not have a cache for the `/dashboard` route either, so it runs
   `/pods/learn` against the randomly chosen `Pod US1`
1. `Pod US1` responds that `/dashboard` is routable and lives on the legacy pod, `Pod US0`
1. `Pod US0` serves up the global `/dashboard` page, which is the same
   dashboard view we have today
```mermaid
sequenceDiagram
participant user as User
participant router_us as Router US
participant pod_us0 as Pod US0
participant pod_us1 as Pod US1
user->>router_us: GET /
router_us->>pod_us1: GET /
pod_us1->>user: 302 /dashboard
user->>router_us: GET /dashboard
router_us->>pod_us1: /api/v4/pods/learn?method=GET&path_info=/dashboard
pod_us1->>router_us: {path: "/dashboard", pod: "pod_us0", source: "routable"}
router_us->>pod_us0: GET /dashboard
pod_us0->>user: <h1>Dashboard...
```
#### Navigates to `/my-company/my-other-project` while logged in (but they don't have access since this project is private)
They get a 404.
### Experience for non-logged-in users
The flow is similar to that of logged-in users, except global routes like `/dashboard`
redirect to the login page as there is no default organization to choose from.
### A new customer signs up
They will be asked if they are already part of an organization or if they'd
like to create one. If they choose neither, they end up in the default
`GitLab.com Public` organization.
### An organization is moved from one pod to another
TODO
### GraphQL/API requests which don't include the namespace in the URL
TODO
### The autocomplete suggestion functionality in the search bar which remembers recent issues/MRs
TODO
### Global search
TODO
## Administrator
### Loads `/admin` page
1. The `/admin` path is locked to `Pod US0`
1. Some endpoints under `/admin`, like Projects in the Admin Area, are scoped to a Pod
   and users need to choose the correct one in a dropdown, which results in endpoints
   like `/admin/pods/pod_0/projects`.
Admin Area settings in Postgres are shared across all pods to avoid
divergence, but we still make it clear in the URL and UI which pod is serving
the Admin Area page, as these pages generate dynamic data and
the operator may want to view a specific pod.
## More Technical Problems To Solve
### Replicating User Sessions Between All Pods
Today user sessions live in Redis, but each pod will have its own Redis instance. We already use a dedicated Redis instance for sessions, so we could consider sharing this one with all pods like we do with the `gitlab_users` PostgreSQL database. But an important consideration will be latency, as we would still want to mostly fetch sessions from the same region.
An alternative might be that user sessions get moved to a JWT payload that encodes all the session data but this has downsides. For example, it is difficult to expire a user session, when their password changes or for other reasons, if the session lives in a JWT controlled by the user.
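To illustrate the revocation problem: even with stateless JWT-style sessions, invalidating a session early (for example, after a password change) forces shared server-side state back in, such as a denylist every pod must consult. A minimal sketch, with a hypothetical token format and a hard-coded demo secret:

```ruby
require "json"
require "base64"
require "openssl"

SECRET = "demo-secret" # placeholder; a real deployment would use a managed key

# Sign and verify a minimal JWT-like session token (HMAC-SHA256, no
# expiry handling) to show where a revocation check has to happen.
def encode_session(payload)
  body = Base64.urlsafe_encode64(JSON.dump(payload), padding: false)
  sig  = OpenSSL::HMAC.hexdigest("SHA256", SECRET, body)
  "#{body}.#{sig}"
end

def decode_session(token, revoked_session_ids)
  body, sig = token.split(".")
  return nil unless OpenSSL::HMAC.hexdigest("SHA256", SECRET, body) == sig

  payload = JSON.parse(Base64.urlsafe_decode64(body))
  # The token itself is still validly signed after a password change;
  # only this shared denylist lets us reject it early.
  return nil if revoked_session_ids.include?(payload["sid"])

  payload
end
```

The denylist is much smaller than full session state, but it is still state that must be shared or replicated across pods.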
### How do we migrate between Pods
Migrating data between pods will need to factor all data stores:
1. PostgreSQL
1. Redis Shared State
1. Gitaly
1. Elasticsearch
### Is it still possible to leak the existence of private groups via a timing attack?
If you have a router in the EU, and you know that the EU router by default redirects
to EU-located Pods, you know their latency (let's assume 10 ms). Now, if your
request is bounced back and redirected to the US, which has a different latency
(let's assume a roundtrip of around 60 ms), you can deduce that the 404 was
returned by a US Pod and know that your 404 is in fact a 403.
We may defer this until we actually implement a pod in a different region. Such timing attacks are already theoretically possible with the way we do permission checks today but the timing difference is probably too small to be able to detect.
One technique to mitigate this risk might be to have the router add a random
delay to any request that returns 404 from a pod.
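The random-delay mitigation could look roughly like this at the router. The jitter range is an arbitrary placeholder, and note that uniform jitter only hides the signal if its range dominates the cross-region latency gap; a patient attacker averaging many samples can still distinguish means:

```ruby
# Hypothetical sketch: pad 404 responses with random jitter so
# cross-region latency differences are harder to use to tell an
# out-of-region 404 apart from a 403.
JITTER_RANGE = (0.0..0.1) # seconds; placeholder value

def respond(status, rng: Random.new)
  delay = status == 404 ? rng.rand(JITTER_RANGE) : 0.0
  sleep(delay)
  { status: status, delay: delay }
end
```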
## Should runners be shared across all pods?
We have 2 options and we should decide which is easier:
1. Decompose runner registration and queuing tables and share them across all
   pods. This may have scalability implications, and we'd need to consider
   whether this would include group/project runners, as these are high-traffic
   tables that would need to be shared.
1. Runners are registered per pod, and we probably have a separate fleet of
   runners for every pod, or register the same runners to many pods, which
   may have implications for queueing
## How do we guarantee unique ids across all pods for things that cannot conflict?
This project assumes at least namespaces and projects have unique IDs across
all pods, as many requests need to be routed based on their ID. Since those
tables live in different databases, guaranteeing a unique ID requires a new
solution. There are likely other tables where unique IDs are
necessary, and depending on how we resolve routing for GraphQL and other APIs,
and on other design goals, we may determine that primary keys should be
unique across all tables.
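One common approach, shown here only as an illustration, is to partition the ID space per pod, for example by reserving the low bits of each ID for a pod number so that local sequences on different pods can never collide:

```ruby
# Hypothetical sketch: derive globally unique IDs by reserving the
# low bits for the pod number and the high bits for a local sequence.
POD_BITS = 10 # allows up to 1024 pods; placeholder sizing

def global_id(local_sequence_value, pod_number)
  (local_sequence_value << POD_BITS) | pod_number
end

def pod_of(global_id)
  global_id & ((1 << POD_BITS) - 1)
end
```

A similar effect can be achieved directly in PostgreSQL by giving each pod's sequences a distinct start offset and a shared increment equal to the maximum pod count.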

View File

@ -1155,6 +1155,14 @@ For example:
See also [**enter**](#enter).
## units of measurement
Use a space between the number and the unit of measurement. For example, **128 GB**.
([Vale](../testing.md#vale) rule: [`Units.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/Units.yml))
For other guidance, follow
[the Microsoft style guidelines](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/bits-bytes-terms).
## update
Use **update** for installing a newer **patch** version of the software only. For example:

View File

@ -24,8 +24,8 @@ To create an epic in the group you're in:
1. Get to the New Epic form:
- Go to your group and from the left sidebar select **Epics**. Then select **New epic**.
- From an epic in your group, select **New epic**.
- From anywhere, in the top menu, select **New...** (**{plus-square}**) **> New epic**.
- From an epic in your group, select the vertical ellipsis (**{ellipsis_v}**). Then select **New epic**.
- From anywhere, in the top menu, select **New...** (**{plus-square}**). Then select **New epic**.
- In an empty [roadmap](../roadmap/index.md), select **New epic**.
1. Enter a title.
@ -488,9 +488,9 @@ New child epics appear at the top of the list of epics in the **Epics and Issues
When you add an epic that's already linked to a parent epic, the link to its current parent is removed.
Epics can contain multiple nested child epics, up to a total of seven levels deep.
Epics can contain multiple nested child epics, up to a total of 7 levels deep.
Maximum number of direct child epics is 100.
The maximum number of direct child epics is 100.
### Add a child epic to an epic

View File

@ -138,6 +138,7 @@ The **Create merge request** button doesn't display if:
- A branch with the same name already exists.
- A merge request already exists for this branch.
- Your project has an active fork relationship.
- Your project is private and the issue is confidential.
To make this button appear, one possible workaround is to
[remove your project's fork relationship](../settings/index.md#remove-a-fork-relationship).

View File

@ -100,7 +100,7 @@ module API
def read_ability(awardable)
case awardable
when Note
read_ability(awardable.noteable)
awardable.issuable_ability_name
when Snippet, ProjectSnippet
:read_snippet
else

View File

@ -66,7 +66,7 @@ module API
route_setting :authentication, basic_auth_personal_access_token: true, job_token_allowed: true
post ':id/secure_files' do
secure_file = user_project.secure_files.new(
name: params[:name]
name: Gitlab::Utils.check_path_traversal!(params[:name])
)
secure_file.file = params[:file]

View File

@ -9,6 +9,8 @@ module API
expose :checksum
expose :checksum_algorithm
expose :created_at
expose :expires_at
expose :metadata
end
end
end

View File

@ -154,7 +154,7 @@ module Gitlab
# Using 'self' in the CSP introduces several CSP bypass opportunities
# for this reason we list the URLs where GitLab frames itself instead
def self.allow_framed_gitlab_paths(directives)
['/admin/', '/assets/', '/-/speedscope/index.html', '/-/sandbox/mermaid'].map do |path|
['/admin/', '/assets/', '/-/speedscope/index.html', '/-/sandbox/'].map do |path|
append_to_directive(directives, 'frame_src', Gitlab::Utils.append_path(Gitlab.config.gitlab.url, path))
end
end

View File

@ -7,11 +7,14 @@ module Gitlab
# The purpose of this analyzer is to detect queries not going through a partitioning routing table
class PartitioningAnalyzer < Database::QueryAnalyzers::Base
RoutingTableNotUsedError = Class.new(QueryAnalyzerError)
PartitionIdMissingError = Class.new(QueryAnalyzerError)
ENABLED_TABLES = %w[
ci_builds_metadata
].freeze
ROUTING_TABLES = ENABLED_TABLES.map { |table| "p_#{table}" }.freeze
class << self
def enabled?
::Feature::FlipperFeature.table_exists? &&
@ -20,19 +23,80 @@ module Gitlab
def analyze(parsed)
analyze_legacy_tables_usage(parsed)
analyze_partition_id_presence(parsed) if partition_id_check_enabled?
end
private
def partition_id_check_enabled?
::Feature::FlipperFeature.table_exists? &&
::Feature.enabled?(:ci_partitioning_analyze_queries_partition_id_check, type: :ops)
end
def analyze_legacy_tables_usage(parsed)
detected = ENABLED_TABLES & (parsed.pg.dml_tables + parsed.pg.select_tables)
return if detected.none?
::Gitlab::ErrorTracking.track_and_raise_for_dev_exception(
RoutingTableNotUsedError.new("Detected non-partitioned table use #{detected.inspect}: #{parsed.sql}")
log_and_raise_error(
RoutingTableNotUsedError.new(
"Detected non-partitioned table use #{detected.inspect}: #{parsed.sql}"
)
)
end
def analyze_partition_id_presence(parsed)
detected = ROUTING_TABLES & (parsed.pg.dml_tables + parsed.pg.select_tables)
return if detected.none?
if insert_query?(parsed)
return if insert_include_partition_id?(parsed)
else
detected_with_selected_columns = parsed_detected_tables(parsed, detected)
return if partition_id_included?(detected_with_selected_columns)
end
log_and_raise_error(
PartitionIdMissingError.new(
"Detected query against a partitioned table without partition id: #{parsed.sql}"
)
)
end
def parsed_detected_tables(parsed, routing_tables)
parsed.pg.filter_columns.each_with_object(Hash.new { |h, k| h[k] = [] }) do |item, hash|
table_name = item[0] || routing_tables[0]
column_name = item[1]
hash[table_name] << column_name if routing_tables.include?(table_name)
end
end
def partition_id_included?(result)
return false if result.empty?
result.all? { |_routing_table, columns| columns.include?('partition_id') }
end
def log_and_raise_error(error)
::Gitlab::ErrorTracking.track_and_raise_for_dev_exception(error)
end
def insert_query?(parsed)
parsed.sql.start_with?('INSERT')
end
def insert_include_partition_id?(parsed)
filtered_columns_on_insert(parsed).include?('partition_id')
end
def filtered_columns_on_insert(parsed)
result = parsed.pg.tree.to_h.dig(:stmts, 0, :stmt, :insert_stmt, :cols).map do |h|
h.dig(:res_target, :name)
end
result || []
end
end
end
end

View File

@ -125,7 +125,7 @@ module Gitlab
end
def process_rss_bytes
Gitlab::Metrics::System.memory_usage_rss
Gitlab::Metrics::System.memory_usage_rss[:total]
end
def worker_id

View File

@ -35,6 +35,8 @@ module Gitlab
process_cpu_seconds_total: ::Gitlab::Metrics.gauge(metric_name(:process, :cpu_seconds_total), 'Process CPU seconds total'),
process_max_fds: ::Gitlab::Metrics.gauge(metric_name(:process, :max_fds), 'Process max fds'),
process_resident_memory_bytes: ::Gitlab::Metrics.gauge(metric_name(:process, :resident_memory_bytes), 'Memory used (RSS)', labels),
process_resident_anon_memory_bytes: ::Gitlab::Metrics.gauge(metric_name(:process, :resident_anon_memory_bytes), 'Anonymous memory used (RSS)', labels),
process_resident_file_memory_bytes: ::Gitlab::Metrics.gauge(metric_name(:process, :resident_file_memory_bytes), 'File backed memory used (RSS)', labels),
process_unique_memory_bytes: ::Gitlab::Metrics.gauge(metric_name(:process, :unique_memory_bytes), 'Memory used (USS)', labels),
process_proportional_memory_bytes: ::Gitlab::Metrics.gauge(metric_name(:process, :proportional_memory_bytes), 'Memory used (PSS)', labels),
process_start_time_seconds: ::Gitlab::Metrics.gauge(metric_name(:process, :start_time_seconds), 'Process start time seconds'),
@ -95,7 +97,10 @@ module Gitlab
end
def set_memory_usage_metrics
metrics[:process_resident_memory_bytes].set(labels, System.memory_usage_rss)
rss = System.memory_usage_rss
metrics[:process_resident_memory_bytes].set(labels, rss[:total])
metrics[:process_resident_anon_memory_bytes].set(labels, rss[:anon])
metrics[:process_resident_file_memory_bytes].set(labels, rss[:file])
if Gitlab::Utils.to_boolean(ENV['enable_memory_uss_pss'] || '1')
memory_uss_pss = System.memory_usage_uss_pss

View File

@ -18,7 +18,9 @@ module Gitlab
PRIVATE_PAGES_PATTERN = /^(Private_Clean|Private_Dirty|Private_Hugetlb):\s+(?<value>\d+)/.freeze
PSS_PATTERN = /^Pss:\s+(?<value>\d+)/.freeze
RSS_PATTERN = /VmRSS:\s+(?<value>\d+)/.freeze
RSS_TOTAL_PATTERN = /^VmRSS:\s+(?<value>\d+)/.freeze
RSS_ANON_PATTERN = /^RssAnon:\s+(?<value>\d+)/.freeze
RSS_FILE_PATTERN = /^RssFile:\s+(?<value>\d+)/.freeze
MAX_OPEN_FILES_PATTERN = /Max open files\s*(?<value>\d+)/.freeze
MEM_TOTAL_PATTERN = /^MemTotal:\s+(?<value>\d+) (.+)/.freeze
@ -27,7 +29,7 @@ module Gitlab
{
version: RUBY_DESCRIPTION,
gc_stat: GC.stat,
memory_rss: memory_usage_rss,
memory_rss: memory_usage_rss[:total],
memory_uss: proportional_mem[:uss],
memory_pss: proportional_mem[:pss],
time_cputime: cpu_time,
@@ -38,7 +40,21 @@ module Gitlab
# Returns the given process' RSS (resident set size) in bytes.
def memory_usage_rss(pid: 'self')
sum_matches(PROC_STATUS_PATH % pid, rss: RSS_PATTERN)[:rss].kilobytes
results = { total: 0, anon: 0, file: 0 }
safe_yield_procfile(PROC_STATUS_PATH % pid) do |io|
io.each_line do |line|
if (value = parse_metric_value(line, RSS_TOTAL_PATTERN)) > 0
results[:total] = value.kilobytes
elsif (value = parse_metric_value(line, RSS_ANON_PATTERN)) > 0
results[:anon] = value.kilobytes
elsif (value = parse_metric_value(line, RSS_FILE_PATTERN)) > 0
results[:file] = value.kilobytes
end
end
end
results
end
# Returns the given process' USS/PSS (unique/proportional set size) in bytes.
@@ -115,9 +131,7 @@ module Gitlab
safe_yield_procfile(proc_file) do |io|
io.each_line do |line|
patterns.each do |metric, pattern|
match = line.match(pattern)
value = match&.named_captures&.fetch('value', 0)
results[metric] += value.to_i
results[metric] += parse_metric_value(line, pattern)
end
end
end
@@ -125,6 +139,13 @@ module Gitlab
results
end
def parse_metric_value(line, pattern)
match = line.match(pattern)
return 0 unless match
match.named_captures.fetch('value', 0).to_i
end
def proc_stat_entries
safe_yield_procfile(PROC_STAT_PATH) do |io|
io.read.split(' ')

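The RSS parsing refactor above can be exercised on its own. A minimal sketch in plain Ruby (so `* 1024` stands in for ActiveSupport's `.kilobytes`, and the `/proc/[pid]/status` sample text is hypothetical, modeled on the spec fixture):

```ruby
# Illustrative stand-in for the refactored Gitlab::Metrics::System RSS
# accounting: split VmRSS into total/anon/file components via one regex each.
RSS_TOTAL_PATTERN = /^VmRSS:\s+(?<value>\d+)/.freeze
RSS_ANON_PATTERN  = /^RssAnon:\s+(?<value>\d+)/.freeze
RSS_FILE_PATTERN  = /^RssFile:\s+(?<value>\d+)/.freeze

# Returns the named capture as an integer, or 0 when the line doesn't match.
def parse_metric_value(line, pattern)
  match = line.match(pattern)
  return 0 unless match

  match.named_captures.fetch('value', 0).to_i
end

# Parses a /proc/[pid]/status dump and converts kilobytes to bytes.
def memory_usage_rss(status_text)
  results = { total: 0, anon: 0, file: 0 }
  status_text.each_line do |line|
    if (value = parse_metric_value(line, RSS_TOTAL_PATTERN)) > 0
      results[:total] = value * 1024
    elsif (value = parse_metric_value(line, RSS_ANON_PATTERN)) > 0
      results[:anon] = value * 1024
    elsif (value = parse_metric_value(line, RSS_FILE_PATTERN)) > 0
      results[:file] = value * 1024
    end
  end
  results
end

sample = "VmHWM: 2468 kB\nVmRSS: 2468 kB\nRssAnon: 260 kB\nRssFile: 1024 kB\n"
puts memory_usage_rss(sample).inspect
```

With the sample above this yields the same byte values the spec asserts (2468 kB total, 260 kB anon, 1024 kB file).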
lib/gitlab/patch/uri.rb (new file, 15 lines)
@@ -0,0 +1,15 @@
# frozen_string_literal: true
module Gitlab
module Patch
module Uri
module ClassMethods
def parse(uri)
raise URI::InvalidURIError, "URI is too long" if uri && uri.to_s.length > 15_000
super
end
end
end
end
end

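The new `lib/gitlab/patch/uri.rb` above length-limits `URI.parse`. A standalone sketch of the same prepend technique (the module name here is illustrative, not GitLab's; the 15,000-character limit is taken from the diff):

```ruby
require 'uri'

# Illustrative stand-in for Gitlab::Patch::Uri: prepend onto URI's singleton
# class so the length guard runs before the stock parser does any work.
module UriLengthGuard
  def parse(uri)
    raise URI::InvalidURIError, 'URI is too long' if uri && uri.to_s.length > 15_000

    super
  end
end

URI.singleton_class.prepend(UriLengthGuard)

# Within the limit: parses normally.
URI.parse("https://example.com/#{'a' * 14_000}")

# Over the limit: rejected before parsing.
begin
  URI.parse("https://example.com/#{'a' * 25_000}")
rescue URI::InvalidURIError => e
  puts "rejected: #{e.message}"
end
```

Prepending (rather than monkey-patching `parse` directly) keeps the original method reachable via `super`, which is why the patch in the diff is structured as a `ClassMethods` module.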
@@ -221,7 +221,7 @@ module Gitlab
end
def get_rss_kb
Gitlab::Metrics::System.memory_usage_rss / 1.kilobytes
Gitlab::Metrics::System.memory_usage_rss[:total] / 1.kilobytes
end
def get_soft_limit_rss_kb

@@ -31,7 +31,7 @@ module Gitlab
gc_stats: gc_stats,
time_to_finish: time_to_finish,
number_of_sql_calls: sql_calls_count,
memory_usage: "#{Gitlab::Metrics::System.memory_usage_rss.to_f / 1024 / 1024} MiB",
memory_usage: "#{Gitlab::Metrics::System.memory_usage_rss[:total].to_f / 1024 / 1024} MiB",
label: ::Prometheus::PidProvider.worker_id
)

@@ -222,8 +222,14 @@ module UnnestedInFilters
@in_filters ||= arel_in_nodes.each_with_object({}) { |node, memo| memo[node.left.name] = node.right }
end
def model_column_names
@model_column_names ||= model.columns.map(&:name)
end
# Actively filter any nodes that don't belong to the primary queried table to prevent sql type resolution issues
# Context: https://gitlab.com/gitlab-org/gitlab/-/issues/370271#note_1151019824
def arel_in_nodes
where_clause_arel_nodes.select(&method(:in_predicate?))
where_clause_arel_nodes.select(&method(:in_predicate?)).select { model_column_names.include?(_1.left.name) }
end
# `ActiveRecord::WhereClause#ast` is returning a single node when there is only one
@@ -260,7 +266,7 @@ module UnnestedInFilters
end
def filter_attributes
@filter_attributes ||= where_values_hash.keys
@filter_attributes ||= where_clause.to_h.keys
end
def order_attributes

@@ -14221,9 +14221,6 @@ msgstr ""
msgid "Do not force push over diverged refs. After the mirror is created, this setting can only be modified using the API. %{mirroring_docs_link_start}Learn more about this option%{link_closing_tag} and %{mirroring_api_docs_link_start}the API.%{link_closing_tag}"
msgstr ""
msgid "Do not show again"
msgstr ""
msgid "Do you want to remove this deploy key?"
msgstr ""
@@ -38233,9 +38230,6 @@ msgstr ""
msgid "Something went wrong while fetching the packages list."
msgstr ""
msgid "Something went wrong while initializing the OpenAPI viewer"
msgstr ""
msgid "Something went wrong while obtaining the Let's Encrypt certificate."
msgstr ""
@@ -41672,6 +41666,9 @@ msgstr ""
msgid "This only applies to repository indexing operations."
msgstr ""
msgid "This page is hosted on GitLab pages but contains user-generated content and may contain malicious code. Do not accept unless you trust the author and source."
msgstr ""
msgid "This page is unavailable because you are not allowed to read information across multiple projects."
msgstr ""
@@ -46233,6 +46230,9 @@ msgstr ""
msgid "You are attempting to update a file that has changed since you started editing it."
msgstr ""
msgid "You are being redirected away from GitLab"
msgstr ""
msgid "You are billed if you exceed this number. %{qsrOverageLinkStart}How does billing work?%{qsrOverageLinkEnd}"
msgstr ""

@@ -131,7 +131,7 @@ module QA
Page::Group::Settings::PackageRegistries.perform(&:set_allow_duplicates_disabled)
end
it 'prevents users from publishing duplicates' do
it 'prevents users from publishing duplicates', testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/377491' do
create_duplicated_package
push_duplicated_package
@@ -151,7 +151,7 @@ module QA
Page::Group::Settings::PackageRegistries.perform(&:set_allow_duplicates_enabled)
end
it 'allows users to publish duplicates' do
it 'allows users to publish duplicates', testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/377492' do
create_duplicated_package
push_duplicated_package

@@ -60,7 +60,7 @@ gc_stat_keys = ENV['GC_STAT_KEYS'].to_s.split(',').map(&:to_sym)
values = []
values << ENV['SETTING_CSV']
values += gc_stat_keys.map { |k| gc_stats[k] }
values << ::Gitlab::Metrics::System.memory_usage_rss
values << ::Gitlab::Metrics::System.memory_usage_rss[:total]
values << gc_total_time
values << tms.utime + tms.cutime
values << tms.stime + tms.cstime

@@ -262,6 +262,31 @@ RSpec.describe Projects::ArtifactsController
end
end
describe 'GET external_file' do
before do
allow(Gitlab.config.pages).to receive(:enabled).and_return(true)
allow(Gitlab.config.pages).to receive(:artifacts_server).and_return(true)
end
context 'when the file exists' do
it 'renders the file view' do
path = 'ci_artifacts.txt'
get :external_file, params: { namespace_id: project.namespace, project_id: project, job_id: job, path: path }
expect(response).to have_gitlab_http_status(:ok)
end
end
context 'when the file does not exist' do
it 'responds Not Found' do
get :external_file, params: { namespace_id: project.namespace, project_id: project, job_id: job, path: 'unknown' }
expect(response).to have_gitlab_http_status(:not_found)
end
end
end
describe 'GET file' do
before do
allow(Gitlab.config.pages).to receive(:enabled).and_return(true)
@@ -274,17 +299,11 @@ RSpec.describe Projects::ArtifactsController do
context 'when the file exists' do
it 'renders the file view' do
get :file, params: { namespace_id: project.namespace, project_id: project, job_id: job, path: 'ci_artifacts.txt' }
path = 'ci_artifacts.txt'
expect(response).to have_gitlab_http_status(:found)
end
end
get :file, params: { namespace_id: project.namespace, project_id: project, job_id: job, path: path }
context 'when the file does not exist' do
it 'responds Not Found' do
get :file, params: { namespace_id: project.namespace, project_id: project, job_id: job, path: 'unknown' }
expect(response).to be_not_found
expect(response).to redirect_to(external_file_project_job_artifacts_path(project, job, path: path))
end
end
end

@@ -1360,11 +1360,12 @@ RSpec.describe Projects::PipelinesController
describe 'GET config_variables.json', :use_clean_rails_memory_store_caching do
include ReactiveCachingHelpers
let(:result) { YAML.dump(ci_config) }
let(:service) { Ci::ListConfigVariablesService.new(project, user) }
let(:ci_config) { '' }
let(:files) { { '.gitlab-ci.yml' => YAML.dump(ci_config) } }
let(:project) { create(:project, :auto_devops_disabled, :custom_repo, files: files) }
let(:service) { Ci::ListConfigVariablesService.new(project, user) }
before do
stub_gitlab_ci_yml_for_sha(sha, result)
allow(Ci::ListConfigVariablesService)
.to receive(:new)
.and_return(service)
@@ -1398,7 +1399,6 @@ RSpec.describe Projects::PipelinesController
context 'when sending an invalid sha' do
let(:sha) { 'invalid-sha' }
let(:ci_config) { nil }
before do
synchronous_reactive_cache(service)
@@ -1460,11 +1460,11 @@ RSpec.describe Projects::PipelinesController
end
context 'when project uses external project ci config' do
let(:other_project) { create(:project) }
let(:other_project) { create(:project, :custom_repo, files: other_project_files) }
let(:other_project_files) { { '.gitlab-ci.yml' => YAML.dump(other_project_ci_config) } }
let(:sha) { 'master' }
let(:service) { ::Ci::ListConfigVariablesService.new(other_project, user) }
let(:ci_config) do
let(:other_project_ci_config) do
{
variables: {
KEY1: { value: 'val 1', description: 'description 1' }
@@ -1477,13 +1477,12 @@ RSpec.describe Projects::PipelinesController
end
before do
project.update!(ci_config_path: ".gitlab-ci.yml@#{other_project.full_path}")
other_project.add_developer(user)
project.update!(ci_config_path: ".gitlab-ci.yml@#{other_project.full_path}:master")
synchronous_reactive_cache(service)
end
it 'returns other project config variables' do
expect(::Ci::ListConfigVariablesService).to receive(:new).with(other_project, anything).and_return(service)
get_config_variables
expect(response).to have_gitlab_http_status(:ok)
@@ -1493,13 +1492,6 @@ RSpec.describe Projects::PipelinesController
private
def stub_gitlab_ci_yml_for_sha(sha, result)
allow_any_instance_of(Repository)
.to receive(:gitlab_ci_yml_for)
.with(sha, '.gitlab-ci.yml')
.and_return(result)
end
def get_config_variables
get :config_variables, params: { namespace_id: project.namespace,
project_id: project,

@@ -395,7 +395,7 @@ RSpec.describe SearchController
it_behaves_like 'support for active record query timeouts', :autocomplete, { term: 'hello' }, :project, :json
it 'returns an empty array when given abusive search term' do
get :autocomplete, params: { term: ('hal' * 9000), scope: 'projects' }
get :autocomplete, params: { term: ('hal' * 4000), scope: 'projects' }
expect(response).to have_gitlab_http_status(:ok)
expect(json_response).to match_array([])
end

@@ -13,4 +13,13 @@ FactoryBot.define do
end
end
end
factory :ci_secure_file_with_metadata, class: 'Ci::SecureFile' do
sequence(:name) { |n| "file#{n}.cer" }
file { fixture_file_upload('spec/fixtures/ci_secure_files/sample.cer', 'application/octet-stream') }
checksum { 'foo1234' }
project
after(:create, &:update_metadata!)
end
end

@@ -81,12 +81,11 @@ RSpec.describe "User browses artifacts"
end
it "shows correct content" do
link = first(".tree-item-file-external-link")
expect(link[:target]).to eq("_blank")
expect(link[:rel]).to include("noopener").and include("noreferrer")
expect(page).to have_link("doc_sample.txt", href: file_project_job_artifacts_path(project, job, path: txt_entry.blob.path))
.and have_selector(".js-artifact-tree-external-icon")
expect(page)
.to have_link(
"doc_sample.txt",
href: external_file_project_job_artifacts_path(project, job, path: txt_entry.blob.path)
).and have_selector(".js-artifact-tree-external-icon")
page.within(".tree-table") do
expect(page).to have_content("..").and have_content("another-subdirectory")

@@ -1001,11 +1001,9 @@ RSpec.describe 'File blob', :js do
wait_for_requests
end
it 'removes `style`, `class`, and `data-*` attributes from HTML' do
expect(page).to have_css('h1', text: 'Swagger API documentation')
expect(page).not_to have_css('.foo-bar')
expect(page).not_to have_css('[style="background-color: red;"]')
expect(page).not_to have_css('[data-foo-bar="baz"]')
it 'renders sandboxed iframe' do
expected = %(<iframe src="/-/sandbox/swagger" sandbox="allow-scripts" frameborder="0" width="100%" height="1000">)
expect(page.html).to include(expected)
end
end
end

@@ -1,28 +1,27 @@
import { SwaggerUIBundle } from 'swagger-ui-dist';
import axios from 'axios';
import MockAdapter from 'axios-mock-adapter';
import { setHTMLFixture, resetHTMLFixture } from 'helpers/fixtures';
import renderOpenApi from '~/blob/openapi';
jest.mock('swagger-ui-dist');
describe('OpenAPI blob viewer', () => {
const id = 'js-openapi-viewer';
const mockEndpoint = 'some/endpoint';
let mock;
beforeEach(() => {
beforeEach(async () => {
setHTMLFixture(`<div id="${id}" data-endpoint="${mockEndpoint}"></div>`);
renderOpenApi();
mock = new MockAdapter(axios).onGet().reply(200);
await renderOpenApi();
});
afterEach(() => {
resetHTMLFixture();
mock.restore();
});
it('initializes SwaggerUI with the correct configuration', () => {
expect(SwaggerUIBundle).toHaveBeenCalledWith({
url: mockEndpoint,
dom_id: `#${id}`,
deepLinking: true,
displayOperationId: true,
});
expect(document.body.innerHTML).toContain(
'<iframe src="/-/sandbox/swagger" sandbox="allow-scripts" frameborder="0" width="100%" height="1000"></iframe>',
);
});
});

@@ -310,4 +310,15 @@ RSpec.describe LabelsHelper do
end
end
end
describe '#wrap_label_html' do
let(:project) { build_stubbed(:project) }
let(:xss_label) do
build_stubbed(:label, name: 'xsslabel', project: project, color: '"><img src=x onerror=prompt(1)>')
end
it 'does not include the color' do
expect(wrap_label_html('xss', label: xss_label, small: false)).not_to include('color:')
end
end
end

@@ -97,7 +97,7 @@ RSpec.describe Gitlab::ContentSecurityPolicy::ConfigLoader do
expect(directives['style_src']).to eq("'self' 'unsafe-inline' https://cdn.example.com")
expect(directives['font_src']).to eq("'self' https://cdn.example.com")
expect(directives['worker_src']).to eq('http://localhost/assets/ blob: data: https://cdn.example.com')
expect(directives['frame_src']).to eq(::Gitlab::ContentSecurityPolicy::Directives.frame_src + " https://cdn.example.com http://localhost/admin/ http://localhost/assets/ http://localhost/-/speedscope/index.html http://localhost/-/sandbox/mermaid")
expect(directives['frame_src']).to eq(::Gitlab::ContentSecurityPolicy::Directives.frame_src + " https://cdn.example.com http://localhost/admin/ http://localhost/assets/ http://localhost/-/speedscope/index.html http://localhost/-/sandbox/")
end
end
@@ -120,7 +120,7 @@ RSpec.describe Gitlab::ContentSecurityPolicy::ConfigLoader do
end
it 'adds CUSTOMER_PORTAL_URL to CSP' do
expect(directives['frame_src']).to eq(::Gitlab::ContentSecurityPolicy::Directives.frame_src + " http://localhost/admin/ http://localhost/assets/ http://localhost/-/speedscope/index.html http://localhost/-/sandbox/mermaid #{customer_portal_url}")
expect(directives['frame_src']).to eq(::Gitlab::ContentSecurityPolicy::Directives.frame_src + " http://localhost/admin/ http://localhost/assets/ http://localhost/-/speedscope/index.html http://localhost/-/sandbox/ #{customer_portal_url}")
end
end

@@ -54,15 +54,100 @@ RSpec.describe Gitlab::Database::QueryAnalyzers::Ci::PartitioningAnalyzer, query
context 'when analyzing non targeted table' do
it 'does not raise error' do
expect { process_sql(Ci::BuildMetadata, "SELECT 1 FROM projects") }
.not_to raise_error
expect { process_sql(Ci::BuildMetadata, "SELECT 1 FROM projects") }.not_to raise_error
end
end
context 'when querying a routing table' do
it 'does not raise error' do
expect { process_sql(Ci::BuildMetadata, "SELECT 1 FROM p_ci_builds_metadata") }
.not_to raise_error
shared_examples 'a good query' do |sql|
it 'does not raise error' do
expect { process_sql(Ci::BuildMetadata, sql) }.not_to raise_error
end
end
shared_examples 'a bad query' do |sql|
it 'raises PartitionIdMissingError' do
expect { process_sql(Ci::BuildMetadata, sql) }.to raise_error(described_class::PartitionIdMissingError)
end
end
context 'when selecting data' do
it_behaves_like 'a good query', 'SELECT * FROM p_ci_builds_metadata WHERE partition_id = 100'
end
context 'with a join query' do
sql = <<~SQL
SELECT ci_builds.id
FROM p_ci_builds
JOIN p_ci_builds_metadata ON p_ci_builds_metadata.build_id = ci_builds.id
WHERE ci_builds.type = 'Ci::Build'
AND ci_builds.partition_id = 100
AND (NOT p_ci_builds_metadata.id IN
(SELECT p_ci_builds_metadata.id
FROM p_ci_builds_metadata
WHERE p_ci_builds_metadata.build_id = ci_builds.id
AND p_ci_builds_metadata.interruptible = TRUE
AND p_ci_builds_metadata.partition_id = 100 ));
SQL
it_behaves_like 'a good query', sql
end
context 'when removing data' do
it_behaves_like 'a good query', 'DELETE FROM p_ci_builds_metadata WHERE partition_id = 100'
end
context 'when updating data' do
it_behaves_like 'a good query', 'UPDATE p_ci_builds_metadata SET interruptible = false WHERE partition_id = 100'
end
context 'when inserting a record' do
it_behaves_like 'a good query', 'INSERT INTO p_ci_builds_metadata (id, partition_id) VALUES(1, 1)'
end
context 'when partition_id is missing' do
context 'when inserting a record' do
it_behaves_like 'a bad query', 'INSERT INTO p_ci_builds_metadata (id) VALUES(1)'
end
context 'when selecting data' do
it_behaves_like 'a bad query', 'SELECT * FROM p_ci_builds_metadata WHERE id = 1'
end
context 'when removing data' do
it_behaves_like 'a bad query', 'DELETE FROM p_ci_builds_metadata WHERE id = 1'
end
context 'when updating data' do
it_behaves_like 'a bad query', 'UPDATE p_ci_builds_metadata SET interruptible = false WHERE id = 1'
end
context 'with a join query' do
sql = <<~SQL
SELECT ci_builds.id
FROM ci_builds
JOIN p_ci_builds_metadata ON p_ci_builds_metadata.build_id = ci_builds.id
WHERE ci_builds.type = 'Ci::Build'
AND ci_builds.partition_id = 100
AND (NOT p_ci_builds_metadata.id IN
(SELECT p_ci_builds_metadata.id
FROM p_ci_builds_metadata
WHERE p_ci_builds_metadata.build_id = ci_builds.id
AND p_ci_builds_metadata.interruptible = TRUE ));
SQL
it_behaves_like 'a bad query', sql
end
end
context 'when ci_partitioning_analyze_queries_partition_id_check is disabled' do
before do
stub_feature_flags(ci_partitioning_analyze_queries_partition_id_check: false)
end
it 'does not check if partition_id is included in the query' do
expect { process_sql(Ci::BuildMetadata, 'SELECT * from p_ci_builds_metadata') }.not_to raise_error
end
end
end
end

@@ -4,7 +4,7 @@ require 'spec_helper'
RSpec.describe Gitlab::GitalyClient::ObjectPoolService do
let(:pool_repository) { create(:pool_repository) }
let(:project) { create(:project, :repository) }
let(:project) { pool_repository.source_project }
let(:raw_repository) { project.repository.raw }
let(:object_pool) { pool_repository.object_pool }
@@ -45,21 +45,32 @@ RSpec.describe Gitlab::GitalyClient::ObjectPoolService do
end
describe '#fetch' do
before do
subject.delete
context 'without changes' do
it 'fetches changes' do
expect(subject.fetch(project.repository)).to eq(Gitaly::FetchIntoObjectPoolResponse.new)
end
end
it 'restores the pool repository objects' do
subject.fetch(project.repository)
context 'with new reference in source repository' do
let(:branch) { 'ref-to-be-fetched' }
let(:source_ref) { "refs/heads/#{branch}" }
let(:pool_ref) { "refs/remotes/origin/heads/#{branch}" }
expect(object_pool.repository.exists?).to be(true)
end
before do
# Create a new reference in the source repository that we can fetch.
project.repository.write_ref(source_ref, 'refs/heads/master')
end
context 'when called twice' do
it "doesn't raise an error" do
subject.delete
it 'fetches changes' do
# Sanity-check to verify that the reference only exists in the source repository now, but not in the
# object pool.
expect(project.repository.ref_exists?(source_ref)).to be(true)
expect(object_pool.repository.ref_exists?(pool_ref)).to be(false)
expect { subject.fetch(project.repository) }.not_to raise_error
subject.fetch(project.repository)
# The fetch should've created the reference in the object pool.
expect(object_pool.repository.ref_exists?(pool_ref)).to be(true)
end
end
end

@@ -60,7 +60,9 @@ RSpec.describe Gitlab::Memory::Watchdog, :aggregate_failures do
describe '#call' do
before do
stub_prometheus_metrics
allow(Gitlab::Metrics::System).to receive(:memory_usage_rss).at_least(:once).and_return(1024)
allow(Gitlab::Metrics::System).to receive(:memory_usage_rss).at_least(:once).and_return(
total: 1024
)
allow(::Prometheus::PidProvider).to receive(:worker_id).and_return('worker_1')
watchdog.configure do |config|

@@ -35,14 +35,30 @@ RSpec.describe Gitlab::Metrics::Samplers::RubySampler do
end
describe '#sample' do
it 'adds a metric containing the process resident memory bytes' do
expect(Gitlab::Metrics::System).to receive(:memory_usage_rss).and_return(9000)
it 'adds a metric containing the process total resident memory bytes' do
expect(Gitlab::Metrics::System).to receive(:memory_usage_rss).and_return({ total: 9000 })
expect(sampler.metrics[:process_resident_memory_bytes]).to receive(:set).with({}, 9000)
sampler.sample
end
it 'adds a metric containing the process anonymous resident memory bytes' do
expect(Gitlab::Metrics::System).to receive(:memory_usage_rss).and_return({ anon: 9000 })
expect(sampler.metrics[:process_resident_anon_memory_bytes]).to receive(:set).with({}, 9000)
sampler.sample
end
it 'adds a metric containing the process file backed resident memory bytes' do
expect(Gitlab::Metrics::System).to receive(:memory_usage_rss).and_return({ file: 9000 })
expect(sampler.metrics[:process_resident_file_memory_bytes]).to receive(:set).with({}, 9000)
sampler.sample
end
it 'adds a metric containing the process unique and proportional memory bytes' do
expect(Gitlab::Metrics::System).to receive(:memory_usage_uss_pss).and_return(uss: 9000, pss: 10_000)

@@ -20,6 +20,7 @@ RSpec.describe Gitlab::Metrics::System do
VmHWM: 2468 kB
VmRSS: 2468 kB
RssAnon: 260 kB
RssFile: 1024 kB
SNIP
end
@@ -132,18 +133,26 @@ RSpec.describe Gitlab::Metrics::System do
describe '.memory_usage_rss' do
context 'without PID' do
it "returns the current process' resident set size (RSS) in bytes" do
it "returns a hash containing RSS metrics in bytes for current process" do
mock_existing_proc_file('/proc/self/status', proc_status)
expect(described_class.memory_usage_rss).to eq(2527232)
expect(described_class.memory_usage_rss).to eq(
total: 2527232,
anon: 266240,
file: 1048576
)
end
end
context 'with PID' do
it "returns the given process' resident set size (RSS) in bytes" do
it "returns a hash containing RSS metrics in bytes for given process" do
mock_existing_proc_file('/proc/7/status', proc_status)
expect(described_class.memory_usage_rss(pid: 7)).to eq(2527232)
expect(described_class.memory_usage_rss(pid: 7)).to eq(
total: 2527232,
anon: 266240,
file: 1048576
)
end
end
end
@@ -241,8 +250,12 @@ RSpec.describe Gitlab::Metrics::System do
end
describe '.memory_usage_rss' do
it 'returns 0' do
expect(described_class.memory_usage_rss).to eq(0)
it 'returns 0 for all components' do
expect(described_class.memory_usage_rss).to eq(
total: 0,
anon: 0,
file: 0
)
end
end

@@ -0,0 +1,15 @@
# frozen_string_literal: true
require 'spec_helper'
RSpec.describe Gitlab::Patch::Uri do
describe '#parse' do
it 'raises an error if the URI is too long' do
expect { URI.parse("https://example.com/#{'a' * 25_000}") }.to raise_error(URI::InvalidURIError)
end
it 'does not raise an error if the URI is not too long' do
expect { URI.parse("https://example.com/#{'a' * 14_000}") }.not_to raise_error
end
end
end

@@ -12,6 +12,12 @@ RSpec.describe UnnestedInFilters::Rewriter do
describe '#rewrite?' do
subject(:rewrite?) { rewriter.rewrite? }
context 'when a join table is receiving an IN list query' do
let(:relation) { User.joins(:status).where(status: { message: %w[foo bar] }).order(id: :desc).limit(2) }
it { is_expected.to be_falsey }
end
context 'when the given relation does not have an `IN` predicate' do
let(:relation) { User.where(username: 'user') }
@@ -215,6 +221,46 @@ RSpec.describe UnnestedInFilters::Rewriter do
end
end
context 'when a join table is receiving an IN list query' do
let(:relation) { User.joins(:status).where(status: { message: %w[foo bar] }).order(id: :desc).limit(2) }
let(:expected_query) do
<<~SQL
SELECT
"users".*
FROM
"users"
WHERE
"users"."id" IN (
SELECT
"users"."id"
FROM
LATERAL (
SELECT
message,
"users"."id"
FROM
"users"
INNER JOIN "user_statuses" "status" ON "status"."user_id" = "users"."id"
WHERE
"status"."message" IN ('foo', 'bar')
ORDER BY
"users"."id" DESC
LIMIT 2) AS users
ORDER BY
"users"."id" DESC
LIMIT 2)
ORDER BY
"users"."id" DESC
LIMIT 2
SQL
end
it 'does not rewrite the in statement for the joined table' do
expect(issued_query.gsub(/\s/, '')).to start_with(expected_query.gsub(/\s/, ''))
end
end
describe 'logging' do
subject(:load_reload) { rewriter.rewrite }

@@ -5158,74 +5158,147 @@ RSpec.describe Ci::Pipeline, :mailer, factory_default: :keep do
end
describe '#reset_source_bridge!' do
let(:pipeline) { create(:ci_pipeline, :created, project: project) }
subject(:reset_bridge) { pipeline.reset_source_bridge!(current_user) }
subject(:reset_bridge) { pipeline.reset_source_bridge!(project.first_owner) }
context 'with downstream pipeline' do
let_it_be(:owner) { project.first_owner }
context 'when the pipeline is a child pipeline and the bridge is depended' do
let!(:parent_pipeline) { create(:ci_pipeline) }
let!(:bridge) { create_bridge(parent_pipeline, pipeline, true) }
let!(:first_upstream_pipeline) { create(:ci_pipeline, user: owner) }
let_it_be_with_reload(:pipeline) { create(:ci_pipeline, :created, project: project, user: owner) }
it 'marks source bridge as pending' do
reset_bridge
expect(bridge.reload).to be_pending
let!(:bridge) do
create_bridge(
upstream: first_upstream_pipeline,
downstream: pipeline,
depends: true
)
end
context 'when the parent pipeline has subsequent jobs after the bridge' do
let!(:after_bridge_job) { create(:ci_build, :skipped, pipeline: parent_pipeline, stage_idx: bridge.stage_idx + 1) }
context 'when the user has permissions for the processable' do
let(:current_user) { owner }
it 'marks subsequent jobs of the bridge as processable' do
reset_bridge
context 'when the downstream has strategy: depend' do
it 'marks source bridge as pending' do
expect { reset_bridge }
.to change { bridge.reload.status }
.to('pending')
end
expect(after_bridge_job.reload).to be_created
context 'with subsequent jobs' do
let!(:after_bridge_job) { add_bridge_dependant_job }
let!(:bridge_dependant_dag_job) { add_bridge_dependant_dag_job }
it 'changes subsequent job statuses to created' do
expect { reset_bridge }
.to change { after_bridge_job.reload.status }
.from('skipped').to('created')
.and change { bridge_dependant_dag_job.reload.status }
.from('skipped').to('created')
end
context 'when the user is not the build user' do
let(:current_user) { create(:user) }
before do
project.add_maintainer(current_user)
end
it 'changes subsequent jobs user' do
expect { reset_bridge }
.to change { after_bridge_job.reload.user }
.from(owner).to(current_user)
.and change { bridge_dependant_dag_job.reload.user }
.from(owner).to(current_user)
end
end
end
context 'when the upstream pipeline has a dependent upstream pipeline' do
let(:upstream_of_upstream) { create(:ci_pipeline, project: create(:project)) }
let!(:upstream_bridge) do
create_bridge(
upstream: upstream_of_upstream,
downstream: first_upstream_pipeline,
depends: true
)
end
it 'marks all source bridges as pending' do
expect { reset_bridge }
.to change { bridge.reload.status }
.from('skipped').to('pending')
.and change { upstream_bridge.reload.status }
.from('skipped').to('pending')
end
end
context 'without strategy: depend' do
let!(:upstream_pipeline) { create(:ci_pipeline) }
let!(:bridge) do
create_bridge(
upstream: first_upstream_pipeline,
downstream: pipeline,
depends: false
)
end
it 'does not touch source bridge' do
expect { reset_bridge }.to not_change { bridge.status }
end
context 'when the upstream pipeline has a dependent upstream pipeline' do
let!(:upstream_bridge) do
create_bridge(
upstream: create(:ci_pipeline, project: create(:project)),
downstream: first_upstream_pipeline,
depends: true
)
end
it 'does not touch any source bridge' do
expect { reset_bridge }.to not_change { bridge.status }
.and not_change { upstream_bridge.status }
end
end
end
end
end
context 'when the parent pipeline has a dependent upstream pipeline' do
let!(:upstream_bridge) do
create_bridge(create(:ci_pipeline, project: create(:project)), parent_pipeline, true)
context 'when the user does not have permissions for the processable' do
let(:current_user) { create(:user) }
it 'does not change bridge status' do
expect { reset_bridge }.to not_change { bridge.status }
end
it 'marks all source bridges as pending' do
reset_bridge
context 'with subsequent jobs' do
let!(:after_bridge_job) { add_bridge_dependant_job }
let!(:bridge_dependant_dag_job) { add_bridge_dependant_dag_job }
expect(bridge.reload).to be_pending
expect(upstream_bridge.reload).to be_pending
end
end
end
context 'when the pipeline is a child pipeline and the bridge is not depended' do
let!(:parent_pipeline) { create(:ci_pipeline) }
let!(:bridge) { create_bridge(parent_pipeline, pipeline, false) }
it 'does not touch source bridge' do
reset_bridge
expect(bridge.reload).to be_success
end
context 'when the parent pipeline has a dependent upstream pipeline' do
let!(:upstream_bridge) do
create_bridge(create(:ci_pipeline, project: create(:project)), parent_pipeline, true)
end
it 'does not touch any source bridge' do
reset_bridge
expect(bridge.reload).to be_success
expect(upstream_bridge.reload).to be_success
it 'does not change job statuses' do
expect { reset_bridge }.to not_change { after_bridge_job.reload.status }
.and not_change { bridge_dependant_dag_job.reload.status }
end
end
end
end
private
def create_bridge(upstream, downstream, depend = false)
options = depend ? { trigger: { strategy: 'depend' } } : {}
def add_bridge_dependant_job
create(:ci_build, :skipped, pipeline: first_upstream_pipeline, stage_idx: bridge.stage_idx + 1, user: owner)
end
bridge = create(:ci_bridge, pipeline: upstream, status: 'success', options: options)
def add_bridge_dependant_dag_job
create(:ci_build, :skipped, name: 'dependant-build-1', pipeline: first_upstream_pipeline, user: owner).tap do |build|
create(:ci_build_need, build: build, name: bridge.name)
end
end
def create_bridge(upstream:, downstream:, depends: false)
options = depends ? { trigger: { strategy: 'depend' } } : {}
bridge = create(:ci_bridge, pipeline: upstream, status: 'skipped', options: options, user: owner)
create(:ci_sources_pipeline, pipeline: downstream, source_job: bridge)
bridge

@@ -32,6 +32,7 @@ RSpec.describe CommitStatus do
it { is_expected.to respond_to :failed? }
it { is_expected.to respond_to :running? }
it { is_expected.to respond_to :pending? }
it { is_expected.not_to be_retried }
describe '#author' do
subject { commit_status.author }

@@ -1824,4 +1824,20 @@ RSpec.describe Note do
end
end
end
describe '#issuable_ability_name' do
subject { note.issuable_ability_name }
context 'when not confidential note' do
let(:note) { build(:note) }
it { is_expected.to eq :read_note }
end
context 'when confidential note' do
let(:note) { build(:note, :confidential) }
it { is_expected.to eq :read_internal_note }
end
end
end

@@ -21,6 +21,12 @@ RSpec.describe ProjectCiCdSetting do
end
end
describe '#separated_caches' do
it 'is true by default' do
expect(described_class.new.separated_caches).to be_truthy
end
end
describe '#default_git_depth' do
let(:default_value) { described_class::DEFAULT_GIT_DEPTH }

@@ -191,6 +191,36 @@ RSpec.describe API::AwardEmoji do
expect(json_response['name']).to eq(rocket.name)
end
context 'when a confidential note' do
subject(:perform_request) { get api(request_path, current_user) }
let_it_be(:group) { create(:group) }
let_it_be(:project) { create(:project, :public, namespace: group) }
let_it_be(:issue) { create(:issue, project: project) }
let_it_be(:note) { create(:note, :confidential, project: project, noteable: issue, author: user) }
context 'with sufficient permissions' do
let(:current_user) { user }
it 'returns an award emoji' do
perform_request
expect(response).to have_gitlab_http_status(:ok)
expect(json_response['name']).to eq(rocket.name)
end
end
context 'with insufficient permissions' do
let(:current_user) { nil }
it 'returns 404' do
perform_request
expect(response).to have_gitlab_http_status(:not_found)
end
end
end
it_behaves_like 'unauthenticated request to public awardable'
it_behaves_like 'request with insufficient permissions', :get
end


@@ -143,6 +143,18 @@ RSpec.describe API::Ci::SecureFiles do
expect(response).to have_gitlab_http_status(:ok)
expect(json_response['name']).to eq(secure_file.name)
expect(json_response['expires_at']).to be nil
expect(json_response['metadata']).to be nil
end
it 'returns project secure file details with metadata when supported' do
secure_file_with_metadata = create(:ci_secure_file_with_metadata, project: project)
get api("/projects/#{project.id}/secure_files/#{secure_file_with_metadata.id}", maintainer)
expect(response).to have_gitlab_http_status(:ok)
expect(json_response['name']).to eq(secure_file_with_metadata.name)
expect(json_response['expires_at']).to eq('2022-04-26T19:20:40.000Z')
expect(json_response['metadata'].keys).to match_array(%w[id issuer subject expires_at])
end
it 'responds with 404 Not Found if requesting non-existing secure file' do
@@ -341,6 +353,15 @@ RSpec.describe API::Ci::SecureFiles do
expect(response).to have_gitlab_http_status(:payload_too_large)
end
it 'returns an error when an invalid file name is supplied' do
params = file_params.merge(name: '../../upload-keystore.jks')
expect do
post api("/projects/#{project.id}/secure_files", maintainer), params: params
end.not_to change { project.secure_files.count }
expect(response).to have_gitlab_http_status(:internal_server_error)
end
end
context 'authenticated user with read permissions' do


@@ -189,6 +189,7 @@ RSpec.describe 'project routing' do
end
it 'to #logs_tree' do
expect(get('/gitlab/gitlabhq/-/refs/stable/logs_tree/..%2F..%2F..%2F..%2F..%2F@example.com/tree/a')).to route_to('projects/refs#logs_tree', namespace_id: 'gitlab', project_id: 'gitlabhq', id: 'stable', path: '../../../../../@example.com/tree/a')
expect(get('/gitlab/gitlabhq/-/refs/stable/logs_tree')).to route_to('projects/refs#logs_tree', namespace_id: 'gitlab', project_id: 'gitlabhq', id: 'stable')
expect(get('/gitlab/gitlabhq/-/refs/feature%2345/logs_tree')).to route_to('projects/refs#logs_tree', namespace_id: 'gitlab', project_id: 'gitlabhq', id: 'feature#45')
expect(get('/gitlab/gitlabhq/-/refs/feature%2B45/logs_tree')).to route_to('projects/refs#logs_tree', namespace_id: 'gitlab', project_id: 'gitlabhq', id: 'feature+45')
@@ -214,6 +215,10 @@ RSpec.describe 'project routing' do
it_behaves_like 'redirecting a legacy path',
'/gitlab/gitlabhq/refs/stable/logs_tree/new%0A%0Aline.txt',
'/gitlab/gitlabhq/-/refs/stable/logs_tree/new%0A%0Aline.txt'
it_behaves_like 'redirecting a legacy path',
'/gitlab/gitlabhq/refs/feature%2345/logs_tree/../../../../../@example.com/tree/a',
'/gitlab/gitlabhq/-/refs/feature#45/logs_tree/../../../../../-/example.com/tree/a'
end
describe Projects::MergeRequestsController, 'routing' do


@@ -5,19 +5,16 @@ require 'spec_helper'
RSpec.describe Ci::ListConfigVariablesService, :use_clean_rails_memory_store_caching do
include ReactiveCachingHelpers
let(:project) { create(:project, :repository) }
let(:ci_config) { {} }
let(:files) { { '.gitlab-ci.yml' => YAML.dump(ci_config) } }
let(:project) { create(:project, :custom_repo, :auto_devops_disabled, files: files) }
let(:user) { project.creator }
let(:sha) { project.default_branch }
let(:service) { described_class.new(project, user) }
let(:result) { YAML.dump(ci_config) }
subject { service.execute(sha) }
before do
stub_gitlab_ci_yml_for_sha(sha, result)
end
subject(:result) { service.execute(sha) }
context 'when sending a valid sha' do
let(:sha) { 'master' }
let(:ci_config) do
{
variables: {
@@ -38,15 +35,14 @@ RSpec.describe Ci::ListConfigVariablesService, :use_clean_rails_memory_store_cac
end
it 'returns variable list' do
expect(subject['KEY1']).to eq({ value: 'val 1', description: 'description 1' })
expect(subject['KEY2']).to eq({ value: 'val 2', description: '' })
expect(subject['KEY3']).to eq({ value: 'val 3' })
expect(subject['KEY4']).to eq({ value: 'val 4' })
expect(result['KEY1']).to eq({ value: 'val 1', description: 'description 1' })
expect(result['KEY2']).to eq({ value: 'val 2', description: '' })
expect(result['KEY3']).to eq({ value: 'val 3' })
expect(result['KEY4']).to eq({ value: 'val 4' })
end
end
context 'when config has includes' do
let(:sha) { 'master' }
let(:ci_config) do
{
include: [{ local: 'other_file.yml' }],
@@ -60,24 +56,56 @@ RSpec.describe Ci::ListConfigVariablesService, :use_clean_rails_memory_store_cac
}
end
before do
allow_next_instance_of(Repository) do |repository|
allow(repository).to receive(:blob_data_at).with(sha, 'other_file.yml') do
<<~HEREDOC
variables:
KEY2:
value: 'val 2'
description: 'description 2'
HEREDOC
end
end
let(:other_file) do
{
variables: {
KEY2: { value: 'val 2', description: 'description 2' }
}
}
end
let(:files) { { '.gitlab-ci.yml' => YAML.dump(ci_config), 'other_file.yml' => YAML.dump(other_file) } }
before do
synchronous_reactive_cache(service)
end
it 'returns variable list' do
expect(subject['KEY1']).to eq({ value: 'val 1', description: 'description 1' })
expect(subject['KEY2']).to eq({ value: 'val 2', description: 'description 2' })
expect(result['KEY1']).to eq({ value: 'val 1', description: 'description 1' })
expect(result['KEY2']).to eq({ value: 'val 2', description: 'description 2' })
end
end
context 'when project CI config is external' do
let(:other_project_ci_config) do
{
variables: { KEY1: { value: 'val 1', description: 'description 1' } },
test: { script: 'echo' }
}
end
let(:other_project_files) { { '.gitlab-ci.yml' => YAML.dump(other_project_ci_config) } }
let(:other_project) { create(:project, :custom_repo, files: other_project_files) }
before do
project.update!(ci_config_path: ".gitlab-ci.yml@#{other_project.full_path}:master")
synchronous_reactive_cache(service)
end
context 'when the user has access to the external project' do
before do
other_project.add_developer(user)
end
it 'returns variable list' do
expect(result['KEY1']).to eq({ value: 'val 1', description: 'description 1' })
end
end
context 'when the user has no access to the external project' do
it 'returns empty json' do
expect(result).to eq({})
end
end
end
@@ -90,12 +118,11 @@ RSpec.describe Ci::ListConfigVariablesService, :use_clean_rails_memory_store_cac
end
it 'returns empty json' do
expect(subject).to eq({})
expect(result).to eq({})
end
end
context 'when sending an invalid config' do
let(:sha) { 'master' }
let(:ci_config) do
{
variables: {
@@ -113,13 +140,11 @@ RSpec.describe Ci::ListConfigVariablesService, :use_clean_rails_memory_store_cac
end
it 'returns empty result' do
expect(subject).to eq({})
expect(result).to eq({})
end
end
context 'when reading from cache' do
let(:sha) { 'master' }
let(:ci_config) { {} }
let(:reactive_cache_params) { [sha] }
let(:return_value) { { 'KEY1' => { value: 'val 1', description: 'description 1' } } }
@@ -128,13 +153,11 @@ RSpec.describe Ci::ListConfigVariablesService, :use_clean_rails_memory_store_cac
end
it 'returns variable list' do
expect(subject).to eq(return_value)
expect(result).to eq(return_value)
end
end
context 'when the cache is empty' do
let(:sha) { 'master' }
let(:ci_config) { {} }
let(:reactive_cache_params) { [sha] }
it 'returns nil and enqueues the worker to fill cache' do
@@ -142,16 +165,7 @@ RSpec.describe Ci::ListConfigVariablesService, :use_clean_rails_memory_store_cac
.to receive(:perform_async)
.with(service.class, service.id, *reactive_cache_params)
expect(subject).to be_nil
expect(result).to be_nil
end
end
private
def stub_gitlab_ci_yml_for_sha(sha, result)
allow_any_instance_of(Repository)
.to receive(:gitlab_ci_yml_for)
.with(sha, '.gitlab-ci.yml')
.and_return(result)
end
end


@@ -313,10 +313,27 @@ RSpec.describe Ci::RetryPipelineService, '#execute' do
create(:ci_sources_pipeline, pipeline: pipeline, source_job: bridge)
end
it 'marks source bridge as pending' do
service.execute(pipeline)
context 'without permission' do
it 'does nothing to the bridge' do
expect { service.execute(pipeline) }.to not_change { bridge.reload.status }
.and not_change { bridge.reload.user }
end
end
expect(bridge.reload).to be_pending
context 'with permission' do
let!(:bridge_pipeline) { create(:ci_pipeline, project: create(:project)) }
let!(:bridge) do
create(:ci_bridge, :strategy_depend, status: 'success', pipeline: bridge_pipeline)
end
before do
bridge_pipeline.project.add_maintainer(user)
end
it 'marks source bridge as pending' do
expect { service.execute(pipeline) }.to change { bridge.reload.status }.to('pending')
.and not_change { bridge.reload.user }
end
end
end


@@ -28,7 +28,7 @@ require (
github.com/stretchr/testify v1.8.1
gitlab.com/gitlab-org/gitaly/v15 v15.5.1
gitlab.com/gitlab-org/golang-archive-zip v0.1.1
gitlab.com/gitlab-org/labkit v1.16.0
gitlab.com/gitlab-org/labkit v1.16.1
gocloud.dev v0.26.0
golang.org/x/image v0.0.0-20220722155232-062f8c9fd539
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616


@@ -960,8 +960,9 @@ gitlab.com/gitlab-org/gitaly/v15 v15.5.1 h1:EbkAYAeTLllJzX3N3Sy3ZcmKtBzI5OovT5c5
gitlab.com/gitlab-org/gitaly/v15 v15.5.1/go.mod h1:G5q5H6OYMSEDnKXsQoYTzI+ysCTfM4Of2z0v6xeHtRY=
gitlab.com/gitlab-org/golang-archive-zip v0.1.1 h1:35k9giivbxwF03+8A05Cm8YoxoakU8FBCj5gysjCTCE=
gitlab.com/gitlab-org/golang-archive-zip v0.1.1/go.mod h1:ZDtqpWPGPB9qBuZnZDrKQjIdJtkN7ZAoVwhT6H2o2kE=
gitlab.com/gitlab-org/labkit v1.16.0 h1:Vm3NAMZ8RqAunXlvPWby3GJ2R35vsYGP6Uu0YjyMIlY=
gitlab.com/gitlab-org/labkit v1.16.0/go.mod h1:bcxc4ZpAC+WyACgyKl7FcvT2XXAbl8CrzN6UY+w8cMc=
gitlab.com/gitlab-org/labkit v1.16.1 h1:J+HmNVR5bvPfrv9/fgKICFis2nmEugRXHMeRPvsVZUg=
gitlab.com/gitlab-org/labkit v1.16.1/go.mod h1:tzZLVHeb0/Jrm9fPFdYuCrKmrYjfjEA0NmuLPXvvM+0=
go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=