# Repository Storage Types

> [Introduced][ce-28283] in GitLab 10.0.

Two different storage layouts can be used to store repositories on disk,
each with its own characteristics.

GitLab can be configured to use one or more repository shard locations
that can be:

- Mounted to the local disk
- Exposed as an NFS shared volume
- Accessed via [gitaly] on its own machine

In GitLab, this is configured in `/etc/gitlab/gitlab.rb` by the `git_data_dirs({})`
configuration hash. The storage layouts discussed here apply to any shard
defined in it.
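
For example, a minimal sketch of a two-shard configuration in `/etc/gitlab/gitlab.rb`
(the `nfs_shard` name and its mount path are hypothetical):

```ruby
git_data_dirs({
  "default"   => { "path" => "/var/opt/gitlab/git-data" },
  # Hypothetical second shard on an NFS mount:
  "nfs_shard" => { "path" => "/mnt/nfs/git-data" }
})
```

Run `gitlab-ctl reconfigure` after changing this file for the new shards to
take effect.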

The `default` repository shard, which is available in any installation that
hasn't customized it, points to the local folder `/var/opt/gitlab/git-data`.
Anything discussed below is expected to be inside that folder.

## Legacy Storage

Legacy Storage is the storage behavior prior to version 10.0. For historical
reasons, GitLab replicated the same mapping structure from the project URLs:

- Project's repository: `#{namespace}/#{project_name}.git`
- Project's wiki: `#{namespace}/#{project_name}.wiki.git`
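
The mapping above can be sketched in plain Ruby (the namespace and project
names here are made up):

```ruby
namespace    = "gitlab-org"  # hypothetical namespace
project_name = "gitlab-ce"   # hypothetical project name

repository_path = "#{namespace}/#{project_name}.git"
wiki_path       = "#{namespace}/#{project_name}.wiki.git"

puts repository_path  # gitlab-org/gitlab-ce.git
puts wiki_path        # gitlab-org/gitlab-ce.wiki.git
```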

This structure made it simple to migrate from existing solutions to GitLab,
and easy for administrators to find where a repository is stored.

On the other hand, this approach has some drawbacks:

The storage location concentrates a huge number of top-level namespaces. The
impact can be reduced by the introduction of [multiple storage
paths][storage-paths].

Because backups are a snapshot of the same URL mapping, if you try to recover a
very old backup, you need to verify whether any project has taken the place of
an old removed or renamed project sharing the same URL. This means that
`mygroup/myproject` from your backup may not be the same original project that
is at that same URL today.

Any change in the URL needs to be reflected on disk (when groups, users, or
projects are renamed). This can add a lot of load in big installations,
especially when using any type of network-based filesystem.

For GitLab Geo in particular: Geo does work with Legacy Storage, but in some
edge cases, due to race conditions, it can lead to errors when a project is
renamed multiple times in short succession, or a project is deleted and
recreated under the same name very quickly. We expect these race events to be
rare, and we have not observed a race condition side-effect happening yet.

This pattern also exists in other objects stored in GitLab, like issue
attachments, GitLab Pages artifacts, Docker containers for the integrated
Registry, etc.

## Hashed Storage

Hashed Storage is the new storage behavior we rolled out with 10.0. Instead
of coupling the project URL and the folder structure where the repository is
stored on disk, we couple a hash based on the project's ID. This makes the
folder structure immutable, and therefore eliminates any requirement to
synchronize state from URLs to disk structure. This means that renaming a
group, user, or project costs only the database transaction, and takes effect
immediately.

The hash also helps to spread the repositories more evenly on the disk, so the
top-level directory will contain fewer folders than the total number of
top-level namespaces.

The hash format is based on the hexadecimal representation of SHA256:
`SHA256(project.id)`. The top-level folder uses the first 2 characters,
followed by another folder with the next 2 characters. They are both stored in
a special `@hashed` folder, to be able to co-exist with existing Legacy
Storage projects:

```ruby
# Project's repository:
"@hashed/#{hash[0..1]}/#{hash[2..3]}/#{hash}.git"

# Wiki's repository:
"@hashed/#{hash[0..1]}/#{hash[2..3]}/#{hash}.wiki.git"
```
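
As an illustration, the storage path for a hypothetical project with ID `1`
can be derived in plain Ruby (assuming the ID is hashed via its string
representation, as `SHA256(project.id)` suggests):

```ruby
require "digest"

project_id = 1  # hypothetical project ID
hash = Digest::SHA256.hexdigest(project_id.to_s)

repository_path = "@hashed/#{hash[0..1]}/#{hash[2..3]}/#{hash}.git"
wiki_path       = "@hashed/#{hash[0..1]}/#{hash[2..3]}/#{hash}.wiki.git"

puts repository_path
# @hashed/6b/86/6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b.git
```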

### Hashed object pools

When a project is forked, the new repository used to be a deep copy of
everything stored on disk, created with `git clone`. This works well and makes
isolation between repositories easy, but at the moment of the fork the clone
is 100% identical to the origin repository, so the contents of the object
directory are almost entirely duplicated.

Object pools are a way to create a third repository that essentially exists
only for its `objects` directory. That directory is set as an alternate object
location for the member repositories: when an object is missing in the local
repository, Git also looks in the pool. Git's garbage collection is aware of
alternate locations, so when an object exists both locally and in the pool,
Git can throw the local copy away while the copy in the pool remains.

Each pool has an origin, which for now is always a repository that is not
itself a fork. When the root of a fork network is forked, the fork still
clones the full repository, and the pool repository is created asynchronously.
Either of these processes can finish before the other, so the operation that
joins a repository to its pool is idempotent and can safely be scheduled more
than once. On the database side, a pool is tracked by a `PoolRepository`
record with a state machine and a `source_project_id` pointing at the root of
the fork network, while the on-disk ObjectPool itself is managed by Gitaly.

Object pools are only available for public projects that use hashed storage,
and only when forking from the root of the fork network (that is, the project
being forked is not itself a fork).

For deduplication of public forks and their parent repository, objects are
pooled in an object pool. These object pools are a third repository where
shared objects are stored:

```ruby
# object pool paths
"@pools/#{hash[0..1]}/#{hash[2..3]}/#{hash}.git"
```

The object pool feature is behind the `object_pools` feature flag, and can be
enabled for individual projects by executing
`Feature.enable(:object_pools, Project.find(<id>))`. Note that the project has
to be on hashed storage, should not be a fork itself, and hashed storage
should be enabled for all new projects.
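
The alternates mechanism that pools rely on can be sketched with plain Git
commands. This is only a demo: the repository names and paths below are made
up, and GitLab/Gitaly manage the real pool layout for you.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Create an "upstream" repository with one commit.
git init -q upstream
git -C upstream -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "initial commit"

# The pool: a bare copy that exists only for its objects directory.
git clone -q --bare upstream pool.git

# A "fork" that has no objects of its own, but points at the pool
# through the alternates file.
git init -q fork
echo "$tmp/pool.git/objects" > fork/.git/objects/info/alternates

# The fork can now resolve the upstream commit through the pool.
sha=$(git -C upstream rev-parse HEAD)
git -C fork cat-file -e "$sha" && echo "commit found via pool"
```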

### How to migrate to Hashed Storage

To start a migration, enable Hashed Storage for new projects:

1. Go to **Admin > Settings** and expand the **Repository Storage** section.
2. Select the **Use hashed storage paths for newly created and renamed projects** checkbox.

Check if the change breaks any existing integration you may have that either
runs on the same machine as your repositories, or may log in to that machine
to access data (for example, a remote backup solution).

To schedule a complete rollout, see the
[rake task documentation for storage migration][rake/migrate-to-hashed] for instructions.

If you do have any existing integrations, you may want to do a small rollout
first to validate. You can do so by specifying a range with the operation.

This is an example of how to limit the rollout to Project IDs 50 to 100,
running in an Omnibus GitLab installation:

```bash
sudo gitlab-rake gitlab:storage:migrate_to_hashed ID_FROM=50 ID_TO=100
```

Check the [documentation][rake/migrate-to-hashed] for additional information
and instructions for source-based installations.

#### Rollback

Similar to the migration, to disable Hashed Storage for new projects:

1. Go to **Admin > Settings** and expand the **Repository Storage** section.
2. Uncheck the **Use hashed storage paths for newly created and renamed projects** checkbox.

To schedule a complete rollback, see the
[rake task documentation for storage rollback][rake/rollback-to-legacy] for instructions.

The rollback task also supports specifying a range of Project IDs. Here is an
example of limiting the rollback to Project IDs 50 to 100, in an Omnibus
GitLab installation:

```bash
sudo gitlab-rake gitlab:storage:rollback_to_legacy ID_FROM=50 ID_TO=100
```

If you have a Geo setup, please note that the rollback will not be reflected
automatically on the **secondary** node. You may need to wait for a backfill
operation to kick in, and to remove the remaining repositories from the
special `@hashed/` folder manually.

### Hashed Storage coverage

We are incrementally moving every storable object in GitLab to the Hashed
Storage pattern. You can check the current coverage status below (and also see
the [issue][ce-2821]).

Note that things stored in an S3-compatible endpoint will not have the
downsides mentioned earlier, if they are not prefixed with
`#{namespace}/#{project_name}`, which is true for CI Cache and LFS Objects.

| Storable Object  | Legacy Storage | Hashed Storage | S3 Compatible | GitLab Version |
| ---------------- | -------------- | -------------- | ------------- | -------------- |
| Repository       | Yes            | Yes            | -             | 10.0           |
| Attachments      | Yes            | Yes            | -             | 10.2           |
| Avatars          | Yes            | No             | -             | -              |
| Pages            | Yes            | No             | -             | -              |
| Docker Registry  | Yes            | No             | -             | -              |
| CI Build Logs    | No             | No             | -             | -              |
| CI Artifacts     | No             | No             | Yes           | 9.4 / 10.6     |
| CI Cache         | No             | No             | Yes           | -              |
| LFS Objects      | Yes            | Similar        | Yes           | 10.0 / 10.7    |
| Repository pools | No             | Yes            | -             | 11.6           |

#### Implementation Details

##### Avatars

Each file is stored in a folder named after its `id` in the database. The
filename is always `avatar.png` for user avatars. When an avatar is replaced,
the `Upload` model is destroyed and a new one takes its place with a
different `id`.

##### CI Artifacts

CI Artifacts are S3 compatible since **9.4** (GitLab Premium), and available in GitLab Core since **10.6**.

##### LFS Objects

LFS Objects implement a similar storage pattern using 2 characters, 2-level
folders, following Git's own implementation:

```ruby
"shared/lfs-objects/#{oid[0..1]}/#{oid[2..3]}/#{oid[4..-1]}"

# Based on object `oid`: `8909029eb962194cfb326259411b22ae3f4a814b5be4f80651735aeef9f3229c`, path will be:
"shared/lfs-objects/89/09/029eb962194cfb326259411b22ae3f4a814b5be4f80651735aeef9f3229c"
```
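
As a quick sanity check, the two 2-character segments and the remainder
concatenate back to the original `oid`:

```ruby
oid  = "8909029eb962194cfb326259411b22ae3f4a814b5be4f80651735aeef9f3229c"
path = "shared/lfs-objects/#{oid[0..1]}/#{oid[2..3]}/#{oid[4..-1]}"

segments = path.delete_prefix("shared/lfs-objects/").split("/")
puts segments.join == oid  # true
```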

They are also S3 compatible since **10.0** (GitLab Premium), and available in GitLab Core since **10.7**.

[ce-2821]: https://gitlab.com/gitlab-com/infrastructure/issues/2821
[ce-28283]: https://gitlab.com/gitlab-org/gitlab-ce/issues/28283
[rake/migrate-to-hashed]: raketasks/storage.md#migrate-existing-projects-to-hashed-storage
[rake/rollback-to-legacy]: raketasks/storage.md#rollback
[storage-paths]: repository_storage_types.md
[gitaly]: gitaly/index.md