# GitLab Git LFS Administration

Documentation on how to use Git LFS is under the [Managing large binary files with Git LFS doc](manage_large_binaries_with_git_lfs.md).

## Requirements

- Git LFS is supported in GitLab starting with version 8.2.
- Support for object storage, such as AWS S3, was introduced in 10.0.
- Users need to install the [Git LFS client](https://git-lfs.github.com) version 1.0.1 or newer (you can verify the installed version as shown below).
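
To check which version of the Git LFS client is installed on a workstation:

```shell
git lfs version
```
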
## Configuration

Git LFS objects can be large in size. By default, they are stored on the server
GitLab is installed on.

There are various configuration options to help GitLab server administrators:

- Enabling/disabling Git LFS support
- Changing the location of LFS object storage
- Setting up object storage supported by [Fog](http://fog.io/about/provider_documentation.html)

### Configuration for Omnibus installations

In `/etc/gitlab/gitlab.rb`:

```ruby
# Change to true to enable lfs - enabled by default if not defined
gitlab_rails['lfs_enabled'] = false

# Optionally, change the storage path location. Defaults to
# `#{gitlab_rails['shared_path']}/lfs-objects`, which evaluates to
# `/var/opt/gitlab/gitlab-rails/shared/lfs-objects`.
gitlab_rails['lfs_storage_path'] = "/mnt/storage/lfs-objects"
```
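
After saving the file, [reconfigure GitLab][] for the change to take effect:

```shell
sudo gitlab-ctl reconfigure
```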

### Configuration for installations from source

In `config/gitlab.yml`:

```yaml
# Change to true to enable lfs
lfs:
  enabled: false
  storage_path: /mnt/storage/lfs-objects
```
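
After saving the file, [restart GitLab][] for the change to take effect. On a typical installation from source this is:

```shell
sudo service gitlab restart
```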

## Storing LFS objects in remote object storage

> [Introduced][ee-2760] in [GitLab Premium][eep] 10.0. Brought to GitLab Core
> in 10.7.

It is possible to store LFS objects in remote object storage, which allows you
to offload local hard disk R/W operations and free up disk space significantly.
GitLab is tightly integrated with `Fog`, so you can refer to its [documentation](http://fog.io/about/provider_documentation.html)
to check which storage services can be integrated with GitLab.
You can also use external object storage in a private local network. For example,
[Minio](https://www.minio.io/) is a standalone object storage service that is easy to set up and works well with GitLab instances.

GitLab provides two different options for the uploading mechanism: "Direct upload" and "Background upload".

**Option 1. Direct upload**

1. User pushes an LFS file to the GitLab instance
1. GitLab-workhorse uploads the file directly to the external object storage
1. GitLab-workhorse notifies GitLab-rails that the upload process is complete

**Option 2. Background upload**

1. User pushes an LFS file to the GitLab instance
1. GitLab-rails stores the file in the local file storage
1. GitLab-rails then uploads the file to the external object storage asynchronously

The following general settings are supported:

| Setting | Description | Default |
|---------|-------------|---------|
| `enabled` | Enable/disable object storage | `false` |
| `remote_directory` | The bucket name where LFS objects will be stored | |
| `direct_upload` | Set to `true` to enable direct upload of LFS without the need of local shared storage. This option may be removed once GitLab supports only a single storage for all files. | `false` |
| `background_upload` | Set to `false` to disable automatic upload. This option may be removed once upload is direct to S3. | `true` |
| `proxy_download` | Set to `true` to proxy all files served. Leaving this as `false` reduces egress traffic, as clients download directly from remote storage instead of having all data proxied. | `false` |
| `connection` | Various connection options described below | |
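
On Omnibus installations, these settings take an `lfs_object_store_` prefix (covered in detail below). As a minimal sketch, enabling object storage with direct upload in `/etc/gitlab/gitlab.rb` could look like:

```ruby
# Minimal sketch; see the S3 and GCS sections below for full examples
gitlab_rails['lfs_object_store_enabled'] = true
gitlab_rails['lfs_object_store_remote_directory'] = "lfs-objects"
gitlab_rails['lfs_object_store_direct_upload'] = true
gitlab_rails['lfs_object_store_background_upload'] = false
```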

The `connection` settings match those provided by [Fog](https://github.com/fog).

Here is a configuration example with S3:

| Setting | Description | Example |
|---------|-------------|---------|
| `provider` | The provider name | `AWS` |
| `aws_access_key_id` | AWS credentials, or compatible | `ABC123DEF456` |
| `aws_secret_access_key` | AWS credentials, or compatible | `ABC123DEF456ABC123DEF456ABC123DEF456` |
| `aws_signature_version` | AWS signature version to use. `2` or `4` are valid options. DigitalOcean Spaces and other providers may need `2`. | `4` |
| `region` | AWS region | `us-east-1` |
| `host` | S3-compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | `s3.amazonaws.com` |
| `endpoint` | Can be used when configuring an S3-compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
| `path_style` | Set to `true` to use `host/bucket_name/object`-style paths instead of `bucket_name.host/object`. Leave as `false` for AWS S3. | `false` |
| `use_iam_profile` | Set to `true` to use an IAM profile instead of access keys | `false` |
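
For instances running on AWS infrastructure with an instance profile attached, a connection sketch that relies on `use_iam_profile` instead of static keys (Omnibus syntax, hypothetical values) could look like:

```ruby
# Sketch: IAM instance profile instead of static credentials
gitlab_rails['lfs_object_store_connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'use_iam_profile' => true
}
```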

Here is a configuration example with GCS:

| Setting | Description | Example |
|---------|-------------|---------|
| `provider` | The provider name | `Google` |
| `google_project` | GCP project name | `gcp-project-12345` |
| `google_client_email` | The email address of the service account | `foo@gcp-project-12345.iam.gserviceaccount.com` |
| `google_json_key_location` | The JSON key path | `/path/to/gcp-project-12345-abcde.json` |
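
Putting the table together, a GCS connection sketch in Omnibus syntax (using the placeholder values above) could look like:

```ruby
# Sketch using the placeholder values from the table above
gitlab_rails['lfs_object_store_connection'] = {
  'provider' => 'Google',
  'google_project' => 'gcp-project-12345',
  'google_client_email' => 'foo@gcp-project-12345.iam.gserviceaccount.com',
  'google_json_key_location' => '/path/to/gcp-project-12345-abcde.json'
}
```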

_NOTE: The service account must have permission to access the bucket. [See more](https://cloud.google.com/storage/docs/authentication)._

### Manual uploading to an object storage

There are two ways to manually do the same thing as automatic uploading (described above).

**Option 1: rake task**

```shell
$ rake gitlab:lfs:migrate
```

**Option 2: rails console**

```shell
$ sudo gitlab-rails console # Login to rails console

> # Upload LFS files manually
> LfsObject.where(file_store: [nil, 1]).find_each do |lfs_object|
>   lfs_object.file.migrate!(ObjectStorage::Store::REMOTE) if lfs_object.file.file.exists?
> end
```
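
To gauge migration progress, you can count the objects still on local storage (`file_store` of `nil` or `1`, as in the loop above); a quick sketch using the Rails runner:

```shell
sudo gitlab-rails runner "puts LfsObject.where(file_store: [nil, 1]).count"
```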

### S3 for Omnibus installations

On Omnibus installations, the settings are prefixed by `lfs_object_store_`:

1. Edit `/etc/gitlab/gitlab.rb` and add the following lines by replacing with
   the values you want:

   ```ruby
   gitlab_rails['lfs_object_store_enabled'] = true
   gitlab_rails['lfs_object_store_remote_directory'] = "lfs-objects"
   gitlab_rails['lfs_object_store_connection'] = {
     'provider' => 'AWS',
     'region' => 'eu-central-1',
     'aws_access_key_id' => '1ABCD2EFGHI34JKLM567N',
     'aws_secret_access_key' => 'abcdefhijklmnopQRSTUVwxyz0123456789ABCDE',
     # The below options configure an S3 compatible host instead of AWS
     'host' => 'localhost',
     'endpoint' => 'http://127.0.0.1:9000',
     'path_style' => true
   }
   ```

1. Save the file and [reconfigure GitLab][] for the changes to take effect.
1. Migrate any existing local LFS objects to the object storage:

   ```bash
   gitlab-rake gitlab:lfs:migrate
   ```

   This will migrate existing LFS objects to object storage. New LFS objects
   will be forwarded to object storage unless
   `gitlab_rails['lfs_object_store_background_upload']` is set to false.
2018-02-21 11:43:21 -05:00

### S3 for installations from source

For source installations the settings are nested under `lfs:` and then
`object_store:`:

1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following
   lines:

   ```yaml
   lfs:
     enabled: true
     object_store:
       enabled: true
       remote_directory: lfs-objects # Bucket name
       connection:
         provider: AWS
         aws_access_key_id: 1ABCD2EFGHI34JKLM567N
         aws_secret_access_key: abcdefhijklmnopQRSTUVwxyz0123456789ABCDE
         region: eu-central-1
         # Use the following options to configure an AWS compatible host such as Minio
         host: 'localhost'
         endpoint: 'http://127.0.0.1:9000'
         path_style: true
   ```

1. Save the file and [restart GitLab][] for the changes to take effect.
1. Migrate any existing local LFS objects to the object storage:

   ```bash
   sudo -u git -H bundle exec rake gitlab:lfs:migrate RAILS_ENV=production
   ```

   This will migrate existing LFS objects to object storage. New LFS objects
   will be forwarded to object storage unless `background_upload` is set to
   false.

## Storage statistics

You can see the total storage used for LFS objects on groups and projects
in the administration area, as well as through the [groups](../../api/groups.md)
and [projects APIs](../../api/projects.md).
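
For example, an administrator can read a project's statistics, including the size of its LFS objects, through the projects API (hypothetical host and token):

```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>?statistics=true"
```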

## Troubleshooting: `Google::Apis::TransmissionError: execution expired`

If the LFS integration is configured with Google Cloud Storage and background uploads (`background_upload: true` and `direct_upload: false`),
Sidekiq workers may encounter this error. This is because the upload of very large files times out.
LFS files up to 6GB can be uploaded without any extra steps; for larger files you need to use the following workaround.

```shell
$ sudo gitlab-rails console # Login to rails console

> # Set up timeouts. 20 minutes is enough to upload 30GB LFS files.
> # These settings are only in effect for the same session, i.e. they are not effective for Sidekiq workers.
> ::Google::Apis::ClientOptions.default.open_timeout_sec = 1200
> ::Google::Apis::ClientOptions.default.read_timeout_sec = 1200
> ::Google::Apis::ClientOptions.default.send_timeout_sec = 1200

> # Upload LFS files manually. This process does not use Sidekiq at all.
> LfsObject.where(file_store: [nil, 1]).find_each do |lfs_object|
>   lfs_object.file.migrate!(ObjectStorage::Store::REMOTE) if lfs_object.file.file.exists?
> end
```

See more information in [!19581](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/19581).

## Known limitations

- Support for removing unreferenced LFS objects was added in 8.14.
- LFS authentication via SSH was added in GitLab 8.12.
- Only compatible with Git LFS client versions 1.1.0 and up, or 1.0.2.
- The storage statistics currently count each LFS object multiple times for
  every project linking to it.

[reconfigure gitlab]: ../../administration/restart_gitlab.md#omnibus-gitlab-reconfigure "How to reconfigure Omnibus GitLab"
[restart gitlab]: ../../administration/restart_gitlab.md#installations-from-source "How to restart GitLab"
[eep]: https://about.gitlab.com/pricing/ "GitLab Premium"
[ee-2760]: https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/2760