**DO NOT READ THIS FILE ON GITHUB, GUIDES ARE PUBLISHED ON https://guides.rubyonrails.org.**
Active Storage Overview
=======================
This guide covers how to attach files to your Active Record models.
After reading this guide, you will know:
* How to attach one or many files to a record.
* How to delete an attached file.
* How to link to an attached file.
* How to use variants to transform images.
* How to generate an image representation of a non-image file, such as a PDF or a video.
* How to send file uploads directly from browsers to a storage service,
bypassing your application servers.
* How to clean up files stored during testing.
* How to implement support for additional storage services.
--------------------------------------------------------------------------------
What is Active Storage?
-----------------------
Active Storage facilitates uploading files to a cloud storage service like
Amazon S3, Google Cloud Storage, or Microsoft Azure Storage and attaching those
files to Active Record objects. It comes with a local disk-based service for
development and testing and supports mirroring files to subordinate services for
backups and migrations.
Using Active Storage, an application can transform image uploads with
[ImageMagick](https://www.imagemagick.org), generate image representations of
non-image uploads like PDFs and videos, and extract metadata from arbitrary
files.
Setup
-----
Active Storage uses two tables in your application's database named
`active_storage_blobs` and `active_storage_attachments`. After creating a new
application (or upgrading your application to Rails 5.2), run
`bin/rails active_storage:install` to generate a migration that creates these
tables. Use `bin/rails db:migrate` to run the migration.
WARNING: `active_storage_attachments` is a polymorphic join table that stores your model's class name. If your model's class name changes, you will need to run a migration on this table to update the underlying `record_type` to your model's new class name.
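For example, if you renamed a `User` model to `Author`, a data migration along these lines would update the stored type (the class, model names, and migration version here are purely illustrative):

```ruby
# Sketch only: adjust the names and migration version to your application.
class UpdateAttachmentRecordTypes < ActiveRecord::Migration[7.0]
  def up
    ActiveStorage::Attachment.where(record_type: "User").update_all(record_type: "Author")
  end

  def down
    ActiveStorage::Attachment.where(record_type: "Author").update_all(record_type: "User")
  end
end
```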
WARNING: If you are using UUIDs instead of integers as the primary key on your models you will need to change the column type of `active_storage_attachments.record_id` and `active_storage_variant_records.id` in the generated migration accordingly.
Declare Active Storage services in `config/storage.yml`. For each service your
application uses, provide a name and the requisite configuration. The example
below declares three services named `local`, `test`, and `amazon`:
```yaml
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>

test:
  service: Disk
  root: <%= Rails.root.join("tmp/storage") %>

amazon:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  bucket: ""
  region: "" # e.g. 'us-east-1'
```
Tell Active Storage which service to use by setting
`Rails.application.config.active_storage.service`. Because each environment will
likely use a different service, it is recommended to do this on a
per-environment basis. To use the disk service from the previous example in the
development environment, you would add the following to
`config/environments/development.rb`:
```ruby
# Store files locally.
config.active_storage.service = :local
```
To use the S3 service in production, you add the following to
`config/environments/production.rb`:
```ruby
# Store files on Amazon S3.
config.active_storage.service = :amazon
```
To use the test service when testing, you add the following to
`config/environments/test.rb`:
```ruby
# Store uploaded files on the local file system in a temporary directory.
config.active_storage.service = :test
```
Continue reading for more information on the built-in service adapters (e.g.
`Disk` and `S3`) and the configuration they require.
NOTE: Configuration files that are environment-specific will take precedence:
in production, for example, the `config/storage/production.yml` file (if existent)
will take precedence over the `config/storage.yml` file.
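For example, a hypothetical `config/storage/production.yml` could declare only the services used in production:

```yaml
# config/storage/production.yml (illustrative example; the bucket name and
# credential keys are placeholders)
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: "us-east-1"
  bucket: "my-production-bucket"
```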
### Disk Service
Declare a Disk service in `config/storage.yml`:
```yaml
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
```
### S3 Service (Amazon S3 and S3-compatible APIs)
To connect to Amazon S3, declare an S3 service in `config/storage.yml`:
```yaml
amazon:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""
```
Optionally provide client and upload options:
```yaml
amazon:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""
  http_open_timeout: 0
  http_read_timeout: 0
  retry_limit: 0
  upload:
    server_side_encryption: "" # 'aws:kms' or 'AES256'
```
TIP: Set sensible client HTTP timeouts and retry limits for your application. In certain failure scenarios, the default AWS client configuration may cause connections to be held for up to several minutes and lead to request queuing.
Add the [`aws-sdk-s3`](https://github.com/aws/aws-sdk-ruby) gem to your `Gemfile`:
```ruby
gem "aws-sdk-s3", require: false
```
NOTE: The core features of Active Storage require the following permissions: `s3:ListBucket`, `s3:PutObject`, `s3:GetObject`, and `s3:DeleteObject`. If you have additional upload options configured such as setting ACLs then additional permissions may be required.
NOTE: If you want to use environment variables, standard SDK configuration files, profiles,
IAM instance profiles or task roles, you can omit the `access_key_id`, `secret_access_key`,
and `region` keys in the example above. The S3 Service supports all of the
authentication options described in the
[AWS SDK documentation](https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html).
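For example, when the credentials and region come from the environment (such as an IAM instance profile), a minimal declaration like the following is enough (the bucket name is illustrative):

```yaml
amazon:
  service: S3
  bucket: "my-app-uploads" # credentials and region resolved by the AWS SDK
```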
To connect to an S3-compatible object storage API such as DigitalOcean Spaces, provide the `endpoint`:
```yaml
digitalocean:
  service: S3
  endpoint: https://nyc3.digitaloceanspaces.com
  access_key_id: ...
  secret_access_key: ...
  # ...and other options
```
### Microsoft Azure Storage Service
Declare an Azure Storage service in `config/storage.yml`:
```yaml
azure:
  service: AzureStorage
  storage_account_name: ""
  storage_access_key: ""
  container: ""
```
Add the [`azure-storage-blob`](https://github.com/Azure/azure-storage-ruby) gem to your `Gemfile`:
```ruby
gem "azure-storage-blob", require: false
```
### Google Cloud Storage Service
Declare a Google Cloud Storage service in `config/storage.yml`:
```yaml
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
```
Optionally provide a Hash of credentials instead of a keyfile path:
```yaml
google:
  service: GCS
  credentials:
    type: "service_account"
    project_id: ""
    private_key_id: <%= Rails.application.credentials.dig(:gcs, :private_key_id) %>
    private_key: <%= Rails.application.credentials.dig(:gcs, :private_key).dump %>
    client_email: ""
    client_id: ""
    auth_uri: "https://accounts.google.com/o/oauth2/auth"
    token_uri: "https://accounts.google.com/o/oauth2/token"
    auth_provider_x509_cert_url: "https://www.googleapis.com/oauth2/v1/certs"
    client_x509_cert_url: ""
  project: ""
  bucket: ""
```
Add the [`google-cloud-storage`](https://github.com/GoogleCloudPlatform/google-cloud-ruby/tree/master/google-cloud-storage) gem to your `Gemfile`:
```ruby
gem "google-cloud-storage", "~> 1.11", require: false
```
### Mirror Service
You can keep multiple services in sync by defining a mirror service. A mirror
service replicates uploads and deletes across two or more subordinate services.
A mirror service is intended to be used temporarily during a migration between
services in production. You can start mirroring to a new service, copy
pre-existing files from the old service to the new, then go all-in on the new
service.
NOTE: Mirroring is not atomic. It is possible for an upload to succeed on the
primary service and fail on any of the subordinate services. Before going
all-in on a new service, verify that all files have been copied.
Define each of the services you'd like to mirror as described above. Reference
them by name when defining a mirror service:
```yaml
s3_west_coast:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""

s3_east_coast:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""

production:
  service: Mirror
  primary: s3_east_coast
  mirrors:
    - s3_west_coast
```
Although all secondary services receive uploads, downloads are always handled
by the primary service.
Mirror services are compatible with direct uploads. New files are directly
uploaded to the primary service. When a directly-uploaded file is attached to a
record, a background job is enqueued to copy it to the secondary services.
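Copying the pre-existing files to the new service is up to you. One possible approach is a one-off task along these lines (a sketch only: the task name is made up, the service name matches the example above, and `ActiveStorage::Blob.services` requires Rails 6.1 or later):

```ruby
namespace :active_storage do
  desc "Copy existing blobs to the new mirror destination"
  task copy_to_mirror: :environment do
    destination = ActiveStorage::Blob.services.fetch(:s3_west_coast)

    ActiveStorage::Blob.find_each do |blob|
      # Skip blobs the destination already has.
      next if destination.exist?(blob.key)

      # Download from the existing service and upload to the destination.
      blob.open do |file|
        destination.upload(blob.key, file, checksum: blob.checksum)
      end
    end
  end
end
```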
### Public access
By default, Active Storage assumes private access to services. This means generating signed, single-use URLs for blobs. If you'd rather make blobs publicly accessible, specify `public: true` in your app's `config/storage.yml`:
```yaml
gcs: &gcs
  service: GCS
  project: ""

private_gcs:
  <<: *gcs
  credentials: <%= Rails.root.join("path/to/private_keyfile.json") %>
  bucket: ""

public_gcs:
  <<: *gcs
  credentials: <%= Rails.root.join("path/to/public_keyfile.json") %>
  bucket: ""
  public: true
```
Make sure your buckets are properly configured for public access. See docs on how to enable public read permissions for [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access-bucket.html), [Google Cloud Storage](https://cloud.google.com/storage/docs/access-control/making-data-public#buckets), and [Microsoft Azure](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-manage-access-to-resources#set-container-public-access-level-in-the-azure-portal) storage services.
When converting an existing application to use `public: true`, make sure to update every individual file in the bucket to be publicly-readable before switching over.
Attaching Files to Records
--------------------------
### `has_one_attached`
The [`has_one_attached`][] macro sets up a one-to-one mapping between records and
files. Each record can have one file attached to it.
For example, suppose your application has a `User` model. If you want each user to
have an avatar, define the `User` model like this:
```ruby
class User < ApplicationRecord
  has_one_attached :avatar
end
```
You can create a user with an avatar:
```erb
<%= form.file_field :avatar %>
```
```ruby
class SignupController < ApplicationController
  def create
    user = User.create!(user_params)
    session[:user_id] = user.id
    redirect_to root_path
  end

  private
    def user_params
      params.require(:user).permit(:email_address, :password, :avatar)
    end
end
```
Call [`avatar.attach`][Attached::One#attach] to attach an avatar to an existing user:
```ruby
user.avatar.attach(params[:avatar])
```
Call [`avatar.attached?`][Attached::One#attached?] to determine whether a particular user has an avatar:
```ruby
user.avatar.attached?
```
In some cases you might want to override a default service for a specific attachment.
You can configure specific services per attachment using the `service` option:
```ruby
class User < ApplicationRecord
  has_one_attached :avatar, service: :s3
end
```
You can configure specific variants per attachment by calling the `variant` method on the yielded attachable object:
```ruby
class User < ApplicationRecord
  has_one_attached :avatar do |attachable|
    attachable.variant :thumb, resize: "100x100"
  end
end
```
Call `avatar.variant(:thumb)` to get a thumb variant of an avatar:
```erb
<%= image_tag user.avatar.variant(:thumb) %>
```
[`has_one_attached`]: https://api.rubyonrails.org/classes/ActiveStorage/Attached/Model.html#method-i-has_one_attached
[Attached::One#attach]: https://api.rubyonrails.org/classes/ActiveStorage/Attached/One.html#method-i-attach
[Attached::One#attached?]: https://api.rubyonrails.org/classes/ActiveStorage/Attached/One.html#method-i-attached-3F
### `has_many_attached`
The [`has_many_attached`][] macro sets up a one-to-many relationship between records
and files. Each record can have many files attached to it.
For example, suppose your application has a `Message` model. If you want each
message to have many images, define the `Message` model like this:
```ruby
class Message < ApplicationRecord
  has_many_attached :images
end
```
You can create a message with images:
```ruby
class MessagesController < ApplicationController
  def create
    message = Message.create!(message_params)
    redirect_to message
  end

  private
    def message_params
      params.require(:message).permit(:title, :content, images: [])
    end
end
```
Call [`images.attach`][Attached::Many#attach] to add new images to an existing message:
```ruby
@message.images.attach(params[:images])
```
Call [`images.attached?`][Attached::Many#attached?] to determine whether a particular message has any images:
```ruby
@message.images.attached?
```
Overriding the default service is done the same way as `has_one_attached`, by using the `service` option:
```ruby
class Message < ApplicationRecord
  has_many_attached :images, service: :s3
end
```
Configuring specific variants is done the same way as `has_one_attached`, by calling the `variant` method on the yielded attachable object:
```ruby
class Message < ApplicationRecord
  has_many_attached :images do |attachable|
    attachable.variant :thumb, resize: "100x100"
  end
end
```
[`has_many_attached`]: https://api.rubyonrails.org/classes/ActiveStorage/Attached/Model.html#method-i-has_many_attached
[Attached::Many#attach]: https://api.rubyonrails.org/classes/ActiveStorage/Attached/Many.html#method-i-attach
[Attached::Many#attached?]: https://api.rubyonrails.org/classes/ActiveStorage/Attached/Many.html#method-i-attached-3F
### Attaching File/IO Objects
Sometimes you need to attach a file that doesn't arrive via an HTTP request.
For example, you may want to attach a file you generated on disk or downloaded
from a user-submitted URL. You may also want to attach a fixture file in a
model test. To do that, provide a Hash containing at least an open IO object
and a filename:
```ruby
@message.image.attach(io: File.open('/path/to/file'), filename: 'file.pdf')
```
When possible, provide a content type as well. Active Storage attempts to
determine a file's content type from its data. It falls back to the content
type you provide if it can't do that.
```ruby
@message.image.attach(io: File.open('/path/to/file'), filename: 'file.pdf', content_type: 'application/pdf')
```
You can bypass the content type inference from the data by passing in
`identify: false` along with the `content_type`.
```ruby
@message.image.attach(
  io: File.open('/path/to/file'),
  filename: 'file.pdf',
  content_type: 'application/pdf',
  identify: false
)
```
If you don't provide a content type and Active Storage can't determine the
file's content type automatically, it defaults to `application/octet-stream`.
Removing Files
--------------
To remove an attachment from a model, call [`purge`][Attached::One#purge] on the
attachment. If your application is set up to use Active Job, removal can be done
in the background instead by calling [`purge_later`][Attached::One#purge_later].
Purging deletes the blob and the file from the storage service.
```ruby
# Synchronously destroy the avatar and actual resource files.
user.avatar.purge
# Destroy the associated models and actual resource files async, via Active Job.
user.avatar.purge_later
```
[Attached::One#purge]: https://api.rubyonrails.org/classes/ActiveStorage/Attached/One.html#method-i-purge
[Attached::One#purge_later]: https://api.rubyonrails.org/classes/ActiveStorage/Attached/One.html#method-i-purge_later
Linking to Files
----------------
Generate a permanent URL for the blob that points to the application. Upon
access, a redirect to the actual service endpoint is returned. This indirection
decouples the service URL from the actual one, and allows, for example, mirroring
attachments in different services for high-availability. The redirection has an
HTTP expiration of 5 minutes.
```ruby
url_for(user.avatar)
```
WARNING: The links generated by ActiveStorage are hard to guess, but publicly
accessible by default. Anyone that knows the blob URL will be able to download it,
even if a `before_action` in your `ApplicationController` would otherwise
require a login. If your files require a higher level of protection consider
implementing your own authenticated
[`ActiveStorage::Blobs::RedirectController`](https://github.com/rails/rails/blob/main/activestorage/app/controllers/active_storage/blobs/redirect_controller.rb) and [`ActiveStorage::Representations::RedirectController`](https://github.com/rails/rails/blob/main/activestorage/app/controllers/active_storage/representations/redirect_controller.rb).
To create a download link, use the `rails_blob_{path|url}` helper. Using this
helper allows you to set the disposition.
```ruby
rails_blob_path(user.avatar, disposition: "attachment")
```
WARNING: To prevent XSS attacks, ActiveStorage forces the Content-Disposition header
to "attachment" for some kind of files. To change this behaviour see the
2019-10-04 03:26:40 +00:00
available configuration options in [Configuring Rails Applications](configuring.html#configuring-active-storage).
If you need to create a link from outside of controller/view context (Background
jobs, Cronjobs, etc.), you can access the `rails_blob_path` like this:
```ruby
Rails.application.routes.url_helpers.rails_blob_path(user.avatar, only_path: true)
```
Downloading Files
-----------------
Sometimes you need to process a blob after it's uploaded, for example to convert
it to a different format. Use the attachment's [`download`][Blob#download] method to read a blob's
binary data into memory:
```ruby
binary = user.avatar.download
```
You might want to download a blob to a file on disk so an external program (e.g.
a virus scanner or media transcoder) can operate on it. Use the attachment's
[`open`][Blob#open] method to download a blob to a tempfile on disk:
```ruby
message.video.open do |file|
  system '/path/to/virus/scanner', file.path
  # ...
end
```
It's important to know that the file is not yet available in the `after_create` callback, only in `after_create_commit`.
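For example, a callback that reads the attached file should therefore be registered with `after_create_commit` (a minimal sketch; the scanner path is hypothetical):

```ruby
class User < ApplicationRecord
  has_one_attached :avatar

  # after_create would be too early: the file may not be committed yet.
  after_create_commit :scan_avatar

  private
    def scan_avatar
      return unless avatar.attached?

      avatar.open do |file|
        system "/path/to/virus/scanner", file.path
      end
    end
end
```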
[Blob#download]: https://api.rubyonrails.org/classes/ActiveStorage/Blob.html#method-i-download
[Blob#open]: https://api.rubyonrails.org/classes/ActiveStorage/Blob.html#method-i-open
Analyzing Files
---------------
Active Storage analyzes files once they've been uploaded by queuing a job in Active Job. Analyzed files will store additional information in the metadata hash, including `analyzed: true`. You can check whether a blob has been analyzed by calling [`analyzed?`][] on it.
Image analysis provides `width` and `height` attributes. Video analysis provides these, as well as `duration`, `angle`, and `display_aspect_ratio`.
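For example, with the `User` model from earlier (the values shown are illustrative):

```ruby
user.avatar.analyzed? # => true, once the analysis job has run
user.avatar.metadata  # => { "identified" => true, "width" => 640, "height" => 480, "analyzed" => true }
```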
Analysis requires the `mini_magick` gem. Video analysis also requires the [FFmpeg](https://www.ffmpeg.org/) library, which you must include separately.
[`analyzed?`]: https://api.rubyonrails.org/classes/ActiveStorage/Blob/Analyzable.html#method-i-analyzed-3F
Displaying Images, Videos, and PDFs
---------------
Active Storage supports representing a variety of files. You can call
[`representation`][] on an attachment to display an image variant, or a
preview of a video or PDF. Before calling `representation`, check if the
attachment can be represented by calling [`representable?`]. Some file formats
can't be previewed by ActiveStorage out of the box (e.g. Word documents); if
`representable?` returns false you may want to [link to](#linking-to-files)
the file instead.
```erb
<ul>
  <% @message.files.each do |file| %>
    <li>
      <% if file.representable? %>
        <%= image_tag file.representation(resize_to_limit: [100, 100]) %>
      <% else %>
        <%= link_to rails_blob_path(file, disposition: "attachment") do %>
          <%= image_tag "placeholder.png", alt: "Download file" %>
        <% end %>
      <% end %>
    </li>
  <% end %>
</ul>
```
Internally, `representation` calls `variant` for images, and `preview` for
previewable files. You can also call these methods directly.
[`representable?`]: https://api.rubyonrails.org/classes/ActiveStorage/Blob/Representable.html#method-i-representable-3F
[`representation`]: https://api.rubyonrails.org/classes/ActiveStorage/Blob/Representable.html#method-i-representation
### Transforming Images
Transforming images allows you to display the image at your choice of dimensions.
To enable variants, add the `image_processing` gem to your `Gemfile`:
```ruby
gem 'image_processing'
```
To create a variation of an image, call [`variant`][] on the attachment. You
can pass any transformation supported by the variant processor to the method.
When the browser hits the variant URL, Active Storage will lazily transform
the original blob into the specified format and redirect to its new service
location.
```erb
<%= image_tag user.avatar.variant(resize_to_limit: [100, 100]) %>
```
The default processor for Active Storage is MiniMagick, but you can also use
[Vips][]. To switch to Vips, add the following to `config/application.rb`:
```ruby
config.active_storage.variant_processor = :vips
```
[`variant`]: https://api.rubyonrails.org/classes/ActiveStorage/Blob/Representable.html#method-i-variant
[Vips]: https://www.rubydoc.info/gems/ruby-vips/Vips/Image
### Previewing Files
Some non-image files can be previewed: that is, they can be presented as images.
For example, a video file can be previewed by extracting its first frame. Out of
the box, Active Storage supports previewing videos and PDF documents. To create
a link to a lazily-generated preview, use the attachment's [`preview`][] method:
```erb
<%= image_tag message.video.preview(resize_to_limit: [100, 100]) %>
```
To add support for another format, add your own previewer. See the
[`ActiveStorage::Preview`][] documentation for more information.
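As a rough sketch, a custom previewer implements `accept?` and `preview`; the content type and the `document-to-png` command below are made up, and real implementations can be found in the built-in previewers:

```ruby
class DocumentPreviewer < ActiveStorage::Previewer
  def self.accept?(blob)
    blob.content_type == "application/x-example-document"
  end

  def preview(**options)
    download_blob_to_tempfile do |input|
      # Shell out to a (hypothetical) tool that writes a PNG to stdout.
      draw "document-to-png", input.path do |output|
        yield io: output, filename: "#{blob.filename.base}.png", content_type: "image/png", **options
      end
    end
  end
end
```

Register it by appending the class to `config.active_storage.previewers` in your application configuration.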
WARNING: Extracting previews requires third-party applications: FFmpeg v3.4+ for
video and muPDF for PDFs, and on macOS also XQuartz and Poppler.
These libraries are not provided by Rails. You must install them yourself to
use the built-in previewers. Before you install and use third-party software,
make sure you understand the licensing implications of doing so.
[`preview`]: https://api.rubyonrails.org/classes/ActiveStorage/Blob/Representable.html#method-i-preview
[`ActiveStorage::Preview`]: https://api.rubyonrails.org/classes/ActiveStorage/Preview.html
Direct Uploads
--------------
Active Storage, with its included JavaScript library, supports uploading
directly from the client to the cloud.
### Usage
1. Include `activestorage.js` in your application's JavaScript bundle.

    Using the asset pipeline:

    ```js
    //= require activestorage
    ```

    Using the npm package:

    ```js
    import * as ActiveStorage from "@rails/activestorage"
    ActiveStorage.start()
    ```

2. Add `direct_upload: true` to your [file field](form_helpers.html#uploading-files):

    ```erb
    <%= form.file_field :attachments, multiple: true, direct_upload: true %>
    ```

    Or, if you aren't using a `FormBuilder`, add the data attribute directly:

    ```erb
    <input type=file data-direct-upload-url="<%= rails_direct_uploads_url %>" />
    ```
3. Configure CORS on third-party storage services to allow direct upload requests.
4. That's it! Uploads begin upon form submission.
### Cross-Origin Resource Sharing (CORS) configuration
To make direct uploads to a third-party service work, you'll need to configure the service to allow cross-origin requests from your app. Consult the CORS documentation for your service:
* [S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html#how-do-i-enable-cors)
* [Google Cloud Storage](https://cloud.google.com/storage/docs/configuring-cors)
* [Azure Storage](https://docs.microsoft.com/en-us/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services)
Take care to allow:
* All origins from which your app is accessed
* The `PUT` request method
* The following headers:
* `Origin`
* `Content-Type`
* `Content-MD5`
* `Content-Disposition` (except for Azure Storage)
* `x-ms-blob-content-disposition` (for Azure Storage only)
* `x-ms-blob-type` (for Azure Storage only)
No CORS configuration is required for the Disk service since it shares your app's origin.
#### Example: S3 CORS configuration
```json
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT"
    ],
    "AllowedOrigins": [
      "https://www.example.com"
    ],
    "ExposeHeaders": [
      "Origin",
      "Content-Type",
      "Content-MD5",
      "Content-Disposition"
    ],
    "MaxAgeSeconds": 3600
  }
]
```
#### Example: Google Cloud Storage CORS configuration
```json
[
  {
    "origin": ["https://www.example.com"],
    "method": ["PUT"],
    "responseHeader": ["Origin", "Content-Type", "Content-MD5", "Content-Disposition"],
    "maxAgeSeconds": 3600
  }
]
```
#### Example: Azure Storage CORS configuration
```xml
<Cors>
  <CorsRule>
    <AllowedOrigins>https://www.example.com</AllowedOrigins>
    <AllowedMethods>PUT</AllowedMethods>
    <AllowedHeaders>Origin, Content-Type, Content-MD5, x-ms-blob-content-disposition, x-ms-blob-type</AllowedHeaders>
    <MaxAgeInSeconds>3600</MaxAgeInSeconds>
  </CorsRule>
</Cors>
```
### Direct upload JavaScript events
| Event name | Event target | Event data (`event.detail`) | Description |
| --- | --- | --- | --- |
| `direct-uploads:start` | `<form>` | None | A form containing files for direct upload fields was submitted. |
| `direct-upload:initialize` | `<input>` | `{id, file}` | Dispatched for every file after form submission. |
| `direct-upload:start` | `<input>` | `{id, file}` | A direct upload is starting. |
| `direct-upload:before-blob-request` | `<input>` | `{id, file, xhr}` | Before making a request to your application for direct upload metadata. |
| `direct-upload:before-storage-request` | `<input>` | `{id, file, xhr}` | Before making a request to store a file. |
| `direct-upload:progress` | `<input>` | `{id, file, progress}` | As requests to store files progress. |
| `direct-upload:error` | `<input>` | `{id, file, error}` | An error occurred. An `alert` will display unless this event is canceled. |
| `direct-upload:end` | `<input>` | `{id, file}` | A direct upload has ended. |
| `direct-uploads:end` | `<form>` | None | All direct uploads have ended. |
### Example
You can use these events to show the progress of an upload.
![direct-uploads](https://user-images.githubusercontent.com/5355/28694528-16e69d0c-72f8-11e7-91a7-c0b8cfc90391.gif)
To show the uploaded files in a form:
```js
// direct_uploads.js

addEventListener("direct-upload:initialize", event => {
  const { target, detail } = event
  const { id, file } = detail
  target.insertAdjacentHTML("beforebegin", `
    <div id="direct-upload-${id}" class="direct-upload direct-upload--pending">
      <div id="direct-upload-progress-${id}" class="direct-upload__progress" style="width: 0%"></div>
      <span class="direct-upload__filename"></span>
    </div>
  `)
  target.previousElementSibling.querySelector(`.direct-upload__filename`).textContent = file.name
})

addEventListener("direct-upload:start", event => {
  const { id } = event.detail
  const element = document.getElementById(`direct-upload-${id}`)
  element.classList.remove("direct-upload--pending")
})

addEventListener("direct-upload:progress", event => {
  const { id, progress } = event.detail
  const progressElement = document.getElementById(`direct-upload-progress-${id}`)
  progressElement.style.width = `${progress}%`
})

addEventListener("direct-upload:error", event => {
  event.preventDefault()
  const { id, error } = event.detail
  const element = document.getElementById(`direct-upload-${id}`)
  element.classList.add("direct-upload--error")
  element.setAttribute("title", error)
})

addEventListener("direct-upload:end", event => {
  const { id } = event.detail
  const element = document.getElementById(`direct-upload-${id}`)
  element.classList.add("direct-upload--complete")
})
```
Add styles:
```css
/* direct_uploads.css */

.direct-upload {
  display: inline-block;
  position: relative;
  padding: 2px 4px;
  margin: 0 3px 3px 0;
  border: 1px solid rgba(0, 0, 0, 0.3);
  border-radius: 3px;
  font-size: 11px;
  line-height: 13px;
}

.direct-upload--pending {
  opacity: 0.6;
}

.direct-upload__progress {
  position: absolute;
  top: 0;
  left: 0;
  bottom: 0;
  opacity: 0.2;
  background: #0076ff;
  transition: width 120ms ease-out, opacity 60ms 60ms ease-in;
  transform: translate3d(0, 0, 0);
}

.direct-upload--complete .direct-upload__progress {
  opacity: 0.4;
}

.direct-upload--error {
  border-color: red;
}

input[type=file][data-direct-upload-url][disabled] {
  display: none;
}
```
### Integrating with Libraries or Frameworks
If you want to use the Direct Upload feature from a JavaScript framework, or
you want to integrate custom drag and drop solutions, you can use the
`DirectUpload` class for this purpose. Upon receiving a file from your library
of choice, instantiate a `DirectUpload` and call its `create` method. `create` takes
a callback to invoke when the upload completes.
```js
import { DirectUpload } from "@rails/activestorage"

const input = document.querySelector('input[type=file]')

// Bind to file drop - use the ondrop on a parent element or use a
// library like Dropzone
const onDrop = (event) => {
  event.preventDefault()
  const files = event.dataTransfer.files;
  Array.from(files).forEach(file => uploadFile(file))
}

// Bind to normal file selection
input.addEventListener('change', (event) => {
  Array.from(input.files).forEach(file => uploadFile(file))
  // you might clear the selected files from the input
  input.value = null
})

const uploadFile = (file) => {
  // your form needs the file_field direct_upload: true, which
  // provides data-direct-upload-url
  const url = input.dataset.directUploadUrl
  const upload = new DirectUpload(file, url)

  upload.create((error, blob) => {
    if (error) {
      // Handle the error
    } else {
      // Add an appropriately-named hidden input to the form with a
      // value of blob.signed_id so that the blob ids will be
      // transmitted in the normal upload flow
      const hiddenField = document.createElement('input')
      hiddenField.setAttribute("type", "hidden");
      hiddenField.setAttribute("value", blob.signed_id);
      hiddenField.name = input.name
      document.querySelector('form').appendChild(hiddenField)
    }
  })
}
```
If you need to track the progress of the file upload, you can pass a third
parameter to the `DirectUpload` constructor. During the upload, DirectUpload
will call the object's `directUploadWillStoreFileWithXHR` method. You can then
bind your own progress handler on the XHR.
```js
import { DirectUpload } from "@rails/activestorage"

class Uploader {
  constructor(file, url) {
    this.upload = new DirectUpload(file, url, this)
  }

  uploadFile(file) {
    this.upload.create((error, blob) => {
      if (error) {
        // Handle the error
      } else {
        // Add an appropriately-named hidden input to the form
        // with a value of blob.signed_id
      }
    })
  }

  directUploadWillStoreFileWithXHR(request) {
    request.upload.addEventListener("progress",
      event => this.directUploadDidProgress(event))
  }

  directUploadDidProgress(event) {
    // Use event.loaded and event.total to update the progress bar
  }
}
```
NOTE: Using [Direct Uploads](#direct-uploads) can sometimes result in a file that uploads, but never attaches to a record. Consider [purging unattached uploads](#purging-unattached-uploads).
Discarding Files Stored During System Tests
-------------------------------------------
System tests clean up test data by rolling back a transaction. Because destroy
is never called on an object, the attached files are never cleaned up. If you
want to clear the files, you can do it in an `after_teardown` callback. Doing it
here ensures that all connections created during the test are complete and
you won't receive an error from Active Storage saying it can't find a file.
```ruby
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]

  def remove_uploaded_files
    FileUtils.rm_rf("#{Rails.root}/storage_test")
  end

  def after_teardown
    super
    remove_uploaded_files
  end
end
```
If your system tests verify the deletion of a model with attachments and you're
using Active Job, set your test environment to use the inline queue adapter so
the purge job is executed immediately rather than at an unknown time in the future.
You may also want to use a separate service definition for the test environment
so your tests don't delete the files you create during development.
```ruby
# Use inline job processing to make things happen immediately
config.active_job.queue_adapter = :inline
# Separate file storage in the test environment
config.active_storage.service = :local_test
```
Discarding Files Stored During Integration Tests
-------------------------------------------
Similarly to System Tests, files uploaded during Integration Tests will not be
automatically cleaned up. If you want to clear the files, you can do it in an
`after_teardown` callback. Doing it here ensures that all connections created
during the test are complete and you won't receive an error from Active Storage
saying it can't find a file.
```ruby
module RemoveUploadedFiles
  def after_teardown
    super
    remove_uploaded_files
  end

  private

  def remove_uploaded_files
    FileUtils.rm_rf(Rails.root.join('tmp', 'storage'))
  end
end

module ActionDispatch
  class IntegrationTest
    prepend RemoveUploadedFiles
  end
end
```
Implementing Support for Other Cloud Services
---------------------------------------------
If you need to support a cloud service other than these, you will need to
implement the Service. Each service extends
[`ActiveStorage::Service`](https://github.com/rails/rails/blob/main/activestorage/lib/active_storage/service.rb)
by implementing the methods necessary to upload and download files to the cloud.
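The sketch below outlines the rough shape of such a service. The `Foo` name and its storage backend are hypothetical; consult the built-in services for the full set of methods and their exact signatures. Naming the class `ActiveStorage::Service::FooService` allows `service: Foo` in `config/storage.yml` to resolve to it:

```ruby
# A sketch only, e.g. in lib/active_storage/service/foo_service.rb
module ActiveStorage
  class Service::FooService < Service
    def initialize(bucket:, **)
      @bucket = bucket
    end

    def upload(key, io, checksum: nil, **)
      instrument :upload, key: key, checksum: checksum do
        # Write io to the backing store under key, verifying the checksum.
      end
    end

    def download(key, &block)
      # Return the file stored under key, streaming chunks if a block is given.
    end

    def delete(key)
      # Remove the file stored under key.
    end

    def exist?(key)
      # Return true if a file is stored under key.
    end
  end
end
```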
Purging Unattached Uploads
--------------------------
There are cases where a file is uploaded but never attached to a record. This can happen when using [Direct Uploads](#direct-uploads). You can query for unattached records using the [unattached scope](https://github.com/rails/rails/blob/8ef5bd9ced351162b673904a0b77c7034ca2bc20/activestorage/app/models/active_storage/blob.rb#L49). Below is an example using a [custom rake task](command_line.html#custom-rake-tasks).
```ruby
namespace :active_storage do
  desc "Purges unattached Active Storage blobs. Run regularly."
  task purge_unattached: :environment do
    ActiveStorage::Blob.unattached.where("active_storage_blobs.created_at <= ?", 2.days.ago).find_each(&:purge_later)
  end
end
```
WARNING: The query generated by `ActiveStorage::Blob.unattached` can be slow and potentially disruptive on applications with larger databases.