Update Redis cache store docs

This commit is contained in:
Stefan Wrobel 2018-03-17 13:26:03 -07:00 committed by Jeremy Daer
parent ef73318e29
commit a6b82a3779
1 changed file with 34 additions and 14 deletions


### ActiveSupport::Cache::RedisCacheStore
The Redis cache store takes advantage of Redis support for automatic eviction
when it reaches max memory, allowing it to behave much like a Memcached cache server.
Deployment note: Redis doesn't expire keys by default, so take care to use a
dedicated Redis cache server. Don't fill up your persistent-Redis server with
volatile cache data! Read the
[Redis cache server setup guide](https://redis.io/topics/lru-cache) in detail.
For a cache-only Redis server, set `maxmemory-policy` to one of the `allkeys` variants.
Redis 4+ supports least-frequently-used eviction (`allkeys-lfu`), an excellent
default choice. Redis 3 and earlier should use least-recently-used eviction (`allkeys-lru`).
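As a sketch, the corresponding `redis.conf` settings for a cache-only server might look like this (the memory limit shown is illustrative, not a recommendation):

```
maxmemory 512mb
maxmemory-policy allkeys-lfu   # Redis 4+; use allkeys-lru on Redis 3 and earlier
```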
Set cache read and write timeouts relatively low. Regenerating a cached value
is often faster than waiting more than a second to retrieve it. Both read and
write timeouts default to 1 second, but may be set lower if your network is
consistently low-latency.
By default, the cache store will not attempt to reconnect to Redis if the
connection fails during a request. If you experience frequent disconnects, you
may wish to enable reconnect attempts.
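For example, reconnect attempts can be enabled through the store's options. This is a sketch assuming a `REDIS_URL` environment variable; the `reconnect_attempts` option is passed through to the underlying Redis client:

```ruby
# Sketch: retry the connection once before treating the operation as a miss.
config.cache_store = :redis_cache_store, {
  url: ENV['REDIS_URL'],
  reconnect_attempts: 1
}
```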
Cache reads and writes never raise exceptions; they just return `nil` instead,
behaving as if there was nothing in the cache. To gauge whether your cache is
hitting exceptions, you may provide an `error_handler` to report to an
exception gathering service. It must accept three keyword arguments: `method`,
the cache store method that was originally called; `returning`, the value that
was returned to the user, typically `nil`; and `exception`, the exception that
was rescued.
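To illustrate that contract, here is a minimal, hypothetical handler that only formats a message instead of reporting to an exception-gathering service:

```ruby
# Hypothetical handler honoring the three required keyword arguments.
log_cache_error = ->(method:, returning:, exception:) {
  "cache ##{method} raised #{exception.class}: #{exception.message}; " \
  "returned #{returning.inspect} to the caller"
}

# The cache store invokes the handler whenever it rescues an exception:
log_cache_error.call(
  method: :read,
  returning: nil,
  exception: RuntimeError.new("connection refused")
)
```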
To get started, add the redis gem to your Gemfile:
```ruby
gem 'redis'
```
You can enable support for the faster [hiredis](https://github.com/redis/hiredis)
connection library by additionally adding its Ruby wrapper to your Gemfile:
```ruby
gem 'hiredis'
```
The Redis cache store will automatically require and use hiredis if it is
available. No further configuration is needed.
Finally, add the configuration in the relevant `config/environments/*.rb` file:
```ruby
config.cache_store = :redis_cache_store, { url: ENV['REDIS_URL'] }
```
A more complex, production Redis cache store may look something like this:
```ruby
cache_servers = %w(redis://cache-01:6379/0 redis://cache-02:6379/0)
config.cache_store = :redis_cache_store, { url: cache_servers,
connect_timeout: 30, # Defaults to 20 seconds
read_timeout: 0.2, # Defaults to 1 second
error_handler: -> (method:, returning:, exception:) {
# Report errors to Sentry as warnings
Raven.capture_exception exception, level: 'warning',
tags: { method: method, returning: returning }
}
}
```
### ActiveSupport::Cache::NullStore