class ActiveSupport::Cache::RedisCacheStore
Redis cache store.
Deployment note: Take care to use a *dedicated Redis cache* rather than pointing this at your existing Redis server. It won't cope well with mixed usage patterns and it won't expire cache entries by default.
Redis cache server setup guide: redis.io/topics/lru-cache
- Supports vanilla Redis, hiredis, and Redis::Distributed.
- Supports Memcached-like sharding across Redises with Redis::Distributed.
- Fault tolerant. If the Redis server is unavailable, no exceptions are raised. Cache fetches are all misses and writes are dropped.
- Local cache. Hot in-memory primary cache within block/middleware scope.
- read_multi and write_multi support for Redis mget/mset. Use Redis::Distributed 4.0.1+ for distributed mget support.
- delete_matched support for Redis KEYS globs.
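As a minimal usage sketch, the store can be selected application-wide via config.cache_store or instantiated directly. The URL and namespace below are placeholder values:

  # In config/environments/production.rb (illustrative values):
  config.cache_store = :redis_cache_store, { url: ENV["REDIS_URL"], namespace: "myapp-cache" }

  # Or instantiated directly:
  cache = ActiveSupport::Cache::RedisCacheStore.new(url: "redis://localhost:6379/0")
  cache.write("greeting", "hello")
  cache.read("greeting") # => "hello"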
Constants
- DEFAULT_ERROR_HANDLER
- DEFAULT_REDIS_OPTIONS
- MAX_KEY_BYTESIZE
Keys are truncated with their own SHA2 digest if they exceed 1kB
- SCAN_BATCH_SIZE
The maximum number of entries to receive per SCAN call.
Attributes
- max_key_bytesize [R]
- redis_options [R]
Public Class Methods
Creates a new Redis cache store.
Handles four options: :redis block, :redis instance, single :url string, and multiple :url strings.
Option  Class       Result
:redis  Proc    ->  options[:redis].call
:redis  Object  ->  options[:redis]
:url    String  ->  Redis.new(url: …)
:url    Array   ->  Redis::Distributed.new([{ url: … }, { url: … }, …])
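For illustration, each of the four forms might be passed like this; the URLs are placeholders:

  # :redis Proc - called to build the client
  ActiveSupport::Cache::RedisCacheStore.new(redis: -> { Redis.new(url: "redis://localhost:6379/0") })

  # :redis Object - an existing client is used as-is
  ActiveSupport::Cache::RedisCacheStore.new(redis: Redis.new(url: "redis://localhost:6379/0"))

  # :url String - a single server
  ActiveSupport::Cache::RedisCacheStore.new(url: "redis://localhost:6379/0")

  # :url Array - sharded across servers via Redis::Distributed
  ActiveSupport::Cache::RedisCacheStore.new(url: ["redis://cache-1:6379/0", "redis://cache-2:6379/0"])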
No namespace is set by default. Provide one if the Redis cache server is shared with other apps: namespace: 'myapp-cache'.
Compression is enabled by default with a 1kB threshold, so cached values larger than 1kB are automatically compressed. Disable by passing compress: false or change the threshold by passing compress_threshold: 4.kilobytes.
No expiry is set on cache entries by default. Redis is expected to be configured with an eviction policy that automatically deletes least-recently or -frequently used keys when it reaches max memory. See redis.io/topics/lru-cache for cache server setup.
Race condition TTL is not set by default. This can be used to avoid “thundering herd” cache writes when hot cache entries are expired. See ActiveSupport::Cache::Store#fetch for more.
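Putting these options together, a store for a shared Redis server with custom compression, default expiry, and a race condition TTL might look like the following sketch; all values are illustrative:

  cache = ActiveSupport::Cache::RedisCacheStore.new(
    url: "redis://localhost:6379/0",  # placeholder URL
    namespace: "myapp-cache",         # isolate keys on a shared server
    compress_threshold: 4.kilobytes,  # only compress values larger than 4kB
    expires_in: 1.hour,               # default TTL applied to writes
    race_condition_ttl: 10.seconds    # briefly serve stale entries while one process rewrites
  )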
Calls superclass method ActiveSupport::Cache::Store::new
# File lib/active_support/cache/redis_cache_store.rb, line 172
def initialize(namespace: nil, compress: true, compress_threshold: 1.kilobyte, coder: DEFAULT_CODER, expires_in: nil, race_condition_ttl: nil, error_handler: DEFAULT_ERROR_HANDLER, **redis_options)
  @redis_options = redis_options

  @max_key_bytesize = MAX_KEY_BYTESIZE
  @error_handler = error_handler

  super namespace: namespace,
    compress: compress, compress_threshold: compress_threshold,
    expires_in: expires_in, race_condition_ttl: race_condition_ttl,
    coder: coder
end
Advertise cache versioning support.
# File lib/active_support/cache/redis_cache_store.rb, line 70
def self.supports_cache_versioning?
  true
end
Private Class Methods
# File lib/active_support/cache/redis_cache_store.rb, line 137
def build_redis_client(url:, **redis_options)
  ::Redis.new DEFAULT_REDIS_OPTIONS.merge(redis_options.merge(url: url))
end
# File lib/active_support/cache/redis_cache_store.rb, line 131
def build_redis_distributed_client(urls:, **redis_options)
  ::Redis::Distributed.new([], DEFAULT_REDIS_OPTIONS.merge(redis_options)).tap do |dist|
    urls.each { |u| dist.add_node url: u }
  end
end
Public Instance Methods
Cache Store API implementation.

Removes expired entries. Handled natively by Redis least-recently-/least-frequently-used expiry, so manual cleanup is not supported.
Calls superclass method ActiveSupport::Cache::Store#cleanup
# File lib/active_support/cache/redis_cache_store.rb, line 304
def cleanup(options = nil)
  super
end
Clear the entire cache on all Redis servers. Safe to use on shared servers if the cache is namespaced.
Failsafe: Raises errors.
# File lib/active_support/cache/redis_cache_store.rb, line 312
def clear(options = nil)
  failsafe :clear do
    if namespace = merged_options(options)[:namespace]
      delete_matched "*", namespace: namespace
    else
      redis.with { |c| c.flushdb }
    end
  end
end
Cache Store API implementation.
Decrement a cached value. This method uses the Redis decr atomic operator and can only be used on values written with the :raw option. Calling it on a value not stored with :raw will initialize that value to zero.
Failsafe: Raises errors.
# File lib/active_support/cache/redis_cache_store.rb, line 285
def decrement(name, amount = 1, options = nil)
  instrument :decrement, name, amount: amount do
    failsafe :decrement do
      options = merged_options(options)
      key = normalize_key(name, options)

      redis.with do |c|
        c.decrby(key, amount).tap do
          write_key_expiry(c, key, options)
        end
      end
    end
  end
end
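As a usage sketch, a counter must be written with raw: true before DECRBY can operate on it; the key name and starting value are illustrative:

  cache.write("tickets-left", 100, raw: true)
  cache.decrement("tickets-left")     # => 99
  cache.decrement("tickets-left", 9)  # => 90

  # Per the note above, decrementing a key that was not written with
  # raw: true treats the counter as starting from zero.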
Cache Store API implementation.
Supports Redis KEYS glob patterns:
h?llo matches hello, hallo and hxllo
h*llo matches hllo and heeeello
h[ae]llo matches hello and hallo, but not hillo
h[^e]llo matches hallo, hbllo, ... but not hello
h[a-b]llo matches hallo and hbllo
Use \ to escape special characters if you want to match them verbatim.
See redis.io/commands/KEYS for more.
Failsafe: Raises errors.
# File lib/active_support/cache/redis_cache_store.rb, line 233
def delete_matched(matcher, options = nil)
  instrument :delete_matched, matcher do
    unless String === matcher
      raise ArgumentError, "Only Redis glob strings are supported: #{matcher.inspect}"
    end
    redis.with do |c|
      pattern = namespace_key(matcher, options)
      cursor = "0"
      # Fetch keys in batches using SCAN to avoid blocking the Redis server.
      nodes = c.respond_to?(:nodes) ? c.nodes : [c]

      nodes.each do |node|
        begin
          cursor, keys = node.scan(cursor, match: pattern, count: SCAN_BATCH_SIZE)
          node.del(*keys) unless keys.empty?
        end until cursor == "0"
      end
    end
  end
end
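A usage sketch with illustrative key names; the glob is applied to the namespaced keys and the matching entries are removed in SCAN batches:

  cache.write("user/1/profile", "...")
  cache.write("user/2/profile", "...")

  # Deletes both entries written above.
  cache.delete_matched("user/*/profile")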
Cache Store API implementation.
Increment a cached value. This method uses the Redis incr atomic operator and can only be used on values written with the :raw option. Calling it on a value not stored with :raw will initialize that value to zero.
Failsafe: Raises errors.
# File lib/active_support/cache/redis_cache_store.rb, line 262
def increment(name, amount = 1, options = nil)
  instrument :increment, name, amount: amount do
    failsafe :increment do
      options = merged_options(options)
      key = normalize_key(name, options)

      redis.with do |c|
        c.incrby(key, amount).tap do
          write_key_expiry(c, key, options)
        end
      end
    end
  end
end
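Mirroring decrement, a short sketch of incrementing a raw counter; the key name is illustrative:

  cache.write("page-views", 0, raw: true)
  cache.increment("page-views")      # => 1
  cache.increment("page-views", 10)  # => 11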
# File lib/active_support/cache/redis_cache_store.rb, line 197
def inspect
  instance = @redis || @redis_options
  "#<#{self.class} options=#{options.inspect} redis=#{instance.inspect}>"
end
Cache Store API implementation.
Read multiple values at once. Returns a hash of requested keys -> fetched values.
Calls superclass method ActiveSupport::Cache::Store#read_multi
# File lib/active_support/cache/redis_cache_store.rb, line 206
def read_multi(*names)
  if mget_capable?
    instrument(:read_multi, names, options) do |payload|
      read_multi_mget(*names).tap do |results|
        payload[:hits] = results.keys
      end
    end
  else
    super
  end
end
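A usage sketch: when MGET is available all keys are fetched in a single round trip, and missing keys are simply absent from the result (keys and values are illustrative):

  cache.write("city", "Valencia")
  cache.write("country", "Spain")

  cache.read_multi("city", "country", "continent")
  # => { "city" => "Valencia", "country" => "Spain" }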
# File lib/active_support/cache/redis_cache_store.rb, line 184
def redis
  @redis ||= begin
    pool_options = self.class.send(:retrieve_pool_options, redis_options)

    if pool_options.any?
      self.class.send(:ensure_connection_pool_added!)
      ::ConnectionPool.new(pool_options) { self.class.build_redis(**redis_options) }
    else
      self.class.build_redis(**redis_options)
    end
  end
end
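When pool options are present, the client is wrapped in a ConnectionPool. A sketch, assuming this version's retrieve_pool_options recognizes the :pool_size and :pool_timeout options:

  cache = ActiveSupport::Cache::RedisCacheStore.new(
    url: "redis://localhost:6379/0",
    pool_size: 5,     # up to 5 concurrent connections (assumed option name)
    pool_timeout: 5   # seconds to wait for a free connection (assumed option name)
  )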
Private Instance Methods
Delete an entry from the cache.
# File lib/active_support/cache/redis_cache_store.rb, line 416
def delete_entry(key, options)
  failsafe :delete_entry, returning: false do
    redis.with { |c| c.del key }
  end
end
Deletes multiple entries in the cache. Returns the number of entries deleted.
# File lib/active_support/cache/redis_cache_store.rb, line 423
def delete_multi_entries(entries, **_options)
  redis.with { |c| c.del(entries) }
end
Calls superclass method ActiveSupport::Cache::Store#deserialize_entry
# File lib/active_support/cache/redis_cache_store.rb, line 455
def deserialize_entry(payload, raw:)
  if payload && raw
    Entry.new(payload, compress: false)
  else
    super(payload)
  end
end
# File lib/active_support/cache/redis_cache_store.rb, line 477
def failsafe(method, returning: nil)
  yield
rescue ::Redis::BaseError => e
  handle_exception exception: e, method: method, returning: returning
  returning
end
# File lib/active_support/cache/redis_cache_store.rb, line 484
def handle_exception(exception:, method:, returning:)
  if @error_handler
    @error_handler.(method: method, exception: exception, returning: returning)
  end
rescue => failsafe
  warn "RedisCacheStore ignored exception in handle_exception: #{failsafe.class}: #{failsafe.message}\n  #{failsafe.backtrace.join("\n  ")}"
end
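For example, a custom error handler can report Redis failures to an error tracker instead of silently ignoring them. The handler receives the same keywords shown above; ErrorTracker is a placeholder for your reporting service:

  error_handler = ->(method:, returning:, exception:) do
    # ErrorTracker is a hypothetical reporting service.
    ErrorTracker.notify(exception, context: { cache_method: method, returning: returning })
  end

  cache = ActiveSupport::Cache::RedisCacheStore.new(
    url: "redis://localhost:6379/0",
    error_handler: error_handler
  )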
Truncate keys that exceed 1kB.
Calls superclass method ActiveSupport::Cache::Store#normalize_key
# File lib/active_support/cache/redis_cache_store.rb, line 441
def normalize_key(key, options)
  truncate_key super&.b
end
Store provider interface: Read an entry from the cache.
# File lib/active_support/cache/redis_cache_store.rb, line 346
def read_entry(key, **options)
  failsafe :read_entry do
    raw = options&.fetch(:raw, false)
    deserialize_entry(redis.with { |c| c.get(key) }, raw: raw)
  end
end
Calls superclass method ActiveSupport::Cache::Store#read_multi_entries
# File lib/active_support/cache/redis_cache_store.rb, line 353
def read_multi_entries(names, **options)
  if mget_capable?
    read_multi_mget(*names, **options)
  else
    super
  end
end
# File lib/active_support/cache/redis_cache_store.rb, line 361
def read_multi_mget(*names)
  options = names.extract_options!
  options = merged_options(options)
  return {} if names == []
  raw = options&.fetch(:raw, false)

  keys = names.map { |name| normalize_key(name, options) }

  values = failsafe(:read_multi_mget, returning: {}) do
    redis.with { |c| c.mget(*keys) }
  end

  names.zip(values).each_with_object({}) do |(name, value), results|
    if value
      entry = deserialize_entry(value, raw: raw)
      unless entry.nil? || entry.expired? || entry.mismatched?(normalize_version(name, options))
        results[name] = entry.value
      end
    end
  end
end
# File lib/active_support/cache/redis_cache_store.rb, line 471
def serialize_entries(entries, raw: false)
  entries.transform_values do |entry|
    serialize_entry entry, raw: raw
  end
end
Calls superclass method ActiveSupport::Cache::Store#serialize_entry
# File lib/active_support/cache/redis_cache_store.rb, line 463
def serialize_entry(entry, raw: false)
  if raw
    entry.value.to_s
  else
    super(entry)
  end
end
# File lib/active_support/cache/redis_cache_store.rb, line 333
def set_redis_capabilities
  case redis
  when Redis::Distributed
    @mget_capable = true
    @mset_capable = false
  else
    @mget_capable = true
    @mset_capable = true
  end
end
# File lib/active_support/cache/redis_cache_store.rb, line 445
def truncate_key(key)
  if key && key.bytesize > max_key_bytesize
    suffix = ":sha2:#{::Digest::SHA2.hexdigest(key)}"
    truncate_at = max_key_bytesize - suffix.bytesize
    "#{key.byteslice(0, truncate_at)}#{suffix}"
  else
    key
  end
end
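To illustrate the resulting key shape, a key whose normalized form exceeds the 1kB MAX_KEY_BYTESIZE keeps a readable prefix and gains a SHA2 suffix; the key name below is illustrative:

  long_name = "member:" + "x" * 2_000   # normalized key exceeds 1kB

  cache.write(long_name, "value")
  # The key stored in Redis looks roughly like:
  #   "member:xxx...x:sha2:<64-character hex digest>"
  # and stays within the 1kB limit; reads with the same long name
  # truncate identically, so they still hit.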
Write an entry to the cache.
Requires Redis 2.6.12+ for extended SET options.
# File lib/active_support/cache/redis_cache_store.rb, line 386
def write_entry(key, entry, unless_exist: false, raw: false, expires_in: nil, race_condition_ttl: nil, **options)
  serialized_entry = serialize_entry(entry, raw: raw)

  # If race condition TTL is in use, ensure that cache entries
  # stick around a bit longer after they would have expired
  # so we can purposefully serve stale entries.
  if race_condition_ttl && expires_in && expires_in > 0 && !raw
    expires_in += 5.minutes
  end

  failsafe :write_entry, returning: false do
    if unless_exist || expires_in
      modifiers = {}
      modifiers[:nx] = unless_exist
      modifiers[:px] = (1000 * expires_in.to_f).ceil if expires_in

      redis.with { |c| c.set key, serialized_entry, **modifiers }
    else
      redis.with { |c| c.set key, serialized_entry }
    end
  end
end
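As a sketch of how these options surface at the Redis level, a write with a TTL or unless_exist maps onto SET with the PX and NX modifiers; the key and values are illustrative:

  cache.write("session:abc", "payload", expires_in: 90.seconds)
  # => roughly SET <namespaced key> <serialized entry> PX 90000

  cache.write("session:abc", "other", unless_exist: true)
  # => roughly SET <namespaced key> <serialized entry> NX
  #    (a no-op here because the key already exists)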
# File lib/active_support/cache/redis_cache_store.rb, line 409 def write_key_expiry(client, key, options) if options[:expires_in] && client.ttl(key).negative? client.expire key, options[:expires_in].to_i end end
Nonstandard store provider API to write multiple values at once.
Calls superclass method ActiveSupport::Cache::Store#write_multi_entries
# File lib/active_support/cache/redis_cache_store.rb, line 428
def write_multi_entries(entries, expires_in: nil, **options)
  if entries.any?
    if mset_capable? && expires_in.nil?
      failsafe :write_multi_entries do
        redis.with { |c| c.mapped_mset(serialize_entries(entries, raw: options[:raw])) }
      end
    else
      super
    end
  end
end
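A usage sketch: without a per-call TTL the entries go out in a single MSET, while passing expires_in falls back to individual SET calls since MSET cannot carry a TTL (keys and values are illustrative):

  cache.write_multi({ "color" => "red", "shape" => "circle" })
  cache.read_multi("color", "shape")
  # => { "color" => "red", "shape" => "circle" }

  cache.write_multi({ "color" => "red" }, expires_in: 10.minutes)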