class Fog::Cache

A generic cache mechanism for fog resources. This can be for a server, security group, etc.

Currently this is an on-disk cache using one YAML file per model instance; however, nothing stands in the way of extending it to use other cache backends.

Basic functionality

Set the namespace where this cache will be stored:

Fog::Cache.namespace_prefix = "service-account-foo-region-bar"

Cache to disk:

# after dumping, there will be a yml file on disk:
resource.cache.dump

# you can load cached data in from a different session
Fog::Cache.load(Fog::Compute::AWS::Server, compute)

# you can also expire the cache (removes cached data associated with the resources of this model for the service passed in).
Fog::Cache.expire_cache!(Fog::Compute::AWS::Server, compute)

More detailed flow/usage

Normally, you would have a bunch of resources you want to cache to, and reload from, disk. Every fog model has a cache object injected to accomplish this. So, to cache a server for example, you would do something like this:

# note this is necessary in order to segregate cache usage between various providers, regions and accounts.
# even if you are using only one account/region/etc, you still must set it. 'default' will do.
Fog::Cache.namespace_prefix = "prod-emea-eu-west-1"

s = security_groups.sample; s.name # => "default"
s.cache.dump # => 2371

Now it is on disk:

shai@adsk-lappy ~ % tree ~/.fog-cache/prod-emea-eu-west-1/

/Users/shai/.fog-cache/prod-emea-eu-west-1/
  └── fog_compute_aws_real
    └── fog_compute_aws_securitygroup
     ├── default-90928073d9d5d9b4e7545e88aee7ec4f.yml

You can do the same with a SecurityGroup, Instances, Elbs, etc.

Note that when loading cache from disk, you need to pass the appropriate model class and the service associated with it. The service is passed in so that service/connection details can be set on the loaded instances, allowing them to be re-queried, etc. The model is passed in so that the cache data associated with that model can be found in the cache namespace this session is using. Loading will pull in every resource it finds: whether there is 1 yml file or 100, it loads whatever is there. As such, the normal usage of dumping is to do it on a collection:

load_balancers.each {|elb| elb.cache.dump }

To load the cache into a different session with nothing but the service set up, use it like so. As mentioned, this loads all resources associated with the model_klass and service passed in:

instances = Fog::Cache.load(Fog::Compute::AWS::Server, compute)
instances.first.id # => "i-0569a70ae6f47d229"

Note that if no cache is found for the model class and service passed to `Fog::Cache.load`, you will get an exception you can handle (for example, to load the resources for the first time):

Fog::Cache.expire_cache!(Fog::Compute::AWS::SecurityGroup, compute)
# ... now there is no SecurityGroup cache data. So, if you tried to load it, you would get an exception:

begin
  Fog::Cache.load(Fog::Compute::AWS::SecurityGroup, compute)
rescue Fog::Cache::CacheNotFound => e
  puts "could not find any cache data for security groups on #{compute}"
  get_resources_and_dump
end

Extending cache backends

Currently this is on-disk using YAML. If need be, this could be extended to other cache backends:

Find the references to YAML in this file and split them out into strategy objects for the different backends.
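As a rough sketch of what such a split might look like (the `MemoryBackend` class and its interface below are hypothetical, not part of fog), a backend strategy object only needs to mirror the three operations the cache performs today, writing a record per instance, reading everything under a namespace, and expiring a namespace:

```ruby
# Hypothetical sketch: the on-disk YAML logic could sit behind a small
# backend interface, letting Fog::Cache delegate reads/writes to any store.
class MemoryBackend
  def initialize
    @store = {} # namespace => { key => serialized record }
  end

  # Mirrors dumping one yml file per model instance.
  def write(namespace, key, data)
    (@store[namespace] ||= {})[key] = data
  end

  # Mirrors globbing all files under a namespace directory;
  # raises when nothing is cached, like Fog::Cache::CacheNotFound.
  def read_all(namespace)
    @store.fetch(namespace) { raise "CacheNotFound" }.values
  end

  # Mirrors expire_cache!.
  def expire!(namespace)
    @store.delete(namespace)
  end
end

backend = MemoryBackend.new
backend.write("prod-emea-eu-west-1/server", "i-123", { attrs: { id: "i-123" } })
backend.read_all("prod-emea-eu-west-1/server")
# => [{ attrs: { id: "i-123" } }]
```

A disk backend would implement the same three methods with `File.write`, `Dir.glob`, and `FileUtils.rm_rf`, and `Fog::Cache` would pick a strategy object instead of calling YAML directly.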

Constants

SANDBOX

Where the different caches (per service API keys, regions, etc.) are stored. See the namespace_prefix= method.
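A minimal sketch of how the per-namespace directories compose (the `~/.fog-cache` sandbox default matches the tree output shown above; the `cache_dir` helper and its name-mangling lambda are illustrative, not fog's actual implementation):

```ruby
# Illustrative only: build a cache directory from the sandbox, the
# namespace prefix, and the service/model class names, matching the
# layout shown in the tree output above.
SANDBOX = File.expand_path("~/.fog-cache")

def cache_dir(namespace_prefix, service_name, model_name)
  # "Fog::Compute::AWS::Real" => "fog_compute_aws_real"
  safe = ->(s) { s.gsub("::", "_").downcase }
  File.join(SANDBOX, namespace_prefix, safe.(service_name), safe.(model_name))
end

cache_dir("prod-emea-eu-west-1",
          "Fog::Compute::AWS::Real",
          "Fog::Compute::AWS::SecurityGroup")
# ends with "prod-emea-eu-west-1/fog_compute_aws_real/fog_compute_aws_securitygroup"
```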

Attributes

model[R]

When a resource is used, such as `server.cache.dump`, the model klass is passed in so that it can be identified from a different session.

Public Class Methods

load(model_klass, service)

Loads cache associated to the model_klass and service into memory.

If no cache is found, it will raise an error for handling:

rescue Fog::Cache::CacheNotFound
  set_initial_cache
# File lib/fog/core/cache.rb, line 107
def self.load(model_klass, service)
  cache_files = Dir.glob("#{namespace(model_klass, service)}/*")

  raise CacheNotFound if cache_files.empty?

  # collection_klass and model_klass should be the same across all instances
  # choose a valid cache record from the dump to use as a sample to determine
  # which collection/model to instantiate.
  sample_path = cache_files.detect{ |path| valid_for_load?(path) }
  model_klass = const_get_compat(load_cache(sample_path)[:model_klass])
  collection_klass = const_get_compat(load_cache(sample_path)[:collection_klass]) if load_cache(sample_path)[:collection_klass]

  # Load the cache data into actual ruby instances
  loaded = cache_files.map do |path|
    model_klass.new(load_cache(path)[:attrs]) if valid_for_load?(path)
  end.compact

  # Set the collection and service so they can be reloaded/connection is set properly.
  # See https://github.com/fog/fog-aws/issues/354#issuecomment-286789702
  loaded.each do |i|
    i.collection = collection_klass.new(:service => service) if collection_klass
    i.instance_variable_set(:@service, service)
  end

  # unique-ify based on the full set of attributes. duplicate cache entries can
  # exist because `model#identity` is not guaranteed to be unique. but if all
  # attributes match, the records are duplicates and shouldn't be loaded again.
  uniq_loaded = uniq_w_block(loaded) { |i| i.attributes }
  if uniq_loaded.size != loaded.size
    Fog::Logger.warning("Found duplicate items in the cache. Expire all & refresh cache soon.")
  end

  # Fog models created, free memory of cached data used for creation.
  @memoized = nil

  uniq_loaded
end