segmented_cache (segmented_cache v0.6.0)
segmented_cache is a key/value cache library implemented with rotating segments.
For more information, see the README, and the function documentation.
Summary
Types
Telemetry metadata with deletion error information.
Maximum number of entries per segment. When filled, a rotation ensues.
Telemetry metadata with cache hit information.
Dynamic type of keys from cache clients.
Merging function used to resolve conflicts.
Cache unique name.
Configuration values for the cache.
Strategy for cache eviction.
Dynamic type of values from cache clients.
Functions
Delete an entry in all ets segments.
Delete a pattern in all ets segments.
Get the entry for Key in cache.
Check if Key is cached.
Merge a new entry into an existing one, or add it at the front if none is found.
Add an entry to the first table in the segments.
See start_link/2 for more details.
See start_link/2 for more details.
See start_link/2 for more details.
Start and link a cache entity in the local node.
Types
-type delete_error(Key) :: #{name => atom(), value => Key, delete_type => entry | pattern, class => throw | error | exit, reason => dynamic()}.
Telemetry metadata with deletion error information.
-type entries_limit() :: infinity | non_neg_integer().
Maximum number of entries per segment. When filled, a rotation ensues.
Telemetry metadata with cache hit information.
-type key() :: dynamic().
Dynamic type of keys from cache clients.
-type merger_fun(Value) :: fun((Value, Value) -> Value).
Merging function used to resolve conflicts.
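As a sketch, a merger_fun for map-shaped values could union the old and new maps, letting the new value win on shared keys. The variable names here are illustrative:

```erlang
%% Example merger_fun(map()): resolve a conflict by merging the two
%% map values, preferring fields from the newer one.
MergerFun = fun(OldValue, NewValue) ->
    maps:merge(OldValue, NewValue)
end.
```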
-type name() :: atom().
Cache unique name.
-type opts() :: #{prefix => telemetry:event_name(), scope => scope(), strategy => strategy(), entries_limit => entries_limit(), segment_num => non_neg_integer(), ttl => timeout() | {erlang:time_unit(), non_neg_integer()}, merger_fun => merger_fun(dynamic())}.
Configuration values for the cache.
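A configuration map matching the opts() type could be sketched as follows. The values are illustrative, not defaults; `ttl => 60` assumes plain integers are interpreted as minutes, as the start_link/2 documentation describes:

```erlang
%% Illustrative opts() map; every value here is an example choice.
Opts = #{strategy => lru,            % evict least-recently-used entries
         segment_num => 4,           % four rotating segments
         entries_limit => 10000,     % rotate when a segment holds 10k entries
         ttl => 60,                  % plain integer: minutes per segment
         merger_fun => fun(_Old, New) -> New end}. % last write wins
```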
-type scope() :: atom().
pg scope for cache coordination across distribution.
-type strategy() :: fifo | lru.
Strategy for cache eviction.
-type value() :: dynamic().
Dynamic type of values from cache clients.
Functions
Delete an entry in all ets segments.
Might raise a telemetry error if the request fails:
- name: Prefix ++ [delete_error]
- measurements: #{}
- metadata: delete_error/1
-spec delete_pattern(name(), ets:match_pattern()) -> true.
Delete a pattern in all ets segments.
Might raise a telemetry error if the request fails:
- name: [segmented_cache, Name, delete_error]
- measurements: #{}
- metadata: delete_error/1
Get the entry for Key in cache.
Raises a telemetry span:
- name: Prefix
- start metadata: #{name => atom()}
- stop metadata: hit/0
Check if Key is cached.
Raises a telemetry span:
- name: Prefix
- start metadata: #{name => atom()}
- stop metadata: hit/0
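A hedged sketch of a read path combining the two calls described above; the function names is_member/2 and get_entry/2 and the cache name my_cache are assumed for illustration:

```erlang
%% Check membership first, then fetch the entry; my_cache is a
%% hypothetical cache started elsewhere with start_link/2.
Lookup = fun(Key) ->
    case segmented_cache:is_member(my_cache, Key) of
        true  -> {ok, segmented_cache:get_entry(my_cache, Key)};
        false -> not_found
    end
end.
```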
Merge a new entry into an existing one, or add it at the front if none is found.
Race condition considerations:
- Two writers: compare_and_swap will ensure they both succeed sequentially.
- Any writer and the cleaner: under fifo, the writer modifies the record in place and doesn't need to be concerned with rotation. Under lru, the same considerations as for put_entry_front apply.
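For instance, paired with a merger_fun that unions map values, a merge call could look like this sketch; merge_entry/3 and the cache name my_cache are assumed from this page's summaries:

```erlang
%% Merge a new partial value for a key: if an entry exists, the
%% cache's configured merger_fun resolves the conflict; otherwise
%% the value is added at the front.
segmented_cache:merge_entry(my_cache, user_42, #{visits => 1}).
```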
Add an entry to the first table in the segments.
Possible race conditions:
- Two writers: another process might attempt to put a record at the same time. In this case, both writers will attempt ets:insert_new, resulting in only one of them succeeding. The one that fails will retry a compare_and_swap up to three times, attempting to merge the values and ensuring no data is lost.
- One writer and the cleaner: there's a chance that by the time we insert into the ets table, this table is no longer the first, because the cleaner has taken action and pushed it behind.
- Two writers and the cleaner: a mix of the previous cases. Two writers can attempt to put a record at the same time, but exactly in between, the cleaner rotates the tables, resulting in the first writer inserting into the table that immediately becomes the second, and the second writer inserting into the table newly treated as first, shadowing the previous write.

To handle the data race with the cleaner, after a successful insert we re-check the index, and if it has changed, we restart the whole operation: we can be sure that no more rotations will be triggered for a while, so the second round will be final.
Strategy considerations:
Under a fifo strategy, no other writes can happen, but under an lru strategy,
many other workers might attempt to move a record forward. In this case,
the forwarding movement doesn't modify the record, so the compare_and_swap
operation should succeed at once; then, once the record is at the front,
no other workers should attempt to move it.
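A minimal write sketch under the mechanics described above; put_entry/3 and the cache name my_cache are assumed from this page's summaries, and the retry and index re-check happen inside the library:

```erlang
%% Insert an entry into the first (front) segment; concurrent writers
%% and the cleaner are handled internally as described above.
segmented_cache:put_entry(my_cache, <<"session-1">>, #{user => alice}).
```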
-spec start(name()) -> gen_server:start_ret().
See start_link/2 for more details.
-spec start(name(), opts()) -> gen_server:start_ret().
See start_link/2 for more details.
-spec start_link(name()) -> gen_server:start_ret().
See start_link/2 for more details.
-spec start_link(name(), opts()) -> gen_server:start_ret().
Start and link a cache entity in the local node.
Name must be an atom. The cache will then be identified by the pair {segmented_cache, Name}, an entry in persistent_term will be created, and the worker will join a pg group of the same name.
Opts is a map containing the configuration:
- prefix: a telemetry event name used to prefix events raised by this library. Defaults to [segmented_cache, Name, request].
- scope: a pg scope. Defaults to pg.
- strategy: can be fifo or lru. Defaults to fifo.
- segment_num: the number of segments for the cache. Defaults to 3.
- ttl: the time-to-live, in minutes, of each segment. Defaults to 480, i.e., 8 hours.
- merger_fun: a function that, given a conflict, takes the old and new values, in that order, and applies a merging strategy. See the merger_fun/1 type.
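Putting the options together, a start-up call might look like this sketch; the cache name my_cache and all option values are illustrative:

```erlang
%% Start and link a cache named my_cache with explicit options;
%% start_link/2 returns a gen_server:start_ret(), so {ok, Pid}
%% on success.
{ok, _Pid} = segmented_cache:start_link(my_cache,
                 #{prefix => [segmented_cache, my_cache, request],
                   scope => pg,
                   strategy => fifo,
                   segment_num => 3,
                   ttl => 480}).
```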