Nebulex.Cache behaviour (Nebulex v3.0.0-rc.2)
Cache abstraction layer inspired by Ecto.
A cache maps to an underlying in-memory storage controlled by the adapter. For example, Nebulex ships with an adapter that implements a local generational cache.
The cache expects the :otp_app and :adapter as options when used.
The :otp_app should point to an OTP application with the cache
configuration. See the compile time options for more information:
- :otp_app (atom/0) - Required. The OTP application the cache configuration is under.
- :adapter (module/0) - Required. The cache adapter module.
- :default_dynamic_cache (atom/0) - Default dynamic cache for executing cache commands. Set to the defined cache module by default. For example, when you call MyApp.Cache.start_link/1, it will start a cache with the name MyApp.Cache. See the "Dynamic caches" section for more information.
- :adapter_opts (keyword/0) - Specifies a list of options passed to the adapter at compile time. The default value is [].
For example, the cache:
defmodule MyApp.Cache do
use Nebulex.Cache,
otp_app: :my_app,
adapter: Nebulex.Adapters.Local
end

Could be configured with:
config :my_app, MyApp.Cache,
gc_interval: :timer.hours(12),
max_size: 1_000_000,
allocated_memory: 2_000_000_000,
gc_memory_check_interval: :timer.seconds(10)

Most of the configuration that goes into the config is specific
to the adapter. For this particular example, you can check
Nebulex.Adapters.Local for more information.
Despite this, the following configuration values are shared
across all adapters:
- :name (atom/0 | {:via, reg_mod :: module(), via_name :: any()}) - The name of the supervisor process the cache is started under. Set to the defined cache module by default. For example, when you call MyApp.Cache.start_link/1, a cache named MyApp.Cache is started.
- :telemetry (boolean/0) - A flag to determine whether to emit the Telemetry cache command events. This sets the default behavior for all cache commands. It can be overridden on a per-command basis using the :telemetry option when calling a cache function. See the "Shared options" section for more information. The default value is true.
- :telemetry_prefix (list of atom/0) - Nebulex emits cache events using the Telemetry library. By default, the telemetry prefix is based on the module name, so if your module is called MyApp.Cache, the prefix will be [:my_app, :cache]. See the "Telemetry events" section to see which events are emitted by Nebulex out-of-box. If you have multiple caches, keep the :telemetry_prefix consistent for each cache and use the :cache property within the :adapter_meta coming in the event metadata to distinguish between caches. For dynamic caches, you should additionally use the :name property. Alternatively, you can use different :telemetry_prefix values.
- :bypass_mode (boolean/0) - If true, the cache calls are skipped by overwriting the configured adapter with Nebulex.Adapters.Nil when the cache starts. This option is handy for tests if you want to disable or bypass the cache while running the tests. The default value is false.
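For reference, a configuration sketch that combines these shared values with adapter options; the values shown below are illustrative, not defaults:

config :my_app, MyApp.Cache,
  # Shared options
  telemetry: true,
  telemetry_prefix: [:my_app, :cache],
  bypass_mode: false,
  # Adapter-specific options (here, Nebulex.Adapters.Local)
  gc_interval: :timer.hours(12),
  max_size: 1_000_000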
Shared options
All of the cache functions outlined in this module accept the following options:
- :timeout (timeout/0) - The time in milliseconds to wait for a command to finish (:infinity to wait indefinitely). The default value is 5000. However, it may change depending on the adapter. Despite being a shared option accepted by almost all cache functions, it is up to the adapter to support it.
- :telemetry (boolean/0) - Override the global :telemetry setting for a specific cache command. This allows you to selectively enable or disable telemetry on a per-command basis without needing to start separate cache instances. For example, you might want to disable telemetry for frequently called read operations while keeping it enabled for write operations, or vice versa. The default value is false.
- :telemetry_event (list of atom/0) - The telemetry event name to dispatch the event under. Defaults to what is configured in the :telemetry_prefix option. See the "Telemetry events" section for more information.
- :telemetry_metadata (map of term/0 keys and term/0 values) - Extra metadata to add to the Telemetry cache command events. These end up in the :extra_metadata metadata key of these events. See the "Telemetry events" section for more information.
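For example, a sketch of overriding these options on individual calls (the keys and metadata below are illustrative):

# Skip telemetry for a hot read path, but keep it for writes and
# attach extra metadata to the emitted events.
MyApp.Cache.get("user:123", nil, telemetry: false)
MyApp.Cache.put("user:123", %{name: "joe"}, telemetry_metadata: %{source: :api}, timeout: 10_000)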
Adapter-specific options
In addition to the shared options, each adapter can define its specific options. Therefore, Nebulex recommends reviewing the adapter's documentation.
Telemetry events
There are two types of telemetry events. The ones emitted by Nebulex and the ones that are adapter specific. The ones emitted by Nebulex are divided into two categories: cache lifecycle events and cache command events. Let us take a closer look at each of them.
Cache lifecycle events
All Nebulex caches emit the following events:
- [:nebulex, :cache, :init] - It is dispatched whenever a cache starts. The only measurement is the current system time in native units from calling System.system_time(). The :opts key in the metadata contains all initialization options.
  - Measurement: %{system_time: integer()}
  - Metadata: %{cache: module(), name: atom(), opts: keyword()}
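For example, one could attach a handler to this event to log cache startups. A minimal sketch using an anonymous handler (for production code, prefer a captured named function, as the telemetry library recommends):

:telemetry.attach(
  "my-app-cache-init-logger",
  [:nebulex, :cache, :init],
  fn _event, _measurements, %{cache: cache, name: name}, _config ->
    # Log which cache module started and under which process name.
    IO.puts("Cache #{inspect(cache)} started as #{inspect(name)}")
  end,
  :no_config
)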
Cache command events
When the option :telemetry is set to true (the default), Nebulex will
emit Telemetry span events for each cache command. Those events will use
the :telemetry_prefix outlined in the options above which defaults to
[:my_app, :cache].
For instance, to receive all events published for the cache MyApp.Cache,
one could define a module:
defmodule MyApp.Telemetry do
def handle_event(
[:my_app, :cache, :command, event],
measurements,
metadata,
_config
) do
case event do
:start ->
# Handle start event ...
IO.puts("Cache command started: #{inspect(metadata.command)}")
IO.puts("Arguments: #{inspect(metadata.args)}")
:stop ->
# Handle stop event ...
duration = measurements.duration
IO.puts("Cache command completed: #{inspect(metadata.command)}")
IO.puts("Duration: #{duration} native units")
IO.puts("Result: #{inspect(metadata.result)}")
:exception ->
# Handle exception event ...
IO.puts("Cache command failed: #{inspect(metadata.command)}")
IO.puts("Error: #{inspect(metadata.reason)}")
IO.puts("Stacktrace: #{inspect(metadata.stacktrace)}")
end
end
end

Then, in the Application.start/2 callback, attach the handler to the events
using a unique handler id:
:telemetry.attach_many(
"my-app-handler-id",
[
[:my_app, :cache, :command, :start],
[:my_app, :cache, :command, :stop],
[:my_app, :cache, :command, :exception]
],
&MyApp.Telemetry.handle_event/4,
:no_config
)

See the telemetry documentation for more information.
The following are the events you should expect from Nebulex. All examples
below consider a cache named MyApp.Cache:
[:my_app, :cache, :command, :start]
This event is emitted before a cache command is executed.
The :measurements map will include the following:
- :system_time - The current system time in native units from calling System.system_time().
A Telemetry :metadata map including the following fields:
- :adapter_meta - The adapter metadata.
- :command - The name of the invoked adapter's command.
- :args - The arguments passed to the invoked adapter, except for the first one, since the adapter's metadata is available in the event's metadata.
- :extra_metadata - Additional metadata through the runtime option :telemetry_metadata.
Example event data:
%{
measurements: %{system_time: 1_678_123_456_789},
metadata: %{
adapter_meta: %{...},
command: :put,
args: ["key", "value", [ttl: :timer.seconds(10)]],
extra_metadata: %{user_id: 123}
}
}

[:my_app, :cache, :command, :stop]
This event is emitted after a cache command is executed.
The :measurements map will include the following:
- :duration - The time spent executing the cache command. The measurement is given in the :native time unit. You can read more about it in the docs for System.convert_time_unit/3.
A Telemetry :metadata map including the following fields:
- :adapter_meta - The adapter metadata.
- :command - The name of the invoked adapter's command.
- :args - The arguments passed to the invoked adapter, except for the first one, since the adapter's metadata is available in the event's metadata.
- :extra_metadata - Additional metadata through the runtime option :telemetry_metadata.
- :result - The command's result.
Example event data:
%{
measurements: %{duration: 1_234_567},
metadata: %{
adapter_meta: %{...},
command: :put,
args: ["key", "value", [ttl: :timer.seconds(10)]],
extra_metadata: %{user_id: 123},
result: :ok
}
}

[:my_app, :cache, :command, :exception]
This event is emitted when an error or exception occurs during the cache command execution.
The :measurements map will include the following:
- :duration - The time spent executing the cache command. The measurement is given in the :native time unit. You can read more about it in the docs for System.convert_time_unit/3.
A Telemetry :metadata map including the following fields:
- :adapter_meta - The adapter metadata.
- :command - The name of the invoked adapter's command.
- :args - The arguments passed to the invoked adapter, except for the first one, since the adapter's metadata is available in the event's metadata.
- :extra_metadata - Additional metadata through the runtime option :telemetry_metadata.
- :kind - The type of the error: :error, :exit, or :throw.
- :reason - The reason of the error.
- :stacktrace - Exception's stack trace.
Example event data:
%{
measurements: %{duration: 1_234_567},
metadata: %{
adapter_meta: %{...},
command: :put,
args: ["key", "value", [ttl: :timer.seconds(10)]],
extra_metadata: %{user_id: 123},
kind: :error,
reason: %Nebulex.KeyError{key: "key", reason: :not_found},
stacktrace: [...]
}
}

Adapter-specific events
Regardless of whether Nebulex emits the telemetry events outlined above, adapters are free to expose their own events, but those are outside Nebulex's scope. Therefore, if you are interested in specific adapter events, you should review the adapter's documentation.
Dynamic caches
Nebulex allows you to start multiple processes from the same cache module. This feature is typically useful when you want to have different cache instances but access them through the same cache module.
When you list a cache in your supervision tree, such as MyApp.Cache, it will
start a supervision tree with a process named MyApp.Cache under the hood.
By default, the process has the same name as the cache module. Hence, whenever
you invoke a function in MyApp.Cache, such as MyApp.Cache.put/3, Nebulex
will execute the command in the cache process named MyApp.Cache.
However, with Nebulex, you can start multiple processes from the same cache. The only requirement is that they must have different process names, like this:
children = [
MyApp.Cache,
{MyApp.Cache, name: MyApp.UsersCache}
]

Now you have two cache instances running: one is named MyApp.Cache, and the
other one is named MyApp.UsersCache. You can tell Nebulex which process you
want to use in your cache operations by calling:
MyApp.Cache.put_dynamic_cache(MyApp.Cache)
MyApp.Cache.put_dynamic_cache(MyApp.UsersCache)

Once you call MyApp.Cache.put_dynamic_cache(name), all invocations made on
MyApp.Cache will use the cache instance denoted by name.
Nebulex also provides a handy function for invoking commands using dynamic
caches: with_dynamic_cache/2.
MyApp.Cache.with_dynamic_cache(MyApp.UsersCache, fn ->
# all commands here will use MyApp.UsersCache
MyApp.Cache.put("u1", "joe")
...
end)

While these functions are handy, you may want to have the ability to pass
the dynamic cache directly to the command, avoiding the boilerplate logic
of using put_dynamic_cache/1 or with_dynamic_cache/2. From v3.0,
all Cache API commands expose an extended callback version that admits a
dynamic cache at the first argument, so you can directly interact with a
cache instance.
MyApp.Cache.put(MyApp.UsersCache, "u1", "joe", ttl: :timer.hours(1))
MyApp.Cache.get(MyApp.UsersCache, "u1", nil, [])
MyApp.Cache.delete(MyApp.UsersCache, "u1", [])

This is another handy way to work with multiple cache instances through the same cache module.
Declarative Caching with Decorators
While the cache API provides imperative control for cache operations,
Nebulex also offers a declarative approach through caching decorators.
For applications that prefer declarative patterns, see the
Nebulex.Caching.Decorators module documentation for attributes like
@cacheable, @cache_put, and @cache_evict.
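For illustration, a minimal sketch following the decorator style documented for earlier Nebulex versions (the exact v3 option names may differ; treat this as an assumption and consult the decorators module docs):

defmodule MyApp.Accounts do
  use Nebulex.Caching

  alias MyApp.{Cache, Repo, User}

  # Cache the query result under the {User, id} key for one hour.
  @decorate cacheable(cache: Cache, key: {User, id}, opts: [ttl: :timer.hours(1)])
  def get_user!(id), do: Repo.get!(User, id)

  # Evict the cached entry when the user is deleted.
  @decorate cache_evict(cache: Cache, key: {User, user.id})
  def delete_user(%User{} = user), do: Repo.delete(user)
end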
Distributed topologies
One of the goals of Nebulex is also to provide the ability to set up distributed cache topologies, but this feature will depend on the adapters.
Summary
User callbacks
A callback executed when the cache starts or when configuration is read.
Runtime API
Returns the adapter tied to the cache.
Returns the adapter configuration stored in the :otp_app environment.
Returns the atom name or pid of the current cache (based on Ecto dynamic repo).
Sets the dynamic cache to be used in further commands (based on Ecto dynamic repo).
Starts the cache supervision tree and returns {:ok, pid}, or just :ok if nothing
needs to be done.
Shuts down the cache.
Same as stop/1 but stops the cache instance given in the first argument
dynamic_cache.
Invokes the function fun using the given dynamic cache.
KV API
Decrements the counter stored at key by the given amount and returns
the current count as {:ok, count}.
Same as decr/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as decr/3 but raises an exception if an error occurs.
Same as decr/4 but raises an exception if an error occurs.
Deletes the entry in the cache for a specific key.
Same as delete/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as delete/2 but raises an exception if an error occurs.
Same as delete/3 but raises an exception if an error occurs.
Returns {:ok, true} if the given key exists and the new ttl is
successfully updated; otherwise, {:ok, false} is returned.
Same as expire/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as expire/3 but raises an exception if an error occurs.
Same as expire/4 but raises an exception if an error occurs.
Fetches the value for a specific key in the cache.
Same as fetch/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as fetch/2 but raises Nebulex.KeyError if the cache doesn't contain
key or Nebulex.Error if another error occurs while executing the command.
Same as fetch/3 but raises Nebulex.KeyError if the cache doesn't contain
key or Nebulex.Error if another error occurs while executing the command.
Fetches the value for the given key from the cache. If the key is not
present, the provided anonymous function is executed.
Same as fetch_or_store/3, but the command is executed on the cache
instance given at the first argument dynamic_cache.
Same as fetch_or_store/3 but raises an exception if an error occurs.
Same as fetch_or_store!/3 but the command is executed on the cache
instance given at the first argument dynamic_cache.
Gets a value from the cache where the key matches the given key.
Same as get/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as get/3 but raises an exception if an error occurs.
Same as get/4 but raises an exception if an error occurs.
Gets the value from key and updates it, all in one pass.
Same as get_and_update/3, but the command is executed on the cache
instance given at the first argument dynamic_cache.
Same as get_and_update/3 but raises an exception if an error occurs.
Same as get_and_update/4 but raises an exception if an error occurs.
Gets the value for the given key from the cache. If the key is not
present, the provided anonymous function is executed and its result
is always cached under the given key.
Same as get_or_store/3, but the command is executed on the cache
instance given at the first argument dynamic_cache.
Same as get_or_store/3 but raises an exception if a cache error occurs
(e.g., the adapter failed executing the command and returns an error).
Same as get_or_store!/3 but the command is executed on the cache
instance given at the first argument dynamic_cache.
Determines if the cache contains an entry for the specified key.
Same as has_key?/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Increments the counter stored at key by the given amount and returns
the current count as {:ok, count}.
Same as incr/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as incr/3 but raises an exception if an error occurs.
Same as incr/4 but raises an exception if an error occurs.
Puts the given value under key into the cache.
Same as put/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as put/3 but raises an exception if an error occurs.
Same as put/4 but raises an exception if an error occurs.
Puts the given entries (key/value pairs) into the cache. It replaces
existing values with new values (just as regular put).
Same as put_all/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as put_all/2 but raises an exception if an error occurs.
Same as put_all/3 but raises an exception if an error occurs.
Puts the given value under key into the cache only if it does not
already exist.
Same as put_new/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as put_new/3 but raises an exception if an error occurs.
Same as put_new/4 but raises an exception if an error occurs.
Puts the given entries (key/value pairs) into the cache. It will not
perform any operation if even a single key already exists.
Same as put_new_all/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as put_new_all/2 but raises an exception if an error occurs.
Same as put_new_all/3 but raises an exception if an error occurs.
Alters the entry stored under key, but only if the entry already exists
in the cache.
Same as replace/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as replace/3 but raises an exception if an error occurs.
Same as replace/4 but raises an exception if an error occurs.
Removes and returns the value associated with key in the cache.
Same as take/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as take/2 but raises an exception if an error occurs.
Same as take/3 but raises an exception if an error occurs.
Returns {:ok, true} if the given key exists and the last access time is
successfully updated; otherwise, {:ok, false} is returned.
Same as touch/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as touch/2 but raises an exception if an error occurs.
Same as touch/3 but raises an exception if an error occurs.
Returns the remaining time-to-live for the given key.
Same as ttl/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as ttl/2 but raises an exception if an error occurs.
Same as ttl/3 but raises an exception if an error occurs.
Updates the key in the cache with the given function.
Same as update/4, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as update/4 but raises an exception if an error occurs.
Same as update/5 but raises an exception if an error occurs.
Query API
Counts all entries matching the query specified by the given query_spec.
Same as count_all/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as count_all/2 but raises an exception if an error occurs.
Same as count_all/3 but raises an exception if an error occurs.
Deletes all entries matching the query specified by the given query_spec.
Same as delete_all/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as delete_all/2 but raises an exception if an error occurs.
Same as delete_all/3 but raises an exception if an error occurs.
Fetches all entries from the cache matching the given query specified through the "query-spec".
Same as get_all/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as get_all/2 but raises an exception if an error occurs.
Same as get_all/3 but raises an exception if an error occurs.
Similar to get_all/2, but returns a lazy enumerable that emits all entries
matching the query specified by the given query_spec.
Same as stream/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Same as stream/2 but raises an exception if an error occurs.
Same as stream/3 but raises an exception if an error occurs.
Transaction API
Returns {:ok, true} if the current process is inside a transaction;
otherwise, {:ok, false} is returned.
Same as in_transaction?/1, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Runs the given function inside a transaction.
Same as transaction/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
Info API
Returns {:ok, info} where info contains the requested cache information,
as specified by the spec.
Same as info/2, but the command is executed on the cache
instance given at the first argument dynamic_cache.
Same as info/2 but raises an exception if an error occurs.
Same as info/3 but raises an exception if an error occurs.
Observable API
Registers a cache event listener event_listener.
Same as register_event_listener/2, but the command is executed on the cache
instance given at the first argument dynamic_cache.
Same as register_event_listener/2 but raises an exception if an error
occurs.
Same as register_event_listener/3 but raises an exception if an error
occurs.
Unregisters a cache event listener.
Same as unregister_event_listener/2, but the command is executed on the
cache instance given at the first argument dynamic_cache.
Same as unregister_event_listener/2 but raises an exception if an error
occurs.
Same as unregister_event_listener/3 but raises an exception if an error
occurs.
Types
Dynamic cache value
Cache entries
Common error type
Error type for the given reason
Proxy type to a cache event filter
Proxy type to a cache event listener
Fetch error reason
Fetch or store function
Get or store function
The data type for the cache information
The type for the info item's value
Info map
Specification key for the item(s) to include in the returned info
Cache entry key
Proxy type for generic Nebulex error
Ok/Error tuple with default error reasons
Ok/Error type
Cache action options
The data type for a query spec.
Cache type
Cache entry value
User callbacks
@callback init(config) :: {:ok, config} | :ignore when config: keyword()
A callback executed when the cache starts or when configuration is read.
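A minimal sketch of a cache implementing init/1 to adjust its configuration at startup (the :gc_interval value below is an illustrative local-adapter option):

defmodule MyApp.Cache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Local

  @impl true
  def init(config) do
    # Merge in values computed at runtime (e.g., read from environment variables).
    {:ok, Keyword.put(config, :gc_interval, :timer.hours(6))}
  end
end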
Runtime API
@callback __adapter__() :: Nebulex.Adapter.t()
Returns the adapter tied to the cache.
@callback config() :: keyword()
Returns the adapter configuration stored in the :otp_app environment.
If the init/1 callback is implemented in the cache, it will be invoked.
@callback get_dynamic_cache() :: dynamic_cache()
Returns the atom name or pid of the current cache (based on Ecto dynamic repo).
See also put_dynamic_cache/1.
@callback put_dynamic_cache(dynamic_cache()) :: dynamic_cache()
Sets the dynamic cache to be used in further commands (based on Ecto dynamic repo).
There are cases where you may want to have different cache instances but
access them through the same cache module. By default, when you call
MyApp.Cache.start_link/1, it will start a cache with the name
MyApp.Cache. But it is also possible to start multiple caches by using
a different name for each of them:
MyApp.Cache.start_link(name: :cache1)
MyApp.Cache.start_link(name: :cache2)

You can also start caches without names by explicitly setting the name
to nil:
MyApp.Cache.start_link(name: nil)

NOTE: Some adapters may require the :name option anyway; therefore, it is
highly recommended to check the documentation of the adapter you want to use.
All operations through MyApp.Cache are sent by default to the cache named
MyApp.Cache. But you can change the default cache at compile-time:
use Nebulex.Cache, default_dynamic_cache: :cache_name

Or anytime at runtime by calling put_dynamic_cache/1:

MyApp.Cache.put_dynamic_cache(:another_cache_name)

From this moment on, all future commands performed by the current process
will run on :another_cache_name.
Additionally, all cache commands optionally support passing the wanted dynamic cache (name or PID) as the first argument so you can directly interact with a cache instance. See the "Dynamic caches" section in the module documentation for more information.
@callback start_link(opts()) :: {:ok, pid()} | {:error, {:already_started, pid()}} | {:error, any()}
Starts the cache supervision tree and returns {:ok, pid}, or just :ok if nothing
needs to be done.
Returns {:error, {:already_started, pid}} if the cache is already
started or {:error, term} in case anything else goes wrong.
Options
See the configuration in the moduledoc for options shared between adapters; for adapter-specific configuration, see the adapter's documentation.
@callback stop(opts()) :: :ok
Shuts down the cache.
Options
- :timeout - An integer that specifies how many milliseconds to wait for the cache supervisor process to terminate, or the atom :infinity to wait indefinitely. Defaults to 5000. See Supervisor.stop/3.
See the "Shared options" section in the module documentation for more options.
@callback stop(dynamic_cache(), opts()) :: :ok
Same as stop/1 but stops the cache instance given in the first argument
dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
@callback with_dynamic_cache(dynamic_cache(), fun()) :: any()
Invokes the function fun using the given dynamic cache.
Example
MyCache.with_dynamic_cache(:my_cache, fn ->
MyCache.put("foo", "var")
end)

See get_dynamic_cache/0 and put_dynamic_cache/1.
KV API
@callback decr(key(), amount :: integer(), opts()) :: ok_error_tuple(integer())
Decrements the counter stored at key by the given amount and returns
the current count as {:ok, count}.
If amount < 0, the value is incremented by that amount instead
(opposite to incr/3).
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
If the key doesn't exist, the TTL is set. Otherwise, only the counter value is updated, keeping the TTL set for the first time.
Options
- :ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
- :default (integer/0) - If the key is not present in the cache, the default value is inserted as the key's initial value before it is decremented. The default value is 0.
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.decr(:a)
{:ok, -1}
iex> MyCache.decr(:a, 2)
{:ok, -3}
iex> MyCache.decr(:a, -1)
{:ok, -2}
iex> MyCache.decr(:missing_key, 2, default: 10)
{:ok, 8}
@callback decr(dynamic_cache(), key(), amount :: integer(), opts()) :: ok_error_tuple(integer())
Same as decr/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.decr(MyCache1, :a, 1, [])
{:ok, -1}
Same as decr/3 but raises an exception if an error occurs.
Examples
iex> MyCache.decr!(:a)
-1
@callback decr!(dynamic_cache(), key(), amount :: integer(), opts()) :: integer()
Same as decr/4 but raises an exception if an error occurs.
@callback delete(key(), opts()) :: :ok | error_tuple()
Deletes the entry in the cache for a specific key.
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
Options
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.delete(:a)
:ok
iex> MyCache.get!(:a)
nil
iex> MyCache.delete(:nonexistent)
:ok
@callback delete(dynamic_cache(), key(), opts()) :: :ok | error_tuple()
Same as delete/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.delete(MyCache1, :a, [])
:ok
Same as delete/2 but raises an exception if an error occurs.
@callback delete!(dynamic_cache(), key(), opts()) :: :ok
Same as delete/3 but raises an exception if an error occurs.
@callback expire(key(), ttl :: timeout(), opts()) :: ok_error_tuple(boolean())
Returns {:ok, true} if the given key exists and the new ttl is
successfully updated; otherwise, {:ok, false} is returned.
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
Options
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.expire(:a, :timer.hours(1))
{:ok, true}
iex> MyCache.expire(:a, :infinity)
{:ok, true}
iex> MyCache.expire(:b, 5)
{:ok, false}
@callback expire(dynamic_cache(), key(), ttl :: timeout(), opts()) :: ok_error_tuple(boolean())
Same as expire/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.expire(MyCache1, :a, :timer.hours(1), [])
{:ok, false}
Same as expire/3 but raises an exception if an error occurs.
Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.expire!(:a, :timer.hours(1))
true
@callback expire!(dynamic_cache(), key(), ttl :: timeout(), opts()) :: boolean()
Same as expire/4 but raises an exception if an error occurs.
@callback fetch(key(), opts()) :: ok_error_tuple(value(), fetch_error_reason())
Fetches the value for a specific key in the cache.
If the cache contains the given key, then its value is returned
in the shape of {:ok, value}.
If there's an error with executing the command, {:error, reason}
is returned. reason is the cause of the error and can be
Nebulex.KeyError if the cache does not contain key,
Nebulex.Error otherwise.
Options
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put("foo", "bar")
:ok
iex> MyCache.fetch("foo")
{:ok, "bar"}
# Key not found error
iex> {:error, %Nebulex.KeyError{key: "bar"} = e} = MyCache.fetch("bar")
iex> e.reason
:not_found
@callback fetch(dynamic_cache(), key(), opts()) :: ok_error_tuple(value(), fetch_error_reason())
Same as fetch/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.put("foo", "bar")
:ok
iex> MyCache.fetch(MyCache1, "foo", [])
{:ok, "bar"}
Same as fetch/2 but raises Nebulex.KeyError if the cache doesn't contain
key or Nebulex.Error if another error occurs while executing the command.
Examples
iex> MyCache.put("foo", "bar")
:ok
iex> MyCache.fetch!("foo")
"bar"
@callback fetch!(dynamic_cache(), key(), opts()) :: value()
Same as fetch/3 but raises Nebulex.KeyError if the cache doesn't contain
key or Nebulex.Error if another error occurs while executing the command.
@callback fetch_or_store(key(), fetch_or_store_fun(), opts()) :: ok_error_tuple(value())
Fetches the value for the given key from the cache. If the key is not
present, the provided anonymous function is executed.
If the function returns {:ok, value}, the value is cached under the given
key and returned as the result. If it returns {:error, reason}, the value
is not cached, and the error is returned as is.
If the function returns any other value, a RuntimeError is raised.
This function is useful when you want to cache only successful computations, such as API calls or database queries that may fail. Failed operations are not cached, allowing subsequent calls to retry the operation.
Options
Since the put operation is used under the hood, the following options are
supported:
- :ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
- :keep_ttl (boolean/0) - Indicates whether to retain the time to live associated with the key. Otherwise, the value in the :ttl option overwrites the existing one. The default value is false.
See the "Shared options" section in the module documentation for a list of supported options.
fetch_or_store atomicity
This operation is not atomic. It uses fetch and put under the hood,
but the function is executed outside of the cache transaction. If you need
to ensure atomicity, consider wrapping the function in a transaction/2
call.
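A sketch of such a wrapper, assuming the configured adapter supports transactions and that transaction/2 takes the function as its first argument (load_config/0 is a hypothetical helper):

MyCache.transaction(fn ->
  # The fetch and the conditional store now run inside one transaction.
  MyCache.fetch_or_store("app:config", fn -> {:ok, load_config()} end)
end)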
Examples
Basic usage
iex> MyCache.fetch_or_store("foo", fn -> {:ok, "bar"} end)
{:ok, "bar"}
iex> MyCache.fetch_or_store("foo", fn -> {:ok, "new value"} end)
{:ok, "bar"} # Returns cached value, function not executedError handling
When the function returns an error, the value is not cached:
iex> MyCache.fetch_or_store("user", fn -> {:error, "not found"} end)
{:error, %Nebulex.Error{reason: "not found"}}
iex> MyCache.fetch("user")
{:error, %Nebulex.KeyError{key: "user"}} # Key was not cachedInvalid return values
The function must return either {:ok, value} or {:error, reason}:
iex> MyCache.fetch_or_store("foo", fn -> :invalid end)
** (RuntimeError) the supplied lambda function must return ...

With TTL
iex> MyCache.fetch_or_store(
...> "session",
...> fn -> {:ok, "data"} end,
...> ttl: :timer.minutes(5)
...> )
{:ok, "data"}Real-world example: API calls
def get_user_from_api(user_id) do
MyCache.fetch_or_store(
"user:#{user_id}",
fn ->
case HTTPClient.get("/users/#{user_id}") do
{:ok, %{status: 200, body: data}} -> {:ok, data}
{:ok, %{status: 404}} -> {:error, :not_found}
{:error, reason} -> {:error, reason}
end
end,
ttl: :timer.minutes(10)
)
end

Real-world example: Database queries
def fetch_product(id) do
MyCache.fetch_or_store(
"product:#{id}",
fn ->
case Repo.get(Product, id) do
%Product{} = product -> {:ok, product}
nil -> {:error, :not_found}
end
end,
ttl: :timer.minutes(5)
)
end

When to use fetch_or_store
Use fetch_or_store/3 when:
- The computation may fail and you don't want to cache errors.
- Working with external APIs that may return error responses.
- Fetching from databases where records might not exist.
- Performing validations where only valid results should be cached.
- Any scenario where caching failures would be problematic.
For computations that always produce a valid result to cache (even if it's
an error tuple), consider using get_or_store/3 instead.
@callback fetch_or_store(dynamic_cache(), key(), fetch_or_store_fun(), opts()) :: ok_error_tuple(value())
Same as fetch_or_store/3, but the command is executed on the cache
instance given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.fetch_or_store(MyCache1, "key", fn -> {:ok, "value"} end, [])
{:ok, "value"}
@callback fetch_or_store!(key(), fetch_or_store_fun(), opts()) :: value()
Same as fetch_or_store/3 but raises an exception if an error occurs.
Returns the unwrapped value on success, or raises Nebulex.Error if the
function returns {:error, reason}.
Examples
iex> MyCache.fetch_or_store!("key", fn -> {:ok, "value"} end)
"value"
iex> MyCache.fetch_or_store!("key", fn -> {:error, :error} end)
** (Nebulex.Error) fetch_or_store command failed with reason: :error
iex> MyCache.fetch_or_store!("key", fn -> :invalid end)
** (RuntimeError) the supplied lambda function must return ...
@callback fetch_or_store!(dynamic_cache(), key(), fetch_or_store_fun(), opts()) :: value()
Same as fetch_or_store!/3 but the command is executed on the cache
instance given at the first argument dynamic_cache.
@callback get(key(), default :: value(), opts()) :: ok_error_tuple(value())
Gets a value from the cache where the key matches the given key.
If the cache contains the given key its value is returned as
{:ok, value}.
If the cache does not contain key, {:ok, default} is returned.
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
Options
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put("foo", "bar")
:ok
iex> MyCache.get("foo")
{:ok, "bar"}
iex> MyCache.get(:nonexistent)
{:ok, nil}
iex> MyCache.get(:nonexistent, :default)
{:ok, :default}
@callback get(dynamic_cache(), key(), default :: value(), opts()) :: ok_error_tuple(value())
Same as get/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.get(MyCache1, "key", nil, [])
{:ok, nil}
Same as get/3 but raises an exception if an error occurs.
@callback get!(dynamic_cache(), key(), default :: value(), opts()) :: value()
Same as get/4 but raises an exception if an error occurs.
@callback get_and_update(key(), (value() -> {current_value, new_value} | :pop), opts()) :: ok_error_tuple({current_value, new_value}) when current_value: value(), new_value: value()
Gets the value from key and updates it, all in one pass.
fun is called with the current cached value under key (or nil if key
hasn't been cached) and must return a two-element tuple: the current value
(the retrieved value, which can be operated on before being returned) and
the new value to be stored under key. fun may also return :pop, which
means the current value shall be removed from the cache and returned.
This function returns:
- {:ok, {current_value, new_value}} - The current_value is the current cached value and new_value the updated one returned by fun.
- {:error, reason} - An error occurred executing the command. reason is the cause of the error.
Options
- :ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
- :keep_ttl (boolean/0) - Indicates whether to retain the time to live associated with the key. Otherwise, the value in the :ttl option overwrites the existing one. The default value is false.
See the "Shared options" section in the module documentation for more options.
get_and_update atomicity
This operation is not atomic. It uses get and put (or delete for
:pop) under the hood, but the function is executed outside of the cache
transaction. If you need to ensure atomicity, consider wrapping the function
in a transaction/2 call.
Examples
Update nonexistent key:
iex> MyCache.get_and_update(:a, fn current_value ->
...> {current_value, "value!"}
...> end)
{:ok, {nil, "value!"}}Update existing key:
iex> MyCache.get_and_update(:a, fn current_value ->
...> {current_value, "new value!"}
...> end)
{:ok, {"value!", "new value!"}}Pop/remove value if exist:
iex> MyCache.get_and_update(:a, fn _ -> :pop end)
{:ok, {"new value!", nil}}Pop/remove nonexistent key:
iex> MyCache.get_and_update(:b, fn _ -> :pop end)
{:ok, {nil, nil}}
@callback get_and_update( dynamic_cache(), key(), (value() -> {current_value, new_value} | :pop), opts() ) :: ok_error_tuple({current_value, new_value}) when current_value: value(), new_value: value()
Same as get_and_update/3, but the command is executed on the cache
instance given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.get_and_update(MyCache1, :a, &{&1, "value!"}, [])
{:ok, {nil, "value!"}}
@callback get_and_update!(key(), (value() -> {current_value, new_value} | :pop), opts()) :: {current_value, new_value} when current_value: value(), new_value: value()
Same as get_and_update/3 but raises an exception if an error occurs.
Examples
iex> MyCache.get_and_update!(:a, &{&1, "value!"})
{nil, "value!"}
@callback get_and_update!( dynamic_cache(), key(), (value() -> {current_value, new_value} | :pop), opts() ) :: {current_value, new_value} when current_value: value(), new_value: value()
Same as get_and_update/4 but raises an exception if an error occurs.
@callback get_or_store(key(), get_or_store_fun(), opts()) :: ok_error_tuple(value())
Gets the value for the given key from the cache. If the key is not
present, the provided anonymous function is executed and its result
is always cached under the given key.
Unlike fetch_or_store/3, this function always caches the result
regardless of whether it's a success or error tuple. This makes it
ideal for caching expensive computations, implementing negative caching
patterns, or caching error states temporarily.
Options
Since the put operation is used under the hood, the following options are
supported:
- :ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
- :keep_ttl (boolean/0) - Indicates whether to retain the time to live associated with the key. Otherwise, the value in the :ttl option overwrites the existing one. The default value is false.
See the "Shared options" section in the module documentation for a list of supported options.
get_or_store atomicity
This operation is not atomic. It uses fetch and put under the hood,
but the function is executed outside of the cache transaction. If you need
to ensure atomicity, consider wrapping the function in a transaction/2
call.
Examples
Basic usage - caching any value
The function can return any value, and it will be cached:
iex> MyCache.get_or_store("api_result", fn -> "data" end)
{:ok, "data"}
iex> MyCache.get_or_store("api_result", fn -> "new data" end)
{:ok, "data"} # Returns cached value, function not executedCaching tuples
This function caches the exact return value, including tuples:
iex> MyCache.get_or_store("api_result_tuple", fn -> {:ok, "data"} end)
{:ok, {:ok, "data"}}
iex> MyCache.get_or_store("api_error", fn -> {:error, "rate_limited"} end)
{:ok, {:error, "rate_limited"}}With TTL
iex> MyCache.get_or_store(
...> "with_ttl",
...> fn -> "value" end,
...> ttl: :timer.minutes(5)
...> )
{:ok, "value"}Real-world example: Negative caching
Cache "not found" results to avoid repeated database queries:
def get_user(user_id) do
MyCache.get_or_store!(
"user:#{user_id}",
fn ->
case Repo.get(User, user_id) do
%User{} = user ->
{:ok, user}
nil ->
# Cache the "not found" result
{:error, :not_found}
end
end,
ttl: :timer.minutes(5)
)
end
# First call - queries database, caches {:error, :not_found}
get_user(999) #=> {:error, :not_found}
# Second call - returns cached error, no database query
get_user(999) #=> {:error, :not_found}

Real-world example: Rate limit protection
Cache error responses temporarily to prevent hammering external services:
def fetch_from_api(endpoint) do
MyCache.get_or_store(
"api:#{endpoint}",
fn ->
case ExternalAPI.call(endpoint) do
{:ok, data} ->
{:ok, data}
{:error, :rate_limited} = error ->
# Cache rate limit error
error
{:error, _} = error ->
error
end
end,
ttl: :timer.seconds(30)
)
end

Real-world example: Expensive computations
Cache results of expensive operations regardless of outcome:
def calculate_report(params) do
MyCache.get_or_store!(
"report:#{hash(params)}",
fn ->
%{
total_revenue: calculate_revenue(params),
user_count: count_users(params),
metrics: compute_metrics(params)
}
end,
ttl: :timer.hours(1)
)
end

When to use get_or_store vs fetch_or_store
Use get_or_store/3 when:
- You want to cache all results, including errors.
- Implementing negative caching (caching "not found" states).
- The computation is expensive and you want to avoid repeating it.
- Caching error states temporarily for rate limiting or backoff strategies.
- Working with pure functions that always produce valid output.
Use fetch_or_store/3 when:
- The function may fail and you want to retry on subsequent calls.
- Errors are transient and should not be cached.
- Working with external APIs where temporary failures should trigger retries.
- You only want to cache successful results.
Comparison
| Feature | get_or_store | fetch_or_store |
|---|---|---|
| Caches all values | ✅ Yes | ❌ No |
| Caches errors | ✅ Yes | ❌ No |
| Function return type | Any value | {:ok, value} or {:error, reason} |
| Type validation | None | Validates return type |
| Best for | Negative caching, pure computations | Fallible operations |
@callback get_or_store(dynamic_cache(), key(), get_or_store_fun(), opts()) :: ok_error_tuple(value())
Same as get_or_store/3, but the command is executed on the cache
instance given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.get_or_store(MyCache, "key", fn -> "value" end, [])
{:ok, "value"}
@callback get_or_store!(key(), get_or_store_fun(), opts()) :: value()
Same as get_or_store/3 but raises an exception if a cache error occurs
(e.g., the adapter failed executing the command and returns an error).
Note that this function returns the unwrapped cached value, which may itself be an error tuple if that's what the function returned.
Examples
iex> MyCache.get_or_store!("key", fn -> "value" end)
"value"
iex> MyCache.get_or_store!("key", fn -> {:ok, "value"} end)
{:ok, "value"}
iex> MyCache.get_or_store!("key", fn -> {:error, "error"} end)
{:error, "error"}Difference from fetch_or_store!
Unlike fetch_or_store!/3 which raises when the function returns an error,
get_or_store!/3 only raises when there's a cache operation error. The
function's return value (even if it's an error tuple) is cached and returned.
# get_or_store! caches and returns error tuples
iex> MyCache.get_or_store!("key", fn -> {:error, :not_found} end)
{:error, :not_found}
# fetch_or_store! raises when function returns error
iex> MyCache.fetch_or_store!("key", fn -> {:error, :not_found} end)
** (Nebulex.Error) fetch_or_store command failed with reason: :not_found
@callback get_or_store!(dynamic_cache(), key(), get_or_store_fun(), opts()) :: value()
Same as get_or_store!/3 but the command is executed on the cache
instance given at the first argument dynamic_cache.
@callback has_key?(key(), opts()) :: ok_error_tuple(boolean())
Determines if the cache contains an entry for the specified key.
More formally, it returns {:ok, true} if the cache contains the given key.
If the cache doesn't contain key, {:ok, false} is returned.
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
Options
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.has_key?(:a)
{:ok, true}
iex> MyCache.has_key?(:b)
{:ok, false}
@callback has_key?(dynamic_cache(), key(), opts()) :: ok_error_tuple(boolean())
Same as has_key?/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.has_key?(MyCache1, :a, [])
{:ok, false}
@callback incr(key(), amount :: integer(), opts()) :: ok_error_tuple(integer())
Increments the counter stored at key by the given amount and returns
the current count as {:ok, count}.
If amount < 0, the value is decremented by that amount instead.
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
If the key doesn't exist, the TTL is set. Otherwise, only the counter value is updated, keeping the TTL set for the first time.
Options
- :ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
- :default (integer/0) - If the key is not present in the cache, the default value is inserted as the key's initial value before it is incremented. The default value is 0.
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.incr(:a)
{:ok, 1}
iex> MyCache.incr(:a, 2)
{:ok, 3}
iex> MyCache.incr(:a, -1)
{:ok, 2}
iex> MyCache.incr(:missing_key, 2, default: 10)
{:ok, 12}
# Initialize the counter with a TTL
iex> MyCache.incr(:new_counter, 10, ttl: :timer.seconds(1))
{:ok, 10}
@callback incr(dynamic_cache(), key(), amount :: integer(), opts()) :: ok_error_tuple(integer())
Same as incr/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.incr(MyCache1, :a, 1, [])
{:ok, 1}
Same as incr/3 but raises an exception if an error occurs.
Examples
iex> MyCache.incr!(:a)
1
iex> MyCache.incr!(:a, 2)
3
@callback incr!(dynamic_cache(), key(), amount :: integer(), opts()) :: integer()
Same as incr/4 but raises an exception if an error occurs.
@callback put(key(), value(), opts()) :: :ok | error_tuple()
Puts the given value under key into the cache.
If key already holds an entry, it is overwritten. Any previous TTL
(time to live) associated with the key is discarded on a successful
put operation.
Returns :ok if successful; {:error, reason} otherwise.
Options
- :ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
- :keep_ttl (boolean/0) - Indicates whether to retain the time to live associated with the key. Otherwise, the value in the :ttl option overwrites the existing one. The default value is false.
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put("foo", "bar")
:ok

Putting entries with specific time-to-live:
iex> MyCache.put("foo", "bar", ttl: :timer.seconds(10))
:ok
iex> MyCache.put("foo", "bar", ttl: :timer.hours(1))
:ok
iex> MyCache.put("foo", "bar", ttl: :timer.minutes(1))
:ok
iex> MyCache.put("foo", "bar", ttl: :timer.seconds(30))
:ok

To keep the current TTL in case the key already exists:
iex> MyCache.put("foo", "bar", keep_ttl: true)
:ok
@callback put(dynamic_cache(), key(), value(), opts()) :: :ok | error_tuple()
Same as put/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.put(MyCache1, "foo", "bar", [])
:ok
iex> MyCache.put(MyCache2, "foo", "bar", ttl: :timer.hours(1))
:ok
Same as put/3 but raises an exception if an error occurs.
@callback put!(dynamic_cache(), key(), value(), opts()) :: :ok
Same as put/4 but raises an exception if an error occurs.
@callback put_all(entries(), opts()) :: :ok | error_tuple()
Puts the given entries (key/value pairs) into the cache. It replaces
existing values with new values (just as regular put).
Returns :ok if successful; {:error, reason} otherwise.
Options
- :ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put_all(apples: 3, bananas: 1)
:ok
iex> MyCache.put_all(%{apples: 2, oranges: 1}, ttl: :timer.hours(1))
:ok

Atomic operation
Ideally, this operation should be atomic, so all given keys are put at once. But it depends purely on the adapter's implementation and the backend used internally by the adapter. Hence, reviewing the adapter's documentation is highly recommended.
@callback put_all(dynamic_cache(), entries(), opts()) :: :ok | error_tuple()
Same as put_all/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.put_all(MyCache1, [apples: 3, bananas: 1], [])
:ok
iex> MyCache.put_all(MyCache1, %{oranges: 1}, ttl: :timer.hours(1))
:ok
Same as put_all/2 but raises an exception if an error occurs.
@callback put_all!(dynamic_cache(), entries(), opts()) :: :ok
Same as put_all/3 but raises an exception if an error occurs.
@callback put_new(key(), value(), opts()) :: ok_error_tuple(boolean())
Puts the given value under key into the cache only if it does not
already exist.
Returns {:ok, true} if the value is stored; otherwise, {:ok, false}
is returned.
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
Options
- :ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put_new("foo", "bar")
{:ok, true}
iex> MyCache.put_new("foo", "bar", ttl: :timer.hours(1))
{:ok, false}
@callback put_new(dynamic_cache(), key(), value(), opts()) :: ok_error_tuple(boolean())
Same as put_new/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.put_new(MyCache1, "foo", "bar", [])
{:ok, true}
iex> MyCache.put_new(MyCache1, "foo", "bar", ttl: :timer.hours(1))
{:ok, false}
Same as put_new/3 but raises an exception if an error occurs.
Examples
iex> MyCache.put_new!("foo", "bar")
true
iex> MyCache.put_new!("foo", "bar", ttl: :timer.hours(1))
false
@callback put_new!(dynamic_cache(), key(), value(), opts()) :: boolean()
Same as put_new/4 but raises an exception if an error occurs.
@callback put_new_all(entries(), opts()) :: ok_error_tuple(boolean())
Puts the given entries (key/value pairs) into the cache. It will not
perform any operation if even a single key already exists.
Returns {:ok, true} if all entries are successfully stored, or
{:ok, false} if no key was set (at least one key already existed).
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
Options
- :ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put_new_all(apples: 3, bananas: 1)
{:ok, true}
iex> MyCache.put_new_all(%{apples: 3, oranges: 1}, ttl: :timer.hours(1))
{:ok, false}

Atomic operation
Ideally, this operation should be atomic, so all given keys are put at once. But it depends purely on the adapter's implementation and the backend used internally by the adapter. Hence, reviewing the adapter's documentation is highly recommended.
@callback put_new_all(dynamic_cache(), entries(), opts()) :: ok_error_tuple(boolean())
Same as put_new_all/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.put_new_all(MyCache1, [apples: 3, bananas: 1], [])
{:ok, true}
iex> MyCache.put_new_all(MyCache1, %{apples: 3, oranges: 1}, ttl: :timer.seconds(10))
{:ok, false}
Same as put_new_all/2 but raises an exception if an error occurs.
Examples
iex> MyCache.put_new_all!(apples: 3, bananas: 1)
true
iex> MyCache.put_new_all!(%{apples: 3, oranges: 1}, ttl: :timer.hours(1))
false
@callback put_new_all!(dynamic_cache(), entries(), opts()) :: boolean()
Same as put_new_all/3 but raises an exception if an error occurs.
@callback replace(key(), value(), opts()) :: ok_error_tuple(boolean())
Alters the entry stored under key, but only if the entry already exists
in the cache.
Returns {:ok, true} if the value is replaced. Otherwise, {:ok, false}
is returned.
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
Options
- :ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
- :keep_ttl (boolean/0) - Indicates whether to retain the time to live associated with the key. Otherwise, the value in the :ttl option overwrites the existing one. The default value is true.
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.replace("foo", "bar")
{:ok, false}
iex> MyCache.put_new("foo", "bar")
{:ok, true}
iex> MyCache.replace("foo", "bar2")
{:ok, true}

Update current value and TTL:
iex> MyCache.replace("foo", "bar3", ttl: :timer.seconds(10))
{:ok, true}

To keep the current TTL:
iex> MyCache.replace("foo", "bar4", keep_ttl: true)
{:ok, true}
@callback replace(dynamic_cache(), key(), value(), opts()) :: ok_error_tuple(boolean())
Same as replace/3, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.replace(MyCache1, "foo", "bar", [])
{:ok, false}
iex> MyCache.put_new("foo", "bar")
{:ok, true}
iex> MyCache.replace(MyCache1, "foo", "bar", ttl: :timer.hours(1))
{:ok, true}
Same as replace/3 but raises an exception if an error occurs.
Examples
iex> MyCache.replace!("foo", "bar")
false
iex> MyCache.put_new!("foo", "bar")
true
iex> MyCache.replace!("foo", "bar2")
true
@callback replace!(dynamic_cache(), key(), value(), opts()) :: boolean()
Same as replace/4 but raises an exception if an error occurs.
@callback take(key(), opts()) :: ok_error_tuple(value(), fetch_error_reason())
Removes and returns the value associated with key in the cache.
If key is present in the cache, its value is removed and returned as
{:ok, value}.
If there's an error with executing the command, {:error, reason}
is returned. reason is the cause of the error and can be
Nebulex.KeyError if the cache does not contain key or
Nebulex.Error otherwise.
Options
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.take(:a)
{:ok, 1}
iex> {:error, %Nebulex.KeyError{key: :a} = e} = MyCache.take(:a)
iex> e.reason
:not_found
@callback take(dynamic_cache(), key(), opts()) :: ok_error_tuple(value(), fetch_error_reason())
Same as take/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.take(MyCache1, :a, [])
{:ok, 1}
Same as take/2 but raises an exception if an error occurs.
Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.take!(:a)
1
@callback take!(dynamic_cache(), key(), opts()) :: value()
Same as take/3 but raises an exception if an error occurs.
@callback touch(key(), opts()) :: ok_error_tuple(boolean())
Returns {:ok, true} if the given key exists and the last access time is
successfully updated; otherwise, {:ok, false} is returned.
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
Options
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.touch(:a)
{:ok, true}
iex> MyCache.touch(:b)
{:ok, false}
@callback touch(dynamic_cache(), key(), opts()) :: ok_error_tuple(boolean())
Same as touch/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.touch(MyCache1, :a, [])
{:ok, false}
Same as touch/2 but raises an exception if an error occurs.
Examples
iex> MyCache.put(:a, 1)
:ok
iex> MyCache.touch!(:a)
true
@callback touch!(dynamic_cache(), key(), opts()) :: boolean()
Same as touch/3 but raises an exception if an error occurs.
@callback ttl(key(), opts()) :: ok_error_tuple(timeout(), fetch_error_reason())
Returns the remaining time-to-live for the given key.
If key is present in the cache, its remaining TTL is returned as
{:ok, ttl}.
If there's an error with executing the command, {:error, reason}
is returned. reason is the cause of the error and can be
Nebulex.KeyError if the cache does not contain key or
Nebulex.Error otherwise.
Options
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyCache.put(:a, 1, ttl: :timer.seconds(5))
:ok
iex> MyCache.put(:b, 2)
:ok
iex> MyCache.ttl(:a)
{:ok, _remaining_ttl}
iex> MyCache.ttl(:b)
{:ok, :infinity}
iex> {:error, %Nebulex.KeyError{key: :c} = e} = MyCache.ttl(:c)
iex> e.reason
:not_found
@callback ttl(dynamic_cache(), key(), opts()) :: ok_error_tuple(timeout(), fetch_error_reason())
Same as ttl/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.put(:a, 1, ttl: :timer.seconds(5))
:ok
iex> MyCache.ttl(MyCache1, :a, [])
{:ok, _remaining_ttl}
Same as ttl/2 but raises an exception if an error occurs.
Examples
iex> MyCache.put(:a, 1, ttl: :timer.seconds(5))
:ok
iex> MyCache.ttl!(:a)
_remaining_ttl
@callback ttl!(dynamic_cache(), key(), opts()) :: timeout()
Same as ttl/3 but raises an exception if an error occurs.
@callback update(key(), initial :: value(), (value() -> value()), opts()) :: ok_error_tuple(value())
Updates the key in the cache with the given function.
If key is present in the cache, the existing value is passed to fun and
its result is used as the updated value of key. If key is not present in
the cache, initial is inserted as the value of key. The initial value
will not be passed through the update function.
This function returns:
- {:ok, value} - The value associated with the key is updated.
- {:error, reason} - An error occurred executing the command. reason is the cause.
Options
:ttl (timeout/0) - The key's time-to-live (or expiry time) in milliseconds (:infinity to store indefinitely). The default value is :infinity.
:keep_ttl (boolean/0) - Indicates whether to retain the time to live associated with the key. Otherwise, the value in the :ttl option overwrites the existing one. The default value is false.
See the "Shared options" section in the module documentation for more options.
Update atomicity
This operation is not atomic. It uses fetch and put under the hood,
but the function is executed outside of the cache transaction. If you need
to ensure atomicity, consider wrapping the function in a transaction/2
call.
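For example, a minimal sketch (assuming the adapter supports transaction/2 with the :keys lock option, such as Nebulex.Adapters.Local) that runs the read-modify-write cycle while holding a lock on the key being updated:
MyCache.transaction(
  fn ->
    # The fetch/put cycle performed by update/4 runs while :counter is locked.
    MyCache.update(:counter, 1, &(&1 + 1))
  end,
  keys: [:counter]
)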
Examples
iex> MyCache.update(:a, 1, &(&1 * 2))
{:ok, 1}
iex> MyCache.update(:a, 1, &(&1 * 2))
{:ok, 2}
@callback update(dynamic_cache(), key(), initial :: value(), (value() -> value()), opts()) :: ok_error_tuple(value())
Same as update/4, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.update(MyCache1, :a, 1, &(&1 * 2), [])
{:ok, 1}
Same as update/4 but raises an exception if an error occurs.
Examples
iex> MyCache.update!(:a, 1, &(&1 * 2))
1
@callback update!( dynamic_cache(), key(), initial :: value(), (value() -> value()), opts() ) :: value()
Same as update/5 but raises an exception if an error occurs.
Query API
@callback count_all(query_spec(), opts()) :: ok_error_tuple(non_neg_integer())
Counts all entries matching the query specified by the given query_spec.
See get_all/2 for more information about the query_spec.
This function returns:
- {:ok, count} - The cache executes the query successfully and returns the count of the matched entries.
- {:error, reason} - An error occurred executing the command. reason is the cause.
May raise Nebulex.QueryError if query validation fails.
Options
See the "Shared options" section in the module documentation for more options.
Examples
Populate the cache with some entries:
iex> Enum.each(1..5, &MyCache.put(&1, &1 * 2))
:ok
Count all entries in the cache (cache size):
iex> MyCache.count_all()
{:ok, 5}
Count all entries that match the given query, assuming we are using
Nebulex.Adapters.Local adapter:
iex> query = [{{:_, :"$1", :"$2", :_, :_}, [{:>, :"$2", 5}], [true]}]
iex> {:ok, count} = MyCache.count_all(query: query)
See get_all/2 for more query examples.
@callback count_all(dynamic_cache(), query_spec(), opts()) :: ok_error_tuple(non_neg_integer())
Same as count_all/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.count_all(MyCache1, [], [])
{:ok, 0}
@callback count_all!(query_spec(), opts()) :: non_neg_integer()
Same as count_all/2 but raises an exception if an error occurs.
Examples
iex> MyCache.count_all!()
0
@callback count_all!(dynamic_cache(), query_spec(), opts()) :: non_neg_integer()
Same as count_all/3 but raises an exception if an error occurs.
@callback delete_all(query_spec(), opts()) :: ok_error_tuple(non_neg_integer())
Deletes all entries matching the query specified by the given query_spec.
See get_all/2 for more information about the query_spec.
This function returns:
- {:ok, deleted_count} - The cache executes the query successfully and returns the deleted entries count.
- {:error, reason} - An error occurred executing the command. reason is the cause.
May raise Nebulex.QueryError if query validation fails.
Options
See the "Shared options" section in the module documentation for more options.
Examples
Populate the cache with some entries:
iex> Enum.each(1..5, &MyCache.put(&1, &1 * 2))
:ok
Delete all (default args):
iex> MyCache.delete_all()
{:ok, 5}
Delete only the requested keys (bulk delete):
iex> MyCache.delete_all(in: [1, 2, 10])
{:ok, 2}
Delete all entries that match the given query, assuming we are using
Nebulex.Adapters.Local adapter:
iex> query = [{{:_, :"$1", :"$2", :_, :_}, [{:>, :"$2", 5}], [true]}]
iex> {:ok, deleted_count} = MyCache.delete_all(query: query)
See get_all/2 for more query examples.
@callback delete_all(dynamic_cache(), query_spec(), opts()) :: ok_error_tuple(non_neg_integer())
Same as delete_all/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.delete_all(MyCache1, [], [])
{:ok, 0}
@callback delete_all!(query_spec(), opts()) :: integer()
Same as delete_all/2 but raises an exception if an error occurs.
Examples
iex> MyCache.delete_all!()
0
@callback delete_all!(dynamic_cache(), query_spec(), opts()) :: integer()
Same as delete_all/3 but raises an exception if an error occurs.
@callback get_all(query_spec(), opts()) :: ok_error_tuple([any()])
Fetches all entries from the cache matching the query specified by the given query_spec.
This function returns:
- {:ok, result} - The cache executes the query successfully. The result is a list with the matched entries.
- {:error, reason} - An error occurred executing the command. reason is the cause.
May raise Nebulex.QueryError if query validation fails.
Query specification
There are two ways to use the Query API:
- Fetch multiple keys (all at once), like a bulk fetch.
- Fetch all entries from the cache matching a given query, more like a search (this is the most generic option).
Here is where the query_spec argument comes in to specify the type of query
to run.
The query_spec argument is a keyword/0 with options defining the desired
query. The query_spec fields or options are:
:in (list of term/0) - The list of keys to fetch. The value to return depends on the :select option. The :in option is a predefined query meant to fetch multiple keys simultaneously. If present, it overrides the :query option and instructs the underlying adapter to match the entries associated with the set of keys requested. Every key that does not hold a value or does not exist is ignored and not added to the returned list.
:query (term/0) - The query specification to match entries in the cache. If present and set to nil, it matches all entries in the cache. The nil value is predefined and all adapters must support it. Other than that, the value depends entirely on the adapter. The adapter is responsible for defining the query or matching specification. For example, the Nebulex.Adapters.Local adapter supports the "ETS Match Spec". The default value is nil.
:select - Selects which fields to choose from the entry. The possible values are:
- {:key, :value} - (Default) Selects the key and the value from the entry. They are returned as a tuple {key, value}.
- :key - Selects the key from the entry.
- :value - Selects the value from the entry.
- :entry - Selects the whole entry with its fields (use it carefully). The adapter defines the entry, its structure, and its fields. Therefore, Nebulex recommends checking the adapter's documentation to understand the entry's structure and to verify whether this select option is supported.
The default value is {:key, :value}.
Fetching multiple keys
While you can perform any query using the :query option (even fetching
multiple keys), the option :in is preferable. For example:
MyCache.get_all(in: ["a", "list", "of", "keys"])
Fetching all entries matching a given query
As mentioned above, the option :query is the most generic way to match
entries in a cache. This option allows users to write custom queries
to be executed by the underlying adapter.
For matching all cached entries, you can skip the :query option or set it
to nil instead (the default). For example:
MyCache.get_all() #=> Equivalent to MyCache.get_all(query: nil)
Using a custom query:
MyCache.get_all(query: query_supported_by_the_adapter)
Nebulex recommends checking the adapter's documentation when using this option.
Options
See the "Shared options" section in the module documentation for more options.
Examples
Populate the cache with some entries:
iex> MyCache.put_all(a: 1, b: 2, c: 3)
:ok
Fetch all entries in the cache:
iex> MyCache.get_all()
{:ok, [a: 1, b: 2, c: 3]}
Fetch all entries returning only the keys:
iex> MyCache.get_all(select: :key)
{:ok, [:a, :b, :c]}
Fetch all entries returning only the values:
iex> MyCache.get_all(select: :value)
{:ok, [1, 2, 3]}
Fetch only the requested keys (bulk fetch):
iex> MyCache.get_all(in: [:a, :b, :d])
{:ok, [a: 1, b: 2]}
Fetch the requested keys returning only the keys or values:
iex> MyCache.get_all(in: [:a, :b, :d], select: :key)
{:ok, [:a, :b]}
iex> MyCache.get_all(in: [:a, :b, :d], select: :value)
{:ok, [1, 2]}
Query examples for Nebulex.Adapters.Local adapter
The Nebulex.Adapters.Local adapter supports "ETS Match Spec" as query
values (in addition to nil or the option :in).
You must know the adapter's entry structure for match-spec queries, which is
{:entry, key, value, touched, ttl}. For example, one may write the following
query:
iex> match_spec = [
...> {
...> {:entry, :"$1", :"$2", :_, :_},
...> [{:>, :"$2", 1}],
...> [{{:"$1", :"$2"}}]
...> }
...> ]
iex> MyCache.get_all(query: match_spec)
{:ok, [b: 2, c: 3]}
Beyond basic queries
While the examples above show basic query patterns, the
Nebulex.Adapters.Local adapter provides several advanced features for more
sophisticated cache management, including tags, references, query helpers,
and more. For detailed information on all available capabilities, see the
Local Adapter documentation.
@callback get_all(dynamic_cache(), query_spec(), opts()) :: ok_error_tuple([any()])
Same as get_all/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.get_all(MyCache1, [], [])
{:ok, _matched_entries}
@callback get_all!(query_spec(), opts()) :: [any()]
Same as get_all/2 but raises an exception if an error occurs.
Examples
iex> MyCache.put_all(a: 1, b: 2, c: 3)
:ok
iex> MyCache.get_all!()
[a: 1, b: 2, c: 3]
iex> MyCache.get_all!(in: [:a, :b])
[a: 1, b: 2]
@callback get_all!(dynamic_cache(), query_spec(), opts()) :: [any()]
Same as get_all/3 but raises an exception if an error occurs.
@callback stream(query_spec(), opts()) :: ok_error_tuple(Enum.t())
Similar to get_all/2, but returns a lazy enumerable that emits all entries
matching the query specified by the given query_spec.
See get_all/2 for more information about the query_spec.
This function returns:
- {:ok, stream} - It returns a stream of values.
- {:error, reason} - An error occurred executing the command. reason is the cause.
May raise Nebulex.QueryError if query validation fails.
Options
:max_entries (pos_integer/0) - The number of entries to load from the cache as we stream. The default value is 100.
See the "Shared options" section in the module documentation for more options.
Examples
Populate the cache with some entries:
iex> MyCache.put_all(a: 1, b: 2, c: 3)
:ok
Stream all (default args):
iex> {:ok, stream} = MyCache.stream()
iex> Enum.to_list(stream)
[a: 1, b: 2, c: 3]
Stream all entries returning only the keys (with the :max_entries option):
iex> {:ok, stream} = MyCache.stream([select: :key], max_entries: 2)
iex> Enum.to_list(stream)
[:a, :b, :c]
Stream all entries returning only the values:
iex> {:ok, stream} = MyCache.stream(select: :value)
iex> Enum.to_list(stream)
[1, 2, 3]
Stream only the requested keys (lazy bulk-fetch):
iex> {:ok, stream} = MyCache.stream(in: [:a, :b, :d])
iex> Enum.to_list(stream)
[a: 1, b: 2]
iex> {:ok, stream} = MyCache.stream(in: [:a, :b, :d], select: :key)
iex> Enum.to_list(stream)
[:a, :b]
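Since the stream is lazy, it can also be composed with Stream functions so that only the entries actually consumed are loaded (in batches of :max_entries). A minimal sketch:
{:ok, stream} = MyCache.stream([select: :value], max_entries: 100)

# Enumeration stops as soon as two matching values have been produced,
# so remaining batches are never requested from the cache.
stream
|> Stream.filter(&(&1 > 1))
|> Enum.take(2)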
@callback stream(dynamic_cache(), query_spec(), opts()) :: ok_error_tuple(Enum.t())
Same as stream/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
iex> MyCache.stream(MyCache1, [], [])
{:ok, _stream}
@callback stream!(query_spec(), opts()) :: Enum.t()
Same as stream/2 but raises an exception if an error occurs.
Examples
iex> MyCache.put_all(a: 1, b: 2, c: 3)
:ok
iex> MyCache.stream!() |> Enum.to_list()
[a: 1, b: 2, c: 3]
@callback stream!(dynamic_cache(), query_spec(), opts()) :: Enum.t()
Same as stream/3 but raises an exception if an error occurs.
Transaction API
@callback in_transaction?(opts()) :: ok_error_tuple(boolean())
Returns {:ok, true} if the current process is inside a transaction;
otherwise, {:ok, false} is returned.
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
Options
See the "Shared options" section in the module documentation for more options.
Examples
MyCache.in_transaction?()
#=> {:ok, false}
MyCache.transaction(fn ->
MyCache.in_transaction? #=> {:ok, true}
end)
@callback in_transaction?(dynamic_cache(), opts()) :: ok_error_tuple(boolean())
Same as in_transaction?/1, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
MyCache.in_transaction?(MyCache1, [])
@callback transaction(fun(), opts()) :: ok_error_tuple(any())
Runs the given function inside a transaction.
If an Elixir exception occurs, the exception will bubble up from the
transaction function. If the cache aborts the transaction, it returns
{:error, reason}.
A successful transaction returns the value returned by the function wrapped
in a tuple as {:ok, value}.
Nested transactions
If transaction/2 is called inside another transaction, the cache executes
the function without wrapping the new transaction call in any way.
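For example (a minimal sketch), the inner transaction/2 call below simply runs its function within the scope of the already-open transaction; no new transaction is started:
MyCache.transaction(fn ->
  # Already inside a transaction, so this nested call just invokes the
  # function directly.
  MyCache.transaction(fn -> MyCache.put(:a, 1) end)
end)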
Options
See the "Shared options" section in the module documentation for more options.
Examples
MyCache.transaction(fn ->
alice = MyCache.get!(:alice)
bob = MyCache.get!(:bob)
MyCache.put(:alice, %{alice | balance: alice.balance + 100})
MyCache.put(:bob, %{bob | balance: bob.balance + 100})
end)
We can provide the keys to lock when using the Nebulex.Adapters.Local
adapter (or any other adapter that supports key locking):
MyCache.transaction(
fn ->
alice = MyCache.get!(:alice)
bob = MyCache.get!(:bob)
MyCache.put(:alice, %{alice | balance: alice.balance + 100})
MyCache.put(:bob, %{bob | balance: bob.balance + 100})
end,
keys: [:alice, :bob]
)
@callback transaction(dynamic_cache(), fun(), opts()) :: ok_error_tuple(any())
Same as transaction/2, but the command is executed on the cache instance
given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
MyCache.transaction(
MyCache1,
fn ->
alice = MyCache.get!(:alice)
bob = MyCache.get!(:bob)
MyCache.put(:alice, %{alice | balance: alice.balance + 100})
MyCache.put(:bob, %{bob | balance: bob.balance + 100})
end,
keys: [:alice, :bob]
)
Info API
@callback info(spec :: info_spec(), opts()) :: ok_error_tuple(info_data())
Returns {:ok, info} where info contains the requested cache information,
as specified by the spec.
If there's an error with executing the command, {:error, reason}
is returned, where reason is the cause of the error.
The spec (information specification key) can be:
- The atom :all - returns a map with all information items.
- An atom - returns the value for the requested information item.
- A list of atoms - returns a map only with the requested information items.
If the argument spec is omitted, all information items are returned;
same as if the spec was the atom :all.
The adapters are free to add the information specification keys they want. However, Nebulex suggests the adapters add the following keys:
- :server - General information about the cache server (e.g., cache name, adapter, PID, etc.).
- :memory - Memory consumption information (e.g., used memory, allocated memory, etc.).
- :stats - Cache statistics (e.g., hits, misses, etc.).
Examples
The following examples assume the underlying adapter uses the implementation
provided by Nebulex.Adapters.Common.Info.
iex> {:ok, info} = MyCache.info()
iex> info
%{
server: %{
nbx_version: "3.0.0",
cache_module: "MyCache",
cache_adapter: "Nebulex.Adapters.Local",
cache_name: "MyCache",
cache_pid: #PID<0.111.0>
},
memory: %{
total: 1_000_000,
used: 0
},
stats: %{
deletions: 0,
evictions: 0,
expirations: 0,
hits: 0,
misses: 0,
updates: 0,
writes: 0
}
}
iex> {:ok, info} = MyCache.info(:server)
iex> info
%{
nbx_version: "3.0.0",
cache_module: "MyCache",
cache_adapter: "Nebulex.Adapters.Local",
cache_name: "MyCache",
cache_pid: #PID<0.111.0>
}
iex> {:ok, info} = MyCache.info([:server, :stats])
iex> info
%{
server: %{
nbx_version: "3.0.0",
cache_module: "MyCache",
cache_adapter: "Nebulex.Adapters.Local",
cache_name: "MyCache",
cache_pid: #PID<0.111.0>
},
stats: %{
deletions: 0,
evictions: 0,
expirations: 0,
hits: 0,
misses: 0,
updates: 0,
writes: 0
}
}
@callback info(dynamic_cache(), spec :: info_spec(), opts()) :: ok_error_tuple(info_data())
Same as info/2, but the command is executed on the cache
instance given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
MyCache.info(MyCache1, :all, [])
Same as info/2 but raises an exception if an error occurs.
@callback info!(dynamic_cache(), spec :: info_spec(), opts()) :: info_data()
Same as info/3 but raises an exception if an error occurs.
Observable API
@callback register_event_listener(event_listener(), opts()) :: :ok | error_tuple()
Registers a cache event listener event_listener.
Returns :ok if successful; {:error, reason} otherwise.
Listeners should be implemented with care. In particular, it is important to consider their impact on performance and latency.
Listeners:
- are fired after the entry is mutated in the cache.
- may block the calling process until the listener returns when the listener is invoked synchronously; this depends on the adapter's implementation.
Listeners follow the observer pattern. An exception raised by a listener does not cause the cache operation to fail.
Listeners can only raise a Nebulex.Error exception. Cache implementations
must catch any other exception raised by a listener, then wrap and re-raise
it as a Nebulex.Error exception.
Options
:id (term/0) - A unique identifier for the event listener. An error will be returned if another listener with the same ID already exists. Defaults to the event listener function itself.
:filter (Nebulex.Event.filter/0) - A function that may be used to check cache entry events prior to being dispatched to event listeners. A filter must not create side effects.
:metadata (Nebulex.Event.metadata/0) - The metadata is provided when registering the listener and added to the event when invoking the listener and filter functions; the event always carries a metadata field. The default value is [].
See the "Shared options" section in the module documentation for more options.
Examples
iex> MyApp.Cache.register_event_listener(&MyApp.handle/1)
:ok
iex> MyApp.Cache.register_event_listener(&MyApp.handle/1,
...> filter: &MyApp.filter/1
...> )
:ok
iex> MyApp.Cache.register_event_listener(&MyApp.handle/2,
...> filter: &MyApp.filter/2,
...> metadata: [foo: :bar]
...> )
:ok
# Register with `:id` (must be unregistered using the same `:id` value)
iex> MyApp.Cache.register_event_listener(&MyApp.handle/2,
...> id: :my_listener,
...> filter: &MyApp.filter/2,
...> metadata: [foo: :bar]
...> )
:ok
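A hypothetical listener module matching the examples above (MyApp.CacheListener, handle/1, and filter/1 are illustrative names, not part of Nebulex). It only inspects the event it receives, since the event's exact shape is defined by Nebulex.Event and the adapter; the filter is assumed to dispatch the event when it returns a truthy value:
defmodule MyApp.CacheListener do
  require Logger

  # Listener: invoked with the cache event after the entry is mutated.
  # Keep it fast; it may run synchronously with the cache command.
  def handle(event) do
    Logger.debug("cache event: #{inspect(event)}")
  end

  # Filter: checks the event before it is dispatched to the listener.
  # It must not create side effects (here it accepts every event).
  def filter(_event), do: true
end

MyApp.Cache.register_event_listener(
  &MyApp.CacheListener.handle/1,
  filter: &MyApp.CacheListener.filter/1
)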
@callback register_event_listener(dynamic_cache(), event_listener(), opts()) :: :ok | error_tuple()
Same as register_event_listener/2, but the command is executed on the cache
instance given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
MyApp.Cache.register_event_listener(:my_cache, &MyApp.handle/1, [])
@callback register_event_listener!(event_listener(), opts()) :: :ok
Same as register_event_listener/2 but raises an exception if an error
occurs.
@callback register_event_listener!(dynamic_cache(), event_listener(), opts()) :: :ok
Same as register_event_listener/3 but raises an exception if an error
occurs.
@callback unregister_event_listener(id :: any(), opts()) :: :ok | error_tuple()
Unregisters a cache event listener.
Returns :ok if successful; {:error, reason} otherwise.
Options
See the "Shared options" section in the module documentation for more options.
Examples
# Register with default ID
iex> MyApp.Cache.register_event_listener(&MyApp.handle/1)
:ok
# Unregister with default ID
iex> MyApp.Cache.unregister_event_listener(&MyApp.handle/1)
:ok
# Register with `:id`
iex> MyApp.Cache.register_event_listener(&MyApp.handle/1, id: :listener)
:ok
# Unregister using the previously registered `:id`
iex> MyApp.Cache.unregister_event_listener(:listener)
:ok
@callback unregister_event_listener(dynamic_cache(), id :: any(), opts()) :: :ok | error_tuple()
Same as unregister_event_listener/2, but the command is executed on the
cache instance given at the first argument dynamic_cache.
See the "Dynamic caches" section in the module documentation for more information.
Examples
MyApp.Cache.unregister_event_listener(:my_cache, &MyApp.handle/1, [])
Same as unregister_event_listener/2 but raises an exception if an error
occurs.
@callback unregister_event_listener!(dynamic_cache(), id :: any(), opts()) :: :ok
Same as unregister_event_listener/3 but raises an exception if an error
occurs.
Types
Dynamic cache value
Cache entries
@type error_tuple() :: error_tuple(nbx_error_reason())
Common error type
@type error_tuple(reason) :: {:error, reason}
Error type for the given reason
@type event_filter() :: Nebulex.Event.filter()
Proxy type to a cache event filter
@type event_listener() :: Nebulex.Event.listener()
Proxy type to a cache event listener
@type fetch_error_reason() :: Nebulex.KeyError.t() | nbx_error_reason()
Fetch error reason
Fetch or store function
@type get_or_store_fun() :: (-> any())
Get or store function
The data type for the cache information
@type info_item() :: any()
The type for the info item's value
Info map
Specification key for the item(s) to include in the returned info
@type key() :: any()
Cache entry key
@type nbx_error_reason() :: Nebulex.Error.t()
Proxy type for generic Nebulex error
@type ok_error_tuple(ok) :: ok_error_tuple(ok, nbx_error_reason())
Ok/Error tuple with default error reasons
@type ok_error_tuple(ok, error) :: {:ok, ok} | {:error, error}
Ok/Error type
@type opts() :: keyword()
Cache action options
@type query_spec() :: keyword()
The data type for a query spec.
See the "query-spec" section for more information.
@type t() :: module()
Cache type
@type value() :: any()
Cache entry value