FDBC.Transaction (fdbc v0.1.4)

In FoundationDB, a transaction is a mutable snapshot of a database. All read and write operations on a transaction see and modify an otherwise-unchanging version of the database and only change the underlying database if and when the transaction is committed. Read operations do see the effects of previous write operations on the same transaction. Committing a transaction usually succeeds in the absence of conflicts.

Transactions group operations into a unit with the properties of atomicity, isolation, and durability. Transactions also provide the ability to maintain an application’s invariants or integrity constraints, supporting the property of consistency. Together these properties are known as ACID.

Transactions are also causally consistent: once a transaction has been successfully committed, all subsequently created transactions will see the modifications made by it.

Applications must provide error handling and an appropriate retry loop around the application code for a transaction. FDBC provides the convenience function FDBC.transact/3, which does just that when passed an FDBC.Database. The function roughly does the following:

def transact(db, fun) do
  tr = FDBC.Transaction.create(db)
  do_transact(tr, fun)
end

defp do_transact(tr, fun) do
  result = fun.(tr)
  :ok = FDBC.Transaction.commit(tr)
  result
rescue
  e in FDBC.Error ->
    :ok = FDBC.Transaction.on_error(tr, e)
    do_transact(tr, fun)
end

This convenience function allows for transactional blocks to be handled like so:

db = FDBC.Database.create()
FDBC.transact(db, fn tr ->
  :ok = FDBC.Transaction.set(tr, "foo", "bar")
end)

In reality, FDBC.transact/3 is more versatile than this; see its documentation for further details.

Futures

Unlike upstream implementations, this library does not support implicit asynchronicity; it must be used explicitly.

There are two ways to achieve this: using Task, or using the async_* variants along with FDBC.Future.

Using Task, two concurrent reads can be performed like so:

tasks = [
  Task.async(fn -> FDBC.Transaction.get(tr, "A") end),
  Task.async(fn -> FDBC.Transaction.get(tr, "B") end)
]
result = Task.await_many(tasks) |> Enum.reduce("", fn x, acc -> acc <> x end)
IO.inspect(result)

Using the async_* variants, the same can be achieved with the following:

futures = [
  FDBC.Transaction.async_get(tr, "A"),
  FDBC.Transaction.async_get(tr, "B")
]
result = FDBC.Future.await_many(futures) |> Enum.reduce("", fn x, acc -> acc <> x end)
IO.inspect(result)

The main difference between the two approaches is that Task spawns each operation in a new process, while FDBC.Future operates within the calling process.
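For a single operation, the future can be resolved directly within the calling process; a minimal sketch using FDBC.Transaction.async_get/3 and FDBC.Future.resolve/1 (as used in the watcher example below):

future = FDBC.Transaction.async_get(tr, "A")
# ... perform other work while the read is in flight ...
value = FDBC.Future.resolve(future)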

Watches

It is possible to watch keys for value changes. The most obvious way to handle this is via a GenServer-like implementation:

defmodule Watcher do
  use GenServer

  alias FDBC.{Database, Future, Transaction}

  def start_link(opts) do
    GenServer.start_link(__MODULE__, :ok, opts)
  end

  def watch(key) do
    GenServer.call(__MODULE__, {:watch, key})
  end

  def init(_opts) do
    {:ok, %{tasks: %{}}}
  end

  def handle_call({:watch, key}, _from, state) do
    task = start_watch(key)
    state = put_in(state.tasks[task.ref], key)
    {:reply, :ok, state}
  end

  def handle_info({ref, nil}, state) do
    Process.demonitor(ref, [:flush])
    {key, state} = pop_in(state.tasks[ref])
    IO.puts("Value for key #{inspect(key)} has changed")
    task = start_watch(key)
    state = put_in(state.tasks[task.ref], key)
    {:noreply, state}
  end

  def handle_info({:DOWN, ref, _, _, reason}, state) do
    {_key, state} = pop_in(state.tasks[ref])
    # Should check reason to see if watcher was cancelled or errored...
    {:noreply, state}
  end

  defp start_watch(key) do
    tr = Database.create() |> Transaction.create()
    future = Transaction.watch(tr, key)
    :ok = Transaction.commit(tr)

    Task.Supervisor.async_nolink(Example.TaskSupervisor, fn ->
      Future.resolve(future)
    end)
  end
end

The above would then need to be added to the application's supervision tree:

children = [
  {Task.Supervisor, name: Example.TaskSupervisor},
  {Watcher, name: Watcher}
]

Supervisor.start_link(children, strategy: :one_for_one)
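Once the supervision tree is running, a key can be watched like so:

:ok = Watcher.watch("foo")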

Summary

Functions

Adds a conflict key to a transaction without performing the associated read or write.

Adds a conflict range to a transaction without performing the associated read or write.

The same as get/3 except it returns the unresolved future.

The same as get_addresses_for_key/2 except it returns the unresolved future.

The same as get_approximate_size/1 except it returns the unresolved future.

The same as get_estimated_range_size/3 except it returns the unresolved future.

The same as get_key/3 except it returns the unresolved future.

The same as get_read_version/1 except it returns the unresolved future.

The same as get_tag_throttled_duration/1 except it returns the unresolved future.

The same as get_total_cost/1 except it returns the unresolved future.

Returns a future that will resolve to the versionstamp used by the transaction.

Perform a mutation as an atomic operation against the database.

Cancels the transaction.

Change the options on the transaction after its initial creation.

Clear the given key from the database.

Clear the given range from the database.

Clear all keys starting with the given prefix from the database.

Attempts to commit the transaction to the database.

Creates a new transaction on the given database or tenant.

Get a value from the database.

Returns the storage server addresses storing the given key.

Returns the approximate transaction size so far.

Returns the database version number for the committed transaction.

Returns an estimated byte size of the key range.

Returns the first key in the database that matches the given key selector.

Returns the metadata version.

Returns all the key-value pairs for the given range.

Returns a list of keys that can split the given range into roughly equally sized chunks based on chunk size.

Returns the transaction snapshot read version.

Returns all the key-value pairs that start with the given prefix.

Returns the time in seconds that the transaction was throttled by the tag throttler.

Returns the cost of the transaction so far in bytes.

Implements the recommended retry and backoff behavior for a transaction.

Reset the transaction to its initial state.

Set the value for a given key.

Sets the metadata version.

Sets the snapshot read version.

Returns the transaction where :snapshot is true by default.

Streams all the key-value pairs for the given range.

Streams all the key-value pairs that start with the given prefix.

Watch for a change on the given key's value.

Types

mutation()

@type mutation() ::
  :add
  | :append_if_fits
  | :bit_and
  | :bit_or
  | :bit_xor
  | :byte_max
  | :byte_min
  | :compare_and_clear
  | :max
  | :min
  | :set_versionstamped_key
  | :set_versionstamped_value

t()

@type t() :: %FDBC.Transaction{resource: term(), snapshot: term()}

Functions

add_conflict_key(transaction, key, op)

@spec add_conflict_key(t(), binary(), :read | :write) :: :ok

Adds a conflict key to a transaction without performing the associated read or write.

Works the same way as add_conflict_range/4 by creating the range on a single key.

add_conflict_range(transaction, start, stop, op)

@spec add_conflict_range(t(), binary(), binary(), :read | :write) :: :ok

Adds a conflict range to a transaction without performing the associated read or write.

If :read is used, this function adds a range of keys to the transaction’s read conflict ranges as if you had read the range. As a result, other transactions that write a key in this range could cause the transaction to fail with a conflict.

If :write is used, this function adds a range of keys to the transaction’s write conflict ranges as if you had cleared the range. As a result, other transactions that concurrently read a key in this range could fail with a conflict.
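For example, to make the transaction conflict with any concurrent write to a key in the range, as though the range had been read:

:ok = FDBC.Transaction.add_conflict_range(tr, "a", "b", :read)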

async_get(transaction, key, opts \\ [])

@spec async_get(t(), binary(), keyword()) :: FDBC.Future.t(binary() | nil)

The same as get/3 except it returns the unresolved future.

async_get_addresses_for_key(transaction, key)

@spec async_get_addresses_for_key(t(), binary()) :: FDBC.Future.t([binary()])

The same as get_addresses_for_key/2 except it returns the unresolved future.

async_get_approximate_size(transaction)

@spec async_get_approximate_size(t()) :: FDBC.Future.t(integer())

The same as get_approximate_size/1 except it returns the unresolved future.

async_get_estimated_range_size(transaction, start, stop)

@spec async_get_estimated_range_size(t(), binary(), binary()) ::
  FDBC.Future.t(integer())

The same as get_estimated_range_size/3 except it returns the unresolved future.

async_get_key(transaction, key_selector, opts \\ [])

@spec async_get_key(t(), FDBC.KeySelector.t(), keyword()) :: FDBC.Future.t(binary())

The same as get_key/3 except it returns the unresolved future.

async_get_read_version(transaction)

@spec async_get_read_version(t()) :: FDBC.Future.t(integer())

The same as get_read_version/1 except it returns the unresolved future.

async_get_tag_throttled_duration(transaction)

@spec async_get_tag_throttled_duration(t()) :: FDBC.Future.t(float())

The same as get_tag_throttled_duration/1 except it returns the unresolved future.

async_get_total_cost(transaction)

@spec async_get_total_cost(t()) :: FDBC.Future.t(integer())

The same as get_total_cost/1 except it returns the unresolved future.

async_get_versionstamp(transaction)

@spec async_get_versionstamp(t()) :: FDBC.Future.t(binary())

Returns a future that will resolve to the versionstamp used by the transaction.

The underlying future will be ready only after the successful completion of a call to commit/1. Read-only transactions do not modify the database when committed and will result in the underlying future completing with an error. Keep in mind that a transaction which reads keys and then sets them to their current values may be optimized to a read-only transaction.

Warning

It must be called before commit/1 but resolved after it.
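A minimal sketch of that ordering (the key "foo" is illustrative):

future = FDBC.Transaction.async_get_versionstamp(tr)
:ok = FDBC.Transaction.set(tr, "foo", "bar")
:ok = FDBC.Transaction.commit(tr)
versionstamp = FDBC.Future.resolve(future)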

atomic_op(transaction, op, key, param)

@spec atomic_op(t(), mutation(), binary(), binary()) :: :ok

Perform a mutation as an atomic operation against the database.

To be more specific, an atomic operation modifies the database snapshot represented by the transaction, performing the operation indicated by op with operand param on the value stored at the given key.

An atomic operation is a single database command that carries out several logical steps: reading the value of a key, performing a transformation on that value, and writing the result. Different atomic operations perform different transformations. Like other database operations, an atomic operation is used within a transaction; however, its use within a transaction will not cause the transaction to conflict.

Atomic operations do not expose the current value of the key to the client but simply send the database the transformation to apply. In regard to conflict checking, an atomic operation is equivalent to a write without a read. It can only cause other transactions performing reads of the key to conflict.

By combining these logical steps into a single, read-free operation, FoundationDB can guarantee that the transaction will not conflict due to the operation. This makes atomic operations ideal for operating on keys that are frequently modified. A common example is the use of a key-value pair as a counter.
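For example, a counter can be incremented without reading it; a minimal sketch, assuming the value under the hypothetical key "counter" is a little-endian 64-bit integer:

# Atomically add 1 to the counter; no read, so no read conflict is created.
:ok = FDBC.Transaction.atomic_op(tr, :add, "counter", <<1::little-integer-size(64)>>)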

Mutations

  • :add - Performs an addition of little-endian integers. If the existing value in the database is not present or shorter than param, it is first extended to the length of param with zero bytes. If param is shorter than the existing value in the database, the existing value is truncated to match the length of param. The integers to be added must be stored in a little-endian representation. They can be signed in two's complement representation or unsigned. You can add to an integer at a known offset in the value by prepending the appropriate number of zero bytes to param and padding with zero bytes to match the length of the value. However, this offset technique requires that you know the addition will not cause the integer field within the value to overflow.

  • :append_if_fits - Appends param to the end of the existing value already in the database at the given key (or creates the key and sets the value to param if the key is empty). This will only append the value if the final concatenated value size is less than or equal to the maximum value size. WARNING: No error is surfaced back to the user if the final value is too large because the mutation will not be applied until after the transaction has been committed. Therefore, it is only safe to use this mutation type if one can guarantee that one will keep the total value size under the maximum size.

  • :bit_and - Performs a bitwise and operation. If the existing value in the database is not present, then param is stored in the database. If the existing value in the database is shorter than param, it is first extended to the length of param with zero bytes. If param is shorter than the existing value in the database, the existing value is truncated to match the length of param.

  • :bit_or - Performs a bitwise or operation. If the existing value in the database is not present or shorter than param, it is first extended to the length of param with zero bytes. If param is shorter than the existing value in the database, the existing value is truncated to match the length of param.

  • :bit_xor - Performs a bitwise xor operation. If the existing value in the database is not present or shorter than param, it is first extended to the length of param with zero bytes. If param is shorter than the existing value in the database, the existing value is truncated to match the length of param.

  • :byte_max - Performs lexicographic comparison of byte strings. If the existing value in the database is not present, then param is stored. Otherwise the larger of the two values is then stored in the database.

  • :byte_min - Performs lexicographic comparison of byte strings. If the existing value in the database is not present, then param is stored. Otherwise the smaller of the two values is then stored in the database.

  • :compare_and_clear - Performs an atomic compare and clear operation. If the existing value in the database is equal to the given value, then given key is cleared.

  • :max - Performs a little-endian comparison of byte strings. If the existing value in the database is not present or shorter than param, it is first extended to the length of param with zero bytes. If param is shorter than the existing value in the database, the existing value is truncated to match the length of param. The larger of the two values is then stored in the database.

  • :min - Performs a little-endian comparison of byte strings. If the existing value in the database is not present, then param is stored in the database. If the existing value in the database is shorter than param, it is first extended to the length of param with zero bytes. If param is shorter than the existing value in the database, the existing value is truncated to match the length of param. The smaller of the two values is then stored in the database.

  • :set_versionstamped_key - Transforms key using a versionstamp for the transaction. Sets the transformed key in the database to param. The key is transformed by removing the final four bytes from the key and reading those as a little-endian 32-bit integer to get a position pos. The 10 bytes of the key from pos to pos + 10 are replaced with the versionstamp of the transaction used. The first byte of the key is position 0. A versionstamp is a 10-byte, unique, monotonically (but not sequentially) increasing value for each committed transaction. The first 8 bytes are the committed version of the database (serialized in big-endian order). The last 2 bytes are monotonic in the serialization order for transactions.

  • :set_versionstamped_value - Transforms param using a versionstamp for the transaction. Sets the key given to the transformed param. The parameter is transformed by removing the final four bytes from param and reading those as a little-endian 32-bit integer to get a position pos. The 10 bytes of the parameter from pos to pos + 10 are replaced with the versionstamp of the transaction used. The first byte of the parameter is position 0. A versionstamp is a 10-byte, unique, monotonically (but not sequentially) increasing value for each committed transaction. The first 8 bytes are the committed version of the database (serialized in big-endian order). The last 2 bytes are monotonic in the serialization order for transactions.
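A minimal sketch of the key transformation for :set_versionstamped_key, using an illustrative 7-byte prefix: the 10 zero bytes are the placeholder that the versionstamp will overwrite, and the trailing 4 bytes encode its position:

# "prefix/" is 7 bytes, so the placeholder begins at position 7.
key = "prefix/" <> <<0::size(80)>> <> <<7::little-integer-size(32)>>
:ok = FDBC.Transaction.atomic_op(tr, :set_versionstamped_key, key, "value")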

cancel(transaction)

@spec cancel(t()) :: :ok

Cancels the transaction.

All pending or future uses of the transaction will raise a transaction cancelled exception. The transaction can be used again after being reset with reset/1.

change(transaction, opts)

@spec change(
  t(),
  keyword()
) :: t()

Change the options on the transaction after its initial creation.

See create/2 for the list of options.

clear(transaction, key, opts \\ [])

@spec clear(t(), binary(), keyword()) :: :ok

Clear the given key from the database.

Modify the database snapshot represented by transaction to remove the given key from the database. If the key was not previously present in the database, there is no effect.

Options

  • :no_write_conflict_range - (true) The operation will not generate a write conflict range. As a result, other transactions which read the key(s) being modified by the next write will not conflict with this transaction. NOTE: This is equivalent to setting :next_write_no_write_conflict_range on the transaction followed by calling this function.

clear_range(transaction, start, stop, opts \\ [])

@spec clear_range(t(), binary(), binary(), keyword()) :: :ok

Clear the given range from the database.

Modify the database snapshot represented by transaction to remove all keys (if any) which are lexicographically greater than or equal to the given start key and lexicographically less than the given stop key.

Range clears are efficient with FoundationDB – clearing large amounts of data will be fast. However, this will not immediately free up disk space - data for the deleted range is cleaned up in the background. For purposes of computing the transaction size, only the begin and end keys of a clear range are counted. The size of the data stored in the range does not count against the transaction size limit.

Options

  • :no_write_conflict_range - (true) The operation will not generate a write conflict range. As a result, other transactions which read the key(s) being modified by the next write will not conflict with this transaction. NOTE: This is equivalent to setting :next_write_no_write_conflict_range on the transaction followed by calling this function.

clear_starts_with(transaction, prefix, opts \\ [])

@spec clear_starts_with(t(), binary(), keyword()) :: :ok

Clear all keys starting with the given prefix from the database.

Calls clear_range/4 under the hood, turning the prefix into a start/stop pair that clears a range of keys equivalent to clearing by prefix.

Options

  • :no_write_conflict_range - (true) The operation will not generate a write conflict range. As a result, other transactions which read the key(s) being modified by the next write will not conflict with this transaction. NOTE: This is equivalent to setting :next_write_no_write_conflict_range on the transaction followed by calling this function.

commit(transaction)

@spec commit(t()) :: :ok

Attempts to commit the transaction to the database.

The commit may or may not succeed - in particular, if a conflicting transaction previously committed, then the commit must fail in order to preserve transactional isolation. If the commit does succeed, the transaction is durably committed to the database and all subsequently started transactions will observe its effects.

create(database_or_tenant, opts \\ [])

@spec create(
  FDBC.Database.t() | FDBC.Tenant.t(),
  keyword()
) :: t()

Creates a new transaction on the given database or tenant.

It is valid to pass a transaction through this function in order to modify its options.

Options

  • :access_system_keys - (true) Allows this transaction to read and modify system keys (those that start with the byte 0xFF). Implies :raw_access.

  • :authorization_token - (true) Attach given authorization token to the transaction such that subsequent tenant-aware requests are authorized.

  • :auto_throttle_tag - (binary) Adds a tag to the transaction that can be used to apply manual or automatic targeted throttling. At most 5 tags can be set on a transaction.

  • :bypass_storage_quota - (true) Allows this transaction to bypass storage quota enforcement. Should only be used for transactions that directly or indirectly decrease the size of the tenant group's data.

  • :bypass_unreadable - (true) Allows get operations to read from sections of keyspace that have become unreadable because of versionstamp operations. These reads will view versionstamp operations as if they were set operations that did not fill in the versionstamp.

  • :causal_read_disable - (true) Disable causal reads.

  • :causal_read_risky - (true) The read version will be committed, and usually will be the latest committed, but might not be the latest committed in the event of a simultaneous fault and misbehaving clock.

  • :causal_write_risky - (true) The transaction, if not self-conflicting, may be committed a second time after commit succeeds, in the event of a fault.

  • :consistency_check_required_replicas - (integer) Specifies the number of storage server replica results that the load balancer needs to compare when :enable_replica_consistency_check option is set.

  • :debug_transaction_identifier - (binary) Sets a client provided identifier for the transaction that will be used in scenarios like tracing or profiling. Client trace logging or transaction profiling must be separately enabled.

  • :enable_replica_consistency_check - (true) Enables the replica consistency check, which compares the results returned by storage server replicas (as many as specified by the :consistency_check_required_replicas option) for a given read request, in the client-side load balancer.

  • :expensive_clear_cost_estimation_enable - (true) Asks storage servers for how many bytes a clear key range contains. Otherwise uses the location cache to roughly estimate this.

  • :first_in_batch - (true) No other transactions will be applied before this transaction within the same commit version.

  • :lock_aware - (true) The transaction can read and write to locked databases, and is responsible for checking that it took the lock.

  • :log_transaction - (true) Enables tracing for this transaction and logs results to the client trace logs. The :debug_transaction_identifier option must be set before using this option, and client trace logging must be enabled to get log output.

  • :max_retry_delay - (integer) Set the maximum amount of backoff delay incurred in the call to on_error if the error is retryable. If the maximum retry delay is less than the current retry delay of the transaction, then the current retry delay will be clamped to the maximum retry delay. Prior to API version 610, like all other transaction options, the maximum retry delay must be reset after a call to on_error. If the API version is 610 or greater, the retry limit is not reset after an on_error call. Note that at all API versions, it is safe and legal to set the maximum retry delay each time the transaction begins, so most code written assuming the older behavior can be upgraded to the newer behavior without requiring any modification, and the caller is not required to implement special logic in retry loops to only conditionally set this option. Defaults to 1000.

  • :next_write_no_write_conflict_range - (true) The next write performed on this transaction will not generate a write conflict range. As a result, other transactions which read the key(s) being modified by the next write will not conflict with this transaction. Care needs to be taken when using this option on a transaction that is shared between multiple threads. When setting this option, write conflict ranges will be disabled on the next write operation, regardless of what thread it is on.

  • :priority - (atom) Set the priority for the transaction. Valid values are :immediate and :batch. Where :immediate specifies that this transaction should be treated as highest priority and that lower priority transactions should block behind this one. Use is discouraged outside of low-level tools. While :batch specifies that this transaction should be treated as low priority and that default priority transactions will be processed first. Batch priority transactions will also be throttled at load levels smaller than for other types of transactions and may be fully cut off in the event of machine failures. Useful for doing batch work simultaneously with latency-sensitive work.

  • :raw_access - (true) Allows this transaction to access the raw key-space when tenant mode is on.

  • :read_lock_aware - (true) The transaction can read from locked databases.

  • :read_priority - (atom) Set the priority for subsequent read requests in this transaction. Valid values are :low, :normal, and :high. Defaults to :normal.

  • :read_server_side_cache - (boolean) Whether the storage server should cache disk blocks needed for subsequent read requests in this transaction. Defaults to true.

  • :read_system_keys - (true) Allows this transaction to read system keys (those that start with the byte 0xFF). Implies :raw_access.

  • :read_your_writes_disabled - (true) Reads performed by a transaction will not see any prior mutations that occurred in that transaction, instead seeing the value which was in the database at the transaction's read version. This option may provide a small performance benefit for the client, but also disables a number of client-side optimizations which are beneficial for transactions which tend to read and write the same keys within a single transaction. It is an error to set this option after performing any reads or writes on the transaction.

  • :report_conflicting_keys - (true) The transaction can retrieve keys that are conflicting with other transactions.

  • :retry_limit - (integer) Set a maximum number of retries after which additional calls to on_error will throw the most recently seen error code. If set to -1, will disable the retry limit. Prior to API version 610, like all other transaction options, the retry limit must be reset after a call to on_error. If the API version is 610 or greater, the retry limit is not reset after an on_error call. Note that at all API versions, it is safe and legal to set the retry limit each time the transaction begins, so most code written assuming the older behavior can be upgraded to the newer behavior without requiring any modification, and the caller is not required to implement special logic in retry loops to only conditionally set this option.

  • :server_request_tracing - (true) Sets an identifier for server tracing of this transaction. When committed, this identifier triggers logging when each part of the transaction authority encounters it, which is helpful in diagnosing slowness in misbehaving clusters. The identifier is randomly generated. When there is also a :debug_transaction_identifier, both IDs are logged together.

  • :size_limit - (integer) Set the transaction size limit in bytes. The size is calculated by combining the sizes of all keys and values written or mutated, all key ranges cleared, and all read and write conflict ranges. (In other words, it includes the total size of all data included in the request to the cluster to commit the transaction.) Large transactions can cause performance problems on FoundationDB clusters, so setting this limit to a smaller value than the default can help prevent the client from accidentally degrading the cluster's performance. This value must be at least 32 and cannot be set to higher than 10,000,000, the default transaction size limit.

  • :snapshot_ryw - (boolean) Allow snapshot read operations to see the results of writes done in the same transaction. Defaults to true.

  • :span_parent - (binary) Adds a parent to the Span of this transaction. Used for transaction tracing. A span can be identified with a 33 bytes serialized binary format which consists of: 8 bytes protocol version, e.g. 0x0FDBC00B073000000LL in little-endian format, 16 bytes trace id, 8 bytes span id, 1 byte set to 1 if sampling is enabled.

  • :special_key_space_enable_writes - (true) By default, users are not allowed to write to special keys. Enabling this option will implicitly enable all options required to achieve the configuration change.

  • :special_key_space_relaxed - (true) By default, the special key space will only allow users to read from exactly one module (a subspace in the special key space). Use this option to allow reading from zero or more modules. Users who set this option should be prepared for new modules, which may have different behaviors than the modules they're currently reading. For example, a new module might block or return an error.

  • :tag - (binary) Adds a tag to the transaction that can be used to apply manual targeted throttling. At most 5 tags can be set on a transaction.

  • :timeout - (integer) Set a timeout in milliseconds which, when elapsed, will cause the transaction automatically to be cancelled. If set to 0, will disable all timeouts. All pending and any future uses of the transaction will throw an exception. The transaction can be used again after it is reset. Prior to API version 610, like all other transaction options, the timeout must be reset after a call to on_error. If the API version is 610 or greater, the timeout is not reset after an on_error call. This allows the user to specify a longer timeout on specific transactions than the default timeout specified through the :transaction_timeout database option without the shorter database timeout cancelling transactions that encounter a retryable error. Note that at all API versions, it is safe and legal to set the timeout each time the transaction begins, so most code written assuming the older behavior can be upgraded to the newer behavior without requiring any modification, and the caller is not required to implement special logic in retry loops to only conditionally set this option.

  • :transaction_logging_max_field_length - (integer) Sets the maximum escaped length of key and value fields to be logged to the trace file via the :log_transaction option, after which the field will be truncated. A negative value disables truncation.

  • :use_grv_cache - (boolean) Allows this transaction to use cached GRV from the database context. Upon first usage, starts a background updater to periodically update the cache to avoid stale read versions. The disable_client_bypass option must also be set. Defaults to false.

  • :used_during_commit_protection_disable - (true) By default, operations that are performed on a transaction while it is being committed will not only fail themselves, but they will attempt to fail other in-flight operations (such as the commit) as well. This behavior is intended to help developers discover situations where operations could be unintentionally executed after the transaction has been reset. Setting this option removes that protection, causing only the offending operation to fail.
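A minimal sketch of creating a transaction with options:

db = FDBC.Database.create()
tr = FDBC.Transaction.create(db, timeout: 5_000, priority: :batch)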

get(transaction, key, opts \\ [])

@spec get(t(), binary(), keyword()) :: binary() | nil

Get a value from the database.

Reads a value from the database snapshot represented by the transaction.

Options

  • :snapshot - (boolean) Perform the get as a snapshot read.

get_addresses_for_key(transaction, key)

@spec get_addresses_for_key(t(), binary()) :: [binary()]

Returns the storage server addresses storing the given key.

Returns a list of public network addresses as strings, one for each of the storage servers responsible for storing the given key and its associated value.

get_approximate_size(transaction)

@spec get_approximate_size(t()) :: integer()

Returns the approximate transaction size so far.

This is a summation of the estimated size of mutations, read conflict ranges, and write conflict ranges.

get_committed_version(transaction)

@spec get_committed_version(t()) :: integer()

Returns the database version number for the committed transaction.

Retrieves the database version number at which a given transaction was committed. commit/1 must have been called on transaction. Read-only transactions do not modify the database when committed and will have a committed version of -1. Keep in mind that a transaction which reads keys and then sets them to their current values may be optimized to a read-only transaction.

Note that database versions are not necessarily unique to a given transaction and so cannot be used to determine in what order two transactions completed. The only use for this function is to manually enforce causal consistency when calling set_read_version/2 on another subsequent transaction.
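A minimal sketch of that manual enforcement across two transactions:

:ok = FDBC.Transaction.commit(tr)
version = FDBC.Transaction.get_committed_version(tr)

next = FDBC.Transaction.create(db)
:ok = FDBC.Transaction.set_read_version(next, version)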

get_estimated_range_size(transaction, start, stop)

@spec get_estimated_range_size(t(), binary(), binary()) :: integer()

Returns an estimated byte size of the key range.

Note

The estimated size is calculated based on the sampling done by the FDB server. The sampling algorithm works roughly in this way: the larger the key-value pair is, the more likely it is to be sampled and the more accurate its sampled size will be. For that reason, it is recommended to use this API against large ranges for accuracy. As a rough reference, if the returned size is larger than 3MB, the size can be considered accurate.

get_key(transaction, key_selector, opts \\ [])

@spec get_key(t(), FDBC.KeySelector.t(), keyword()) :: binary()

Returns the first key in the database that matches the given key selector.

Options

  • :snapshot - (boolean) Perform the get as a snapshot read.

get_metadata_version(transaction)

@spec get_metadata_version(t()) :: binary() | nil

Returns the metadata version.

The metadata version key \xff/metadataVersion is intended to help layers deal with hot keys. The value of this key is sent to clients along with the read version from the proxy, so a client can read its value without communicating with a storage server.

It is stored as a versionstamp, and can be nil if it has yet to be utilised.

get_range(transaction, start, stop, opts \\ [])

@spec get_range(
  t(),
  binary() | FDBC.KeySelector.t(),
  binary() | FDBC.KeySelector.t(),
  keyword()
) :: [
  {binary(), binary()}
]

Returns all the key-value pairs for the given range.

Options

  • :limit - (integer) Indicates the maximum number of key-value pairs to return.

  • :mode - (atom) The mode in which to return the data to the caller. Defaults to :iterator.

    • :exact - The client has passed a specific row limit and wants that many rows delivered in a single batch. Because the iterator functionality in client drivers makes request batching transparent to the user, consider :want_all instead. A row :limit must be specified if this mode is used.

    • :iterator - The client doesn't know how much of the range it is likely to use and wants different performance concerns to be balanced. Only a small portion of data is transferred to the client initially (in order to minimize costs if the client doesn't read the entire range), and as the caller iterates over more items in the range larger batches will be transferred in order to minimize latency. After enough iterations, the iterator mode will eventually reach the same byte limit as :want_all.

    • :large - Transfer data in batches large enough to be, in a high-concurrency environment, nearly as efficient as possible. If the client stops iteration early, some disk and network bandwidth may be wasted. The batch size may still be too small to allow a single client to get high throughput from the database, so if that is what you need consider :serial instead.

    • :medium - Transfer data in batches sized in between small and large.

    • :serial - Transfer data in batches large enough that an individual client can get reasonable read bandwidth from the database. If the client stops iteration early, considerable disk and network bandwidth may be wasted.

    • :small - Transfer data in batches small enough to not be much more expensive than reading individual rows, to minimize cost if iteration stops early.

    • :want_all - Client intends to consume the entire range and would like it all transferred as early as possible.

  • :reverse - (boolean) The key-value pairs will be returned in reverse lexicographical order beginning at the end of the range. Reading ranges in reverse is supported natively by the database and should have minimal extra cost.

  • :snapshot - (boolean) Perform the get as a snapshot read.
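For example, to fetch at most ten pairs from the range in reverse lexicographical order:

pairs = FDBC.Transaction.get_range(tr, "a", "z", limit: 10, reverse: true)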

get_range_split_points(transaction, start, stop, size)

@spec get_range_split_points(t(), binary(), binary(), non_neg_integer()) :: [binary()]

Returns a list of keys that can split the given range into roughly equally sized chunks based on chunk size.

The returned split points contain the start key and end key of the given range.

get_read_version(transaction)

@spec get_read_version(t()) :: integer()

Returns the transaction snapshot read version.

The transaction obtains a snapshot read version automatically at the time of the first call to get_*() (including this one) and (unless causal consistency has been deliberately compromised by transaction options) is guaranteed to represent all transactions which were reported committed before that call.

get_starts_with(transaction, prefix, opts \\ [])

@spec get_starts_with(t(), binary(), keyword()) :: [{binary(), binary()}]

Returns all the key-value pairs that start with the given prefix.

This function calls get_range/4 and therefore supports the same options as it.

get_tag_throttled_duration(transaction)

@spec get_tag_throttled_duration(t()) :: float()

Returns the time in seconds that the transaction was throttled by the tag throttler.

get_total_cost(transaction)

@spec get_total_cost(t()) :: integer()

Returns the cost of the transaction so far in bytes.

The cost is computed by the tag throttler, and used for tag throttling if throughput quotas are specified.

on_error(transaction, error)

@spec on_error(t(), FDBC.Error.t()) :: :ok

Implements the recommended retry and backoff behavior for a transaction.

This function knows which of the error codes generated by other FDBC.Transaction functions represent temporary error conditions and which represent application errors that should be handled by the application. It also implements an exponential backoff strategy to avoid swamping the database cluster with excessive retries when there is a high level of conflict between transactions.

reset(transaction)

@spec reset(t()) :: :ok

Reset the transaction to its initial state.

It is not necessary to call reset/1 when handling an error with on_error/2 since the transaction has already been reset.

set(transaction, key, value, opts \\ [])

@spec set(t(), binary(), binary(), keyword()) :: :ok

Set the value for a given key.

Options

  • :no_write_conflict_range - (true) The operation will not generate a write conflict range. As a result, other transactions which read the key(s) being modified by the next write will not conflict with this transaction. NOTE: This is equivalent to setting :next_write_no_write_conflict_range on the transaction followed by calling this function.

set_metadata_version(transaction)

@spec set_metadata_version(t()) :: :ok

Sets the metadata version.

It takes no value as the database is responsible for setting it.

set_read_version(transaction, version)

@spec set_read_version(t(), integer()) :: :ok

Sets the snapshot read version.

This is not needed in simple cases. If the given version is too old, subsequent reads will fail with error code 'transaction_too_old'; if it is too new, subsequent reads may be delayed indefinitely and/or fail with error code 'future_version'. If any of the get_*() functions have been called on this transaction already, the result is undefined.

snapshot(transaction)

@spec snapshot(t()) :: t()

Returns the transaction where :snapshot is true by default.

When used, any get_* function that supports snapshot reads will do so by default, thus inverting the default of the :snapshot option.

Snapshot reads selectively relax FoundationDB’s isolation property, reducing conflicts but making it harder to reason about concurrency.

By default, FoundationDB transactions guarantee strictly serializable isolation, resulting in a state that is as if transactions were executed one at a time, even if they were executed concurrently. Serializability has little performance cost when there are few conflicts but can be expensive when there are many. FoundationDB therefore also permits individual reads within a transaction to be done as snapshot reads.

Snapshot reads differ from ordinary (strictly serializable) reads by permitting the values they read to be modified by concurrent transactions, whereas strictly serializable reads cause conflicts in that case. Like strictly serializable reads, snapshot reads see the effects of prior writes in the same transaction. For more information on the use of snapshot reads, see Snapshot reads.

Snapshot reads also interact with transaction commit a little differently than normal reads. If a snapshot read is outstanding when transaction commit is called that read will immediately return an error. (Normally, transaction commit will wait until outstanding reads return before committing.)
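A minimal sketch of performing a read that will not add a read conflict range:

value =
  tr
  |> FDBC.Transaction.snapshot()
  |> FDBC.Transaction.get("foo")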

stream(transaction, start, stop, opts \\ [])

@spec stream(
  t(),
  binary() | FDBC.KeySelector.t(),
  binary() | FDBC.KeySelector.t(),
  keyword()
) ::
  Enumerable.t({binary(), binary()})

Streams all the key-value pairs for the given range.

This function supports the same options as get_range/4.
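For example, to lazily iterate over a range:

tr
|> FDBC.Transaction.stream("a", "z")
|> Enum.each(fn {key, value} -> IO.inspect({key, value}) end)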

stream_starts_with(transaction, prefix, opts \\ [])

@spec stream_starts_with(t(), binary(), keyword()) ::
  Enumerable.t({binary(), binary()})

Streams all the key-value pairs that start with the given prefix.

This function calls get_range/4 and therefore supports the same options as it.

watch(transaction, key)

@spec watch(t(), binary()) :: FDBC.Future.t(nil)

Watch for a change on the given key's value.

A watch’s behavior is relative to the transaction that created it. A watch will report a change in relation to the key’s value as readable by that transaction. The initial value used for comparison is either that of the transaction’s read version or the value as modified by the transaction itself prior to the creation of the watch. If the value changes and then changes back to its initial value, the watch might not report the change.

Until the transaction that created it has been committed, a watch will not report changes made by other transactions. In contrast, a watch will immediately report changes made by the transaction itself. Watches cannot be created if the transaction has set the :read_your_writes_disabled transaction option, and an attempt to do so will return a watches_disabled error.

If the transaction used to create a watch encounters an error during commit, then the watch will be set with that error. A transaction whose commit result is unknown will set all of its watches with the commit_unknown_result error. If an uncommitted transaction is reset or destroyed, then any watches it created will be set with the transaction_cancelled error.

Returns an FDBC.Future representing an empty value that will be set once the watch has detected a change to the value at the specified key.

By default, each database connection can have no more than 10,000 watches that have not yet reported a change. When this number is exceeded, an attempt to create a watch will return a too_many_watches error. This limit can be changed using the :max_watches database option. Because a watch outlives the transaction that creates it, any watch that is no longer needed should be cancelled by calling cancel/1 on its returned future.
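A minimal sketch of blocking until a key's value changes (the key "foo" is illustrative):

tr = FDBC.Transaction.create(db)
future = FDBC.Transaction.watch(tr, "foo")
:ok = FDBC.Transaction.commit(tr)

# Blocks the calling process until the value under "foo" changes.
nil = FDBC.Future.resolve(future)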