rocksdb (rocksdb v2.5.0)

Erlang Wrapper for RocksDB

Summary

Functions

create a new batch in memory. A batch is a NIF resource attached to the current process. Be careful when sharing it with other processes, as it may not be released automatically. To force its release, use the close_batch function.

reset the batch, clear all operations.

return the number of operations in the batch

Retrieve data size of the batch.

add a delete operation to the batch

like batch_delete/2 but apply the operation to a column family

Batch implementation of delete_range/5

Like batch_delete_range/3 but apply the operation to a column family

add a merge operation to the batch. For posting list operations, Value can be {posting_add, Key} to add a key to the posting list, or {posting_delete, Key} to mark a key as tombstoned

like batch_merge/3 but apply the operation to a column family. For posting list operations, Value can be {posting_add, Key} to add a key to the posting list, or {posting_delete, Key} to mark a key as tombstoned

add a put operation to the batch

like batch_put/3 but apply the operation to a column family

roll back the batch to the most recent savepoint

store a savepoint in the batch to which you can later roll back

add a single_delete operation to the batch

like batch_single_delete/2 but apply the operation to a column family

return all the operations in the batch as a list of write actions

return information about a cache as a list of tuples: {capacity, integer >= 0}, the maximum configured capacity of the cache; {strict_capacity, boolean}, whether insertion fails when the cache reaches full capacity; {usage, integer >= 0}, the memory size of the entries residing in the cache; {pinned_usage, integer >= 0}, the memory size of the entries in use by the system

return the information associated with Item for cache Cache

take a snapshot of a running RocksDB database in a separate directory. See http://rocksdb.org/blog/2609/use-checkpoints-for-efficient-snapshots/ for details.

Close RocksDB

stop and close the backup engine. Note: experimental, for testing only.

Return a coalescing iterator over multiple column families. The iterator merges results from all column families and returns keys in sorted order. When the same key exists in multiple column families, only one value is returned (from the first CF in the list).

Compact the underlying storage for the key range [*begin, *end]. The actual compaction interval might be a superset of [*begin, *end]. In particular, deleted and overwritten versions are discarded, and the data is rearranged to reduce the cost of operations needed to access the data. This operation should typically only be invoked by users who understand the underlying implementation.

Compact the underlying storage for the key range ["BeginKey", "EndKey"). like compact_range/3 but for a column family

Reply to a compaction filter callback request. This function is called by the Erlang handler process when it has processed a batch of keys sent by the compaction filter.

Return the approximate number of keys in the default column family. Implemented by calling GetIntProperty with "rocksdb.estimate-num-keys"

Return the approximate number of keys in the specified column family.

Create a new column family

Create a new column family with a specific TTL in a TTL database. The TTL is specified in seconds.

Captures the state of the database in the latest backup

Delete a key/value pair in the default column family

Delete a key/value pair in the specified column family

deletes a specific backup

Delete an entity (same as regular delete). Entities are deleted using the normal delete operation - all columns are removed when the key is deleted.

Delete an entity from a column family (same as regular delete).

Removes the database entries in the range ["BeginKey", "EndKey"), i.e., including "BeginKey" and excluding "EndKey". Returns OK on success, and a non-OK status on error. It is not an error if no keys exist in the range ["BeginKey", "EndKey").

Removes the database entries in the range ["BeginKey", "EndKey"). Like delete_range/4 but for a column family

Destroy the contents of the specified database. Be very careful using this method.

Destroy a column family

destroy an environment

Drop a column family

Flush all mem-table data.

Flush all mem-table data for a column family

Calls Fun(Elem, AccIn) on successive elements in the default column family starting with AccIn == Acc0. Fun/2 must return a new accumulator which is passed to the next call. The function returns the final value of the accumulator. Acc0 is returned if the default column family is empty.

Calls Fun(Elem, AccIn) on successive elements in the specified column family. Otherwise behaves like fold/4.

Calls Fun(Key, AccIn) on successive keys in the default column family starting with AccIn == Acc0. Fun/2 must return a new accumulator which is passed to the next call. The function returns the final value of the accumulator. Acc0 is returned if the default column family is empty.

Calls Fun(Key, AccIn) on successive keys in the specified column family. Otherwise behaves like fold_keys/4.

Delete all the files that are no longer needed. Performs a full scan of the files/ directory and deletes any file that is not referenced.

Retrieve a key/value pair in the default column family

Retrieve a key/value pair in the specified column family

The method is similar to GetApproximateSizes, except it returns the approximate number of records in memtables.

For each i in [0, n-1], store in Sizes[i] the approximate file system space used by keys in [range[i].start .. range[i].limit).

Returns info about backups in backup_info

Get column family metadata including blob file information.

Get column family metadata for a specific column family.

Retrieve an entity (wide-column key) from the default column family. Returns the columns as a proplist of {Name, Value} tuples.

Retrieve an entity (wide-column key) from the specified column family.

The sequence number of the most recent transaction.

Return the RocksDB internal status of the default column family for the given Property

Return the RocksDB internal status of the specified column family for the given Property

returns Snapshot's sequence number

Get the current TTL for a column family in a TTL database. Returns the TTL in seconds.

Ingest external SST files into the database.

Ingest external SST files into a specific column family.

Is the database empty?

Return an iterator over the contents of the database. The result of iterator() is initially invalid (the caller must call the iterator_move function on the iterator before using it).

Return an iterator over the contents of the database. The result of iterator() is initially invalid (the caller must call the iterator_move function on the iterator before using it).

Close an iterator

Get the columns of the current iterator entry. Returns the wide columns for the current entry. For entities, returns all columns. For regular key-values, returns a single column with an empty name (the default column) containing the value.

Move to the specified place

Load the blob value for the current iterator position. Use with {allow_unprepared_value, true} to enable efficient key-only scanning with selective value loading.

Refresh iterator

Return an iterator over the contents of the specified column family.

List column families

Merge a key/value pair into the default column family. For posting list operations, Value can be {posting_add, Key} to add a key to the posting list, or {posting_delete, Key} to mark a key as tombstoned

Merge a key/value pair into the specified column family. For posting list operations, Value can be {posting_add, Key} to add a key to the posting list, or {posting_delete, Key} to mark a key as tombstoned

Retrieve multiple key/value pairs in a single call. Returns a list of results in the same order as the input keys. Each result is either {ok, Value}, not_found, or {error, Reason}. This is more efficient than calling get/3 multiple times.

Retrieve multiple key/value pairs from a specific column family. Returns a list of results in the same order as the input keys.

Create a new cache.

return a default db environment

return a db environment

create new SstFileManager with the default options: RateBytesPerSec = 0, MaxTrashDbRatio = 0.25, BytesMaxDeleteChunk = 64 * 1024 * 1024.

create new SstFileManager that can be shared among multiple RocksDB instances to track SST files and control their deletion rate.

create a new WriteBufferManager.

create a new WriteBufferManager. A WriteBufferManager manages memory allocation for one or more MemTables.

Open RocksDB with the default column family

Open RocksDB with the specified column families

open a new backup engine for creating new backups.

open a database with pessimistic transaction support. Pessimistic transactions acquire locks on keys when they are accessed, providing strict serializability at the cost of potential lock contention.

open a database with pessimistic transaction support and column families.

Open read-only RocksDB with the specified column families

Open RocksDB with TTL support. This API should be used to open a db when inserted key-values are meant to be removed after a non-strict TTL amount of time. It guarantees that inserted key-values will remain in the db for at least TTL amount of time, and the db will make efforts to remove them as soon as possible after TTL seconds from their insertion.

Open a RocksDB database with TTL support and multiple column families. Each column family can have its own TTL value.

create a new pessimistic transaction. Pessimistic transactions use row-level locking with deadlock detection.

create a new pessimistic transaction with transaction options. Options include: {set_snapshot, boolean()} to acquire a snapshot at start; {deadlock_detect, boolean()} to enable deadlock detection; {lock_timeout, integer()} for the lock wait timeout in milliseconds.

commit the transaction atomically.

delete a key from the transaction.

delete a key from a column family within the transaction.

get a value from the transaction (read without acquiring lock).

get a value from a column family within the transaction.

get a value and acquire an exclusive lock on the key. This is useful for read-modify-write patterns.

get a value from a column family and acquire an exclusive lock.

get the unique ID of a pessimistic transaction. This ID can be used to identify the transaction in deadlock detection and waiting transaction lists.

get information about transactions this transaction is waiting on. Returns a map with: column_family_id, the column family ID of the key being waited on; key, the key being waited on (binary); waiting_txns, the list of transaction IDs that hold locks this transaction needs.

create an iterator over the transaction's view of the database.

create an iterator over a column family within the transaction.

batch get multiple values within a pessimistic transaction. Returns a list of results in the same order as the input keys. This does not acquire locks on the keys.

batch get multiple values and acquire exclusive locks on all keys. This is useful for read-modify-write patterns on multiple keys.

pop the most recent savepoint without rolling back. The savepoint is simply discarded.

put a key-value pair in the transaction.

put a key-value pair in a column family within the transaction.

rollback the transaction, discarding all changes.

rollback a pessimistic transaction to the most recent savepoint. All operations since the last call to pessimistic_transaction_set_savepoint/1 are undone and the savepoint is removed.

set a savepoint in a pessimistic transaction. Use pessimistic_transaction_rollback_to_savepoint/1 to rollback to this point.

Fast bitmap-based contains check. Uses hash lookup for V2 format - may have rare false positives. Use posting_list_contains/2 for exact checks.

Check if a key is active (exists and not tombstoned). This is a NIF function for efficiency.

Count the number of active keys (not tombstoned). This is a NIF function for efficiency.

Decode a posting list binary to a list of entries. Returns all entries including tombstones, in order of appearance.

Compute difference of two posting lists (Bin1 - Bin2). Returns keys that are in Bin1 but not in Bin2.

Find a key in the posting list. Returns {ok, IsTombstone} if found, or not_found if not present. This is a NIF function for efficiency.

Fold over all entries in a posting list (including tombstones).

Intersect multiple posting lists efficiently. Processes lists from smallest to largest for optimal performance.

Compute intersection of two posting lists. Returns a new V2 posting list containing only keys present in both inputs.

Fast intersection count using roaring bitmap when available. For V2 posting lists, uses bitmap cardinality for O(1) performance.

Get list of active keys (deduplicated, tombstones filtered out). This is a NIF function for efficiency.

Convert posting list to a map of key => active | tombstone. This is a NIF function for efficiency.

Compute union of two posting lists. Returns a new V2 posting list containing all keys from both inputs.

Get the format version of a posting list binary. Returns 1 for V1 (legacy) format, 2 for V2 (sorted with roaring bitmap).

Check if key exists in postings resource (bitmap hash lookup). O(1) lookup but may have rare false positives due to hash collisions.

Check if key exists in postings resource (exact match). O(log n) lookup using sorted set.

Get count of keys in postings resource.

Difference of two postings (A - B). Accepts binary or resource, returns resource.

Intersect multiple postings efficiently.

Intersect two postings (AND). Accepts binary or resource, returns resource.

Fast intersection count using bitmap.

Get all keys from postings resource (sorted).

Open/parse posting list binary into a resource for fast repeated lookups. Use this when you need to perform multiple contains checks on the same posting list. The resource holds parsed keys and bitmap for fast lookups.

Convert postings resource back to binary (V2 format).

Union two postings (OR). Accepts binary or resource, returns resource.

deletes old backups, keeping latest num_backups_to_keep alive

Put a key/value pair into the default column family

Put a key/value pair into the specified column family

Put an entity (wide-column key) in the default column family. An entity is a key with multiple named columns stored as a proplist.

Put an entity (wide-column key) in the specified column family.

release the cache

release a pessimistic transaction.

release the limiter

release the SstFileManager

Release the SST file reader resource.

Release the SST file writer resource.

release the Statistics Handle

Try to repair as much of the contents of the database as possible. Some data may be lost, so be careful when calling this function

restore from backup with backup_id

sets the maximum configured capacity of the cache. When the new capacity is less than the old capacity and the existing usage is greater than the new capacity, the implementation will do its best to purge the released entries from the cache in order to lower the usage

set background threads of a database

set database background threads of the low- and high-priority thread pools of an environment. Flush threads are in the HIGH priority pool, while compaction threads are in the LOW priority pool. To increase the number of threads in each pool, call this function.

set background threads of an environment

set background threads of the low- and high-priority thread pools of an environment. Flush threads are in the HIGH priority pool, while compaction threads are in the LOW priority pool. To increase the number of threads in each pool, call this function.

sets the strict_capacity_limit flag of the cache. If the flag is set to true, inserting into the cache will fail when not enough capacity can be freed.

Set the default TTL for a TTL database. The TTL is specified in seconds.

Set the TTL for a specific column family in a TTL database. The TTL is specified in seconds.

Remove the database entry for "key". Requires that the key exists and was not overwritten. Returns OK on success, and a non-OK status on error. It is not an error if "key" did not exist in the database.

like single_delete/3 but on the specified column family

return a database snapshot. Snapshots provide consistent read-only views over the entire state of the key-value store

set certain flags for the SST file manager: max_allowed_space_usage updates the maximum allowed space that should be used by RocksDB; if the total size of the SST files exceeds MaxAllowedSpace, writes to RocksDB will fail.

return information about an SST File Manager as a list of tuples.

return the information associated with Item for an SST File Manager SstFileManager

Returns a list of all SST files being tracked and their sizes. Each element is a tuple of {FilePath, Size} where FilePath is a binary and Size is the file size in bytes.

Get the table properties of the SST file.

Create an iterator for reading the contents of the SST file.

Close an SST file reader iterator.

Move the SST file reader iterator to a new position.

Open an SST file for reading.

Verify the checksums of all blocks in the SST file.

Verify the checksums of all blocks in the SST file.

Add a delete tombstone to the SST file.

Add a range delete tombstone to the SST file.

Get the current file size during writing.

Finalize writing to the SST file and close it.

Finalize writing to the SST file and return file info.

Add a merge operation to the SST file.

Open a new SST file for writing.

Add a key-value pair to the SST file.

Add a wide-column entity to the SST file.

Get histogram data for a specific statistics histogram. Returns histogram information including median, percentiles, average, etc. For integrated BlobDB, relevant histograms are blob_db_blob_file_write_micros, blob_db_blob_file_read_micros, blob_db_compression_micros, etc.

Get the count for a specific statistics ticker. Returns the count for tickers such as blob_db_num_put, block_cache_hit, number_keys_written, compact_read_bytes, etc.

Return the current stats of the default column family. Implemented by calling GetProperty with "rocksdb.stats"

Return the current stats of the specified column family. Implemented by calling GetProperty with "rocksdb.stats"

Sync the WAL. Note that Write() followed by SyncWAL() is not exactly the same as Write() with sync=true: in the latter case the changes won't be visible until the sync is done. Currently only works if allow_mmap_writes = false in Options.

create a new iterator to retrieve the transaction log since a given sequence number

close the transaction log

get the next update from the transaction log as a binary; the result can be used with the write_binary_update function.

like tlog_next_binary_update/1 but also return the batch as a list of operations

create a new transaction. When the database is opened as a Transaction or Optimistic Transaction db, a user can both read and write to a transaction without committing anything to disk until they decide to do so.

commit a transaction to disk atomically

add a delete operation to the transaction

like transaction_delete/2 but apply the operation to a column family

do a get operation on the contents of the transaction

like transaction_get/3 but apply the operation to a column family

get a value and track the key for conflict detection at commit time. For optimistic transactions, this records the key so that if another transaction modifies it before commit, the commit will fail with a conflict.

Return an iterator over the contents of the database and the uncommitted writes and deletes in the current transaction. The result of iterator() is initially invalid (the caller must call the iterator_move function on the iterator before using it).

Return an iterator over the contents of the database and the uncommitted writes and deletes in the current transaction. The result of iterator() is initially invalid (the caller must call the iterator_move function on the iterator before using it).

batch get multiple values within a transaction. Returns a list of results in the same order as the input keys.

batch get multiple values and track keys for conflict detection. For optimistic transactions, this records the keys so that if another transaction modifies any of them before commit, the commit will fail.

add a put operation to the transaction

like transaction_put/3 but apply the operation to a column family

rollback a transaction atomically, discarding all changes

checks that each file exists and that the size of the file matches our expectations. It does not check file checksums.

Apply the specified updates to the database. This function will be removed in the next major release. You should use the batch_* API instead.

write the batch to the database

apply a set of operations coming from a transaction log to another database. Can be useful in slave mode.

return information about a Write Buffer Manager as a list of tuples.

return the information associated with Item for a Write Buffer Manager.

Types

access_hint/0

-type access_hint() :: normal | sequential | willneed | none.

backup_engine/0

-opaque backup_engine()

backup_info/0

-type backup_info() ::
          #{id := non_neg_integer(),
            timestamp := non_neg_integer(),
            size := non_neg_integer(),
            number_files := non_neg_integer()}.

batch_handle/0

-opaque batch_handle()

blob_db_histogram/0

-type blob_db_histogram() ::
          blob_db_key_size | blob_db_value_size | blob_db_write_micros | blob_db_get_micros |
          blob_db_multiget_micros | blob_db_seek_micros | blob_db_next_micros | blob_db_prev_micros |
          blob_db_blob_file_write_micros | blob_db_blob_file_read_micros |
          blob_db_blob_file_sync_micros | blob_db_compression_micros | blob_db_decompression_micros.

blob_db_ticker/0

-type blob_db_ticker() ::
          blob_db_num_put | blob_db_num_write | blob_db_num_get | blob_db_num_multiget |
          blob_db_num_seek | blob_db_num_next | blob_db_num_prev | blob_db_num_keys_written |
          blob_db_num_keys_read | blob_db_bytes_written | blob_db_bytes_read | blob_db_write_inlined |
          blob_db_write_inlined_ttl | blob_db_write_blob | blob_db_write_blob_ttl |
          blob_db_blob_file_bytes_written | blob_db_blob_file_bytes_read | blob_db_blob_file_synced |
          blob_db_blob_index_expired_count | blob_db_blob_index_expired_size |
          blob_db_blob_index_evicted_count | blob_db_blob_index_evicted_size | blob_db_gc_num_files |
          blob_db_gc_num_new_files | blob_db_gc_failures | blob_db_gc_num_keys_relocated |
          blob_db_gc_bytes_relocated | blob_db_fifo_num_files_evicted | blob_db_fifo_num_keys_evicted |
          blob_db_fifo_bytes_evicted | blob_db_cache_miss | blob_db_cache_hit | blob_db_cache_add |
          blob_db_cache_add_failures | blob_db_cache_bytes_read | blob_db_cache_bytes_write.

blob_metadata/0

-type blob_metadata() ::
          #{blob_file_number => non_neg_integer(),
            blob_file_name => binary(),
            blob_file_path => binary(),
            size => non_neg_integer(),
            total_blob_count => non_neg_integer(),
            total_blob_bytes => non_neg_integer(),
            garbage_blob_count => non_neg_integer(),
            garbage_blob_bytes => non_neg_integer()}.

block_based_table_options/0

-type block_based_table_options() ::
          [{no_block_cache, boolean()} |
           {block_size, pos_integer()} |
           {block_cache, cache_handle()} |
           {block_cache_size, pos_integer()} |
           {bloom_filter_policy, BitsPerKey :: pos_integer()} |
           {format_version, 0 | 1 | 2 | 3 | 4 | 5} |
           {cache_index_and_filter_blocks, boolean()}].

block_cache_ticker/0

-type block_cache_ticker() ::
          block_cache_miss | block_cache_hit | block_cache_add | block_cache_add_failures |
          block_cache_index_miss | block_cache_index_hit | block_cache_filter_miss |
          block_cache_filter_hit | block_cache_data_miss | block_cache_data_hit |
          block_cache_bytes_read | block_cache_bytes_write.

bottommost_level_compaction/0

-type bottommost_level_compaction() :: skip | if_have_compaction_filter | force | force_optimized.

cache_handle/0

-opaque cache_handle()

cache_type/0

-type cache_type() :: lru | clock.

cf_descriptor/0

-type cf_descriptor() :: {string(), cf_options()}.

cf_handle/0

-opaque cf_handle()

cf_metadata/0

-type cf_metadata() ::
          #{size => non_neg_integer(),
            file_count => non_neg_integer(),
            name => binary(),
            blob_file_size => non_neg_integer(),
            blob_files => [blob_metadata()]}.

cf_options/0

-type cf_options() ::
          [{block_cache_size_mb_for_point_lookup, non_neg_integer()} |
           {memtable_memory_budget, pos_integer()} |
           {write_buffer_size, pos_integer()} |
           {max_write_buffer_number, pos_integer()} |
           {min_write_buffer_number_to_merge, pos_integer()} |
           {enable_blob_files, boolean()} |
           {min_blob_size, non_neg_integer()} |
           {blob_file_size, non_neg_integer()} |
           {blob_compression_type, compression_type()} |
           {enable_blob_garbage_collection, boolean()} |
           {blob_garbage_collection_age_cutoff, float()} |
           {blob_garbage_collection_force_threshold, float()} |
           {blob_compaction_readahead_size, non_neg_integer()} |
           {blob_file_starting_level, non_neg_integer()} |
           {blob_cache, cache_handle()} |
           {prepopulate_blob_cache, prepopulate_blob_cache()} |
           {compression, compression_type()} |
           {bottommost_compression, compression_type()} |
           {compression_opts, compression_opts()} |
           {bottommost_compression_opts, compression_opts()} |
           {num_levels, pos_integer()} |
           {ttl, pos_integer()} |
           {level0_file_num_compaction_trigger, integer()} |
           {level0_slowdown_writes_trigger, integer()} |
           {level0_stop_writes_trigger, integer()} |
           {target_file_size_base, pos_integer()} |
           {target_file_size_multiplier, pos_integer()} |
           {max_bytes_for_level_base, pos_integer()} |
           {max_bytes_for_level_multiplier, pos_integer()} |
           {max_compaction_bytes, pos_integer()} |
           {arena_block_size, integer()} |
           {disable_auto_compactions, boolean()} |
           {compaction_style, compaction_style()} |
           {compaction_pri, compaction_pri()} |
           {compaction_options_fifo, compaction_options_fifo()} |
           {filter_deletes, boolean()} |
           {max_sequential_skip_in_iterations, pos_integer()} |
           {inplace_update_support, boolean()} |
           {inplace_update_num_locks, pos_integer()} |
           {table_factory_block_cache_size, pos_integer()} |
           {in_memory_mode, boolean()} |
           {block_based_table_options, block_based_table_options()} |
           {level_compaction_dynamic_level_bytes, boolean()} |
           {optimize_filters_for_hits, boolean()} |
           {prefix_extractor,
            {fixed_prefix_transform, integer()} | {capped_prefix_transform, integer()}} |
           {merge_operator, merge_operator()} |
           {compaction_filter, compaction_filter_opts()}].

column_family/0

-type column_family() :: cf_handle() | default_column_family.

compact_range_options/0

-type compact_range_options() ::
          [{exclusive_manual_compaction, boolean()} |
           {change_level, boolean()} |
           {target_level, integer()} |
           {allow_write_stall, boolean()} |
           {max_subcompactions, non_neg_integer()} |
           {bottommost_level_compaction, bottommost_level_compaction()}].

compaction_filter_opts/0

-type compaction_filter_opts() ::
          #{rules := [filter_rule()]} |
          #{handler := pid(), batch_size => pos_integer(), timeout => pos_integer()}.

compaction_options_fifo/0

-type compaction_options_fifo() ::
          [{max_table_file_size, pos_integer()} | {allow_compaction, boolean()}].

compaction_pri/0

-type compaction_pri() :: compensated_size | oldest_largest_seq_first | oldest_smallest_seq_first.

compaction_style/0

-type compaction_style() :: level | universal | fifo | none.

compaction_ticker/0

-type compaction_ticker() ::
          compact_read_bytes | compact_write_bytes | flush_write_bytes |
          compaction_key_drop_newer_entry | compaction_key_drop_obsolete |
          compaction_key_drop_range_del | compaction_key_drop_user | compaction_cancelled |
          number_superversion_acquires | number_superversion_releases.

compression_opts/0

-type compression_opts() ::
          [{enabled, boolean()} |
           {window_bits, pos_integer()} |
           {level, non_neg_integer()} |
           {strategy, integer()} |
           {max_dict_bytes, non_neg_integer()} |
           {zstd_max_train_bytes, non_neg_integer()}].

compression_type/0

-type compression_type() :: snappy | zlib | bzip2 | lz4 | lz4h | zstd | none.

core_operation_histogram/0

-type core_operation_histogram() ::
          db_get | db_write | db_multiget | db_seek | compaction_time | flush_time.

db_handle/0

-opaque db_handle()

db_operation_ticker/0

-type db_operation_ticker() ::
          number_keys_written | number_keys_read | number_keys_updated | bytes_written | bytes_read |
          iter_bytes_read | number_db_seek | number_db_next | number_db_prev | number_db_seek_found |
          number_db_next_found | number_db_prev_found.

db_options/0

-type db_options() ::
          [{env, env()} |
           {total_threads, pos_integer()} |
           {create_if_missing, boolean()} |
           {create_missing_column_families, boolean()} |
           {error_if_exists, boolean()} |
           {paranoid_checks, boolean()} |
           {max_open_files, integer()} |
           {max_total_wal_size, non_neg_integer()} |
           {use_fsync, boolean()} |
           {db_paths, [#db_path{path :: file:filename_all(), target_size :: non_neg_integer()}]} |
           {db_log_dir, file:filename_all()} |
           {wal_dir, file:filename_all()} |
           {delete_obsolete_files_period_micros, pos_integer()} |
           {max_background_jobs, pos_integer()} |
           {max_background_compactions, pos_integer()} |
           {max_background_flushes, pos_integer()} |
           {max_log_file_size, non_neg_integer()} |
           {log_file_time_to_roll, non_neg_integer()} |
           {keep_log_file_num, pos_integer()} |
           {max_manifest_file_size, pos_integer()} |
           {table_cache_numshardbits, pos_integer()} |
           {wal_ttl_seconds, non_neg_integer()} |
           {manual_wal_flush, boolean()} |
           {wal_size_limit_mb, non_neg_integer()} |
           {manifest_preallocation_size, pos_integer()} |
           {allow_mmap_reads, boolean()} |
           {allow_mmap_writes, boolean()} |
           {is_fd_close_on_exec, boolean()} |
           {stats_dump_period_sec, non_neg_integer()} |
           {advise_random_on_open, boolean()} |
           {access_hint, access_hint()} |
           {compaction_readahead_size, non_neg_integer()} |
           {use_adaptive_mutex, boolean()} |
           {bytes_per_sync, non_neg_integer()} |
           {skip_stats_update_on_db_open, boolean()} |
           {wal_recovery_mode, wal_recovery_mode()} |
           {allow_concurrent_memtable_write, boolean()} |
           {enable_write_thread_adaptive_yield, boolean()} |
           {db_write_buffer_size, non_neg_integer()} |
           {in_memory, boolean()} |
           {rate_limiter, rate_limiter_handle()} |
           {sst_file_manager, sst_file_manager()} |
           {write_buffer_manager, write_buffer_manager()} |
           {max_subcompactions, non_neg_integer()} |
           {atomic_flush, boolean()} |
           {use_direct_reads, boolean()} |
           {use_direct_io_for_flush_and_compaction, boolean()} |
           {enable_pipelined_write, boolean()} |
           {unordered_write, boolean()} |
           {two_write_queues, boolean()} |
           {statistics, statistics_handle()}].

env/0

-opaque env()

env_handle/0

-opaque env_handle()

env_priority/0

-type env_priority() :: priority_high | priority_low.

env_type/0

-type env_type() :: default | memenv.

filter_decision/0

-type filter_decision() :: keep | remove | {change_value, binary()}.

filter_rule/0

-type filter_rule() ::
          {key_prefix, binary()} |
          {key_suffix, binary()} |
          {key_contains, binary()} |
          {value_empty} |
          {value_prefix, binary()} |
          {ttl_from_key,
           Offset :: non_neg_integer(),
           Length :: non_neg_integer(),
           TTLSeconds :: non_neg_integer()} |
          {always_delete}.

flush_options/0

-type flush_options() :: [{wait, boolean()} | {allow_write_stall, boolean()}].

fold_fun/0

-type fold_fun() :: fun(({Key :: binary(), Value :: binary()}, any()) -> any()).

fold_keys_fun/0

-type fold_keys_fun() :: fun((Key :: binary(), any()) -> any()).

histogram_info/0

-type histogram_info() ::
          #{median => float(),
            percentile95 => float(),
            percentile99 => float(),
            average => float(),
            standard_deviation => float(),
            max => float(),
            count => non_neg_integer(),
            sum => non_neg_integer()}.

ingest_external_file_option/0

-type ingest_external_file_option() ::
          {move_files, boolean()} |
          {failed_move_fall_back_to_copy, boolean()} |
          {snapshot_consistency, boolean()} |
          {allow_global_seqno, boolean()} |
          {allow_blocking_flush, boolean()} |
          {ingest_behind, boolean()} |
          {verify_checksums_before_ingest, boolean()} |
          {verify_checksums_readahead_size, non_neg_integer()} |
          {verify_file_checksum, boolean()} |
          {fail_if_not_bottommost_level, boolean()} |
          {allow_db_generated_files, boolean()} |
          {fill_cache, boolean()}.

io_sync_histogram/0

-type io_sync_histogram() ::
          sst_read_micros | sst_write_micros | table_sync_micros | wal_file_sync_micros |
          bytes_per_read | bytes_per_write.

iterator_action/0

-type iterator_action() ::
          first | last | next | prev | binary() | {seek, binary()} | {seek_for_prev, binary()}.

itr_handle/0

-opaque itr_handle()

memtable_stall_ticker/0

-type memtable_stall_ticker() ::
          memtable_hit | memtable_miss | stall_micros | write_done_by_self | write_done_by_other |
          wal_file_synced.

merge_operator/0

-type merge_operator() ::
          erlang_merge_operator | bitset_merge_operator |
          {bitset_merge_operator, non_neg_integer()} |
          counter_merge_operator.

options/0

-type options() :: db_options() | cf_options().

posting_entry/0

-type posting_entry() :: {Key :: binary(), IsTombstone :: boolean()}.

prepopulate_blob_cache/0

-type prepopulate_blob_cache() :: disable | flush_only.

range/0

-type range() :: {Start :: binary(), Limit :: binary()}.

rate_limiter_handle/0

-opaque rate_limiter_handle()

read_options/0

-type read_options() ::
          [{read_tier, read_tier()} |
           {verify_checksums, boolean()} |
           {fill_cache, boolean()} |
           {iterate_upper_bound, binary()} |
           {iterate_lower_bound, binary()} |
           {tailing, boolean()} |
           {total_order_seek, boolean()} |
           {prefix_same_as_start, boolean()} |
           {snapshot, snapshot_handle()} |
           {auto_refresh_iterator_with_snapshot, boolean()} |
           {auto_readahead_size, boolean()} |
           {readahead_size, non_neg_integer()} |
           {async_io, boolean()} |
           {allow_unprepared_value, boolean()}].

read_tier/0

-type read_tier() :: read_all_tier | block_cache_tier | persisted_tier | memtable_tier.

size_approximation_flag/0

-type size_approximation_flag() :: none | include_memtables | include_files | include_both.

snapshot_handle/0

-opaque snapshot_handle()

sst_file_info/0

-type sst_file_info() ::
          #{file_path := binary(),
            smallest_key := binary(),
            largest_key := binary(),
            smallest_range_del_key := binary(),
            largest_range_del_key := binary(),
            file_size := non_neg_integer(),
            num_entries := non_neg_integer(),
            num_range_del_entries := non_neg_integer(),
            sequence_number := non_neg_integer()}.

sst_file_manager/0

-opaque sst_file_manager()

sst_file_reader/0

-opaque sst_file_reader()

sst_file_reader_itr/0

-opaque sst_file_reader_itr()

sst_file_writer/0

-opaque sst_file_writer()

statistics_handle/0

-opaque statistics_handle()

stats_level/0

-type stats_level() ::
          stats_disable_all | stats_except_tickers | stats_except_histogram_or_timers |
          stats_except_timers | stats_except_detailed_timers | stats_except_time_for_mutex | stats_all.

table_properties/0

-type table_properties() ::
          #{data_size := non_neg_integer(),
            index_size := non_neg_integer(),
            index_partitions := non_neg_integer(),
            top_level_index_size := non_neg_integer(),
            filter_size := non_neg_integer(),
            raw_key_size := non_neg_integer(),
            raw_value_size := non_neg_integer(),
            num_data_blocks := non_neg_integer(),
            num_entries := non_neg_integer(),
            num_deletions := non_neg_integer(),
            num_merge_operands := non_neg_integer(),
            num_range_deletions := non_neg_integer(),
            format_version := non_neg_integer(),
            fixed_key_len := non_neg_integer(),
            column_family_id := non_neg_integer(),
            column_family_name := binary(),
            filter_policy_name := binary(),
            comparator_name := binary(),
            merge_operator_name := binary(),
            prefix_extractor_name := binary(),
            property_collectors_names := binary(),
            compression_name := binary(),
            compression_options := binary(),
            creation_time := non_neg_integer(),
            oldest_key_time := non_neg_integer(),
            file_creation_time := non_neg_integer(),
            slow_compression_estimated_data_size := non_neg_integer(),
            fast_compression_estimated_data_size := non_neg_integer(),
            external_sst_file_global_seqno_offset := non_neg_integer()}.

transaction_handle/0

-opaque transaction_handle()

transaction_histogram/0

-type transaction_histogram() :: num_op_per_transaction.

transaction_ticker/0

-type transaction_ticker() ::
          txn_prepare_mutex_overhead | txn_old_commit_map_mutex_overhead | txn_duplicate_key_overhead |
          txn_snapshot_mutex_overhead | txn_get_try_again.

wal_recovery_mode/0

-type wal_recovery_mode() ::
          tolerate_corrupted_tail_records | absolute_consistency | point_in_time_recovery |
          skip_any_corrupted_records.

write_actions/0

-type write_actions() ::
          [{put, Key :: binary(), Value :: binary()} |
           {put, ColumnFamilyHandle :: cf_handle(), Key :: binary(), Value :: binary()} |
           {delete, Key :: binary()} |
           {delete, ColumnFamilyHandle :: cf_handle(), Key :: binary()} |
           {single_delete, Key :: binary()} |
           {single_delete, ColumnFamilyHandle :: cf_handle(), Key :: binary()} |
           clear].

write_buffer_manager/0

-opaque write_buffer_manager()

write_options/0

-type write_options() ::
          [{sync, boolean()} |
           {disable_wal, boolean()} |
           {ignore_missing_column_families, boolean()} |
           {no_slowdown, boolean()} |
           {low_pri, boolean()}].

Functions

batch()

-spec batch() -> {ok, Batch :: batch_handle()}.

create a new batch in memory. A batch is a NIF resource attached to the current process. Be careful when sharing it with other processes, as it may not be released automatically. To force its release, use the close_batch function.
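
A minimal usage sketch (assuming an open DbHandle, and using write_batch/3, listed in the summary, to apply the batch):

  {ok, Batch} = rocksdb:batch(),
  ok = rocksdb:batch_put(Batch, <<"key">>, <<"value">>),
  ok = rocksdb:batch_delete(Batch, <<"old_key">>),
  ok = rocksdb:write_batch(DbHandle, Batch, []),
  ok = rocksdb:close_batch(Batch).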

batch_clear(Batch)

-spec batch_clear(Batch :: batch_handle()) -> ok.

reset the batch, clear all operations.

batch_count(_Batch)

-spec batch_count(_Batch :: batch_handle()) -> Count :: non_neg_integer().

return the number of operations in the batch

batch_data_size(_Batch)

-spec batch_data_size(_Batch :: batch_handle()) -> BatchSize :: non_neg_integer().

Retrieve data size of the batch.

batch_delete(Batch, Key)

-spec batch_delete(Batch :: batch_handle(), Key :: binary()) -> ok.

add a delete operation to the batch

batch_delete(Batch, ColumnFamily, Key)

-spec batch_delete(Batch :: batch_handle(), ColumnFamily :: cf_handle(), Key :: binary()) -> ok.

like batch_delete/2 but apply the operation to a column family

batch_delete_range(Batch, Begin, End)

-spec batch_delete_range(Batch :: batch_handle(), Begin :: binary(), End :: binary()) -> ok.

Batch implementation of delete_range/5

batch_delete_range(Batch, ColumnFamily, Begin, End)

-spec batch_delete_range(Batch :: batch_handle(),
                         ColumnFamily :: cf_handle(),
                         Begin :: binary(),
                         End :: binary()) ->
                            ok.

Like batch_delete_range/3 but apply the operation to a column family

batch_merge(Batch, Key, Value)

-spec batch_merge(Batch :: batch_handle(),
                  Key :: binary(),
                  Value :: binary() | {posting_add, binary()} | {posting_delete, binary()}) ->
                     ok.

add a merge operation to the batch. For posting list operations, Value can be {posting_add, Key} to add a key to the posting list, or {posting_delete, Key} to mark a key as tombstoned

batch_merge(Batch, ColumnFamily, Key, Value)

-spec batch_merge(Batch :: batch_handle(),
                  ColumnFamily :: cf_handle(),
                  Key :: binary(),
                  Value :: binary() | {posting_add, binary()} | {posting_delete, binary()}) ->
                     ok.

like batch_merge/3 but apply the operation to a column family. For posting list operations, Value can be {posting_add, Key} to add a key to the posting list, or {posting_delete, Key} to mark a key as tombstoned

batch_put(Batch, Key, Value)

-spec batch_put(Batch :: batch_handle(), Key :: binary(), Value :: binary()) -> ok.

add a put operation to the batch

batch_put(Batch, ColumnFamily, Key, Value)

-spec batch_put(Batch :: batch_handle(),
                ColumnFamily :: cf_handle(),
                Key :: binary(),
                Value :: binary()) ->
                   ok.

like batch_put/3 but apply the operation to a column family

batch_rollback(Batch)

-spec batch_rollback(Batch :: batch_handle()) -> ok.

roll back the batch to the most recent savepoint

batch_savepoint(Batch)

-spec batch_savepoint(Batch :: batch_handle()) -> ok.

store a savepoint in the batch to which you can later roll back
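
For example (a sketch; per batch_rollback/1 above, the rollback undoes the operations recorded after the savepoint):

  {ok, Batch} = rocksdb:batch(),
  ok = rocksdb:batch_put(Batch, <<"a">>, <<"1">>),
  ok = rocksdb:batch_savepoint(Batch),
  ok = rocksdb:batch_put(Batch, <<"b">>, <<"2">>),
  ok = rocksdb:batch_rollback(Batch),
  %% only the put recorded before the savepoint remains
  1 = rocksdb:batch_count(Batch).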

batch_single_delete(Batch, Key)

-spec batch_single_delete(Batch :: batch_handle(), Key :: binary()) -> ok.

add a single_delete operation to the batch

batch_single_delete(Batch, ColumnFamily, Key)

-spec batch_single_delete(Batch :: batch_handle(), ColumnFamily :: cf_handle(), Key :: binary()) -> ok.

like batch_single_delete/2 but apply the operation to a column family

batch_tolist(Batch)

-spec batch_tolist(Batch :: batch_handle()) -> Ops :: write_actions().

return all the operations in the batch as a list of write actions

cache_info(Cache)

-spec cache_info(Cache) -> InfoList
                    when
                        Cache :: cache_handle(),
                        InfoList :: [InfoTuple],
                        InfoTuple ::
                            {capacity, non_neg_integer()} |
                            {strict_capacity, boolean()} |
                            {usage, non_neg_integer()} |
                            {pinned_usage, non_neg_integer()}.

return information about a cache as a list of tuples: {capacity, integer >= 0}, the maximum configured capacity of the cache; {strict_capacity, boolean}, whether insertion fails when the cache reaches full capacity; {usage, integer >= 0}, the memory size of the entries residing in the cache; {pinned_usage, integer >= 0}, the memory size of the entries in use by the system
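
A sketch of inspecting a cache (assuming it was created with new_cache/2, the cache constructor summarized above, and released with release_cache/1):

  {ok, Cache} = rocksdb:new_cache(lru, 8 * 1024 * 1024),
  InfoList = rocksdb:cache_info(Cache),
  {capacity, _Capacity} = lists:keyfind(capacity, 1, InfoList),
  _Usage = rocksdb:cache_info(Cache, usage),
  ok = rocksdb:release_cache(Cache).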

cache_info(Cache, Item)

-spec cache_info(Cache, Item) -> Value
                    when
                        Cache :: cache_handle(),
                        Item :: capacity | strict_capacity | usage | pinned_usage,
                        Value :: term().

return the information associated with Item for cache Cache

checkpoint(DbHandle, Path)

-spec checkpoint(DbHandle :: db_handle(), Path :: file:filename_all()) -> ok | {error, any()}.

take a snapshot of a running RocksDB database in a separate directory. See http://rocksdb.org/blog/2609/use-checkpoints-for-efficient-snapshots/ for details.
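
A sketch (assuming an open DbHandle; RocksDB requires that the checkpoint directory does not already exist):

  ok = rocksdb:checkpoint(DbHandle, "/var/backups/db-checkpoint"),
  %% a checkpoint can later be opened like any other database
  {ok, CheckpointDb} = rocksdb:open("/var/backups/db-checkpoint", []),
  ok = rocksdb:close(CheckpointDb).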

close(DBHandle)

-spec close(DBHandle) -> Res when DBHandle :: db_handle(), Res :: ok | {error, any()}.

Close RocksDB

close_backup_engine(BackupEngine)

-spec close_backup_engine(backup_engine()) -> ok.

stop and close the backup engine. Note: experimental, for testing only.

close_updates_iterator(Itr)

coalescing_iterator(DBHandle, CFHandles, ReadOpts)

-spec coalescing_iterator(DBHandle, CFHandles, ReadOpts) -> {ok, itr_handle()} | {error, any()}
                             when
                                 DBHandle :: db_handle(),
                                 CFHandles :: [cf_handle()],
                                 ReadOpts :: read_options().

Return a coalescing iterator over multiple column families. The iterator merges results from all column families and returns keys in sorted order. When the same key exists in multiple column families, only one value is returned (from the first CF in the list).

compact_range(DBHandle, BeginKey, EndKey, CompactRangeOpts)

-spec compact_range(DBHandle, BeginKey, EndKey, CompactRangeOpts) -> Res
                       when
                           DBHandle :: db_handle(),
                           BeginKey :: binary() | undefined,
                           EndKey :: binary() | undefined,
                           CompactRangeOpts :: compact_range_options(),
                           Res :: ok | {error, any()}.

Compact the underlying storage for the key range [*begin, *end]. The actual compaction interval might be a superset of [*begin, *end]. In particular, deleted and overwritten versions are discarded, and the data is rearranged to reduce the cost of operations needed to access the data. This operation should typically only be invoked by users who understand the underlying implementation.

"begin==undefined" is treated as a key before all keys in the database. "end==undefined" is treated as a key after all keys in the database. Therefore the following call will compact the entire database: rocksdb::compact_range(Options, undefined, undefined); Note that after the entire database is compacted, all data are pushed down to the last level containing any data. If the total data size after compaction is reduced, that level might not be appropriate for hosting all the files. In this case, client could set options.change_level to true, to move the files back to the minimum level capable of holding the data set or a given level (specified by non-negative target_level).

compact_range(DBHandle, CFHandle, BeginKey, EndKey, CompactRangeOpts)

-spec compact_range(DBHandle, CFHandle, BeginKey, EndKey, CompactRangeOpts) -> Res
                       when
                           DBHandle :: db_handle(),
                           CFHandle :: cf_handle(),
                           BeginKey :: binary() | undefined,
                           EndKey :: binary() | undefined,
                           CompactRangeOpts :: compact_range_options(),
                           Res :: ok | {error, any()}.

Compact the underlying storage for the key range ["BeginKey", "EndKey"). Like compact_range/4 but for a column family

compaction_filter_reply(BatchRef, Decisions)

-spec compaction_filter_reply(reference(), [filter_decision()]) -> ok.

Reply to a compaction filter callback request. This function is called by the Erlang handler process when it has processed a batch of keys sent by the compaction filter.

BatchRef is the reference received in the {compaction_filter, BatchRef, Keys} message. Decisions is a list of filter_decision() values corresponding to each key: keep keeps the key-value pair; remove deletes the key-value pair; {change_value, NewBinary} keeps the key but replaces the value.

Example handler:

  filter_handler() ->
      receive
          {compaction_filter, BatchRef, Keys} ->
              Decisions = [decide(K, V) || {_Level, K, V} <- Keys],
              rocksdb:compaction_filter_reply(BatchRef, Decisions),
              filter_handler()
      end.
 
  decide(<<"tmp_", _/binary>>, _Value) -> remove;
  decide(_Key, <<>>) -> remove;
  decide(_Key, Value) when byte_size(Value) > 1000 ->
      {change_value, binary:part(Value, 0, 1000)};
  decide(_, _) -> keep.

count(DBHandle)

-spec count(DBHandle :: db_handle()) -> non_neg_integer() | {error, any()}.

Return the approximate number of keys in the default column family. Implemented by calling GetIntProperty with "rocksdb.estimate-num-keys"

this function is deprecated and will be removed in the next major release.

count(DBHandle, CFHandle)

-spec count(DBHandle :: db_handle(), CFHandle :: cf_handle()) -> non_neg_integer() | {error, any()}.

Return the approximate number of keys in the specified column family.

this function is deprecated and will be removed in the next major release.

create_column_family(DBHandle, Name, CFOpts)

-spec create_column_family(DBHandle, Name, CFOpts) -> Res
                              when
                                  DBHandle :: db_handle(),
                                  Name :: string(),
                                  CFOpts :: cf_options(),
                                  Res :: {ok, cf_handle()} | {error, any()}.

Create a new column family
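
A sketch (assuming an open DbHandle, and using the put/5 and get/4 column-family variants listed in the summary):

  {ok, UsersCf} = rocksdb:create_column_family(DbHandle, "users", []),
  ok = rocksdb:put(DbHandle, UsersCf, <<"u1">>, <<"alice">>, []),
  {ok, <<"alice">>} = rocksdb:get(DbHandle, UsersCf, <<"u1">>, []).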

create_column_family_with_ttl(DBHandle, Name, CFOpts, TTL)

-spec create_column_family_with_ttl(DBHandle, Name, CFOpts, TTL) -> {ok, cf_handle()} | {error, any()}
                                       when
                                           DBHandle :: db_handle(),
                                           Name :: string(),
                                           CFOpts :: cf_options(),
                                           TTL :: integer().

Create a new column family with a specific TTL in a TTL database. The TTL is specified in seconds.

create_new_backup(BackupEngine, Db)

-spec create_new_backup(BackupEngine :: backup_engine(), Db :: db_handle()) -> ok | {error, term()}.

Captures the state of the database in the latest backup

default_env()

delete(DBHandle, Key, WriteOpts)

-spec delete(DBHandle, Key, WriteOpts) -> ok | {error, any()}
                when DBHandle :: db_handle(), Key :: binary(), WriteOpts :: write_options().

Delete a key/value pair in the default column family

delete(DBHandle, CFHandle, Key, WriteOpts)

-spec delete(DBHandle, CFHandle, Key, WriteOpts) -> Res
                when
                    DBHandle :: db_handle(),
                    CFHandle :: cf_handle(),
                    Key :: binary(),
                    WriteOpts :: write_options(),
                    Res :: ok | {error, any()}.

Delete a key/value pair in the specified column family

delete_backup(BackupEngine, BackupId)

-spec delete_backup(BackupEngine :: backup_engine(), BackupId :: non_neg_integer()) ->
                       ok | {error, any()}.

deletes a specific backup

delete_entity(DBHandle, Key, WriteOpts)

-spec delete_entity(DBHandle, Key, WriteOpts) -> Res
                       when
                           DBHandle :: db_handle(),
                           Key :: binary(),
                           WriteOpts :: write_options(),
                           Res :: ok | {error, any()}.

Delete an entity (same as regular delete). Entities are deleted using the normal delete operation - all columns are removed when the key is deleted.

delete_entity(DBHandle, CFHandle, Key, WriteOpts)

-spec delete_entity(DBHandle, CFHandle, Key, WriteOpts) -> Res
                       when
                           DBHandle :: db_handle(),
                           CFHandle :: cf_handle(),
                           Key :: binary(),
                           WriteOpts :: write_options(),
                           Res :: ok | {error, any()}.

Delete an entity from a column family (same as regular delete).

delete_range(DBHandle, BeginKey, EndKey, WriteOpts)

-spec delete_range(DBHandle, BeginKey, EndKey, WriteOpts) -> Res
                      when
                          DBHandle :: db_handle(),
                          BeginKey :: binary(),
                          EndKey :: binary(),
                          WriteOpts :: write_options(),
                          Res :: ok | {error, any()}.

Removes the database entries in the range ["BeginKey", "EndKey"), i.e., including "BeginKey" and excluding "EndKey". Returns OK on success, and a non-OK status on error. It is not an error if no keys exist in the range ["BeginKey", "EndKey").

This feature is currently an experimental performance optimization for deleting very large ranges of contiguous keys. Invoking it many times or on small ranges may severely degrade read performance; in particular, the resulting performance can be worse than calling Delete() for each key in the range. Note also the degraded read performance affects keys outside the deleted ranges, and affects database operations involving scans, like flush and compaction.

Consider setting ReadOptions::ignore_range_deletions = true to speed up reads for key(s) that are known to be unaffected by range deletions.
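
For example, to remove every key in ["user:1000", "user:2000") in one call (sketch):

  ok = rocksdb:delete_range(DbHandle, <<"user:1000">>, <<"user:2000">>, []).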

delete_range(DBHandle, CFHandle, BeginKey, EndKey, WriteOpts)

-spec delete_range(DBHandle, CFHandle, BeginKey, EndKey, WriteOpts) -> Res
                      when
                          DBHandle :: db_handle(),
                          CFHandle :: cf_handle(),
                          BeginKey :: binary(),
                          EndKey :: binary(),
                          WriteOpts :: write_options(),
                          Res :: ok | {error, any()}.

Removes the database entries in the range ["BeginKey", "EndKey"). Like delete_range/4 but for a column family

destroy(Name, DBOpts)

-spec destroy(Name :: file:filename_all(), DBOpts :: db_options()) -> ok | {error, any()}.

Destroy the contents of the specified database. Be very careful using this method.

destroy_column_family(CFHandle)

destroy_column_family(DBHandle, CFHandle)

-spec destroy_column_family(DBHandle, CFHandle) -> Res
                               when
                                   DBHandle :: db_handle(),
                                   CFHandle :: cf_handle(),
                                   Res :: ok | {error, any()}.

Destroy a column family

destroy_env(Env)

-spec destroy_env(Env :: env_handle()) -> ok.

destroy an environment

drop_column_family(CFHandle)

drop_column_family(DBHandle, CFHandle)

-spec drop_column_family(DBHandle, CFHandle) -> Res
                            when
                                DBHandle :: db_handle(),
                                CFHandle :: cf_handle(),
                                Res :: ok | {error, any()}.

Drop a column family

flush(DbHandle, FlushOptions)

-spec flush(db_handle(), flush_options()) -> ok | {error, term()}.

Flush all mem-table data.

flush(DbHandle, Cf, FlushOptions)

-spec flush(db_handle(), column_family(), flush_options()) -> ok | {error, term()}.

Flush all mem-table data for a column family

fold(DBHandle, Fun, AccIn, ReadOpts)

-spec fold(DBHandle, Fun, AccIn, ReadOpts) -> AccOut
              when
                  DBHandle :: db_handle(),
                  Fun :: fold_fun(),
                  AccIn :: any(),
                  ReadOpts :: read_options(),
                  AccOut :: any().

Calls Fun(Elem, AccIn) on successive elements in the default column family starting with AccIn == Acc0. Fun/2 must return a new accumulator which is passed to the next call. The function returns the final value of the accumulator. Acc0 is returned if the default column family is empty.

this function is deprecated and will be removed in the next major release. You should use the iterator API instead.
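
An equivalent loop over the iterator API (a sketch built on iterator/2, iterator_move/2 and iterator_close/1 from this module; it assumes iterator_move/2 returns {ok, Key, Value} while the iterator is valid and {error, _} once it is exhausted):

  fold_db(DbHandle, Fun, Acc0, ReadOpts) ->
      {ok, Itr} = rocksdb:iterator(DbHandle, ReadOpts),
      try
          fold_loop(rocksdb:iterator_move(Itr, first), Itr, Fun, Acc0)
      after
          rocksdb:iterator_close(Itr)
      end.

  fold_loop({error, _}, _Itr, _Fun, Acc) ->
      Acc;
  fold_loop({ok, Key, Value}, Itr, Fun, Acc) ->
      fold_loop(rocksdb:iterator_move(Itr, next), Itr, Fun, Fun({Key, Value}, Acc)).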

fold(DBHandle, CFHandle, Fun, AccIn, ReadOpts)

-spec fold(DBHandle, CFHandle, Fun, AccIn, ReadOpts) -> AccOut
              when
                  DBHandle :: db_handle(),
                  CFHandle :: cf_handle(),
                  Fun :: fold_fun(),
                  AccIn :: any(),
                  ReadOpts :: read_options(),
                  AccOut :: any().

Calls Fun(Elem, AccIn) on successive elements in the specified column family. Otherwise identical to fold/4.

this function is deprecated and will be removed in the next major release. You should use the iterator API instead.

fold_keys(DBHandle, Fun, AccIn, ReadOpts)

-spec fold_keys(DBHandle, Fun, AccIn, ReadOpts) -> AccOut
                   when
                       DBHandle :: db_handle(),
                       Fun :: fold_keys_fun(),
                       AccIn :: any(),
                       ReadOpts :: read_options(),
                       AccOut :: any().

Calls Fun(Elem, AccIn) on successive elements in the default column family starting with AccIn == Acc0. Fun/2 must return a new accumulator which is passed to the next call. The function returns the final value of the accumulator. Acc0 is returned if the default column family is empty.

this function is deprecated and will be removed in the next major release. You should use the iterator API instead.

fold_keys(DBHandle, CFHandle, Fun, AccIn, ReadOpts)

-spec fold_keys(DBHandle, CFHandle, Fun, AccIn, ReadOpts) -> AccOut
                   when
                       DBHandle :: db_handle(),
                       CFHandle :: cf_handle(),
                       Fun :: fold_keys_fun(),
                       AccIn :: any(),
                       ReadOpts :: read_options(),
                       AccOut :: any().

Calls Fun(Elem, AccIn) on successive elements in the specified column family. Otherwise identical to fold_keys/4.

this function is deprecated and will be removed in the next major release. You should use the iterator API instead.

gc_backup_engine(BackupEngine)

-spec gc_backup_engine(backup_engine()) -> ok.

Deletes all the files that are no longer needed. It performs a full scan of the files/ directory and deletes every file that is not referenced.

get(DBHandle, Key, ReadOpts)

-spec get(DBHandle, Key, ReadOpts) -> Res
             when
                 DBHandle :: db_handle(),
                 Key :: binary(),
                 ReadOpts :: read_options(),
                 Res :: {ok, binary()} | not_found | {error, {corruption, string()}} | {error, any()}.

Retrieve a key/value pair in the default column family
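
A round-trip sketch, assuming Db comes from rocksdb:open/2:

    ok = rocksdb:put(Db, <<"k">>, <<"v">>, []),
    {ok, <<"v">>} = rocksdb:get(Db, <<"k">>, []),
    not_found = rocksdb:get(Db, <<"missing">>, []).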

get(DBHandle, CFHandle, Key, ReadOpts)

-spec get(DBHandle, CFHandle, Key, ReadOpts) -> Res
             when
                 DBHandle :: db_handle(),
                 CFHandle :: cf_handle(),
                 Key :: binary(),
                 ReadOpts :: read_options(),
                 Res :: {ok, binary()} | not_found | {error, {corruption, string()}} | {error, any()}.

Retrieve a key/value pair in the specified column family

get_approximate_memtable_stats(DBHandle, StartKey, LimitKey)

-spec get_approximate_memtable_stats(DBHandle, StartKey, LimitKey) -> Res
                                        when
                                            DBHandle :: db_handle(),
                                            StartKey :: binary(),
                                            LimitKey :: binary(),
                                            Res ::
                                                {ok,
                                                 {Count :: non_neg_integer(), Size :: non_neg_integer()}}.

The method is similar to GetApproximateSizes, except it returns the approximate number of records in the memtables.

get_approximate_memtable_stats(DBHandle, CFHandle, StartKey, LimitKey)

-spec get_approximate_memtable_stats(DBHandle, CFHandle, StartKey, LimitKey) -> Res
                                        when
                                            DBHandle :: db_handle(),
                                            CFHandle :: cf_handle(),
                                            StartKey :: binary(),
                                            LimitKey :: binary(),
                                            Res ::
                                                {ok,
                                                 {Count :: non_neg_integer(), Size :: non_neg_integer()}}.

get_approximate_sizes(DBHandle, Ranges, IncludeFlags)

-spec get_approximate_sizes(DBHandle, Ranges, IncludeFlags) -> Sizes
                               when
                                   DBHandle :: db_handle(),
                                   Ranges :: [range()],
                                   IncludeFlags :: size_approximation_flag(),
                                   Sizes :: [non_neg_integer()].

For each i in [0, n-1], Sizes[i] contains the approximate file system space used by keys in [Ranges[i].start .. Ranges[i].limit).

Note that the returned sizes measure file system space usage, so if the user data compresses by a factor of ten, the returned sizes will be one-tenth the size of the corresponding user data size.

IncludeFlags defines whether the returned sizes should include the recently written data in the mem-tables (if the mem-table type supports it), data serialized to disk, or both.
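
A sketch, assuming a range() is a {StartKey, LimitKey} pair and include_both is one of the size_approximation_flag() values:

    [Bytes] = rocksdb:get_approximate_sizes(Db, [{<<"a">>, <<"m">>}],
                                            include_both).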

get_approximate_sizes(DBHandle, CFHandle, Ranges, IncludeFlags)

-spec get_approximate_sizes(DBHandle, CFHandle, Ranges, IncludeFlags) -> Sizes
                               when
                                   DBHandle :: db_handle(),
                                   CFHandle :: cf_handle(),
                                   Ranges :: [range()],
                                   IncludeFlags :: size_approximation_flag(),
                                   Sizes :: [non_neg_integer()].

get_backup_info(BackupEngine)

-spec get_backup_info(backup_engine()) -> [backup_info()].

Returns information about the existing backups as a list of backup_info().

get_capacity(Cache)

get_column_family_metadata(DBHandle)

-spec get_column_family_metadata(DBHandle) -> {ok, cf_metadata()} when DBHandle :: db_handle().

Get column family metadata including blob file information.

get_column_family_metadata(DBHandle, CFHandle)

-spec get_column_family_metadata(DBHandle, CFHandle) -> {ok, cf_metadata()}
                                    when DBHandle :: db_handle(), CFHandle :: cf_handle().

Get column family metadata for a specific column family.

get_entity(DBHandle, Key, ReadOpts)

-spec get_entity(DBHandle, Key, ReadOpts) -> Res
                    when
                        DBHandle :: db_handle(),
                        Key :: binary(),
                        ReadOpts :: read_options(),
                        Res :: {ok, [{binary(), binary()}]} | not_found | {error, any()}.

Retrieve an entity (wide-column key) from the default column family. Returns the columns as a proplist of {Name, Value} tuples.

get_entity(DBHandle, CFHandle, Key, ReadOpts)

-spec get_entity(DBHandle, CFHandle, Key, ReadOpts) -> Res
                    when
                        DBHandle :: db_handle(),
                        CFHandle :: cf_handle(),
                        Key :: binary(),
                        ReadOpts :: read_options(),
                        Res :: {ok, [{binary(), binary()}]} | not_found | {error, any()}.

Retrieve an entity (wide-column key) from the specified column family.

get_latest_sequence_number(Db)

-spec get_latest_sequence_number(Db :: db_handle()) -> Seq :: non_neg_integer().

The sequence number of the most recent transaction.

get_pinned_usage(Cache)

get_property(DBHandle, Property)

-spec get_property(DBHandle :: db_handle(), Property :: binary()) -> {ok, any()} | {error, any()}.

Return the value of the RocksDB internal property Property for the default column family.

get_property(DBHandle, CFHandle, Property)

-spec get_property(DBHandle :: db_handle(), CFHandle :: cf_handle(), Property :: binary()) ->
                      {ok, binary()} | {error, any()}.

Return the value of the RocksDB internal property Property for the specified column family.

get_snapshot_sequence(SnapshotHandle)

-spec get_snapshot_sequence(SnapshotHandle :: snapshot_handle()) -> Sequence :: non_neg_integer().

returns Snapshot's sequence number

get_ttl(DBHandle, CFHandle)

-spec get_ttl(DBHandle, CFHandle) -> {ok, integer()} | {error, any()}
                 when DBHandle :: db_handle(), CFHandle :: cf_handle().

Get the current TTL for a column family in a TTL database. Returns the TTL in seconds.

get_usage(Cache)

ingest_external_file(DbHandle, Files, Options)

-spec ingest_external_file(DbHandle, Files, Options) -> Result
                              when
                                  DbHandle :: db_handle(),
                                  Files :: [file:filename_all()],
                                  Options :: [ingest_external_file_option()],
                                  Result :: ok | {error, any()}.

Ingest external SST files into the database.

This function loads one or more external SST files created by sst_file_writer into the database. The files are ingested at the appropriate level in the LSM tree based on their key ranges.

Options:

- move_files: Move files instead of copying (default: false)
- failed_move_fall_back_to_copy: Fall back to copy if move fails (default: true)
- snapshot_consistency: Check snapshot consistency (default: true)
- allow_global_seqno: Allow assigning global sequence numbers (default: true)
- allow_blocking_flush: Allow blocking flush (default: true)
- ingest_behind: Ingest files to the bottommost level (default: false)
- verify_checksums_before_ingest: Verify checksums before ingest (default: true)
- verify_checksums_readahead_size: Readahead size for checksum verification (default: 0)
- verify_file_checksum: Verify the file checksum if present (default: true)
- fail_if_not_bottommost_level: Fail if files don't go to the bottommost level (default: false)
- allow_db_generated_files: Allow files generated by this DB (default: false)
- fill_cache: Fill the block cache on ingest (default: true)
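
A sketch; the file path is illustrative and options are assumed to be given as {Name, Value} tuples:

    ok = rocksdb:ingest_external_file(Db, ["/tmp/bulk.sst"],
                                      [{move_files, true}]).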

ingest_external_file(DbHandle, CfHandle, Files, Options)

-spec ingest_external_file(DbHandle, CfHandle, Files, Options) -> Result
                              when
                                  DbHandle :: db_handle(),
                                  CfHandle :: cf_handle(),
                                  Files :: [file:filename_all()],
                                  Options :: [ingest_external_file_option()],
                                  Result :: ok | {error, any()}.

Ingest external SST files into a specific column family.

Same as ingest_external_file/3 but allows specifying a column family.

is_empty(DBHandle)

-spec is_empty(DBHandle :: db_handle()) -> true | false.

is the database empty

iterator(DBHandle, ReadOpts)

-spec iterator(DBHandle, ReadOpts) -> Res
                  when
                      DBHandle :: db_handle(),
                      ReadOpts :: read_options(),
                      Res :: {ok, itr_handle()} | {error, any()}.

Return an iterator over the contents of the database. The iterator is initially invalid; the caller must call iterator_move/2 on it before use.
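
A minimal full-scan sketch:

    dump_all(Db) ->
        {ok, Itr} = rocksdb:iterator(Db, []),
        dump_loop(Itr, rocksdb:iterator_move(Itr, first)).

    dump_loop(Itr, {ok, Key, Value}) ->
        io:format("~p => ~p~n", [Key, Value]),
        dump_loop(Itr, rocksdb:iterator_move(Itr, next));
    dump_loop(Itr, {error, _}) ->
        rocksdb:iterator_close(Itr).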

iterator(DBHandle, CFHandle, ReadOpts)

-spec iterator(DBHandle, CFHandle, ReadOpts) -> Res
                  when
                      DBHandle :: db_handle(),
                      CFHandle :: cf_handle(),
                      ReadOpts :: read_options(),
                      Res :: {ok, itr_handle()} | {error, any()}.

Return an iterator over the contents of the specified column family. The iterator is initially invalid; the caller must call iterator_move/2 on it before use.

iterator_close(ITRHandle)

-spec iterator_close(ITRHandle) -> ok | {error, _} when ITRHandle :: itr_handle().

Close an iterator

iterator_columns(ITRHandle)

-spec iterator_columns(ITRHandle) -> Res
                          when
                              ITRHandle :: itr_handle(),
                              Res :: {ok, [{binary(), binary()}]} | {error, any()}.

Get the columns of the current iterator entry. Returns the wide columns for the current entry. For entities, returns all columns. For regular key-values, returns a single column with an empty name (the default column) containing the value.

iterator_move(ITRHandle, ITRAction)

-spec iterator_move(ITRHandle, ITRAction) ->
                       {ok, Key :: binary(), Value :: binary()} |
                       {ok, Key :: binary()} |
                       {error, invalid_iterator} |
                       {error, iterator_closed}
                       when ITRHandle :: itr_handle(), ITRAction :: iterator_action().

Move to the specified place

iterator_prepare_value(ITRHandle)

-spec iterator_prepare_value(ITRHandle) -> ok | {error, any()} when ITRHandle :: itr_handle().

Load the blob value for the current iterator position. Use with {allow_unprepared_value, true} to enable efficient key-only scanning with selective value loading.

iterator_refresh(ITRHandle)

-spec iterator_refresh(ITRHandle) -> ok when ITRHandle :: itr_handle().

Refresh iterator

iterators(DBHandle, CFHandle, ReadOpts)

-spec iterators(DBHandle, CFHandle, ReadOpts) -> {ok, itr_handle()} | {error, any()}
                   when DBHandle :: db_handle(), CFHandle :: cf_handle(), ReadOpts :: read_options().

Return an iterator over the contents of the specified column family.

list_column_families(Name, DBOpts)

-spec list_column_families(Name, DBOpts) -> Res
                              when
                                  Name :: file:filename_all(),
                                  DBOpts :: db_options(),
                                  Res :: {ok, [string()]} | {error, any()}.

List column families

mem_env()

merge(DBHandle, Key, Value, WriteOpts)

-spec merge(DBHandle, Key, Value, WriteOpts) -> Res
               when
                   DBHandle :: db_handle(),
                   Key :: binary(),
                   Value :: binary() | {posting_add, binary()} | {posting_delete, binary()},
                   WriteOpts :: write_options(),
                   Res :: ok | {error, any()}.

Merge a key/value pair into the default column family. For posting list operations, Value can be:

- {posting_add, Key} to add a key to the posting list
- {posting_delete, Key} to mark a key as tombstoned
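
A posting-list sketch, assuming the database (or column family) was opened with the posting-list merge operator configured:

    ok = rocksdb:merge(Db, <<"tag:erlang">>, {posting_add, <<"doc1">>}, []),
    ok = rocksdb:merge(Db, <<"tag:erlang">>, {posting_add, <<"doc2">>}, []),
    ok = rocksdb:merge(Db, <<"tag:erlang">>, {posting_delete, <<"doc1">>}, []),
    {ok, Bin} = rocksdb:get(Db, <<"tag:erlang">>, []),
    [<<"doc2">>] = rocksdb:posting_list_keys(Bin).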

merge(DBHandle, CFHandle, Key, Value, WriteOpts)

-spec merge(DBHandle, CFHandle, Key, Value, WriteOpts) -> Res
               when
                   DBHandle :: db_handle(),
                   CFHandle :: cf_handle(),
                   Key :: binary(),
                   Value :: binary() | {posting_add, binary()} | {posting_delete, binary()},
                   WriteOpts :: write_options(),
                   Res :: ok | {error, any()}.

Merge a key/value pair into the specified column family. For posting list operations, Value can be:

- {posting_add, Key} to add a key to the posting list
- {posting_delete, Key} to mark a key as tombstoned

multi_get(DBHandle, Keys, ReadOpts)

-spec multi_get(DBHandle, Keys, ReadOpts) -> Results
                   when
                       DBHandle :: db_handle(),
                       Keys :: [binary()],
                       ReadOpts :: read_options(),
                       Results :: [{ok, binary()} | not_found | {error, any()}].

Retrieve multiple key/value pairs in a single call. Returns a list of results in the same order as the input keys. Each result is either {ok, Value}, not_found, or {error, Reason}. This is more efficient than calling get/3 multiple times.
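
A sketch, assuming <<"k1">> exists and <<"k2">> does not:

    ok = rocksdb:put(Db, <<"k1">>, <<"v1">>, []),
    [{ok, <<"v1">>}, not_found] =
        rocksdb:multi_get(Db, [<<"k1">>, <<"k2">>], []).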

multi_get(DBHandle, CFHandle, Keys, ReadOpts)

-spec multi_get(DBHandle, CFHandle, Keys, ReadOpts) -> Results
                   when
                       DBHandle :: db_handle(),
                       CFHandle :: cf_handle(),
                       Keys :: [binary()],
                       ReadOpts :: read_options(),
                       Results :: [{ok, binary()} | not_found | {error, any()}].

Retrieve multiple key/value pairs from a specific column family. Returns a list of results in the same order as the input keys.

new_cache(Type, Capacity)

-spec new_cache(Type :: cache_type(), Capacity :: non_neg_integer()) -> {ok, cache_handle()}.

Create a new cache.

new_clock_cache(Capacity)

new_env()

-spec new_env() -> {ok, env_handle()}.

return a default db environment

new_env(EnvType)

-spec new_env(EnvType :: env_type()) -> {ok, env_handle()}.

return a db environment

new_lru_cache(Capacity)

new_rate_limiter(RateBytesPerSec, Auto)

create a new rate limiter

new_sst_file_manager(Env)

-spec new_sst_file_manager(env_handle()) -> {ok, sst_file_manager()} | {error, any()}.

create a new SstFileManager with the default options: RateBytesPerSec = 0, MaxTrashDbRatio = 0.25, BytesMaxDeleteChunk = 64 * 1024 * 1024.

new_sst_file_manager(Env, OptionsList)

-spec new_sst_file_manager(Env, OptionsList) -> Result
                              when
                                  Env :: env_handle(),
                                  OptionsList :: [OptionTuple],
                                  OptionTuple ::
                                      {delete_rate_bytes_per_sec, non_neg_integer()} |
                                      {max_trash_db_ratio, float()} |
                                      {bytes_max_delete_chunk, non_neg_integer()},
                                  Result :: {ok, sst_file_manager()} | {error, any()}.

create a new SstFileManager that can be shared among multiple RocksDB instances to track SST files and control their deletion rate.

* Env is an environment resource created using rocksdb:new_env/{0,1}.
* delete_rate_bytes_per_sec: how many bytes should be deleted per second. If this value is set to 1024 (1 KB/sec) and we deleted a file of size 4 KB in 1 second, we will wait for another 3 seconds before deleting other files. Set to 0 to disable deletion rate limiting.
* max_trash_db_ratio: if the trash size constitutes more than this fraction of the total DB size, new files passed to the DeleteScheduler are deleted immediately.
* bytes_max_delete_chunk: if a file to delete is larger than this chunk size, ftruncate the file by this size each time rather than dropping the whole file at once. 0 means always delete the whole file. If the file has more than one linked name, the file is deleted as a whole. Either way, delete_rate_bytes_per_sec is still honored. NOTE: with this option, files already renamed as trash may be partial, so users should not recover them directly without checking.
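
A sketch; how the manager is attached to a database is not shown here, and the db option name {sst_file_manager, Mgr} used in the comment is an assumption:

    {ok, Env} = rocksdb:new_env(),
    {ok, Mgr} = rocksdb:new_sst_file_manager(Env,
                    [{delete_rate_bytes_per_sec, 1024 * 1024},
                     {max_trash_db_ratio, 0.3}]),
    %% pass the manager to each DB via its options (e.g. {sst_file_manager, Mgr},
    %% name assumed) and release it once no DB uses it anymore
    ok = rocksdb:release_sst_file_manager(Mgr).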

new_statistics()

-spec new_statistics() -> {ok, statistics_handle()}.

new_write_buffer_manager(BufferSize)

-spec new_write_buffer_manager(BufferSize :: non_neg_integer()) -> {ok, write_buffer_manager()}.

create a new WriteBufferManager.

new_write_buffer_manager(BufferSize, Cache)

-spec new_write_buffer_manager(BufferSize :: non_neg_integer(), Cache :: cache_handle()) ->
                                  {ok, write_buffer_manager()}.

create a new WriteBufferManager. A WriteBufferManager manages memory allocation for one or more MemTables.

The memory usage of the memtables is reported to this object. The same object can be passed to multiple DBs, and it will track the sum of the sizes of all of them. If the total size of all live memtables of all the DBs exceeds the limit, a flush will be triggered in the next DB to which a write is issued.

If the object is passed to only one DB, the behavior is the same as db_write_buffer_size. When a write_buffer_manager is set, its value overrides db_write_buffer_size.
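
A sketch; the {write_buffer_manager, Wbm} db option name used below is an assumption:

    {ok, Cache} = rocksdb:new_lru_cache(256 * 1024 * 1024),
    {ok, Wbm} = rocksdb:new_write_buffer_manager(128 * 1024 * 1024, Cache),
    %% {write_buffer_manager, Wbm} as a db option is assumed here
    {ok, Db} = rocksdb:open("/tmp/db",
                            [{create_if_missing, true},
                             {write_buffer_manager, Wbm}]).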

next_binary_update(Itr)

next_update(Itr)

open(Name, DBOpts)

-spec open(Name, DBOpts) -> Result
              when
                  Name :: file:filename_all(),
                  DBOpts :: options(),
                  Result :: {ok, db_handle()} | {error, any()}.

Open RocksDB with the default column family

open(Name, DBOpts, CFDescriptors)

-spec open(Name, DBOpts, CFDescriptors) -> {ok, db_handle(), [cf_handle()]} | {error, any()}
              when
                  Name :: file:filename_all(),
                  DBOpts :: db_options(),
                  CFDescriptors :: [cf_descriptor()].

Open RocksDB with the specified column families
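
A sketch; {create_missing_column_families, true} is assumed to be a valid db option, and the path and CF names are illustrative:

    {ok, Db, [_DefaultCf, UsersCf]} =
        rocksdb:open("/tmp/db",
                     [{create_if_missing, true},
                      {create_missing_column_families, true}],
                     [{"default", []}, {"users", []}]),
    ok = rocksdb:put(Db, UsersCf, <<"id:1">>, <<"ada">>, []).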

open_backup_engine(Path)

-spec open_backup_engine(Path :: string()) -> {ok, backup_engine()} | {error, term()}.

open a new backup engine for creating new backups.

open_optimistic_transaction_db(Name, DbOpts)

open_optimistic_transaction_db(Name, DbOpts, CFDescriptors)

open_pessimistic_transaction_db(Name, DbOpts)

-spec open_pessimistic_transaction_db(Name :: file:filename_all(), DbOpts :: db_options()) ->
                                         {ok, db_handle(), [cf_handle()]} | {error, any()}.

open a database with pessimistic transaction support. Pessimistic transactions acquire locks on keys when they are accessed, providing strict serializability at the cost of potential lock contention.

open_pessimistic_transaction_db(Name, DbOpts, CfDescriptors)

-spec open_pessimistic_transaction_db(Name :: file:filename_all(),
                                      DbOpts :: db_options(),
                                      CfDescriptors :: [cf_descriptor()]) ->
                                         {ok, db_handle(), [cf_handle()]} | {error, any()}.

open a database with pessimistic transaction support and column families.

open_readonly(Name, DBOpts)

-spec open_readonly(Name, DBOpts) -> Result
                       when
                           Name :: file:filename_all(),
                           DBOpts :: options(),
                           Result :: {ok, db_handle()} | {error, any()}.

open_readonly(Name, DBOpts, CFDescriptors)

-spec open_readonly(Name, DBOpts, CFDescriptors) -> {ok, db_handle(), [cf_handle()]} | {error, any()}
                       when
                           Name :: file:filename_all(),
                           DBOpts :: db_options(),
                           CFDescriptors :: [cf_descriptor()].

Open read-only RocksDB with the specified column families

open_with_cf(Name, DbOpts, CFDescriptors)

open_with_cf_readonly(Name, DbOpts, CFDescriptors)

open_with_ttl(Name, DBOpts, TTL, ReadOnly)

-spec open_with_ttl(Name, DBOpts, TTL, ReadOnly) -> {ok, db_handle()} | {error, any()}
                       when
                           Name :: file:filename_all(),
                           DBOpts :: db_options(),
                           TTL :: integer(),
                           ReadOnly :: boolean().

Open RocksDB with TTL support. This API should be used to open the db when inserted key-values are meant to be removed from the db in a non-strict TTL amount of time. It guarantees that inserted key-values will remain in the db for at least TTL seconds, and the db will make efforts to remove them as soon as possible after TTL seconds from their insertion.

BEHAVIOUR:

- TTL is accepted in seconds (int32_t).
- A creation timestamp is suffixed to values internally on Put.
- Expired TTL values are deleted in compaction only: (Timestamp + TTL < time_now).
- Get/Iterator may return expired entries (compaction has not run on them yet).
- Different TTLs may be used during different opens. Example: Open1 at t=0 with TTL=4 and insert k1,k2, close at t=2. Open2 at t=3 with TTL=5. Now k1,k2 should be deleted at t>=5.
- ReadOnly=true opens in the usual read-only mode. Compactions will not be triggered (neither manual nor automatic), so no expired entries are removed.
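
A sketch; the path and TTL value are illustrative:

    {ok, Db} = rocksdb:open_with_ttl("/tmp/ttl_db",
                                     [{create_if_missing, true}],
                                     3600,    %% TTL in seconds
                                     false).  %% ReadOnly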

open_with_ttl_cf(Name, DBOpts, CFDescriptors, ReadOnly)

-spec open_with_ttl_cf(Name, DBOpts, CFDescriptors, ReadOnly) ->
                          {ok, db_handle(), [cf_handle()]} | {error, any()}
                          when
                              Name :: file:filename_all(),
                              DBOpts :: db_options(),
                              CFDescriptors ::
                                  [{Name :: string(), CFOpts :: cf_options(), TTL :: integer()}],
                              ReadOnly :: boolean().

Open a RocksDB database with TTL support and multiple column families. Each column family can have its own TTL value.

See also: open_with_ttl/4.

pessimistic_transaction(TransactionDB, WriteOptions)

-spec pessimistic_transaction(TransactionDB :: db_handle(), WriteOptions :: write_options()) ->
                                 {ok, transaction_handle()} | {error, any()}.

create a new pessimistic transaction. Pessimistic transactions use row-level locking with deadlock detection.

pessimistic_transaction(TransactionDB, WriteOptions, TxnOptions)

-spec pessimistic_transaction(TransactionDB :: db_handle(),
                              WriteOptions :: write_options(),
                              TxnOptions :: list()) ->
                                 {ok, transaction_handle()} | {error, any()}.

create a new pessimistic transaction with transaction options. Transaction options include:

- {set_snapshot, boolean()}: acquire a snapshot at start
- {deadlock_detect, boolean()}: enable deadlock detection
- {lock_timeout, integer()}: lock wait timeout in ms
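
A read-modify-write sketch using the functions documented in this section:

    {ok, Db, _Cfs} = rocksdb:open_pessimistic_transaction_db(
                       "/tmp/txn_db", [{create_if_missing, true}]),
    {ok, Txn} = rocksdb:pessimistic_transaction(Db, [], [{lock_timeout, 1000}]),
    %% lock the key, then write and commit
    _ = rocksdb:pessimistic_transaction_get_for_update(Txn, <<"k">>, []),
    ok = rocksdb:pessimistic_transaction_put(Txn, <<"k">>, <<"v">>),
    ok = rocksdb:pessimistic_transaction_commit(Txn),
    ok = rocksdb:release_pessimistic_transaction(Txn).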

pessimistic_transaction_commit(Transaction)

-spec pessimistic_transaction_commit(Transaction :: transaction_handle()) ->
                                        ok | {error, busy} | {error, expired} | {error, any()}.

commit the transaction atomically.

pessimistic_transaction_delete(Transaction, Key)

-spec pessimistic_transaction_delete(Transaction :: transaction_handle(), Key :: binary()) ->
                                        ok | {error, busy} | {error, timed_out} | {error, any()}.

delete a key from the transaction.

pessimistic_transaction_delete(Transaction, ColumnFamily, Key)

-spec pessimistic_transaction_delete(Transaction :: transaction_handle(),
                                     ColumnFamily :: cf_handle(),
                                     Key :: binary()) ->
                                        ok | {error, busy} | {error, timed_out} | {error, any()}.

delete a key from a column family within the transaction.

pessimistic_transaction_get(Transaction, Key, Opts)

-spec pessimistic_transaction_get(Transaction :: transaction_handle(),
                                  Key :: binary(),
                                  Opts :: read_options()) ->
                                     {ok, binary()} | not_found | {error, any()}.

get a value from the transaction (read without acquiring lock).

pessimistic_transaction_get(Transaction, ColumnFamily, Key, Opts)

-spec pessimistic_transaction_get(Transaction :: transaction_handle(),
                                  ColumnFamily :: cf_handle(),
                                  Key :: binary(),
                                  Opts :: read_options()) ->
                                     {ok, binary()} | not_found | {error, any()}.

get a value from a column family within the transaction.

pessimistic_transaction_get_for_update(Transaction, Key, Opts)

-spec pessimistic_transaction_get_for_update(Transaction :: transaction_handle(),
                                             Key :: binary(),
                                             Opts :: read_options()) ->
                                                {ok, binary()} |
                                                not_found |
                                                {error, busy} |
                                                {error, timed_out} |
                                                {error, any()}.

get a value and acquire an exclusive lock on the key. This is useful for read-modify-write patterns.

pessimistic_transaction_get_for_update(Transaction, ColumnFamily, Key, Opts)

-spec pessimistic_transaction_get_for_update(Transaction :: transaction_handle(),
                                             ColumnFamily :: cf_handle(),
                                             Key :: binary(),
                                             Opts :: read_options()) ->
                                                {ok, binary()} |
                                                not_found |
                                                {error, busy} |
                                                {error, timed_out} |
                                                {error, any()}.

get a value from a column family and acquire an exclusive lock.

pessimistic_transaction_get_id(Transaction)

-spec pessimistic_transaction_get_id(Transaction :: transaction_handle()) -> {ok, non_neg_integer()}.

get the unique ID of a pessimistic transaction. This ID can be used to identify the transaction in deadlock detection and waiting transaction lists.

pessimistic_transaction_get_waiting_txns(Transaction)

-spec pessimistic_transaction_get_waiting_txns(Transaction :: transaction_handle()) ->
                                                  {ok,
                                                   #{column_family_id := non_neg_integer(),
                                                     key := binary(),
                                                     waiting_txns := [non_neg_integer()]}}.

get information about transactions this transaction is waiting on. Returns a map with:

- column_family_id: the column family ID of the key being waited on
- key: the key being waited on (binary)
- waiting_txns: list of transaction IDs that hold locks this transaction needs

If the transaction is not currently waiting, returns an empty waiting_txns list.

pessimistic_transaction_iterator(TransactionHandle, ReadOpts)

-spec pessimistic_transaction_iterator(TransactionHandle :: transaction_handle(),
                                       ReadOpts :: read_options()) ->
                                          {ok, itr_handle()} | {error, any()}.

create an iterator over the transaction's view of the database.

pessimistic_transaction_iterator(TransactionHandle, CFHandle, ReadOpts)

-spec pessimistic_transaction_iterator(TransactionHandle :: transaction_handle(),
                                       CFHandle :: cf_handle(),
                                       ReadOpts :: read_options()) ->
                                          {ok, itr_handle()} | {error, any()}.

create an iterator over a column family within the transaction.

pessimistic_transaction_multi_get(Transaction, Keys, Opts)

-spec pessimistic_transaction_multi_get(Transaction :: transaction_handle(),
                                        Keys :: [binary()],
                                        Opts :: read_options()) ->
                                           [{ok, binary()} | not_found | {error, any()}].

batch get multiple values within a pessimistic transaction. Returns a list of results in the same order as the input keys. This does not acquire locks on the keys.

pessimistic_transaction_multi_get(Transaction, ColumnFamily, Keys, Opts)

-spec pessimistic_transaction_multi_get(Transaction :: transaction_handle(),
                                        ColumnFamily :: cf_handle(),
                                        Keys :: [binary()],
                                        Opts :: read_options()) ->
                                           [{ok, binary()} | not_found | {error, any()}].

like pessimistic_transaction_multi_get/3 but apply to a column family

pessimistic_transaction_multi_get_for_update(Transaction, Keys, Opts)

-spec pessimistic_transaction_multi_get_for_update(Transaction :: transaction_handle(),
                                                   Keys :: [binary()],
                                                   Opts :: read_options()) ->
                                                      [{ok, binary()} |
                                                       not_found |
                                                       {error, busy} |
                                                       {error, timed_out} |
                                                       {error, any()}].

batch get multiple values and acquire exclusive locks on all keys. This is useful for read-modify-write patterns on multiple keys.

pessimistic_transaction_multi_get_for_update(Transaction, ColumnFamily, Keys, Opts)

-spec pessimistic_transaction_multi_get_for_update(Transaction :: transaction_handle(),
                                                   ColumnFamily :: cf_handle(),
                                                   Keys :: [binary()],
                                                   Opts :: read_options()) ->
                                                      [{ok, binary()} |
                                                       not_found |
                                                       {error, busy} |
                                                       {error, timed_out} |
                                                       {error, any()}].

like pessimistic_transaction_multi_get_for_update/3 but apply to a column family

pessimistic_transaction_pop_savepoint(Transaction)

-spec pessimistic_transaction_pop_savepoint(Transaction :: transaction_handle()) -> ok | {error, any()}.

pop the most recent savepoint without rolling back. The savepoint is simply discarded.

pessimistic_transaction_put(Transaction, Key, Value)

-spec pessimistic_transaction_put(Transaction :: transaction_handle(),
                                  Key :: binary(),
                                  Value :: binary()) ->
                                     ok | {error, busy} | {error, timed_out} | {error, any()}.

put a key-value pair in the transaction.

pessimistic_transaction_put(Transaction, ColumnFamily, Key, Value)

-spec pessimistic_transaction_put(Transaction :: transaction_handle(),
                                  ColumnFamily :: cf_handle(),
                                  Key :: binary(),
                                  Value :: binary()) ->
                                     ok | {error, busy} | {error, timed_out} | {error, any()}.

put a key-value pair in a column family within the transaction.

pessimistic_transaction_rollback(Transaction)

-spec pessimistic_transaction_rollback(Transaction :: transaction_handle()) -> ok | {error, any()}.

rollback the transaction, discarding all changes.

pessimistic_transaction_rollback_to_savepoint(Transaction)

-spec pessimistic_transaction_rollback_to_savepoint(Transaction :: transaction_handle()) ->
                                                       ok | {error, any()}.

rollback a pessimistic transaction to the most recent savepoint. All operations since the last call to pessimistic_transaction_set_savepoint/1 are undone and the savepoint is removed.

pessimistic_transaction_set_savepoint(Transaction)

-spec pessimistic_transaction_set_savepoint(Transaction :: transaction_handle()) -> ok.

set a savepoint in a pessimistic transaction. Use pessimistic_transaction_rollback_to_savepoint/1 to rollback to this point.
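
A sketch, assuming Txn operates on an open pessimistic transaction database:

    {ok, Txn} = rocksdb:pessimistic_transaction(Db, []),
    ok = rocksdb:pessimistic_transaction_put(Txn, <<"a">>, <<"1">>),
    ok = rocksdb:pessimistic_transaction_set_savepoint(Txn),
    ok = rocksdb:pessimistic_transaction_put(Txn, <<"b">>, <<"2">>),
    %% undo the write to <<"b">> while keeping the write to <<"a">>
    ok = rocksdb:pessimistic_transaction_rollback_to_savepoint(Txn),
    ok = rocksdb:pessimistic_transaction_commit(Txn).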

posting_list_bitmap_contains(Bin, Key)

-spec posting_list_bitmap_contains(binary(), binary()) -> boolean().

Fast bitmap-based contains check. Uses hash lookup for V2 format - may have rare false positives. Use posting_list_contains/2 for exact checks.

posting_list_contains(Bin, Key)

-spec posting_list_contains(binary(), binary()) -> boolean().

Check if a key is active (exists and not tombstoned). This is a NIF function for efficiency.

posting_list_count(Bin)

-spec posting_list_count(binary()) -> non_neg_integer().

Count the number of active keys (not tombstoned). This is a NIF function for efficiency.

posting_list_decode(_)

-spec posting_list_decode(binary()) -> [posting_entry()].

Decode a posting list binary to a list of entries. Returns all entries including tombstones, in order of appearance.

posting_list_difference(Bin1, Bin2)

-spec posting_list_difference(binary(), binary()) -> binary().

Compute difference of two posting lists (Bin1 - Bin2). Returns keys that are in Bin1 but not in Bin2.

posting_list_find(Bin, Key)

-spec posting_list_find(binary(), binary()) -> {ok, boolean()} | not_found.

Find a key in the posting list. Returns {ok, IsTombstone} if found, or not_found if not present. This is a NIF function for efficiency.

posting_list_fold(Fun, Acc, Bin)

-spec posting_list_fold(Fun, Acc, binary()) -> Acc
                           when
                               Fun :: fun((Key :: binary(), IsTombstone :: boolean(), Acc) -> Acc),
                               Acc :: term().

Fold over all entries in a posting list (including tombstones).

posting_list_intersect_all(Lists)

-spec posting_list_intersect_all([binary()]) -> binary().

Intersect multiple posting lists efficiently. Processes lists from smallest to largest for optimal performance.

posting_list_intersection(Bin1, Bin2)

-spec posting_list_intersection(binary(), binary()) -> binary().

Compute intersection of two posting lists. Returns a new V2 posting list containing only keys present in both inputs.

posting_list_intersection_count(Bin1, Bin2)

-spec posting_list_intersection_count(binary(), binary()) -> non_neg_integer().

Fast intersection count using roaring bitmap when available. For V2 posting lists, uses bitmap cardinality for O(1) performance.

posting_list_keys(Bin)

-spec posting_list_keys(binary()) -> [binary()].

Get list of active keys (deduplicated, tombstones filtered out). This is a NIF function for efficiency.

posting_list_to_map(Bin)

-spec posting_list_to_map(binary()) -> #{binary() => active | tombstone}.

Convert posting list to a map of key => active | tombstone. This is a NIF function for efficiency.

posting_list_union(Bin1, Bin2)

-spec posting_list_union(binary(), binary()) -> binary().

Compute union of two posting lists. Returns a new V2 posting list containing all keys from both inputs.

posting_list_version(Bin)

-spec posting_list_version(binary()) -> 1 | 2.

Get the format version of a posting list binary. Returns 1 for V1 (legacy) format, 2 for V2 (sorted with roaring bitmap).

postings_bitmap_contains(Postings, Key)

-spec postings_bitmap_contains(reference(), binary()) -> boolean().

Check if key exists in postings resource (bitmap hash lookup). O(1) lookup but may have rare false positives due to hash collisions.

postings_contains(Postings, Key)

-spec postings_contains(reference(), binary()) -> boolean().

Check if key exists in postings resource (exact match). O(log n) lookup using sorted set.

postings_count(Postings)

-spec postings_count(reference()) -> non_neg_integer().

Get count of keys in postings resource.

postings_difference(A, B)

-spec postings_difference(binary() | reference(), binary() | reference()) ->
                             {ok, reference()} | {error, term()}.

Difference of two postings (A - B). Accepts binary or resource, returns resource.

postings_intersect_all(List)

-spec postings_intersect_all([binary() | reference()]) -> {ok, reference()} | {error, term()}.

Intersect multiple postings efficiently.

postings_intersection(A, B)

-spec postings_intersection(binary() | reference(), binary() | reference()) ->
                               {ok, reference()} | {error, term()}.

Intersect two postings (AND). Accepts binary or resource, returns resource.

postings_intersection_count(A, B)

-spec postings_intersection_count(binary() | reference(), binary() | reference()) -> non_neg_integer().

Fast intersection count using bitmap.

postings_keys(Postings)

-spec postings_keys(reference()) -> [binary()].

Get all keys from postings resource (sorted).

postings_open(Bin)

-spec postings_open(binary()) -> {ok, reference()} | {error, term()}.

Open/parse posting list binary into a resource for fast repeated lookups. Use this when you need to perform multiple contains checks on the same posting list. The resource holds parsed keys and bitmap for fast lookups.
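
A sketch, assuming Bin is a posting-list binary previously read with rocksdb:get/3:

    {ok, Postings} = rocksdb:postings_open(Bin),
    true = rocksdb:postings_contains(Postings, <<"doc2">>),
    Count = rocksdb:postings_count(Postings),
    Keys = rocksdb:postings_keys(Postings).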

postings_to_binary(Postings)

-spec postings_to_binary(reference()) -> binary().

Convert postings resource back to binary (V2 format).

postings_union(A, B)

-spec postings_union(binary() | reference(), binary() | reference()) ->
                        {ok, reference()} | {error, term()}.

Union two postings (OR). Accepts binary or resource, returns resource.

purge_old_backup(BackupEngine, NumBackupToKeep)

-spec purge_old_backup(BackupEngine :: backup_engine(), NumBackupToKeep :: non_neg_integer()) ->
                          ok | {error, any()}.

deletes old backups, keeping the latest NumBackupToKeep alive

put(DBHandle, Key, Value, WriteOpts)

-spec put(DBHandle, Key, Value, WriteOpts) -> Res
             when
                 DBHandle :: db_handle(),
                 Key :: binary(),
                 Value :: binary(),
                 WriteOpts :: write_options(),
                 Res :: ok | {error, any()}.

Put a key/value pair into the default column family

put(DBHandle, CFHandle, Key, Value, WriteOpts)

-spec put(DBHandle, CFHandle, Key, Value, WriteOpts) -> Res
             when
                 DBHandle :: db_handle(),
                 CFHandle :: cf_handle(),
                 Key :: binary(),
                 Value :: binary(),
                 WriteOpts :: write_options(),
                 Res :: ok | {error, any()}.

Put a key/value pair into the specified column family

put_entity(DBHandle, Key, Columns, WriteOpts)

-spec put_entity(DBHandle, Key, Columns, WriteOpts) -> Res
                    when
                        DBHandle :: db_handle(),
                        Key :: binary(),
                        Columns :: [{binary(), binary()}],
                        WriteOpts :: write_options(),
                        Res :: ok | {error, any()}.

Put an entity (wide-column key) in the default column family. An entity is a key with multiple named columns stored as a proplist.
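
A sketch; the key and column names are illustrative:

    ok = rocksdb:put_entity(Db, <<"user:1">>,
                            [{<<"name">>, <<"ada">>},
                             {<<"email">>, <<"ada@example.com">>}],
                            []),
    {ok, Columns} = rocksdb:get_entity(Db, <<"user:1">>, []).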

put_entity(DBHandle, CFHandle, Key, Columns, WriteOpts)

-spec put_entity(DBHandle, CFHandle, Key, Columns, WriteOpts) -> Res
                    when
                        DBHandle :: db_handle(),
                        CFHandle :: cf_handle(),
                        Key :: binary(),
                        Columns :: [{binary(), binary()}],
                        WriteOpts :: write_options(),
                        Res :: ok | {error, any()}.

Put an entity (wide-column key) in the specified column family.

release_batch(Batch)

-spec release_batch(Batch :: batch_handle()) -> ok.

release_cache(Cache)

release the cache

release_pessimistic_transaction(TransactionHandle)

-spec release_pessimistic_transaction(TransactionHandle :: transaction_handle()) -> ok.

release a pessimistic transaction.

release_rate_limiter(Limiter)

release the limiter

release_snapshot(SnapshotHandle)

-spec release_snapshot(SnapshotHandle :: snapshot_handle()) -> ok | {error, any()}.

release a snapshot

release_sst_file_manager(SstFileManager)

-spec release_sst_file_manager(sst_file_manager()) -> ok.

release the SstFileManager

release_sst_file_reader(SstFileReader)

-spec release_sst_file_reader(SstFileReader) -> ok when SstFileReader :: sst_file_reader().

Release the SST file reader resource.

Closes the SST file and releases all associated resources. Any iterators created from this reader will become invalid.

release_sst_file_writer(SstFileWriter)

-spec release_sst_file_writer(SstFileWriter) -> ok when SstFileWriter :: sst_file_writer().

Release the SST file writer resource.

Note: If finish/1,2 was not called, the partially written file may remain.

release_statistics(Statistics)

-spec release_statistics(statistics_handle()) -> ok.

release the Statistics Handle

release_transaction(TransactionHandle)

-spec release_transaction(TransactionHandle :: transaction_handle()) -> ok.

release a transaction

release_write_buffer_manager(WriteBufferManager)

-spec release_write_buffer_manager(write_buffer_manager()) -> ok.

repair(Name, DBOpts)

-spec repair(Name :: file:filename_all(), DBOpts :: db_options()) -> ok | {error, any()}.

Try to repair as much of the contents of the database as possible. Some data may be lost, so be careful when calling this function

restore_db_from_backup(BackupEngine, BackupId, DbDir)

-spec restore_db_from_backup(BackupEngine, BackupId, DbDir) -> Result
                                when
                                    BackupEngine :: backup_engine(),
                                    BackupId :: non_neg_integer(),
                                    DbDir :: string(),
                                    Result :: ok | {error, any()}.

restore from backup with backup_id

restore_db_from_backup(BackupEngine, BackupId, DbDir, WalDir)

-spec restore_db_from_backup(BackupEngine, BackupId, DbDir, WalDir) -> Result
                                when
                                    BackupEngine :: backup_engine(),
                                    BackupId :: non_neg_integer(),
                                    DbDir :: string(),
                                    WalDir :: string(),
                                    Result :: ok | {error, any()}.

restore from backup with backup_id

restore_db_from_latest_backup(BackupEngine, DbDir)

-spec restore_db_from_latest_backup(BackupEngine, DbDir) -> Result
                                       when
                                           BackupEngine :: backup_engine(),
                                           DbDir :: string(),
                                           Result :: ok | {error, any()}.

restore from the latest backup

restore_db_from_latest_backup(BackupEngine, DbDir, WalDir)

-spec restore_db_from_latest_backup(BackupEngine, DbDir, WalDir) -> Result
                                       when
                                           BackupEngine :: backup_engine(),
                                           DbDir :: string(),
                                           WalDir :: string(),
                                           Result :: ok | {error, any()}.

restore from the latest backup

set_capacity(Cache, Capacity)

-spec set_capacity(Cache :: cache_handle(), Capacity :: non_neg_integer()) -> ok.

sets the maximum configured capacity of the cache. When the new capacity is less than the old capacity and the existing usage is greater than the new capacity, the implementation will do its best to purge the released entries from the cache in order to lower the usage

set_db_background_threads(DB, N)

-spec set_db_background_threads(DB :: db_handle(), N :: non_neg_integer()) -> ok.

set background threads of a database

set_db_background_threads(DB, N, Priority)

-spec set_db_background_threads(DB :: db_handle(), N :: non_neg_integer(), Priority :: env_priority()) ->
                                   ok.

set the number of database background threads in the low- or high-priority thread pool of its environment. Flush threads are in the HIGH priority pool, while compaction threads are in the LOW priority pool. Use this call to increase the number of threads in the chosen pool.

set_env_background_threads(Env, N)

-spec set_env_background_threads(Env :: env_handle(), N :: non_neg_integer()) -> ok.

set background threads of an environment

set_env_background_threads(Env, N, Priority)

-spec set_env_background_threads(Env :: env_handle(),
                                 N :: non_neg_integer(),
                                 Priority :: env_priority()) ->
                                    ok.

set the number of background threads in the low- or high-priority thread pool of an environment. Flush threads are in the HIGH priority pool, while compaction threads are in the LOW priority pool. Use this call to increase the number of threads in the chosen pool.

set_stats_level(StatisticsHandle, StatsLevel)

-spec set_stats_level(statistics_handle(), stats_level()) -> ok.

set_strict_capacity_limit(Cache, StrictCapacityLimit)

-spec set_strict_capacity_limit(Cache :: cache_handle(), StrictCapacityLimit :: boolean()) -> ok.

sets the strict_capacity_limit flag of the cache. If the flag is set to true, inserting into the cache will fail when not enough capacity can be freed.

set_ttl(DBHandle, TTL)

-spec set_ttl(DBHandle, TTL) -> ok | {error, any()} when DBHandle :: db_handle(), TTL :: integer().

Set the default TTL for a TTL database. The TTL is specified in seconds.

set_ttl(DBHandle, CFHandle, TTL)

-spec set_ttl(DBHandle, CFHandle, TTL) -> ok | {error, any()}
                 when DBHandle :: db_handle(), CFHandle :: cf_handle(), TTL :: integer().

Set the TTL for a specific column family in a TTL database. The TTL is specified in seconds.

single_delete(DBHandle, Key, WriteOpts)

-spec single_delete(DBHandle, Key, WriteOpts) -> ok | {error, any()}
                       when DBHandle :: db_handle(), Key :: binary(), WriteOpts :: write_options().

Remove the database entry for "key". Requires that the key exists and was not overwritten. Returns ok on success and {error, Reason} on failure. It is not an error if "key" did not exist in the database.

If a key is overwritten (by calling Put() multiple times), then the result of calling SingleDelete() on this key is undefined. SingleDelete() only behaves correctly if there has been only one Put() for this key since the previous call to SingleDelete() for this key.

This feature is currently an experimental performance optimization for a very specific workload. It is up to the caller to ensure that SingleDelete is only used for a key that is not deleted using Delete() or written using Merge(). Mixing SingleDelete operations with Deletes can result in undefined behavior.

Note: consider setting options {sync, true}.
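
A sketch of the intended write-once/delete-once pattern:

    ok = rocksdb:put(Db, <<"once">>, <<"v">>, []),
    %% the key was written exactly once since the last single_delete
    ok = rocksdb:single_delete(Db, <<"once">>, [{sync, true}]).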

single_delete(DBHandle, CFHandle, Key, WriteOpts)

-spec single_delete(DBHandle, CFHandle, Key, WriteOpts) -> Res
                       when
                           DBHandle :: db_handle(),
                           CFHandle :: cf_handle(),
                           Key :: binary(),
                           WriteOpts :: write_options(),
                           Res :: ok | {error, any()}.

like single_delete/3 but on the specified column family

snapshot(DbHandle)

-spec snapshot(DbHandle :: db_handle()) -> {ok, snapshot_handle()} | {error, any()}.

return a database snapshot. Snapshots provide consistent read-only views over the entire state of the key-value store.

sst_file_manager_flag(SstFileManager, Flag, Value)

-spec sst_file_manager_flag(SstFileManager, Flag, Value) -> Result
                               when
                                   SstFileManager :: sst_file_manager(),
                                   Flag ::
                                       max_allowed_space_usage | compaction_buffer_size |
                                       delete_rate_bytes_per_sec | max_trash_db_ratio,
                                   Value :: non_neg_integer() | float(),
                                   Result :: ok.

set certain flags for the SST file manager:

* max_allowed_space_usage: update the maximum allowed space that should be used by RocksDB. If the total size of the SST files exceeds MaxAllowedSpace, writes to RocksDB will fail. Setting MaxAllowedSpace to 0 disables this feature; the maximum allowed space will be infinite (default value).
* compaction_buffer_size: set the amount of buffer room each compaction should be able to leave. In other words, at its maximum disk space consumption, the compaction should still leave compaction_buffer_size available on the disk so that other background functions may continue, such as logging and flushing.
* delete_rate_bytes_per_sec: update the delete rate limit in bytes per second. Zero means disable delete rate limiting and delete files immediately.
* max_trash_db_ratio: update the trash/DB size ratio above which new files will be deleted immediately (float).

sst_file_manager_info(SstFileManager)

-spec sst_file_manager_info(SstFileManager) -> InfoList
                               when
                                   SstFileManager :: sst_file_manager(),
                                   InfoList :: [InfoTuple],
                                   InfoTuple ::
                                       {total_size, non_neg_integer()} |
                                       {delete_rate_bytes_per_sec, non_neg_integer()} |
                                       {max_trash_db_ratio, float()} |
                                       {total_trash_size, non_neg_integer()} |
                                       {is_max_allowed_space_reached, boolean()} |
                                       {max_allowed_space_reached_including_compactions, boolean()}.

return information about an SST file manager as a list of tuples:

* {total_size, Int >= 0}: total size of all tracked files
* {delete_rate_bytes_per_sec, Int >= 0}: delete rate limit in bytes per second
* {max_trash_db_ratio, Float >= 0}: trash/DB size ratio above which new files will be deleted immediately
* {total_trash_size, Int >= 0}: total size of trash files
* {is_max_allowed_space_reached, Boolean}: true if the total size of the SST files exceeded the maximum allowed space usage
* {max_allowed_space_reached_including_compactions, Boolean}: true if the total size of the SST files, plus the estimated size of ongoing compactions, exceeds the maximum allowed space usage

sst_file_manager_info(SstFileManager, Item)

-spec sst_file_manager_info(SstFileManager, Item) -> Value
                               when
                                   SstFileManager :: sst_file_manager(),
                                   Item ::
                                       total_size | delete_rate_bytes_per_sec | max_trash_db_ratio |
                                       total_trash_size | is_max_allowed_space_reached |
                                       max_allowed_space_reached_including_compactions,
                                   Value :: term().

return the information associated with Item for the SST file manager SstFileManager.

sst_file_manager_tracked_files(SstFileManager)

-spec sst_file_manager_tracked_files(SstFileManager) -> [{binary(), non_neg_integer()}]
                                        when SstFileManager :: sst_file_manager().

Returns a list of all SST files being tracked and their sizes. Each element is a tuple of {FilePath, Size} where FilePath is a binary and Size is the file size in bytes.

sst_file_reader_get_table_properties(SstFileReader)

-spec sst_file_reader_get_table_properties(SstFileReader) -> Result
                                              when
                                                  SstFileReader :: sst_file_reader(),
                                                  Result :: {ok, table_properties()} | {error, any()}.

Get the table properties of the SST file.

Returns a map containing metadata about the SST file, including:

- data_size: size of the data blocks in bytes
- index_size: size of the index blocks in bytes
- filter_size: size of the filter block (if any)
- num_entries: number of key-value entries
- num_deletions: number of delete tombstones
- compression_name: name of the compression algorithm used
- creation_time: Unix timestamp when the file was created

and many more properties.

sst_file_reader_iterator(SstFileReader, Options)

-spec sst_file_reader_iterator(SstFileReader, Options) -> Result
                                  when
                                      SstFileReader :: sst_file_reader(),
                                      Options :: read_options(),
                                      Result :: {ok, sst_file_reader_itr()} | {error, any()}.

Create an iterator for reading the contents of the SST file.

Returns an iterator that can be used to scan through all key-value pairs in the SST file. The iterator supports the same movement operations as regular database iterators.

Options:

- verify_checksums: verify block checksums during iteration (default: false)
- fill_cache: fill the block cache during iteration (default: true)

sst_file_reader_iterator_close(Iterator)

-spec sst_file_reader_iterator_close(Iterator) -> ok when Iterator :: sst_file_reader_itr().

Close an SST file reader iterator.

Releases resources associated with the iterator.

sst_file_reader_iterator_move(Iterator, Action)

-spec sst_file_reader_iterator_move(Iterator, Action) -> Result
                                       when
                                           Iterator :: sst_file_reader_itr(),
                                           Action ::
                                               first | last | next | prev |
                                               {seek, binary()} |
                                               {seek_for_prev, binary()},
                                           Result ::
                                               {ok, Key :: binary(), Value :: binary()} | {error, any()}.

Move the SST file reader iterator to a new position.

Supported actions:

- first: move to the first entry
- last: move to the last entry
- next: move to the next entry
- prev: move to the previous entry
- {seek, Key}: seek to the entry at or after Key
- {seek_for_prev, Key}: seek to the entry at or before Key

Returns {ok, Key, Value} if the iterator is valid, or {error, Reason} if not.
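
Putting the reader functions together, a full forward scan might look like this (a sketch; it treats any {error, _} result as end-of-iteration, and leaves release of the reader handle itself to the garbage collector):

scan_sst(Path) ->
    {ok, Reader} = rocksdb:sst_file_reader_open([], Path),
    {ok, Itr} = rocksdb:sst_file_reader_iterator(Reader, []),
    scan_loop(Itr, rocksdb:sst_file_reader_iterator_move(Itr, first), []).

scan_loop(Itr, {ok, Key, Value}, Acc) ->
    Next = rocksdb:sst_file_reader_iterator_move(Itr, next),
    scan_loop(Itr, Next, [{Key, Value} | Acc]);
scan_loop(Itr, {error, _Reason}, Acc) ->
    ok = rocksdb:sst_file_reader_iterator_close(Itr),
    lists:reverse(Acc).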

sst_file_reader_open(Options, FilePath)

-spec sst_file_reader_open(Options, FilePath) -> Result
                              when
                                  Options :: db_options() | cf_options(),
                                  FilePath :: file:filename_all(),
                                  Result :: {ok, sst_file_reader()} | {error, any()}.

Open an SST file for reading.

Creates an SST file reader that allows inspecting the contents of an SST file without loading it into a database. This is useful for offline verification, debugging, or extracting data from SST files.

Options are the same as database options (compression, block_size, etc.)

sst_file_reader_verify_checksum(SstFileReader)

-spec sst_file_reader_verify_checksum(SstFileReader) -> Result
                                         when
                                             SstFileReader :: sst_file_reader(),
                                             Result :: ok | {error, any()}.

Verify the checksums of all blocks in the SST file.

Reads through all data blocks and verifies their checksums. Returns ok if all checksums are valid, or an error if any are corrupted.

sst_file_reader_verify_checksum(SstFileReader, Options)

-spec sst_file_reader_verify_checksum(SstFileReader, Options) -> Result
                                         when
                                             SstFileReader :: sst_file_reader(),
                                             Options :: read_options(),
                                             Result :: ok | {error, any()}.

Verify the checksums of all blocks in the SST file.

Same as verify_checksum/1 but with read options.

sst_file_writer_delete(SstFileWriter, Key)

-spec sst_file_writer_delete(SstFileWriter, Key) -> Result
                                when
                                    SstFileWriter :: sst_file_writer(),
                                    Key :: binary(),
                                    Result :: ok | {error, any()}.

Add a delete tombstone to the SST file.

IMPORTANT: Keys must be added in sorted order according to the comparator.

sst_file_writer_delete_range(SstFileWriter, BeginKey, EndKey)

-spec sst_file_writer_delete_range(SstFileWriter, BeginKey, EndKey) -> Result
                                      when
                                          SstFileWriter :: sst_file_writer(),
                                          BeginKey :: binary(),
                                          EndKey :: binary(),
                                          Result :: ok | {error, any()}.

Add a range delete tombstone to the SST file.

Deletes all keys in the range [BeginKey, EndKey). Range deletions can be added in any order.

sst_file_writer_file_size(SstFileWriter)

-spec sst_file_writer_file_size(SstFileWriter) -> Size
                                   when SstFileWriter :: sst_file_writer(), Size :: non_neg_integer().

Get the current file size during writing.

sst_file_writer_finish(SstFileWriter)

-spec sst_file_writer_finish(SstFileWriter) -> Result
                                when SstFileWriter :: sst_file_writer(), Result :: ok | {error, any()}.

Finalize writing to the SST file and close it.

After this call, the SST file is ready to be ingested into the database.

sst_file_writer_finish(SstFileWriter, _)

-spec sst_file_writer_finish(SstFileWriter, with_file_info) -> Result
                                when
                                    SstFileWriter :: sst_file_writer(),
                                    Result :: {ok, sst_file_info()} | {error, any()}.

Finalize writing to the SST file and return file info.

Returns a map with file metadata including:

- file_path: Path to the created SST file
- smallest_key: Smallest key in the file
- largest_key: Largest key in the file
- file_size: Size of the file in bytes
- num_entries: Number of entries in the file
- sequence_number: Sequence number assigned to keys
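
For example (a sketch; the map keys are assumed to be the atoms listed above):

{ok, Info} = rocksdb:sst_file_writer_finish(Writer, with_file_info),
io:format("wrote ~b entries (~b bytes)~n",
          [maps:get(num_entries, Info), maps:get(file_size, Info)]).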

sst_file_writer_merge(SstFileWriter, Key, Value)

-spec sst_file_writer_merge(SstFileWriter, Key, Value) -> Result
                               when
                                   SstFileWriter :: sst_file_writer(),
                                   Key :: binary(),
                                   Value :: binary(),
                                   Result :: ok | {error, any()}.

Add a merge operation to the SST file.

IMPORTANT: Keys must be added in sorted order according to the comparator.

sst_file_writer_open(Options, FilePath)

-spec sst_file_writer_open(Options, FilePath) -> Result
                              when
                                  Options :: db_options() | cf_options(),
                                  FilePath :: file:filename_all(),
                                  Result :: {ok, sst_file_writer()} | {error, any()}.

Open a new SST file for writing.

Creates an SST file writer that can be used to build SST files externally. Keys must be added in sorted order (according to the comparator). Once finished, the SST file can be ingested into the database using ingest_external_file/3,4.

Options are the same as database options (compression, block_size, etc.)
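
A typical bulk-load workflow builds the file with sorted keys and then ingests it into an open database Db (a sketch; it assumes ingest_external_file/3 takes the database handle, a list of file paths and ingestion options):

{ok, Writer} = rocksdb:sst_file_writer_open([], "/tmp/bulk.sst"),
%% keys must be added in ascending order
ok = rocksdb:sst_file_writer_put(Writer, <<"key1">>, <<"value1">>),
ok = rocksdb:sst_file_writer_put(Writer, <<"key2">>, <<"value2">>),
ok = rocksdb:sst_file_writer_finish(Writer),
ok = rocksdb:ingest_external_file(Db, ["/tmp/bulk.sst"], []).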

sst_file_writer_put(SstFileWriter, Key, Value)

-spec sst_file_writer_put(SstFileWriter, Key, Value) -> Result
                             when
                                 SstFileWriter :: sst_file_writer(),
                                 Key :: binary(),
                                 Value :: binary(),
                                 Result :: ok | {error, any()}.

Add a key-value pair to the SST file.

IMPORTANT: Keys must be added in sorted order according to the comparator. Adding a key that is not greater than the previous key will result in an error.

sst_file_writer_put_entity(SstFileWriter, Key, Columns)

-spec sst_file_writer_put_entity(SstFileWriter, Key, Columns) -> Result
                                    when
                                        SstFileWriter :: sst_file_writer(),
                                        Key :: binary(),
                                        Columns :: [{ColumnName :: binary(), ColumnValue :: binary()}],
                                        Result :: ok | {error, any()}.

Add a wide-column entity to the SST file.

IMPORTANT: Keys must be added in sorted order according to the comparator.
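
For example (a sketch; the key and column names are illustrative):

ok = rocksdb:sst_file_writer_put_entity(Writer, <<"user:1">>,
                                        [{<<"name">>, <<"alice">>},
                                         {<<"city">>, <<"paris">>}]).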

statistics_histogram(Statistics, Histogram)

Get histogram data for a specific statistics histogram. Returns histogram information including median, percentiles, average, etc. For integrated BlobDB, relevant histograms are blob_db_blob_file_write_micros, blob_db_blob_file_read_micros, blob_db_compression_micros, etc.

statistics_info(Statistics)

-spec statistics_info(Statistics) -> InfoList
                         when
                             Statistics :: statistics_handle(),
                             InfoList :: [InfoTuple],
                             InfoTuple :: {stats_level, stats_level()}.
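
return information about a statistics handle as a list of tuples (currently the configured stats_level).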

statistics_ticker(Statistics, Ticker)

Get the count for a specific statistics ticker. Returns the count for tickers such as blob_db_num_put, block_cache_hit, number_keys_written, compact_read_bytes, etc.
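
For example, assuming a statistics handle created with new_statistics/0 and attached to the database through the statistics option (both names are assumptions about this binding):

{ok, Stats} = rocksdb:new_statistics(),
{ok, Db} = rocksdb:open("/tmp/db", [{create_if_missing, true},
                                    {statistics, Stats}]),
%% ... perform some reads and writes ...
Hits = rocksdb:statistics_ticker(Stats, block_cache_hit).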

stats(DBHandle)

-spec stats(DBHandle :: db_handle()) -> {ok, any()} | {error, any()}.

Return the current stats of the default column family. Implemented by calling GetProperty with "rocksdb.stats".

stats(DBHandle, CFHandle)

-spec stats(DBHandle :: db_handle(), CFHandle :: cf_handle()) -> {ok, any()} | {error, any()}.

Return the current stats of the specified column family. Implemented by calling GetProperty with "rocksdb.stats".

stop_backup(BackupEngine)

-spec stop_backup(backup_engine()) -> ok.

sync_wal(DbHandle)

-spec sync_wal(db_handle()) -> ok | {error, term()}.

Sync the WAL. Note that Write() followed by SyncWAL() is not exactly the same as Write() with sync=true: in the latter case the changes won't be visible until the sync is done. Currently this only works if allow_mmap_writes = false in Options.

tlog_iterator(Db, Since)

-spec tlog_iterator(Db :: db_handle(), Since :: non_neg_integer()) -> {ok, Iterator :: term()}.

create a new iterator to retrieve the transaction log since a given sequence number

tlog_iterator_close(Iterator)

-spec tlog_iterator_close(term()) -> ok.

close the transaction log iterator

tlog_next_binary_update(Iterator)

-spec tlog_next_binary_update(Iterator :: term()) ->
                                 {ok, LastSeq :: non_neg_integer(), BinLog :: binary()} |
                                 {error, term()}.

retrieve the next update from the transaction log as a binary; the result can be used with the write_binary_update function.

tlog_next_update(Iterator)

-spec tlog_next_update(Iterator :: term()) ->
                          {ok, LastSeq :: non_neg_integer(), Log :: write_actions(), BinLog :: binary()} |
                          {error, term()}.

like tlog_next_binary_update/1 but also return the batch as a list of operations

transaction(TransactionDB, WriteOptions)

-spec transaction(TransactionDB :: db_handle(), WriteOptions :: write_options()) ->
                     {ok, transaction_handle()}.

create a new transaction. When the database is opened as a transaction or optimistic transaction DB, a user can both read and write to a transaction without committing anything to disk until they decide to do so.
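
A minimal commit flow, assuming Db was opened as a transaction (or optimistic transaction) database:

{ok, Txn} = rocksdb:transaction(Db, []),
ok = rocksdb:transaction_put(Txn, <<"k">>, <<"v">>),
%% the write is visible inside the transaction, but not to other readers
{ok, <<"v">>} = rocksdb:transaction_get(Txn, <<"k">>, []),
ok = rocksdb:transaction_commit(Txn).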

transaction_commit(Transaction)

-spec transaction_commit(Transaction :: transaction_handle()) -> ok | {error, term()}.

commit a transaction's writes to the database atomically

transaction_delete(Transaction, Key)

-spec transaction_delete(Transaction :: transaction_handle(), Key :: binary()) -> ok.

add a delete operation to the transaction

transaction_delete(Transaction, ColumnFamily, Key)

-spec transaction_delete(Transaction :: transaction_handle(),
                         ColumnFamily :: cf_handle(),
                         Key :: binary()) ->
                            ok.

like transaction_delete/2 but apply the operation to a column family

transaction_get(Transaction, Key, Opts)

-spec transaction_get(Transaction :: transaction_handle(), Key :: binary(), Opts :: read_options()) ->
                         Res ::
                             {ok, binary()} |
                             not_found |
                             {error, {corruption, string()}} |
                             {error, any()}.

do a get operation on the transaction, reading the transaction's uncommitted writes as well as the underlying database

transaction_get(Transaction, ColumnFamily, Key, Opts)

-spec transaction_get(Transaction :: transaction_handle(),
                      ColumnFamily :: cf_handle(),
                      Key :: binary(),
                      Opts :: read_options()) ->
                         Res ::
                             {ok, binary()} |
                             not_found |
                             {error, {corruption, string()}} |
                             {error, any()}.

like transaction_get/3 but apply the operation to a column family

transaction_get_for_update(Transaction, Key, Opts)

-spec transaction_get_for_update(Transaction :: transaction_handle(),
                                 Key :: binary(),
                                 Opts :: read_options()) ->
                                    Res ::
                                        {ok, binary()} |
                                        not_found |
                                        {error, busy} |
                                        {error, {corruption, string()}} |
                                        {error, any()}.

get a value and track the key for conflict detection at commit time. For optimistic transactions, this records the key so that if another transaction modifies it before commit, the commit will fail with a conflict.
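
A sketch of a read-modify-write cycle built on this conflict detection (the retry policy is left to the caller):

increment(Db, Key) ->
    {ok, Txn} = rocksdb:transaction(Db, []),
    Current = case rocksdb:transaction_get_for_update(Txn, Key, []) of
                  {ok, Bin} -> binary_to_integer(Bin);
                  not_found -> 0
              end,
    ok = rocksdb:transaction_put(Txn, Key, integer_to_binary(Current + 1)),
    case rocksdb:transaction_commit(Txn) of
        ok ->
            ok;
        {error, _} = Error ->
            %% another transaction modified Key first; discard our writes
            %% and let the caller decide whether to retry
            ok = rocksdb:transaction_rollback(Txn),
            Error
    end.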

transaction_get_for_update(Transaction, ColumnFamily, Key, Opts)

-spec transaction_get_for_update(Transaction :: transaction_handle(),
                                 ColumnFamily :: cf_handle(),
                                 Key :: binary(),
                                 Opts :: read_options()) ->
                                    Res ::
                                        {ok, binary()} |
                                        not_found |
                                        {error, busy} |
                                        {error, {corruption, string()}} |
                                        {error, any()}.

like transaction_get_for_update/3 but apply the operation to a column family

transaction_iterator(TransactionHandle, ReadOpts)

-spec transaction_iterator(TransactionHandle, ReadOpts) -> Res
                              when
                                  TransactionHandle :: transaction_handle(),
                                  ReadOpts :: read_options(),
                                  Res :: {ok, itr_handle()} | {error, any()}.

Return an iterator over the contents of the database and the uncommitted writes and deletes in the current transaction. The iterator is initially invalid: the caller must call iterator_move/2 on it before using it.
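
For example (a sketch; iterator_move/2 and iterator_close/1 are the regular iterator functions):

{ok, Itr} = rocksdb:transaction_iterator(Txn, []),
{ok, FirstKey, _Value} = rocksdb:iterator_move(Itr, first),
ok = rocksdb:iterator_close(Itr).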

transaction_iterator(TransactionHandle, CFHandle, ReadOpts)

-spec transaction_iterator(TransactionHandle, CFHandle, ReadOpts) -> Res
                              when
                                  TransactionHandle :: transaction_handle(),
                                  CFHandle :: cf_handle(),
                                  ReadOpts :: read_options(),
                                  Res :: {ok, itr_handle()} | {error, any()}.

Return an iterator over the contents of the given column family and the uncommitted writes and deletes in the current transaction. The iterator is initially invalid: the caller must call iterator_move/2 on it before using it.

transaction_multi_get(Transaction, Keys, Opts)

-spec transaction_multi_get(Transaction :: transaction_handle(),
                            Keys :: [binary()],
                            Opts :: read_options()) ->
                               [{ok, binary()} | not_found | {error, any()}].

batch get multiple values within a transaction. Returns a list of results in the same order as the input keys.
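
For example, fetching two keys in one call (each result is {ok, Value}, not_found or {error, Reason}):

[Res1, Res2] = rocksdb:transaction_multi_get(Txn, [<<"a">>, <<"b">>], []).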

transaction_multi_get(Transaction, ColumnFamily, Keys, Opts)

-spec transaction_multi_get(Transaction :: transaction_handle(),
                            ColumnFamily :: cf_handle(),
                            Keys :: [binary()],
                            Opts :: read_options()) ->
                               [{ok, binary()} | not_found | {error, any()}].

like transaction_multi_get/3 but apply the operation to a column family

transaction_multi_get_for_update(Transaction, Keys, Opts)

-spec transaction_multi_get_for_update(Transaction :: transaction_handle(),
                                       Keys :: [binary()],
                                       Opts :: read_options()) ->
                                          [{ok, binary()} | not_found | {error, any()}].

batch get multiple values and track keys for conflict detection. For optimistic transactions, this records the keys so that if another transaction modifies any of them before commit, the commit will fail.

transaction_multi_get_for_update(Transaction, ColumnFamily, Keys, Opts)

-spec transaction_multi_get_for_update(Transaction :: transaction_handle(),
                                       ColumnFamily :: cf_handle(),
                                       Keys :: [binary()],
                                       Opts :: read_options()) ->
                                          [{ok, binary()} | not_found | {error, any()}].

like transaction_multi_get_for_update/3 but apply to a column family

transaction_put(Transaction, Key, Value)

-spec transaction_put(Transaction :: transaction_handle(), Key :: binary(), Value :: binary()) ->
                         ok | {error, any()}.

add a put operation to the transaction

transaction_put(Transaction, ColumnFamily, Key, Value)

-spec transaction_put(Transaction :: transaction_handle(),
                      ColumnFamily :: cf_handle(),
                      Key :: binary(),
                      Value :: binary()) ->
                         ok | {error, any()}.

like transaction_put/3 but apply the operation to a column family

transaction_rollback(Transaction)

-spec transaction_rollback(Transaction :: transaction_handle()) -> ok | {error, term()}.

roll back a transaction, discarding all of its uncommitted writes

updates_iterator(DBH, Since)

verify_backup(BackupEngine, BackupId)

-spec verify_backup(BackupEngine :: backup_engine(), BackupId :: non_neg_integer()) ->
                       ok | {error, any()}.

checks that each file exists and that the size of the file matches our expectations. It does not check file checksums.

write(DBHandle, WriteActions, WriteOpts)

-spec write(DBHandle, WriteActions, WriteOpts) -> Res
               when
                   DBHandle :: db_handle(),
                   WriteActions :: write_actions(),
                   WriteOpts :: write_options(),
                   Res :: ok | {error, any()}.

Apply the specified updates to the database. This function will be removed in the next major release; you should use the batch_* API instead.

write_batch(Db, Batch, WriteOptions)

-spec write_batch(Db :: db_handle(), Batch :: batch_handle(), WriteOptions :: write_options()) ->
                     ok | {error, term()}.

write the batch to the database
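
For example (a sketch, assuming batch/0 returns {ok, Batch}; see the batch_* functions for the available operations):

{ok, Batch} = rocksdb:batch(),
ok = rocksdb:batch_put(Batch, <<"k1">>, <<"v1">>),
ok = rocksdb:batch_delete(Batch, <<"k2">>),
ok = rocksdb:write_batch(Db, Batch, []),
ok = rocksdb:close_batch(Batch).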

write_binary_update(DbHandle, BinLog, WriteOptions)

-spec write_binary_update(DbHandle :: db_handle(), BinLog :: binary(), WriteOptions :: write_options()) ->
                             ok | {error, term()}.

apply a set of operations coming from a transaction log to another database. This can be useful to replicate updates to a database running in slave (replica) mode.
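
A sketch of a replication loop built from the tlog_* functions and this one:

replicate_since(Primary, Replica, Seq) ->
    {ok, Itr} = rocksdb:tlog_iterator(Primary, Seq),
    LastSeq = ship_updates(Itr, Replica, Seq),
    ok = rocksdb:tlog_iterator_close(Itr),
    LastSeq.

ship_updates(Itr, Replica, Seq) ->
    case rocksdb:tlog_next_binary_update(Itr) of
        {ok, NextSeq, BinLog} ->
            ok = rocksdb:write_binary_update(Replica, BinLog, []),
            ship_updates(Itr, Replica, NextSeq);
        {error, _} ->
            %% no more updates to ship (or a real error); return the last
            %% sequence number that was applied
            Seq
    end.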

write_buffer_manager_info(WriteBufferManager)

-spec write_buffer_manager_info(WriteBufferManager) -> InfoList
                                   when
                                       WriteBufferManager :: write_buffer_manager(),
                                       InfoList :: [InfoTuple],
                                       InfoTuple ::
                                           {memory_usage, non_neg_integer()} |
                                           {mutable_memtable_memory_usage, non_neg_integer()} |
                                           {buffer_size, non_neg_integer()} |
                                           {enabled, boolean()}.

return information about a Write Buffer Manager as a list of tuples.
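
For example, assuming a manager created with new_write_buffer_manager/2 and a cache from new_cache/2 (both constructor names are assumptions about this binding):

{ok, Cache} = rocksdb:new_cache(lru, 64 * 1024 * 1024),
{ok, Wbm} = rocksdb:new_write_buffer_manager(32 * 1024 * 1024, Cache),
Info = rocksdb:write_buffer_manager_info(Wbm),
BufferSize = proplists:get_value(buffer_size, Info).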

write_buffer_manager_info(WriteBufferManager, Item)

-spec write_buffer_manager_info(WriteBufferManager, Item) -> Value
                                   when
                                       WriteBufferManager :: write_buffer_manager(),
                                       Item ::
                                           memory_usage | mutable_memtable_memory_usage | buffer_size |
                                           enabled,
                                       Value :: term().

return the information associated with Item for a Write Buffer Manager.