Mongo (mongodb-driver v1.5.0)

The main entry point for doing queries. All functions take a topology to run the query on.

Generic options

All operations take these options.

  • :timeout - The maximum time that the caller is allowed to hold the connection's state (ignored when using a run/transaction connection, default: 15_000)
  • :checkout_timeout - The maximum time for checking out a new session and connection (default: 60_000). When the connection pool is exhausted, the function call times out after :checkout_timeout.
  • :pool - The pooling behaviour module to use; this option is required unless the default DBConnection pool is used
  • :pool_timeout - The maximum time to wait for a reply when making a synchronous call to the pool (default: 5_000)
  • :queue - Whether to block waiting in an internal queue for the connection's state (boolean, default: true)
  • :log - A function to log information about a call, either a 1-arity fun, {module, function, args} with DBConnection.LogEntry.t prepended to args or nil. See DBConnection.LogEntry (default: nil)
  • :database - The database to run the operation on
  • :connect_timeout - The maximum timeout for connecting (default: 5_000)

Read options

All read operations that return a cursor take the following options for controlling the behaviour of the cursor.

  • :batch_size - Number of documents to fetch in each batch
  • :limit - Maximum number of documents to fetch with the cursor
  • :read_preference - specifies the rules for selecting a server to query

Write options

All write operations take the following options for controlling the write concern.

  • :w - The number of servers to replicate to before returning from write operations, a 0 value will return immediately, :majority will wait until the operation propagates to a majority of members in the replica set (default: 1)
  • :j - If true, the write operation will only return after it has been committed to the journal (default: false)
  • :wtimeout - If the write concern is not satisfied in the specified interval, the operation returns an error
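
Example (a minimal sketch; top is assumed to be a started topology process, and the collection and field names are illustrative):

# Read with a larger batch size and a longer caller timeout
cursor = Mongo.find(top, "users", %{}, batch_size: 100, timeout: 30_000)

# Write acknowledged by a majority of the replica set and committed to the journal
{:ok, _result} = Mongo.insert_one(top, "users", %{name: "Greta"}, w: :majority, j: true, wtimeout: 5_000)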

Summary

Functions

Executes an admin command against the admin database, always using the primary. Retryable writes are disabled.

Performs an aggregation operation using the aggregation pipeline and returns a Mongo.Stream. Note that code that uses the paginated query results without engaging the Enumerable protocol of Mongo.Stream can leave sessions open and cause resource exhaustion.

Convenience function to execute write and read operations with a causal consistency session.

Issue a database command. If the command has parameters, use a keyword list for the document because the "command key" has to be the first key in the document.

Similar to command/3 but unwraps the result and raises on error.

Returns the count of documents that would match a find/4 query.

Similar to count_documents/4 but unwraps the result and raises on error.

Explicitly creates a collection or view.

Convenience function that creates new indexes in the collection coll. The indexes parameter is a list with all options for creating indexes in MongoDB.

Remove all documents matching the filter from the collection.

Similar to delete_many/4 but unwraps the result and raises on error.

Remove a document matching the filter from the collection.

Similar to delete_one/4 but unwraps the result and raises on error.

Finds the distinct values for a specified field across a collection.

Similar to distinct/5 but unwraps the result and raises on error.

Convenience function that drops the collection coll.

Convenience function that drops the database name.

Convenience function that drops the index name in the collection coll.

Estimate the number of documents in a collection using collection metadata.

Similar to estimated_document_count/3 but unwraps the result and raises on error.

Selects documents in a collection and returns a cursor for the selected documents.

Selects a single document in a collection and returns either a document or nil.

Finds a document and updates it (using atomic modifiers).

Insert multiple documents into the collection.

Similar to insert_many/4 but unwraps the result and raises on error.

Insert a single document into the collection.

Similar to insert_one/4 but unwraps the result and raises on error.

This function is very fundamental.

Returns the limits of the database.

Convenience function that returns a cursor with the names of the indexes.

Returns a cursor to enumerate all indexes

Generates a new BSON.ObjectId.

Sends a ping command to the server.

Changes the name of an existing collection. Specify collection names to rename_collection in the form of a complete namespace (<database>.<collection>).

Replace a single document matching the filter with the new document.

Similar to replace_one/5 but unwraps the result and raises on error.

If retryable reads are enabled, the keyword :read_counter is added with the value 1.

If retryable writes are enabled, the keyword :write_counter is added with the value 1.

Start and link to a database connection process.

Converts the DateTime to a MongoDB timestamp.

Convenience function for running multiple write commands in a transaction.

Performs one or more update operations.

Update all documents matching the filter.

Similar to update_many/5 but unwraps the result and raises on error.

Update a single document matching the filter.

Similar to update_one/5 but unwraps the result and raises on error.

Creates a new UUID.

Converts the binary to a UUID.

Similar to uuid/1 except it will unwrap the error tuple and raise in case of errors.

Creates a change stream cursor for all collections of the database.

Returns the wire version of the database

Types

@type collection() :: String.t()
@type conn() :: DbConnection.Conn
@type cursor() :: Mongo.Cursor.t()
@type result(t) ::
  {:ok, t} | {:error, Mongo.Error.t()} | {:error, Mongo.WriteError.t()}
@type result!(t) :: t

Functions

abort_transaction(reason)

abort_transaction(arg1, reason)

admin_command(topology_pid, cmd)

Executes an admin command against the admin database, always using the primary. Retryable writes are disabled.

Example

iex> cmd = [

configureFailPoint: "failCommand",
mode: "alwaysOn",
data: [errorCode: 6, failCommands: ["commitTransaction"], errorLabels: ["TransientTransactionError"]]

]

iex> {:ok, _doc} = Mongo.admin_command(top, cmd)

aggregate(topology_pid, coll, pipeline, opts \\ [])

@spec aggregate(GenServer.server(), collection(), [BSON.document()], Keyword.t()) ::
  cursor()

Performs an aggregation operation using the aggregation pipeline and returns a Mongo.Stream. Note that code that uses the paginated query results without engaging the Enumerable protocol of Mongo.Stream can leave sessions open and cause resource exhaustion.

Example:

# Results in an open session
%Mongo.Stream{docs: docs} = Mongo.aggregate(@topology, collection, pipeline, opts)
docs |> Enum.map(fn elem -> elem end)

# Results in a closed session via the Enumerable protocol
Mongo.aggregate(@topology, collection, pipeline, opts)
|> Enum.map(fn elem -> elem end)

For all options see Options

causal_consistency(topology_pid, fun, opts \\ [])

Convenience function to execute write and read operations with a causal consistency session.

With causally consistent sessions, MongoDB executes causal operations in an order that respects their causal relationships, and clients observe results that are consistent with the causal relationships.

Example

{:ok, 0} = Mongo.causal_consistency(top, fn ->
    Mongo.delete_many(top, "dogs", %{name: "Greta"}, w: :majority)
    Mongo.count(top, "dogs", %{name: "Greta"}, read_concern: %{level: :majority})
end)

The function creates a causal consistency session and stores it in the process dictionary under the key :session, but you need to set the write and read concerns of each operation to :majority.

command(topology_pid, cmd, opts \\ [])

Issue a database command. If the command has parameters, use a keyword list for the document because the "command key" has to be the first key in the document.
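
Example (a minimal sketch; top is assumed to be a started topology process, and the commands are illustrative):

# A command without parameters
{:ok, _doc} = Mongo.command(top, [ping: 1])

# A command with parameters: the command key (:create) is the first entry of the keyword list
{:ok, _doc} = Mongo.command(top, [create: "logs", capped: true, size: 1_048_576])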

command!(topology_pid, cmd, opts \\ [])

Similar to command/3 but unwraps the result and raises on error.

count_documents(topology_pid, coll, filter, opts \\ [])

@spec count_documents(GenServer.server(), collection(), BSON.document(), Keyword.t()) ::
  result(non_neg_integer())

Returns the count of documents that would match a find/4 query.

Options

  • :limit - Maximum number of documents to fetch with the cursor
  • :skip - Number of documents to skip before returning the first
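
Example (a minimal sketch; collection, filter and options are illustrative):

# Count at most 1_000 active users
{:ok, n} = Mongo.count_documents(top, "users", %{active: true}, limit: 1_000)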

count_documents!(topology_pid, coll, filter, opts \\ [])

@spec count_documents!(GenServer.server(), collection(), BSON.document(), Keyword.t()) ::
  result!(non_neg_integer())

Similar to count_documents/4 but unwraps the result and raises on error.

create(topology_pid, coll, opts \\ [])

@spec create(GenServer.server(), collection(), Keyword.t()) ::
  :ok | {:error, Mongo.Error.t()}

Explicitly creates a collection or view.

create_indexes(topology_pid, coll, indexes, opts \\ [])

@spec create_indexes(GenServer.server(), String.t(), [Keyword.t()], Keyword.t()) ::
  :ok | {:error, Mongo.Error.t()}

Convenience function that creates new indexes in the collection coll. The indexes parameter is a list with all options for creating indexes in MongoDB.

See the createIndexes command documentation for the details of each option.
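
Example (a minimal sketch; the index specifications and names are illustrative, following the createIndexes format):

indexes = [
  [key: [email: 1], name: "email_index", unique: true],
  [key: [last_name: 1, first_name: 1], name: "name_index"]
]

:ok = Mongo.create_indexes(top, "users", indexes)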

delete_many(topology_pid, coll, filter, opts \\ [])

Remove all documents matching the filter from the collection.

delete_many!(topology_pid, coll, filter, opts \\ [])

Similar to delete_many/4 but unwraps the result and raises on error.

delete_one(topology_pid, coll, filter, opts \\ [])

Remove a document matching the filter from the collection.

delete_one!(topology_pid, coll, filter, opts \\ [])

Similar to delete_one/4 but unwraps the result and raises on error.

distinct(topology_pid, coll, field, filter, opts \\ [])

@spec distinct(
  GenServer.server(),
  collection(),
  String.t() | atom(),
  BSON.document(),
  Keyword.t()
) ::
  result([BSON.t()])

Finds the distinct values for a specified field across a collection.

Options

  • :max_time - Specifies a time limit in milliseconds
  • :collation - Optionally specifies a collation to use in MongoDB 3.4 and higher.
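
Example (a minimal sketch; field, filter and options are illustrative):

# Distinct cities of all active users, limited to 2 seconds
{:ok, cities} = Mongo.distinct(top, "users", "city", %{active: true}, max_time: 2_000)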

distinct!(topology_pid, coll, field, filter, opts \\ [])

@spec distinct!(
  GenServer.server(),
  collection(),
  String.t() | atom(),
  BSON.document(),
  Keyword.t()
) ::
  result!([BSON.t()])

Similar to distinct/5 but unwraps the result and raises on error.

drop_collection(topology_pid, coll, opts \\ [])

@spec drop_collection(GenServer.server(), String.t(), Keyword.t()) ::
  :ok | {:error, Mongo.Error.t()}

Convenience function that drops the collection coll.

drop_database(topology_pid, name, opts \\ [])

Convenience function that drops the database name.

drop_index(topology_pid, coll, name, opts \\ [])

@spec drop_index(GenServer.server(), String.t(), String.t(), Keyword.t()) ::
  :ok | {:error, Mongo.Error.t()}

Convenience function that drops the index name in the collection coll.

estimated_document_count(topology_pid, coll, opts)

@spec estimated_document_count(GenServer.server(), collection(), Keyword.t()) ::
  result(non_neg_integer())

Estimate the number of documents in a collection using collection metadata.

estimated_document_count!(topology_pid, coll, opts)

@spec estimated_document_count!(GenServer.server(), collection(), Keyword.t()) ::
  result!(non_neg_integer())

Similar to estimated_document_count/3 but unwraps the result and raises on error.

exec_hello(conn, cmd, opts)

exec_more_to_come(conn, opts)

find(topology_pid, coll, filter, opts \\ [])

@spec find(GenServer.server(), collection(), BSON.document(), Keyword.t()) ::
  cursor() | {:error, term()}

Selects documents in a collection and returns a cursor for the selected documents.

For all options see Options

Use the underscore style: for example, to set the option singleBatch, use single_batch. Another example:

 Mongo.find(top, "jobs", %{}, batch_size: 2)

find_one(topology_pid, coll, filter, opts \\ [])

@spec find_one(GenServer.server(), collection(), BSON.document(), Keyword.t()) ::
  BSON.document() | nil | {:error, any()}

Selects a single document in a collection and returns either a document or nil.

If multiple documents satisfy the query, this method returns the first document according to the natural order which reflects the order of documents on the disk.

For all options see Options

Use the underscore style: for example, to set the option readConcern, use read_concern. Another example:

 Mongo.find_one(top, "jobs", %{}, read_concern: %{level: "local"})

find_one_and_delete(topology_pid, coll, filter, opts \\ [])

@spec find_one_and_delete(
  GenServer.server(),
  collection(),
  BSON.document(),
  Keyword.t()
) ::
  result(BSON.document())

Finds a document and deletes it.

Options

  • :max_time - The maximum amount of time to allow the query to run (in MS)
  • :projection - Limits the fields to return for all matching documents.
  • :sort - Determines which document the operation modifies if the query selects multiple documents.
  • :collation - Optionally specifies a collation to use in MongoDB 3.4 and higher.
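
Example (a minimal sketch; collection, filter and sort are illustrative; if several documents match, :sort determines which one is deleted):

{:ok, doc} = Mongo.find_one_and_delete(top, "jobs", %{status: "done"}, sort: %{inserted_at: 1})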

find_one_and_replace(topology_pid, coll, filter, replacement, opts \\ [])

Finds a document and replaces it.

Options

  • :bypass_document_validation - Allows the write to opt-out of document level validation
  • :max_time - The maximum amount of time to allow the query to run (in MS)
  • :projection - Limits the fields to return for all matching documents.
  • :return_document - Returns the replaced or inserted document rather than the original. Values are :before or :after. (default is :before)
  • :sort - Determines which document the operation modifies if the query selects multiple documents.
  • :upsert - Creates a new document if no document matches the query; otherwise updates the existing document.
  • :collation - Optionally specifies a collation to use in MongoDB 3.4 and higher.
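
Example (a minimal sketch; collection and documents are illustrative; with return_document: :after the new document is returned):

{:ok, _result} =
  Mongo.find_one_and_replace(
    top,
    "users",
    %{name: "Greta"},
    %{name: "Greta", status: "adopted"},
    return_document: :after,
    upsert: true
  )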

find_one_and_update(topology_pid, coll, filter, update, opts \\ [])

Finds a document and updates it (using atomic modifiers).

Options

  • :bypass_document_validation - Allows the write to opt-out of document level validation
  • :max_time - The maximum amount of time to allow the query to run (in MS)
  • :projection - Limits the fields to return for all matching documents.
  • :return_document - Returns the replaced or inserted document rather than the original. Values are :before or :after. (default is :before)
  • :sort - Determines which document the operation modifies if the query selects multiple documents.
  • :upsert - Creates a new document if no document matches the query; otherwise updates the existing document.
  • :collation - Optionally specifies a collation to use in MongoDB 3.4 and higher.
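
Example (a minimal sketch; collection and documents are illustrative; the $inc operator implements a simple counter):

{:ok, _result} =
  Mongo.find_one_and_update(
    top,
    "counters",
    %{_id: "jobs"},
    %{"$inc" => %{seq: 1}},
    return_document: :after,
    upsert: true
  )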

insert_many(topology_pid, coll, docs, opts \\ [])

Insert multiple documents into the collection.

If any of the documents is missing the _id field or it is nil, an ObjectId will be generated and inserted into the document. The ids of all documents will be returned in the result struct.

Options

For more information about options see Options

Examples

Mongo.insert_many(pid, "users", [%{first_name: "John", last_name: "Smith"}, %{first_name: "Jane", last_name: "Doe"}])

insert_many!(topology_pid, coll, docs, opts \\ [])

Similar to insert_many/4 but unwraps the result and raises on error.

insert_one(topology_pid, coll, doc, opts \\ [])

Insert a single document into the collection.

If the document is missing the _id field or it is nil, an ObjectId will be generated, inserted into the document, and returned in the result struct.

Examples

Mongo.insert_one(pid, "users", %{first_name: "John", last_name: "Smith"})

{:ok, session} = Session.start_session(pid)
Session.start_transaction(session)
Mongo.insert_one(pid, "users", %{first_name: "John", last_name: "Smith"}, session: session)
Session.commit_transaction(session)
Session.end_session(pid)

insert_one!(topology_pid, coll, doc, opts \\ [])

Similar to insert_one/4 but unwraps the result and raises on error.

issue_command(topology_pid, cmd, atom, opts)

This function is very fundamental.

@spec limits(GenServer.server()) :: {:ok, BSON.document()} | {:error, Mongo.Error.t()}

Returns the limits of the database.

Example

{:ok, top} = Mongo.start_link(...)
Mongo.limits(top)

{:ok, %{
   compression: [],
   logical_session_timeout: 30,
   max_bson_object_size: 16777216,
   max_message_size_bytes: 48000000,
   max_wire_version: 8,
   max_write_batch_size: 100000,
   read_only: false
}}

list_index_names(topology_pid, coll, opts \\ [])

@spec list_index_names(GenServer.server(), String.t(), Keyword.t()) :: cursor()

Convenience function that returns a cursor with the names of the indexes.

list_indexes(topology_pid, coll, opts \\ [])

@spec list_indexes(GenServer.server(), String.t(), Keyword.t()) :: cursor()

Returns a cursor to enumerate all indexes

@spec object_id() :: BSON.ObjectId.t()

Generates a new BSON.ObjectId.

@spec ping(GenServer.server()) :: result(BSON.document())

Sends a ping command to the server.

rename_collection(topology_pid, collection, to, opts \\ [])

@spec rename_collection(GenServer.server(), collection(), collection(), Keyword.t()) ::
  :ok | {:error, Mongo.Error.t()}

Changes the name of an existing collection. Specify collection names to rename_collection in the form of a complete namespace (<database>.<collection>).
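
Example (a minimal sketch; database and collection names are illustrative):

:ok = Mongo.rename_collection(top, "my_db.old_name", "my_db.new_name")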

replace_one(topology_pid, coll, filter, replacement, opts \\ [])

Replace a single document matching the filter with the new document.

Options

  • :upsert - if set to true creates a new document when no document matches the filter (default: false)
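
Example (a minimal sketch; collection and documents are illustrative):

# Replace the matching document, or insert it if it does not exist yet
{:ok, _result} = Mongo.replace_one(top, "users", %{name: "Greta"}, %{name: "Greta", age: 3}, upsert: true)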

replace_one!(topology_pid, coll, filter, replacement, opts \\ [])

Similar to replace_one/5 but unwraps the result and raises on error.

retryable_reads(opts)

If retryable reads are enabled, the keyword :read_counter is added with the value 1.

In other cases like

  • :retryable_reads is false or nil
  • :session is nil
  • :read_counter is nil

opts is returned unchanged.

Example

iex> Mongo.retryable_reads([retryable_reads: true])
[retryable_reads: true, read_counter: 1]

retryable_writes(opts, bool)

If retryable writes are enabled, the keyword :write_counter is added with the value 1.

In other cases like

  • :retryable_writes is false or nil
  • :session is nil
  • :write_counter is nil

opts is returned unchanged.

Example

iex> Mongo.retryable_writes([retryable_writes: true], true)
[retryable_writes: true, write_counter: 1]

show_collections(topology_pid, opts \\ [])

@spec show_collections(GenServer.server(), Keyword.t()) :: cursor()

Returns a cursor with the names of the collections in the database.

@spec start_link(Keyword.t()) :: {:ok, pid()} | {:error, Mongo.Error.t() | atom()}

Start and link to a database connection process.

Options

  • :database - The database to use (required)
  • :hostname - The host to connect to (required)
  • :port - The port to connect to your server (default: 27017)
  • :url - A mongo connection url. Can be used in place of :hostname and :database (optional)
  • :socket_dir - Connect to MongoDB via UNIX sockets in the given directory. The socket name is derived based on the port. This is the preferred method for configuring sockets and it takes precedence over the hostname. If you are connecting to a socket outside of the MongoDB convention, use :socket instead.
  • :socket - Connect to MongoDB via UNIX sockets in the given path. This option takes precedence over :hostname and :socket_dir.
  • :seeds - A list of host names in the cluster. Can be used in place of :hostname (optional)
  • :username - The User to connect with (optional)
  • :password - The password to connect with (optional)
  • :auth_source - The database to authenticate against
  • :appname - The name of the application using the driver, sent in the MongoDB handshake
  • :set_name - The name of the replica set to connect to (required if connecting to a replica set)
  • :type - a hint of the topology type. See t:initial_type/0 for valid values (default: :unknown)
  • :idle - The idle strategy, :passive to avoid checkin when idle and :active to checkin when idle (default: :passive)
  • :idle_timeout - The idle timeout to ping the database (default: 1_000)
  • :connect_timeout - The maximum timeout for the initial connection (default: 5_000)
  • :backoff_min - The minimum backoff interval (default: 1_000)
  • :backoff_max - The maximum backoff interval (default: 30_000)
  • :backoff_type - The backoff strategy, :stop for no backoff and to stop, :exp for exponential, :rand for random and :rand_exp for random exponential (default: :rand_exp)
  • :after_connect - A function to run on connect using run/3. Either a 1-arity fun, {module, function, args} with DBConnection.t prepended to args, or nil (default: nil)
  • :auth_mechanism - Options for the MongoDB authentication mechanism, currently only supports the :x509 atom as a value
  • :ssl - Set to true if ssl should be used (default: false)
  • :ssl_opts - A list of ssl options, see the ssl docs
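
Example (a minimal sketch; host names, database and replica set name are illustrative):

# Single server, configured via a connection URL
{:ok, top} = Mongo.start_link(url: "mongodb://localhost:27017/my_db")

# Replica set, configured via seeds
{:ok, top} = Mongo.start_link(database: "my_db", seeds: ["host1:27017", "host2:27017"], set_name: "rs_1")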

Error Reasons

  • :single_topology_multiple_hosts - A topology of :single was set but multiple hosts were given
  • :set_name_bad_topology - A :set_name was given but the topology was set to something other than :replica_set_no_primary or :single
@spec timestamp(DateTime.t()) :: BSON.Timestamp.t()

Converts the DateTime to a MongoDB timestamp.

transaction(topology_pid, fun, opts \\ [])

Convenience function for running multiple write commands in a transaction.

In case of TransientTransactionError or UnknownTransactionCommitResult the function will retry the whole transaction or the commit of the transaction. You can specify a timeout (:transaction_retry_timeout_s) to limit the retry time; the default value is 120 seconds. If you don't want to wait that long, call transaction/3 with the option transaction_retry_timeout_s: 10. In that case, after 10 seconds of retrying, the function will return an error.

Example

{:ok, ids} = Mongo.transaction(top, fn ->
  {:ok, %InsertOneResult{:inserted_id => id1}} = Mongo.insert_one(top, "dogs", %{name: "Greta"})
  {:ok, %InsertOneResult{:inserted_id => id2}} = Mongo.insert_one(top, "dogs", %{name: "Waldo"})
  {:ok, %InsertOneResult{:inserted_id => id3}} = Mongo.insert_one(top, "dogs", %{name: "Tom"})
  {:ok, [id1, id2, id3]}
end, transaction_retry_timeout_s: 10)

If transaction/3 is called inside another transaction, the function is simply executed, without wrapping the new transaction call in any way. If there is an error in the inner transaction and the error is rescued, or the inner transaction is aborted (abort_transaction/1), the whole outer transaction is aborted, guaranteeing nothing will be committed.

update(topology_pid, coll, updates, opts \\ [])

Performs one or more update operations.

This function is especially useful for more complex update operations (e.g. upserting multiple documents). For more straightforward use cases you may prefer the higher-level APIs such as update_one/5 and update_many/5.

Each update in updates may be specified using either the short-hand Mongo-style syntax (in reference to their docs) or a long-hand, Elixir-friendly syntax.

See https://docs.mongodb.com/manual/reference/command/update/#update-statements

For example, the long-hand query becomes the short-hand q, and the snake-case array_filters becomes arrayFilters.

Example:

Mongo.update(MongoPool,
  "test_collection",
  query: %{foo: 4},
  update: %{"$set": %{"modified_field": "new_value"}},
  multi: true)

Mongo.update(MongoPool,
  "test_collection",
  query: %{foo: 4},
  update: %{foo: 5, new_field: "new_value"},
  upsert: true)

Mongo.update(MongoPool, "test_collection", [
  [q: %{foo: 24}, update: %{flag: "old"}],
  [q: %{foo: 99}, update: %{luftballons: "yes"}, upsert: true]
])

update_many(topology_pid, coll, filter, update, opts \\ [])

Update all documents matching the filter.

Uses MongoDB update operators to specify the updates. For more information and all options please refer to the MongoDB documentation
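
Example (a minimal sketch; collection, filter and update are illustrative, using the $set operator):

{:ok, _result} = Mongo.update_many(top, "users", %{group: "trial"}, %{"$set" => %{group: "standard"}})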

update_many!(topology_pid, coll, filter, update, opts \\ [])

Similar to update_many/5 but unwraps the result and raises on error.

update_one(topology_pid, coll, filter, update, opts \\ [])

Update a single document matching the filter.

Uses MongoDB update operators to specify the updates. For more information please refer to the MongoDB documentation

Example:

Mongo.update_one(MongoPool,
  "my_test_collection",
  %{"filter_field": "filter_value"},
  %{"$set": %{"modified_field": "new_value"}})

Options

  • :upsert - if set to true creates a new document when no document matches the filter (default: false)

update_one!(topology_pid, coll, filter, update, opts \\ [])

Similar to update_one/5 but unwraps the result and raises on error.

@spec uuid() :: BSON.Binary.t()

Creates a new UUID.

@spec uuid(any()) :: {:ok, BSON.Binary.t()} | {:error, Exception.t()}

Converts the binary to a UUID.

Example

iex> Mongo.uuid("848e90e9-5750-4e0a-ab73-66ac6b328242")
{:ok, #BSON.UUID<848e90e9-5750-4e0a-ab73-66ac6b328242>}

iex> Mongo.uuid("848e90e9-5750-4e0a-ab73-66ac6b328242x")
{:error, %ArgumentError{message: "invalid UUID string"}}

iex> Mongo.uuid("848e90e9-5750-4e0a-ab73-66-c6b328242")
{:error, %ArgumentError{message: "non-alphabet digit found: "-" (byte 45)"}}

Similar to uuid/1 except it will unwrap the error tuple and raise in case of errors.

Example

iex> Mongo.uuid!("848e90e9-5750-4e0a-ab73-66ac6b328242")
#BSON.UUID<848e90e9-5750-4e0a-ab73-66ac6b328242>

iex> Mongo.uuid!("848e90e9-5750-4e0a-ab73-66ac6b328242x")
** (ArgumentError) invalid UUID string
(mongodb_driver 0.6.4) lib/mongo.ex:205: Mongo.uuid!/1

watch_collection(topology_pid, coll, pipeline, on_resume_token \\ nil, opts \\ [])

@spec watch_collection(
  GenServer.server(),
  collection() | 1,
  [BSON.document()],
  (... -> any()) | nil,
  Keyword.t()
) :: cursor()

Creates a change stream cursor on the collection coll.

on_resume_token is a function that takes the new resume token if it changed.

Options

  • :full_document -
  • :max_time - Specifies a time limit in milliseconds. This option is used on getMore commands
  • :batch_size - Specifies the maximum number of documents to return (default: 1)
  • :resume_after - Specifies the logical starting point for the new change stream.
  • :start_at_operation_time - The change stream will only provide changes that occurred at or after the specified timestamp (since 4.0)
  • :start_after - Similar to resumeAfter, this option takes a resume token and starts a new change stream returning the first notification after the token. This will allow users to watch collections that have been dropped and recreated or newly renamed collections without missing any notifications. (since 4.0.7)
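
Example (a minimal sketch; collection and pipeline are illustrative; the callback only prints the resume token, and insert events carry the new document under "fullDocument"):

token_handler = fn token -> IO.inspect(token, label: "resume token") end

Mongo.watch_collection(top, "users", [], token_handler, max_time: 2_000)
|> Stream.map(fn change -> change["fullDocument"] end)
|> Enum.take(1)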

watch_db(topology_pid, pipeline, on_resume_token \\ nil, opts \\ [])

@spec watch_db(
  GenServer.server(),
  [BSON.document()],
  (... -> any()) | nil,
  Keyword.t()
) :: cursor()

Creates a change stream cursor for all collections of the database.

on_resume_token is a function that takes the new resume token if it changed.

Options

  • :full_document -
  • :max_time - Specifies a time limit in milliseconds. This option is used on getMore commands
  • :batch_size - Specifies the maximum number of documents to return (default: 1)
  • :resume_after - Specifies the logical starting point for the new change stream.
  • :start_at_operation_time - The change stream will only provide changes that occurred at or after the specified timestamp (since 4.0)
  • :start_after - Similar to resumeAfter, this option takes a resume token and starts a new change stream returning the first notification after the token. This will allow users to watch collections that have been dropped and recreated or newly renamed collections without missing any notifications. (since 4.0.7)

wire_version(topology_pid)

@spec wire_version(GenServer.server()) :: {:ok, integer()} | {:error, Mongo.Error.t()}

Returns the wire version of the database

Example

{:ok, top} = Mongo.start_link(...)
Mongo.wire_version(top)

{:ok, 8}