RethinkDB.Ecto v0.7.0

RethinkDB.Ecto

Ecto adapter module for RethinkDB.

It uses the RethinkDB driver to connect and communicate with a RethinkDB database.

The adapter tries to serialize SQL-like Ecto queries to the ReQL query language in a performant manner. Many of the query patterns are inspired by the SQL to ReQL cheat sheet. If you want to know how a specific function is implemented, look at the RethinkDB.Ecto.NormalizedQuery module.

Migration support

You can create and drop databases using mix ecto.create and mix ecto.drop.

Migrations will work for creating tables and indexes. Table column specifications are not supported by RethinkDB and will be omitted when executing the migration.

This adapter provides support for creating compound and multi indexes out of the box.

To create a compound index, simply pass multiple column names to Ecto.Migration.index/3:

create index(:users, [:first_name, :last_name])

To create a multi index, pass the :multi option as follows:

create index(:posts, [:tags], options: [multi: true])
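Putting the above together, a complete migration might look like the following sketch (the table and column names are illustrative, not part of the adapter's API):

```elixir
defmodule MyApp.Repo.Migrations.CreatePosts do
  use Ecto.Migration

  def change do
    # Column specifications are ignored by RethinkDB (see above), but
    # keeping them documents the intended shape of the documents.
    create table(:posts) do
      add :title, :string
      add :tags, {:array, :string}
    end

    # Compound index over two fields
    create index(:posts, [:title, :author_id])

    # Multi index over the :tags array field
    create index(:posts, [:tags], options: [multi: true])
  end
end
```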

Executing ReQL queries

This adapter enhances the repository it is used with by providing the RethinkDB.run/3 function.

You can run RethinkDB specific queries against your repository as follows:

import RethinkDB.{Query, Lambda}

table("users")
|> has_fields(["first_name", "last_name"])
|> map(lambda fn user -> user[:first_name] + " " + user[:last_name] end)
|> MyApp.Repo.run()

Known Limitations

RethinkDB being by nature a NoSQL database with only basic support for table relationships, you should be aware of the following limitations/incompatibilities with Ecto.

Connection Pool

The adapter does not support connection pooling. All queries are executed on the same connection. Due to the multiplexed nature of RethinkDB connections, a single connection should do just fine for most use cases.

Primary Keys

The data type of a primary key is a UUID (:binary_id). In order to work properly, you must add the following attributes to your schema definitions:

@primary_key {:id, :binary_id, autogenerate: false}
@foreign_key_type :binary_id

You can set the :autogenerate option to true if you want to generate primary keys on the client side.
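For example, a schema module using these attributes might look like this sketch (the module and field names are illustrative):

```elixir
defmodule MyApp.User do
  use Ecto.Schema

  # autogenerate: false lets RethinkDB generate the UUID primary key
  # on the server side; set it to true to generate keys client side.
  @primary_key {:id, :binary_id, autogenerate: false}
  @foreign_key_type :binary_id

  schema "users" do
    field :first_name, :string
    field :last_name, :string
  end
end
```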

Unique Indexes

RethinkDB does not support unique secondary indexes. When running migrations with unique indexes, you will get a warning. Nevertheless, the index will be created.
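For instance, a migration line such as the following will emit the warning; the index is created as a regular secondary index, so uniqueness must be enforced at the application level:

```elixir
# :unique is accepted by Ecto.Migration but not enforced by RethinkDB;
# a plain secondary index on :email is created instead.
create index(:users, [:email], unique: true)
```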

Summary

Functions

Called to autogenerate a value for id/embed_id/binary_id

Returns the childspec that starts the adapter process

Deletes a single struct with the given filters

Returns the dumpers for a given type

Ensure all applications necessary to run the adapter are started

Executes a previously prepared query

Executes migration commands

Inserts a single new struct in the data store

Returns the loaders for a given type

Commands invoked to prepare a query for all, update_all and delete_all

Drops the storage given by options

Creates the storage given by options

Checks if the adapter supports ddl transaction

Updates a single struct with the given filters

Functions

autogenerate(atom)

Called to autogenerate a value for id/embed_id/binary_id.

Returns the autogenerated value, or nil if it must be autogenerated inside the storage, or raises if not supported.

Callback implementation for Ecto.Adapter.autogenerate/1.

child_spec(repo, options)

Returns the childspec that starts the adapter process.

Callback implementation for Ecto.Adapter.child_spec/2.

delete(repo, meta, filters, options)

Deletes a single struct with the given filters.

While filters can be any record column, it is expected that at least the primary key (or any other key that uniquely identifies an existing record) be given as a filter. Therefore, in case there is no record matching the given filters, {:error, :stale} is returned.

Callback implementation for Ecto.Adapter.delete/4.

dumpers(arg1, type)

Returns the dumpers for a given type.

It receives the primitive type and the Ecto type (which may be primitive as well). It returns a list of dumpers with the given type usually at the beginning.

This allows developers to properly translate values coming from Ecto into adapter ones. For example, if the database does not support booleans but instead returns 0 and 1 for them, you could add:

def dumpers(:boolean, type), do: [type, &bool_encode/1]
def dumpers(_primitive, type), do: [type]

defp bool_encode(false), do: {:ok, 0}
defp bool_encode(true), do: {:ok, 1}

All adapters are required to implement a clause for :binary_id types, since they are adapter specific. If your adapter does not provide binary ids, you may simply use Ecto.UUID:

def dumpers(:binary_id, type), do: [type, Ecto.UUID]
def dumpers(_primitive, type), do: [type]

Callback implementation for Ecto.Adapter.dumpers/2.

ensure_all_started(repo, type)

Ensure all applications necessary to run the adapter are started.

Callback implementation for Ecto.Adapter.ensure_all_started/2.

execute(repo, meta, arg, params, preprocess, options)

Executes a previously prepared query.

It must return a tuple containing the number of entries and the result set as a list of lists. The result set may also be nil if a particular operation does not support them.

The meta field is a map containing some of the fields found in the Ecto.Query struct.

It receives a process function that should be invoked for each selected field in the query result in order to convert them to the expected Ecto type. The process function will be nil if no result set is expected from the query.

Callback implementation for Ecto.Adapter.execute/6.

execute_ddl(repo, arg, options)

Executes migration commands.

Options

  • :timeout - The time in milliseconds to wait for the query call to finish, :infinity will wait indefinitely (default: 15000);
  • :pool_timeout - The time in milliseconds to wait for calls to the pool to finish, :infinity will wait indefinitely (default: 5000);
  • :log - When false, does not log begin/commit/rollback queries

Callback implementation for Ecto.Adapter.Migration.execute_ddl/3.

in_transaction?(repo)
insert(repo, meta, fields, on_conflict, returning, options)

Inserts a single new struct in the data store.

Autogenerate

The primary key will be automatically included in returning if the field has type :id or :binary_id and no value was set by the developer or none was autogenerated by the adapter.

Callback implementation for Ecto.Adapter.insert/6.

insert_all(repo, meta, header, fields, on_conflict, returning, options)

Inserts multiple entries into the data store.

Callback implementation for Ecto.Adapter.insert_all/7.

loaders(arg1, type)

Returns the loaders for a given type.

It receives the primitive type and the Ecto type (which may be primitive as well). It returns a list of loaders with the given type usually at the end.

This allows developers to properly translate values coming from the adapters into Ecto ones. For example, if the database does not support booleans but instead returns 0 and 1 for them, you could add:

def loaders(:boolean, type), do: [&bool_decode/1, type]
def loaders(_primitive, type), do: [type]

defp bool_decode(0), do: {:ok, false}
defp bool_decode(1), do: {:ok, true}

All adapters are required to implement a clause for :binary_id types, since they are adapter specific. If your adapter does not provide binary ids, you may simply use Ecto.UUID:

def loaders(:binary_id, type), do: [Ecto.UUID, type]
def loaders(_primitive, type), do: [type]

Callback implementation for Ecto.Adapter.loaders/2.

prepare(func, query)

Commands invoked to prepare a query for all, update_all and delete_all.

The returned result is given to execute/6.

Callback implementation for Ecto.Adapter.prepare/2.

rollback(repo, value)
storage_down(options)

Drops the storage given by options.

Returns :ok if it was dropped successfully.

Returns {:error, :already_down} if the storage has already been dropped or {:error, term} in case anything else goes wrong.

Examples

storage_down(username: "postgres",
             database: "ecto_test",
             hostname: "localhost")

Callback implementation for Ecto.Adapter.Storage.storage_down/1.

storage_up(options)

Creates the storage given by options.

Returns :ok if it was created successfully.

Returns {:error, :already_up} if the storage has already been created or {:error, term} in case anything else goes wrong.

Examples

storage_up(username: "postgres",
           database: "ecto_test",
           hostname: "localhost")

Callback implementation for Ecto.Adapter.Storage.storage_up/1.

supports_ddl_transaction?()

Checks if the adapter supports ddl transaction.

Callback implementation for Ecto.Adapter.Migration.supports_ddl_transaction?/0.

update(repo, meta, fields, filters, returning, options)

Updates a single struct with the given filters.

While filters can be any record column, it is expected that at least the primary key (or any other key that uniquely identifies an existing record) be given as a filter. Therefore, in case there is no record matching the given filters, {:error, :stale} is returned.

Callback implementation for Ecto.Adapter.update/6.