Blink.Adapter.Postgres (blink v0.6.1)
PostgreSQL adapter for Blink bulk copy operations.
This adapter uses PostgreSQL's COPY FROM STDIN command for efficient bulk
insertion of data. It is the default adapter used by Blink.
Usage
This adapter is used automatically by default:
Blink.copy_to_table(rows, "users", MyApp.Repo)
Or explicitly:
Blink.copy_to_table(rows, "users", MyApp.Repo, adapter: Blink.Adapter.Postgres)
Summary
Functions
Copies rows into a database table using PostgreSQL's COPY command.
Functions
@spec call(
        rows :: Enumerable.t(),
        table_name :: String.t(),
        repo :: Ecto.Repo.t(),
        opts :: Keyword.t()
      ) :: :ok
Copies rows into a database table using PostgreSQL's COPY command.
Parameters
rows - An enumerable (list or stream) of maps, where each map represents a row to insert. All maps must have the same keys, which correspond to the table columns. Using a stream allows memory-efficient seeding of large datasets.
table_name - The name of the table to insert into (string).
repo - An Ecto repository module configured with a Postgres adapter.
opts - Keyword list of options:
  :batch_size - Number of rows per batch (default: 8,000). Rows are chunked into batches, each inserted via a separate COPY operation. To disable batching, set this to a value equal to or greater than the total number of rows.
  :max_concurrency - Number of parallel COPY operations (default: 6). When greater than 1, batches are inserted over multiple database connections in parallel.
  :timeout - Timeout in milliseconds for each batch operation (default: :infinity).
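To make the batching options concrete: with the default :batch_size of 8,000, a 20,000-row dataset is split into three separate COPY operations. The sketch below illustrates the chunking arithmetic with Stream.chunk_every; whether Blink chunks exactly this way internally is an assumption, not something these docs confirm.

```elixir
# Hypothetical 20,000-row dataset built lazily as a stream.
rows = Stream.map(1..20_000, fn i -> %{id: i, name: "User #{i}"} end)

# With batch_size: 8_000, the rows split into ceil(20_000 / 8_000) = 3
# batches, each of which would be sent as its own COPY FROM STDIN.
batches = rows |> Stream.chunk_every(8_000) |> Enum.to_list()

Enum.map(batches, &length/1)
# => [8000, 8000, 4000]
```

To force a single COPY operation, pass a :batch_size equal to or greater than the total row count, as noted above.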
Returns
:ok- When the copy operation succeeds
Raises an exception when the copy operation fails.
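Because failures surface as raised exceptions rather than error tuples, callers that want a tagged result can wrap the call themselves. A minimal sketch, assuming a configured MyApp.Repo and that Postgres-level failures arrive as Postgrex.Error (typical for Ecto's Postgres adapter, but an assumption here):

```elixir
# Sketch: convert a failed COPY (e.g. a missing table) into an error tuple.
rows = [%{id: 1, name: "Alice"}]

try do
  Blink.Adapter.Postgres.call(rows, "users", MyApp.Repo)
  {:ok, :copied}
rescue
  # Postgrex.Error is an assumption about the exception type raised.
  e in Postgrex.Error -> {:error, Exception.message(e)}
end
```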
Examples
iex> rows = [%{id: 1, name: "Alice"}, %{id: 2, name: "Bob"}]
iex> Blink.Adapter.Postgres.call(rows, "users", MyApp.Repo)
:ok
# Using a stream for memory-efficient seeding
iex> stream = Stream.map(1..1_000_000, fn i -> %{id: i, name: "User #{i}"} end)
iex> Blink.Adapter.Postgres.call(stream, "users", MyApp.Repo)
:ok
Notes
The function assumes all rows have the same keys. NULL values are represented
as \N in the CSV format. Nested maps are automatically JSON-encoded for
JSONB columns.
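As a rough illustration of the serialization rules above, a nil value maps to the COPY null marker \N. The encoder below is a hypothetical sketch for illustration only, not Blink's actual implementation (which, per the notes, also JSON-encodes nested maps for JSONB columns):

```elixir
# Hypothetical per-value encoder mirroring the documented NULL rule:
# nil is emitted as \N, other scalars as their string form.
encode = fn
  nil -> "\\N"
  other -> to_string(other)
end

Enum.map([1, "Alice", nil], encode)
# => ["1", "Alice", "\\N"]
```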