Blink.Adapter.Postgres (blink v0.5.1)


PostgreSQL adapter for Blink bulk copy operations.

This adapter uses PostgreSQL's COPY FROM STDIN command for efficient bulk insertion of data. It is the default adapter used by Blink.

Usage

This adapter is used by default, so it does not need to be specified:

Blink.copy_to_table(items, "users", MyApp.Repo)

Or explicitly:

Blink.copy_to_table(items, "users", MyApp.Repo, adapter: Blink.Adapter.Postgres)

Implementation

The adapter implements the Blink.Adapter behaviour by streaming data to PostgreSQL in CSV format with a pipe (|) delimiter.
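To make the encoding concrete, here is a minimal sketch of how a row map could be turned into a pipe-delimited line for COPY FROM STDIN. This is an illustration of the format described above, not Blink's actual implementation; the module and function names are assumptions.

```elixir
defmodule CsvSketch do
  # Encodes one row map into a pipe-delimited line for COPY FROM STDIN.
  # `columns` fixes the column order, since map key order is not guaranteed.
  def encode_row(row, columns) do
    columns
    |> Enum.map(fn col -> encode_value(Map.get(row, col)) end)
    |> Enum.join("|")
  end

  # NULL values are written as \N, COPY's default null marker.
  defp encode_value(nil), do: "\\N"
  defp encode_value(value), do: to_string(value)
end

CsvSketch.encode_row(%{id: 1, name: "Alice", email: nil}, [:id, :name, :email])
# => "1|Alice|\\N"
```

In the real adapter, lines like this are streamed to the connection rather than built up in memory.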

Summary

Functions

Copies items into a database table using PostgreSQL's COPY command.

Functions

call(items, table_name, repo, opts \\ [])

@spec call(
  items :: Enumerable.t(),
  table_name :: String.t(),
  repo :: Ecto.Repo.t(),
  opts :: Keyword.t()
) :: :ok

Copies items into a database table using PostgreSQL's COPY command.

This function uses PostgreSQL's COPY FROM STDIN command for efficient bulk insertion of data.

Parameters

  • items - An enumerable (list or stream) of maps where each map represents a row to insert. All maps must have the same keys, which correspond to the table columns. Using a stream allows for memory-efficient seeding of large datasets.
  • table_name - The name of the table to insert into (string or atom).
  • repo - An Ecto repository module configured with a Postgres adapter.
  • opts - Keyword list of options:
    • :batch_size - Number of rows per batch when streaming (default: 10,000). Only applies to streams; lists are sent as a single batch.
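The batching behaviour for streams can be pictured with Stream.chunk_every/2: the input is split into chunks of :batch_size rows, and each chunk is sent as one COPY batch. This is a sketch of the assumed mechanics, not Blink's source.

```elixir
# A 7-element stream chunked with batch_size: 3 yields batches of 3, 3, and 1.
batch_size = 3

1..7
|> Stream.map(fn i -> %{id: i} end)
|> Stream.chunk_every(batch_size)
|> Enum.map(&length/1)
# => [3, 3, 1]
```

Because chunks are materialized one at a time, only :batch_size rows are held in memory at once, which is what makes stream input suitable for seeding large datasets.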

Returns

  • :ok - When the copy operation succeeds

Raises an exception when the copy operation fails.

Examples

iex> items = [%{id: 1, name: "Alice"}, %{id: 2, name: "Bob"}]
iex> Blink.Adapter.Postgres.call(items, "users", MyApp.Repo)
:ok

# Using a stream for memory-efficient seeding
iex> stream = Stream.map(1..1_000_000, fn i -> %{id: i, name: "User #{i}"} end)
iex> Blink.Adapter.Postgres.call(stream, "users", MyApp.Repo)
:ok

Notes

The function assumes all items have the same keys. NULL values are represented as \N in the CSV format. Nested maps are automatically JSON-encoded for JSONB columns.