# `Threadline.Export`
[🔗](https://github.com/szTheory/threadline/blob/v0.2.0/lib/threadline/export.ex#L1)

CSV and JSON export for audited row changes.

Uses the **same** `filters` and `opts` as `Threadline.Query.timeline/2`, including
`:repo` resolution: `Keyword.get(opts, :repo) || Keyword.fetch!(filters, :repo)`.

Filter keys are validated via `Threadline.Query.validate_timeline_filters!/1`
(`:repo`, `:table`, `:actor_ref`, `:from`, `:to` only). Unknown keys raise
`ArgumentError`.
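
A minimal sketch of filter validation, assuming a hypothetical `MyApp.Repo` and table name:

```elixir
filters = [repo: MyApp.Repo, table: "orders", actor_ref: "user:42"]

# The documented filter keys pass validation:
{:ok, _result} = Threadline.Export.to_csv_iodata(filters, [])

# An unknown filter key raises:
Threadline.Export.to_csv_iodata([bogus: 1] ++ filters, [])
# ** (ArgumentError)
```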

## CSV columns

Fixed column order: `id`, `transaction_id`, `table_schema`, `table_name`, `op`,
`captured_at`, `table_pk`, `data_after`, `changed_fields`, `changed_from`,
`transaction_json`. The last column is a JSON object with transaction
`id`, `occurred_at`, `actor_ref`, and `source`. Datetimes are ISO 8601 UTC.

## JSON

Wrapped format (default) is one object with `format_version`, `generated_at`,
and `changes`. Pass `json_format: :ndjson` for one JSON object per line (no
outer wrapper).
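
The two output shapes, sketched with an assumed `filters` keyword list:

```elixir
# Wrapped (default): a single JSON object
#   {"format_version": ..., "generated_at": "...", "changes": [...]}
{:ok, _wrapped} = Threadline.Export.to_json_document(filters, [])

# NDJSON: one JSON object per line, no outer wrapper
{:ok, _ndjson} = Threadline.Export.to_json_document(filters, json_format: :ndjson)
```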

## Row limits

Default `max_rows` is 10_000. Exports use `limit: max_rows + 1`
to detect truncation; successful results include `truncated`, `returned_count`,
and `max_rows`. Empty matches return a header-only CSV (a single header row) and
`changes: []` in JSON.
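
For example, the documented truncation metadata looks like this (`MyApp.Repo` is a placeholder):

```elixir
{:ok, result} =
  Threadline.Export.to_csv_iodata([repo: MyApp.Repo, table: "orders"], max_rows: 100)

result.truncated      # true when more than 100 rows matched
result.returned_count # number of rows actually exported (at most 100)
result.max_rows       # 100
```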

## Streaming

`stream_changes/2` pages by `(captured_at, id)` keyset and does **not** apply
`max_rows` — cap with `Stream.take/2` or use `to_csv_iodata/2` / `to_json_document/2`
for bounded exports.

Database errors from `Ecto.Repo` are raised, the same as in `timeline/2`.

# `count_matching`

```elixir
@spec count_matching(keyword(), keyword()) :: {:ok, %{count: non_neg_integer()}}
```

Counts changes matching `filters` without loading row payloads.

Same validation and join semantics as `Threadline.Query.timeline/2`.
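
A usage sketch, assuming a hypothetical `MyApp.Repo`; the result shape follows the spec above:

```elixir
{:ok, %{count: count}} =
  Threadline.Export.count_matching([repo: MyApp.Repo, table: "orders"], [])
```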

# `stream_changes`

```elixir
@spec stream_changes(keyword(), keyword()) :: Enumerable.t()
```

Lazily enumerates `AuditChange` structs in timeline order using keyset pages.

Does **not** enforce `max_rows` — combine with `Stream.take/2` if needed.

## Options

- `:repo` — optional if present in `filters`
- `:page_size` — defaults to `1000`
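
A bounded streaming sketch, assuming a hypothetical `MyApp.Repo` and that `AuditChange` structs expose the fields listed in the CSV layout (e.g. `op`, `table_pk`):

```elixir
[repo: MyApp.Repo, table: "orders"]
|> Threadline.Export.stream_changes(page_size: 500)
|> Stream.take(10_000)  # stream_changes/2 never applies max_rows itself
|> Enum.each(fn change -> IO.inspect({change.op, change.table_pk}) end)
```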

# `to_csv_iodata`

```elixir
@spec to_csv_iodata(keyword(), keyword()) :: {:ok, map()}
```

Returns CSV as iodata plus truncation metadata.

See module documentation for `filters`, `opts`, and column layout.

## Options

- `:repo` — optional if `:repo` is present in `filters`
- `:max_rows` — defaults to `10_000`
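
A file-export sketch; `MyApp.Repo` and the `:iodata` result key are assumptions for illustration, not confirmed names:

```elixir
{:ok, result} =
  Threadline.Export.to_csv_iodata([table: "orders"], repo: MyApp.Repo, max_rows: 10_000)

# The key holding the CSV iodata is assumed here (:iodata); check the result map.
File.write!("orders_changes.csv", result.iodata)
```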

# `to_json_document`

```elixir
@spec to_json_document(keyword(), keyword()) :: {:ok, map()}
```

Returns JSON (wrapped object or NDJSON lines) as iodata plus truncation metadata.

## Options

- `:repo`, `:max_rows` — same as `to_csv_iodata/2`
- `:json_format` — `:wrapped` (default) or `:ndjson`
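
Combining the options in one call, with the same hypothetical `MyApp.Repo`:

```elixir
{:ok, _result} =
  Threadline.Export.to_json_document([repo: MyApp.Repo, table: "orders"],
    json_format: :ndjson,
    max_rows: 50_000
  )
```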

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
