# Changelog

## 0.6.1 - 2026-02-14

### Fixed

- Fixed the `@spec` for `run/3` injected by `use Blink` returning `{:ok, any()} | {:error, any()}` instead of `:ok`
## 0.6.0 - 2026-02-01

### Added

- Added `:max_concurrency` option to `run/3` and `copy_to_table/4` for parallel COPY operations (default: 6)
- Added `:timeout` option to `copy_to_table/4` for batch operations (default: `:infinity`)
- Added support for per-table options via `with_table/4`: `:batch_size` and `:max_concurrency` can now be set per table, overriding the global options passed to `run/3`
- Added Configuring Options guide
### Changed

- Changed default `:batch_size` from 10,000 to 8,000 based on performance benchmarks
- Batching now applies to both lists and streams (previously only streams were batched)
## 0.5.1 - 2026-01-21

### Changed

- Removed try-rescue block in `copy_to_table/4` for invalid adapters, allowing standard Elixir error handling
### Fixed
- Fixed stream being materialized twice when seeding from CSV files
## 0.5.0 - 2026-01-18

### Added

- Added `:timeout` option to `run/3` to configure transaction timeout
- Added `:batch_size` option to `run/3` to control stream chunking for backpressure (default: 10,000 rows per chunk). Only applies to streams; lists are sent as a single batch. This is different from the previously removed `batch_size` option, which controlled CSV value batching.
- Added stream support: `table/2` callbacks can now return streams in addition to lists, enabling memory-efficient seeding of large datasets
- Added `:stream` option to `from_csv/2` to return a stream instead of a list for memory-efficient processing of large CSV files
- Added support for seeding JSONB columns: nested maps are automatically JSON-encoded during insertion
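The stream options added here can be chained end to end. A hedged sketch: only `from_csv/2` with `stream: true` and the `:batch_size`/`:timeout` options on `run/3` come from this changelog, while the exact `with_table` arity, callback shape, module name, and file path are assumptions:

```elixir
defmodule MyApp.EventSeeds do
  use Blink

  def seed(repo) do
    new()
    # The callback returns a lazy stream instead of a list, so rows
    # are read from the CSV and sent in 10,000-row chunks rather than
    # materialized in memory all at once.
    |> with_table(:events, fn _ctx ->
      from_csv("priv/seeds/events.csv", stream: true)
    end)
    |> run(repo, batch_size: 10_000, timeout: :timer.minutes(5))
  end
end
```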
### Changed

- Breaking: Renamed `Blink.Store` to `Blink.Seeder`
- Breaking: Renamed `Blink.Seeder.insert/3` to `Blink.Seeder.run/3`
- Breaking: Renamed `add_table/2` to `with_table/2`
- Breaking: Renamed `add_context/2` to `with_context/2`
- Breaking: `run/3` now returns `:ok` on success and raises on failure (previously returned `{:ok, :inserted}` or `{:error, exception}`)
- Breaking: `copy_to_table/4` now returns `:ok` on success and raises on failure
- Breaking: Adapter `call/4` callback now returns `:ok` on success and raises on failure
- Breaking: Adapter `call/4` callback now receives `table_name` as a string (previously could be atom or string)
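Taken together, the rename and return-value changes suggest a caller migration along these lines; the surrounding variables are hypothetical, and only the function names and return shapes come from this changelog:

```elixir
# Hypothetical pre-0.5.0 caller:
#
#   case Blink.Seeder.insert(seeder, repo, opts) do
#     {:ok, :inserted} -> :done
#     {:error, exception} -> raise exception
#   end
#
# From 0.5.0 on, the renamed run/3 returns :ok on success and raises
# on failure, so the happy path is a bare match:
:ok = Blink.Seeder.run(seeder, repo, opts)
```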
### Fixed

- Fixed CSV escaping in PostgreSQL COPY adapter: strings containing special characters (pipe `|`, double quote `"`, newlines, carriage returns, backslashes) are now properly escaped to prevent data corruption
### Performance
- Optimized CSV encoding
## 0.4.1 - 2026-01-11

### Added

- `use Blink` now imports `new/0`, `from_csv/1`, `from_csv/2`, `from_json/1`, `from_json/2`, `copy_to_table/3`, and `copy_to_table/4` for convenience
### Changed
- Moved batch size documentation to its own guide
- Simplified the using_context guide
## 0.4.0 - 2026-01-11

### Added
- Initial release of Blink
- Fast bulk data insertion using PostgreSQL's COPY command
- Callback-based pattern for defining seeders with `use Blink`
- Support for multiple tables with deterministic insertion order to respect foreign key constraints
- Context sharing between table definitions
- Configurable batch size for large datasets (including `batch_size: :infinity` to disable batching)
- Transaction support with automatic rollback on errors
- `Blink.from_csv/2` function for reading CSV files into maps
- `Blink.from_json/2` function for reading JSON files into maps
- Adapter pattern with `Blink.Adapter.Postgres` for database-specific bulk insert implementations
- Comprehensive test suite with integration tests
- Full documentation and examples