# `Slither.Examples.BatchStats.StatsDemo`
[🔗](https://github.com/nshkrdotcom/slither/blob/v0.1.0/lib/slither/examples/batch_stats/stats_demo.ex#L1)

Demonstrates Dispatch with all three batching strategies, streaming, and
fault isolation via per-worker process boundaries.

Generates 50 numeric datasets of varying sizes (small, medium, large) plus
5 "poison pill" datasets that contain invalid data (nil, NaN, empty).
Dispatches them to a Python `batch_stats` module using:

  1. **FixedBatch** -- fixed chunks of 10 items
  2. **WeightedBatch** -- batches capped at 500 total values
  3. **KeyPartition** -- one batch per dataset size category
  4. **Streaming** -- lazy stream with `as_completed` ordering
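The three batching strategies above are standard chunking patterns. As a language-neutral illustration (a Python sketch of what each strategy does, not Slither's actual implementation), they can be written as:

```python
from itertools import islice
from collections import defaultdict

def fixed_batches(items, size=10):
    # FixedBatch: split the input into fixed chunks of `size` items.
    it = iter(items)
    while chunk := list(islice(it, size)):
        yield chunk

def weighted_batches(datasets, cap=500):
    # WeightedBatch: greedily pack datasets until the next one would
    # push the batch past `cap` total values, then start a new batch.
    batch, weight = [], 0
    for ds in datasets:
        if batch and weight + len(ds) > cap:
            yield batch
            batch, weight = [], 0
        batch.append(ds)
        weight += len(ds)
    if batch:
        yield batch

def key_partitions(datasets, key):
    # KeyPartition: one batch per key (e.g. size category).
    parts = defaultdict(list)
    for ds in datasets:
        parts[key(ds)].append(ds)
    return dict(parts)
```

Note that `weighted_batches` never splits a single dataset, so one oversized dataset still forms its own batch; the cap bounds the total only across whole datasets.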

After all strategy runs, queries each worker for its accumulated running
statistics (Welford's online mean/variance) to show that per-process state
remains internally consistent -- something not guaranteed under free-threaded
Python, where concurrent threads can corrupt shared accumulators.
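The per-worker accumulator described above is Welford's single-pass algorithm, a well-known numerically stable way to maintain a running mean and variance. A minimal self-contained Python sketch (illustrating the algorithm itself, not Slither's worker code):

```python
class RunningStats:
    """Welford's online algorithm for mean and variance."""

    def __init__(self):
        self.n = 0        # number of values seen
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # sum of squared deviations from the mean

    def update(self, x: float) -> None:
        # Single-pass update: avoids the catastrophic cancellation of
        # the naive sum / sum-of-squares formulation.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        # Sample variance; defined only for n >= 2.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

Because each worker owns its accumulator in its own process, `update/1` calls are naturally serialized; the same structure shared between threads would need a lock around the read-modify-write of `n`, `mean`, and `m2`.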

Run with `Slither.Examples.BatchStats.StatsDemo.run_demo/0`.

# `run_demo`

Run the full batch statistics demo, printing results to stdout.

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
