CLI Reference Guide
Complete reference for the Tinkex command-line interface. The CLI provides a thin wrapper over the SDK for quick checkpoint management, text generation, and API exploration without writing Elixir code.
Overview
The Tinkex CLI is distributed as an escript executable that bundles the entire application into a single file. It supports three main command groups:
- checkpoint - Save and manage model checkpoints
- run - Generate text completions and manage training runs
- version - Display version information
All commands support a consistent set of global options for API configuration, and most operations return structured output that can be saved to files or piped to other tools.
Installation
Build the escript from source:
cd tinkex
MIX_ENV=prod mix escript.build # emits ./tinkex
Optionally install to your PATH:
mix escript.install ./tinkex # installs to ~/.mix/escripts
Verify the installation:
./tinkex version
# or if installed:
tinkex version
Global Options
These options are available for all commands that interact with the Tinker API:
- --api-key <key> - API key for authentication (required unless the TINKER_API_KEY environment variable is set)
- --base-url <url> - API base URL (defaults to the production endpoint)
- --timeout <ms> - Request timeout in milliseconds (default: 120000)
Example using global options:
./tinkex run \
--api-key "$TINKER_API_KEY" \
--base-url "https://tinker.thinkingmachines.dev/services/tinker-prod" \
--timeout 60000 \
--prompt "Hello"
tinkex checkpoint - Checkpoint Management
The checkpoint command provides two modes: saving new checkpoints and managing existing ones.
Save Checkpoints
Create and save a checkpoint for a model configuration:
./tinkex checkpoint \
--base-model meta-llama/Llama-3.1-8B \
--rank 32 \
--output ./checkpoint.json \
--api-key "$TINKER_API_KEY"
Options
Model Configuration:
- --base-model <id> - Base model identifier (required, e.g. meta-llama/Llama-3.1-8B)
- --model-path <path> - Local model path (alternative to --base-model)
Output:
- --output <path> - Path to write checkpoint metadata JSON (required)
LoRA Configuration:
- --rank <int> - LoRA rank (default: 32)
- --seed <int> - Random seed for reproducibility
- --train-mlp - Enable MLP training (default: true)
- --train-attn - Enable attention training (default: true)
- --train-unembed - Enable unembedding training (default: true)
Checkpoint Metadata Format
The checkpoint command writes a JSON metadata file to the specified --output path:
{
"base_model": "meta-llama/Llama-3.1-8B",
"model_id": "run-abc123/weights/0001",
"weights_path": "/path/to/weights",
"saved_at": "2024-11-26T12:34:56Z",
"response": {
"model_id": "run-abc123/weights/0001",
"path": "/path/to/weights"
}
}
Note: The actual model weights are stored on the Tinker service. The local metadata file contains references and timestamps for tracking purposes.
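Because the metadata file is plain JSON, scripts can read it back to recover the references for later steps. A minimal Python sketch (illustrative only; the field names follow the format shown above):

```python
import json

def read_checkpoint_metadata(path):
    """Load the metadata JSON written by `tinkex checkpoint --output`.

    Returns the (model_id, base_model) pair. The weights themselves
    remain on the Tinker service; the local file only carries references.
    """
    with open(path, encoding="utf-8") as f:
        meta = json.load(f)
    return meta["model_id"], meta["base_model"]
```

This is handy in pipelines that save a checkpoint in one step and feed the resulting model_id to a later one.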
Example: Save Checkpoint with Custom LoRA Config
./tinkex checkpoint \
--base-model Qwen/Qwen3-8B \
--rank 64 \
--seed 42 \
--train-mlp \
--train-attn \
--output checkpoints/qwen-lora-64.json \
--api-key "$TINKER_API_KEY"
List Checkpoints
List all user checkpoints with pagination and JSON output, or filter by training run:
./tinkex checkpoint list [--run-id <id>] [--limit <int>] [--offset <int>] [--format table|json]
Options:
- --run-id <id> - Restrict results to a single training run
- --limit <int> - Maximum number of checkpoints to return (0 = fetch all; default: 20)
- --offset <int> - Number of checkpoints to skip (default: 0)
- --format table|json / --json - Output format (default: table); JSON includes total and shown counts
When multiple pages are fetched (--limit 0 or large lists), progress is printed to stderr while stdout remains clean for piping/JSON.
Examples:
# Fetch all checkpoints with JSON output
./tinkex checkpoint list --limit 0 --format json --api-key "$TINKER_API_KEY"
# List checkpoints for a single run
./tinkex checkpoint list --run-id run-123 --limit 5 --api-key "$TINKER_API_KEY"
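Because stdout carries only the JSON, the output can be piped straight into a parser. A small Python helper (hypothetical, not part of the CLI) that pulls the public checkpoint paths out of the list shape documented in this section:

```python
import json

def public_checkpoint_paths(list_json):
    """Given the JSON printed by `tinkex checkpoint list --format json`,
    return the tinker_path of every checkpoint marked public."""
    data = json.loads(list_json)
    return [c["tinker_path"] for c in data["checkpoints"] if c.get("public")]
```

The equivalent one-liner with jq, if you prefer shell pipelines: `jq '.checkpoints[] | select(.public) | .tinker_path'`.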
JSON shape (truncated):
{
"total": 3,
"shown": 3,
"checkpoints": [
{
"checkpoint_id": "ckpt-1",
"checkpoint_type": "weights",
"training_run_id": "run-123",
"size_bytes": 1024,
"public": true,
"time": "2025-11-26T00:00:00Z",
"tinker_path": "tinker://run-123/weights/0001"
}
]
}
Get Checkpoint Info
Retrieve detailed information about a specific checkpoint, including size, visibility, timestamps, training run ID, and base model/LoRA metadata:
./tinkex checkpoint info <tinker_path> [--format table|json]
Arguments:
- <tinker_path> - Checkpoint path (e.g. tinker://run-123/weights/0001)
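Scripts that juggle many checkpoint paths sometimes need to split them into components. A Python sketch; the run/kind/index layout is inferred from the examples in this guide and is an assumption, not a documented contract:

```python
def parse_tinker_path(path):
    """Split a checkpoint path like tinker://run-123/weights/0001
    into (run_id, kind, index). Raises ValueError on anything else."""
    prefix = "tinker://"
    if not path.startswith(prefix):
        raise ValueError(f"not a tinker path: {path}")
    parts = path[len(prefix):].split("/")
    if len(parts) != 3:
        raise ValueError(f"unexpected path layout: {path}")
    run_id, kind, index = parts
    return run_id, kind, index
```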
Example output (table):
Checkpoint ID: ckpt-1
Training run ID: run-123
Type: weights
Path: tinker://run-123/weights/0001
Size: 1.0 KB
Public: true
Created: 2025-11-26T00:00:00Z
Base model: meta-llama/Llama-3.1-8B
LoRA: true
LoRA rank: 32
Pass --format json (or --json) to receive the full checkpoint + weights metadata as a JSON object.
Example:
./tinkex checkpoint info tinker://run-abc123/weights/0001 \
--api-key "$TINKER_API_KEY"
Publish Checkpoint
Make a checkpoint publicly accessible:
./tinkex checkpoint publish <tinker_path>
Example:
./tinkex checkpoint publish tinker://run-123/weights/0001 \
--api-key "$TINKER_API_KEY"
# Output: Published tinker://run-123/weights/0001
Unpublish Checkpoint
Remove public access from a checkpoint:
./tinkex checkpoint unpublish <tinker_path>
Example:
./tinkex checkpoint unpublish tinker://run-123/weights/0001 \
--api-key "$TINKER_API_KEY"
# Output: Unpublished tinker://run-123/weights/0001
Delete Checkpoint
Permanently delete one or more checkpoints with a single confirmation:
./tinkex checkpoint delete <tinker_path> [<tinker_path> ...] [--yes]
Warning: This operation is irreversible. Ensure you have backups if needed. Use --yes to
skip the interactive confirmation prompt.
Example:
./tinkex checkpoint delete tinker://run-old/weights/0001 tinker://run-old/weights/0002 \
--api-key "$TINKER_API_KEY"
# Output: Preparing to delete 2 checkpoints...
# Deleted tinker://run-old/weights/0001
# Deleted tinker://run-old/weights/0002
For an end-to-end live flow that creates two checkpoints and deletes both with a
single --yes confirmation, see examples/checkpoint_multi_delete_live.exs.
Download Checkpoint
Download and extract checkpoint files locally:
./tinkex checkpoint download <tinker_path> [--output <dir>] [--force]
Options:
- --output <dir> - Output directory for extracted files (default: current directory)
- --force - Overwrite existing files if present
Example:
./tinkex checkpoint download tinker://run-123/weights/0001 \
--output ./models/checkpoint-001 \
--force \
--api-key "$TINKER_API_KEY"
# Output: Downloaded to ./models/checkpoint-001
Help for Checkpoint Commands
./tinkex checkpoint --help
./tinkex checkpoint list --help
tinkex run - Text Generation
The run command generates text completions using the Tinker sampling API and manages training runs.
Generate Text
Sample text completions from a model:
./tinkex run \
--base-model meta-llama/Llama-3.1-8B \
--prompt "Hello there" \
--max-tokens 64 \
--temperature 0.7 \
--num-samples 2 \
--api-key "$TINKER_API_KEY"
Options
Model Configuration:
- --base-model <id> - Base model identifier (required, e.g. meta-llama/Llama-3.1-8B)
- --model-path <path> - Local model path (alternative to --base-model)
Prompt Input (choose one):
- --prompt <text> - Prompt text directly on the command line
- --prompt-file <path> - Path to a file containing the prompt (see Prompt Input Formats)
Sampling Parameters:
- --max-tokens <int> - Maximum tokens to generate
- --temperature <float> - Sampling temperature (default: 1.0)
- --top-k <int> - Top-k sampling parameter (default: -1, disabled)
- --top-p <float> - Nucleus sampling parameter (default: 1.0)
- --num-samples <int> - Number of samples to return (default: 1)
Output Control:
- --output <path> - Write output to a file instead of stdout
- --json - Output the full response as JSON instead of plain text
Advanced:
- --http-pool <name> - HTTP pool name to use for connection pooling
Plain Text Output
By default, tinkex run decodes tokens and prints human-readable text:
./tinkex run \
--base-model meta-llama/Llama-3.1-8B \
--prompt "The capital of France is" \
--max-tokens 10 \
--api-key "$TINKER_API_KEY"
Output:
Starting sampling...
Sample 1:
Paris, which is located in the northern
stop_reason=length | avg_logprob=-1.234
Sampling complete (1 sequences)
JSON Output
Use --json to get the full structured response:
./tinkex run \
--base-model meta-llama/Llama-3.1-8B \
--prompt "Hello" \
--max-tokens 5 \
--json \
--api-key "$TINKER_API_KEY"
Output:
{
"sequences": [
{
"tokens": [1245, 345, 678, 901, 234],
"logprobs": [-0.123, -0.456, -0.789, -0.234, -0.567],
"stop_reason": "length"
}
],
"prompt_logprobs": null,
"topk_prompt_logprobs": null,
"type": "sample"
}
Prompt Input Formats
The CLI supports multiple prompt input formats via --prompt-file:
Plain Text File:
# Create a text file
echo "Write a haiku about coding" > prompt.txt
./tinkex run \
--base-model meta-llama/Llama-3.1-8B \
--prompt-file prompt.txt \
--max-tokens 50 \
--api-key "$TINKER_API_KEY"
JSON Token Array:
For precise control, provide pre-tokenized input as a JSON array of integers:
# Create a JSON file with token IDs
echo '[1, 2, 3, 4, 5]' > tokens.json
./tinkex run \
--base-model meta-llama/Llama-3.1-8B \
--prompt-file tokens.json \
--max-tokens 20 \
--api-key "$TINKER_API_KEY"
JSON Token Object:
Alternatively, wrap tokens in an object:
{
"tokens": [1, 2, 3, 4, 5]
}
The CLI automatically detects the format:
- If the file parses as JSON and contains an integer array (or {"tokens": [...]}), it's treated as token IDs
- Otherwise, it's treated as plain text
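The detection rule above can be mirrored in a few lines if you want to validate a prompt file before sending it. An illustrative Python re-implementation (not the CLI's own code):

```python
import json

def classify_prompt_file(text):
    """Classify prompt-file contents the way --prompt-file detection
    is described: a JSON integer array or a {"tokens": [...]} object
    means token IDs; anything else is plain text."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return ("text", text)
    if isinstance(data, list) and all(isinstance(t, int) for t in data):
        return ("tokens", data)
    if isinstance(data, dict) and isinstance(data.get("tokens"), list):
        return ("tokens", data["tokens"])
    return ("text", text)
```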
Writing Output to Files
Use --output to write results to a file instead of stdout:
# Plain text output
./tinkex run \
--base-model meta-llama/Llama-3.1-8B \
--prompt "Generate a story" \
--max-tokens 200 \
--output story.txt \
--api-key "$TINKER_API_KEY"
# JSON output
./tinkex run \
--base-model meta-llama/Llama-3.1-8B \
--prompt "Hello world" \
--max-tokens 50 \
--json \
--output response.json \
--api-key "$TINKER_API_KEY"
Multiple Samples
Generate multiple completions in a single request:
./tinkex run \
--base-model meta-llama/Llama-3.1-8B \
--prompt "Once upon a time" \
--max-tokens 50 \
--num-samples 3 \
--temperature 0.9 \
--api-key "$TINKER_API_KEY"
Output:
Starting sampling...
Sample 1:
, there was a brave knight...
stop_reason=length | avg_logprob=-1.234
Sample 2:
, in a land far away...
stop_reason=length | avg_logprob=-1.456
Sample 3:
, a young wizard discovered...
stop_reason=length | avg_logprob=-1.123
Sampling complete (3 sequences)
List Training Runs
List all training runs with pagination and JSON output:
./tinkex run list [--limit <int>] [--offset <int>] [--format table|json]
Options:
- --limit <int> - Maximum number of runs to return (0 = fetch all; default: 20)
- --offset <int> - Number of runs to skip (default: 0)
- --format table|json / --json - Output format (default: table); JSON includes total/shown plus full run objects
Progress is printed to stderr when multiple pages are fetched (e.g., --limit 0).
Example (JSON):
./tinkex run list --limit 0 --format json --api-key "$TINKER_API_KEY"
{
"total": 3,
"shown": 3,
"runs": [
{
"training_run_id": "run-123",
"base_model": "meta-llama/Llama-3.1-8B",
"model_owner": "owner@example.com",
"is_lora": true,
"lora_rank": 16,
"corrupted": false,
"last_request_time": "2025-11-26T00:00:00Z",
"last_checkpoint": {...},
"last_sampler_checkpoint": {...},
"user_metadata": {"stage": "prod"}
}
]
}
Get Training Run Info
Retrieve detailed information about a specific training run:
./tinkex run info <run_id> [--format table|json]
Arguments:
- <run_id> - Training run identifier
Table output:
run-abc123 (meta-llama/Llama-3.1-8B)
Owner: user@example.com
LoRA: Yes
LoRA rank: 16
Status: Active
Last update: 2025-11-26T00:00:00Z
Last training checkpoint: ckpt-123
Time: 2025-11-26T00:00:00Z
Path: tinker://run-abc123/weights/0001
Metadata: stage=prod
--format json (or --json) returns the full training run object, including owner, LoRA rank, last training/sampler checkpoints, and user metadata.
Help for Run Commands
./tinkex run --help
./tinkex run list --help
tinkex version - Version Information
Display version and build information:
./tinkex version
# Output: tinkex 0.1.8 (abc1234)
./tinkex --version # alias
JSON Output
Get structured version information:
./tinkex version --json
Output:
{
"version": "0.1.8",
"commit": "abc1234"
}
The commit hash is the short Git SHA from the build environment (7 characters). If Git is unavailable or the build is not from a Git repository, the commit field will be null.
Programmatic CLI Invocation
You can invoke the CLI from Elixir scripts using Tinkex.CLI.run/1:
# examples/cli_run_text.exs
defmodule MyScript do
alias Tinkex.CLI
def run do
{:ok, _} = Application.ensure_all_started(:tinkex)
args = [
"run",
"--base-model", "meta-llama/Llama-3.1-8B",
"--prompt", "Hello from Elixir",
"--max-tokens", "64",
"--temperature", "0.7",
"--api-key", System.fetch_env!("TINKER_API_KEY")
]
case CLI.run(args) do
{:ok, %{response: response}} ->
IO.inspect(response, label: "sampling response")
{:error, reason} ->
IO.puts(:stderr, "CLI failed: #{inspect(reason)}")
end
end
end
MyScript.run()
Using Prompt Files Programmatically
# examples/cli_run_prompt_file.exs
defmodule MyScript do
alias Tinkex.CLI
def run do
{:ok, _} = Application.ensure_all_started(:tinkex)
# Create temporary prompt file
tmp_dir = System.tmp_dir!()
prompt_path = Path.join(tmp_dir, "prompt.txt")
output_path = Path.join(tmp_dir, "output.json")
File.write!(prompt_path, "Hello from a prompt file")
args = [
"run",
"--base-model", "meta-llama/Llama-3.1-8B",
"--prompt-file", prompt_path,
"--json",
"--output", output_path,
"--api-key", System.fetch_env!("TINKER_API_KEY")
]
case CLI.run(args) do
{:ok, _} ->
IO.puts("JSON output written to #{output_path}")
IO.puts(File.read!(output_path))
{:error, reason} ->
IO.puts(:stderr, "CLI failed: #{inspect(reason)}")
end
end
end
MyScript.run()
Return Values
Tinkex.CLI.run/1 returns:
- {:ok, result} - Success, where result is a map with command-specific data
- {:error, reason} - Failure, with error details
The result map structure varies by command:
Checkpoint save:
{:ok, %{
command: :checkpoint,
metadata: %{
"base_model" => "meta-llama/Llama-3.1-8B",
"model_id" => "run-123/weights/0001",
"saved_at" => "2024-11-26T12:34:56Z",
...
}
}}
Run (sampling):
{:ok, %{
command: :run,
response: %Tinkex.Types.SampleResponse{
sequences: [...],
...
}
}}
Version:
{:ok, %{
command: :version,
version: "0.1.8",
commit: "abc1234",
options: %{json: false}
}}
Complete Examples
Example 1: Save Checkpoint and Generate Text
#!/bin/bash
set -e
API_KEY="$TINKER_API_KEY"
MODEL="meta-llama/Llama-3.1-8B"
# Save checkpoint
echo "Saving checkpoint..."
./tinkex checkpoint \
--base-model "$MODEL" \
--rank 32 \
--output checkpoint.json \
--api-key "$API_KEY"
# Generate text
echo "Generating text..."
./tinkex run \
--base-model "$MODEL" \
--prompt "The meaning of life is" \
--max-tokens 100 \
--temperature 0.8 \
--output generation.txt \
--api-key "$API_KEY"
echo "Done! Check checkpoint.json and generation.txt"
Example 2: Batch Text Generation with JSON Output
#!/bin/bash
API_KEY="$TINKER_API_KEY"
MODEL="meta-llama/Llama-3.1-8B"
# Create prompts directory
mkdir -p prompts outputs
# Create multiple prompt files
echo "Write a haiku about code" > prompts/haiku.txt
echo "Explain recursion simply" > prompts/recursion.txt
echo "List 5 programming languages" > prompts/languages.txt
# Process each prompt
for prompt_file in prompts/*.txt; do
base=$(basename "$prompt_file" .txt)
echo "Processing: $base"
./tinkex run \
--base-model "$MODEL" \
--prompt-file "$prompt_file" \
--max-tokens 100 \
--temperature 0.7 \
--json \
--output "outputs/${base}.json" \
--api-key "$API_KEY"
done
echo "All prompts processed! Results in outputs/"
Example 3: Checkpoint Management Workflow
#!/bin/bash
API_KEY="$TINKER_API_KEY"
# List all checkpoints
echo "=== Your Checkpoints ==="
./tinkex checkpoint list --limit 20 --api-key "$API_KEY"
# Get info on specific checkpoint
CHECKPOINT_PATH="tinker://run-123/weights/0001"
echo ""
echo "=== Checkpoint Info ==="
./tinkex checkpoint info "$CHECKPOINT_PATH" --api-key "$API_KEY"
# Download checkpoint
echo ""
echo "=== Downloading Checkpoint ==="
./tinkex checkpoint download "$CHECKPOINT_PATH" \
--output ./models/checkpoint-001 \
--force \
--api-key "$API_KEY"
echo ""
echo "Checkpoint saved to ./models/checkpoint-001"
Example 4: Using Token IDs for Precise Control
#!/bin/bash
API_KEY="$TINKER_API_KEY"
MODEL="meta-llama/Llama-3.1-8B"
# Create a JSON file with specific token IDs
# (These would be actual token IDs from your tokenizer)
cat > tokens.json <<EOF
{
"tokens": [1, 450, 3783, 315, 2324, 374]
}
EOF
# Generate text from token IDs
./tinkex run \
--base-model "$MODEL" \
--prompt-file tokens.json \
--max-tokens 50 \
--temperature 0.7 \
--json \
--output output.json \
--api-key "$API_KEY"
echo "Generated text from token IDs:"
cat output.json | jq .
Example 5: Environment-Based Configuration
#!/bin/bash
# Set environment variables for cleaner command lines
export TINKER_API_KEY="tml-your-api-key"
export TINKER_BASE_URL="https://tinker.thinkingmachines.dev/services/tinker-prod"
# Now you can omit --api-key and --base-url
./tinkex run \
--base-model meta-llama/Llama-3.1-8B \
--prompt "Hello world" \
--max-tokens 20
# Or use them programmatically
./tinkex checkpoint \
--base-model Qwen/Qwen3-8B \
--output checkpoint.json
Error Handling
The CLI provides clear error messages for common issues:
Missing API Key:
Checkpoint failed. Please check your inputs: Missing --api-key
Missing Required Options:
Checkpoint failed. Please check your inputs: --output is required for checkpoint command
Invalid Options:
Invalid option(s) for {:checkpoint, :save}: --invalid-flag
Server Errors:
Sampling failed due to server or transient error. Consider retrying: API request failed
Timeout:
Sampling failed due to server or transient error. Consider retrying: Timed out while awaiting sampling
Exit Codes
The CLI uses standard exit codes:
- 0 - Success
- 1 - Error (validation, server error, or timeout)
This allows for shell scripting:
#!/bin/bash
if ./tinkex run --prompt "Test" --base-model meta-llama/Llama-3.1-8B; then
echo "Success!"
else
echo "Failed with exit code: $?"
exit 1
fi
Performance Tips
Connection Pooling: The CLI automatically uses HTTP/2 connection pools. For batch operations, consider using the SDK directly with ServiceClient to reuse connections.
Timeouts: Adjust --timeout for large generation requests:
./tinkex run --timeout 300000 --max-tokens 2000 ...
Parallel Processing: For multiple independent requests, use shell parallelization:
# Generate 4 samples in parallel
for i in {1..4}; do
  ./tinkex run --prompt "Sample $i" ... &
done
wait
Output Formats: Use --format json (or --json) on checkpoint/run management commands when you need to parse output programmatically; --json remains available on sampling commands. Plain text is more efficient for human reading.
Troubleshooting
Command Not Found
If tinkex is not found after building:
# Use relative path
./tinkex version
# Or add to PATH
export PATH="$PATH:$PWD"
tinkex version
# Or install globally
mix escript.install ./tinkex
# Then ensure ~/.mix/escripts is in PATH
export PATH="$PATH:$HOME/.mix/escripts"
SSL/TLS Errors
If you encounter certificate verification errors:
# Set base URL explicitly
./tinkex run \
--base-url "https://tinker.thinkingmachines.dev/services/tinker-prod" \
...
Large Prompts
For very large prompts, use --prompt-file instead of --prompt:
# This may fail if the prompt is too large for command line
./tinkex run --prompt "$(cat large_prompt.txt)" ...
# Use prompt file instead
./tinkex run --prompt-file large_prompt.txt ...
JSON Parsing
When using --json, ensure you have jq or similar tools for parsing:
./tinkex run --json ... | jq '.sequences[0].tokens'
See Also
- Getting Started Guide - Installation and setup
- API Reference - SDK API documentation
- Troubleshooting Guide - Common issues and solutions
- Training Loop Guide - End-to-end training workflows
- Examples Directory (examples/) - Runnable example scripts
Help Commands
All commands support --help or -h:
./tinkex --help
./tinkex checkpoint --help
./tinkex checkpoint list --help
./tinkex run --help
./tinkex run list --help
./tinkex version --help