py (erlang_python v1.2.0)
High-level API for executing Python code from Erlang.
This module provides a simple interface to call Python functions, execute Python code, and stream results from Python generators.
Examples
%% Call a Python function
{ok, Result} = py:call(json, dumps, [#{foo => bar}]).
%% Call with keyword arguments
{ok, Result} = py:call(json, dumps, [Data], #{indent => 2}).
%% Execute raw Python code
{ok, Result} = py:eval("1 + 2").
%% Stream from a generator
{ok, Stream} = py:stream(mymodule, generate_tokens, [Prompt]),
lists:foreach(fun(Token) -> io:format("~s", [Token]) end, Stream).
Functions
Activate a Python virtual environment. This modifies sys.path to use packages from the specified venv. The venv path should be the root directory (containing bin/lib folders).
Example:
ok = py:activate_venv(<<"/path/to/myenv">>).
{ok, _} = py:call(sentence_transformers, 'SentenceTransformer', [<<"all-MiniLM-L6-v2">>]).
Wait for an async call to complete.
Wait for an async call with timeout. Note: Identical to await/2 - provided for API symmetry with async_call.
Call a Python async function (coroutine). Returns immediately with a reference. Use async_await/1,2 to get the result. This is for calling functions defined with async def in Python.
Example:
Ref = py:async_call(aiohttp, get, [<<"https://example.com">>]),
{ok, Response} = py:async_await(Ref).
Call a Python async function with keyword arguments.
Execute multiple async calls concurrently using asyncio.gather. Takes a list of {Module, Func, Args} tuples and executes them all concurrently, returning when all are complete.
Example:
{ok, Results} = py:async_gather([
{aiohttp, get, [Url1]},
{aiohttp, get, [Url2]},
{aiohttp, get, [Url3]}
]).
Stream results from a Python async generator. Returns a list of all yielded values.
Stream results from a Python async generator with kwargs.
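A sketch of streaming from an async generator (async_stream here is a hypothetical name for this variant; the confirmed synchronous counterpart is py:stream/3):
%% Hypothetical async_stream usage; collects every yielded value into a list
{ok, Chunks} = py:async_stream(mymodule, fetch_chunks, [Url], #{limit => 10}).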
Wait for an async call to complete.
Wait for an async call with timeout.
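For example (a sketch; it assumes the reference returned by async_call/3 is accepted here and that the timeout is in milliseconds, consistent with the rest of this module):
Ref = py:async_call(asyncio, sleep, [1]),
%% Wait at most 5 seconds for the coroutine to finish
{ok, _} = py:await(Ref, 5000).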
-spec bind() -> ok | {error, term()}.
Bind current process to a dedicated Python worker. All subsequent py:call/eval/exec operations from this process will use the same worker, preserving Python state (variables, imports) across calls.
Example:
ok = py:bind(),
ok = py:exec(<<"x = 42">>),
{ok, 42} = py:eval(<<"x">>), % Same worker, x persists
ok = py:unbind().
Create an explicit context with a dedicated worker. Returns a context handle that can be passed to call/eval/exec variants. Multiple contexts can exist per process.
Example:
{ok, Ctx1} = py:bind(new),
{ok, Ctx2} = py:bind(new),
ok = py:exec(Ctx1, <<"x = 1">>),
ok = py:exec(Ctx2, <<"x = 2">>),
{ok, 1} = py:eval(Ctx1, <<"x">>), % Isolated
{ok, 2} = py:eval(Ctx2, <<"x">>), % Isolated
ok = py:unbind(Ctx1),
ok = py:unbind(Ctx2).
Call a Python function synchronously.
Call a Python function with keyword arguments.
Call a Python function with keyword arguments and custom timeout. Timeout is in milliseconds. Use infinity for no timeout. Rate limited via ETS-based semaphore to prevent overload.
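For example (a sketch; the argument order Module, Function, Args, Kwargs, Timeout is assumed):
%% Pretty-print JSON, allowing up to 10 seconds
{ok, Json} = py:call(json, dumps, [#{a => 1, b => 2}], #{indent => 2}, 10000).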
Call a Python function asynchronously, returns immediately with a ref.
Call a Python function asynchronously with kwargs.
Call with explicit context.
Call with explicit context and kwargs.
Call with explicit context, kwargs, and timeout.
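For example (a sketch; it assumes the context handle is passed as the first argument, mirroring the eval/exec context examples above):
{ok, Ctx} = py:bind(new),
{ok, _} = py:call(Ctx, math, pow, [2, 10]),
ok = py:unbind(Ctx).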
Eval with explicit context.
Eval with explicit context and locals.
Eval with explicit context, locals, and timeout.
Exec with explicit context.
-spec deactivate_venv() -> ok | {error, term()}.
Deactivate the current virtual environment. Restores sys.path to its original state.
Evaluate a Python expression and return the result.
Evaluate a Python expression with local variables.
Evaluate a Python expression with local variables and timeout. Timeout is in milliseconds. Use infinity for no timeout.
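For example (a sketch; locals are assumed to be passed as a map of variable bindings):
%% x and y are injected as Python locals; 1000 ms timeout
{ok, 30} = py:eval(<<"x * y">>, #{x => 5, y => 6}, 1000).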
Execute Python statements (no return value expected).
-spec execution_mode() -> free_threaded | subinterp | multi_executor.
Get the current execution mode. Returns one of:
- free_threaded: Python 3.13+ with no GIL (Py_GIL_DISABLED)
- subinterp: Python 3.12+ with per-interpreter GIL
- multi_executor: traditional Python with N executor threads
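For example, inspecting the runtime configuration at startup:
case py:execution_mode() of
    multi_executor -> io:format("executor threads: ~p~n", [py:num_executors()]);
    Mode -> io:format("per-call parallelism via ~p~n", [Mode])
end.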
Force Python garbage collection. Performs a full collection (all generations). Returns the number of unreachable objects collected.
Force garbage collection of a specific generation. Generation 0 collects only the youngest objects. Generation 1 collects generations 0 and 1. Generation 2 (default) performs a full collection.
-spec is_bound() -> boolean().
Check if current process is bound.
Get Python memory statistics. Returns a map containing:
- gc_stats: List of per-generation GC statistics
- gc_count: Tuple of object counts per generation
- gc_threshold: Collection thresholds per generation
- traced_memory_current: Current traced memory (if tracemalloc is enabled)
- traced_memory_peak: Peak traced memory (if tracemalloc is enabled)
-spec num_executors() -> pos_integer().
Get the number of executor threads. For multi_executor mode, this is the number of executor threads. For other modes, returns 1.
Execute multiple Python calls in true parallel using sub-interpreters. Each call runs in its own sub-interpreter with its own GIL, allowing CPU-bound Python code to run in parallel.
Requires Python 3.12+. Use subinterp_supported/0 to check availability.
Example:
%% Run numpy matrix operations in parallel
{ok, Results} = py:parallel([
{numpy, dot, [MatrixA, MatrixB]},
{numpy, dot, [MatrixC, MatrixD]},
{numpy, dot, [MatrixE, MatrixF]}
]).On older Python versions, returns {error, subinterpreters_not_supported}.
Register an Erlang function to be callable from Python. Python code can then call: erlang.call('name', arg1, arg2, ...). The function should accept a list of arguments and return a term.
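A sketch of registering a callback (register_function/2 is a hypothetical name for the registration call; only the Python-side erlang.call interface is documented above):
%% Hypothetical registration; the fun receives the Python arguments as a list
ok = py:register_function(sum, fun(Args) -> lists:sum(Args) end).
%% From Python: erlang.call('sum', 1, 2, 3) would then return 6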
Register an Erlang module:function to be callable from Python. The function will be called as Module:Function(Args).
Reload a Python module across all workers. This uses importlib.reload() to refresh the module from disk. Useful during development when Python code changes.
Note: This only affects already-imported modules. If the module hasn't been imported in a worker yet, the reload is a no-op for that worker.
Example:
%% After modifying mymodule.py on disk:
ok = py:reload(mymodule).
Returns ok if the reload succeeded in all workers, or {error, Reasons} if any worker failed.
-spec state_clear() -> ok.
Clear all shared state.
Atomically decrement a counter by 1.
Atomically decrement a counter by Amount.
Fetch a value from shared state. This state is accessible from Python workers via state_get('key').
Atomically increment a counter by 1.
Atomically increment a counter by Amount.
-spec state_keys() -> [term()].
Get all keys in shared state.
-spec state_remove(term()) -> ok.
Remove a key from shared state.
Store a value in shared state. This state is accessible from Python workers via state_set('key', value).
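A sketch of round-tripping shared state (the key representation and the {ok, Value} return shape of the read side are assumptions):
ok = py:state_set(<<"api_key">>, <<"secret">>),
{ok, <<"secret">>} = py:state_get(<<"api_key">>).
%% From Python workers: state_get('api_key') returns the stored value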
Stream results from a Python generator. Returns a list of all yielded values.
Stream results from a Python generator with kwargs.
Stream results from a Python generator expression. Evaluates the expression and if it returns a generator, streams all values.
Stream results from a Python generator expression with local variables.
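A sketch of streaming a generator expression (stream_eval here is a hypothetical name for the expression-streaming variant; locals are assumed to be passed as a map):
{ok, Squares} = py:stream_eval(<<"(x * x for x in range(n))">>, #{n => 5}).
%% Squares =:= [0, 1, 4, 9, 16]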
-spec subinterp_supported() -> boolean().
Check if true parallel execution is supported. Returns true on Python 3.12+ which supports per-interpreter GIL.
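For example, falling back to sequential calls when sub-interpreters are unavailable (a sketch; it assumes py:parallel/1 returns results in call order):
Calls = [{math, factorial, [N]} || N <- [50, 60, 70]],
Results = case py:subinterp_supported() of
    true ->
        {ok, Rs} = py:parallel(Calls),
        Rs;
    false ->
        %% Sequential fallback on Python < 3.12
        [begin {ok, R} = py:call(M, F, A), R end || {M, F, A} <- Calls]
end.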
-spec tracemalloc_start() -> ok | {error, term()}.
Start memory allocation tracing. After starting, memory_stats() will include traced_memory_current and traced_memory_peak values.
-spec tracemalloc_start(pos_integer()) -> ok | {error, term()}.
Start memory tracing with specified frame depth. Higher frame counts provide more detailed tracebacks but use more memory.
-spec tracemalloc_stop() -> ok | {error, term()}.
Stop memory allocation tracing.
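A typical tracing session (a sketch; memory_stats/0 is assumed to return the statistics map directly):
ok = py:tracemalloc_start(10),
ok = py:exec(<<"data = [0] * 1000000">>),
Stats = py:memory_stats(),
io:format("peak traced bytes: ~p~n", [maps:get(traced_memory_peak, Stats, undefined)]),
ok = py:tracemalloc_stop().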
-spec unbind() -> ok.
Release bound worker for current process.
-spec unbind(py_ctx()) -> ok.
Release explicit context's worker.
Unregister a previously registered function.
Get information about the currently active virtual environment. Returns a map with venv_path and site_packages, or none if no venv is active.
Get Python version string.
-spec with_context(fun(() -> Result) | fun((py_ctx()) -> Result)) -> Result.
Execute function with temporary bound context. Automatically binds before and unbinds after (even on exception).
With arity-0 function (uses implicit process binding):
Result = py:with_context(fun() ->
ok = py:exec(<<"total = 0">>),
ok = py:exec(<<"total += 1">>),
py:eval(<<"total">>)
end).
%% {ok, 1}
With arity-1 function (receives explicit context):
Result = py:with_context(fun(Ctx) ->
ok = py:exec(Ctx, <<"x = 10">>),
py:eval(Ctx, <<"x * 2">>)
end).
%% {ok, 20}