network_onnx (macula_tweann v0.18.1)
ONNX export for neural networks.
Exports networks to ONNX (Open Neural Network Exchange) format for cross-platform inference in Python, JavaScript, and C++, and on mobile and edge devices.
Usage
Export a trained network:

```erlang
{ok, OnnxBinary} = network_onnx:to_onnx(Network),
file:write_file("model.onnx", OnnxBinary).
```
Load in Python:

```python
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
outputs = session.run(None, {"input": inputs})
```
Load in JavaScript:

```javascript
const session = await ort.InferenceSession.create("model.onnx");
const results = await session.run({input: tensor});
```
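ONNX Runtime is strict about input dtype and shape: the feed values must be `float32` NumPy arrays whose shape matches the model's declared input. A minimal sketch of preparing the `inputs` value used in the Python example above (the 4-feature input size and batch dimension are illustrative assumptions, not part of this module's API):

```python
import numpy as np

# Hypothetical network with 4 input neurons; ONNX Runtime expects
# float32 tensors shaped (batch_size, num_features).
inputs = np.array([[0.1, 0.2, 0.3, 0.4]], dtype=np.float32)

print(inputs.shape)  # one sample, four features
```

If the dtype or shape does not match the exported graph, `session.run` raises an error naming the expected tensor type.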
Functions
-spec to_onnx(network_evaluator:network()) -> {ok, binary()} | {error, term()}.
Export network to ONNX format.
-spec to_onnx(network_evaluator:network(), map()) -> {ok, binary()} | {error, term()}.
Export network to ONNX format with options.
Options:

- `model_name` - Name of the model (default: `"macula_network"`)
- `producer` - Producer name (default: `"macula-tweann"`)
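Both options can be passed together in the options map; a sketch of calling `to_onnx/2` with the documented options (the option values and the `Network` binding are illustrative):

```erlang
Opts = #{model_name => "xor_net", producer => "my_app"},
{ok, OnnxBinary} = network_onnx:to_onnx(Network, Opts),
ok = file:write_file("xor_net.onnx", OnnxBinary).
```

The resulting `model_name` and `producer` appear in the exported ONNX model's metadata, which tools such as Netron or `onnx.load` can display.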