Wrapper for the Python class dspy.Module.
Summary
Functions
Python method Module._base_init.
Python method Module._set_lm_usage.
Python method Module.acall.
Processes a list of dspy.Example instances in parallel using the Parallel module.
Deep copy the module.
Python method Module.dump_state.
Python method Module.get_lm.
Python method Module.inspect_history.
Load the saved module. You may also want to check out dspy.load, if you want to
Python method Module.load_state.
Applies a function to all named predictors.
Unlike PyTorch, handles (non-recursive) lists of parameters too.
Python method Module.named_predictors.
Find all sub-modules in the module, as well as their names.
Initialize self. See help(type(self)) for accurate signature.
Python method Module.parameters.
Python method Module.predictors.
Deep copy the module and reset all parameters.
Save the module.
Python method Module.set_lm.
Types
Functions
@spec _base_init( SnakeBridge.Ref.t(), keyword() ) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module._base_init.
Returns
term()
@spec _set_lm_usage( SnakeBridge.Ref.t(), %{optional(String.t()) => term()}, term(), keyword() ) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module._set_lm_usage.
Parameters
tokens(%{optional(String.t()) => term()})
output(term())
Returns
term()
@spec acall(SnakeBridge.Ref.t(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module.acall.
Parameters
args(term())
kwargs(term())
Returns
term()
@spec batch(SnakeBridge.Ref.t(), [term()], [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Processes a list of dspy.Example instances in parallel using the Parallel module.
Parameters
examples- List of dspy.Example instances to process.
num_threads- Number of threads to use for parallel processing.
max_errors- Maximum number of errors allowed before stopping execution. If None, inherits from dspy.settings.max_errors.
return_failed_examples- Whether to return failed examples and exceptions.
provide_traceback- Whether to include traceback information in error logs.
disable_progress_bar- Whether to disable the progress bar.
timeout- Seconds before a straggler task is resubmitted. Set to 0 to disable.
straggler_limit- Only check for stragglers when this many or fewer tasks remain.
Returns
term()
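A minimal usage sketch. This is hypothetical: `Dspy.Module` stands in for this wrapper's actual module name, and the argument shapes (positional args as a list, Python keyword arguments as a keyword list) are inferred from the @spec above, not confirmed by it.

```elixir
# Hypothetical sketch; `program` is a SnakeBridge.Ref.t() to a dspy.Module
# instance, and `examples` is a list of refs to dspy.Example instances.
{:ok, results} =
  Dspy.Module.batch(program, [examples], [num_threads: 8, max_errors: 10], [])
```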
@spec deepcopy( SnakeBridge.Ref.t(), keyword() ) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Deep copy the module.
This tweaks the default Python deepcopy: only self.parameters() is deep-copied, while all other attributes are shallow-copied.
Returns
term()
@spec dump_state(SnakeBridge.Ref.t(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module.dump_state.
Parameters
json_mode(term() default: True)
Returns
term()
@spec get_lm( SnakeBridge.Ref.t(), keyword() ) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module.get_lm.
Returns
term()
@spec inspect_history(SnakeBridge.Ref.t(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module.inspect_history.
Parameters
n(integer() default: 1)
Returns
term()
@spec load(SnakeBridge.Ref.t(), term(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Load the saved module. You may also want to check out dspy.load, if you want to
load an entire program, not just the state for an existing program.
Parameters
path- Path to the saved state file, which should be a .json or a .pkl file (type: String.t())
allow_pickle- If True, allow loading .pkl files, which can run arbitrary code. This is dangerous and should only be used if you are sure about the source of the file and in a trusted environment. (type: boolean())
Returns
term()
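A hedged usage sketch: `Dspy.Module` and the file name are placeholders, and the positional-list/keyword-list calling convention is assumed from the @spec.

```elixir
# Hypothetical sketch: restore previously saved state into an existing
# program. Keep allow_pickle: false unless the file's source is trusted,
# since unpickling can run arbitrary code.
{:ok, _} = Dspy.Module.load(program, "program_state.json", [allow_pickle: false], [])
```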
@spec load_state(SnakeBridge.Ref.t(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module.load_state.
Parameters
state(term())
Returns
term()
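dump_state and load_state pair naturally for in-memory round-trips. A hypothetical sketch, with `Dspy.Module` as a placeholder name and argument shapes assumed from the @specs:

```elixir
# Hypothetical sketch: snapshot a program's state in memory and restore it
# into another instance, without touching the filesystem.
{:ok, state} = Dspy.Module.dump_state(program, [json_mode: true], [])
{:ok, _} = Dspy.Module.load_state(other_program, state)
```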
@spec map_named_predictors(SnakeBridge.Ref.t(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Applies a function to all named predictors.
Parameters
func(term())
Returns
term()
@spec named_parameters( SnakeBridge.Ref.t(), keyword() ) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Unlike PyTorch, handles (non-recursive) lists of parameters too.
Returns
term()
@spec named_predictors( SnakeBridge.Ref.t(), keyword() ) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module.named_predictors.
Returns
term()
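A hypothetical sketch combining named_predictors with map_named_predictors above; `Dspy.Module` and `some_function_ref` are placeholder names, the latter assumed to be a ref to a Python callable.

```elixir
# Hypothetical sketch: enumerate (name, predictor) pairs, then apply a
# Python callable to every named predictor in the program.
{:ok, pairs} = Dspy.Module.named_predictors(program)
{:ok, _} = Dspy.Module.map_named_predictors(program, some_function_ref)
```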
@spec named_sub_modules(SnakeBridge.Ref.t(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Find all sub-modules in the module, as well as their names.
Say self.children[4]['key'].sub_module is a sub-module. Then the name will be
children[4]['key'].sub_module. But if the sub-module is accessible at different
paths, only one of the paths will be returned.
Parameters
type_(term() default: None)
skip_compiled(term() default: False)
Returns
term()
@spec new( [term()], keyword() ) :: {:ok, SnakeBridge.Ref.t()} | {:error, Snakepit.Error.t()}
Initialize self. See help(type(self)) for accurate signature.
Parameters
callbacks(term() default: None)
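A hypothetical construction sketch: `Dspy.Module` stands in for this wrapper's actual module name, and the `([term()], keyword())` shape is taken from the @spec above.

```elixir
# Hypothetical sketch: construct the Python-side Module instance.
{:ok, program} = Dspy.Module.new([], callbacks: nil)
# `program` is a SnakeBridge.Ref.t(), passed as the first argument to
# every other function in this module.
```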
@spec parameters( SnakeBridge.Ref.t(), keyword() ) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module.parameters.
Returns
term()
@spec predictors( SnakeBridge.Ref.t(), keyword() ) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module.predictors.
Returns
term()
@spec reset_copy( SnakeBridge.Ref.t(), keyword() ) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Deep copy the module and reset all parameters.
Returns
term()
@spec save(SnakeBridge.Ref.t(), term(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Save the module.
Save the module to a directory or a file. There are two modes:
save_program=False: Save only the state of the module to a json or pickle file, based on the value of the file extension.
save_program=True: Save the whole module to a directory via cloudpickle, which contains both the state and architecture of the model.
If save_program=True and modules_to_serialize are provided, it will register those modules for serialization
with cloudpickle's register_pickle_by_value. This causes cloudpickle to serialize the module by value rather
than by reference, ensuring the module is fully preserved along with the saved program. This is useful
when you have custom modules that need to be serialized alongside your program. If None, then no modules
will be registered for serialization.
We also save the dependency versions, so that the loaded model can check for a version mismatch on critical dependencies or the DSPy version.
Parameters
path- Path to the saved state file, which should be a .json or .pkl file when save_program=False, and a directory when save_program=True. (type: String.t())
save_program- If True, save the whole module to a directory via cloudpickle, otherwise only save the state. (type: boolean())
modules_to_serialize- A list of modules to serialize with cloudpickle's register_pickle_by_value. If None, then no modules will be registered for serialization. (type: list())
Returns
term()
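The two save modes described above can be sketched as follows. This is hypothetical: `Dspy.Module` and the paths are placeholders, and the argument shapes are assumed from the @spec.

```elixir
# Hypothetical sketch. State-only: the file extension picks the format
# (.json or .pkl).
{:ok, _} = Dspy.Module.save(program, "state.json", [save_program: false], [])
# Whole program: the target is a directory, serialized via cloudpickle.
{:ok, _} = Dspy.Module.save(program, "saved_program/", [save_program: true], [])
```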
@spec set_lm(SnakeBridge.Ref.t(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method Module.set_lm.
Parameters
lm(term())
Returns
term()
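A hypothetical sketch pairing set_lm with get_lm above; `Dspy.Module` is a placeholder name, and `lm` is assumed to be a SnakeBridge ref to a Python dspy.LM instance.

```elixir
# Hypothetical sketch: assign one LM to all predictors in the program,
# then read the current LM back.
{:ok, _} = Dspy.Module.set_lm(program, lm)
{:ok, current_lm} = Dspy.Module.get_lm(program)
```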