OORL.MCTS (object v0.1.2)

Monte Carlo Tree Search implementation for the OORL framework with Q* optimal policy enhancement.

Provides MCTS search with:

  • Q* optimality guarantees
  • Self-reflective reasoning
  • Adaptive simulation depth
  • AAOS specification compliance

Summary

Functions

new(opts \\ [])

Creates a new MCTS configuration.

search(initial_state, environment, options \\ %{})

Performs MCTS search with Q* optimal policy enhancement.

Types

action()

@type action() :: any()

mcts_node()

@type mcts_node() :: %OORL.MCTS.Node{
  action: term(),
  available_actions: term(),
  children: term(),
  depth: term(),
  is_terminal: term(),
  parent: term(),
  q_value: term(),
  state: term(),
  total_reward: term(),
  ucb_value: term(),
  visits: term()
}
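
The q_value and ucb_value fields of mcts_node() suggest that selection follows the standard UCB1 rule. A minimal sketch of that computation, assuming the textbook formula with exploration constant c (the module's actual calculation is not shown here; UCBSketch and its argument names are hypothetical):

```elixir
defmodule UCBSketch do
  # UCB1 score for a node: mean reward (exploitation) plus an
  # exploration bonus that shrinks as the node accumulates visits.
  # Unvisited nodes get :infinity so they are always expanded first.
  def ucb(total_reward, visits, parent_visits, c \\ :math.sqrt(2)) do
    cond do
      visits == 0 ->
        :infinity

      true ->
        total_reward / visits +
          c * :math.sqrt(:math.log(parent_visits) / visits)
    end
  end
end
```

A node visited 5 times with total reward 10.0 under a parent with 20 visits scores its mean (2.0) plus a bonus of roughly 1.09, so less-visited siblings can still overtake it.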

mcts_state()

@type mcts_state() :: any()

reward()

@type reward() :: float()

Functions

new(opts \\ [])

Creates a new MCTS configuration.

Parameters

  • opts: Keyword list of configuration options

Returns

%OORL.MCTS{} struct
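
For instance, passing options as a keyword list (the key names :exploration_constant and :max_depth below are hypothetical; the accepted keys are not enumerated here):

```elixir
# Hypothetical option keys; consult the %OORL.MCTS{} struct for the real ones.
config = OORL.MCTS.new(exploration_constant: 1.41, max_depth: 50)
```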

search(initial_state, environment, options \\ %{})

Performs MCTS search with Q* optimal policy enhancement.

Parameters

  • initial_state: Starting state for search
  • environment: Environment definition with transition and reward functions
  • options: Search configuration map, including the iteration budget (:iterations) and exploration constant

Returns

{:ok, %{best_action: action, policy: policy, search_tree: tree}} or {:error, reason}
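
The environment argument is described only as carrying transition and reward functions. One plausible shape is a map of anonymous functions, sketched here for a small grid world (the keys :transition, :reward, and :actions are assumptions, not the confirmed contract):

```elixir
# Hypothetical environment for a 2D grid: move along the x axis,
# reward proportional to the x coordinate reached.
environment = %{
  transition: fn state, action ->
    case action do
      :move_right -> %{state | x: state.x + 1}
      :move_left  -> %{state | x: state.x - 1}
      _other      -> state
    end
  end,
  reward: fn _state, _action, next_state -> next_state.x * 1.0 end,
  actions: fn _state -> [:move_left, :move_right] end
}
```

Under this shape, a search that maximizes reward would favor :move_right, matching the example below.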

Examples

iex> {:ok, result} = OORL.MCTS.search(%{x: 0, y: 0}, environment, %{iterations: 1000})
iex> result.best_action
:move_right