Quant.Strategy.Optimization.Results (quant v0.1.0-alpha.1)

Analysis and ranking of optimization results.

This module provides functions to analyze, rank, and visualize parameter optimization results.

Summary

Functions

Combine individual optimization result maps into a single DataFrame.

Find the best parameter combination based on a specific metric.

Analyze parameter correlation.

Generate parameter heatmap data.

Find the Pareto frontier for multi-objective optimization.

Rank results by a specific metric.

Perform sensitivity analysis on a parameter.

Analyze parameter stability.

Get the top N parameter combinations for a metric.

Functions

combine_results(results)

@spec combine_results([map()]) :: Explorer.DataFrame.t()

Combine individual optimization result maps into a single DataFrame.

Takes a list of result maps (each representing one parameter combination) and creates a structured DataFrame for analysis.

Examples

iex> results = [
...>   %{fast_period: 5, slow_period: 20, total_return: 0.15},
...>   %{fast_period: 10, slow_period: 25, total_return: 0.23}
...> ]
iex> df = combine_results(results)
iex> DataFrame.n_rows(df)
2
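The reshape behind this can be pictured in plain Elixir: each result map becomes a row, and values sharing a key are collected into a column. This is only a sketch of the idea, not the actual implementation (which builds an Explorer.DataFrame):

```elixir
results = [
  %{fast_period: 5, slow_period: 20, total_return: 0.15},
  %{fast_period: 10, slow_period: 25, total_return: 0.23}
]

# Collect each key's values into a column list, preserving row order.
columns =
  results
  |> Enum.flat_map(&Map.to_list/1)
  |> Enum.group_by(fn {k, _v} -> k end, fn {_k, v} -> v end)

columns[:total_return]
# [0.15, 0.23]
```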

find_best_params(dataframe, metric)

@spec find_best_params(Explorer.DataFrame.t(), atom()) :: map() | nil

Find the best parameter combination based on a specific metric.

Examples

iex> df = DataFrame.new(%{
...>   fast_period: [5, 10, 15],
...>   total_return: [0.1, 0.3, 0.2]
...> })
iex> best = find_best_params(df, :total_return)
iex> best.fast_period
10
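For a higher-is-better metric, "best" reduces to a max-by over rows. A minimal sketch on plain row maps (the real function operates on the DataFrame and may handle ties or missing values differently):

```elixir
rows = [
  %{fast_period: 5, total_return: 0.1},
  %{fast_period: 10, total_return: 0.3},
  %{fast_period: 15, total_return: 0.2}
]

# Pick the row with the highest value of the chosen metric.
best = Enum.max_by(rows, & &1.total_return)
best.fast_period
# 10
```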

parameter_correlation(dataframe, param1, param2)

@spec parameter_correlation(Explorer.DataFrame.t(), atom(), atom()) :: float()

Analyze parameter correlation.

Computes the correlation between two parameter columns, showing how the parameters co-vary across the tested combinations.

Examples

iex> df = DataFrame.new(%{
...>   fast_period: [5, 10, 15, 20],
...>   slow_period: [20, 25, 30, 35],
...>   total_return: [0.1, 0.2, 0.15, 0.25]
...> })
iex> corr = parameter_correlation(df, :fast_period, :slow_period)
iex> is_number(corr)
true
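The statistic behind this is typically a Pearson correlation; here is a self-contained sketch over two plain lists, assuming Pearson is what the module computes:

```elixir
defmodule CorrSketch do
  # Pearson correlation of two equal-length numeric lists.
  def pearson(xs, ys) do
    n = length(xs)
    mx = Enum.sum(xs) / n
    my = Enum.sum(ys) / n

    cov =
      Enum.zip(xs, ys)
      |> Enum.map(fn {x, y} -> (x - mx) * (y - my) end)
      |> Enum.sum()

    sx = :math.sqrt(Enum.sum(Enum.map(xs, fn x -> (x - mx) * (x - mx) end)))
    sy = :math.sqrt(Enum.sum(Enum.map(ys, fn y -> (y - my) * (y - my) end)))

    cov / (sx * sy)
  end
end

# The two series below are perfectly linear, so the result is ≈ 1.0.
CorrSketch.pearson([5, 10, 15, 20], [20, 25, 30, 35])
```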

parameter_heatmap(dataframe, x_param, y_param, metric)

@spec parameter_heatmap(Explorer.DataFrame.t(), atom(), atom(), atom()) ::
  {:ok, Explorer.DataFrame.t()} | {:error, term()}

Generate parameter heatmap data.

Creates a pivot table showing how a metric varies across two parameters.

Examples

iex> df = DataFrame.new(%{
...>   fast_period: [5, 5, 10, 10],
...>   slow_period: [20, 25, 20, 25],
...>   total_return: [0.1, 0.15, 0.2, 0.25]
...> })
iex> {:ok, heatmap} = parameter_heatmap(df, :fast_period, :slow_period, :total_return)
iex> DataFrame.n_rows(heatmap) > 0
true
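The pivot amounts to grouping rows by the {x, y} parameter pair and aggregating the metric per cell. A sketch on plain maps, assuming a mean aggregate (the aggregation parameter_heatmap/4 actually uses may differ):

```elixir
data = [
  %{fast_period: 5, slow_period: 20, total_return: 0.1},
  %{fast_period: 5, slow_period: 25, total_return: 0.15},
  %{fast_period: 10, slow_period: 20, total_return: 0.2},
  %{fast_period: 10, slow_period: 25, total_return: 0.25}
]

# One cell per {x, y} pair, holding the mean of the metric.
heatmap =
  data
  |> Enum.group_by(fn row -> {row.fast_period, row.slow_period} end)
  |> Map.new(fn {cell, rows} ->
    mean = Enum.sum(Enum.map(rows, & &1.total_return)) / length(rows)
    {cell, mean}
  end)

heatmap[{10, 25}]
# 0.25
```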

pareto_frontier(dataframe, metrics)

@spec pareto_frontier(Explorer.DataFrame.t(), [atom()]) :: Explorer.DataFrame.t()

Find the Pareto frontier for multi-objective optimization.

Identifies parameter combinations that are not dominated by others across multiple metrics.

Examples

iex> df = DataFrame.new(%{
...>   fast_period: [5, 10, 15],
...>   total_return: [0.1, 0.3, 0.2],
...>   max_drawdown: [0.2, 0.1, 0.15]
...> })
iex> frontier = pareto_frontier(df, [:total_return, :max_drawdown])
iex> DataFrame.n_rows(frontier) <= DataFrame.n_rows(df)
true
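Dominance is the key idea: a row stays on the frontier only if no other row is at least as good on every metric and strictly better on at least one. A sketch of that check on plain row maps, assuming total_return is maximized and max_drawdown minimized (the real function's handling of metric direction is not specified here):

```elixir
defmodule ParetoSketch do
  # a dominates b if a is no worse on every metric and strictly better on one.
  def dominates?(a, b, maximize, minimize) do
    no_worse =
      Enum.all?(maximize, fn m -> a[m] >= b[m] end) and
        Enum.all?(minimize, fn m -> a[m] <= b[m] end)

    strictly_better =
      Enum.any?(maximize, fn m -> a[m] > b[m] end) or
        Enum.any?(minimize, fn m -> a[m] < b[m] end)

    no_worse and strictly_better
  end

  # Keep only the rows that no other row dominates.
  def frontier(rows, maximize, minimize) do
    Enum.reject(rows, fn row ->
      Enum.any?(rows, fn other -> dominates?(other, row, maximize, minimize) end)
    end)
  end
end

rows = [
  %{fast_period: 5, total_return: 0.1, max_drawdown: 0.2},
  %{fast_period: 10, total_return: 0.3, max_drawdown: 0.1},
  %{fast_period: 15, total_return: 0.2, max_drawdown: 0.15}
]

# The second row beats the others on both metrics, so it alone survives.
ParetoSketch.frontier(rows, [:total_return], [:max_drawdown])
```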

rank_by_metric(dataframe, metric, order \\ :desc)

@spec rank_by_metric(Explorer.DataFrame.t(), atom(), :asc | :desc) ::
  Explorer.DataFrame.t()

Rank results by a specific metric.

Examples

iex> df = DataFrame.new(%{
...>   fast_period: [5, 10, 15],
...>   total_return: [0.1, 0.3, 0.2]
...> })
iex> ranked = rank_by_metric(df, :total_return, :desc)
iex> DataFrame.n_rows(ranked)
3
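Ranking is essentially a sort on the metric column; sketched on plain row maps, with :desc putting the best combination first:

```elixir
rows = [
  %{fast_period: 5, total_return: 0.1},
  %{fast_period: 10, total_return: 0.3},
  %{fast_period: 15, total_return: 0.2}
]

# Sort rows by the metric, best first.
ranked = Enum.sort_by(rows, & &1.total_return, :desc)
Enum.map(ranked, & &1.fast_period)
# [10, 15, 5]
```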

sensitivity_analysis(dataframe, param_name)

@spec sensitivity_analysis(Explorer.DataFrame.t(), atom()) ::
  {:ok, map()} | {:error, term()}

Perform sensitivity analysis on a parameter.

Shows how changes in a parameter affect performance metrics.

Examples

iex> df = DataFrame.new(%{
...>   fast_period: [5, 10, 15, 20],
...>   total_return: [0.1, 0.2, 0.15, 0.25]
...> })
iex> {:ok, analysis} = sensitivity_analysis(df, :fast_period)
iex> Map.has_key?(analysis, :correlation)
true
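One ingredient of such an analysis is the mean of the metric at each parameter value (the :correlation key above suggests a parameter-vs-metric correlation is also computed). A sketch of the per-value means on plain maps; the full shape of the returned map is not documented here:

```elixir
data = [
  %{fast_period: 5, total_return: 0.1},
  %{fast_period: 10, total_return: 0.2},
  %{fast_period: 15, total_return: 0.15},
  %{fast_period: 20, total_return: 0.25}
]

# Mean of the metric for each distinct parameter value.
by_value =
  data
  |> Enum.group_by(& &1.fast_period, & &1.total_return)
  |> Map.new(fn {param, returns} ->
    {param, Enum.sum(returns) / length(returns)}
  end)

by_value[20]
# 0.25
```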

stability_analysis(dataframe, metric, threshold)

@spec stability_analysis(Explorer.DataFrame.t(), atom(), float()) ::
  {:ok, map()} | {:error, term()}

Analyze parameter stability.

Identifies parameter combinations whose performance on the given metric stays consistently close to the best observed result, within the supplied threshold.

Examples

iex> df = DataFrame.new(%{
...>   fast_period: [5, 10, 15],
...>   total_return: [0.1, 0.3, 0.2],
...>   sharpe_ratio: [0.5, 1.2, 0.8]
...> })
iex> {:ok, stability} = stability_analysis(df, :total_return, 0.1)
iex> Map.has_key?(stability, :stable_params)
true
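One plausible reading of "stable" is: rows whose metric stays within `threshold` of the best observed value. That criterion is an assumption made here for illustration; the module's actual definition of :stable_params may differ:

```elixir
rows = [
  %{fast_period: 5, total_return: 0.1},
  %{fast_period: 10, total_return: 0.3},
  %{fast_period: 15, total_return: 0.25}
]

threshold = 0.1
best = rows |> Enum.map(& &1.total_return) |> Enum.max()

# Keep rows whose metric is within `threshold` of the best value.
stable_params =
  Enum.filter(rows, fn row -> best - row.total_return <= threshold end)

length(stable_params)
# 2
```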

top_n_params(dataframe, n, metric \\ :total_return)

@spec top_n_params(Explorer.DataFrame.t(), pos_integer(), atom()) ::
  Explorer.DataFrame.t()

Get the top N parameter combinations for a metric.

Examples

iex> df = DataFrame.new(%{
...>   fast_period: [5, 10, 15, 20],
...>   total_return: [0.1, 0.3, 0.2, 0.25]
...> })
iex> top3 = top_n_params(df, 3, :total_return)
iex> DataFrame.n_rows(top3)
3
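Top-N reduces to a descending sort followed by a take; a sketch on plain row maps:

```elixir
rows = [
  %{fast_period: 5, total_return: 0.1},
  %{fast_period: 10, total_return: 0.3},
  %{fast_period: 15, total_return: 0.2},
  %{fast_period: 20, total_return: 0.25}
]

# Sort best-first, then keep the first three rows.
top3 =
  rows
  |> Enum.sort_by(& &1.total_return, :desc)
  |> Enum.take(3)

Enum.map(top3, & &1.fast_period)
# [10, 20, 15]
```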