Benchee.Configuration (Benchee v1.3.0)

Functions to handle the configuration of Benchee; exposes the init/1 function.

Summary

Types

t()

Generated configuration struct from the user supplied configuration options.

user_configuration()

The configuration supplied by the user as either a map or a keyword list

Functions

init(config \\ %{})

Returns the initial benchmark configuration for Benchee, composed of defaults and an optional custom configuration.

Types

@type t() :: %Benchee.Configuration{
  after_each: Benchee.Benchmark.Hooks.hook_function() | nil,
  after_scenario: Benchee.Benchmark.Hooks.hook_function() | nil,
  assigns: map(),
  before_each: Benchee.Benchmark.Hooks.hook_function() | nil,
  before_scenario: Benchee.Benchmark.Hooks.hook_function() | nil,
  formatters: [
    (Benchee.Suite.t() -> Benchee.Suite.t())
    | module()
    | {module(), user_configuration()}
  ],
  input_names: [String.t()],
  inputs:
    %{required(Benchee.Suite.key()) => any()}
    | [{Benchee.Suite.key(), any()}]
    | nil,
  load: String.t() | [String.t()] | false,
  measure_function_call_overhead: boolean(),
  memory_time: number(),
  parallel: pos_integer(),
  percentiles: term(),
  pre_check: boolean(),
  print: map(),
  profile_after: boolean() | atom() | {atom(), keyword()},
  reduction_time: number(),
  save: map() | false,
  time: number(),
  title: String.t() | nil,
  unit_scaling: Benchee.Conversion.Scale.scaling_strategy(),
  warmup: number()
}

Generated configuration struct from the user supplied configuration options.

Filled in with a lot of defaults. Notably, every option has already been converted to a map or struct at this point for easier handling within Benchee.
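For illustration, a formatter function (see the formatters option below) receives the whole suite and can read this struct from it; a minimal sketch:

fn suite ->
  # after init/1, times in the struct are in nanoseconds
  IO.inspect(suite.configuration.time, label: "configured time")
  suite
end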

@type user_configuration() :: map() | keyword()

The configuration supplied by the user as either a map or a keyword list
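Both forms are accepted and equivalent; a minimal sketch (return values elided, see the init/1 examples below):

Benchee.Configuration.init(time: 1, warmup: 0.2)
Benchee.Configuration.init(%{time: 1, warmup: 0.2})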

Possible options (a combined example follows the list):

  • warmup - the time in seconds for which a benchmarking job should be run without measuring times before "real" measurements start. This simulates a "warm" running system. Defaults to 2.
  • time - the time in seconds for how long each individual benchmarking job should be run for measuring the execution times (run time performance). Defaults to 5.
  • memory_time - the time in seconds for how long memory measurements should be conducted. Defaults to 0 (turned off).
  • inputs - a map from descriptive input names to some different input; your benchmarking jobs will then be run with each of these inputs. For this to work, your benchmarking function gets the current input passed in as an argument. Defaults to nil, i.e. no input is specified and functions are called without an argument.
  • title - this option is purely cosmetic. If you would like to add a title with some meaning to a given suite, you can do so by providing a single string here. This is only for use by formatters.
  • formatters - list of formatters either as a module implementing the formatter behaviour, a tuple of said module and the options it should take, or formatter functions. They are run when using Benchee.run/2, or you can invoke them through Benchee.Formatter.output/1. Functions need to accept one argument (which is the benchmarking suite with all data) and then use that to produce output. Used for plugins. Defaults to the builtin console formatter Benchee.Formatters.Console. See Formatters.
  • pre_check - whether or not to run each job with each input - including all given before/after scenario and before/after each hooks - before the benchmarks are measured, to ensure that your code executes without error. This can save time while developing your suites. Defaults to false.
  • parallel - the function of each job will be executed in the configured number of parallel processes. If parallel is 4, then 4 processes will be spawned that all execute the same function for the given time. When these finish or the time is up, 4 new processes will be spawned for the next job/function. This gives you more data in the same time, but it also puts a load on the system that can interfere with benchmark results. For more on the pros and cons of parallel benchmarking, check the wiki. Defaults to 1 (no parallel execution).
  • save - specify a path where the results of the current benchmarking suite should be stored, tagged with the specified tag.
  • load - load a saved suite or suites to compare your current benchmarks against. Can be a string, or a list of strings or patterns.
  • print - a map from atoms to true or false to configure if the output identified by the atom will be printed. All options are enabled by default (true). Options are:
    • :benchmarking - print when Benchee starts benchmarking a new job (Benchmarking name ..) as well as when statistics are being calculated or formatting begins.
    • :configuration - a summary of the configured benchmarking options, including the estimated total run time, is printed before benchmarking starts.
    • :fast_warning - warnings are displayed if functions are executed too fast, leading to inaccurate measurements.
  • console - options for the built-in console formatter:
    • :comparison - if the comparison of the different benchmarking jobs (x times slower than) is shown (true/false). Enabled by default.
    • :extended_statistics - display more statistics, namely minimum, maximum, sample_size and mode. Disabled by default.
  • percentiles - the percentiles to calculate when extended statistics are enabled, if you want to see results beyond just the median. Defaults to [50, 99], i.e. the 50th and 99th percentiles are calculated.
  • unit_scaling - the strategy for choosing a unit for durations and counts. May or may not be implemented by a given formatter (the console formatter implements it). When scaling a value, Benchee finds the "best fit" unit (the largest unit for which the result is at least 1). For example, 1_200_000 scales to 1.2 M, while 800_000 scales to 800 K. The unit_scaling strategy determines how Benchee chooses the best fit unit for an entire list of values, when the individual values in the list may have different best fit units. There are four strategies, defaulting to :best:
    • :best - the most frequent best fit unit will be used, a tie will result in the larger unit being selected.
    • :largest - the largest best fit unit will be used (i.e. thousand and seconds if values are large enough).
    • :smallest - the smallest best fit unit will be used (i.e. millisecond and one).
    • :none - no unit scaling will occur. Durations will be displayed in microseconds, and counts will be displayed in ones (this is equivalent to the behaviour Benchee had pre 0.5.0)
  • before_scenario/after_scenario/before_each/after_each - read up on them in the hooks section in the README.
  • measure_function_call_overhead - measure how long an empty function call takes and deduct this from each measured run time. This overhead should be negligible for all but the most micro benchmarks. Defaults to false.
  • profile_after - accepts any of the following options:
    • a boolean - true will enable profiling with the default profiler (:eprof) and false will disable profiling. Defaults to false.
    • a profiler - either as a tuple of {profiler, opts} (e.g., {:fprof, [sort: :own]}) or just the profiler (e.g., :fprof), which is equivalent to {profiler, []}. The accepted built-in profilers are :cprof, :eprof and :fprof.
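For orientation, here is a sketch that combines several of the options above in a call to Benchee.run/2, passing the console options via the formatter tuple. The job implementations, input sizes, save path and tag are illustrative only and not prescribed by Benchee:

Benchee.run(
  %{
    # each job receives the current input as its argument
    "flat_map" => fn input -> Enum.flat_map(input, &[&1, &1 * 2]) end,
    "map.flatten" => fn input ->
      input |> Enum.map(&[&1, &1 * 2]) |> List.flatten()
    end
  },
  warmup: 1,       # seconds of unmeasured warmup
  time: 3,         # seconds of measured run time
  memory_time: 1,  # seconds of memory measurements
  inputs: %{
    "Small" => Enum.to_list(1..100),
    "Big" => Enum.to_list(1..100_000)
  },
  formatters: [{Benchee.Formatters.Console, [extended_statistics: true]}],
  percentiles: [50, 95, 99],
  print: [fast_warning: false],
  unit_scaling: :best,
  # path/tag keys as described for the save option above
  save: [path: "benchmarks.benchee", tag: "baseline"]
)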

Functions

init(config \\ %{})

@spec init(user_configuration()) :: Benchee.Suite.t()

Returns the initial benchmark configuration for Benchee, composed of defaults and an optional custom configuration.

Configuration times are given in seconds, but are converted to nanoseconds internally.

For a list of all possible options, see user_configuration/0.

Examples

iex> init()
%Benchee.Suite{
  configuration:
    %Benchee.Configuration{
      parallel: 1,
      time: 5_000_000_000.0,
      warmup: 2_000_000_000.0,
      inputs: nil,
      input_names: [],
      save: false,
      load: false,
      formatters: [Benchee.Formatters.Console],
      print: %{
        benchmarking: true,
        fast_warning: true,
        configuration: true
      },
      percentiles: [50, 99],
      unit_scaling: :best,
      assigns: %{},
      before_each: nil,
      after_each: nil,
      before_scenario: nil,
      after_scenario: nil
    },
  system: nil,
  scenarios: []
}

iex> init(time: 1, warmup: 0.2)
%Benchee.Suite{
  configuration:
    %Benchee.Configuration{
      parallel: 1,
      time: 1_000_000_000.0,
      warmup: 200_000_000.0,
      inputs: nil,
      input_names: [],
      save: false,
      load: false,
      formatters: [Benchee.Formatters.Console],
      print: %{
        benchmarking: true,
        fast_warning: true,
        configuration: true
      },
      percentiles: [50, 99],
      unit_scaling: :best,
      assigns: %{},
      before_each: nil,
      after_each: nil,
      before_scenario: nil,
      after_scenario: nil
    },
  system: nil,
  scenarios: []
}

iex> init(
...>   parallel: 2,
...>   time: 1,
...>   warmup: 0.2,
...>   formatters: [&IO.puts/1],
...>   print: [fast_warning: false],
...>   inputs: %{"Small" => 5, "Big" => 9999},
...>   unit_scaling: :smallest)
%Benchee.Suite{
  configuration:
    %Benchee.Configuration{
      parallel: 2,
      time: 1_000_000_000.0,
      warmup: 200_000_000.0,
      inputs: [{"Big", 9999}, {"Small", 5}],
      input_names: ["Big", "Small"],
      save: false,
      load: false,
      formatters: [&IO.puts/1],
      print: %{
        benchmarking: true,
        fast_warning: false,
        configuration: true
      },
      percentiles: [50, 99],
      unit_scaling: :smallest,
      assigns: %{},
      before_each: nil,
      after_each: nil,
      before_scenario: nil,
      after_scenario: nil
    },
  system: nil,
  scenarios: []
}
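In everyday use, Benchee.run/2 builds this configuration for you. Calling init/1 directly is mainly useful with Benchee's broken-down, step-by-step API; a sketch, assuming the pipeline functions from the Benchee README (Benchee.init/1, Benchee.system/1, Benchee.benchmark/3, Benchee.collect/1, Benchee.statistics/1):

[time: 1, warmup: 0.1]
|> Benchee.init()
|> Benchee.system()
|> Benchee.benchmark("noop", fn -> :ok end)
|> Benchee.collect()
|> Benchee.statistics()
|> Benchee.Formatter.output()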