Mix v1.1.1 Mix.Tasks.Profile.Fprof
Profiles the given file or expression using Erlang's fprof tool.
fprof can be useful when you want to discover the bottlenecks of sequential code.
Before running the code, it invokes the app.start
task which compiles
and loads your project. Then the target expression is profiled, together
with all processes which are spawned by it. Other processes (e.g. those
residing in the OTP application supervision tree) are not profiled.
To profile the code, you can use a syntax similar to the mix run task:
mix profile.fprof -e Hello.world
mix profile.fprof my_script.exs arg1 arg2 arg3
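As a concrete starting point, my_script.exs could be any Elixir script. The following is a hypothetical sketch (module and function names are illustrative, chosen to resemble the sample output later in this page) with a deliberate hot spot:

```elixir
# my_script.exs - hypothetical example with a deliberate hot spot
defmodule Test do
  def run, do: do_something(3)

  def do_something(n) do
    # most of the time is spent in bottleneck/0
    Enum.each(1..n, fn _ -> bottleneck() end)
  end

  def bottleneck do
    Enum.reduce(1..1_000_000, 0, &+/2)
  end
end

Test.run()
```

Profiling it with mix profile.fprof my_script.exs should show Test.bottleneck/0 accounting for most of the accumulated time.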
Command line options
* --callers - print detailed information about immediate callers and called functions
* --details - include profile data for each profiled process
* --sort key - sort the output by the given key: acc (default) or own
* --config, -c - load the given configuration file
* --eval, -e - evaluate the given code
* --require, -r - require pattern before running the command
* --parallel-require, -pr - require pattern in parallel
* --no-compile - do not compile even if files require compilation
* --no-deps-check - do not check dependencies
* --no-start - do not start applications after compilation
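The options above can be combined. As an illustration (the profiled expression here is arbitrary), the following invocation evaluates an inline expression, sorts the output by own time, and prints caller information:

```shell
mix profile.fprof --callers --sort own -e "Enum.sort(Enum.shuffle(1..10_000))"
```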
Profile output
The example output looks as follows:
# CNT ACC (ms) OWN (ms)
Total 200279 1972.188 1964.579
:fprof.apply_start_stop/4 0 1972.188 0.012
anonymous fn/0 in :elixir_compiler_2 1 1972.167 0.001
Test.run/0 1 1972.166 0.007
Test.do_something/1 3 1972.131 0.040
Test.bottleneck/0 1 1599.490 0.007
...
The default output contains data gathered from all profiled processes. All times are wall clock milliseconds. The columns have the following meaning:
- CNT - total number of invocations of the given function
- ACC - total time spent in the function
- OWN - time spent in the function, excluding the time of called functions
The first row (Total) is the sum of all functions executed in all profiled processes. For the given output, we had in total 200279 function calls and spent about 2 seconds running the entire code.
You can obtain further information if you provide the --callers and --details options.
When the --callers option is specified, you'll see expanded function entries:
Mod.caller_1/0 3 200.000 0.017
Mod.caller_2/0 2 100.000 0.017
Mod.some_function/0 5 300.000 0.017 <--
Mod.called_1/0 4 250.000 0.010
Mod.called_2/0 1 50.000 0.030
Here, the arrow (<--) indicates the marked function - the function described by
this paragraph. You also see its immediate callers (above) and called functions
(below).
All the values of caller functions describe the marked function. For example,
the first row means that Mod.caller_1/0 invoked Mod.some_function/0 3 times,
and that 200ms of the total time spent in Mod.some_function/0 happened due to
this particular caller. Note that the callers' ACC values (200ms + 100ms) add
up to the 300ms total ACC of the marked function.
In contrast, the values for the called functions describe those functions, but
in the context of the marked function. For example, the last row means that
Mod.called_2/0 was called once by Mod.some_function/0, and in that case the
total time spent in the function was 50ms.
For a detailed explanation it's worth reading the analysis in the Erlang documentation for fprof.
Caveats
You should be aware that the code being profiled is running in an anonymous
function which is invoked by the :fprof module. Thus, you'll see some additional
entries in your profile output, such as :fprof calls, an anonymous function
with high ACC time, or an :undefined function which represents the outer
caller (non-profiled code which started the profiler).
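The wrapper behaviour described above can also be observed by calling Erlang's :fprof directly from IEx. This is a minimal sketch (the profiled expression is arbitrary; see the fprof documentation for the full API):

```elixir
# Sketch: profile a function directly with Erlang's :fprof.
# :fprof.apply/2 runs the fun under tracing and records a trace;
# profile/0 and analyse/0 then process the trace and print the results,
# including the :fprof wrapper entries mentioned above.
:fprof.apply(fn -> Enum.sort(Enum.shuffle(1..1_000)) end, [])
:fprof.profile()
:fprof.analyse()
```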
Also, keep in mind that profiling might significantly increase the running time of the profiled processes. This might skew your results, for example, if those processes perform I/O operations: the running time of those operations will remain unchanged, while CPU-bound operations of the profiled processes might take significantly longer. Thus, when profiling an intensive program, try to reduce such dependencies, or be aware of the resulting bias.
Finally, it's advised to profile your program with the prod environment, since
this should give you a more accurate insight into your real bottlenecks.
Profiling in other environments might produce some false bottlenecks, such as
protocol dispatches, which perform much faster in the prod environment when
:build_embedded is true (which is the default for production).
Summary
Functions
Callback implementation for Mix.Task.run/1