mix profile.fprof (Mix v1.9.4)
Profiles the given file or expression using Erlang's fprof tool.
fprof can be useful when you want to discover the bottlenecks of sequential code.
Before running the code, it invokes the app.start task, which compiles and loads your project. Then the target expression is profiled together with all processes spawned by it. Other processes (e.g. those residing in the OTP application supervision tree) are not profiled.
To profile the code, you can use syntax similar to the mix run task:
mix profile.fprof -e Hello.world
mix profile.fprof my_script.exs arg1 arg2 arg3
This task is automatically reenabled, so you can profile multiple times in the same Mix invocation.
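For instance, assuming your project defined a module like the hypothetical Hello below, the first command above would profile a call to Hello.world/0:
defmodule Hello do
  # Hypothetical module used only to illustrate `mix profile.fprof -e Hello.world`;
  # it performs some sequential, CPU-bound work so fprof has something to measure.
  def world do
    1..10_000
    |> Enum.map(&(&1 * &1))
    |> Enum.sum()
  end
end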
Command line options
- --callers - prints detailed information about immediate callers and called functions
- --details - includes profile data for each profiled process
- --sort key - sorts the output by given key: acc (default) or own
- --eval, -e - evaluates the given code
- --require, -r - requires pattern before running the command
- --parallel, -p - makes all requires parallel
- --no-compile - does not compile even if files require compilation
- --no-deps-check - does not check dependencies
- --no-archives-check - does not check archives
- --no-start - does not start applications after compilation
- --no-elixir-version-check - does not check the Elixir version from mix.exs
- --no-warmup - does not execute code once before profiling
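The switches can be combined. For example, the following invocation (the expression itself is just an illustration) skips the warmup run, prints caller information, and sorts the output by own time:
mix profile.fprof --no-warmup --callers --sort own -e "Enum.each(1..1_000, &:math.sqrt/1)"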
Profile output
Example output:
                                              CNT    ACC (ms)    OWN (ms)
Total                                      200279    1972.188    1964.579
:fprof.apply_start_stop/4                       0    1972.188       0.012
anonymous fn/0 in :elixir_compiler_2            1    1972.167       0.001
Test.run/0                                      1    1972.166       0.007
Test.do_something/1                             3    1972.131       0.040
Test.bottleneck/0                               1    1599.490       0.007
...
The default output contains data gathered from all profiled processes. All times are wall clock milliseconds. The columns have the following meaning:
- CNT - total number of invocations of the given function
- ACC - total time spent in the function
- OWN - time spent in the function, excluding the time of called functions
The first row (Total) is the sum of all functions executed in all profiled processes. For the given output, we had a total of 200279 function calls and spent about 2 seconds running the code.
More detailed information is returned if you provide the --callers and --details options.
When the --callers option is specified, you'll see expanded function entries:
Mod.caller1/0                  3     200.000       0.017
Mod.caller2/0                  2     100.000       0.017
Mod.some_function/0            5     300.000       0.017    <--
Mod.called1/0                  4     250.000       0.010
Mod.called2/0                  1      50.000       0.030
Here, the arrow (<--) indicates the marked function - the function described by this entry. You also see its immediate callers (above) and called functions (below).
All the values of caller functions describe the marked function. For example, the first row means that Mod.caller1/0 invoked Mod.some_function/0 3 times. 200ms of the total time spent in Mod.some_function/0 was spent processing calls from this particular caller.
In contrast, the values for the called functions describe those functions, but in the context of the marked function. For example, the last row means that Mod.called2/0 was called once by Mod.some_function/0, and in that case the total time spent in the function was 50ms.
For a detailed explanation, it's worth reading the analysis section in the Erlang/OTP documentation for fprof.
Caveats
You should be aware that the code being profiled is running in an anonymous function which is invoked by the :fprof module. Thus, you'll see some additional entries in your profile output, such as :fprof calls, an anonymous function with high ACC time, or an :undefined function which represents the outer caller (non-profiled code which started the profiler).
Also, keep in mind that profiling might significantly increase the running time of the profiled processes. This might skew your results if, for example, those processes perform I/O operations, since the running time of those operations will remain unchanged while the CPU-bound operations of the profiled processes take significantly longer. Thus, when profiling an intensive program, try to reduce such dependencies, or be aware of the resulting bias.
Finally, it's advised to profile your program in the prod environment, since this should provide more realistic insights into bottlenecks.
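For example, using the standard MIX_ENV environment variable (with the hypothetical Hello.world expression from above):
MIX_ENV=prod mix profile.fprof -e Hello.world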
Summary
Functions
profile(fun, opts \\ [])
Allows you to programmatically run the fprof profiler on the expression in fun.
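A minimal sketch of such programmatic use, assuming the task module Mix.Tasks.Profile.Fprof exposes profile/2 as listed above, taking a zero-arity function plus a keyword list whose keys mirror the command line switches:
# Sketch only: profiles the zero-arity function and prints the fprof
# analysis, much like running the Mix task with --callers.
Mix.Tasks.Profile.Fprof.profile(
  fn -> Enum.map(1..1_000, &:math.sqrt/1) end,
  callers: true
)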