Authors: Fred Hebert (mononcqc@ferd.ca) [web site: http://ferd.ca/], Lukas Larsson (lukas@erlang.org).
Functions to deal with Erlang's memory allocators, or more specifically, to present the allocator data in a way that makes it simpler to discover possible problems.
Tweaking Erlang memory allocators and their behaviour is a very tricky ordeal whenever you have to give up the default settings. This module (and its documentation) tries to provide helpful pointers for that task.
This module should mostly be helpful to figure out if there is a problem, but will offer little help to figure out what is wrong. To figure that out, you need to dig deeper into the allocator data (obtainable with allocators/0), and/or have some precise knowledge about the type of load and work done by the VM to be able to assess how it should react to each individual tweak. Ultimately, a lot of trial and error may be required to figure out whether the tweaks helped.
To help with offline debugging of memory allocator problems, recon_alloc also has a few functions that store snapshots of the memory statistics. These snapshots freeze the current allocation values so that they do not change during analysis while you use the regular functionality of this module, and they can be saved to files, shared, and reloaded later for further analysis. See snapshot_load/1 for a simple use case.
When a given area of memory is allocated by the OS to the VM (through sys_alloc or mseg_alloc), it is put into a 'carrier'. There are two kinds of carriers: multiblock and single block. By default, data is sent to multiblock carriers, each owned by a specific allocator (ets_alloc, binary_alloc, etc.). The specific allocator can thus serve allocations for specific Erlang requirements within bits of memory that have been preallocated. This allows more reuse, and we can even measure the cache hit rates with cache_hit_rates/0.
There is however a threshold above which an item in memory won't fit a multiblock carrier. When that happens, the specific allocator does a special allocation into a single block carrier, basically asking for space directly from sys_alloc or mseg_alloc rather than reusing a multiblock area it had already obtained.
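For instance, the threshold itself can be read from the allocator settings. A minimal sketch, assuming the option layout reported by erlang:system_info/1 (the exact proplist structure, value, and units can vary across OTP releases; the output of the first two expressions is omitted here):

    1> [{instance, _, Props} | _] = erlang:system_info({allocator, binary_alloc}).
    2> {options, Opts} = lists:keyfind(options, 1, Props).
    3> proplists:get_value(sbct, Opts).
    524288     % e.g. 512 KiB, a common default single block carrier threshold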
This leads to various allocation strategies that decide which carrier a block is placed in and where inside that carrier, all of which can be tuned through the erts_alloc flags. Note that all sizes returned by this module are in bytes by default; this can be changed with set_unit/1.
allocator() = temp_alloc | eheap_alloc | binary_alloc | ets_alloc | driver_alloc | sl_alloc | ll_alloc | fix_alloc | std_alloc
allocdata(T) = {{allocator(), instance()}, T}
allocdata_types(T) = {{allocator(), [instance()]}, T}
instance() = non_neg_integer()
memory() = [{atom(), atom()}]
snapshot() = {memory(), [allocdata(term())]}
allocators/0 | Returns a dump of all allocator settings and values.
allocators/1 | Returns a dump of all allocator settings and values, modified depending on the argument.
average_block_sizes/1 | Checks all allocators in allocator() and returns the average block sizes being used for mbcs and sbcs.
cache_hit_rates/0 | Looks at the mseg_alloc allocator (used by all the allocators in allocator()) and returns information relative to the cache hit rates.
fragmentation/1 | Compares the block sizes to the carrier sizes, both for single block (sbcs) and multiblock (mbcs) carriers.
memory/1 | Equivalent to memory(Key, current).
memory/2 | Reports one of multiple possible memory values for the entire node, depending on what is to be reported.
sbcs_to_mbcs/1 | Compares the number of single block carriers (sbcs) with the number of multiblock carriers (mbcs) for each individual allocator in allocator().
set_unit/1 | Sets the current unit to be used by recon_alloc.
snapshot/0 | Takes a new snapshot of the current memory allocator statistics.
snapshot_clear/0 | Clears the current snapshot in the process dictionary, if present, and returns the value it had before being unset.
snapshot_get/0 | Returns the current snapshot stored by snapshot/0.
snapshot_load/1 | Loads a snapshot from a given file.
snapshot_print/0 | Prints a dump of the current snapshot stored by snapshot/0; prints undefined if no snapshot has been taken.
snapshot_save/1 | Saves the current snapshot taken by snapshot/0 to a file.
allocators() -> [allocdata(term())]
Returns a dump of all allocator settings and values.
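The returned list pairs each allocator instance with its raw property list, following the allocdata(T) type. A brief illustrative sketch (the first expression's full output is omitted; which instance comes first and its contents depend on the node):

    1> [{{Alloc, Instance}, _Props} | _] = recon_alloc:allocators().
    2> {Alloc, Instance}.
    {binary_alloc,0}     % illustrative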
allocators(X1::types) -> [allocdata_types(term())]

Returns a dump of all allocator settings and values, modified depending on the argument.

types
    Reports the settings and accumulated values for each allocator type. This is useful when looking for anomalies in the system as a whole rather than in specific instances.

average_block_sizes(Keyword::current | max) -> [{allocator(), [{Key, Val}]}]
Checks all allocators in allocator() and returns the average block sizes being used for mbcs and sbcs. This value is interesting because it tells us how large most blocks are. It can be related to the VM's largest multiblock carrier size (lmbcs) and smallest multiblock carrier size (smbcs) to specify allocation strategies regarding the carrier sizes to be used.

This function isn't exceptionally useful unless you know you have some specific problem, say with sbcs/mbcs ratios (see sbcs_to_mbcs/1) or fragmentation for a specific allocator, and want to figure out what values to pick to increase or decrease sizes compared to the currently configured values.

Note that lmbcs and smbcs are rounded up to the next power of two when they are configured.
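A hedged sketch of what a call might look like (the sizes are illustrative and expressed in the currently configured unit; see set_unit/1):

    1> recon_alloc:average_block_sizes(current).
    [{binary_alloc,[{mbcs,182.5},{sbcs,1042309.0}]},
     {eheap_alloc,[{mbcs,639.2},{sbcs,0.0}]},
     ...]

If most blocks turn out to be much larger or smaller than the current carrier sizes, the corresponding erts_alloc flags can be adjusted at startup, for example (illustrative values only, sizes given in kilobytes):

    erl +MBlmbcs 10240 +MBsmbcs 512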
cache_hit_rates() -> [{{instance, instance()}, [{Key, Val}]}]
Looks at the mseg_alloc allocator (used by all the allocators in allocator()) and returns information relative to the cache hit rates. Unless memory has expected spiky behaviour, the hit rate should usually be above 0.80 (80%). The values returned by this function are sorted by a weight combining the lowest cache hit rates with the largest amounts of memory allocated.

Cache can be tweaked using three VM flags: +MMmcs, +MMrmcbf, and +MMamcbf.

+MMmcs stands for the maximum amount of cached memory segments. Its default value is 10 and it can be set to anything from 0 to 30. Increasing it and verifying whether cache hits get better should be the first step taken.

The two other options specify the maximal size of a segment to cache, in relative (percent) and absolute (kilobytes) terms, respectively. Increasing these may allow more segments to be cached, but also adds overhead to memory allocation, so an Erlang node with limited memory may make things worse by raising them.
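As a sketch, these flags are passed on the command line when starting the node; the values below are purely illustrative, not recommendations:

    erl +MMmcs 30 +MMrmcbf 30 +MMamcbf 8192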
fragmentation(Keyword::current | max) -> [allocdata([{atom(), term()}])]
Compares the block sizes to the carrier sizes, both for single block (sbcs) and multiblock (mbcs) carriers.

The returned results are sorted by a weight system that is somewhat likely to return the most fragmented allocators first, based on their percentage of use and the total size of the carriers, for both sbcs and mbcs.

The call can be made for the current allocator values or for the max allocator values. The current values hold the present allocation numbers, while the max values hold the values seen at the peak. Comparing both can give an idea of whether the node is currently at its memory peak, which matters when it is possibly leaky. This information can in turn influence the tuning of allocators to better fit the sizes of blocks and/or carriers.
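A hedged sketch of reading the output; the key names and values shown here illustrate the shape and may differ between recon versions:

    1> hd(recon_alloc:fragmentation(current)).
    {{binary_alloc,3},
     [{sbcs_usage,0.996},{mbcs_usage,0.317},
      {sbcs_block_size,52803584},{sbcs_carriers_size,53002240},
      {mbcs_block_size,16089944},{mbcs_carriers_size,50709504}]}

A low usage ratio paired with a large carrier size is the typical signature of fragmentation: a lot of reserved carrier space holding comparatively few live blocks.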
memory(Key::used | allocated | unused) -> pos_integer()
Equivalent to memory(Key, current).
memory(X1::used | allocated | unused, Keyword::current | max) -> pos_integer()
Reports one of multiple possible memory values for the entire node, depending on what is to be reported:

used
    Reports the memory that is actively used for allocated Erlang data.
allocated
    Reports the memory that is reserved by the VM. It includes the memory used, but also the memory yet-to-be-used that has already been given by the OS. This is the amount you want if you're dealing with ulimit and OS-reported values.
allocated_types
    Reports the memory that is reserved by the VM, grouped into the different util allocators.
allocated_instances
    Reports the memory that is reserved by the VM, grouped into the different schedulers. Note that instance id 0 is the global allocator used to allocate data from non-managed threads, i.e. async and driver threads.
unused
    Reports the amount of memory reserved by the VM that is not being allocated. Equivalent to allocated - used.
usage
    Returns a percentage (0.0 .. 1.0) of used/allocated memory ratios.

The memory reported by allocated should roughly match what the OS reports. If this amount differs by a large margin, it may be the sign that someone is allocating memory in C directly, outside of Erlang's own allocators -- a big warning sign. There are currently three sources of memory allocation that are not counted towards this value: the cached segments in the mseg allocator, any memory allocated as a super carrier, and small pieces of memory allocated during startup before the memory allocators are initialized.
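A short illustrative session tying the values together (the numbers are made up for the example, but respect the allocated = used + unused relationship):

    1> recon_alloc:memory(allocated).
    33554432
    2> recon_alloc:memory(used).
    25165824
    3> recon_alloc:memory(unused).
    8388608
    4> recon_alloc:memory(usage).
    0.75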
See also fragmentation/1.
sbcs_to_mbcs(Keyword::max | current) -> [allocdata(term())]
Compares the number of single block carriers (sbcs) with the number of multiblock carriers (mbcs) for each individual allocator in allocator().
When a specific piece of data is allocated, it is compared to a threshold called the 'single block carrier threshold' (sbct). When the data is larger than the sbct, it gets sent to a single block carrier. When the data is smaller than the sbct, it gets placed into a multiblock carrier.
mbcs are to be preferred to sbcs because they basically represent preallocated memory, whereas each sbc maps to one call to sys_alloc or mseg_alloc, which is more expensive than redistributing data within memory already obtained for multiblock carriers. Moreover, the VM is able to do specific work with mbcs that should help reduce fragmentation in ways sys_alloc or mmap usually won't.
Ideally, most of the data should fit inside multiblock carriers. If most of the data ends up in sbcs, you may need to adjust the multiblock carrier sizes, specifically the maximal value (lmbcs) and the threshold (sbct). On 32-bit VMs, sbct is limited to 8MB, but 64-bit VMs can go to pretty much any practical size.
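A hedged sketch of a call (the ratios are illustrative; what matters is spotting allocators whose ratio of sbcs to mbcs is unusually high):

    1> recon_alloc:sbcs_to_mbcs(current).
    [{{binary_alloc,4},0.25},
     {{eheap_alloc,1},0.0},
     ...]

If the ratios are high, the threshold and carrier sizes can then be raised at startup, for example (illustrative values only, sizes given in kilobytes):

    erl +MBsbct 2048 +MBlmbcs 10240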
set_unit(X1::byte | kilobyte | megabyte | gigabyte) -> ok
Sets the current unit to be used by recon_alloc. This affects all functions that return sizes in bytes.
E.g.

    1> recon_alloc:memory(used,current).
    17548752
    2> recon_alloc:set_unit(kilobyte).
    undefined
    3> recon_alloc:memory(used,current).
    17576.90625
snapshot() -> snapshot() | undefined
Takes a new snapshot of the current memory allocator statistics. The snapshot is stored in the process dictionary of the calling process, with all the limitations that this implies (i.e. no garbage collection). To unset the snapshot, see snapshot_clear/0.
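A small illustrative session; once a snapshot is present, the analysis functions in this module read from it rather than from the live allocator data:

    1> recon_alloc:snapshot().
    undefined
    2> recon_alloc:fragmentation(current).   % computed from the frozen snapshot
    ...
    3> recon_alloc:snapshot_clear().         % back to live values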
snapshot_clear() -> snapshot() | undefined
Clears the current snapshot in the process dictionary, if present, and returns the value it had before being unset.
snapshot_get() -> snapshot() | undefined
Returns the current snapshot stored by snapshot/0. Returns undefined if no snapshot has been taken.
snapshot_load(Filename) -> snapshot() | undefined
Loads a snapshot from a given file. The format of the data in the file can be either the same as output by snapshot_save/1, or the output obtained by calling

    {erlang:memory(),
     [{A,erlang:system_info({allocator,A})}
      || A <- erlang:system_info(alloc_util_allocators)++[sys_alloc,mseg_alloc]]}.

and storing it in a file. If the latter option is taken, please remember to add a full stop at the end of the resulting Erlang term, as this function uses file:consult/1 to load the file.
Example usage:
On target machine:

    1> recon_alloc:snapshot().
    undefined
    2> recon_alloc:memory(used).
    18411064
    3> recon_alloc:snapshot_save("recon_snapshot.terms").
    ok

On other machine:

    1> recon_alloc:snapshot_load("recon_snapshot.terms").
    undefined
    2> recon_alloc:memory(used).
    18411064
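If recon is not installed on the node being inspected, a compatible file can also be produced by hand with the expression shown above. A minimal sketch (the file name is arbitrary; the trailing full stop required by file:consult/1 is produced by the "~p.~n" format string):

    1> Data = {erlang:memory(),
               [{A,erlang:system_info({allocator,A})}
                || A <- erlang:system_info(alloc_util_allocators)++[sys_alloc,mseg_alloc]]}.
    2> file:write_file("alloc_snapshot.terms", io_lib:format("~p.~n", [Data])).
    ok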
snapshot_print() -> ok
Prints a dump of the current snapshot stored by snapshot/0. Prints undefined if no snapshot has been taken.
snapshot_save(Filename) -> ok
Saves the current snapshot taken by snapshot/0 to a file. If there is no current snapshot, a snapshot of the current allocator statistics will be written to the file.