Evision.DNN.Net (Evision v0.2.9)
Summary
Functions
Connects output of the first layer to input of the second layer.
Dump net to String
Dump net structure, hyperparameters, backend, target and fusion to dot file
Dump net structure, hyperparameters, backend, target and fusion to pbtxt file
empty
Enables or disables layer fusion in the network.
Enables or disables the Winograd compute branch. The Winograd compute branch can speed up 3x3 Convolution at a small loss of accuracy.
Runs forward pass to compute outputs of layers listed in @p outBlobNames.
Runs forward pass to compute outputs of layers listed in @p outBlobNames.
Runs forward pass to compute output of layer with name @p outputName.
Runs forward pass to compute output of layer with name @p outputName.
getFLOPS
Returns input scale and zeropoint for a quantized Net.
Variant 1:
getLayer
Converts string name of the layer to the integer identifier.
getLayerNames
getLayerShapes
Returns count of layers of specified type.
getLayersShapes
Returns the list of layer types used in the model.
getMemoryConsumption
getMemoryConsumption
Returns output scale and zeropoint for a quantized Net.
Variant 1:
getParam
Variant 1:
getParam
Returns overall time for inference and timings (in ticks) for layers.
Returns indexes of layers with unconnected outputs.
Returns names of layers with unconnected outputs.
Net
Returns a quantized Net from a floating-point Net.
Returns a quantized Net from a floating-point Net.
Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).
Compile Halide layers.
Sets the new input value for the network
Sets the new input value for the network
Specify shape of network input.
Sets outputs names of the network input pseudo layer.
Variant 1:
setParam
Ask network to use specific computation backend where it is supported.
Ask network to make computations on specific target device.
Types
@type t() :: %Evision.DNN.Net{ref: reference()}
Type that represents a `DNN.Net` struct.
ref.
reference()
The underlying erlang resource variable.
Functions
Connects output of the first layer to input of the second layer.
Positional Arguments
self:
Evision.DNN.Net.t()
outPin:
String
.descriptor of the first layer output.
inpPin:
String
.descriptor of the second layer input.
Descriptors have the following template `<layer_name>[.input_number]`:
the first part of the template, `layer_name`, is the string name of the added layer. If this part is empty then the network input pseudo layer will be used;
the second, optional part of the template, `input_number`, is either the number of the layer input or its label. If this part is omitted then the first layer input will be used.
@see setNetInputs(), Layer::inputNameToIndex(), Layer::outputNameToIndex()
Python prototype (for reference only):
connect(outPin, inpPin) -> None
@spec dump(Keyword.t()) :: any() | {:error, String.t()}
@spec dump(t()) :: binary() | {:error, String.t()}
Dump net to String
Positional Arguments
- self:
Evision.DNN.Net.t()
Return
- retval:
String
@returns String with structure, hyperparameters, backend, target and fusion. Call this method after setInput(). To see the correct backend, target and fusion, run after forward().
Python prototype (for reference only):
dump() -> retval
Dump net structure, hyperparameters, backend, target and fusion to dot file
Positional Arguments
self:
Evision.DNN.Net.t()
path:
String
.path to output file with .dot extension
@see dump()
Python prototype (for reference only):
dumpToFile(path) -> None
Dump net structure, hyperparameters, backend, target and fusion to pbtxt file
Positional Arguments
self:
Evision.DNN.Net.t()
path:
String
.path to output file with .pbtxt extension
Use Netron (https://netron.app) to open the target file and visualize the model. Call this method after setInput(). To see the correct backend, target and fusion, run after forward().
Python prototype (for reference only):
dumpToPbtxt(path) -> None
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}
empty
Positional Arguments
- self:
Evision.DNN.Net.t()
Return
- retval:
bool
Returns true if there are no layers in the network.
Python prototype (for reference only):
empty() -> retval
Enables or disables layer fusion in the network.
Positional Arguments
self:
Evision.DNN.Net.t()
fusion:
bool
.true to enable the fusion, false to disable. The fusion is enabled by default.
Python prototype (for reference only):
enableFusion(fusion) -> None
Enables or disables the Winograd compute branch. The Winograd compute branch can speed up 3x3 Convolution at a small loss of accuracy.
Positional Arguments
self:
Evision.DNN.Net.t()
useWinograd:
bool
.true to enable the Winograd compute branch. The default is true.
Python prototype (for reference only):
enableWinograd(useWinograd) -> None
@spec forward(t(), [{atom(), term()}, ...] | nil) :: [Evision.Mat.t()] | Evision.Mat.t() | {:error, String.t()}
Runs forward pass to compute outputs of layers listed in @p outBlobNames.
Positional Arguments
self:
Evision.DNN.Net.t()
outBlobNames:
[String]
.names of layers whose outputs are needed
Return
outputBlobs:
[Evision.Mat]
.contains blobs for first outputs of specified layers.
Python prototype (for reference only):
forward(outBlobNames[, outputBlobs]) -> outputBlobs
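As a sketch of a typical inference call from Elixir (the model path `"model.onnx"`, image path, and input size `{224, 224}` are illustrative assumptions, not part of this API reference):

```elixir
# Hedged sketch of an end-to-end forward pass, assuming a loaded model file.
img = Evision.imread("sample.jpg")

# Build a 4-D NCHW blob from the image (see Evision.DNN.blobFromImage/2).
blob =
  Evision.DNN.blobFromImage(img,
    scalefactor: 1.0 / 255.0,
    size: {224, 224},
    swapRB: true
  )

net =
  Evision.DNN.readNet("model.onnx")
  |> Evision.DNN.Net.setInput(blob)

# Forward pass over the whole network; returns an Evision.Mat
# (or a list of Mats when several outputs are requested).
output = Evision.DNN.Net.forward(net)
```

Note that `setInput/2` returns the net, so the calls pipe naturally.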
@spec forwardAndRetrieve(t(), [binary()]) :: [[Evision.Mat.t()]] | {:error, String.t()}
Runs forward pass to compute outputs of layers listed in @p outBlobNames.
Positional Arguments
self:
Evision.DNN.Net.t()
outBlobNames:
[String]
.names of layers whose outputs are needed
Return
outputBlobs:
[[Evision.Mat]]
.contains all output blobs for each layer specified in @p outBlobNames.
Python prototype (for reference only):
forwardAndRetrieve(outBlobNames) -> outputBlobs
@spec forwardAsync(Keyword.t()) :: any() | {:error, String.t()}
@spec forwardAsync(t()) :: Evision.AsyncArray.t() | {:error, String.t()}
Runs forward pass to compute output of layer with name @p outputName.
Positional Arguments
- self:
Evision.DNN.Net.t()
Keyword Arguments
outputName:
String
.name for layer which output is needed to get
Return
- retval:
Evision.AsyncArray.t()
@details By default runs forward pass for the whole network. This is an asynchronous version of forward(const String&). dnn::DNN_BACKEND_INFERENCE_ENGINE backend is required.
Python prototype (for reference only):
forwardAsync([, outputName]) -> retval
@spec forwardAsync(t(), [{:outputName, term()}] | nil) :: Evision.AsyncArray.t() | {:error, String.t()}
Runs forward pass to compute output of layer with name @p outputName.
Positional Arguments
- self:
Evision.DNN.Net.t()
Keyword Arguments
outputName:
String
.name for layer which output is needed to get
Return
- retval:
Evision.AsyncArray.t()
@details By default runs forward pass for the whole network. This is an asynchronous version of forward(const String&). dnn::DNN_BACKEND_INFERENCE_ENGINE backend is required.
Python prototype (for reference only):
forwardAsync([, outputName]) -> retval
getFLOPS
Positional Arguments
- self:
Evision.DNN.Net.t()
- netInputShape:
MatShape
Return
- retval:
int64
Has overloading in C++
Python prototype (for reference only):
getFLOPS(netInputShape) -> retval
getFLOPS
Positional Arguments
- self:
Evision.DNN.Net.t()
- layerId:
integer()
- netInputShape:
MatShape
Return
- retval:
int64
Has overloading in C++
Python prototype (for reference only):
getFLOPS(layerId, netInputShape) -> retval
@spec getInputDetails(Keyword.t()) :: any() | {:error, String.t()}
@spec getInputDetails(t()) :: {[number()], [integer()]} | {:error, String.t()}
Returns input scale and zeropoint for a quantized Net.
Positional Arguments
- self:
Evision.DNN.Net.t()
Return
scales:
[float]
.output parameter for returning input scales.
zeropoints:
[integer()]
.output parameter for returning input zeropoints.
Python prototype (for reference only):
getInputDetails() -> scales, zeropoints
@spec getLayer(t(), term()) :: Evision.DNN.Layer.t() | {:error, String.t()}
@spec getLayer(t(), binary()) :: Evision.DNN.Layer.t() | {:error, String.t()}
@spec getLayer(t(), integer()) :: Evision.DNN.Layer.t() | {:error, String.t()}
Variant 1:
getLayer
Positional Arguments
- self:
Evision.DNN.Net.t()
- layerId:
LayerId
Return
- retval:
Evision.DNN.Layer.t()
Has overloading in C++
@deprecated to be removed
Python prototype (for reference only):
getLayer(layerId) -> retval
Variant 2:
getLayer
Positional Arguments
- self:
Evision.DNN.Net.t()
- layerName:
String
Return
- retval:
Evision.DNN.Layer.t()
Has overloading in C++
@deprecated Use int getLayerId(const String &layer)
Python prototype (for reference only):
getLayer(layerName) -> retval
Variant 3:
Returns a pointer to the layer with the specified id or name which the network uses.
Positional Arguments
- self:
Evision.DNN.Net.t()
- layerId:
integer()
Return
- retval:
Evision.DNN.Layer.t()
Python prototype (for reference only):
getLayer(layerId) -> retval
Converts string name of the layer to the integer identifier.
Positional Arguments
- self:
Evision.DNN.Net.t()
- layer:
String
Return
- retval:
integer()
@returns id of the layer, or -1 if the layer wasn't found.
Python prototype (for reference only):
getLayerId(layer) -> retval
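A brief sketch of the name/id conversion, given an already loaded `net` (the layer names below are illustrative assumptions):

```elixir
# List all layer names, then resolve one back to its integer id.
names = Evision.DNN.Net.getLayerNames(net)

# -1 signals that the layer was not found.
case Evision.DNN.Net.getLayerId(net, "conv1") do
  -1 -> IO.puts("layer not found")
  id -> IO.puts("layer id: #{id}")
end
```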
@spec getLayerNames(Keyword.t()) :: any() | {:error, String.t()}
@spec getLayerNames(t()) :: [binary()] | {:error, String.t()}
getLayerNames
Positional Arguments
- self:
Evision.DNN.Net.t()
Return
- retval:
[String]
Python prototype (for reference only):
getLayerNames() -> retval
@spec getLayerShapes(t(), [{atom(), term()}, ...] | nil) :: {[[integer()]], [[integer()]]} | {:error, String.t()}
@spec getLayerShapes(t(), [{atom(), term()}, ...] | nil) :: {[integer()], [[[integer()]]], [[[integer()]]]} | {:error, String.t()}
getLayerShapes
Positional Arguments
- self:
Evision.DNN.Net.t()
- netInputShapes:
[MatShape]
- layerId:
integer()
Return
- inLayerShapes:
[MatShape]
- outLayerShapes:
[MatShape]
Has overloading in C++
Python prototype (for reference only):
getLayerShapes(netInputShapes, layerId) -> inLayerShapes, outLayerShapes
Returns count of layers of specified type.
Positional Arguments
self:
Evision.DNN.Net.t()
layerType:
String
.type.
Return
- retval:
integer()
@returns count of layers
Python prototype (for reference only):
getLayersCount(layerType) -> retval
getLayersShapes
Positional Arguments
- self:
Evision.DNN.Net.t()
- netInputShape:
MatShape
Return
- layersIds:
[integer()]
- inLayersShapes:
[[MatShape]]
- outLayersShapes:
[[MatShape]]
Has overloading in C++
Python prototype (for reference only):
getLayersShapes(netInputShape) -> layersIds, inLayersShapes, outLayersShapes
@spec getLayerTypes(Keyword.t()) :: any() | {:error, String.t()}
@spec getLayerTypes(t()) :: [binary()] | {:error, String.t()}
Returns the list of layer types used in the model.
Positional Arguments
- self:
Evision.DNN.Net.t()
Return
layersTypes:
[String]
.output parameter for returning types.
Python prototype (for reference only):
getLayerTypes() -> layersTypes
getMemoryConsumption
Positional Arguments
- self:
Evision.DNN.Net.t()
- netInputShape:
MatShape
Return
- weights:
size_t
- blobs:
size_t
Has overloading in C++
Python prototype (for reference only):
getMemoryConsumption(netInputShape) -> weights, blobs
@spec getMemoryConsumption(t(), integer(), [integer()]) :: {integer(), integer()} | {:error, String.t()}
getMemoryConsumption
Positional Arguments
- self:
Evision.DNN.Net.t()
- layerId:
integer()
- netInputShape:
MatShape
Return
- weights:
size_t
- blobs:
size_t
Has overloading in C++
Python prototype (for reference only):
getMemoryConsumption(layerId, netInputShape) -> weights, blobs
@spec getOutputDetails(Keyword.t()) :: any() | {:error, String.t()}
@spec getOutputDetails(t()) :: {[number()], [integer()]} | {:error, String.t()}
Returns output scale and zeropoint for a quantized Net.
Positional Arguments
- self:
Evision.DNN.Net.t()
Return
scales:
[float]
.output parameter for returning output scales.
zeropoints:
[integer()]
.output parameter for returning output zeropoints.
Python prototype (for reference only):
getOutputDetails() -> scales, zeropoints
@spec getParam(t(), binary()) :: Evision.Mat.t() | {:error, String.t()}
@spec getParam(t(), integer()) :: Evision.Mat.t() | {:error, String.t()}
Variant 1:
getParam
Positional Arguments
- self:
Evision.DNN.Net.t()
- layerName:
String
Keyword Arguments
- numParam:
integer()
.index of the layer parameter in the Layer::blobs array.
Return
- retval:
Evision.Mat.t()
Python prototype (for reference only):
getParam(layerName[, numParam]) -> retval
Variant 2:
Returns parameter blob of the layer.
Positional Arguments
self:
Evision.DNN.Net.t()
layer:
integer()
.name or id of the layer.
Keyword Arguments
numParam:
integer()
.index of the layer parameter in the Layer::blobs array.
Return
- retval:
Evision.Mat.t()
@see Layer::blobs
Python prototype (for reference only):
getParam(layer[, numParam]) -> retval
@spec getParam(t(), binary(), [{:numParam, term()}] | nil) :: Evision.Mat.t() | {:error, String.t()}
@spec getParam(t(), integer(), [{:numParam, term()}] | nil) :: Evision.Mat.t() | {:error, String.t()}
Variant 1:
getParam
Positional Arguments
- self:
Evision.DNN.Net.t()
- layerName:
String
Keyword Arguments
- numParam:
integer()
.index of the layer parameter in the Layer::blobs array.
Return
- retval:
Evision.Mat.t()
Python prototype (for reference only):
getParam(layerName[, numParam]) -> retval
Variant 2:
Returns parameter blob of the layer.
Positional Arguments
self:
Evision.DNN.Net.t()
layer:
integer()
.name or id of the layer.
Keyword Arguments
numParam:
integer()
.index of the layer parameter in the Layer::blobs array.
Return
- retval:
Evision.Mat.t()
@see Layer::blobs
Python prototype (for reference only):
getParam(layer[, numParam]) -> retval
@spec getPerfProfile(Keyword.t()) :: any() | {:error, String.t()}
@spec getPerfProfile(t()) :: {integer(), [number()]} | {:error, String.t()}
Returns overall time for inference and timings (in ticks) for layers.
Positional Arguments
- self:
Evision.DNN.Net.t()
Return
retval:
int64
timings:
[double]
.vector for tick timings for all layers.
Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in that case a zero tick count is returned for the skipped layers. Supported by DNN_BACKEND_OPENCV on DNN_TARGET_CPU only. @return overall ticks for model inference.
Python prototype (for reference only):
getPerfProfile() -> retval, timings
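Since timings are reported in ticks, they are usually converted to milliseconds with the tick frequency. A hedged sketch, assuming `net` has already run a forward pass and that `Evision.getTickFrequency/0` (the binding of OpenCV's `cv::getTickFrequency`) is available:

```elixir
{total_ticks, per_layer_ticks} = Evision.DNN.Net.getPerfProfile(net)

# Ticks per second; divides ticks into seconds, then scale to ms.
freq = Evision.getTickFrequency()
total_ms = total_ticks / freq * 1000.0

# Fused layers report zero ticks, so drop them when inspecting per-layer cost.
per_layer_ms =
  per_layer_ticks
  |> Enum.map(&(&1 / freq * 1000.0))
  |> Enum.reject(&(&1 == 0.0))

IO.puts("inference took #{Float.round(total_ms, 2)} ms")
```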
@spec getUnconnectedOutLayers(Keyword.t()) :: any() | {:error, String.t()}
@spec getUnconnectedOutLayers(t()) :: [integer()] | {:error, String.t()}
Returns indexes of layers with unconnected outputs.
Positional Arguments
- self:
Evision.DNN.Net.t()
Return
- retval:
[integer()]
FIXIT: Rework API to registerOutput() approach, deprecate this call
Python prototype (for reference only):
getUnconnectedOutLayers() -> retval
@spec getUnconnectedOutLayersNames(Keyword.t()) :: any() | {:error, String.t()}
@spec getUnconnectedOutLayersNames(t()) :: [binary()] | {:error, String.t()}
Returns names of layers with unconnected outputs.
Positional Arguments
- self:
Evision.DNN.Net.t()
Return
- retval:
[String]
FIXIT: Rework API to registerOutput() approach, deprecate this call
Python prototype (for reference only):
getUnconnectedOutLayersNames() -> retval
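Unconnected output layers are the usual way to discover a detection model's output heads before running inference. A sketch, assuming a loaded `net` with its input already set:

```elixir
# Names of the network's output layers; for a YOLO-style model these are
# typically several "yolo_*" heads (actual names vary per model).
out_names = Evision.DNN.Net.getUnconnectedOutLayersNames(net)

# Retrieve the output blobs for each of those layers in one pass.
outputs = Evision.DNN.Net.forwardAndRetrieve(net, out_names)
```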
Net
Return
- self:
Evision.DNN.Net.t()
Python prototype (for reference only):
Net() -> <dnn_Net object>
@spec quantize(t(), [Evision.Mat.maybe_mat_in()], integer(), integer()) :: t() | {:error, String.t()}
Returns a quantized Net from a floating-point Net.
Positional Arguments
self:
Evision.DNN.Net.t()
calibData:
[Evision.Mat]
.Calibration data to compute the quantization parameters.
inputsDtype:
integer()
.Datatype of quantized net's inputs. Can be CV_32F or CV_8S.
outputsDtype:
integer()
.Datatype of quantized net's outputs. Can be CV_32F or CV_8S.
Keyword Arguments
perChannel:
bool
.Quantization granularity of the quantized Net. The default is true, which means the model is quantized per channel (channel-wise). Set it to false to quantize the model per tensor (tensor-wise).
Return
- retval:
Evision.DNN.Net.t()
Python prototype (for reference only):
quantize(calibData, inputsDtype, outputsDtype[, perChannel]) -> retval
@spec quantize( t(), [Evision.Mat.maybe_mat_in()], integer(), integer(), [{:perChannel, term()}] | nil ) :: t() | {:error, String.t()}
Returns a quantized Net from a floating-point Net.
Positional Arguments
self:
Evision.DNN.Net.t()
calibData:
[Evision.Mat]
.Calibration data to compute the quantization parameters.
inputsDtype:
integer()
.Datatype of quantized net's inputs. Can be CV_32F or CV_8S.
outputsDtype:
integer()
.Datatype of quantized net's outputs. Can be CV_32F or CV_8S.
Keyword Arguments
perChannel:
bool
.Quantization granularity of the quantized Net. The default is true, which means the model is quantized per channel (channel-wise). Set it to false to quantize the model per tensor (tensor-wise).
Return
- retval:
Evision.DNN.Net.t()
Python prototype (for reference only):
quantize(calibData, inputsDtype, outputsDtype[, perChannel]) -> retval
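A hedged sketch of post-training quantization, given a loaded float `net`. The calibration image paths and the raw OpenCV depth codes (`CV_32F = 5`, `CV_8S = 1`) are assumptions for illustration; prefer the symbolic constants from `Evision.Constant` if available in your version:

```elixir
# Calibration data: a list of representative input blobs.
calib =
  for path <- ["calib1.jpg", "calib2.jpg"] do
    path
    |> Evision.imread()
    |> Evision.DNN.blobFromImage(size: {224, 224}, scalefactor: 1.0 / 255.0)
  end

cv_32F = 5  # OpenCV depth code for CV_32F (assumed; prefer Evision.Constant)

# Keep float inputs/outputs, quantize the interior per channel (the default).
qnet = Evision.DNN.Net.quantize(net, calib, cv_32F, cv_32F, perChannel: true)
```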
Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).
Positional Arguments
bufferModelConfig:
[uchar]
.buffer with model's configuration.
bufferWeights:
[uchar]
.buffer with model's trained weights.
Return
- retval:
Evision.DNN.Net.t()
@returns Net object.
Python prototype (for reference only):
readFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval
Compile Halide layers.
Positional Arguments
self:
Evision.DNN.Net.t()
scheduler:
String
.Path to YAML file with scheduling directives.
@see setPreferableBackend/2
Schedules layers that support the Halide backend, then compiles them for a
specific target. For layers not represented in the scheduling file,
or if no manual scheduling is used at all, automatic scheduling will be applied.
Python prototype (for reference only):
setHalideScheduler(scheduler) -> None
@spec setInput(t(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
Sets the new input value for the network
Positional Arguments
self:
Evision.DNN.Net.t()
blob:
Evision.Mat
.A new blob. Should have CV_32F or CV_8U depth.
Keyword Arguments
name:
String
.A name of input layer.
scalefactor:
double
.An optional normalization scale.
mean:
Evision.scalar()
.An optional mean subtraction values.
@see connect(String, String) to know the format of the descriptor. If scale or mean values are specified, the final input blob is computed as: `input(n,c,h,w) = scalefactor * (blob(n,c,h,w) - mean_c)`
Python prototype (for reference only):
setInput(blob[, name[, scalefactor[, mean]]]) -> None
@spec setInput( t(), Evision.Mat.maybe_mat_in(), [mean: term(), name: term(), scalefactor: term()] | nil ) :: t() | {:error, String.t()}
Sets the new input value for the network
Positional Arguments
self:
Evision.DNN.Net.t()
blob:
Evision.Mat
.A new blob. Should have CV_32F or CV_8U depth.
Keyword Arguments
name:
String
.A name of input layer.
scalefactor:
double
.An optional normalization scale.
mean:
Evision.scalar()
.An optional mean subtraction values.
@see connect(String, String) to know the format of the descriptor. If scale or mean values are specified, the final input blob is computed as: `input(n,c,h,w) = scalefactor * (blob(n,c,h,w) - mean_c)`
Python prototype (for reference only):
setInput(blob[, name[, scalefactor[, mean]]]) -> None
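The scale/mean normalization can be applied at `setInput` time instead of when the blob is created. A sketch, assuming a loaded `net` and a prepared `blob`; the input name and normalization constants are illustrative, not prescribed by this API:

```elixir
# Applies input(n,c,h,w) = scalefactor * (blob(n,c,h,w) - mean_c) per channel.
net =
  Evision.DNN.Net.setInput(net, blob,
    name: "data",
    scalefactor: 1.0 / 127.5,
    mean: {127.5, 127.5, 127.5}
  )
```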
Specify shape of network input.
Positional Arguments
- self:
Evision.DNN.Net.t()
- inputName:
String
- shape:
MatShape
Python prototype (for reference only):
setInputShape(inputName, shape) -> None
Sets outputs names of the network input pseudo layer.
Positional Arguments
- self:
Evision.DNN.Net.t()
- inputBlobNames:
[String]
Each net always has its own special network input pseudo layer with id = 0. This layer stores the user blobs only and does not make any computations. In fact, this layer provides the only way to pass user data into the network. As with any other layer, this layer can label its outputs, and this function provides an easy way to do so.
Python prototype (for reference only):
setInputsNames(inputBlobNames) -> None
@spec setParam(t(), binary(), integer(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
@spec setParam(t(), integer(), integer(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
Variant 1:
setParam
Positional Arguments
- self:
Evision.DNN.Net.t()
- layerName:
String
- numParam:
integer()
- blob:
Evision.Mat
Python prototype (for reference only):
setParam(layerName, numParam, blob) -> None
Variant 2:
Sets the new value for the learned param of the layer.
Positional Arguments
self:
Evision.DNN.Net.t()
layer:
integer()
.name or id of the layer.
numParam:
integer()
.index of the layer parameter in the Layer::blobs array.
blob:
Evision.Mat
.the new value.
@see Layer::blobs. Note: if the shape of the new blob differs from the previous shape, then the following forward pass may fail.
Python prototype (for reference only):
setParam(layer, numParam, blob) -> None
Ask network to use specific computation backend where it is supported.
Positional Arguments
self:
Evision.DNN.Net.t()
backendId:
integer()
.backend identifier.
@see Backend
Python prototype (for reference only):
setPreferableBackend(backendId) -> None
Ask network to make computations on specific target device.
Positional Arguments
self:
Evision.DNN.Net.t()
targetId:
integer()
.target identifier.
@see Target

List of supported combinations backend / target:

| | DNN_BACKEND_OPENCV | DNN_BACKEND_INFERENCE_ENGINE | DNN_BACKEND_HALIDE | DNN_BACKEND_CUDA |
|------------------------|--------------------|------------------------------|--------------------|------------------|
| DNN_TARGET_CPU         | +                  | +                            | +                  |                  |
| DNN_TARGET_OPENCL      | +                  | +                            | +                  |                  |
| DNN_TARGET_OPENCL_FP16 | +                  | +                            |                    |                  |
| DNN_TARGET_MYRIAD      |                    | +                            |                    |                  |
| DNN_TARGET_FPGA        |                    | +                            |                    |                  |
| DNN_TARGET_CUDA        |                    |                              |                    | +                |
| DNN_TARGET_CUDA_FP16   |                    |                              |                    | +                |
| DNN_TARGET_HDDL        |                    | +                            |                    |                  |
Python prototype (for reference only):
setPreferableTarget(targetId) -> None
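Backend and target are usually set together, consistent with the supported combinations listed above. A sketch using raw OpenCV enum values (the numeric values are assumptions; prefer the symbolic constants in `Evision.Constant` if your version exposes them):

```elixir
dnn_backend_cuda = 5  # cv::dnn::DNN_BACKEND_CUDA (assumed enum value)
dnn_target_cuda = 6   # cv::dnn::DNN_TARGET_CUDA (assumed enum value)

# Both setters return the net, so they pipe; run before the first forward/1.
net =
  net
  |> Evision.DNN.Net.setPreferableBackend(dnn_backend_cuda)
  |> Evision.DNN.Net.setPreferableTarget(dnn_target_cuda)
```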