Evision.DNN.Net (Evision v0.2.9)

Summary

Types

t()

Type that represents a DNN.Net struct.

Functions

Connects output of the first layer to input of the second layer.

Dump net to String

Dump net structure, hyperparameters, backend, target and fusion to dot file

Dump net structure, hyperparameters, backend, target and fusion to pbtxt file

Enables or disables layer fusion in the network.

Enables or disables the Winograd compute branch. The Winograd compute branch can speed up 3x3 Convolution at a small loss of accuracy.

Runs forward pass to compute outputs of layers listed in @p outBlobNames.

Runs forward pass to compute outputs of layers listed in @p outBlobNames.

Runs forward pass to compute output of layer with name @p outputName.

Runs forward pass to compute output of layer with name @p outputName.

Returns input scale and zeropoint for a quantized Net.

Variant 1: getLayer

Converts string name of the layer to the integer identifier.

Returns count of layers of specified type.

Returns the list of types for layers used in the model.

Returns output scale and zeropoint for a quantized Net.

Variant 1: getParam

Variant 1: getParam

Returns overall time for inference and timings (in ticks) for layers.

Returns indexes of layers with unconnected outputs.

Returns names of layers with unconnected outputs.

Net

Returns a quantized Net from a floating-point Net.

Returns a quantized Net from a floating-point Net.

Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).

Compile Halide layers.

Sets the new input value for the network

Sets the new input value for the network

Specify shape of network input.

Sets outputs names of the network input pseudo layer.

Ask network to use specific computation backend where it supported.

Ask network to make computations on specific target device.

Types

@type t() :: %Evision.DNN.Net{ref: reference()}

Type that represents a DNN.Net struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

@spec connect(Keyword.t()) :: any() | {:error, String.t()}

connect(self, outPin, inpPin)

@spec connect(t(), binary(), binary()) :: t() | {:error, String.t()}

Connects output of the first layer to input of the second layer.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • outPin: String.

    descriptor of the first layer output.

  • inpPin: String.

    descriptor of the second layer input.

Descriptors have the following template <layer_name>[.input_number]:

  • the first part of the template, layer_name, is the string name of the added layer. If this part is empty, the network input pseudo layer will be used;

  • the second, optional part of the template, input_number, is either the number of the layer input or its label. If this part is omitted, the first layer input will be used.

@see setNetInputs(), Layer::inputNameToIndex(), Layer::outputNameToIndex()

Python prototype (for reference only):

connect(outPin, inpPin) -> None
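
For illustration, a minimal sketch; the layer names "conv1" and "relu1" are hypothetical placeholders for layers that exist in your network:

    net = Evision.DNN.Net.net()
    # Connect the first output of "conv1" to the first input of "relu1".
    # The ".0" input/output number is optional per the descriptor template.
    net = Evision.DNN.Net.connect(net, "conv1.0", "relu1.0")
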
@spec dump(Keyword.t()) :: any() | {:error, String.t()}
@spec dump(t()) :: binary() | {:error, String.t()}

Dump net to String

Positional Arguments
  • self: Evision.DNN.Net.t()
Return

@returns String with structure, hyperparameters, backend, target and fusion. Call this method after setInput(). To see the correct backend, target and fusion, run it after forward().

Python prototype (for reference only):

dump() -> retval
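
For illustration, a minimal sketch; "model.onnx" and "image.jpg" are placeholder paths:

    net = Evision.DNN.readNetFromONNX("model.onnx")
    blob = Evision.DNN.blobFromImage(Evision.imread("image.jpg"))
    net = Evision.DNN.Net.setInput(net, blob)
    _ = Evision.DNN.Net.forward(net)
    # After forward/1, the dump reflects the actual backend, target and fusion.
    IO.puts(Evision.DNN.Net.dump(net))
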
@spec dumpToFile(Keyword.t()) :: any() | {:error, String.t()}
@spec dumpToFile(t(), binary()) :: t() | {:error, String.t()}

Dump net structure, hyperparameters, backend, target and fusion to dot file

Positional Arguments
  • self: Evision.DNN.Net.t()

  • path: String.

    path to output file with .dot extension

@see dump()

Python prototype (for reference only):

dumpToFile(path) -> None
@spec dumpToPbtxt(Keyword.t()) :: any() | {:error, String.t()}
@spec dumpToPbtxt(t(), binary()) :: t() | {:error, String.t()}

Dump net structure, hyperparameters, backend, target and fusion to pbtxt file

Positional Arguments
  • self: Evision.DNN.Net.t()

  • path: String.

    path to output file with .pbtxt extension

    Use Netron (https://netron.app) to open the target file and visualize the model. Call this method after setInput(). To see the correct backend, target and fusion, run it after forward().

Python prototype (for reference only):

dumpToPbtxt(path) -> None
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}

empty

Positional Arguments
  • self: Evision.DNN.Net.t()
Return
  • retval: bool

Returns true if there are no layers in the network.

Python prototype (for reference only):

empty() -> retval

enableFusion(named_args)

@spec enableFusion(Keyword.t()) :: any() | {:error, String.t()}

enableFusion(self, fusion)

@spec enableFusion(t(), boolean()) :: t() | {:error, String.t()}

Enables or disables layer fusion in the network.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • fusion: bool.

    true to enable the fusion, false to disable. The fusion is enabled by default.

Python prototype (for reference only):

enableFusion(fusion) -> None

enableWinograd(named_args)

@spec enableWinograd(Keyword.t()) :: any() | {:error, String.t()}

enableWinograd(self, useWinograd)

@spec enableWinograd(t(), boolean()) :: t() | {:error, String.t()}

Enables or disables the Winograd compute branch. The Winograd compute branch can speed up 3x3 Convolution at a small loss of accuracy.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • useWinograd: bool.

    true to enable the Winograd compute branch. The default is true.

Python prototype (for reference only):

enableWinograd(useWinograd) -> None

forward(self, opts \\ nil)

@spec forward(Evision.Net.t(), [{atom(), term()}, ...] | nil) ::
  [Evision.Mat.t()] | Evision.Mat.t() | {:error, String.t()}

Runs forward pass to compute outputs of layers listed in @p outBlobNames.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • outBlobNames: [String].

    names of layers whose outputs are needed

Return
  • outputBlobs: [Evision.Mat].

    contains blobs for the first outputs of the specified layers.

Python prototype (for reference only):

forward(outBlobNames[, outputBlobs]) -> outputBlobs
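
For illustration, a minimal end-to-end sketch; the model path and the 224x224 input size are placeholders for a real classifier:

    img = Evision.imread("input.jpg")
    blob = Evision.DNN.blobFromImage(img, size: {224, 224}, swapRB: true)

    net = Evision.DNN.readNetFromONNX("model.onnx")
    net = Evision.DNN.Net.setInput(net, blob)

    # With no output names given, forward/1 returns the output of the last layer.
    output = Evision.DNN.Net.forward(net)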

forwardAndRetrieve(named_args)

@spec forwardAndRetrieve(Keyword.t()) :: any() | {:error, String.t()}

forwardAndRetrieve(self, outBlobNames)

@spec forwardAndRetrieve(t(), [binary()]) ::
  [[Evision.Mat.t()]] | {:error, String.t()}

Runs forward pass to compute outputs of layers listed in @p outBlobNames.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • outBlobNames: [String].

    names of layers whose outputs are needed

Return
  • outputBlobs: [[Evision.Mat]].

    contains all output blobs for each layer specified in @p outBlobNames.

Python prototype (for reference only):

forwardAndRetrieve(outBlobNames) -> outputBlobs

forwardAsync(named_args)

@spec forwardAsync(Keyword.t()) :: any() | {:error, String.t()}
@spec forwardAsync(t()) :: Evision.AsyncArray.t() | {:error, String.t()}

Runs forward pass to compute output of layer with name @p outputName.

Positional Arguments
  • self: Evision.DNN.Net.t()
Keyword Arguments
  • outputName: String.

    name of the layer whose output is needed

Return
  • retval: Evision.AsyncArray.t()

@details By default runs forward pass for the whole network. This is an asynchronous version of forward(const String&). dnn::DNN_BACKEND_INFERENCE_ENGINE backend is required.

Python prototype (for reference only):

forwardAsync([, outputName]) -> retval

forwardAsync(self, opts)

@spec forwardAsync(t(), [{:outputName, term()}] | nil) ::
  Evision.AsyncArray.t() | {:error, String.t()}

Runs forward pass to compute output of layer with name @p outputName.

Positional Arguments
  • self: Evision.DNN.Net.t()
Keyword Arguments
  • outputName: String.

    name of the layer whose output is needed

Return
  • retval: Evision.AsyncArray.t()

@details By default runs forward pass for the whole network. This is an asynchronous version of forward(const String&). dnn::DNN_BACKEND_INFERENCE_ENGINE backend is required.

Python prototype (for reference only):

forwardAsync([, outputName]) -> retval
@spec getFLOPS(Keyword.t()) :: any() | {:error, String.t()}

getFLOPS(self, netInputShape)

@spec getFLOPS(t(), [integer()]) :: integer() | {:error, String.t()}

getFLOPS

Positional Arguments
  • self: Evision.DNN.Net.t()
  • netInputShape: MatShape
Return
  • retval: int64

Has overloading in C++

Python prototype (for reference only):

getFLOPS(netInputShape) -> retval

getFLOPS(self, layerId, netInputShape)

@spec getFLOPS(t(), integer(), [integer()]) :: integer() | {:error, String.t()}

getFLOPS

Positional Arguments
  • self: Evision.DNN.Net.t()
  • layerId: integer()
  • netInputShape: MatShape
Return
  • retval: int64

Has overloading in C++

Python prototype (for reference only):

getFLOPS(layerId, netInputShape) -> retval
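
For illustration, a minimal sketch, assuming `net` is an already-loaded network; on the Elixir side a MatShape is a plain list of integers:

    flops = Evision.DNN.Net.getFLOPS(net, [1, 3, 224, 224])
    IO.puts("#{flops / 1.0e9} GFLOPs for a 1x3x224x224 input")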

getInputDetails(named_args)

@spec getInputDetails(Keyword.t()) :: any() | {:error, String.t()}
@spec getInputDetails(t()) :: {[number()], [integer()]} | {:error, String.t()}

Returns input scale and zeropoint for a quantized Net.

Positional Arguments
  • self: Evision.DNN.Net.t()
Return
  • scales: [float].

    output parameter for returning input scales.

  • zeropoints: [integer()].

    output parameter for returning input zeropoints.

Python prototype (for reference only):

getInputDetails() -> scales, zeropoints
@spec getLayer(Keyword.t()) :: any() | {:error, String.t()}
@spec getLayer(t(), term()) :: Evision.DNN.Layer.t() | {:error, String.t()}
@spec getLayer(t(), binary()) :: Evision.DNN.Layer.t() | {:error, String.t()}
@spec getLayer(t(), integer()) :: Evision.DNN.Layer.t() | {:error, String.t()}

Variant 1:

getLayer

Positional Arguments
  • self: Evision.DNN.Net.t()
  • layerId: LayerId
Return
  • retval: Evision.DNN.Layer.t()

Has overloading in C++

@deprecated to be removed

Python prototype (for reference only):

getLayer(layerId) -> retval

Variant 2:

getLayer

Positional Arguments
  • self: Evision.DNN.Net.t()
  • layerName: String
Return
  • retval: Evision.DNN.Layer.t()

Has overloading in C++

@deprecated Use int getLayerId(const String &layer)

Python prototype (for reference only):

getLayer(layerName) -> retval

Variant 3:

Returns pointer to layer with specified id or name which the network use.

Positional Arguments
  • self: Evision.DNN.Net.t()
  • layerId: integer()
Return
  • retval: Evision.DNN.Layer.t()

Python prototype (for reference only):

getLayer(layerId) -> retval
@spec getLayerId(Keyword.t()) :: any() | {:error, String.t()}
@spec getLayerId(t(), binary()) :: integer() | {:error, String.t()}

Converts string name of the layer to the integer identifier.

Positional Arguments
  • self: Evision.DNN.Net.t()
  • layer: String
Return
  • retval: integer()

@returns id of the layer, or -1 if the layer wasn't found.

Python prototype (for reference only):

getLayerId(layer) -> retval
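
For illustration, a minimal sketch, assuming `net` is an already-loaded network and "conv1" is a hypothetical layer name:

    id = Evision.DNN.Net.getLayerId(net, "conv1")
    # -1 means the layer was not found.
    layer = if id != -1, do: Evision.DNN.Net.getLayer(net, id)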

getLayerNames(named_args)

@spec getLayerNames(Keyword.t()) :: any() | {:error, String.t()}
@spec getLayerNames(t()) :: [binary()] | {:error, String.t()}

getLayerNames

Positional Arguments
  • self: Evision.DNN.Net.t()
Return
  • retval: [String]

Python prototype (for reference only):

getLayerNames() -> retval

getLayerShapes(self, opts \\ nil)

@spec getLayerShapes(Evision.Net.t(), [{{atom(), term()}}, ...] | nil) ::
  {[[integer()]], [[integer()]]} | {:error, String.t()}
@spec getLayerShapes(Evision.Net.t(), [{{atom(), term()}}, ...] | nil) ::
  {[integer()], [[[integer()]]], [[[integer()]]]} | {:error, String.t()}

getLayerShapes

Positional Arguments
  • self: Evision.DNN.Net.t()
  • netInputShapes: [MatShape]
  • layerId: integer()
Return
  • inLayerShapes: [MatShape]
  • outLayerShapes: [MatShape]

Has overloading in C++

Python prototype (for reference only):

getLayerShapes(netInputShapes, layerId) -> inLayerShapes, outLayerShapes

getLayersCount(named_args)

@spec getLayersCount(Keyword.t()) :: any() | {:error, String.t()}

getLayersCount(self, layerType)

@spec getLayersCount(t(), binary()) :: integer() | {:error, String.t()}

Returns count of layers of specified type.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • layerType: String.

    type.

Return
  • retval: integer()

@returns count of layers

Python prototype (for reference only):

getLayersCount(layerType) -> retval

getLayersShapes(self, opts \\ nil)


getLayersShapes

Positional Arguments
  • self: Evision.DNN.Net.t()
  • netInputShape: MatShape
Return
  • layersIds: [integer()]
  • inLayersShapes: [[MatShape]]
  • outLayersShapes: [[MatShape]]

Has overloading in C++

Python prototype (for reference only):

getLayersShapes(netInputShape) -> layersIds, inLayersShapes, outLayersShapes

getLayerTypes(named_args)

@spec getLayerTypes(Keyword.t()) :: any() | {:error, String.t()}
@spec getLayerTypes(t()) :: [binary()] | {:error, String.t()}

Returns the list of types for layers used in the model.

Positional Arguments
  • self: Evision.DNN.Net.t()
Return
  • layersTypes: [String].

    output parameter for returning types.

Python prototype (for reference only):

getLayerTypes() -> layersTypes
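
For illustration, a minimal sketch combining this with getLayersCount/2 to tally the layers of each type in an already-loaded `net`:

    for type <- Evision.DNN.Net.getLayerTypes(net) do
      {type, Evision.DNN.Net.getLayersCount(net, type)}
    end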

getMemoryConsumption(named_args)

@spec getMemoryConsumption(Keyword.t()) :: any() | {:error, String.t()}

getMemoryConsumption(self, netInputShape)

@spec getMemoryConsumption(t(), [integer()]) ::
  {integer(), integer()} | {:error, String.t()}

getMemoryConsumption

Positional Arguments
  • self: Evision.DNN.Net.t()
  • netInputShape: MatShape
Return
  • weights: size_t
  • blobs: size_t

Has overloading in C++

Python prototype (for reference only):

getMemoryConsumption(netInputShape) -> weights, blobs

getMemoryConsumption(self, layerId, netInputShape)

@spec getMemoryConsumption(t(), integer(), [integer()]) ::
  {integer(), integer()} | {:error, String.t()}

getMemoryConsumption

Positional Arguments
  • self: Evision.DNN.Net.t()
  • layerId: integer()
  • netInputShape: MatShape
Return
  • weights: size_t
  • blobs: size_t

Has overloading in C++

Python prototype (for reference only):

getMemoryConsumption(layerId, netInputShape) -> weights, blobs

getOutputDetails(named_args)

@spec getOutputDetails(Keyword.t()) :: any() | {:error, String.t()}
@spec getOutputDetails(t()) :: {[number()], [integer()]} | {:error, String.t()}

Returns output scale and zeropoint for a quantized Net.

Positional Arguments
  • self: Evision.DNN.Net.t()
Return
  • scales: [float].

    output parameter for returning output scales.

  • zeropoints: [integer()].

    output parameter for returning output zeropoints.

Python prototype (for reference only):

getOutputDetails() -> scales, zeropoints
@spec getParam(Keyword.t()) :: any() | {:error, String.t()}

getParam(self, layerName)

@spec getParam(t(), binary()) :: Evision.Mat.t() | {:error, String.t()}
@spec getParam(t(), integer()) :: Evision.Mat.t() | {:error, String.t()}

Variant 1:

getParam

Positional Arguments
  • self: Evision.DNN.Net.t()
  • layerName: String
Keyword Arguments
  • numParam: integer().
Return
  • retval: Evision.Mat.t()

Python prototype (for reference only):

getParam(layerName[, numParam]) -> retval

Variant 2:

Returns parameter blob of the layer.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • layer: integer().

    name or id of the layer.

Keyword Arguments
  • numParam: integer().

    index of the layer parameter in the Layer::blobs array.

Return
  • retval: Evision.Mat.t()

@see Layer::blobs

Python prototype (for reference only):

getParam(layer[, numParam]) -> retval

getParam(self, layerName, opts)

@spec getParam(t(), binary(), [{:numParam, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec getParam(t(), integer(), [{:numParam, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

Variant 1:

getParam

Positional Arguments
  • self: Evision.DNN.Net.t()
  • layerName: String
Keyword Arguments
  • numParam: integer().
Return
  • retval: Evision.Mat.t()

Python prototype (for reference only):

getParam(layerName[, numParam]) -> retval

Variant 2:

Returns parameter blob of the layer.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • layer: integer().

    name or id of the layer.

Keyword Arguments
  • numParam: integer().

    index of the layer parameter in the Layer::blobs array.

Return
  • retval: Evision.Mat.t()

@see Layer::blobs

Python prototype (for reference only):

getParam(layer[, numParam]) -> retval
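
For illustration, a minimal sketch, assuming `net` is an already-loaded network and "conv1" is a hypothetical convolution layer whose blobs array holds weights at index 0 and a bias at index 1:

    weights = Evision.DNN.Net.getParam(net, "conv1")
    bias = Evision.DNN.Net.getParam(net, "conv1", numParam: 1)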

getPerfProfile(named_args)

@spec getPerfProfile(Keyword.t()) :: any() | {:error, String.t()}
@spec getPerfProfile(t()) :: {integer(), [number()]} | {:error, String.t()}

Returns overall time for inference and timings (in ticks) for layers.

Positional Arguments
  • self: Evision.DNN.Net.t()
Return
  • retval: int64

  • timings: [double].

    vector for tick timings for all layers.

Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in that case a zero tick count is returned for the skipped layers. Supported by DNN_BACKEND_OPENCV on DNN_TARGET_CPU only. @return overall ticks for model inference.

Python prototype (for reference only):

getPerfProfile() -> retval, timings
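
For illustration, a minimal sketch converting ticks to milliseconds, assuming `net` has already run forward/1 and that cv::getTickFrequency is exposed as Evision.getTickFrequency/0:

    {total_ticks, _timings} = Evision.DNN.Net.getPerfProfile(net)
    ticks_per_ms = Evision.getTickFrequency() / 1000.0
    IO.puts("inference took #{total_ticks / ticks_per_ms} ms")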

getUnconnectedOutLayers(named_args)

@spec getUnconnectedOutLayers(Keyword.t()) :: any() | {:error, String.t()}
@spec getUnconnectedOutLayers(t()) :: [integer()] | {:error, String.t()}

Returns indexes of layers with unconnected outputs.

Positional Arguments
  • self: Evision.DNN.Net.t()
Return
  • retval: [integer()]

FIXIT: Rework API to registerOutput() approach, deprecate this call

Python prototype (for reference only):

getUnconnectedOutLayers() -> retval

getUnconnectedOutLayersNames(named_args)

@spec getUnconnectedOutLayersNames(Keyword.t()) :: any() | {:error, String.t()}
@spec getUnconnectedOutLayersNames(t()) :: [binary()] | {:error, String.t()}

Returns names of layers with unconnected outputs.

Positional Arguments
  • self: Evision.DNN.Net.t()
Return
  • retval: [String]

FIXIT: Rework API to registerOutput() approach, deprecate this call

Python prototype (for reference only):

getUnconnectedOutLayersNames() -> retval
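
For illustration, a minimal sketch of the usual pattern for multi-output models such as detectors, assuming `net` already has its input set:

    out_names = Evision.DNN.Net.getUnconnectedOutLayersNames(net)
    outputs = Evision.DNN.Net.forwardAndRetrieve(net, out_names)
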
@spec net() :: t() | {:error, String.t()}

Net

Return
  • self: Evision.DNN.Net.t()

Python prototype (for reference only):

Net() -> <dnn_Net object>
@spec net(Keyword.t()) :: any() | {:error, String.t()}
@spec quantize(Keyword.t()) :: any() | {:error, String.t()}

quantize(self, calibData, inputsDtype, outputsDtype)

@spec quantize(t(), [Evision.Mat.maybe_mat_in()], integer(), integer()) ::
  t() | {:error, String.t()}

Returns a quantized Net from a floating-point Net.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • calibData: [Evision.Mat].

    Calibration data to compute the quantization parameters.

  • inputsDtype: integer().

    Datatype of quantized net's inputs. Can be CV_32F or CV_8S.

  • outputsDtype: integer().

    Datatype of quantized net's outputs. Can be CV_32F or CV_8S.

Keyword Arguments
  • perChannel: bool.

    Quantization granularity of the quantized Net. The default is true, which means the model is quantized per channel (channel-wise). Set it to false to quantize the model per tensor (tensor-wise).

Return
  • retval: Evision.DNN.Net.t()

Python prototype (for reference only):

quantize(calibData, inputsDtype, outputsDtype[, perChannel]) -> retval

quantize(self, calibData, inputsDtype, outputsDtype, opts)

@spec quantize(
  t(),
  [Evision.Mat.maybe_mat_in()],
  integer(),
  integer(),
  [{:perChannel, term()}] | nil
) ::
  t() | {:error, String.t()}

Returns a quantized Net from a floating-point Net.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • calibData: [Evision.Mat].

    Calibration data to compute the quantization parameters.

  • inputsDtype: integer().

    Datatype of quantized net's inputs. Can be CV_32F or CV_8S.

  • outputsDtype: integer().

    Datatype of quantized net's outputs. Can be CV_32F or CV_8S.

Keyword Arguments
  • perChannel: bool.

    Quantization granularity of the quantized Net. The default is true, which means the model is quantized per channel (channel-wise). Set it to false to quantize the model per tensor (tensor-wise).

Return
  • retval: Evision.DNN.Net.t()

Python prototype (for reference only):

quantize(calibData, inputsDtype, outputsDtype[, perChannel]) -> retval
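
For illustration, a minimal sketch, assuming `net` is a floating-point net, `calib_blobs` is a list of representative input blobs, and the CV_32F constant is exposed as Evision.Constant.cv_32F/0:

    q_net =
      Evision.DNN.Net.quantize(net, calib_blobs,
        Evision.Constant.cv_32F(), Evision.Constant.cv_32F(),
        perChannel: true)

    # The quantization parameters chosen for the inputs can then be inspected:
    {scales, zeropoints} = Evision.DNN.Net.getInputDetails(q_net)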

readFromModelOptimizer(named_args)

@spec readFromModelOptimizer(Keyword.t()) :: any() | {:error, String.t()}

readFromModelOptimizer(bufferModelConfig, bufferWeights)

@spec readFromModelOptimizer(binary(), binary()) :: t() | {:error, String.t()}

Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).

Positional Arguments
  • bufferModelConfig: [uchar].

    buffer with model's configuration.

  • bufferWeights: [uchar].

    buffer with model's trained weights.

Return
  • retval: Evision.DNN.Net.t()

@returns Net object.

Python prototype (for reference only):

readFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval
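
For illustration, a minimal sketch, assuming an OpenVINO IR pair on disk where the .xml file holds the model configuration and the .bin file its weights (paths are placeholders):

    buffer_config = File.read!("model.xml")
    buffer_weights = File.read!("model.bin")
    net = Evision.DNN.Net.readFromModelOptimizer(buffer_config, buffer_weights)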

setHalideScheduler(named_args)

@spec setHalideScheduler(Keyword.t()) :: any() | {:error, String.t()}

setHalideScheduler(self, scheduler)

@spec setHalideScheduler(t(), binary()) :: t() | {:error, String.t()}

Compile Halide layers.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • scheduler: String.

    Path to YAML file with scheduling directives.

@see setPreferableBackend/2. Schedules layers that support the Halide backend, then compiles them for a specific target. For layers not represented in the scheduling file, or if no manual scheduling is used at all, automatic scheduling is applied.

Python prototype (for reference only):

setHalideScheduler(scheduler) -> None
@spec setInput(Keyword.t()) :: any() | {:error, String.t()}
@spec setInput(t(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}

Sets the new input value for the network

Positional Arguments
  • self: Evision.DNN.Net.t()

  • blob: Evision.Mat.

    A new blob. Should have CV_32F or CV_8U depth.

Keyword Arguments
  • name: String.

    A name of input layer.

  • scalefactor: double.

    An optional normalization scale.

  • mean: Evision.scalar().

    Optional mean subtraction values.

@see connect(String, String) for the format of the descriptor. If scale or mean values are specified, a final input blob is computed as: input(n,c,h,w) = scalefactor × (blob(n,c,h,w) - mean_c)

Python prototype (for reference only):

setInput(blob[, name[, scalefactor[, mean]]]) -> None

setInput(self, blob, opts)

@spec setInput(
  t(),
  Evision.Mat.maybe_mat_in(),
  [mean: term(), name: term(), scalefactor: term()] | nil
) ::
  t() | {:error, String.t()}

Sets the new input value for the network

Positional Arguments
  • self: Evision.DNN.Net.t()

  • blob: Evision.Mat.

    A new blob. Should have CV_32F or CV_8U depth.

Keyword Arguments
  • name: String.

    A name of input layer.

  • scalefactor: double.

    An optional normalization scale.

  • mean: Evision.scalar().

    Optional mean subtraction values.

@see connect(String, String) for the format of the descriptor. If scale or mean values are specified, a final input blob is computed as: input(n,c,h,w) = scalefactor × (blob(n,c,h,w) - mean_c)

Python prototype (for reference only):

setInput(blob[, name[, scalefactor[, mean]]]) -> None
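
For illustration, a minimal sketch of normalizing at input time, assuming `net` is an already-loaded network; the input name "data", the 1/255 scale and the mean values are placeholder preprocessing choices, not defaults:

    img = Evision.imread("input.jpg")
    blob = Evision.DNN.blobFromImage(img, size: {224, 224}, swapRB: true)

    net =
      Evision.DNN.Net.setInput(net, blob,
        name: "data",
        scalefactor: 1.0 / 255.0,
        mean: {123.675, 116.28, 103.53}
      )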

setInputShape(named_args)

@spec setInputShape(Keyword.t()) :: any() | {:error, String.t()}

setInputShape(self, inputName, shape)

@spec setInputShape(t(), binary(), [integer()]) :: t() | {:error, String.t()}

Specify shape of network input.

Positional Arguments
  • self: Evision.DNN.Net.t()
  • inputName: String
  • shape: MatShape

Python prototype (for reference only):

setInputShape(inputName, shape) -> None

setInputsNames(named_args)

@spec setInputsNames(Keyword.t()) :: any() | {:error, String.t()}

setInputsNames(self, inputBlobNames)

@spec setInputsNames(t(), [binary()]) :: t() | {:error, String.t()}

Sets outputs names of the network input pseudo layer.

Positional Arguments
  • self: Evision.DNN.Net.t()
  • inputBlobNames: [String]

Each net always has its own special network input pseudo layer with id=0. This layer stores the user blobs only and does not make any computations. In fact, this layer provides the only way to pass user data into the network. As with any other layer, this layer can label its outputs, and this function provides an easy way to do that.

Python prototype (for reference only):

setInputsNames(inputBlobNames) -> None
@spec setParam(Keyword.t()) :: any() | {:error, String.t()}

setParam(self, layerName, numParam, blob)

@spec setParam(t(), binary(), integer(), Evision.Mat.maybe_mat_in()) ::
  t() | {:error, String.t()}
@spec setParam(t(), integer(), integer(), Evision.Mat.maybe_mat_in()) ::
  t() | {:error, String.t()}

Variant 1:

setParam

Positional Arguments
  • self: Evision.DNN.Net.t()
  • layerName: String
  • numParam: integer()
  • blob: Evision.Mat
Python prototype (for reference only):

setParam(layerName, numParam, blob) -> None

Variant 2:

Sets the new value for the learned param of the layer.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • layer: integer().

    name or id of the layer.

  • numParam: integer().

    index of the layer parameter in the Layer::blobs array.

  • blob: Evision.Mat.

    the new value.

@see Layer::blobs. Note: if the shape of the new blob differs from the previous shape, then the following forward pass may fail.

Python prototype (for reference only):

setParam(layer, numParam, blob) -> None

setPreferableBackend(named_args)

@spec setPreferableBackend(Keyword.t()) :: any() | {:error, String.t()}

setPreferableBackend(self, backendId)

@spec setPreferableBackend(t(), integer()) :: t() | {:error, String.t()}

Ask network to use specific computation backend where it supported.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • backendId: integer().

    backend identifier.

@see Backend

Python prototype (for reference only):

setPreferableBackend(backendId) -> None

setPreferableTarget(named_args)

@spec setPreferableTarget(Keyword.t()) :: any() | {:error, String.t()}

setPreferableTarget(self, targetId)

@spec setPreferableTarget(t(), integer()) :: t() | {:error, String.t()}

Ask network to make computations on specific target device.

Positional Arguments
  • self: Evision.DNN.Net.t()

  • targetId: integer().

    target identifier.

@see Target

List of supported combinations backend / target:

|                        | DNN_BACKEND_OPENCV | DNN_BACKEND_INFERENCE_ENGINE | DNN_BACKEND_HALIDE | DNN_BACKEND_CUDA |
|------------------------|--------------------|------------------------------|--------------------|------------------|
| DNN_TARGET_CPU         | +                  | +                            | +                  |                  |
| DNN_TARGET_OPENCL      | +                  | +                            | +                  |                  |
| DNN_TARGET_OPENCL_FP16 | +                  | +                            |                    |                  |
| DNN_TARGET_MYRIAD      |                    | +                            |                    |                  |
| DNN_TARGET_FPGA        |                    | +                            |                    |                  |
| DNN_TARGET_CUDA        |                    |                              |                    | +                |
| DNN_TARGET_CUDA_FP16   |                    |                              |                    | +                |
| DNN_TARGET_HDDL        |                    | +                            |                    |                  |

Python prototype (for reference only):

setPreferableTarget(targetId) -> None
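
For illustration, a minimal sketch selecting a supported backend/target pair from the table above, assuming the constants are exposed via Evision.Constant:

    net =
      net
      |> Evision.DNN.Net.setPreferableBackend(Evision.Constant.cv_DNN_BACKEND_OPENCV())
      |> Evision.DNN.Net.setPreferableTarget(Evision.Constant.cv_DNN_TARGET_CPU())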