Evision.DNN.ClassificationModel (Evision v0.1.21)

Summary

Types

t()

Type that represents an Evision.DNN.ClassificationModel struct.

Functions

classificationModel(network)

  Variant 1: Create model from deep learning network.

  Variant 2: Create classification model from network represented in one of the supported formats. The order of the model and config arguments does not matter.

classificationModel(model, opts)

  Create classification model from network represented in one of the supported formats. The order of the model and config arguments does not matter.

classify(self, frame)

getEnableSoftmaxPostProcessing(self)

  Get enable/disable softmax post processing option.

predict(self, frame)

  Given the input frame, create an input blob, run the network, and return the output blobs.

predict(self, frame, opts)

  Given the input frame, create an input blob, run the network, and return the output blobs.

setEnableSoftmaxPostProcessing(self, enable)

  Set enable/disable softmax post processing option.

setInputCrop(self, crop)

  Set flag crop for frame.

setInputMean(self, mean)

  Set mean value for frame.

setInputParams(self)

  Set preprocessing parameters for frame.

setInputParams(self, opts)

  Set preprocessing parameters for frame.

setInputScale(self, scale)

  Set scalefactor value for frame.

setInputSize(self, size)

  Set input size for frame.

setInputSize(self, width, height)

setInputSwapRB(self, swapRB)

  Set flag swapRB for frame.

setPreferableBackend(self, backendId)

setPreferableTarget(self, targetId)

Types

@type t() :: %Evision.DNN.ClassificationModel{ref: reference()}

Type that represents an Evision.DNN.ClassificationModel struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

classificationModel(network)

@spec classificationModel(Evision.DNN.Net.t()) :: t() | {:error, String.t()}
@spec classificationModel(binary()) :: t() | {:error, String.t()}

Variant 1:

Create model from deep learning network.

Positional Arguments
Return

Python prototype (for reference only):

ClassificationModel(network) -> <dnn_ClassificationModel object>

Variant 2:

Create classification model from network represented in one of the supported formats. The order of the model and config arguments does not matter.

Positional Arguments
  • model: String.

    Binary file containing the trained weights.

Keyword Arguments
  • config: String.

    Text file containing the network configuration.

Return

Python prototype (for reference only):

ClassificationModel(model[, config]) -> <dnn_ClassificationModel object>

classificationModel(model, opts)

@spec classificationModel(binary(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}

Create classification model from network represented in one of the supported formats. The order of the model and config arguments does not matter.

Positional Arguments
  • model: String.

    Binary file containing the trained weights.

Keyword Arguments
  • config: String.

    Text file containing the network configuration.

Return

Python prototype (for reference only):

ClassificationModel(model[, config]) -> <dnn_ClassificationModel object>
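
Example (a minimal sketch; the file names below are hypothetical placeholders, not files shipped with Evision):

    # single-file format such as ONNX: only the weights file is needed
    model = Evision.DNN.ClassificationModel.classificationModel("classifier.onnx")

    # two-file format such as Caffe: pass the config via the keyword list;
    # the order of model and config does not matter
    model =
      Evision.DNN.ClassificationModel.classificationModel(
        "classifier.caffemodel",
        config: "classifier.prototxt"
      )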

classify(self, frame)

@spec classify(t(), Evision.Mat.maybe_mat_in()) ::
  {integer(), number()} | {:error, String.t()}

classify

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()
  • frame: Evision.Mat
Return
  • classId: int
  • conf: float

Has overloading in C++

Python prototype (for reference only):

classify(frame) -> classId, conf
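
Example (a hedged sketch; "input.jpg" and the model variable are placeholders, and the frame is read with Evision.imread/1):

    frame = Evision.imread("input.jpg")

    case Evision.DNN.ClassificationModel.classify(model, frame) do
      {class_id, conf} when is_integer(class_id) ->
        IO.puts("predicted class #{class_id} with confidence #{conf}")

      {:error, reason} ->
        IO.puts("classification failed: #{reason}")
    end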

getEnableSoftmaxPostProcessing(self)

@spec getEnableSoftmaxPostProcessing(t()) :: boolean() | {:error, String.t()}

Get enable/disable softmax post processing option.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()
Return
  • retval: bool

This option defaults to false, i.e. softmax post-processing is not applied within the classify() function.

Python prototype (for reference only):

getEnableSoftmaxPostProcessing() -> retval

predict(self, frame)

@spec predict(t(), Evision.Mat.maybe_mat_in()) ::
  [Evision.Mat.t()] | {:error, String.t()}

Given the input frame, create an input blob, run the network, and return the output blobs.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()
  • frame: Evision.Mat
Return
  • outs: [Evision.Mat].

    Allocated output blobs, which will store results of the computation.

Python prototype (for reference only):

predict(frame[, outs]) -> outs

predict(self, frame, opts)

@spec predict(t(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  [Evision.Mat.t()] | {:error, String.t()}

Given the input frame, create an input blob, run the network, and return the output blobs.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()
  • frame: Evision.Mat
Return
  • outs: [Evision.Mat].

    Allocated output blobs, which will store results of the computation.

Python prototype (for reference only):

predict(frame[, outs]) -> outs
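
Example (a sketch under the same placeholder assumptions as above; unlike classify/2, predict returns the raw output blobs):

    frame = Evision.imread("input.jpg")
    outs = Evision.DNN.ClassificationModel.predict(model, frame)
    # outs is a list of Evision.Mat blobs, typically one per output layer of the network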

setEnableSoftmaxPostProcessing(self, enable)

@spec setEnableSoftmaxPostProcessing(t(), boolean()) :: t() | {:error, String.t()}

Set enable/disable softmax post processing option.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()

  • enable: bool.

    Enable or disable softmax post-processing within the classify() function.

Return

If this option is true, softmax is applied after forward inference within the classify() function to convert the confidences to the range [0.0, 1.0]. This function allows you to toggle this behavior. Set it to true when the model does not contain a softmax layer.

Python prototype (for reference only):

setEnableSoftmaxPostProcessing(enable) -> retval
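
Example (a minimal sketch of reading and toggling the option on an existing model):

    # defaults to false; softmax is then not applied inside classify()
    enabled? = Evision.DNN.ClassificationModel.getEnableSoftmaxPostProcessing(model)

    # enable it when the network itself does not end with a softmax layer
    model = Evision.DNN.ClassificationModel.setEnableSoftmaxPostProcessing(model, true)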

setInputCrop(self, crop)

@spec setInputCrop(t(), boolean()) :: Evision.DNN.Model.t() | {:error, String.t()}

Set flag crop for frame.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()

  • crop: bool.

    Flag which indicates whether the image will be cropped after resizing.

Return

Python prototype (for reference only):

setInputCrop(crop) -> retval

setInputMean(self, mean)

@spec setInputMean(
  t(),
  {number()}
  | {number(), number()}
  | {number(), number(), number()}
  | {number(), number(), number(), number()}
) :: Evision.DNN.Model.t() | {:error, String.t()}

Set mean value for frame.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()

  • mean: Scalar.

    Scalar with mean values which are subtracted from channels.

Return

Python prototype (for reference only):

setInputMean(mean) -> retval
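
Example (a sketch; the BGR mean values shown are the classic Caffe ImageNet ones, used purely for illustration):

    # subtract a per-channel mean from every input frame
    Evision.DNN.ClassificationModel.setInputMean(model, {104.0, 117.0, 123.0})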

setInputParams(self)

@spec setInputParams(t()) :: :ok | {:error, String.t()}

Set preprocessing parameters for frame.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()
Keyword Arguments
  • scale: double.

    Multiplier for frame values.

  • size: Size.

    New input size.

  • mean: Scalar.

    Scalar with mean values which are subtracted from channels.

  • swapRB: bool.

    Flag which indicates that the first and last channels should be swapped.

  • crop: bool.

    Flag which indicates whether the image will be cropped after resizing. The resulting blob is computed as blob(n, c, y, x) = scale * (resize(frame(y, x, c)) - mean(c)).

Python prototype (for reference only):

setInputParams([, scale[, size[, mean[, swapRB[, crop]]]]]) -> None

setInputParams(self, opts)

@spec setInputParams(t(), [{atom(), term()}, ...] | nil) :: :ok | {:error, String.t()}

Set preprocessing parameters for frame.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()
Keyword Arguments
  • scale: double.

    Multiplier for frame values.

  • size: Size.

    New input size.

  • mean: Scalar.

    Scalar with mean values which are subtracted from channels.

  • swapRB: bool.

    Flag which indicates that the first and last channels should be swapped.

  • crop: bool.

    Flag which indicates whether the image will be cropped after resizing. The resulting blob is computed as blob(n, c, y, x) = scale * (resize(frame(y, x, c)) - mean(c)).

Python prototype (for reference only):

setInputParams([, scale[, size[, mean[, swapRB[, crop]]]]]) -> None
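
Example (a sketch with illustrative preprocessing values, not values required by any particular model):

    Evision.DNN.ClassificationModel.setInputParams(model,
      scale: 1.0,
      size: {224, 224},
      mean: {104.0, 117.0, 123.0},
      swapRB: true,
      crop: false
    )
    # returns :ok on success, per the spec above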

setInputScale(self, scale)

@spec setInputScale(t(), number()) :: Evision.DNN.Model.t() | {:error, String.t()}

Set scalefactor value for frame.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()

  • scale: double.

    Multiplier for frame values.

Return

Python prototype (for reference only):

setInputScale(scale) -> retval

setInputSize(self, size)

@spec setInputSize(
  t(),
  {number(), number()}
) :: Evision.DNN.Model.t() | {:error, String.t()}

Set input size for frame.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()

  • size: Size.

    New input size.

Return

Note: If the shape of the new blob is less than 0, then the frame size is not changed.

Python prototype (for reference only):

setInputSize(size) -> retval

setInputSize(self, width, height)

@spec setInputSize(t(), integer(), integer()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

setInputSize

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()

  • width: int.

    New input width.

  • height: int.

    New input height.

Return

Has overloading in C++

Python prototype (for reference only):

setInputSize(width, height) -> retval
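
Example (both arities set a 224x224 input size; per the specs above they return the underlying Evision.DNN.Model):

    Evision.DNN.ClassificationModel.setInputSize(model, {224, 224})
    Evision.DNN.ClassificationModel.setInputSize(model, 224, 224)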

setInputSwapRB(self, swapRB)

@spec setInputSwapRB(t(), boolean()) :: Evision.DNN.Model.t() | {:error, String.t()}

Set flag swapRB for frame.

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()

  • swapRB: bool.

    Flag which indicates that the first and last channels should be swapped.

Return

Python prototype (for reference only):

setInputSwapRB(swapRB) -> retval

setPreferableBackend(self, backendId)

@spec setPreferableBackend(t(), integer()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

setPreferableBackend

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()
  • backendId: dnn_Backend
Return

Python prototype (for reference only):

setPreferableBackend(backendId) -> retval

setPreferableTarget(self, targetId)

@spec setPreferableTarget(t(), integer()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

setPreferableTarget

Positional Arguments
  • self: Evision.DNN.ClassificationModel.t()
  • targetId: dnn_Target
Return

Python prototype (for reference only):

setPreferableTarget(targetId) -> retval
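
Example (a hedged sketch; the integer ids map to OpenCV's cv::dnn::Backend and cv::dnn::Target enums, where 0 corresponds to DNN_BACKEND_DEFAULT and DNN_TARGET_CPU respectively):

    # prefer the default backend and run inference on the CPU
    Evision.DNN.ClassificationModel.setPreferableBackend(model, 0)
    Evision.DNN.ClassificationModel.setPreferableTarget(model, 0)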