Evision.DNN.DetectionModel (Evision v0.1.38)

Summary

Types

t()

Type that represents a DNN.DetectionModel struct.

Functions

detect(self, frame)

  Given the input frame, create an input blob, run the net, and return the resulting detections.

detect(self, frame, opts)

  Given the input frame, create an input blob, run the net, and return the resulting detections.

detectionModel(model)

  Variant 1: create a model from a deep learning network. Variant 2: create a detection model from a network represented in one of the supported formats; the order of the model and config arguments does not matter.

detectionModel(model, opts)

  Create a detection model from a network represented in one of the supported formats; the order of the model and config arguments does not matter.

enableWinograd(self, useWinograd)

  Enable or disable the Winograd compute branch.

getNmsAcrossClasses(self)

  Getter for nmsAcrossClasses. Defaults to false, so non-maximum suppression in detect() runs per class only.

predict(self, frame)

  Given the input frame, create an input blob, run the net, and return the output blobs.

predict(self, frame, opts)

  Given the input frame, create an input blob, run the net, and return the output blobs.

setInputCrop(self, crop)

  Set the crop flag for the frame.

setInputMean(self, mean)

  Set the mean value for the frame.

setInputParams(self)

  Set preprocessing parameters for the frame.

setInputParams(self, opts)

  Set preprocessing parameters for the frame.

setInputScale(self, scale)

  Set the scalefactor value for the frame.

setInputSize(self, size)

  Set the input size for the frame.

setInputSize(self, width, height)

  Set the input size for the frame.

setInputSwapRB(self, swapRB)

  Set the swapRB flag for the frame.

setNmsAcrossClasses(self, value)

  Toggle whether non-maximum suppression in detect() runs across classes; defaults to false (per class).

setPreferableBackend(self, backendId)

  Set the preferred computation backend.

setPreferableTarget(self, targetId)

  Set the preferred computation target.

Types

@type t() :: %Evision.DNN.DetectionModel{ref: reference()}

Type that represents a DNN.DetectionModel struct.

  • ref. reference()

    The underlying Erlang resource variable.

Functions

detect(self, frame)

@spec detect(t(), Evision.Mat.maybe_mat_in()) ::
  {[integer()], [number()], [{number(), number(), number(), number()}]}
  | {:error, String.t()}

Given the input frame, create an input blob, run the net, and return the resulting detections.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()
  • frame: Evision.Mat.t()
Keyword Arguments
  • confThreshold: float.

    A threshold used to filter boxes by confidences.

  • nmsThreshold: float.

    A threshold used in non maximum suppression.

Return
  • classIds: [int].

    Class indexes in result detection.

  • confidences: [float].

    A set of corresponding confidences.

  • boxes: [Rect].

    A set of bounding boxes.

Python prototype (for reference only):

detect(frame[, confThreshold[, nmsThreshold]]) -> classIds, confidences, boxes
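A minimal usage sketch from Elixir; the weight/config file names and the image are hypothetical placeholders:

# Build a detection model from hypothetical files, read a frame, and detect.
model = Evision.DNN.DetectionModel.detectionModel("detector.weights", config: "detector.cfg")
frame = Evision.imread("street.jpg")
{class_ids, confidences, boxes} = Evision.DNN.DetectionModel.detect(model, frame)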
detect(self, frame, opts)

@spec detect(t(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  {[integer()], [number()], [{number(), number(), number(), number()}]}
  | {:error, String.t()}

Given the input frame, create an input blob, run the net, and return the resulting detections.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()
  • frame: Evision.Mat.t()
Keyword Arguments
  • confThreshold: float.

    A threshold used to filter boxes by confidences.

  • nmsThreshold: float.

    A threshold used in non maximum suppression.

Return
  • classIds: [int].

    Class indexes in result detection.

  • confidences: [float].

    A set of corresponding confidences.

  • boxes: [Rect].

    A set of bounding boxes.

Python prototype (for reference only):

detect(frame[, confThreshold[, nmsThreshold]]) -> classIds, confidences, boxes
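The same call with the thresholds passed explicitly as keyword options; the values are illustrative, not recommendations:

# Keep boxes with confidence >= 0.5, then apply NMS with a 0.4 threshold.
{class_ids, confidences, boxes} =
  Evision.DNN.DetectionModel.detect(model, frame, confThreshold: 0.5, nmsThreshold: 0.4)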
detectionModel(model)

@spec detectionModel(Evision.DNN.Net.t()) :: t() | {:error, String.t()}
@spec detectionModel(binary()) :: t() | {:error, String.t()}

Variant 1:

Create a model from a deep learning network.

Positional Arguments
  • network: Evision.DNN.Net.t().

    Net object.

Return
  • self: Evision.DNN.DetectionModel.t()

Python prototype (for reference only):

DetectionModel(network) -> <dnn_DetectionModel object>

Variant 2:

Create a detection model from a network represented in one of the supported formats. The order of the model and config arguments does not matter.

Positional Arguments
  • model: String.

    Binary file containing the trained weights.

Keyword Arguments
  • config: String.

    Text file containing the network configuration.

Return
  • self: Evision.DNN.DetectionModel.t()

Python prototype (for reference only):

DetectionModel(model[, config]) -> <dnn_DetectionModel object>
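A sketch of variant 1, wrapping an existing Evision.DNN.Net; the ONNX file name is a placeholder:

# Read a network from disk first, then wrap it in a DetectionModel.
net = Evision.DNN.readNet("detector.onnx")
model = Evision.DNN.DetectionModel.detectionModel(net)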
detectionModel(model, opts)

@spec detectionModel(binary(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}

Create a detection model from a network represented in one of the supported formats. The order of the model and config arguments does not matter.

Positional Arguments
  • model: String.

    Binary file containing the trained weights.

Keyword Arguments
  • config: String.

    Text file containing the network configuration.

Return
  • self: Evision.DNN.DetectionModel.t()

Python prototype (for reference only):

DetectionModel(model[, config]) -> <dnn_DetectionModel object>
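A sketch of the file-based constructor; the Caffe-style file names are placeholders:

# Weights plus a text config; the argument order does not matter.
model =
  Evision.DNN.DetectionModel.detectionModel("mobilenet.caffemodel",
    config: "mobilenet.prototxt"
  )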
enableWinograd(self, useWinograd)

@spec enableWinograd(t(), boolean()) :: Evision.DNN.Model.t() | {:error, String.t()}

Enable or disable the Winograd compute branch.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()
  • useWinograd: bool
Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

enableWinograd(useWinograd) -> retval
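For example, to switch the Winograd branch off (e.g. when chasing small numeric differences between runs):

# Per the spec above, the setter returns the updated model.
model = Evision.DNN.DetectionModel.enableWinograd(model, false)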
getNmsAcrossClasses(self)

@spec getNmsAcrossClasses(t()) :: boolean() | {:error, String.t()}

Getter for nmsAcrossClasses. This variable defaults to false, so when non-maximum suppression is applied during detect(), it runs per class only.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()
Return
  • retval: bool

Python prototype (for reference only):

getNmsAcrossClasses() -> retval
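For example:

# false by default, i.e. NMS is applied separately within each class.
Evision.DNN.DetectionModel.getNmsAcrossClasses(model)
#=> false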
predict(self, frame)

@spec predict(t(), Evision.Mat.maybe_mat_in()) ::
  [Evision.Mat.t()] | {:error, String.t()}

Given the input frame, create an input blob, run the net, and return the output blobs.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()
  • frame: Evision.Mat.t()
Return
  • outs: [Evision.Mat].

    Allocated output blobs, which will store results of the computation.

Python prototype (for reference only):

predict(frame[, outs]) -> outs
predict(self, frame, opts)

@spec predict(t(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  [Evision.Mat.t()] | {:error, String.t()}

Given the input frame, create an input blob, run the net, and return the output blobs.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()
  • frame: Evision.Mat.t()
Return
  • outs: [Evision.Mat].

    Allocated output blobs, which will store results of the computation.

Python prototype (for reference only):

predict(frame[, outs]) -> outs
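A sketch of fetching the raw network outputs instead of decoded detections:

# Returns the output blobs as a list of Evision.Mat structs.
outs = Evision.DNN.DetectionModel.predict(model, frame)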
setInputCrop(self, crop)

@spec setInputCrop(t(), boolean()) :: Evision.DNN.Model.t() | {:error, String.t()}

Set the crop flag for the frame.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()

  • crop: bool.

    Flag indicating whether the image will be cropped after resizing.

Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setInputCrop(crop) -> retval
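For example:

# Resize preserving aspect ratio, then center-crop, instead of stretching.
model = Evision.DNN.DetectionModel.setInputCrop(model, true)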
setInputMean(self, mean)

@spec setInputMean(
  t(),
  {number()}
  | {number(), number()}
  | {number(), number(), number()}
  | {number(), number(), number(), number()}
) :: Evision.DNN.Model.t() | {:error, String.t()}

Set the mean value for the frame.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()

  • mean: Scalar.

    Scalar with mean values which are subtracted from the channels.

Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setInputMean(mean) -> retval
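For example, with per-channel mean subtraction; the values are only illustrative:

# Subtract a BGR mean from each channel before inference.
model = Evision.DNN.DetectionModel.setInputMean(model, {104.0, 117.0, 123.0})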
setInputParams(self)

@spec setInputParams(t()) :: t() | {:error, String.t()}

Set preprocessing parameters for the frame.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()
Keyword Arguments
  • scale: double.

    Multiplier for frame values.

  • size: Size.

    New input size.

  • mean: Scalar.

    Scalar with mean values which are subtracted from channels.

  • swapRB: bool.

    Flag indicating whether to swap the first and last channels.

  • crop: bool.

    Flag indicating whether the image will be cropped after resizing.

The blob is computed as: blob(n, c, y, x) = scale * resize(frame(y, x, c)) - mean(c)

Python prototype (for reference only):

setInputParams([, scale[, size[, mean[, swapRB[, crop]]]]]) -> None
setInputParams(self, opts)

@spec setInputParams(t(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}

Set preprocessing parameters for the frame.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()
Keyword Arguments
  • scale: double.

    Multiplier for frame values.

  • size: Size.

    New input size.

  • mean: Scalar.

    Scalar with mean values which are subtracted from channels.

  • swapRB: bool.

    Flag indicating whether to swap the first and last channels.

  • crop: bool.

    Flag indicating whether the image will be cropped after resizing.

The blob is computed as: blob(n, c, y, x) = scale * resize(frame(y, x, c)) - mean(c)

Python prototype (for reference only):

setInputParams([, scale[, size[, mean[, swapRB[, crop]]]]]) -> None
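A sketch configuring all preprocessing in one call; the values are typical for a 300x300 SSD-style detector and are only illustrative:

model =
  Evision.DNN.DetectionModel.setInputParams(model,
    scale: 1.0 / 127.5,
    size: {300, 300},
    mean: {127.5, 127.5, 127.5},
    swapRB: true,
    crop: false
  )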
setInputScale(self, scale)

@spec setInputScale(
  t(),
  {number()}
  | {number(), number()}
  | {number(), number(), number()}
  | {number(), number(), number(), number()}
) :: Evision.DNN.Model.t() | {:error, String.t()}

Set the scalefactor value for the frame.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()

  • scale: Scalar.

    Multiplier for frame values.

Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setInputScale(scale) -> retval
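For example, to scale pixel values into [0, 1]; per the spec above, the scalefactor is passed as a scalar tuple:

model = Evision.DNN.DetectionModel.setInputScale(model, {1.0 / 255.0})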
setInputSize(self, size)

@spec setInputSize(
  t(),
  {number(), number()}
) :: Evision.DNN.Model.t() | {:error, String.t()}

Set the input size for the frame.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()

  • size: Size.

    New input size.

Return
  • retval: Evision.DNN.Model.t()

Note: if a dimension of the new size is less than 0, the frame size is not changed.

Python prototype (for reference only):

setInputSize(size) -> retval
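For example:

# 416x416 is a size commonly used by YOLO-family detectors (illustrative).
model = Evision.DNN.DetectionModel.setInputSize(model, {416, 416})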
setInputSize(self, width, height)

@spec setInputSize(t(), integer(), integer()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

setInputSize

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()

  • width: int.

    New input width.

  • height: int.

    New input height.

Return
  • retval: Evision.DNN.Model.t()

Has overloading in C++

Python prototype (for reference only):

setInputSize(width, height) -> retval
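The same setting through the width/height overload:

model = Evision.DNN.DetectionModel.setInputSize(model, 416, 416)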
setInputSwapRB(self, swapRB)

@spec setInputSwapRB(t(), boolean()) :: Evision.DNN.Model.t() | {:error, String.t()}

Set the swapRB flag for the frame.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()

  • swapRB: bool.

    Flag indicating whether to swap the first and last channels.

Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setInputSwapRB(swapRB) -> retval
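For example:

# OpenCV loads images as BGR; swap to RGB for networks trained on RGB input.
model = Evision.DNN.DetectionModel.setInputSwapRB(model, true)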
setNmsAcrossClasses(self, value)

@spec setNmsAcrossClasses(t(), boolean()) :: t() | {:error, String.t()}

nmsAcrossClasses defaults to false, so when non-maximum suppression is applied during detect(), it runs per class. This function allows you to toggle that behaviour.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()

  • value: bool.

The new value for nmsAcrossClasses.

Return
  • retval: Evision.DNN.DetectionModel.t()

Python prototype (for reference only):

setNmsAcrossClasses(value) -> retval
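For example, to suppress overlapping boxes across all classes rather than within each class:

model = Evision.DNN.DetectionModel.setNmsAcrossClasses(model, true)
Evision.DNN.DetectionModel.getNmsAcrossClasses(model)
#=> true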
setPreferableBackend(self, backendId)

@spec setPreferableBackend(t(), integer()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

Set the preferred computation backend.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()
  • backendId: dnn_Backend
Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setPreferableBackend(backendId) -> retval
setPreferableTarget(self, targetId)

@spec setPreferableTarget(t(), integer()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

Set the preferred computation target.

Positional Arguments
  • self: Evision.DNN.DetectionModel.t()
  • targetId: dnn_Target
Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setPreferableTarget(targetId) -> retval
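A sketch pinning inference to the plain OpenCV backend on CPU. The integer ids come from OpenCV's dnn Backend/Target enums, where DNN_BACKEND_OPENCV = 3 and DNN_TARGET_CPU = 0 in current OpenCV releases:

# Select the backend and target; each setter returns the updated model.
model = Evision.DNN.DetectionModel.setPreferableBackend(model, 3)
model = Evision.DNN.DetectionModel.setPreferableTarget(model, 0)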