Evision.DNN.TextDetectionModel (Evision v0.2.9)

Summary

Types

t()

Type that represents a DNN.TextDetectionModel struct.

Functions

  • predict/2, predict/3: Given the input frame, create the input blob, run the net, and return the output blobs.

  • setInputCrop/2: Set the crop flag for frame.

  • setInputMean/2: Set the mean value for frame.

  • setInputParams/1, setInputParams/2: Set preprocessing parameters for frame.

  • setInputScale/2: Set the scalefactor value for frame.

  • setInputSize/2: Set the input size for frame.

  • setInputSwapRB/2: Set the swapRB flag for frame.

  • setOutputNames/2: Set the output names for frame.

Types

@type t() :: %Evision.DNN.TextDetectionModel{ref: reference()}

Type that represents a DNN.TextDetectionModel struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

@spec detect(Keyword.t()) :: any() | {:error, String.t()}
@spec detect(t(), Evision.Mat.maybe_mat_in()) ::
  [[{number(), number()}]] | {:error, String.t()}

detect

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()
  • frame: Evision.Mat
Return
  • detections: [[Point]]

Has overloading in C++

Python prototype (for reference only):

detect(frame) -> detections
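The detections come back as a list of polygons, each a list of `{x, y}` corner points. As a pure-Elixir sketch of working with that shape, each quadrilateral can be reduced to an axis-aligned bounding box (the `detections` literal below is illustrative stand-in data, not real model output):

```elixir
# Each detection from detect/2 is a list of {x, y} corner points.
# Reduce each polygon to an axis-aligned {x, y, w, h} bounding box.
to_bbox = fn corners ->
  xs = Enum.map(corners, fn {x, _y} -> x end)
  ys = Enum.map(corners, fn {_x, y} -> y end)
  {x0, x1} = Enum.min_max(xs)
  {y0, y1} = Enum.min_max(ys)
  {x0, y0, x1 - x0, y1 - y0}
end

# Illustrative stand-in for detect/2 output: one quadrilateral.
detections = [[{10, 20}, {50, 20}, {50, 40}, {10, 40}]]
boxes = Enum.map(detections, to_bbox)
# boxes == [{10, 20, 40, 20}]
```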
detectTextRectangles(named_args)

@spec detectTextRectangles(Keyword.t()) :: any() | {:error, String.t()}

detectTextRectangles(self, frame)

@spec detectTextRectangles(t(), Evision.Mat.maybe_mat_in()) ::
  {[{{number(), number()}, {number(), number()}, number()}], [number()]}
  | {:error, String.t()}

Performs detection

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()

  • frame: Evision.Mat.

    the input image

Return
  • detections: [{centre={x, y}, size={s1, s2}, angle}].

    array with detections' RotationRect results

  • confidences: [float].

    array with detection confidences

Given the input frame, prepares the network input, runs network inference, post-processes the network output, and returns the resulting detections. Each result is a rotated rectangle. Note: results may be inaccurate in the case of strong perspective transformations.

Python prototype (for reference only):

detectTextRectangles(frame) -> detections, confidences
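Since the detections and confidences come back as parallel lists, a common follow-up is confidence thresholding. A pure-Elixir sketch (the literals are illustrative stand-ins for real output):

```elixir
# Illustrative stand-ins shaped like detectTextRectangles/2 output:
# rotated rectangles as {center, size, angle}, plus parallel confidences.
detections = [
  {{40.0, 25.0}, {60.0, 18.0}, -3.5},
  {{90.0, 70.0}, {30.0, 12.0}, 12.0}
]
confidences = [0.92, 0.41]

# Keep only rectangles whose confidence clears the threshold.
threshold = 0.5

kept =
  detections
  |> Enum.zip(confidences)
  |> Enum.filter(fn {_rect, conf} -> conf >= threshold end)
  |> Enum.map(fn {rect, _conf} -> rect end)
# kept == [{{40.0, 25.0}, {60.0, 18.0}, -3.5}]
```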
enableWinograd(named_args)

@spec enableWinograd(Keyword.t()) :: any() | {:error, String.t()}

enableWinograd(self, useWinograd)

@spec enableWinograd(t(), boolean()) :: Evision.DNN.Model.t() | {:error, String.t()}

enableWinograd

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()
  • useWinograd: bool
Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

enableWinograd(useWinograd) -> retval
@spec predict(Keyword.t()) :: any() | {:error, String.t()}
@spec predict(t(), Evision.Mat.maybe_mat_in()) ::
  [Evision.Mat.t()] | {:error, String.t()}

Given the input frame, create the input blob, run the net, and return the output blobs.

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()
  • frame: Evision.Mat
Return
  • outs: [Evision.Mat].

    Allocated output blobs, which will store results of the computation.

Python prototype (for reference only):

predict(frame[, outs]) -> outs
predict(self, frame, opts)

@spec predict(t(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  [Evision.Mat.t()] | {:error, String.t()}

Given the input frame, create the input blob, run the net, and return the output blobs.

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()
  • frame: Evision.Mat
Return
  • outs: [Evision.Mat].

    Allocated output blobs, which will store results of the computation.

Python prototype (for reference only):

predict(frame[, outs]) -> outs
setInputCrop(named_args)

@spec setInputCrop(Keyword.t()) :: any() | {:error, String.t()}

setInputCrop(self, crop)

@spec setInputCrop(t(), boolean()) :: Evision.DNN.Model.t() | {:error, String.t()}

Set the crop flag for frame.

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()

  • crop: bool.

Flag indicating whether the image will be cropped after resize.

Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setInputCrop(crop) -> retval
setInputMean(named_args)

@spec setInputMean(Keyword.t()) :: any() | {:error, String.t()}

setInputMean(self, mean)

@spec setInputMean(t(), Evision.scalar()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

Set the mean value for frame.

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()

  • mean: Evision.scalar().

    Scalar with mean values which are subtracted from channels.

Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setInputMean(mean) -> retval
setInputParams(named_args)

@spec setInputParams(Keyword.t()) :: any() | {:error, String.t()}
@spec setInputParams(t()) :: Evision.DNN.Model.t() | {:error, String.t()}

Set preprocessing parameters for frame.

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()
Keyword Arguments
  • scale: double.

    Multiplier for frame values.

  • size: Size.

    New input size.

  • mean: Evision.scalar().

    Scalar with mean values which are subtracted from channels.

  • swapRB: bool.

Flag indicating that the first and last channels should be swapped.

  • crop: bool.

Flag indicating whether the image will be cropped after resize. The resulting blob is computed as blob(n, c, y, x) = scale * resize(frame(y, x, c)) - mean(c).

Python prototype (for reference only):

setInputParams([, scale[, size[, mean[, swapRB[, crop]]]]]) -> None
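The crop note above gives the per-pixel preprocessing formula. A pure-Elixir sketch of that arithmetic for a single (already resized) BGR pixel, with illustrative scale and mean values rather than recommended defaults:

```elixir
# blob(n, c, y, x) = scale * resize(frame(y, x, c)) - mean(c)
scale = 0.5
mean = {10.0, 10.0, 10.0}  # illustrative per-channel means
pixel = {200, 150, 100}    # one BGR pixel after resize

blob_values = for c <- 0..2, do: scale * elem(pixel, c) - elem(mean, c)
# blob_values == [90.0, 65.0, 40.0]
```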
setInputParams(self, opts)

@spec setInputParams(
  t(),
  [crop: term(), mean: term(), scale: term(), size: term(), swapRB: term()]
  | nil
) :: Evision.DNN.Model.t() | {:error, String.t()}

Set preprocessing parameters for frame.

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()
Keyword Arguments
  • scale: double.

    Multiplier for frame values.

  • size: Size.

    New input size.

  • mean: Evision.scalar().

    Scalar with mean values which are subtracted from channels.

  • swapRB: bool.

Flag indicating that the first and last channels should be swapped.

  • crop: bool.

Flag indicating whether the image will be cropped after resize. The resulting blob is computed as blob(n, c, y, x) = scale * resize(frame(y, x, c)) - mean(c).

Python prototype (for reference only):

setInputParams([, scale[, size[, mean[, swapRB[, crop]]]]]) -> None
setInputScale(named_args)

@spec setInputScale(Keyword.t()) :: any() | {:error, String.t()}

setInputScale(self, scale)

@spec setInputScale(t(), Evision.scalar()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

Set the scalefactor value for frame.

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()

  • scale: Evision.scalar().

    Multiplier for frame values.

Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setInputScale(scale) -> retval
setInputSize(named_args)

@spec setInputSize(Keyword.t()) :: any() | {:error, String.t()}

setInputSize(self, size)

@spec setInputSize(
  t(),
  {number(), number()}
) :: Evision.DNN.Model.t() | {:error, String.t()}

Set the input size for frame.

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()

  • size: Size.

    New input size.

Return
  • retval: Evision.DNN.Model.t()

Note: If the shape of the new blob is less than 0, the frame size does not change.

Python prototype (for reference only):

setInputSize(size) -> retval
setInputSize(self, width, height)

@spec setInputSize(t(), integer(), integer()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

setInputSize

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()

  • width: integer().

    New input width.

  • height: integer().

    New input height.

Return
  • retval: Evision.DNN.Model.t()

Has overloading in C++

Python prototype (for reference only):

setInputSize(width, height) -> retval
setInputSwapRB(named_args)

@spec setInputSwapRB(Keyword.t()) :: any() | {:error, String.t()}

setInputSwapRB(self, swapRB)

@spec setInputSwapRB(t(), boolean()) :: Evision.DNN.Model.t() | {:error, String.t()}

Set the swapRB flag for frame.

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()

  • swapRB: bool.

Flag indicating that the first and last channels should be swapped.

Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setInputSwapRB(swapRB) -> retval
setOutputNames(named_args)

@spec setOutputNames(Keyword.t()) :: any() | {:error, String.t()}

setOutputNames(self, outNames)

@spec setOutputNames(t(), [binary()]) :: Evision.DNN.Model.t() | {:error, String.t()}

Set the output names for frame.

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()

  • outNames: [String].

    Names for output layers.

Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setOutputNames(outNames) -> retval
setPreferableBackend(named_args)

@spec setPreferableBackend(Keyword.t()) :: any() | {:error, String.t()}

setPreferableBackend(self, backendId)

@spec setPreferableBackend(t(), Evision.DNN.Backend.enum()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

setPreferableBackend

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()
  • backendId: dnn_Backend
Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setPreferableBackend(backendId) -> retval
setPreferableTarget(named_args)

@spec setPreferableTarget(Keyword.t()) :: any() | {:error, String.t()}

setPreferableTarget(self, targetId)

@spec setPreferableTarget(t(), Evision.DNN.Target.enum()) ::
  Evision.DNN.Model.t() | {:error, String.t()}

setPreferableTarget

Positional Arguments
  • self: Evision.DNN.TextDetectionModel.t()
  • targetId: dnn_Target
Return
  • retval: Evision.DNN.Model.t()

Python prototype (for reference only):

setPreferableTarget(targetId) -> retval
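Taken together, the input-preprocessing and backend/target setters are typically applied once after the model is constructed. A minimal configuration sketch, assuming `model` is an already-built text detection model (construction is outside this module); every parameter value shown is illustrative rather than a recommended default, and this will not run without the Evision NIFs and a loaded model:

```elixir
# Preprocessing: scale, input size, mean subtraction, channel swap, no crop.
model =
  Evision.DNN.TextDetectionModel.setInputParams(model,
    scale: 1.0 / 255.0,
    size: {736, 736},
    mean: {123.68, 116.78, 103.94},
    swapRB: true,
    crop: false
  )

backend_id = 0  # illustrative; use an Evision.DNN.Backend enum value
target_id = 0   # illustrative; use an Evision.DNN.Target enum value

model = Evision.DNN.TextDetectionModel.setPreferableBackend(model, backend_id)
model = Evision.DNN.TextDetectionModel.setPreferableTarget(model, target_id)
```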