Evision.DNN.Layer (Evision v0.1.38)

Summary

Types

t()

Type that represents a DNN.Layer struct.

Functions

clear(self)

Clears the algorithm state.

empty(self)

Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read).

finalize(self, inputs)

Computes and sets internal parameters according to inputs, outputs and blobs.

finalize(self, inputs, opts)

Computes and sets internal parameters according to inputs, outputs and blobs.

getDefaultName(self)

Returns the algorithm string identifier.

outputNameToIndex(self, outputName)

Returns index of output blob in output array.

read(self, fn_)

Reads algorithm parameters from a file storage.

run(self, inputs, internals)

Allocates layer and computes output.

run(self, inputs, internals, opts)

Allocates layer and computes output.

write(self, fs)

Stores algorithm parameters in a file storage.

Types

@type t() :: %Evision.DNN.Layer{ref: reference()}

Type that represents a DNN.Layer struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

clear(self)

@spec clear(t()) :: t() | {:error, String.t()}

Clears the algorithm state

Positional Arguments
  • self: Evision.DNN.Layer.t()

Python prototype (for reference only):

clear() -> None

empty(self)

@spec empty(t()) :: boolean() | {:error, String.t()}

Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read).

Positional Arguments
  • self: Evision.DNN.Layer.t()
Return
  • retval: bool

Python prototype (for reference only):

empty() -> retval
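Illustrative sketch (not part of the generated docs): `layer` is a placeholder Evision.DNN.Layer assumed to come from a loaded network.

# check whether the layer's algorithm state is empty, then reset it
is_empty = Evision.DNN.Layer.empty(layer)
layer = Evision.DNN.Layer.clear(layer)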
finalize(self, inputs)

@spec finalize(t(), [Evision.Mat.maybe_mat_in()]) ::
  [Evision.Mat.t()] | {:error, String.t()}

Computes and sets internal parameters according to inputs, outputs and blobs.

Positional Arguments
  • self: Evision.DNN.Layer.t()
  • inputs: [Evision.Mat]
Return
  • outputs: [Evision.Mat].

    vector of already allocated output blobs

This method is called after the network has allocated all memory for input and output blobs and before inferencing.

Python prototype (for reference only):

finalize(inputs[, outputs]) -> outputs
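Illustrative sketch (assumptions: `layer` is a layer taken from a network that has already allocated its blobs, and `input_blob` is an Evision.Mat prepared elsewhere):

# compute the layer's internal parameters for the given input blobs
outputs = Evision.DNN.Layer.finalize(layer, [input_blob])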
finalize(self, inputs, opts)

@spec finalize(t(), [Evision.Mat.maybe_mat_in()], [{atom(), term()}, ...] | nil) ::
  [Evision.Mat.t()] | {:error, String.t()}

Computes and sets internal parameters according to inputs, outputs and blobs.

Positional Arguments
  • self: Evision.DNN.Layer.t()
  • inputs: [Evision.Mat]
Return
  • outputs: [Evision.Mat].

    vector of already allocated output blobs

This method is called after the network has allocated all memory for input and output blobs and before inferencing.

Python prototype (for reference only):

finalize(inputs[, outputs]) -> outputs
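The trailing opts keyword list carries the optional argument from the Python prototype. Assuming the keyword mirrors that argument's name, a call could look like:

# hypothetical: pre-allocated output blobs passed via the keyword list
outputs = Evision.DNN.Layer.finalize(layer, [input_blob], outputs: preallocated_outputs)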
get_blobs(self)

@spec get_blobs(t()) :: [Evision.Mat.t()]

get_name(self)

@spec get_name(t()) :: binary()

get_preferableTarget(self)

@spec get_preferableTarget(t()) :: integer()

get_type(self)

@spec get_type(t()) :: binary()
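These accessors read the layer's properties. Illustrative sketch (`layer` is a placeholder assumed to come from a loaded network):

name   = Evision.DNN.Layer.get_name(layer)             # layer name, e.g. "conv1"
type   = Evision.DNN.Layer.get_type(layer)             # layer type string, e.g. "Convolution"
blobs  = Evision.DNN.Layer.get_blobs(layer)            # learned parameter blobs as Evision.Mat structs
target = Evision.DNN.Layer.get_preferableTarget(layer) # preferred computation target id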
getDefaultName(self)

@spec getDefaultName(t()) :: binary() | {:error, String.t()}

getDefaultName

Positional Arguments
  • self: Evision.DNN.Layer.t()
Return
  • retval: String

Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string.

Python prototype (for reference only):

getDefaultName() -> retval
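Illustrative sketch (`layer` is a placeholder layer struct):

# string identifier used as the top-level xml/yml node tag when the layer is saved
id = Evision.DNN.Layer.getDefaultName(layer)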
outputNameToIndex(self, outputName)

@spec outputNameToIndex(t(), binary()) :: integer() | {:error, String.t()}

Returns index of output blob in output array.

Positional Arguments
  • self: Evision.DNN.Layer.t()
  • outputName: String
Return
  • retval: int

@see inputNameToIndex()

Python prototype (for reference only):

outputNameToIndex(outputName) -> retval
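Illustrative sketch ("out" is a placeholder; actual output names depend on the concrete layer):

case Evision.DNN.Layer.outputNameToIndex(layer, "out") do
  index when is_integer(index) -> index
  {:error, reason} -> raise reason
end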
read(self, fn_)

@spec read(t(), Evision.FileNode.t()) :: t() | {:error, String.t()}

Reads algorithm parameters from a file storage

Positional Arguments
  • self: Evision.DNN.Layer.t()
  • fn_: Evision.FileNode.t()

Python prototype (for reference only):

read(fn_) -> None
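Illustrative sketch under assumptions: Evision is assumed to expose the cv::FileStorage constructor as Evision.FileStorage.fileStorage/2 and node lookup as Evision.FileStorage.getNode/2; "my_layer" is a placeholder node name.

fs = Evision.FileStorage.fileStorage("layer_params.yml", 0)  # 0 = cv::FileStorage::READ
node = Evision.FileStorage.getNode(fs, "my_layer")
layer = Evision.DNN.Layer.read(layer, node)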
run(self, inputs, internals)

@spec run(t(), [Evision.Mat.maybe_mat_in()], [Evision.Mat.maybe_mat_in()]) ::
  {[Evision.Mat.t()], [Evision.Mat.t()]} | {:error, String.t()}

Allocates layer and computes output.

Positional Arguments
  • self: Evision.DNN.Layer.t()
  • inputs: [Evision.Mat]
Return
  • outputs: [Evision.Mat].
  • internals: [Evision.Mat]

@deprecated This method will be removed in a future release.

Python prototype (for reference only):

run(inputs, internals[, outputs]) -> outputs, internals
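Illustrative sketch (this API is deprecated; `input_blob` is a placeholder Evision.Mat, and an empty list is passed for internals):

{outputs, internals} = Evision.DNN.Layer.run(layer, [input_blob], [])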
run(self, inputs, internals, opts)

@spec run(
  t(),
  [Evision.Mat.maybe_mat_in()],
  [Evision.Mat.maybe_mat_in()],
  [{atom(), term()}, ...] | nil
) :: {[Evision.Mat.t()], [Evision.Mat.t()]} | {:error, String.t()}

Allocates layer and computes output.

Positional Arguments
  • self: Evision.DNN.Layer.t()
  • inputs: [Evision.Mat]
Return
  • outputs: [Evision.Mat].
  • internals: [Evision.Mat]

@deprecated This method will be removed in a future release.

Python prototype (for reference only):

run(inputs, internals[, outputs]) -> outputs, internals
save(self, filename)

@spec save(t(), binary()) :: t() | {:error, String.t()}

save

Positional Arguments
  • self: Evision.DNN.Layer.t()
  • filename: String

Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).

Python prototype (for reference only):

save(filename) -> None
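Illustrative sketch ("layer_params.yml" is a placeholder path):

# persist the algorithm parameters; works only if the underlying class implements write()
layer = Evision.DNN.Layer.save(layer, "layer_params.yml")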
set_blobs(self, blobs)

@spec set_blobs(t(), [Evision.Mat.maybe_mat_in()]) :: t()

write(self, fs)

@spec write(t(), Evision.FileStorage.t()) :: t() | {:error, String.t()}

Stores algorithm parameters in a file storage

Positional Arguments
  • self: Evision.DNN.Layer.t()
  • fs: Evision.FileStorage.t()

Python prototype (for reference only):

write(fs) -> None
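Illustrative sketch under assumptions: the cv::FileStorage constructor is assumed to be exposed as Evision.FileStorage.fileStorage/2; 1 corresponds to cv::FileStorage::WRITE.

fs = Evision.FileStorage.fileStorage("layer_params.yml", 1)
layer = Evision.DNN.Layer.write(layer, fs)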
write(self, fs, name)

@spec write(t(), Evision.FileStorage.t(), binary()) :: t() | {:error, String.t()}

write

Positional Arguments
  • self: Evision.DNN.Layer.t()
  • fs: Evision.FileStorage.t()
  • name: String

Has overloading in C++

Python prototype (for reference only):

write(fs, name) -> None