Evision.CUDA.GpuMat (Evision v0.2.9)

Summary

Types

t()

Type that represents an Evision.CUDA.GpuMat struct.

Functions

convertTo

copyTo

defaultAllocator

download - Performs data download from GpuMat (blocking and non-blocking variants)

from_pointer - Create CUDA GpuMat from a shared CUDA device pointer, optionally with a new shape

gpuMat - GpuMat constructors

setDefaultAllocator

setTo

to_pointer - Get raw pointers

updateContinuityFlag

upload - Performs data upload to GpuMat (blocking and non-blocking variants)
Types

@type t() :: %Evision.CUDA.GpuMat{
  channels: integer(),
  device_id: integer() | nil,
  elemSize: integer(),
  raw_type: integer(),
  ref: reference(),
  shape: tuple(),
  step: integer(),
  type: Evision.Mat.mat_type()
}

Type that represents an Evision.CUDA.GpuMat struct.

  • channels: integer().

    The number of matrix channels.

  • type: Evision.Mat.mat_type().

    Type of the matrix elements, following :nx's convention.

  • raw_type: integer().

    The raw value returned from int cv::Mat::type().

  • shape: tuple().

    The shape of the matrix.

  • elemSize: integer().

    Element size in bytes.

  • step: integer().

    Number of bytes between two consecutive rows.

    When there are no gaps between successive rows, the value of step equals the number of columns times the element size.

  • device_id: integer() | nil.

    nil if there is currently no GPU memory allocated for the GpuMat resource.

  • ref: reference().

    The underlying Erlang resource variable.
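
As an illustration, a minimal sketch of how these fields look after uploading a small host matrix (this assumes a CUDA-enabled Evision build with a visible GPU, and that Evision.Mat.zeros/2 is available for creating the host matrix):

    mat = Evision.Mat.zeros({4, 4}, :u8)
    gpu = Evision.CUDA.GpuMat.gpuMat(mat)

    gpu.channels   # 1
    gpu.type       # {:u, 8}, following :nx's convention
    gpu.shape      # shape of the uploaded matrix
    gpu.device_id  # id of the GPU holding the data, or nil when nothing is allocated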

Functions

@spec adjustROI(Keyword.t()) :: any() | {:error, String.t()}
adjustROI(self, dtop, dbottom, dleft, dright)
@spec adjustROI(t(), integer(), integer(), integer(), integer()) ::
  t() | {:error, String.t()}

adjustROI

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • dtop: integer()
  • dbottom: integer()
  • dleft: integer()
  • dright: integer()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

adjustROI(dtop, dbottom, dleft, dright) -> retval
@spec assignTo(Keyword.t()) :: any() | {:error, String.t()}
@spec assignTo(t(), t()) :: t() | {:error, String.t()}

assignTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • m: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • type: integer().

Python prototype (for reference only):

assignTo(m[, type]) -> None
@spec assignTo(t(), t(), [{:type, term()}] | nil) :: t() | {:error, String.t()}

assignTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • m: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • type: integer().

Python prototype (for reference only):

assignTo(m[, type]) -> None
@spec channels(Keyword.t()) :: any() | {:error, String.t()}
@spec channels(t()) :: integer() | {:error, String.t()}

channels

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: integer()

Python prototype (for reference only):

channels() -> retval
@spec clone(Keyword.t()) :: any() | {:error, String.t()}
@spec clone(t()) :: t() | {:error, String.t()}

clone

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

clone() -> retval
@spec col(Keyword.t()) :: any() | {:error, String.t()}
@spec col(t(), integer()) :: t() | {:error, String.t()}

col

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • x: integer()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

col(x) -> retval
@spec colRange(Keyword.t()) :: any() | {:error, String.t()}
@spec colRange(t(), {integer(), integer()} | :all) :: t() | {:error, String.t()}

colRange

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • r: Range
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

colRange(r) -> retval
colRange(self, startcol, endcol)
@spec colRange(t(), integer(), integer()) :: t() | {:error, String.t()}

colRange

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • startcol: integer()
  • endcol: integer()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

colRange(startcol, endcol) -> retval
@spec convertTo(Keyword.t()) :: any() | {:error, String.t()}
@spec convertTo(t(), integer()) :: t() | {:error, String.t()}

convertTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • rtype: integer()
Keyword Arguments
  • alpha: double.
  • beta: double.
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

convertTo(rtype[, dst[, alpha[, beta]]]) -> dst
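
For instance, a hedged sketch of converting an 8-bit GpuMat to 32-bit floats rescaled into [0, 1] (Evision.Constant.cv_32F/0 and Evision.Mat.zeros/2 are assumed to be available in your build):

    u8_gpu  = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({480, 640}, :u8))
    f32_gpu = Evision.CUDA.GpuMat.convertTo(u8_gpu, Evision.Constant.cv_32F(), alpha: 1.0 / 255.0)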
convertTo(self, rtype, opts)
@spec convertTo(t(), integer(), [alpha: term(), beta: term()] | nil) ::
  t() | {:error, String.t()}
@spec convertTo(t(), integer(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}

Variant 1:

convertTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • rtype: integer()
  • stream: Evision.CUDA.Stream.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

convertTo(rtype, stream[, dst]) -> dst

Variant 2:

convertTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • rtype: integer()
Keyword Arguments
  • alpha: double.
  • beta: double.
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

convertTo(rtype[, dst[, alpha[, beta]]]) -> dst
convertTo(self, rtype, stream, opts)
@spec convertTo(
  t(),
  integer(),
  Evision.CUDA.Stream.t(),
  [{atom(), term()}, ...] | nil
) ::
  t() | {:error, String.t()}

convertTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • rtype: integer()
  • stream: Evision.CUDA.Stream.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

convertTo(rtype, stream[, dst]) -> dst
convertTo(self, rtype, alpha, beta, stream)
@spec convertTo(t(), integer(), number(), number(), Evision.CUDA.Stream.t()) ::
  t() | {:error, String.t()}

convertTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • rtype: integer()
  • alpha: double
  • beta: double
  • stream: Evision.CUDA.Stream.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

convertTo(rtype, alpha, beta, stream[, dst]) -> dst
convertTo(self, rtype, alpha, beta, stream, opts)
@spec convertTo(
  t(),
  integer(),
  number(),
  number(),
  Evision.CUDA.Stream.t(),
  [{atom(), term()}, ...] | nil
) :: t() | {:error, String.t()}

convertTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • rtype: integer()
  • alpha: double
  • beta: double
  • stream: Evision.CUDA.Stream.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

convertTo(rtype, alpha, beta, stream[, dst]) -> dst
@spec copyTo(Keyword.t()) :: any() | {:error, String.t()}
@spec copyTo(t()) :: t() | {:error, String.t()}

copyTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

copyTo([, dst]) -> dst
@spec copyTo(t(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
@spec copyTo(t(), t()) :: t() | {:error, String.t()}
@spec copyTo(t(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}

Variant 1:

copyTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • mask: Evision.CUDA.GpuMat.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

copyTo(mask[, dst]) -> dst

Variant 2:

copyTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • stream: Evision.CUDA.Stream.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

copyTo(stream[, dst]) -> dst

Variant 3:

copyTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

copyTo([, dst]) -> dst
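
A small sketch of the masked variant, copying only the elements selected by a non-zero mask (Evision.Mat.zeros/2 and Evision.Mat.ones/2 are assumed helpers for building the host data):

    src  = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({480, 640}, :u8))
    mask = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.ones({480, 640}, :u8))
    dst  = Evision.CUDA.GpuMat.copyTo(src, mask)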
copyTo(self, mask, opts)
@spec copyTo(t(), t(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
@spec copyTo(t(), Evision.CUDA.Stream.t(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}
@spec copyTo(t(), t(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}

Variant 1:

copyTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • mask: Evision.CUDA.GpuMat.t()
  • stream: Evision.CUDA.Stream.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

copyTo(mask, stream[, dst]) -> dst

Variant 2:

copyTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • mask: Evision.CUDA.GpuMat.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

copyTo(mask[, dst]) -> dst

Variant 3:

copyTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • stream: Evision.CUDA.Stream.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

copyTo(stream[, dst]) -> dst
copyTo(self, mask, stream, opts)
@spec copyTo(t(), t(), Evision.CUDA.Stream.t(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}

copyTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • mask: Evision.CUDA.GpuMat.t()
  • stream: Evision.CUDA.Stream.t()
Return
  • dst: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

copyTo(mask, stream[, dst]) -> dst
@spec create(Keyword.t()) :: any() | {:error, String.t()}
create(self, size, type)
@spec create(t(), {number(), number()}, integer()) :: t() | {:error, String.t()}

create

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • size: Size
  • type: integer()

Python prototype (for reference only):

create(size, type) -> None
create(self, rows, cols, type)
@spec create(t(), integer(), integer(), integer()) :: t() | {:error, String.t()}

create

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • rows: integer()
  • cols: integer()
  • type: integer()

Python prototype (for reference only):

create(rows, cols, type) -> None
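
For example, pre-allocating a 480x640, 3-channel, 8-bit device buffer might look like this (a sketch; Evision.Constant.cv_8UC3/0 is assumed to exist in your build):

    gpu = Evision.CUDA.GpuMat.gpuMat()
    gpu = Evision.CUDA.GpuMat.create(gpu, 480, 640, Evision.Constant.cv_8UC3())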
@spec cudaPtr(Keyword.t()) :: any() | {:error, String.t()}
@spec cudaPtr(t()) :: :ok | {:error, String.t()}

cudaPtr

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: void*

Python prototype (for reference only):

cudaPtr() -> retval
@spec defaultAllocator() :: reference() | {:error, String.t()}

defaultAllocator

Return
  • retval: GpuMat::Allocator*

Python prototype (for reference only):

defaultAllocator() -> retval
defaultAllocator(named_args)
@spec defaultAllocator(Keyword.t()) :: any() | {:error, String.t()}
@spec depth(Keyword.t()) :: any() | {:error, String.t()}
@spec depth(t()) :: integer() | {:error, String.t()}

depth

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: integer()

Python prototype (for reference only):

depth() -> retval
@spec download(Keyword.t()) :: any() | {:error, String.t()}
@spec download(t()) :: Evision.Mat.t() | {:error, String.t()}

Performs data download from GpuMat (Blocking call)

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • dst: Evision.Mat.t().

This function copies data from device memory to host memory. Because it is a blocking call, the copy operation is guaranteed to be finished when this function returns.

Python prototype (for reference only):

download([, dst]) -> dst
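
A minimal host -> device -> host round trip, sketched under the assumption of a CUDA-enabled build with Evision.Mat.zeros/2 available:

    mat  = Evision.Mat.zeros({480, 640}, :u8)
    gpu  = Evision.CUDA.GpuMat.gpuMat(mat)      # copies the host data to the device
    host = Evision.CUDA.GpuMat.download(gpu)    # blocking copy back; returns an Evision.Mat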
@spec download(t(), [{atom(), term()}, ...] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec download(t(), Evision.CUDA.Stream.t()) :: Evision.Mat.t() | {:error, String.t()}

Variant 1:

Performs data download from GpuMat (Non-Blocking call)

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • stream: Evision.CUDA.Stream.t()
Return
  • dst: Evision.Mat.t().

This function copies data from device memory to host memory. Because it is a non-blocking call, it may return before the copy operation has finished. The copy operation may be overlapped with operations in other non-default streams if stream is not the default stream and dst is a HostMem allocated with the HostMem::PAGE_LOCKED option.

Python prototype (for reference only):

download(stream[, dst]) -> dst

Variant 2:

Performs data download from GpuMat (Blocking call)

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • dst: Evision.Mat.t().

This function copies data from device memory to host memory. Because it is a blocking call, the copy operation is guaranteed to be finished when this function returns.

Python prototype (for reference only):

download([, dst]) -> dst
download(self, stream, opts)
@spec download(t(), Evision.CUDA.Stream.t(), [{atom(), term()}, ...] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

Performs data download from GpuMat (Non-Blocking call)

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • stream: Evision.CUDA.Stream.t()
Return
  • dst: Evision.Mat.t().

This function copies data from device memory to host memory. Because it is a non-blocking call, it may return before the copy operation has finished. The copy operation may be overlapped with operations in other non-default streams if stream is not the default stream and dst is a HostMem allocated with the HostMem::PAGE_LOCKED option.

Python prototype (for reference only):

download(stream[, dst]) -> dst
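
A sketch of the asynchronous form; Evision.CUDA.Stream.stream/0 and Evision.CUDA.Stream.waitForCompletion/1 are assumed to be available in your build:

    gpu    = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({480, 640}, :u8))
    stream = Evision.CUDA.Stream.stream()

    host = Evision.CUDA.GpuMat.download(gpu, stream)
    # the copy may still be in flight here; synchronize before reading `host`
    Evision.CUDA.Stream.waitForCompletion(stream)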
@spec elemSize1(Keyword.t()) :: any() | {:error, String.t()}
@spec elemSize1(t()) :: integer() | {:error, String.t()}

elemSize1

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: size_t

Python prototype (for reference only):

elemSize1() -> retval
@spec elemSize(Keyword.t()) :: any() | {:error, String.t()}
@spec elemSize(t()) :: integer() | {:error, String.t()}

elemSize

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: size_t

Python prototype (for reference only):

elemSize() -> retval
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}

empty

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: bool

Python prototype (for reference only):

empty() -> retval
from_pointer(handle, opts \\ [])
@spec from_pointer(
  %Evision.IPCHandle.Local{
    channels: term(),
    cols: term(),
    device_id: term(),
    handle: term(),
    rows: term(),
    step: term(),
    type: term()
  }
  | %Evision.IPCHandle.CUDA{
      channels: term(),
      cols: term(),
      device_id: term(),
      handle: term(),
      rows: term(),
      step: term(),
      type: term()
    },
  Keyword.t()
) :: t() | {:error, String.t()}

Create CUDA GpuMat from a shared CUDA device pointer with new shape

Positional Arguments
  • handle, either an %Evision.IPCHandle.Local{} or an %Evision.IPCHandle.CUDA{}.

  • new_shape, tuple()

    The shape of the shared image. It's expected to be either

    • {height, width, channels}, for any 2D image that has 1 or multiple channels
    • {height, width}, for any 1-channel 2D image
    • {rows}, for 1-dimensional data
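
A hedged sketch of sharing a GpuMat through a handle within the same OS process (:local mode), using to_pointer/2 from this module to produce the handle:

    gpu = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({480, 640}, :u8))

    {:ok, handle} = Evision.CUDA.GpuMat.to_pointer(gpu, mode: :local)
    shared = Evision.CUDA.GpuMat.from_pointer(handle)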
from_pointer(device_pointer, dtype, shape, opts)
@spec from_pointer([integer()], atom() | {atom(), integer()}, tuple(), [
  {:device_id, non_neg_integer()}
]) ::
  t() | {:error, String.t()}

Create CUDA GpuMat from a shared CUDA device pointer

Positional Arguments
  • device_pointer, list(integer()).

    This can be either a local pointer or an IPC pointer.

    However, please note that IPC pointers have to be generated from another OS process (Erlang process doesn't count).

  • dtype, tuple() | atom()

    Data type.

  • shape, tuple()

    The shape of the shared image. It's expected to be either

    • {height, width, channels}, for any 2D image that has 1 or multiple channels
    • {height, width}, for any 1-channel 2D image
    • {rows}, for 1-dimensional data
Keyword Arguments
  • device_id, non_neg_integer.

    GPU Device ID, default to 0.

@spec get_step(t()) :: integer()
@spec gpuMat() :: t() | {:error, String.t()}

GpuMat

Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat([, allocator]) -> <cuda_GpuMat object>
@spec gpuMat(Keyword.t()) :: any() | {:error, String.t()}
@spec gpuMat([{:allocator, term()}] | nil) :: t() | {:error, String.t()}
@spec gpuMat(Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
@spec gpuMat(t()) :: t() | {:error, String.t()}

Variant 1:

GpuMat

Positional Arguments
  • arr: Evision.Mat
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(arr[, allocator]) -> <cuda_GpuMat object>

Variant 2:

GpuMat

Positional Arguments
  • arr: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(arr[, allocator]) -> <cuda_GpuMat object>

Variant 3:

GpuMat

Positional Arguments
  • m: Evision.CUDA.GpuMat.t()
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(m) -> <cuda_GpuMat object>

Variant 4:

GpuMat

Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat([, allocator]) -> <cuda_GpuMat object>
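
For example, the two most common constructions, sketched under the assumption of a CUDA-enabled build with Evision.Mat.zeros/2 available:

    # an empty GpuMat with no device memory allocated yet
    empty = Evision.CUDA.GpuMat.gpuMat()

    # construct from a host matrix (the data is copied to the device)
    gpu = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({480, 640}, :u8))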
@spec gpuMat(Evision.Mat.maybe_mat_in(), [{:allocator, term()}] | nil) ::
  t() | {:error, String.t()}
@spec gpuMat(t(), [{:allocator, term()}] | nil) :: t() | {:error, String.t()}
@spec gpuMat(t(), {number(), number(), number(), number()}) ::
  t() | {:error, String.t()}
@spec gpuMat(
  {number(), number()},
  integer()
) :: t() | {:error, String.t()}

Variant 1:

GpuMat

Positional Arguments
  • m: Evision.CUDA.GpuMat.t()
  • roi: Rect
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(m, roi) -> <cuda_GpuMat object>

Variant 2:

GpuMat

Positional Arguments
  • size: Size
  • type: integer()
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(size, type[, allocator]) -> <cuda_GpuMat object>

Variant 3:

GpuMat

Positional Arguments
  • arr: Evision.Mat
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(arr[, allocator]) -> <cuda_GpuMat object>

Variant 4:

GpuMat

Positional Arguments
  • arr: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(arr[, allocator]) -> <cuda_GpuMat object>
gpuMat(size, type, opts)
@spec gpuMat({number(), number()}, integer(), [{:allocator, term()}] | nil) ::
  t() | {:error, String.t()}
@spec gpuMat(t(), {integer(), integer()} | :all, {integer(), integer()} | :all) ::
  t() | {:error, String.t()}
@spec gpuMat({number(), number()}, integer(), Evision.scalar()) ::
  t() | {:error, String.t()}
@spec gpuMat(integer(), integer(), integer()) :: t() | {:error, String.t()}

Variant 1:

GpuMat

Positional Arguments
  • m: Evision.CUDA.GpuMat.t()
  • rowRange: Range
  • colRange: Range
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(m, rowRange, colRange) -> <cuda_GpuMat object>

Variant 2:

GpuMat

Positional Arguments
  • size: Size
  • type: integer()
  • s: Evision.scalar()
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(size, type, s[, allocator]) -> <cuda_GpuMat object>

Variant 3:

GpuMat

Positional Arguments
  • rows: integer()
  • cols: integer()
  • type: integer()
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(rows, cols, type[, allocator]) -> <cuda_GpuMat object>

Variant 4:

GpuMat

Positional Arguments
  • size: Size
  • type: integer()
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(size, type[, allocator]) -> <cuda_GpuMat object>
gpuMat(size, type, s, opts)
@spec gpuMat(
  {number(), number()},
  integer(),
  Evision.scalar(),
  [{:allocator, term()}] | nil
) ::
  t() | {:error, String.t()}
@spec gpuMat(integer(), integer(), integer(), [{:allocator, term()}] | nil) ::
  t() | {:error, String.t()}
@spec gpuMat(integer(), integer(), integer(), Evision.scalar()) ::
  t() | {:error, String.t()}

Variant 1:

GpuMat

Positional Arguments
  • rows: integer()
  • cols: integer()
  • type: integer()
  • s: Evision.scalar()
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(rows, cols, type, s[, allocator]) -> <cuda_GpuMat object>

Variant 2:

GpuMat

Positional Arguments
  • size: Size
  • type: integer()
  • s: Evision.scalar()
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(size, type, s[, allocator]) -> <cuda_GpuMat object>

Variant 3:

GpuMat

Positional Arguments
  • rows: integer()
  • cols: integer()
  • type: integer()
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(rows, cols, type[, allocator]) -> <cuda_GpuMat object>
gpuMat(rows, cols, type, s, opts)
@spec gpuMat(
  integer(),
  integer(),
  integer(),
  Evision.scalar(),
  [{:allocator, term()}] | nil
) ::
  t() | {:error, String.t()}

GpuMat

Positional Arguments
  • rows: integer()
  • cols: integer()
  • type: integer()
  • s: Evision.scalar()
Keyword Arguments
  • allocator: GpuMat_Allocator*.
Return
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

GpuMat(rows, cols, type, s[, allocator]) -> <cuda_GpuMat object>
isContinuous(named_args)
@spec isContinuous(Keyword.t()) :: any() | {:error, String.t()}
@spec isContinuous(t()) :: boolean() | {:error, String.t()}

isContinuous

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: bool

Python prototype (for reference only):

isContinuous() -> retval
@spec locateROI(Keyword.t()) :: any() | {:error, String.t()}
locateROI(self, wholeSize, ofs)
@spec locateROI(t(), {number(), number()}, {number(), number()}) ::
  t() | {:error, String.t()}

locateROI

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • wholeSize: Size
  • ofs: Point

Python prototype (for reference only):

locateROI(wholeSize, ofs) -> None
@spec release(Keyword.t()) :: any() | {:error, String.t()}
@spec release(t()) :: t() | {:error, String.t()}

release

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

release() -> None
@spec reshape(Keyword.t()) :: any() | {:error, String.t()}
@spec reshape(t(), integer()) :: t() | {:error, String.t()}

reshape

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • cn: integer()
Keyword Arguments
  • rows: integer().
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

reshape(cn[, rows]) -> retval
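
For example, reinterpreting a single-channel matrix as a two-channel one with half as many columns; this only changes the matrix header, no device data is copied (Evision.Mat.zeros/2 is assumed for building the host data):

    gpu    = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({480, 640}, :u8))
    two_ch = Evision.CUDA.GpuMat.reshape(gpu, 2)   # 480x320, 2 channels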
@spec reshape(t(), integer(), [{:rows, term()}] | nil) :: t() | {:error, String.t()}

reshape

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • cn: integer()
Keyword Arguments
  • rows: integer().
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

reshape(cn[, rows]) -> retval
@spec row(Keyword.t()) :: any() | {:error, String.t()}
@spec row(t(), integer()) :: t() | {:error, String.t()}

row

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • y: integer()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

row(y) -> retval
@spec rowRange(Keyword.t()) :: any() | {:error, String.t()}
@spec rowRange(t(), {integer(), integer()} | :all) :: t() | {:error, String.t()}

rowRange

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • r: Range
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

rowRange(r) -> retval
rowRange(self, startrow, endrow)
@spec rowRange(t(), integer(), integer()) :: t() | {:error, String.t()}

rowRange

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • startrow: integer()
  • endrow: integer()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

rowRange(startrow, endrow) -> retval
setDefaultAllocator(named_args)
@spec setDefaultAllocator(Keyword.t()) :: any() | {:error, String.t()}
@spec setDefaultAllocator(reference()) :: :ok | {:error, String.t()}

setDefaultAllocator

Positional Arguments
  • allocator: GpuMat_Allocator*

Python prototype (for reference only):

setDefaultAllocator(allocator) -> None
@spec setTo(Keyword.t()) :: any() | {:error, String.t()}
@spec setTo(t(), Evision.scalar()) :: t() | {:error, String.t()}

setTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • s: Evision.scalar()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

setTo(s) -> retval
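
For example, filling a 4-channel 8-bit image with a constant scalar (a sketch; Evision.Constant.cv_8UC4/0 is assumed to exist in your build):

    gpu = Evision.CUDA.GpuMat.gpuMat(480, 640, Evision.Constant.cv_8UC4())
    gpu = Evision.CUDA.GpuMat.setTo(gpu, {255, 0, 0, 255})   # blue, fully opaque, in BGRA order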
@spec setTo(t(), Evision.scalar(), Evision.Mat.maybe_mat_in()) ::
  t() | {:error, String.t()}
@spec setTo(t(), Evision.scalar(), t()) :: t() | {:error, String.t()}
@spec setTo(t(), Evision.scalar(), Evision.CUDA.Stream.t()) ::
  t() | {:error, String.t()}

Variant 1:

setTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • s: Evision.scalar()
  • mask: Evision.Mat
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

setTo(s, mask) -> retval

Variant 2:

setTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • s: Evision.scalar()
  • mask: Evision.CUDA.GpuMat.t()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

setTo(s, mask) -> retval

Variant 3:

setTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • s: Evision.scalar()
  • stream: Evision.CUDA.Stream.t()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

setTo(s, stream) -> retval
setTo(self, s, mask, stream)
@spec setTo(
  t(),
  Evision.scalar(),
  Evision.Mat.maybe_mat_in(),
  Evision.CUDA.Stream.t()
) ::
  t() | {:error, String.t()}
@spec setTo(t(), Evision.scalar(), t(), Evision.CUDA.Stream.t()) ::
  t() | {:error, String.t()}

Variant 1:

setTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • s: Evision.scalar()
  • mask: Evision.Mat
  • stream: Evision.CUDA.Stream.t()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

setTo(s, mask, stream) -> retval

Variant 2:

setTo

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • s: Evision.scalar()
  • mask: Evision.CUDA.GpuMat.t()
  • stream: Evision.CUDA.Stream.t()
Return
  • retval: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

setTo(s, mask, stream) -> retval
@spec size(Keyword.t()) :: any() | {:error, String.t()}
@spec size(t()) :: {number(), number()} | {:error, String.t()}

size

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: Size

Python prototype (for reference only):

size() -> retval
@spec step1(Keyword.t()) :: any() | {:error, String.t()}
@spec step1(t()) :: integer() | {:error, String.t()}

step1

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: size_t

Python prototype (for reference only):

step1() -> retval
@spec swap(Keyword.t()) :: any() | {:error, String.t()}
@spec swap(t(), t()) :: t() | {:error, String.t()}

swap

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • mat: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

swap(mat) -> None
@spec to_pointer(t()) ::
  {:ok,
   %Evision.IPCHandle.Local{
     channels: term(),
     cols: term(),
     device_id: term(),
     handle: term(),
     rows: term(),
     step: term(),
     type: term()
   }}
  | {:error, String.t()}

Get raw pointers

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mode, one of :local, :cuda_ipc.

    • :local: Get a local CUDA pointer that can be used within current OS process.
    • :cuda_ipc: Get a CUDA IPC pointer that can be used across OS processes.
    • :host_ipc: Get a host IPC pointer that can be used across OS processes.

    Defaults to :local.
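
A sketch of both handle kinds; the exact handle contents depend on your device and build (Evision.Mat.zeros/2 is assumed for building the host data):

    gpu = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({480, 640}, :u8))

    {:ok, local_handle} = Evision.CUDA.GpuMat.to_pointer(gpu)                  # %Evision.IPCHandle.Local{}
    {:ok, ipc_handle}   = Evision.CUDA.GpuMat.to_pointer(gpu, mode: :cuda_ipc) # %Evision.IPCHandle.CUDA{}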

@spec to_pointer(t(), [{:mode, :local | :cuda_ipc | :host_ipc}]) ::
  {:ok,
   %Evision.IPCHandle.Local{
     channels: term(),
     cols: term(),
     device_id: term(),
     handle: term(),
     rows: term(),
     step: term(),
     type: term()
   }
   | %Evision.IPCHandle.CUDA{
       channels: term(),
       cols: term(),
       device_id: term(),
       handle: term(),
       rows: term(),
       step: term(),
       type: term()
     }
   | %Evision.IPCHandle.Host{
       channels: term(),
       cols: term(),
       fd: term(),
       name: term(),
       rows: term(),
       size: term(),
       type: term()
     }}
  | {:error, String.t()}
@spec type(Keyword.t()) :: any() | {:error, String.t()}
@spec type(t()) :: integer() | {:error, String.t()}

type

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
Return
  • retval: integer()

Python prototype (for reference only):

type() -> retval
updateContinuityFlag(named_args)
@spec updateContinuityFlag(Keyword.t()) :: any() | {:error, String.t()}
@spec updateContinuityFlag(t()) :: t() | {:error, String.t()}

updateContinuityFlag

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()

Python prototype (for reference only):

updateContinuityFlag() -> None
@spec upload(Keyword.t()) :: any() | {:error, String.t()}
@spec upload(t(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
@spec upload(t(), t()) :: t() | {:error, String.t()}

Variant 1:

Performs data upload to GpuMat (Blocking call)

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • arr: Evision.Mat

This function copies data from host memory to device memory. Because it is a blocking call, the copy operation is guaranteed to be finished when this function returns.

Python prototype (for reference only):

upload(arr) -> None

Variant 2:

Performs data upload to GpuMat (Blocking call)

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • arr: Evision.CUDA.GpuMat.t()

This function copies data from host memory to device memory. Because it is a blocking call, the copy operation is guaranteed to be finished when this function returns.

Python prototype (for reference only):

upload(arr) -> None
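
For example, re-filling an existing GpuMat from a host matrix (a sketch, assuming Evision.Mat.zeros/2 for the host data):

    gpu = Evision.CUDA.GpuMat.gpuMat()
    gpu = Evision.CUDA.GpuMat.upload(gpu, Evision.Mat.zeros({480, 640}, :u8))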
upload(self, arr, stream)
@spec upload(t(), Evision.Mat.maybe_mat_in(), Evision.CUDA.Stream.t()) ::
  t() | {:error, String.t()}
@spec upload(t(), t(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}

Variant 1:

Performs data upload to GpuMat (Non-Blocking call)

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • arr: Evision.Mat
  • stream: Evision.CUDA.Stream.t()

This function copies data from host memory to device memory. Because it is a non-blocking call, it may return before the copy operation has finished. The copy operation may be overlapped with operations in other non-default streams if stream is not the default stream and the source host memory is a HostMem allocated with the HostMem::PAGE_LOCKED option.

Python prototype (for reference only):

upload(arr, stream) -> None

Variant 2:

Performs data upload to GpuMat (Non-Blocking call)

Positional Arguments
  • self: Evision.CUDA.GpuMat.t()
  • arr: Evision.CUDA.GpuMat.t()
  • stream: Evision.CUDA.Stream.t()

This function copies data from host memory to device memory. Because it is a non-blocking call, it may return before the copy operation has finished. The copy operation may be overlapped with operations in other non-default streams if stream is not the default stream and the source host memory is a HostMem allocated with the HostMem::PAGE_LOCKED option.

Python prototype (for reference only):

upload(arr, stream) -> None
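
A sketch of the asynchronous upload; Evision.CUDA.Stream.stream/0 and Evision.CUDA.Stream.waitForCompletion/1 are assumed to be available, and for the copy to actually overlap the host data should live in page-locked (HostMem) memory:

    mat    = Evision.Mat.zeros({480, 640}, :u8)
    gpu    = Evision.CUDA.GpuMat.gpuMat()
    stream = Evision.CUDA.Stream.stream()

    gpu = Evision.CUDA.GpuMat.upload(gpu, mat, stream)
    # the copy may still be in flight here; synchronize before relying on the device data
    Evision.CUDA.Stream.waitForCompletion(stream)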