Evision.CUDA.HostMem (Evision v0.1.38)

Summary

Types

t()

Type that represents a `CUDA.HostMem` struct.

Functions

channels

clone

create

createMatHeader

depth

elemSize1

elemSize

empty

get_step

HostMem

isContinuous

reshape

size

step1

swap

type

Types

@type t() :: %Evision.CUDA.HostMem{ref: reference()}

Type that represents a `CUDA.HostMem` struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

@spec channels(t()) :: integer() | {:error, String.t()}

channels

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: int

Python prototype (for reference only):

channels() -> retval
@spec clone(t()) :: t() | {:error, String.t()}

clone

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

clone() -> retval
create(self, rows, cols, type)
@spec create(t(), integer(), integer(), integer()) :: t() | {:error, String.t()}

create

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
  • rows: int
  • cols: int
  • type: int

Python prototype (for reference only):

create(rows, cols, type) -> None
@spec createMatHeader(t()) :: Evision.Mat.t() | {:error, String.t()}

createMatHeader

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: Evision.Mat.t()

Python prototype (for reference only):

createMatHeader() -> retval
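The header returned by `createMatHeader/1` can be used with ordinary `Evision.Mat` functions while the data stays in page-locked host memory. A minimal sketch, assuming an Evision build compiled with CUDA support and a CUDA device; the type constant `0` stands for `CV_8UC1`:

```elixir
alias Evision.CUDA.HostMem

# Allocate a 480x640, 8-bit, single-channel page-locked buffer (0 = CV_8UC1).
hm = HostMem.hostMem(480, 640, 0)

# Create a regular Mat header over the same pinned memory -- no copy is made.
mat = HostMem.createMatHeader(hm)

# `mat` now behaves like any Evision.Mat, but is backed by page-locked
# memory, which speeds up subsequent host<->device transfers.
```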
@spec depth(t()) :: integer() | {:error, String.t()}

depth

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: int

Python prototype (for reference only):

depth() -> retval
@spec elemSize1(t()) :: integer() | {:error, String.t()}

elemSize1

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: size_t

Python prototype (for reference only):

elemSize1() -> retval
@spec elemSize(t()) :: integer() | {:error, String.t()}

elemSize

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: size_t

Python prototype (for reference only):

elemSize() -> retval
@spec empty(t()) :: boolean() | {:error, String.t()}

empty

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: bool

Python prototype (for reference only):

empty() -> retval
@spec get_step(t()) :: integer()

get_step

Getter for the `step` property of the underlying struct: the number of bytes each matrix row occupies.
@spec hostMem() :: t() | {:error, String.t()}

HostMem

Keyword Arguments
  • alloc_type: HostMem_AllocType.
Return
  • self: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

HostMem([, alloc_type]) -> <cuda_HostMem object>
@spec hostMem([{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
@spec hostMem(Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
@spec hostMem(Evision.CUDA.GpuMat.t()) :: t() | {:error, String.t()}

Variant 1:

HostMem

Positional Arguments
  • arr: Evision.Mat.t()
Keyword Arguments
  • alloc_type: HostMem_AllocType.
Return
  • self: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

HostMem(arr[, alloc_type]) -> <cuda_HostMem object>

Variant 2:

HostMem

Positional Arguments
  • arr: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • alloc_type: HostMem_AllocType.
Return
  • self: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

HostMem(arr[, alloc_type]) -> <cuda_HostMem object>

Variant 3:

HostMem

Keyword Arguments
  • alloc_type: HostMem_AllocType.
Return
  • self: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

HostMem([, alloc_type]) -> <cuda_HostMem object>
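Wrapping an existing host matrix copies its contents into page-locked memory. A sketch, assuming a CUDA-capable build (the `Evision.Mat.zeros/2` shape-and-type call is an assumption for illustration):

```elixir
alias Evision.CUDA.HostMem

# An ordinary host matrix (480x640, 8-bit, single channel).
img = Evision.Mat.zeros({480, 640}, :u8)

# Copy it into a page-locked HostMem; uploads from this buffer
# reach the GPU faster than from pageable memory.
hm = HostMem.hostMem(img)
```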
@spec hostMem(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}
@spec hostMem(Evision.CUDA.GpuMat.t(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}
@spec hostMem(
  {number(), number()},
  integer()
) :: t() | {:error, String.t()}

Variant 1:

HostMem

Positional Arguments
  • size: Size
  • type: int
Keyword Arguments
  • alloc_type: HostMem_AllocType.
Return
  • self: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>

Variant 2:

HostMem

Positional Arguments
  • arr: Evision.Mat.t()
Keyword Arguments
  • alloc_type: HostMem_AllocType.
Return
  • self: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

HostMem(arr[, alloc_type]) -> <cuda_HostMem object>

Variant 3:

HostMem

Positional Arguments
  • arr: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • alloc_type: HostMem_AllocType.
Return
  • self: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

HostMem(arr[, alloc_type]) -> <cuda_HostMem object>
hostMem(size, type, opts)
@spec hostMem({number(), number()}, integer(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}
@spec hostMem(integer(), integer(), integer()) :: t() | {:error, String.t()}

Variant 1:

HostMem

Positional Arguments
  • rows: int
  • cols: int
  • type: int
Keyword Arguments
  • alloc_type: HostMem_AllocType.
Return
  • self: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>

Variant 2:

HostMem

Positional Arguments
  • size: Size
  • type: int
Keyword Arguments
  • alloc_type: HostMem_AllocType.
Return
  • self: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>
hostMem(rows, cols, type, opts)
@spec hostMem(integer(), integer(), integer(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}

HostMem

Positional Arguments
  • rows: int
  • cols: int
  • type: int
Keyword Arguments
  • alloc_type: HostMem_AllocType.
Return
  • self: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>
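The `alloc_type` keyword selects the allocation strategy; in OpenCV the `cv::cuda::HostMem::AllocType` values are PAGE_LOCKED (1), SHARED (2), and WRITE_COMBINED (4). A sketch assuming those numeric values and a CUDA-capable build:

```elixir
alias Evision.CUDA.HostMem

# PAGE_LOCKED (1) is the default: pinned memory for fast transfers.
pinned = HostMem.hostMem(480, 640, 0, alloc_type: 1)

# SHARED (2) allows the buffer to be mapped into GPU address space on
# hardware that shares video and CPU memory, avoiding an extra copy.
shared = HostMem.hostMem(480, 640, 0, alloc_type: 2)
```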
@spec isContinuous(t()) :: boolean() | {:error, String.t()}

isContinuous

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: bool

Returns true when the matrix data is stored continuously, with no gaps at the end of each row. Note: memory allocated with the SHARED alloc type can, when the hardware supports it, also be mapped into the GPU address space as a cuda::GpuMat header without reference counting; laptops often share video and CPU memory, so mapping the address spaces eliminates an extra copy.

Python prototype (for reference only):

isContinuous() -> retval
@spec reshape(t(), integer()) :: t() | {:error, String.t()}

reshape

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
  • cn: int
Keyword Arguments
  • rows: int.
Return
  • retval: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

reshape(cn[, rows]) -> retval
@spec reshape(t(), integer(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}

reshape

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
  • cn: int
Keyword Arguments
  • rows: int.
Return
  • retval: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

reshape(cn[, rows]) -> retval
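`reshape` reinterprets the same buffer with a different channel count (and optionally a different row count) without copying; the total element count must stay the same. A sketch, assuming a CUDA-capable build (`16` = `CV_8UC3`):

```elixir
alias Evision.CUDA.HostMem

# 4x6 matrix of 8-bit, 3-channel pixels (16 = CV_8UC3).
hm = HostMem.hostMem(4, 6, 16)

# View the same data as single-channel: 4 rows x 18 columns.
flat = HostMem.reshape(hm, 1)

# Or single-channel with an explicit row count: 2 rows x 36 columns.
two_rows = HostMem.reshape(hm, 1, rows: 2)
```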
@spec size(t()) :: {number(), number()} | {:error, String.t()}

size

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: Size

Python prototype (for reference only):

size() -> retval
@spec step1(t()) :: integer() | {:error, String.t()}

step1

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: size_t

Python prototype (for reference only):

step1() -> retval
@spec swap(t(), t()) :: t() | {:error, String.t()}

swap

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
  • b: Evision.CUDA.HostMem.t()

Python prototype (for reference only):

swap(b) -> None
@spec type(t()) :: integer() | {:error, String.t()}

type

Positional Arguments
  • self: Evision.CUDA.HostMem.t()
Return
  • retval: int

Python prototype (for reference only):

type() -> retval
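The accessors above describe the element layout of the buffer, and relate as elemSize = elemSize1 × channels. A sketch assuming a CUDA-capable build (`16` = `CV_8UC3`):

```elixir
alias Evision.CUDA.HostMem

hm = HostMem.hostMem(2, 3, 16)   # 2x3, 8-bit, 3-channel (16 = CV_8UC3)

HostMem.channels(hm)   # channel count (3 for CV_8UC3)
HostMem.depth(hm)      # per-channel depth code (CV_8U = 0)
HostMem.elemSize1(hm)  # bytes per single channel element (1 for 8-bit)
HostMem.elemSize(hm)   # bytes per pixel (elemSize1 * channels)
HostMem.type(hm)       # full type code (16, i.e. CV_8UC3)
HostMem.size(hm)       # {width, height} tuple
```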