Evision.CUDA.HostMem (Evision v0.2.9)
Summary
Functions
channels
clone
create
createMatHeader
depth
elemSize1
elemSize
empty
hostMem (constructor; multiple arities)
isContinuous
reshape
size
step1
swap
type
Types
@type t() :: %Evision.CUDA.HostMem{ref: reference()}
Type that represents a CUDA.HostMem struct.
- ref: reference(). The underlying Erlang resource variable.
Functions
@spec channels(Keyword.t()) :: any() | {:error, String.t()}
@spec channels(t()) :: integer() | {:error, String.t()}
channels
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: integer()
Python prototype (for reference only):
channels() -> retval
@spec clone(Keyword.t()) :: any() | {:error, String.t()}
@spec clone(t()) :: t() | {:error, String.t()}
clone
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
clone() -> retval
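A minimal usage sketch (assumes a CUDA-enabled Evision build; `cv_8UC1/0` from `Evision.Constant` is assumed to be available):

```elixir
# Allocate 480x640 single-channel page-locked host memory, then deep-copy it.
host = Evision.CUDA.HostMem.hostMem(480, 640, Evision.Constant.cv_8UC1())
copy = Evision.CUDA.HostMem.clone(host)
# `copy` owns its own buffer; modifying one does not affect the other.
```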
create
Positional Arguments
- self: Evision.CUDA.HostMem.t()
- rows: integer()
- cols: integer()
- type: integer()
Python prototype (for reference only):
create(rows, cols, type) -> None
@spec createMatHeader(Keyword.t()) :: any() | {:error, String.t()}
@spec createMatHeader(t()) :: Evision.Mat.t() | {:error, String.t()}
createMatHeader
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: Evision.Mat.t()
Python prototype (for reference only):
createMatHeader() -> retval
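For example, a pinned buffer can be wrapped in a regular `Evision.Mat` header without copying (a sketch, assuming a CUDA-enabled build and `Evision.Constant.cv_8UC3/0`):

```elixir
host = Evision.CUDA.HostMem.hostMem(480, 640, Evision.Constant.cv_8UC3())
# The returned Mat is a header over the same page-locked memory, not a copy,
# so it can be passed to any regular Evision function.
mat = Evision.CUDA.HostMem.createMatHeader(host)
```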
@spec depth(Keyword.t()) :: any() | {:error, String.t()}
@spec depth(t()) :: integer() | {:error, String.t()}
depth
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: integer()
Python prototype (for reference only):
depth() -> retval
@spec elemSize1(Keyword.t()) :: any() | {:error, String.t()}
@spec elemSize1(t()) :: integer() | {:error, String.t()}
elemSize1
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: size_t
Python prototype (for reference only):
elemSize1() -> retval
@spec elemSize(Keyword.t()) :: any() | {:error, String.t()}
@spec elemSize(t()) :: integer() | {:error, String.t()}
elemSize
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: size_t
Python prototype (for reference only):
elemSize() -> retval
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}
empty
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: bool
Python prototype (for reference only):
empty() -> retval
HostMem
Keyword Arguments
- alloc_type: HostMem_AllocType.
Return
- self: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
HostMem([, alloc_type]) -> <cuda_HostMem object>
@spec hostMem(Keyword.t()) :: any() | {:error, String.t()}
@spec hostMem([{:alloc_type, term()}] | nil) :: t() | {:error, String.t()}
@spec hostMem(Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
@spec hostMem(Evision.CUDA.GpuMat.t()) :: t() | {:error, String.t()}
Variant 1:
HostMem
Positional Arguments
- arr: Evision.Mat
Keyword Arguments
- alloc_type: HostMem_AllocType.
Return
- self: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
HostMem(arr[, alloc_type]) -> <cuda_HostMem object>
Variant 2:
HostMem
Positional Arguments
- arr: Evision.CUDA.GpuMat.t()
Keyword Arguments
- alloc_type: HostMem_AllocType.
Return
- self: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
HostMem(arr[, alloc_type]) -> <cuda_HostMem object>
Variant 3:
HostMem
Keyword Arguments
- alloc_type: HostMem_AllocType.
Return
- self: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
HostMem([, alloc_type]) -> <cuda_HostMem object>
@spec hostMem(Evision.Mat.maybe_mat_in(), [{:alloc_type, term()}] | nil) :: t() | {:error, String.t()}
@spec hostMem(Evision.CUDA.GpuMat.t(), [{:alloc_type, term()}] | nil) :: t() | {:error, String.t()}
@spec hostMem( {number(), number()}, integer() ) :: t() | {:error, String.t()}
Variant 1:
HostMem
Positional Arguments
- size: Size
- type: integer()
Keyword Arguments
- alloc_type: HostMem_AllocType.
Return
- self: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>
Variant 2:
HostMem
Positional Arguments
- arr: Evision.Mat
Keyword Arguments
- alloc_type: HostMem_AllocType.
Return
- self: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
HostMem(arr[, alloc_type]) -> <cuda_HostMem object>
Variant 3:
HostMem
Positional Arguments
- arr: Evision.CUDA.GpuMat.t()
Keyword Arguments
- alloc_type: HostMem_AllocType.
Return
- self: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
HostMem(arr[, alloc_type]) -> <cuda_HostMem object>
@spec hostMem({number(), number()}, integer(), [{:alloc_type, term()}] | nil) :: t() | {:error, String.t()}
@spec hostMem(integer(), integer(), integer()) :: t() | {:error, String.t()}
Variant 1:
HostMem
Positional Arguments
- rows: integer()
- cols: integer()
- type: integer()
Keyword Arguments
- alloc_type: HostMem_AllocType.
Return
- self: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>
Variant 2:
HostMem
Positional Arguments
- size: Size
- type: integer()
Keyword Arguments
- alloc_type: HostMem_AllocType.
Return
- self: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>
@spec hostMem(integer(), integer(), integer(), [{:alloc_type, term()}] | nil) :: t() | {:error, String.t()}
HostMem
Positional Arguments
- rows: integer()
- cols: integer()
- type: integer()
Keyword Arguments
- alloc_type: HostMem_AllocType.
Return
- self: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>
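The constructor variants above can be sketched as follows (assumes a CUDA-enabled Evision build; `cv_8UC1/0` and `cv_HostMem_SHARED/0` from `Evision.Constant` are assumptions of this sketch):

```elixir
# Default-allocated, empty pinned buffer.
h0 = Evision.CUDA.HostMem.hostMem()

# Explicit rows/cols/type form.
h1 = Evision.CUDA.HostMem.hostMem(480, 640, Evision.Constant.cv_8UC1())

# Same, but requesting SHARED (mapped) memory via the alloc_type keyword.
h2 =
  Evision.CUDA.HostMem.hostMem(480, 640, Evision.Constant.cv_8UC1(),
    alloc_type: Evision.Constant.cv_HostMem_SHARED()
  )
```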
@spec isContinuous(Keyword.t()) :: any() | {:error, String.t()}
@spec isContinuous(t()) :: boolean() | {:error, String.t()}
isContinuous
Maps CPU memory to GPU address space and creates the cuda::GpuMat header without reference counting for it.
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: bool
This can be done only if memory was allocated with the SHARED flag and if it is supported by the hardware. Laptops often share video and CPU memory, so address spaces can be mapped, which eliminates an extra copy.
Python prototype (for reference only):
isContinuous() -> retval
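A sketch of allocating shared (mapped) memory and checking continuity; `cv_8UC1/0` and `cv_HostMem_SHARED/0` from `Evision.Constant` are assumed names in this build:

```elixir
# Request SHARED allocation so the buffer can be mapped into GPU address space.
host =
  Evision.CUDA.HostMem.hostMem(256, 256, Evision.Constant.cv_8UC1(),
    alloc_type: Evision.Constant.cv_HostMem_SHARED()
  )
# A freshly allocated HostMem is stored as one contiguous block.
Evision.CUDA.HostMem.isContinuous(host)
```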
reshape
Positional Arguments
- self: Evision.CUDA.HostMem.t()
- cn: integer()
Keyword Arguments
- rows: integer()
Return
- retval: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
reshape(cn[, rows]) -> retval
reshape
Positional Arguments
- self: Evision.CUDA.HostMem.t()
- cn: integer()
Keyword Arguments
- rows: integer()
Return
- retval: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
reshape(cn[, rows]) -> retval
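As in OpenCV's Mat.reshape, the channel count is reinterpreted without copying data; a sketch (CUDA-enabled build assumed, `cv_8UC1/0` from `Evision.Constant` is an assumption):

```elixir
# A 4x6 single-channel buffer reinterpreted as 3-channel: 4x6x1 -> 4x2x3,
# same underlying memory, total element count preserved.
host = Evision.CUDA.HostMem.hostMem(4, 6, Evision.Constant.cv_8UC1())
reshaped = Evision.CUDA.HostMem.reshape(host, 3)
Evision.CUDA.HostMem.channels(reshaped)
```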
@spec size(Keyword.t()) :: any() | {:error, String.t()}
@spec size(t()) :: {number(), number()} | {:error, String.t()}
size
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: Size
Python prototype (for reference only):
size() -> retval
@spec step1(Keyword.t()) :: any() | {:error, String.t()}
@spec step1(t()) :: integer() | {:error, String.t()}
step1
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: size_t
Python prototype (for reference only):
step1() -> retval
swap
Positional Arguments
- self: Evision.CUDA.HostMem.t()
- b: Evision.CUDA.HostMem.t()
Python prototype (for reference only):
swap(b) -> None
@spec type(Keyword.t()) :: any() | {:error, String.t()}
@spec type(t()) :: integer() | {:error, String.t()}
type
Positional Arguments
- self: Evision.CUDA.HostMem.t()
Return
- retval: integer()
Python prototype (for reference only):
type() -> retval
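The property getters above (type, depth, channels, elemSize, elemSize1) compose as in this sketch; a CUDA-enabled build is assumed and `cv_16SC2/0` from `Evision.Constant` is an assumed constant name:

```elixir
host = Evision.CUDA.HostMem.hostMem(8, 8, Evision.Constant.cv_16SC2())

Evision.CUDA.HostMem.type(host)      # combined type code (CV_16SC2)
Evision.CUDA.HostMem.depth(host)     # per-channel depth (CV_16S)
Evision.CUDA.HostMem.channels(host)  # 2
Evision.CUDA.HostMem.elemSize(host)  # bytes per element: 2 channels x 2 bytes = 4
Evision.CUDA.HostMem.elemSize1(host) # bytes per single channel value: 2
```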