Evision.CUDA.GpuMat (Evision v0.2.14)
Summary

Types
- t() - Type that represents an Evision.CUDA.GpuMat struct.

Functions
- adjustROI
- assignTo
- channels
- clone
- col
- colRange
- convertTo (several variants)
- copyTo (several variants)
- create
- cudaPtr
- defaultAllocator
- depth
- download - Performs data download from GpuMat (blocking and non-blocking variants)
- elemSize, elemSize1
- empty
- from_pointer - Create CUDA GpuMat from a shared CUDA device pointer (optionally with a new shape)
- getStdAllocator
- GpuMat constructors (several variants)
- isContinuous
- locateROI
- release
- reshape
- row
- rowRange
- setDefaultAllocator
- setTo (several variants)
- size
- step1
- swap
- to_pointer - Get raw pointers
- type
- updateContinuityFlag
- upload - Performs data upload to GpuMat (blocking and non-blocking variants)
Types
@type t() :: %Evision.CUDA.GpuMat{
  channels: integer(),
  device_id: integer() | nil,
  elemSize: integer(),
  raw_type: integer(),
  ref: reference(),
  shape: tuple(),
  step: integer(),
  type: Evision.Mat.mat_type()
}
Type that represents an Evision.CUDA.GpuMat struct.
- channels: int. The number of matrix channels.
- type: Evision.Mat.mat_type(). Type of the matrix elements, following :nx's convention.
- raw_type: int. The raw value returned from int cv::Mat::type().
- shape: tuple. The shape of the matrix.
- elemSize: integer(). Element size in bytes.
- step: integer(). Number of bytes between two consecutive rows. When there are no gaps between successive rows, the value of step is equal to the number of columns times the element size.
- device_id: integer() | nil. nil if currently there's no GPU memory allocated for the GpuMat resource.
- ref: reference. The underlying Erlang resource variable.
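As a sketch of how these fields are used (assuming a CUDA-enabled Evision build with a visible GPU; Evision.Mat.zeros/2 is an assumed helper for producing a host matrix):

```elixir
# Allocate a single-channel 8-bit host matrix (Evision.Mat.zeros/2 is assumed).
host = Evision.Mat.zeros({480, 640}, :u8)

# Upload to the GPU via the GpuMat(arr) constructor documented below.
gpu = Evision.CUDA.GpuMat.gpuMat(host)

gpu.channels   # number of matrix channels (1 here)
gpu.shape      # the shape of the matrix
gpu.device_id  # GPU device id, or nil if no device memory is allocated
gpu.step       # bytes between two consecutive rows (may include padding)
```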
Functions
adjustROI
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- dtop: integer()
- dbottom: integer()
- dleft: integer()
- dright: integer()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
adjustROI(dtop, dbottom, dleft, dright) -> retval
assignTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- m: Evision.CUDA.GpuMat.t()
Keyword Arguments
- type: integer().
Python prototype (for reference only):
assignTo(m[, type]) -> None
assignTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- m: Evision.CUDA.GpuMat.t()
Keyword Arguments
- type: integer().
Python prototype (for reference only):
assignTo(m[, type]) -> None
@spec channels(Keyword.t()) :: any() | {:error, String.t()}
@spec channels(t()) :: integer() | {:error, String.t()}
channels
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
integer()
Python prototype (for reference only):
channels() -> retval
@spec clone(Keyword.t()) :: any() | {:error, String.t()}
@spec clone(t()) :: t() | {:error, String.t()}
clone
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
clone() -> retval
col
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- x: integer()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
col(x) -> retval
colRange
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- r: Range
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
colRange(r) -> retval
colRange
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- startcol: integer()
- endcol: integer()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
colRange(startcol, endcol) -> retval
convertTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- rtype: integer()
Keyword Arguments
- alpha: double.
- beta: double.
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
convertTo(rtype[, dst[, alpha[, beta]]]) -> dst
@spec convertTo(t(), integer(), [alpha: term(), beta: term()] | nil) :: t() | {:error, String.t()}
@spec convertTo(t(), integer(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}
Variant 1:
convertTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- rtype: integer()
- stream: Evision.CUDA.Stream.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
convertTo(rtype, stream[, dst]) -> dst
Variant 2:
convertTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- rtype: integer()
Keyword Arguments
- alpha: double.
- beta: double.
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
convertTo(rtype[, dst[, alpha[, beta]]]) -> dst
@spec convertTo( t(), integer(), Evision.CUDA.Stream.t(), [{atom(), term()}, ...] | nil ) :: t() | {:error, String.t()}
convertTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- rtype: integer()
- stream: Evision.CUDA.Stream.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
convertTo(rtype, stream[, dst]) -> dst
@spec convertTo(t(), integer(), number(), number(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}
convertTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- rtype: integer()
- alpha: double
- beta: double
- stream: Evision.CUDA.Stream.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
convertTo(rtype, alpha, beta, stream[, dst]) -> dst
@spec convertTo( t(), integer(), number(), number(), Evision.CUDA.Stream.t(), [{atom(), term()}, ...] | nil ) :: t() | {:error, String.t()}
convertTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- rtype: integer()
- alpha: double
- beta: double
- stream: Evision.CUDA.Stream.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
convertTo(rtype, alpha, beta, stream[, dst]) -> dst
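A hedged sketch of the variants above (the type constant Evision.Constant.cv_32F/0 and the stream constructor Evision.CUDA.Stream.stream/0 are assumed names; requires a CUDA-enabled build):

```elixir
gpu = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({32, 32}, :u8))

# Scale u8 [0, 255] into f32 [0.0, 1.0]: dst = src * alpha + beta
f32 =
  Evision.CUDA.GpuMat.convertTo(gpu, Evision.Constant.cv_32F(),
    alpha: 1.0 / 255.0,
    beta: 0.0
  )

# The asynchronous variant takes the stream as a positional argument.
stream = Evision.CUDA.Stream.stream()
f32_async = Evision.CUDA.GpuMat.convertTo(gpu, Evision.Constant.cv_32F(), stream)
```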
@spec copyTo(Keyword.t()) :: any() | {:error, String.t()}
@spec copyTo(t()) :: t() | {:error, String.t()}
copyTo
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
copyTo([, dst]) -> dst
@spec copyTo(t(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
@spec copyTo(t(), t()) :: t() | {:error, String.t()}
@spec copyTo(t(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}
Variant 1:
copyTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- mask: Evision.CUDA.GpuMat.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
copyTo(mask[, dst]) -> dst
Variant 2:
copyTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- stream: Evision.CUDA.Stream.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
copyTo(stream[, dst]) -> dst
Variant 3:
copyTo
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
copyTo([, dst]) -> dst
@spec copyTo(t(), t(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
@spec copyTo(t(), Evision.CUDA.Stream.t(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
@spec copyTo(t(), t(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}
Variant 1:
copyTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- mask: Evision.CUDA.GpuMat.t()
- stream: Evision.CUDA.Stream.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
copyTo(mask, stream[, dst]) -> dst
Variant 2:
copyTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- mask: Evision.CUDA.GpuMat.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
copyTo(mask[, dst]) -> dst
Variant 3:
copyTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- stream: Evision.CUDA.Stream.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
copyTo(stream[, dst]) -> dst
@spec copyTo(t(), t(), Evision.CUDA.Stream.t(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
copyTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- mask: Evision.CUDA.GpuMat.t()
- stream: Evision.CUDA.Stream.t()
Return
- dst:
Evision.CUDA.GpuMat.t().
Python prototype (for reference only):
copyTo(mask, stream[, dst]) -> dst
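To illustrate the mask and stream variants above, a minimal sketch (Evision.Mat.ones/2 and Evision.CUDA.Stream.stream/0 are assumed helper names; requires a CUDA-enabled build):

```elixir
src  = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({64, 64}, :u8))
mask = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.ones({64, 64}, :u8))

# Mask variant: only elements where mask is non-zero are copied.
dst = Evision.CUDA.GpuMat.copyTo(src, mask)

# Mask + stream variant: the copy is queued on the stream instead of blocking.
stream = Evision.CUDA.Stream.stream()
dst_async = Evision.CUDA.GpuMat.copyTo(src, mask, stream)
```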
create
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- size: Size
- type: integer()
Python prototype (for reference only):
create(size, type) -> None
create
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- rows: integer()
- cols: integer()
- type: integer()
Python prototype (for reference only):
create(rows, cols, type) -> None
@spec cudaPtr(Keyword.t()) :: any() | {:error, String.t()}
@spec cudaPtr(t()) :: :ok | {:error, String.t()}
cudaPtr
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
void*
Python prototype (for reference only):
cudaPtr() -> retval
defaultAllocator
Return
- retval:
GpuMat::Allocator*
Python prototype (for reference only):
defaultAllocator() -> retval
@spec depth(Keyword.t()) :: any() | {:error, String.t()}
@spec depth(t()) :: integer() | {:error, String.t()}
depth
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
integer()
Python prototype (for reference only):
depth() -> retval
@spec download(Keyword.t()) :: any() | {:error, String.t()}
@spec download(t()) :: Evision.Mat.t() | {:error, String.t()}
Performs data download from GpuMat (Blocking call)
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- dst:
Evision.Mat.t().
This function copies data from device memory to host memory. Since this is a blocking call, it is guaranteed that the copy operation has finished when this function returns.
Python prototype (for reference only):
download([, dst]) -> dst
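A round-trip sketch of the blocking call (assumes a CUDA-enabled build; Evision.Mat.zeros/2 is an assumed helper):

```elixir
host = Evision.Mat.zeros({8, 8}, :u8)
gpu  = Evision.CUDA.GpuMat.gpuMat(host)

# Blocking download: when this returns, the device-to-host copy is complete.
back = Evision.CUDA.GpuMat.download(gpu)
# back is an Evision.Mat with the same shape and type as host
```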
@spec download(t(), [{atom(), term()}, ...] | nil) :: Evision.Mat.t() | {:error, String.t()}
@spec download(t(), Evision.CUDA.Stream.t()) :: Evision.Mat.t() | {:error, String.t()}
Variant 1:
Performs data download from GpuMat (Non-Blocking call)
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- stream: Evision.CUDA.Stream.t()
Return
- dst:
Evision.Mat.t().
This function copies data from device memory to host memory. Since this is a non-blocking call, it may return before the copy operation has finished. The copy operation may be overlapped with operations in other non-default streams if stream is not the default stream and dst is HostMem allocated with the HostMem::PAGE_LOCKED option.
Python prototype (for reference only):
download(stream[, dst]) -> dst
Variant 2:
Performs data download from GpuMat (Blocking call)
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- dst:
Evision.Mat.t().
This function copies data from device memory to host memory. Since this is a blocking call, it is guaranteed that the copy operation has finished when this function returns.
Python prototype (for reference only):
download([, dst]) -> dst
@spec download(t(), Evision.CUDA.Stream.t(), [{atom(), term()}, ...] | nil) :: Evision.Mat.t() | {:error, String.t()}
Performs data download from GpuMat (Non-Blocking call)
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- stream: Evision.CUDA.Stream.t()
Return
- dst:
Evision.Mat.t().
This function copies data from device memory to host memory. Since this is a non-blocking call, it may return before the copy operation has finished. The copy operation may be overlapped with operations in other non-default streams if stream is not the default stream and dst is HostMem allocated with the HostMem::PAGE_LOCKED option.
Python prototype (for reference only):
download(stream[, dst]) -> dst
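A sketch of the non-blocking variant; Evision.CUDA.Stream.stream/0 and Evision.CUDA.Stream.waitForCompletion/1 are assumed Elixir-side names for the underlying cv::cuda::Stream API:

```elixir
stream = Evision.CUDA.Stream.stream()
gpu = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({8, 8}, :u8))

# Queue the device-to-host copy; this may return before the copy finishes.
dst = Evision.CUDA.GpuMat.download(gpu, stream)

# Synchronize the stream before reading dst on the host.
Evision.CUDA.Stream.waitForCompletion(stream)
```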
@spec elemSize1(Keyword.t()) :: any() | {:error, String.t()}
@spec elemSize1(t()) :: integer() | {:error, String.t()}
elemSize1
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
size_t
Python prototype (for reference only):
elemSize1() -> retval
@spec elemSize(Keyword.t()) :: any() | {:error, String.t()}
@spec elemSize(t()) :: integer() | {:error, String.t()}
elemSize
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
size_t
Python prototype (for reference only):
elemSize() -> retval
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}
empty
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
bool
Python prototype (for reference only):
empty() -> retval
@spec from_pointer(
  %Evision.IPCHandle.Local{
    channels: term(), cols: term(), device_id: term(),
    handle: term(), rows: term(), step: term(), type: term()
  }
  | %Evision.IPCHandle.CUDA{
    channels: term(), cols: term(), device_id: term(),
    handle: term(), rows: term(), step: term(), type: term()
  },
  Keyword.t()
) :: t() | {:error, String.t()}
Create CUDA GpuMat from a shared CUDA device pointer with new shape
Positional Arguments
- handle: either an %Evision.IPCHandle.Local{} or an %Evision.IPCHandle.CUDA{}.
- new_shape: tuple(). The shape of the shared image. It's expected to be either:
  - {height, width, channels}, for any 2D image that has 1 or multiple channels
  - {height, width}, for any 1-channel 2D image
  - {rows}
@spec from_pointer([integer()], atom() | {atom(), integer()}, tuple(), [ {:device_id, non_neg_integer()} ]) :: t() | {:error, String.t()}
Create CUDA GpuMat from a shared CUDA device pointer
Positional Arguments
- device_pointer: list(integer()). This can be either a local pointer or an IPC pointer. However, please note that IPC pointers have to be generated from another OS process (an Erlang process doesn't count).
- dtype: tuple() | atom(). Data type.
- shape: tuple(). The shape of the shared image. It's expected to be either:
  - {height, width, channels}, for any 2D image that has 1 or multiple channels
  - {height, width}, for any 1-channel 2D image
  - {rows}
Keyword Arguments
- device_id: non_neg_integer. GPU Device ID, defaults to 0.
getStdAllocator
Return
- retval:
GpuMat::Allocator*
Python prototype (for reference only):
getStdAllocator() -> retval
GpuMat
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat([, allocator]) -> <cuda_GpuMat object>
@spec gpuMat(Keyword.t()) :: any() | {:error, String.t()}
@spec gpuMat([{:allocator, term()}] | nil) :: t() | {:error, String.t()}
@spec gpuMat(Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
@spec gpuMat(t()) :: t() | {:error, String.t()}
@spec gpuMat(t()) :: t() | {:error, String.t()}
Variant 1:
GpuMat
Positional Arguments
- arr:
Evision.Mat
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(arr[, allocator]) -> <cuda_GpuMat object>
Variant 2:
GpuMat
Positional Arguments
- arr:
Evision.CUDA.GpuMat.t()
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(arr[, allocator]) -> <cuda_GpuMat object>
Variant 3:
GpuMat
Positional Arguments
- m:
Evision.CUDA.GpuMat.t()
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(m) -> <cuda_GpuMat object>
Variant 4:
GpuMat
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat([, allocator]) -> <cuda_GpuMat object>
@spec gpuMat(Evision.Mat.maybe_mat_in(), [{:allocator, term()}] | nil) :: t() | {:error, String.t()}
@spec gpuMat(t(), [{:allocator, term()}] | nil) :: t() | {:error, String.t()}
@spec gpuMat(t(), {number(), number(), number(), number()}) :: t() | {:error, String.t()}
@spec gpuMat( {number(), number()}, integer() ) :: t() | {:error, String.t()}
Variant 1:
GpuMat
Positional Arguments
- m: Evision.CUDA.GpuMat.t()
- roi: Rect
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(m, roi) -> <cuda_GpuMat object>
Variant 2:
GpuMat
Positional Arguments
- size: Size
- type: integer()
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(size, type[, allocator]) -> <cuda_GpuMat object>
Variant 3:
GpuMat
Positional Arguments
- arr:
Evision.Mat
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(arr[, allocator]) -> <cuda_GpuMat object>
Variant 4:
GpuMat
Positional Arguments
- arr:
Evision.CUDA.GpuMat.t()
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(arr[, allocator]) -> <cuda_GpuMat object>
@spec gpuMat({number(), number()}, integer(), [{:allocator, term()}] | nil) :: t() | {:error, String.t()}
@spec gpuMat(t(), {integer(), integer()} | :all, {integer(), integer()} | :all) :: t() | {:error, String.t()}
@spec gpuMat({number(), number()}, integer(), Evision.scalar()) :: t() | {:error, String.t()}
@spec gpuMat(integer(), integer(), integer()) :: t() | {:error, String.t()}
Variant 1:
GpuMat
Positional Arguments
- m: Evision.CUDA.GpuMat.t()
- rowRange: {integer(), integer()} | :all
- colRange: {integer(), integer()} | :all
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(m, rowRange, colRange) -> <cuda_GpuMat object>
Variant 2:
GpuMat
Positional Arguments
- size: Size
- type: integer()
- s: Evision.scalar()
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(size, type, s[, allocator]) -> <cuda_GpuMat object>
Variant 3:
GpuMat
Positional Arguments
- rows:
integer() - cols:
integer() - type:
integer()
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(rows, cols, type[, allocator]) -> <cuda_GpuMat object>
Variant 4:
GpuMat
Positional Arguments
- size:
Size - type:
integer()
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(size, type[, allocator]) -> <cuda_GpuMat object>
@spec gpuMat( {number(), number()}, integer(), Evision.scalar(), [{:allocator, term()}] | nil ) :: t() | {:error, String.t()}
@spec gpuMat(integer(), integer(), integer(), [{:allocator, term()}] | nil) :: t() | {:error, String.t()}
@spec gpuMat(integer(), integer(), integer(), Evision.scalar()) :: t() | {:error, String.t()}
Variant 1:
GpuMat
Positional Arguments
- rows:
integer() - cols:
integer() - type:
integer() - s:
Evision.scalar()
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(rows, cols, type, s[, allocator]) -> <cuda_GpuMat object>
Variant 2:
GpuMat
Positional Arguments
- size:
Size - type:
integer() - s:
Evision.scalar()
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(size, type, s[, allocator]) -> <cuda_GpuMat object>
Variant 3:
GpuMat
Positional Arguments
- rows:
integer() - cols:
integer() - type:
integer()
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(rows, cols, type[, allocator]) -> <cuda_GpuMat object>
@spec gpuMat( integer(), integer(), integer(), Evision.scalar(), [{:allocator, term()}] | nil ) :: t() | {:error, String.t()}
GpuMat
Positional Arguments
- rows:
integer() - cols:
integer() - type:
integer() - s:
Evision.scalar()
Keyword Arguments
- allocator:
GpuMat_Allocator*.
Return
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
GpuMat(rows, cols, type, s[, allocator]) -> <cuda_GpuMat object>
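A few of the constructor variants above, as a sketch (Evision.Constant.cv_8UC3/0 is an assumed way to obtain the OpenCV type constant; requires a CUDA-enabled build):

```elixir
# GpuMat(rows, cols, type[, allocator])
a = Evision.CUDA.GpuMat.gpuMat(480, 640, Evision.Constant.cv_8UC3())

# GpuMat(size, type, s[, allocator]) - fill a BGR image with red at creation
b = Evision.CUDA.GpuMat.gpuMat({480, 640}, Evision.Constant.cv_8UC3(), {0, 0, 255, 0})

# GpuMat(arr[, allocator]) - upload an existing host matrix
c = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({480, 640}, :u8))
```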
@spec isContinuous(Keyword.t()) :: any() | {:error, String.t()}
@spec isContinuous(t()) :: boolean() | {:error, String.t()}
isContinuous
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
bool
Python prototype (for reference only):
isContinuous() -> retval
locateROI
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- wholeSize: Size
- ofs: Point
Python prototype (for reference only):
locateROI(wholeSize, ofs) -> None
@spec release(Keyword.t()) :: any() | {:error, String.t()}
@spec release(t()) :: t() | {:error, String.t()}
release
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
release() -> None
reshape
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- cn: integer()
Keyword Arguments
- rows: integer().
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
reshape(cn[, rows]) -> retval
reshape
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- cn: integer()
Keyword Arguments
- rows: integer().
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
reshape(cn[, rows]) -> retval
row
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- y: integer()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
row(y) -> retval
rowRange
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- r: Range
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
rowRange(r) -> retval
rowRange
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- startrow: integer()
- endrow: integer()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
rowRange(startrow, endrow) -> retval
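The row and column accessors above can be combined as in OpenCV; as with cv::cuda::GpuMat, they are expected to return headers onto the same device memory rather than copies. A sketch (CUDA-enabled build assumed):

```elixir
gpu = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({100, 200}, :u8))

row10 = Evision.CUDA.GpuMat.row(gpu, 10)           # single row
rows  = Evision.CUDA.GpuMat.rowRange(gpu, 10, 20)  # rows 10..19 (end-exclusive)
cols  = Evision.CUDA.GpuMat.colRange(gpu, 50, 60)  # cols 50..59 (end-exclusive)
```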
@spec setDefaultAllocator(Keyword.t()) :: any() | {:error, String.t()}
@spec setDefaultAllocator(reference()) :: :ok | {:error, String.t()}
setDefaultAllocator
Positional Arguments
- allocator:
GpuMat_Allocator*
Python prototype (for reference only):
setDefaultAllocator(allocator) -> None
@spec setTo(t(), Evision.scalar()) :: t() | {:error, String.t()}
setTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- s: Evision.scalar()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
setTo(s) -> retval
@spec setTo(t(), Evision.scalar(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
@spec setTo(t(), Evision.scalar(), t()) :: t() | {:error, String.t()}
@spec setTo(t(), Evision.scalar(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}
Variant 1:
setTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- s: Evision.scalar()
- mask: Evision.Mat
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
setTo(s, mask) -> retval
Variant 2:
setTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- s: Evision.scalar()
- mask: Evision.CUDA.GpuMat.t()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
setTo(s, mask) -> retval
Variant 3:
setTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- s: Evision.scalar()
- stream: Evision.CUDA.Stream.t()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
setTo(s, stream) -> retval
@spec setTo( t(), Evision.scalar(), Evision.Mat.maybe_mat_in(), Evision.CUDA.Stream.t() ) :: t() | {:error, String.t()}
@spec setTo(t(), Evision.scalar(), t(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}
Variant 1:
setTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- s: Evision.scalar()
- mask: Evision.Mat
- stream: Evision.CUDA.Stream.t()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
setTo(s, mask, stream) -> retval
Variant 2:
setTo
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- s: Evision.scalar()
- mask: Evision.CUDA.GpuMat.t()
- stream: Evision.CUDA.Stream.t()
Return
- retval:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
setTo(s, mask, stream) -> retval
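A sketch of the fill variants above (Evision.Mat.ones/2 and Evision.CUDA.Stream.stream/0 are assumed helpers, and a bare number standing in for an Evision.scalar() is an assumption; requires a CUDA-enabled build):

```elixir
gpu  = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({64, 64}, :u8))
mask = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.ones({64, 64}, :u8))

# Fill every element with 255
filled = Evision.CUDA.GpuMat.setTo(gpu, 255)

# Fill only where mask is non-zero, queued on a stream
stream = Evision.CUDA.Stream.stream()
masked = Evision.CUDA.GpuMat.setTo(gpu, 255, mask, stream)
```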
@spec size(Keyword.t()) :: any() | {:error, String.t()}
@spec size(t()) :: {number(), number()} | {:error, String.t()}
size
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
Size
Python prototype (for reference only):
size() -> retval
@spec step1(Keyword.t()) :: any() | {:error, String.t()}
@spec step1(t()) :: integer() | {:error, String.t()}
step1
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
size_t
Python prototype (for reference only):
step1() -> retval
swap
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- mat: Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
swap(mat) -> None
@spec to_pointer(t()) ::
  {:ok,
   %Evision.IPCHandle.Local{
     channels: term(), cols: term(), device_id: term(),
     handle: term(), rows: term(), step: term(), type: term()
   }}
  | {:error, String.t()}
Get raw pointers
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
Keyword Arguments
- mode: one of :local, :cuda_ipc, :host_ipc. Defaults to :local.
  - :local: Get a local CUDA pointer that can be used within the current OS process.
  - :cuda_ipc: Get a CUDA IPC pointer that can be used across OS processes.
  - :host_ipc: Get a host IPC pointer that can be used across OS processes.
@spec to_pointer(t(), [{:mode, :local | :cuda_ipc | :host_ipc}]) ::
  {:ok,
   %Evision.IPCHandle.Local{
     channels: term(), cols: term(), device_id: term(),
     handle: term(), rows: term(), step: term(), type: term()
   }
   | %Evision.IPCHandle.CUDA{
     channels: term(), cols: term(), device_id: term(),
     handle: term(), rows: term(), step: term(), type: term()
   }
   | %Evision.IPCHandle.Host{
     channels: term(), cols: term(), fd: term(), name: term(),
     rows: term(), size: term(), type: term()
   }}
  | {:error, String.t()}
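A sketch of sharing device memory without a copy, pairing to_pointer/2 with from_pointer/2 from above (passing the handle with an empty keyword list to from_pointer/2 is an assumption here; requires a CUDA-enabled build):

```elixir
gpu = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.zeros({480, 640}, :u8))

# A :local handle is only valid within the current OS process.
{:ok, handle} = Evision.CUDA.GpuMat.to_pointer(gpu, mode: :local)

# Re-wrap the same device memory as a new GpuMat (no copy is made).
shared = Evision.CUDA.GpuMat.from_pointer(handle, [])
```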
@spec type(Keyword.t()) :: any() | {:error, String.t()}
@spec type(t()) :: integer() | {:error, String.t()}
type
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Return
- retval:
integer()
Python prototype (for reference only):
type() -> retval
@spec updateContinuityFlag(Keyword.t()) :: any() | {:error, String.t()}
@spec updateContinuityFlag(t()) :: t() | {:error, String.t()}
updateContinuityFlag
Positional Arguments
- self:
Evision.CUDA.GpuMat.t()
Python prototype (for reference only):
updateContinuityFlag() -> None
@spec upload(t(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
@spec upload(t(), t()) :: t() | {:error, String.t()}
Variant 1:
Performs data upload to GpuMat (Blocking call)
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- arr: Evision.Mat
This function copies data from host memory to device memory. Since this is a blocking call, it is guaranteed that the copy operation has finished when this function returns.
Python prototype (for reference only):
upload(arr) -> None
Variant 2:
Performs data upload to GpuMat (Blocking call)
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- arr: Evision.CUDA.GpuMat.t()
This function copies data from host memory to device memory. Since this is a blocking call, it is guaranteed that the copy operation has finished when this function returns.
Python prototype (for reference only):
upload(arr) -> None
@spec upload(t(), Evision.Mat.maybe_mat_in(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}
@spec upload(t(), t(), Evision.CUDA.Stream.t()) :: t() | {:error, String.t()}
Variant 1:
Performs data upload to GpuMat (Non-Blocking call)
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- arr: Evision.Mat
- stream: Evision.CUDA.Stream.t()
This function copies data from host memory to device memory. Since this is a non-blocking call, it may return before the copy operation has finished. The copy operation may be overlapped with operations in other non-default streams if stream is not the default stream and dst is HostMem allocated with the HostMem::PAGE_LOCKED option.
Python prototype (for reference only):
upload(arr, stream) -> None
Variant 2:
Performs data upload to GpuMat (Non-Blocking call)
Positional Arguments
- self: Evision.CUDA.GpuMat.t()
- arr: Evision.CUDA.GpuMat.t()
- stream: Evision.CUDA.Stream.t()
This function copies data from host memory to device memory. Since this is a non-blocking call, it may return before the copy operation has finished. The copy operation may be overlapped with operations in other non-default streams if stream is not the default stream and dst is HostMem allocated with the HostMem::PAGE_LOCKED option.
Python prototype (for reference only):
upload(arr, stream) -> None
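A sketch of the non-blocking upload (Evision.CUDA.Stream.stream/0 and Evision.CUDA.Stream.waitForCompletion/1 are assumed names; per the note above, true overlap also requires page-locked host memory):

```elixir
stream = Evision.CUDA.Stream.stream()
gpu    = Evision.CUDA.GpuMat.gpuMat()              # empty GpuMat
host   = Evision.Mat.zeros({480, 640}, :u8)

# Queue the host-to-device copy; this may return before the copy finishes.
gpu = Evision.CUDA.GpuMat.upload(gpu, host, stream)

# Synchronize before reusing host or reading gpu.
Evision.CUDA.Stream.waitForCompletion(stream)
```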