Evision.CUDA.Stream (Evision v0.1.21)

Summary

Types

t()

Type that represents an Evision.CUDA.Stream struct.

Functions

cudaPtr

Returns the raw pointer to the underlying CUDA stream.

null

Adds a callback to be called on the host after all currently enqueued items in the stream have completed.

queryIfComplete

Returns true if the current stream queue is finished; otherwise, returns false.

stream

Variant 1: creates a new Stream using the cudaFlags argument to determine the behaviors of the stream.

waitEvent

Makes a compute stream wait on an event.

waitForCompletion

Blocks the current CPU thread until all operations in the stream are complete.

Types

@type t() :: %Evision.CUDA.Stream{ref: reference()}

Type that represents an Evision.CUDA.Stream struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

@spec cudaPtr(t()) :: :ok | {:error, String.t()}

cudaPtr

Returns the raw pointer to the underlying CUDA stream.

Positional Arguments
  • self: Evision.CUDA.Stream.t()
Return
  • retval: void*

Python prototype (for reference only):

cudaPtr() -> retval
@spec null() :: t() | {:error, String.t()}

null

Adds a callback to be called on the host after all currently enqueued items in the stream have completed.

Return
  • retval: Evision.CUDA.Stream.t()

Note: Callbacks must not make any CUDA API calls. Callbacks must not perform any synchronization that may depend on outstanding device work or other callbacks that are not mandated to run earlier. Callbacks without a mandated order (in independent streams) execute in undefined order and may be serialized.

Python prototype (for reference only):

Null() -> retval
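For reference, a minimal hedged sketch of calling `null/0` (assuming a CUDA-enabled build of Evision; the struct shape comes from the `t()` type above):

```elixir
# Hedged sketch, assuming a CUDA-enabled build of Evision.
# Per the @spec, null/0 returns an Evision.CUDA.Stream struct
# (or {:error, reason} on failure).
stream = Evision.CUDA.Stream.null()

# The t() type documents the struct shape: a single :ref field
# holding the underlying Erlang resource.
%Evision.CUDA.Stream{ref: _ref} = stream
```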
@spec queryIfComplete(t()) :: boolean() | {:error, String.t()}

queryIfComplete

Returns true if the current stream queue is finished; otherwise, returns false.

Positional Arguments
  • self: Evision.CUDA.Stream.t()
Return
  • retval: bool

Python prototype (for reference only):

queryIfComplete() -> retval
@spec stream() :: t() | {:error, String.t()}

Stream

Return
  • retval: Evision.CUDA.Stream.t()

Python prototype (for reference only):

Stream() -> <cuda_Stream object>
@spec stream(integer()) :: t() | {:error, String.t()}
@spec stream(reference()) :: t() | {:error, String.t()}

Variant 1:

creates a new Stream using the cudaFlags argument to determine the behaviors of the stream

Positional Arguments
  • cudaFlags: size_t
Return
  • retval: Evision.CUDA.Stream.t()

Note: The cudaFlags parameter is passed to the underlying API cudaStreamCreateWithFlags() and supports the same parameter values.

// creates an OpenCV cuda::Stream that manages an asynchronous, non-blocking,
// non-default CUDA stream
cv::cuda::Stream cvStream(cudaStreamNonBlocking);

Python prototype (for reference only):

Stream(cudaFlags) -> <cuda_Stream object>

Variant 2:

Stream

Positional Arguments
  • allocator: Ptr<GpuMat::Allocator>
Return
  • retval: Evision.CUDA.Stream.t()

Python prototype (for reference only):

Stream(allocator) -> <cuda_Stream object>
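The C++ snippet under Variant 1 has a direct Elixir counterpart via `stream/1`. A hedged sketch (assuming a CUDA-enabled build of Evision; the flag value `1` corresponds to `cudaStreamNonBlocking` in the CUDA runtime headers):

```elixir
# Hedged sketch, assuming a CUDA-enabled build of Evision.
# cudaStreamNonBlocking is defined as 0x01 in the CUDA runtime API.
cuda_stream_non_blocking = 1

# Default stream, per stream/0:
_default = Evision.CUDA.Stream.stream()

# Asynchronous, non-blocking, non-default stream, mirroring the
# cv::cuda::Stream cvStream(cudaStreamNonBlocking) snippet above:
_non_blocking = Evision.CUDA.Stream.stream(cuda_stream_non_blocking)
```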
@spec waitEvent(t(), Evision.CUDA.Event.t()) :: :ok | {:error, String.t()}

waitEvent

Makes a compute stream wait on an event.

Positional Arguments
  • self: Evision.CUDA.Stream.t()
  • event: Evision.CUDA.Event.t()

Python prototype (for reference only):

waitEvent(event) -> None
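`waitEvent/2` enqueues the wait on the device rather than blocking the calling CPU thread. A hedged sketch (the `Evision.CUDA.Event` constructor is not documented in this section, so the event's creation is deliberately left elided):

```elixir
# Hedged sketch, assuming a CUDA-enabled build of Evision.
stream = Evision.CUDA.Stream.stream()

# `event` must be an Evision.CUDA.Event.t() recorded on another stream;
# its construction is outside this section and is elided here.
# Work enqueued on `stream` after this call waits (on the device) for
# `event` to complete; the calling CPU thread does not block.
#
# :ok = Evision.CUDA.Stream.waitEvent(stream, event)
```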
@spec waitForCompletion(t()) :: :ok | {:error, String.t()}

waitForCompletion

Blocks the current CPU thread until all operations in the stream are complete.

Positional Arguments
  • self: Evision.CUDA.Stream.t()

Python prototype (for reference only):

waitForCompletion() -> None
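Putting the two synchronization calls together, a hedged end-to-end sketch (assuming a CUDA-enabled build of Evision; the enqueued work itself is elided, since which operations accept a stream argument depends on the rest of the Evision API):

```elixir
# Hedged sketch, assuming a CUDA-enabled build of Evision.
stream = Evision.CUDA.Stream.stream()

# ... enqueue asynchronous GPU work that takes `stream` here ...

# Non-blocking check: true once everything enqueued so far has finished.
if Evision.CUDA.Stream.queryIfComplete(stream) do
  IO.puts("stream queue already drained")
end

# Blocking: park this CPU thread until the stream is fully drained.
:ok = Evision.CUDA.Stream.waitForCompletion(stream)
```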