Evision.CUDA.Stream (Evision v0.2.9)

Summary

Types

t()

Type that represents a CUDA.Stream struct.

Functions

cudaPtr(named_args)

null()

Adds a callback to be called on the host after all currently enqueued items in the stream have completed.

queryIfComplete(named_args)

Returns true if the current stream queue is finished. Otherwise, it returns false.

stream()

Stream

stream(named_args)

Variant 1: Creates a new Stream using the cudaFlags argument to determine the behavior of the stream.

waitEvent(named_args)

Makes a compute stream wait on an event.

waitForCompletion(named_args)

Blocks the current CPU thread until all operations in the stream are complete.

Types

@type t() :: %Evision.CUDA.Stream{ref: reference()}

Type that represents a CUDA.Stream struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

@spec cudaPtr(Keyword.t()) :: any() | {:error, String.t()}
@spec cudaPtr(t()) :: :ok | {:error, String.t()}

cudaPtr

Positional Arguments
  • self: Evision.CUDA.Stream.t()
Return
  • retval: void*

Python prototype (for reference only):

cudaPtr() -> retval
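
A minimal usage sketch (assuming a CUDA-enabled Evision build): create a stream and fetch the raw CUDA stream handle that backs it.

stream = Evision.CUDA.Stream.stream()
# raw CUDA stream handle (void*) backing this Evision.CUDA.Stream
raw_handle = Evision.CUDA.Stream.cudaPtr(stream)
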
null()

@spec null(Keyword.t()) :: any() | {:error, String.t()}
@spec null() :: t() | {:error, String.t()}

Adds a callback to be called on the host after all currently enqueued items in the stream have completed.

Return
  • retval: Evision.CUDA.Stream.t()

Note: Callbacks must not make any CUDA API calls. Callbacks must not perform any synchronization that may depend on outstanding device work or other callbacks that are not mandated to run earlier. Callbacks without a mandated order (in independent streams) execute in undefined order and may be serialized.

Python prototype (for reference only):

Null() -> retval
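
A minimal sketch: fetch the stream object returned by null/0, which, per the spec above, is an Evision.CUDA.Stream.t().

default_stream = Evision.CUDA.Stream.null()
# the returned struct can be passed wherever an Evision.CUDA.Stream.t() is expected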

queryIfComplete(named_args)

@spec queryIfComplete(Keyword.t()) :: any() | {:error, String.t()}
@spec queryIfComplete(t()) :: boolean() | {:error, String.t()}

Returns true if the current stream queue is finished. Otherwise, it returns false.

Positional Arguments
  • self: Evision.CUDA.Stream.t()
Return
  • retval: bool

Python prototype (for reference only):

queryIfComplete() -> retval
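
A usage sketch, assuming a CUDA-capable device is available: a stream with nothing enqueued is expected to report completion.

stream = Evision.CUDA.Stream.stream()
# an idle stream with no pending work should report completion
Evision.CUDA.Stream.queryIfComplete(stream)
#=> true
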
@spec stream() :: t() | {:error, String.t()}

Stream

Return
  • self: Evision.CUDA.Stream.t()

Python prototype (for reference only):

Stream() -> <cuda_Stream object>
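
A minimal sketch, assuming a CUDA-enabled Evision build: create a default stream and match on the struct defined in the Types section.

stream = Evision.CUDA.Stream.stream()
%Evision.CUDA.Stream{ref: _ref} = stream
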
stream(named_args)

@spec stream(Keyword.t()) :: any() | {:error, String.t()}
@spec stream(integer()) :: t() | {:error, String.t()}
@spec stream(reference()) :: t() | {:error, String.t()}

Variant 1:

Creates a new Stream using the cudaFlags argument to determine the behavior of the stream.

Positional Arguments
  • cudaFlags: size_t
Return
  • self: Evision.CUDA.Stream.t()

Note: The cudaFlags parameter is passed to the underlying API cudaStreamCreateWithFlags() and supports the same parameter values.

// creates an OpenCV cuda::Stream that manages an asynchronous, non-blocking,
// non-default CUDA stream
cv::cuda::Stream cvStream(cudaStreamNonBlocking);

Python prototype (for reference only):

Stream(cudaFlags) -> <cuda_Stream object>
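
An Elixir counterpart of the C++ snippet above (a sketch; the value 1 for cudaStreamNonBlocking comes from the CUDA runtime headers, not from this page):

# cudaStreamNonBlocking (0x01): an asynchronous, non-blocking,
# non-default CUDA stream, mirroring the C++ example above
non_blocking_stream = Evision.CUDA.Stream.stream(1)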

Variant 2:

Stream

Positional Arguments
  • allocator: GpuMat::Allocator
Return
  • self: Evision.CUDA.Stream.t()

Python prototype (for reference only):

Stream(allocator) -> <cuda_Stream object>
waitEvent(named_args)

@spec waitEvent(Keyword.t()) :: any() | {:error, String.t()}
@spec waitEvent(t(), Evision.CUDA.Event.t()) :: t() | {:error, String.t()}

Makes a compute stream wait on an event.

Positional Arguments
  • self: Evision.CUDA.Stream.t()
  • event: Evision.CUDA.Event.t()

Python prototype (for reference only):

waitEvent(event) -> None
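
A hedged sketch of cross-stream synchronization. The Evision.CUDA.Event calls used here (event/0 and record/2) are assumptions modeled on the OpenCV cv::cuda::Event API and are not documented on this page:

# hypothetical Evision.CUDA.Event calls; adjust names to the actual module
event = Evision.CUDA.Event.event()
producer = Evision.CUDA.Stream.stream()
consumer = Evision.CUDA.Stream.stream()
# record the event on the producer stream, then make the consumer stream
# wait for it before executing any work enqueued afterwards
Evision.CUDA.Event.record(event, producer)
Evision.CUDA.Stream.waitEvent(consumer, event)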

waitForCompletion(named_args)

@spec waitForCompletion(Keyword.t()) :: any() | {:error, String.t()}
@spec waitForCompletion(t()) :: t() | {:error, String.t()}

Blocks the current CPU thread until all operations in the stream are complete.

Positional Arguments
  • self: Evision.CUDA.Stream.t()

Python prototype (for reference only):

waitForCompletion() -> None
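
A minimal synchronization sketch, assuming a CUDA-enabled Evision build:

stream = Evision.CUDA.Stream.stream()
# enqueue asynchronous work on `stream` here, then block the calling
# process until everything queued on the stream has finished
Evision.CUDA.Stream.waitForCompletion(stream)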