Evision.CUDA (Evision v0.2.7)

Summary

Types

t()

Type that represents a CUDA struct.

Functions

Variant 1:

Computes an absolute value of each matrix element.

Variant 1:

Computes an absolute value of each matrix element.

Variant 1:

Computes per-element absolute difference of two matrices (or of a matrix and scalar).

Variant 1:

Computes per-element absolute difference of two matrices (or of a matrix and scalar).

Variant 1:

Returns the sum of absolute values for matrix elements.

Variant 1:

Returns the sum of absolute values for matrix elements.

Variant 1:

Computes a matrix-matrix or matrix-scalar sum.

Variant 1:

Computes a matrix-matrix or matrix-scalar sum.

Variant 1:

Computes the weighted sum of two arrays.

Variant 1:

Computes the weighted sum of two arrays.

Variant 1:

Composites two images using alpha opacity values contained in each image.

Variant 1:

Composites two images using alpha opacity values contained in each image.

Variant 1:

Performs bilateral filtering of the passed image.

Variant 1:

Performs bilateral filtering of the passed image.

Variant 1:

Performs linear blending of two images.

Variant 1:

Performs linear blending of two images.

Variant 1:

calcAbsSum

Variant 1:

calcAbsSum

Variant 1:

Calculates histogram for one channel 8-bit image.

Variant 1:

Calculates histogram for one channel 8-bit image confined in given mask.

Variant 1:

Calculates histogram for one channel 8-bit image confined in given mask.

Variant 1:

calcNorm

Variant 1:

calcNorm

Variant 1:

calcNormDiff

Variant 1:

calcNormDiff

Variant 1:

calcSqrSum

Variant 1:

calcSqrSum

Variant 1:

calcSum

Variant 1:

calcSum

Variant 1:

Converts Cartesian coordinates into polar.

Variant 1:

Converts Cartesian coordinates into polar.

Variant 1:

Compares elements of two matrices (or of a matrix and scalar).

Variant 1:

Compares elements of two matrices (or of a matrix and scalar).

Variant 1:

connectedComponents

Variant 1:

connectedComponents

Variant 1:

Computes the Connected Components Labeled image of a binary image.

Variant 1:

Computes the Connected Components Labeled image of a binary image.

Converts the spatial image moments returned from cuda::spatialMoments to cv::Moments.

Variant 1:

Forms a border around an image.

Variant 1:

Forms a border around an image.

Variant 1:

countNonZero

Variant 1:

countNonZero

Creates MOG2 Background Subtractor

Creates MOG2 Background Subtractor

Creates mixture-of-gaussian background subtractor

Creates mixture-of-gaussian background subtractor

Creates a normalized 2D box filter.

Creates a normalized 2D box filter.

Creates the maximum filter.

Creates the maximum filter.

Creates the minimum filter.

Creates the minimum filter.

Creates implementation for cuda::CannyEdgeDetector .

Creates implementation for cuda::CannyEdgeDetector .

Creates implementation for cuda::CLAHE .

Creates implementation for cuda::CLAHE .

Creates a vertical 1D box filter.

Creates a vertical 1D box filter.

Creates a continuous matrix.

Creates a continuous matrix.

Creates implementation for cuda::Convolution .

Creates implementation for cuda::Convolution .

Creates a generalized Deriv operator.

Creates a generalized Deriv operator.

Creates implementation for cuda::DFT.

Creates DisparityBilateralFilter object.

Creates DisparityBilateralFilter object.

Creates implementation for generalized Hough transform from @cite Ballard1981.

Creates implementation for generalized Hough transform from @cite Guil1999.

Creates implementation for cuda::CornersDetector .

Creates implementation for cuda::CornersDetector .

Variant 1:

Bindings overload to create a GpuMat from existing GPU memory.

Bindings overload to create a GpuMat from existing GPU memory.

Creates implementation for Harris cornerness criteria.

Creates implementation for Harris cornerness criteria.

Creates implementation for cuda::HoughLinesDetector .

Creates implementation for cuda::HoughLinesDetector .

Creates implementation for cuda::HoughSegmentDetector .

Creates implementation for cuda::HoughSegmentDetector .

Creates a Laplacian operator.

Creates a Laplacian operator.

Variant 1:

Creates a non-separable linear 2D filter.

Variant 1:

Creates a non-separable linear 2D filter.

Variant 1:

Creates implementation for cuda::LookUpTable .

Performs median filtering for each point of the source image.

Performs median filtering for each point of the source image.

Creates implementation for the minimum eigenvalue of a 2x2 derivative covariance matrix (the cornerness criteria).

Creates implementation for the minimum eigenvalue of a 2x2 derivative covariance matrix (the cornerness criteria).

Variant 1:

Creates a 2D morphological filter.

Variant 1:

Creates a 2D morphological filter.

Creates a horizontal 1D box filter.

Creates a horizontal 1D box filter.

Creates a vertical or horizontal Scharr operator.

Creates a vertical or horizontal Scharr operator.

Variant 1:

Creates a separable linear filter.

Creates StereoBeliefPropagation object.

Creates StereoBeliefPropagation object.

Creates StereoBM object.

Creates StereoBM object.

Creates StereoConstantSpaceBP object.

Creates StereoConstantSpaceBP object.

Creates StereoSGM object.

Creates StereoSGM object.

Creates implementation for cuda::TemplateMatching .

Creates implementation for cuda::TemplateMatching .

Variant 1:

Converts an image from one color space to another.

Variant 1:

Converts an image from one color space to another.

Variant 1:

Converts an image from Bayer pattern to RGB or grayscale.

Variant 1:

Converts an image from Bayer pattern to RGB or grayscale.

Variant 1:

Performs a forward or inverse discrete Fourier transform (1D or 2D) of the floating point matrix.

Variant 1:

Performs a forward or inverse discrete Fourier transform (1D or 2D) of the floating point matrix.

Variant 1:

Computes a matrix-matrix or matrix-scalar division.

Variant 1:

Computes a matrix-matrix or matrix-scalar division.

Variant 1:

Colors a disparity image.

Variant 1:

Colors a disparity image.

Ensures that the size of a matrix is big enough and the matrix has a proper type.

Ensures that the size of a matrix is big enough and the matrix has a proper type.

Variant 1:

Equalizes the histogram of a grayscale image.

Variant 1:

Equalizes the histogram of a grayscale image.

Computes levels with even distribution.

Computes levels with even distribution.

Variant 1:

Computes an exponent of each matrix element.

Variant 1:

Computes an exponent of each matrix element.

Performs image denoising using the Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising with several computational optimizations. Noise is expected to be Gaussian white noise.

Performs image denoising using the Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising with several computational optimizations. Noise is expected to be Gaussian white noise.

Modification of the fastNlMeansDenoising function for colored images.

Modification of the fastNlMeansDenoising function for colored images.

Variant 1:

findMinMax

Variant 1:

findMinMax

Variant 1:

findMinMaxLoc

Variant 1:

findMinMaxLoc

Variant 1:

Flips a 2D matrix around vertical, horizontal, or both axes.

Variant 1:

Flips a 2D matrix around vertical, horizontal, or both axes.

Variant 1:

Routines for correcting image color gamma.

Variant 1:

Routines for correcting image color gamma.

Variant 1:

Performs generalized matrix multiplication.

Variant 1:

Performs generalized matrix multiplication.

Returns the number of installed CUDA-enabled devices.

Returns the current device index set by cuda::setDevice or initialized by default.

Variant 1:

Calculates a histogram with evenly distributed bins.

Variant 1:

Calculates a histogram with evenly distributed bins.

Variant 1:

Calculates a histogram with bins determined by the levels array.

Variant 1:

Calculates a histogram with bins determined by the levels array.

Variant 1:

Checks if array elements lie between two scalars.

Variant 1:

Checks if array elements lie between two scalars.

Variant 1:

Computes an integral image.

Variant 1:

Computes an integral image.

Variant 1:

Computes a natural logarithm of absolute value of each matrix element.

Variant 1:

Computes a natural logarithm of absolute value of each matrix element.

Variant 1:

Performs pixel by pixel left shift of an image by a constant value.

Variant 1:

Performs pixel by pixel left shift of an image by a constant value.

Variant 1:

Computes magnitudes of complex matrix elements.

Variant 1:

magnitude

Variant 1:

magnitude

Variant 1:

Computes squared magnitudes of complex matrix elements.

Variant 1:

magnitudeSqr

Variant 1:

magnitudeSqr

Variant 1:

Computes the per-element maximum of two matrices (or a matrix and a scalar).

Variant 1:

Computes the per-element maximum of two matrices (or a matrix and a scalar).

Variant 1:

Performs mean-shift filtering for each point of the source image.

Variant 1:

Performs mean-shift filtering for each point of the source image.

Variant 1:

Performs a mean-shift procedure and stores information about processed points (their colors and positions) in two images.

Variant 1:

Performs a mean-shift procedure and stores information about processed points (their colors and positions) in two images.

Variant 1:

Performs a mean-shift segmentation of the source image and eliminates small segments.

Variant 1:

Performs a mean-shift segmentation of the source image and eliminates small segments.

Variant 1:

meanStdDev

Variant 1:

meanStdDev

Variant 1:

Computes the per-element minimum of two matrices (or a matrix and a scalar).

Variant 1:

Computes the per-element minimum of two matrices (or a matrix and a scalar).

Variant 1:

Finds global minimum and maximum matrix elements and returns their values.

Variant 1:

Finds global minimum and maximum matrix elements and returns their values.

Variant 1:

Finds global minimum and maximum matrix elements and returns their values with locations.

Variant 1:

Finds global minimum and maximum matrix elements and returns their values with locations.

Variant 1:

Calculates all of the moments up to the 3rd order of a rasterized shape.

Variant 1:

Calculates all of the moments up to the 3rd order of a rasterized shape.

Variant 1:

Performs a per-element multiplication of two Fourier spectrums and scales the result.

Variant 1:

Performs a per-element multiplication of two Fourier spectrums and scales the result.

Variant 1:

Performs a per-element multiplication of two Fourier spectrums.

Variant 1:

Performs a per-element multiplication of two Fourier spectrums.

Variant 1:

Computes a matrix-matrix or matrix-scalar per-element product.

Variant 1:

Computes a matrix-matrix or matrix-scalar per-element product.

Performs pure non local means denoising without any simplification, and thus it is not fast.

Performs pure non local means denoising without any simplification, and thus it is not fast.

Variant 1:

Returns the difference of two matrices.

Variant 1:

Returns the difference of two matrices.

Variant 1:

Normalizes the norm or value range of an array.

Variant 1:

Normalizes the norm or value range of an array.

Returns the number of image moments less than or equal to the largest image moment order.

Variant 1:

Computes polar angles of complex matrix elements.

Variant 1:

Computes polar angles of complex matrix elements.

Variant 1:

Converts polar coordinates into Cartesian.

Variant 1:

Converts polar coordinates into Cartesian.

Variant 1:

Raises every matrix element to a power.

Variant 1:

Raises every matrix element to a power.

printCudaDeviceInfo

printShortCudaDeviceInfo

Variant 1:

Smoothes an image and downsamples it.

Variant 1:

Smoothes an image and downsamples it.

Variant 1:

Upsamples an image and then smoothes it.

Variant 1:

Upsamples an image and then smoothes it.

Variant 1:

Computes a standard deviation of integral images.

Variant 1:

Computes a standard deviation of integral images.

Variant 1:

Reduces a matrix to a vector.

Variant 1:

Reduces a matrix to a vector.

Page-locks the memory of matrix and maps it for the device(s).

Variant 1:

Applies a generic geometrical transformation to an image.

Variant 1:

Applies a generic geometrical transformation to an image.

Reprojects a disparity image to 3D space.

Reprojects a disparity image to 3D space.

Explicitly destroys and cleans up all resources associated with the current device in the current process.

Variant 1:

Resizes an image.

Variant 1:

Resizes an image.

Variant 1:

Rotates an image around the origin (0,0) and then shifts it.

Variant 1:

Rotates an image around the origin (0,0) and then shifts it.

Variant 1:

Performs pixel by pixel right shift of an image by a constant value.

Variant 1:

Performs pixel by pixel right shift of an image by a constant value.

setBufferPoolUsage

Sets a device and initializes it for the current thread.

Variant 1:

Calculates all of the spatial moments up to the 3rd order of a rasterized shape.

Variant 1:

Calculates all of the spatial moments up to the 3rd order of a rasterized shape.

Variant 1:

split

Variant 1:

split

Variant 1:

Computes a square value of each matrix element.

Variant 1:

Computes a square value of each matrix element.

Variant 1:

Computes a squared integral image.

Variant 1:

Computes a squared integral image.

Variant 1:

Returns the squared sum of matrix elements.

Variant 1:

Returns the squared sum of matrix elements.

Variant 1:

Computes a square root of each matrix element.

Variant 1:

Computes a square root of each matrix element.

Variant 1:

Computes a matrix-matrix or matrix-scalar difference.

Variant 1:

Computes a matrix-matrix or matrix-scalar difference.

Variant 1:

Returns the sum of matrix elements.

Variant 1:

Returns the sum of matrix elements.

Variant 1:

Applies a fixed-level threshold to each array element.

Variant 1:

Applies a fixed-level threshold to each array element.

Variant 1:

Transposes a matrix.

Variant 1:

Transposes a matrix.

Unmaps the memory of matrix and makes it pageable again.

Variant 1:

warpAffine

Variant 1:

warpAffine

Variant 1:

warpPerspective

Variant 1:

warpPerspective

Bindings overload to create a Stream object from the address stored in an existing CUDA Runtime API stream pointer (cudaStream_t).

Types

@type t() :: %Evision.CUDA{ref: reference()}

Type that represents a CUDA struct.

  • ref. reference()

    The underlying erlang resource variable.

Functions

@spec abs(Keyword.t()) :: any() | {:error, String.t()}
@spec abs(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec abs(Evision.CUDA.GpuMat.t()) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes an absolute value of each matrix element.

Positional Arguments
  • src: Evision.Mat.

    Source matrix.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src.

@sa abs

Python prototype (for reference only):

abs(src[, dst[, stream]]) -> dst

Variant 2:

Computes an absolute value of each matrix element.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

@sa abs

Python prototype (for reference only):

abs(src[, dst[, stream]]) -> dst
@spec abs(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec abs(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes an absolute value of each matrix element.

Positional Arguments
  • src: Evision.Mat.

    Source matrix.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src.

@sa abs

Python prototype (for reference only):

abs(src[, dst[, stream]]) -> dst

Variant 2:

Computes an absolute value of each matrix element.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

@sa abs

Python prototype (for reference only):

abs(src[, dst[, stream]]) -> dst
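For reference, a minimal end-to-end sketch in Elixir. This is only illustrative: it assumes a CUDA-enabled Evision build, at least one visible CUDA device, and the Evision.Mat.literal/2, Evision.CUDA.GpuMat.gpuMat/1, and Evision.CUDA.GpuMat.download/1 helpers from the wider Evision API.

```elixir
# Upload a small signed-integer matrix to the GPU, take per-element
# absolute values on the device, then download the result to the CPU.
src = Evision.Mat.literal([[-1, -2], [3, -4]], :s16)
gpu_src = Evision.CUDA.GpuMat.gpuMat(src)
gpu_dst = Evision.CUDA.abs(gpu_src)
dst = Evision.CUDA.GpuMat.download(gpu_dst)
# dst should hold [[1, 2], [3, 4]]
```

Passing a CPU-resident Evision.Mat instead of a GpuMat selects Variant 1, per the specs above.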
@spec absdiff(Keyword.t()) :: any() | {:error, String.t()}

Variant 1:

Computes per-element absolute difference of two matrices (or of a matrix and scalar).

Positional Arguments
  • src1: Evision.Mat.

    First source matrix or scalar.

  • src2: Evision.Mat.

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa absdiff

Python prototype (for reference only):

absdiff(src1, src2[, dst[, stream]]) -> dst

Variant 2:

Computes per-element absolute difference of two matrices (or of a matrix and scalar).

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa absdiff

Python prototype (for reference only):

absdiff(src1, src2[, dst[, stream]]) -> dst
absdiff(src1, src2, opts)
@spec absdiff(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:stream, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec absdiff(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [{:stream, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes per-element absolute difference of two matrices (or of a matrix and scalar).

Positional Arguments
  • src1: Evision.Mat.

    First source matrix or scalar.

  • src2: Evision.Mat.

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa absdiff

Python prototype (for reference only):

absdiff(src1, src2[, dst[, stream]]) -> dst

Variant 2:

Computes per-element absolute difference of two matrices (or of a matrix and scalar).

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa absdiff

Python prototype (for reference only):

absdiff(src1, src2[, dst[, stream]]) -> dst
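A brief usage sketch. As with the other examples in this module, it assumes a CUDA-enabled Evision build with a visible device, plus the Evision.Mat.literal/2 and Evision.CUDA.GpuMat.gpuMat/1 / download/1 helpers from the wider Evision API.

```elixir
# Per-element absolute difference of two 8-bit matrices on the GPU.
a = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.literal([[10, 60]], :u8))
b = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.literal([[40, 5]], :u8))
diff = Evision.CUDA.absdiff(a, b)
# Downloading diff should yield [[30, 55]], i.e. |10 - 40| and |60 - 5|.
```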
@spec absSum(Keyword.t()) :: any() | {:error, String.t()}
@spec absSum(Evision.Mat.maybe_mat_in()) :: Evision.scalar() | {:error, String.t()}
@spec absSum(Evision.CUDA.GpuMat.t()) :: Evision.scalar() | {:error, String.t()}

Variant 1:

Returns the sum of absolute values for matrix elements.

Positional Arguments
  • src: Evision.Mat.

    Source image of any depth except for CV_64F.

Keyword Arguments
  • mask: Evision.Mat.

    Optional operation mask; it must have the same size as src and the CV_8UC1 type.

Return
  • retval: Evision.scalar()

Python prototype (for reference only):

absSum(src[, mask]) -> retval

Variant 2:

Returns the sum of absolute values for matrix elements.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image of any depth except for CV_64F.

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Optional operation mask; it must have the same size as src and the CV_8UC1 type.

Return
  • retval: Evision.scalar()

Python prototype (for reference only):

absSum(src[, mask]) -> retval
@spec absSum(Evision.Mat.maybe_mat_in(), [{:mask, term()}] | nil) ::
  Evision.scalar() | {:error, String.t()}
@spec absSum(Evision.CUDA.GpuMat.t(), [{:mask, term()}] | nil) ::
  Evision.scalar() | {:error, String.t()}

Variant 1:

Returns the sum of absolute values for matrix elements.

Positional Arguments
  • src: Evision.Mat.

    Source image of any depth except for CV_64F.

Keyword Arguments
  • mask: Evision.Mat.

    Optional operation mask; it must have the same size as src and the CV_8UC1 type.

Return
  • retval: Evision.scalar()

Python prototype (for reference only):

absSum(src[, mask]) -> retval

Variant 2:

Returns the sum of absolute values for matrix elements.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image of any depth except for CV_64F.

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Optional operation mask; it must have the same size as src and the CV_8UC1 type.

Return
  • retval: Evision.scalar()

Python prototype (for reference only):

absSum(src[, mask]) -> retval
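A small sketch of the scalar return shape. Assumptions: a CUDA-enabled Evision build with a visible device, the usual GpuMat upload helper, and that Evision.scalar() is the conventional 4-element OpenCV scalar tuple.

```elixir
# Sum of absolute values over all elements; for a single-channel input
# the sum comes back in the first position of the 4-element scalar.
gpu = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.literal([[-1.0, 2.0], [-3.0, 4.0]], :f32))
{sum, _, _, _} = Evision.CUDA.absSum(gpu)
# sum should be 10.0 (|-1| + |2| + |-3| + |4|)
```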
@spec add(Keyword.t()) :: any() | {:error, String.t()}

Variant 1:

Computes a matrix-matrix or matrix-scalar sum.

Positional Arguments
  • src1: Evision.Mat.

    First source matrix or scalar.

  • src2: Evision.Mat.

    Second source matrix or scalar. Matrix should have the same size and type as src1 .

Keyword Arguments
  • mask: Evision.Mat.

    Optional operation mask, 8-bit single channel array, that specifies elements of the destination array to be changed. The mask can be used only with single channel images.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

@sa add

Python prototype (for reference only):

add(src1, src2[, dst[, mask[, dtype[, stream]]]]) -> dst

Variant 2:

Computes a matrix-matrix or matrix-scalar sum.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar. Matrix should have the same size and type as src1 .

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Optional operation mask, 8-bit single channel array, that specifies elements of the destination array to be changed. The mask can be used only with single channel images.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

@sa add

Python prototype (for reference only):

add(src1, src2[, dst[, mask[, dtype[, stream]]]]) -> dst
@spec add(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [dtype: term(), mask: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec add(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [dtype: term(), mask: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes a matrix-matrix or matrix-scalar sum.

Positional Arguments
  • src1: Evision.Mat.

    First source matrix or scalar.

  • src2: Evision.Mat.

    Second source matrix or scalar. Matrix should have the same size and type as src1 .

Keyword Arguments
  • mask: Evision.Mat.

    Optional operation mask, 8-bit single channel array, that specifies elements of the destination array to be changed. The mask can be used only with single channel images.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

@sa add

Python prototype (for reference only):

add(src1, src2[, dst[, mask[, dtype[, stream]]]]) -> dst

Variant 2:

Computes a matrix-matrix or matrix-scalar sum.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar. Matrix should have the same size and type as src1 .

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Optional operation mask, 8-bit single channel array, that specifies elements of the destination array to be changed. The mask can be used only with single channel images.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

@sa add

Python prototype (for reference only):

add(src1, src2[, dst[, mask[, dtype[, stream]]]]) -> dst
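The dtype option is what prevents saturation when the inputs are narrow. A sketch, assuming a CUDA-enabled Evision build with a visible device and the GpuMat upload helper; the raw OpenCV depth code 2 stands for CV_16U.

```elixir
# Adding two u8 matrices would saturate 200 + 100 at 255; widening the
# destination depth via the dtype option keeps the exact sum.
a = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.literal([[100, 200]], :u8))
b = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.literal([[100, 100]], :u8))
saturated = Evision.CUDA.add(a, b)          # stays CV_8U: [[200, 255]]
widened = Evision.CUDA.add(a, b, dtype: 2)  # CV_16U: [[200, 300]]
```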
@spec addWeighted(Keyword.t()) :: any() | {:error, String.t()}
addWeighted(src1, alpha, src2, beta, gamma)
@spec addWeighted(
  Evision.Mat.maybe_mat_in(),
  number(),
  Evision.Mat.maybe_mat_in(),
  number(),
  number()
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec addWeighted(
  Evision.CUDA.GpuMat.t(),
  number(),
  Evision.CUDA.GpuMat.t(),
  number(),
  number()
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes the weighted sum of two arrays.

Positional Arguments
  • src1: Evision.Mat.

    First source array.

  • alpha: double.

    Weight for the first array elements.

  • src2: Evision.Mat.

    Second source array of the same size and channel number as src1 .

  • beta: double.

    Weight for the second array elements.

  • gamma: double.

    Scalar added to each sum.

Keyword Arguments
  • dtype: integer().

    Optional depth of the destination array. When both input arrays have the same depth, dtype can be set to -1, which will be equivalent to src1.depth().

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination array that has the same size and number of channels as the input arrays.

The function addWeighted calculates the weighted sum of two arrays as follows: dst(I) = saturate(src1(I)*alpha + src2(I)*beta + gamma), where I is a multi-dimensional index of array elements. In case of multi-channel arrays, each channel is processed independently. @sa addWeighted

Python prototype (for reference only):

addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype[, stream]]]) -> dst

Variant 2:

Computes the weighted sum of two arrays.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source array.

  • alpha: double.

    Weight for the first array elements.

  • src2: Evision.CUDA.GpuMat.t().

    Second source array of the same size and channel number as src1 .

  • beta: double.

    Weight for the second array elements.

  • gamma: double.

    Scalar added to each sum.

Keyword Arguments
  • dtype: integer().

    Optional depth of the destination array. When both input arrays have the same depth, dtype can be set to -1, which will be equivalent to src1.depth().

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination array that has the same size and number of channels as the input arrays.

The function addWeighted calculates the weighted sum of two arrays as follows: dst(I) = saturate(src1(I)*alpha + src2(I)*beta + gamma), where I is a multi-dimensional index of array elements. In case of multi-channel arrays, each channel is processed independently. @sa addWeighted

Python prototype (for reference only):

addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype[, stream]]]) -> dst
addWeighted(src1, alpha, src2, beta, gamma, opts)
@spec addWeighted(
  Evision.Mat.maybe_mat_in(),
  number(),
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  [dtype: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec addWeighted(
  Evision.CUDA.GpuMat.t(),
  number(),
  Evision.CUDA.GpuMat.t(),
  number(),
  number(),
  [dtype: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes the weighted sum of two arrays.

Positional Arguments
  • src1: Evision.Mat.

    First source array.

  • alpha: double.

    Weight for the first array elements.

  • src2: Evision.Mat.

    Second source array of the same size and channel number as src1 .

  • beta: double.

    Weight for the second array elements.

  • gamma: double.

    Scalar added to each sum.

Keyword Arguments
  • dtype: integer().

    Optional depth of the destination array. When both input arrays have the same depth, dtype can be set to -1, which will be equivalent to src1.depth().

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination array that has the same size and number of channels as the input arrays.

The function addWeighted calculates the weighted sum of two arrays as follows: dst(I) = saturate(src1(I)*alpha + src2(I)*beta + gamma), where I is a multi-dimensional index of array elements. In case of multi-channel arrays, each channel is processed independently. @sa addWeighted

Python prototype (for reference only):

addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype[, stream]]]) -> dst

Variant 2:

Computes the weighted sum of two arrays.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source array.

  • alpha: double.

    Weight for the first array elements.

  • src2: Evision.CUDA.GpuMat.t().

    Second source array of the same size and channel number as src1 .

  • beta: double.

    Weight for the second array elements.

  • gamma: double.

    Scalar added to each sum.

Keyword Arguments
  • dtype: integer().

    Optional depth of the destination array. When both input arrays have the same depth, dtype can be set to -1, which will be equivalent to src1.depth().

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination array that has the same size and number of channels as the input arrays.

The function addWeighted calculates the weighted sum of two arrays as follows: dst(I) = saturate(src1(I)*alpha + src2(I)*beta + gamma), where I is a multi-dimensional index of array elements. In case of multi-channel arrays, each channel is processed independently. @sa addWeighted

Python prototype (for reference only):

addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype[, stream]]]) -> dst
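A blending sketch that mirrors the saturate formula above. Assumptions: a CUDA-enabled Evision build with a visible device and the GpuMat upload helper; values are chosen so the arithmetic is easy to check by hand.

```elixir
# dst(I) = saturate(src1(I)*alpha + src2(I)*beta + gamma)
a = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.literal([[100.0, 200.0]], :f32))
b = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.literal([[50.0, 100.0]], :f32))
blend = Evision.CUDA.addWeighted(a, 0.7, b, 0.3, 10.0)
# Each element is 0.7*a + 0.3*b + 10, i.e. [[95.0, 180.0]]
```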
@spec alphaComp(Keyword.t()) :: any() | {:error, String.t()}
alphaComp(img1, img2, alpha_op)

Variant 1:

Composites two images using alpha opacity values contained in each image.

Positional Arguments
  • img1: Evision.Mat.

    First image. Supports CV_8UC4 , CV_16UC4 , CV_32SC4 and CV_32FC4 types.

  • img2: Evision.Mat.

    Second image. Must have the same size and the same type as img1 .

  • alpha_op: integer().

    Flag specifying the alpha-blending operation:

    • ALPHA_OVER
    • ALPHA_IN
    • ALPHA_OUT
    • ALPHA_ATOP
    • ALPHA_XOR
    • ALPHA_PLUS
    • ALPHA_OVER_PREMUL
    • ALPHA_IN_PREMUL
    • ALPHA_OUT_PREMUL
    • ALPHA_ATOP_PREMUL
    • ALPHA_XOR_PREMUL
    • ALPHA_PLUS_PREMUL
    • ALPHA_PREMUL
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image.

Note:

  • An example demonstrating the use of alphaComp can be found at opencv_source_code/samples/gpu/alpha_comp.cpp

Python prototype (for reference only):

alphaComp(img1, img2, alpha_op[, dst[, stream]]) -> dst

Variant 2:

Composites two images using alpha opacity values contained in each image.

Positional Arguments
  • img1: Evision.CUDA.GpuMat.t().

    First image. Supports CV_8UC4 , CV_16UC4 , CV_32SC4 and CV_32FC4 types.

  • img2: Evision.CUDA.GpuMat.t().

    Second image. Must have the same size and the same type as img1 .

  • alpha_op: integer().

    Flag specifying the alpha-blending operation:

    • ALPHA_OVER
    • ALPHA_IN
    • ALPHA_OUT
    • ALPHA_ATOP
    • ALPHA_XOR
    • ALPHA_PLUS
    • ALPHA_OVER_PREMUL
    • ALPHA_IN_PREMUL
    • ALPHA_OUT_PREMUL
    • ALPHA_ATOP_PREMUL
    • ALPHA_XOR_PREMUL
    • ALPHA_PLUS_PREMUL
    • ALPHA_PREMUL
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

Note:

  • An example demonstrating the use of alphaComp can be found at opencv_source_code/samples/gpu/alpha_comp.cpp

Python prototype (for reference only):

alphaComp(img1, img2, alpha_op[, dst[, stream]]) -> dst
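For intuition, ALPHA_OVER follows the Porter-Duff "over" rule. A NumPy sketch of that compositing math with straight (non-premultiplied) alpha normalized to [0, 1] — an illustration only, not the Evision API:

```python
import numpy as np

def alpha_over(rgba1, rgba2):
    # Porter-Duff "over": rgba1 composited over rgba2, alpha in [0, 1]
    c1, a1 = rgba1[..., :3], rgba1[..., 3:4]
    c2, a2 = rgba2[..., :3], rgba2[..., 3:4]
    a_out = a1 + a2 * (1.0 - a1)
    c_out = (c1 * a1 + c2 * a2 * (1.0 - a1)) / np.maximum(a_out, 1e-6)
    return np.concatenate([c_out, a_out], axis=-1)

fg = np.array([[[1.0, 0.0, 0.0, 0.5]]])  # half-transparent red
bg = np.array([[[0.0, 0.0, 1.0, 1.0]]])  # opaque blue
print(alpha_over(fg, bg))
```

The *_PREMUL flags assume the colour channels are already multiplied by alpha, which removes the division above.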
alphaComp(img1, img2, alpha_op, opts)
@spec alphaComp(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  [{:stream, term()}] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec alphaComp(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  integer(),
  [{:stream, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Composites two images using alpha opacity values contained in each image.

Positional Arguments
  • img1: Evision.Mat.

    First image. Supports CV_8UC4 , CV_16UC4 , CV_32SC4 and CV_32FC4 types.

  • img2: Evision.Mat.

    Second image. Must have the same size and the same type as img1 .

  • alpha_op: integer().

    Flag specifying the alpha-blending operation:

    • ALPHA_OVER
    • ALPHA_IN
    • ALPHA_OUT
    • ALPHA_ATOP
    • ALPHA_XOR
    • ALPHA_PLUS
    • ALPHA_OVER_PREMUL
    • ALPHA_IN_PREMUL
    • ALPHA_OUT_PREMUL
    • ALPHA_ATOP_PREMUL
    • ALPHA_XOR_PREMUL
    • ALPHA_PLUS_PREMUL
    • ALPHA_PREMUL
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image.

Note:

  • An example demonstrating the use of alphaComp can be found at opencv_source_code/samples/gpu/alpha_comp.cpp

Python prototype (for reference only):

alphaComp(img1, img2, alpha_op[, dst[, stream]]) -> dst

Variant 2:

Composites two images using alpha opacity values contained in each image.

Positional Arguments
  • img1: Evision.CUDA.GpuMat.t().

    First image. Supports CV_8UC4 , CV_16UC4 , CV_32SC4 and CV_32FC4 types.

  • img2: Evision.CUDA.GpuMat.t().

    Second image. Must have the same size and the same type as img1 .

  • alpha_op: integer().

    Flag specifying the alpha-blending operation:

    • ALPHA_OVER
    • ALPHA_IN
    • ALPHA_OUT
    • ALPHA_ATOP
    • ALPHA_XOR
    • ALPHA_PLUS
    • ALPHA_OVER_PREMUL
    • ALPHA_IN_PREMUL
    • ALPHA_OUT_PREMUL
    • ALPHA_ATOP_PREMUL
    • ALPHA_XOR_PREMUL
    • ALPHA_PLUS_PREMUL
    • ALPHA_PREMUL
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

Note:

  • An example demonstrating the use of alphaComp can be found at opencv_source_code/samples/gpu/alpha_comp.cpp

Python prototype (for reference only):

alphaComp(img1, img2, alpha_op[, dst[, stream]]) -> dst
bilateralFilter(named_args)
@spec bilateralFilter(Keyword.t()) :: any() | {:error, String.t()}
bilateralFilter(src, kernel_size, sigma_color, sigma_spatial)
@spec bilateralFilter(Evision.Mat.maybe_mat_in(), integer(), number(), number()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec bilateralFilter(Evision.CUDA.GpuMat.t(), integer(), number(), number()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs bilateral filtering of the passed image.

Positional Arguments
  • src: Evision.Mat.

Source image. Any number of channels except 2 is supported, and any depth except CV_8S, CV_32S, and CV_64F.

  • kernel_size: integer().

    Kernel window size.

  • sigma_color: float.

    Filter sigma in the color space.

  • sigma_spatial: float.

    Filter sigma in the coordinate space.

Keyword Arguments
  • borderMode: integer().

    Border type. See borderInterpolate for details. BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

Destination image.

See also: bilateralFilter

Python prototype (for reference only):

bilateralFilter(src, kernel_size, sigma_color, sigma_spatial[, dst[, borderMode[, stream]]]) -> dst

Variant 2:

Performs bilateral filtering of the passed image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

Source image. Any number of channels except 2 is supported, and any depth except CV_8S, CV_32S, and CV_64F.

  • kernel_size: integer().

    Kernel window size.

  • sigma_color: float.

    Filter sigma in the color space.

  • sigma_spatial: float.

    Filter sigma in the coordinate space.

Keyword Arguments
  • borderMode: integer().

    Border type. See borderInterpolate for details. BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

Destination image.

See also: bilateralFilter

Python prototype (for reference only):

bilateralFilter(src, kernel_size, sigma_color, sigma_spatial[, dst[, borderMode[, stream]]]) -> dst
bilateralFilter(src, kernel_size, sigma_color, sigma_spatial, opts)
@spec bilateralFilter(
  Evision.Mat.maybe_mat_in(),
  integer(),
  number(),
  number(),
  [borderMode: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec bilateralFilter(
  Evision.CUDA.GpuMat.t(),
  integer(),
  number(),
  number(),
  [borderMode: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs bilateral filtering of the passed image.

Positional Arguments
  • src: Evision.Mat.

Source image. Any number of channels except 2 is supported, and any depth except CV_8S, CV_32S, and CV_64F.

  • kernel_size: integer().

    Kernel window size.

  • sigma_color: float.

    Filter sigma in the color space.

  • sigma_spatial: float.

    Filter sigma in the coordinate space.

Keyword Arguments
  • borderMode: integer().

    Border type. See borderInterpolate for details. BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

Destination image.

See also: bilateralFilter

Python prototype (for reference only):

bilateralFilter(src, kernel_size, sigma_color, sigma_spatial[, dst[, borderMode[, stream]]]) -> dst

Variant 2:

Performs bilateral filtering of the passed image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

Source image. Any number of channels except 2 is supported, and any depth except CV_8S, CV_32S, and CV_64F.

  • kernel_size: integer().

    Kernel window size.

  • sigma_color: float.

    Filter sigma in the color space.

  • sigma_spatial: float.

    Filter sigma in the coordinate space.

Keyword Arguments
  • borderMode: integer().

    Border type. See borderInterpolate for details. BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

Destination image.

See also: bilateralFilter

Python prototype (for reference only):

bilateralFilter(src, kernel_size, sigma_color, sigma_spatial[, dst[, borderMode[, stream]]]) -> dst
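The bilateral filter replaces each pixel by a weighted average of its neighbours, where the weight combines spatial distance (sigma_spatial) and intensity difference (sigma_color), so strong edges survive the smoothing. A 1-D NumPy sketch of that weighting — an illustration of the idea, not the Evision API:

```python
import numpy as np

def bilateral_1d(signal, radius, sigma_color, sigma_spatial):
    # Each output sample is a normalized average whose weights fall off with
    # BOTH spatial distance and intensity difference, preserving sharp edges.
    out = np.empty_like(signal, dtype=np.float64)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        w_spatial = np.exp(-((idx - i) ** 2) / (2 * sigma_spatial**2))
        w_color = np.exp(-((signal[idx] - signal[i]) ** 2) / (2 * sigma_color**2))
        w = w_spatial * w_color
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

step = np.array([10.0, 10.0, 10.0, 200.0, 200.0, 200.0])
smoothed = bilateral_1d(step, radius=2, sigma_color=30.0, sigma_spatial=2.0)
print(np.round(smoothed, 1))  # the 10 -> 200 edge stays sharp
```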
@spec blendLinear(Keyword.t()) :: any() | {:error, String.t()}
blendLinear(img1, img2, weights1, weights2)

Variant 1:

Performs linear blending of two images.

Positional Arguments
  • img1: Evision.Mat.

    First image. Supports only CV_8U and CV_32F depth.

  • img2: Evision.Mat.

    Second image. Must have the same size and the same type as img1 .

  • weights1: Evision.Mat.

Weights for the first image. Must have the same size as img1. Supports only CV_32F type.

  • weights2: Evision.Mat.

Weights for the second image. Must have the same size as img2. Supports only CV_32F type.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • result: Evision.Mat.t().

    Destination image.

Python prototype (for reference only):

blendLinear(img1, img2, weights1, weights2[, result[, stream]]) -> result

Variant 2:

Performs linear blending of two images.

Positional Arguments
  • img1: Evision.CUDA.GpuMat.t().

    First image. Supports only CV_8U and CV_32F depth.

  • img2: Evision.CUDA.GpuMat.t().

    Second image. Must have the same size and the same type as img1 .

  • weights1: Evision.CUDA.GpuMat.t().

Weights for the first image. Must have the same size as img1. Supports only CV_32F type.

  • weights2: Evision.CUDA.GpuMat.t().

Weights for the second image. Must have the same size as img2. Supports only CV_32F type.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • result: Evision.CUDA.GpuMat.t().

    Destination image.

Python prototype (for reference only):

blendLinear(img1, img2, weights1, weights2[, result[, stream]]) -> result
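OpenCV's CPU blendLinear computes a per-pixel weighted average normalized by the weight sum; assuming the CUDA variant follows the same formula, a NumPy sketch (illustration only, not the Evision API):

```python
import numpy as np

def blend_linear(img1, img2, w1, w2):
    # Per-pixel weighted average, normalized by the weight sum
    # (the small epsilon avoids 0/0 where both weights are zero)
    return (img1 * w1 + img2 * w2) / (w1 + w2 + 1e-5)

img1 = np.full((2, 2), 100.0, dtype=np.float32)
img2 = np.full((2, 2), 200.0, dtype=np.float32)
w1 = np.full((2, 2), 0.75, dtype=np.float32)
w2 = np.full((2, 2), 0.25, dtype=np.float32)
print(blend_linear(img1, img2, w1, w2))  # ~125 everywhere
```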
blendLinear(img1, img2, weights1, weights2, opts)
@spec blendLinear(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:stream, term()}] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec blendLinear(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [{:stream, term()}] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs linear blending of two images.

Positional Arguments
  • img1: Evision.Mat.

    First image. Supports only CV_8U and CV_32F depth.

  • img2: Evision.Mat.

    Second image. Must have the same size and the same type as img1 .

  • weights1: Evision.Mat.

Weights for the first image. Must have the same size as img1. Supports only CV_32F type.

  • weights2: Evision.Mat.

Weights for the second image. Must have the same size as img2. Supports only CV_32F type.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • result: Evision.Mat.t().

    Destination image.

Python prototype (for reference only):

blendLinear(img1, img2, weights1, weights2[, result[, stream]]) -> result

Variant 2:

Performs linear blending of two images.

Positional Arguments
  • img1: Evision.CUDA.GpuMat.t().

    First image. Supports only CV_8U and CV_32F depth.

  • img2: Evision.CUDA.GpuMat.t().

    Second image. Must have the same size and the same type as img1 .

  • weights1: Evision.CUDA.GpuMat.t().

Weights for the first image. Must have the same size as img1. Supports only CV_32F type.

  • weights2: Evision.CUDA.GpuMat.t().

Weights for the second image. Must have the same size as img2. Supports only CV_32F type.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • result: Evision.CUDA.GpuMat.t().

    Destination image.

Python prototype (for reference only):

blendLinear(img1, img2, weights1, weights2[, result[, stream]]) -> result
buildWarpAffineMaps(named_args)
@spec buildWarpAffineMaps(Keyword.t()) :: any() | {:error, String.t()}
buildWarpAffineMaps(m, inverse, dsize)
@spec buildWarpAffineMaps(Evision.Mat.maybe_mat_in(), boolean(), {number(), number()}) ::
  {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

buildWarpAffineMaps

Positional Arguments
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().
Return
  • xmap: Evision.CUDA.GpuMat.t().
  • ymap: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

buildWarpAffineMaps(M, inverse, dsize[, xmap[, ymap[, stream]]]) -> xmap, ymap
buildWarpAffineMaps(m, inverse, dsize, opts)
@spec buildWarpAffineMaps(
  Evision.Mat.maybe_mat_in(),
  boolean(),
  {number(), number()},
  [{:stream, term()}] | nil
) :: {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

buildWarpAffineMaps

Positional Arguments
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().
Return
  • xmap: Evision.CUDA.GpuMat.t().
  • ymap: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

buildWarpAffineMaps(M, inverse, dsize[, xmap[, ymap[, stream]]]) -> xmap, ymap
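The returned xmap/ymap hold, for every destination pixel, the source coordinates that a subsequent remap will sample. A NumPy sketch of how such maps are built, assuming M already maps destination to source coordinates (the inverse-transform case) — an illustration, not the Evision API:

```python
import numpy as np

def build_affine_maps(m, dsize):
    # m is a 2x3 affine matrix taking destination (x, y) to source coordinates
    w, h = dsize
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    xmap = m[0, 0] * xs + m[0, 1] * ys + m[0, 2]
    ymap = m[1, 0] * xs + m[1, 1] * ys + m[1, 2]
    return xmap, ymap

m = np.array([[1.0, 0.0, 5.0],   # pure translation: sample the source at (+5, +3)
              [0.0, 1.0, 3.0]], dtype=np.float32)
xmap, ymap = build_affine_maps(m, dsize=(4, 2))
print(xmap[0], ymap[0])
```

When inverse is false, the function inverts M internally before building the maps.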
buildWarpPerspectiveMaps(named_args)
@spec buildWarpPerspectiveMaps(Keyword.t()) :: any() | {:error, String.t()}
buildWarpPerspectiveMaps(m, inverse, dsize)
@spec buildWarpPerspectiveMaps(
  Evision.Mat.maybe_mat_in(),
  boolean(),
  {number(), number()}
) ::
  {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

buildWarpPerspectiveMaps

Positional Arguments
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().
Return
  • xmap: Evision.CUDA.GpuMat.t().
  • ymap: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

buildWarpPerspectiveMaps(M, inverse, dsize[, xmap[, ymap[, stream]]]) -> xmap, ymap
buildWarpPerspectiveMaps(m, inverse, dsize, opts)
@spec buildWarpPerspectiveMaps(
  Evision.Mat.maybe_mat_in(),
  boolean(),
  {number(), number()},
  [{:stream, term()}] | nil
) :: {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

buildWarpPerspectiveMaps

Positional Arguments
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().
Return
  • xmap: Evision.CUDA.GpuMat.t().
  • ymap: Evision.CUDA.GpuMat.t().

Python prototype (for reference only):

buildWarpPerspectiveMaps(M, inverse, dsize[, xmap[, ymap[, stream]]]) -> xmap, ymap
@spec calcAbsSum(Keyword.t()) :: any() | {:error, String.t()}
@spec calcAbsSum(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec calcAbsSum(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

calcAbsSum

Positional Arguments
Keyword Arguments
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

calcAbsSum(src[, dst[, mask[, stream]]]) -> dst

Variant 2:

calcAbsSum

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

calcAbsSum(src[, dst[, mask[, stream]]]) -> dst
@spec calcAbsSum(Evision.Mat.maybe_mat_in(), [mask: term(), stream: term()] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec calcAbsSum(Evision.CUDA.GpuMat.t(), [mask: term(), stream: term()] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

calcAbsSum

Positional Arguments
Keyword Arguments
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

calcAbsSum(src[, dst[, mask[, stream]]]) -> dst

Variant 2:

calcAbsSum

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

calcAbsSum(src[, dst[, mask[, stream]]]) -> dst
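calcAbsSum reduces the source to the sum of absolute values of its elements, optionally restricted by a mask (nonzero mask entries are included). The equivalent CPU computation in NumPy, as an illustration only:

```python
import numpy as np

src = np.array([[-3, 4], [5, -6]], dtype=np.int32)
mask = np.array([[1, 1], [0, 1]], dtype=np.uint8)

abs_sum = np.abs(src).sum()                    # all elements: 3 + 4 + 5 + 6 = 18
abs_sum_masked = np.abs(src[mask != 0]).sum()  # mask skips the 5: 3 + 4 + 6 = 13
print(abs_sum, abs_sum_masked)
```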
@spec calcHist(Keyword.t()) :: any() | {:error, String.t()}
@spec calcHist(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec calcHist(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Calculates a histogram for a one-channel 8-bit image.

Positional Arguments
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.Mat.t().

    Destination histogram with one row, 256 columns, and the CV_32SC1 type.

Python prototype (for reference only):

calcHist(src[, hist[, stream]]) -> hist

Variant 2:

Calculates a histogram for a one-channel 8-bit image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image with CV_8UC1 type.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.CUDA.GpuMat.t().

    Destination histogram with one row, 256 columns, and the CV_32SC1 type.

Python prototype (for reference only):

calcHist(src[, hist[, stream]]) -> hist
@spec calcHist(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec calcHist(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}
@spec calcHist(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec calcHist(Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Calculates a histogram for a one-channel 8-bit image confined to the given mask.

Positional Arguments
  • src: Evision.Mat.

    Source image with CV_8UC1 type.

  • mask: Evision.Mat.

A mask image of the same size as src and of type CV_8UC1.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.Mat.t().

    Destination histogram with one row, 256 columns, and the CV_32SC1 type.

Python prototype (for reference only):

calcHist(src, mask[, hist[, stream]]) -> hist

Variant 2:

Calculates a histogram for a one-channel 8-bit image confined to the given mask.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image with CV_8UC1 type.

  • mask: Evision.CUDA.GpuMat.t().

A mask image of the same size as src and of type CV_8UC1.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.CUDA.GpuMat.t().

    Destination histogram with one row, 256 columns, and the CV_32SC1 type.

Python prototype (for reference only):

calcHist(src, mask[, hist[, stream]]) -> hist

Variant 3:

Calculates a histogram for a one-channel 8-bit image.

Positional Arguments
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.Mat.t().

    Destination histogram with one row, 256 columns, and the CV_32SC1 type.

Python prototype (for reference only):

calcHist(src[, hist[, stream]]) -> hist

Variant 4:

Calculates a histogram for a one-channel 8-bit image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image with CV_8UC1 type.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.CUDA.GpuMat.t().

    Destination histogram with one row, 256 columns, and the CV_32SC1 type.

Python prototype (for reference only):

calcHist(src[, hist[, stream]]) -> hist
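The histogram is a single row of 256 CV_32SC1 bins, one count per 8-bit intensity. The equivalent CPU computation in NumPy, as an illustration only:

```python
import numpy as np

src = np.array([[0, 0, 255], [7, 7, 7]], dtype=np.uint8)

# One count per intensity value 0..255, shaped as a 1x256 int32 row
hist = np.bincount(src.ravel(), minlength=256).reshape(1, 256).astype(np.int32)
print(hist[0, 0], hist[0, 7], hist[0, 255])
```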
calcHist(src, mask, opts)
@spec calcHist(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:stream, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec calcHist(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [{:stream, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Calculates a histogram for a one-channel 8-bit image confined to the given mask.

Positional Arguments
  • src: Evision.Mat.

    Source image with CV_8UC1 type.

  • mask: Evision.Mat.

A mask image of the same size as src and of type CV_8UC1.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.Mat.t().

    Destination histogram with one row, 256 columns, and the CV_32SC1 type.

Python prototype (for reference only):

calcHist(src, mask[, hist[, stream]]) -> hist

Variant 2:

Calculates a histogram for a one-channel 8-bit image confined to the given mask.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image with CV_8UC1 type.

  • mask: Evision.CUDA.GpuMat.t().

A mask image of the same size as src and of type CV_8UC1.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.CUDA.GpuMat.t().

    Destination histogram with one row, 256 columns, and the CV_32SC1 type.

Python prototype (for reference only):

calcHist(src, mask[, hist[, stream]]) -> hist
@spec calcNorm(Keyword.t()) :: any() | {:error, String.t()}
@spec calcNorm(Evision.Mat.maybe_mat_in(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec calcNorm(Evision.CUDA.GpuMat.t(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

calcNorm

Positional Arguments
Keyword Arguments
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

calcNorm(src, normType[, dst[, mask[, stream]]]) -> dst

Variant 2:

calcNorm

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
  • normType: integer()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

calcNorm(src, normType[, dst[, mask[, stream]]]) -> dst
calcNorm(src, normType, opts)
@spec calcNorm(
  Evision.Mat.maybe_mat_in(),
  integer(),
  [mask: term(), stream: term()] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec calcNorm(
  Evision.CUDA.GpuMat.t(),
  integer(),
  [mask: term(), stream: term()] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

calcNorm

Positional Arguments
Keyword Arguments
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

calcNorm(src, normType[, dst[, mask[, stream]]]) -> dst

Variant 2:

calcNorm

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
  • normType: integer()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

calcNorm(src, normType[, dst[, mask[, stream]]]) -> dst
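normType selects which norm is computed; for the common OpenCV norm types the CPU equivalents are as follows (NumPy sketch, not the Evision API):

```python
import numpy as np

src = np.array([[3.0, -4.0]])

norm_inf = np.abs(src).max()          # NORM_INF: max |x|
norm_l1 = np.abs(src).sum()           # NORM_L1:  sum |x|
norm_l2 = np.sqrt((src ** 2).sum())   # NORM_L2:  Euclidean norm
print(norm_inf, norm_l1, norm_l2)
```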
calcNormDiff(named_args)
@spec calcNormDiff(Keyword.t()) :: any() | {:error, String.t()}
calcNormDiff(src1, src2)
@spec calcNormDiff(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec calcNormDiff(Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

calcNormDiff

Positional Arguments
Keyword Arguments
  • normType: integer().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

calcNormDiff(src1, src2[, dst[, normType[, stream]]]) -> dst

Variant 2:

calcNormDiff

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t()
  • src2: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • normType: integer().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

calcNormDiff(src1, src2[, dst[, normType[, stream]]]) -> dst
calcNormDiff(src1, src2, opts)
@spec calcNormDiff(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [normType: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec calcNormDiff(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [normType: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

calcNormDiff

Positional Arguments
Keyword Arguments
  • normType: integer().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

calcNormDiff(src1, src2[, dst[, normType[, stream]]]) -> dst

Variant 2:

calcNormDiff

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t()
  • src2: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • normType: integer().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

calcNormDiff(src1, src2[, dst[, normType[, stream]]]) -> dst
@spec calcSqrSum(Keyword.t()) :: any() | {:error, String.t()}
@spec calcSqrSum(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec calcSqrSum(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

calcSqrSum

Positional Arguments
Keyword Arguments
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

calcSqrSum(src[, dst[, mask[, stream]]]) -> dst

Variant 2:

calcSqrSum

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

calcSqrSum(src[, dst[, mask[, stream]]]) -> dst
@spec calcSqrSum(Evision.Mat.maybe_mat_in(), [mask: term(), stream: term()] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec calcSqrSum(Evision.CUDA.GpuMat.t(), [mask: term(), stream: term()] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

calcSqrSum

Positional Arguments
Keyword Arguments
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

calcSqrSum(src[, dst[, mask[, stream]]]) -> dst

Variant 2:

calcSqrSum

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

calcSqrSum(src[, dst[, mask[, stream]]]) -> dst
@spec calcSum(Keyword.t()) :: any() | {:error, String.t()}
@spec calcSum(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec calcSum(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

calcSum

Positional Arguments
Keyword Arguments
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

calcSum(src[, dst[, mask[, stream]]]) -> dst

Variant 2:

calcSum

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

calcSum(src[, dst[, mask[, stream]]]) -> dst
@spec calcSum(Evision.Mat.maybe_mat_in(), [mask: term(), stream: term()] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec calcSum(Evision.CUDA.GpuMat.t(), [mask: term(), stream: term()] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

calcSum

Positional Arguments
Keyword Arguments
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

calcSum(src[, dst[, mask[, stream]]]) -> dst

Variant 2:

calcSum

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

calcSum(src[, dst[, mask[, stream]]]) -> dst
@spec cartToPolar(Keyword.t()) :: any() | {:error, String.t()}

Variant 1:

Converts Cartesian coordinates into polar.

Positional Arguments
  • x: Evision.Mat.

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.Mat.

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • angleInDegrees: bool.

Flag indicating whether the angles should be measured in degrees (otherwise radians).

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.Mat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

  • angle: Evision.Mat.t().

    Destination matrix of angles ( CV_32FC1 ).

See also: cartToPolar

Python prototype (for reference only):

cartToPolar(x, y[, magnitude[, angle[, angleInDegrees[, stream]]]]) -> magnitude, angle

Variant 2:

Converts Cartesian coordinates into polar.

Positional Arguments
  • x: Evision.CUDA.GpuMat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.CUDA.GpuMat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • angleInDegrees: bool.

Flag indicating whether the angles should be measured in degrees (otherwise radians).

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.CUDA.GpuMat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

  • angle: Evision.CUDA.GpuMat.t().

    Destination matrix of angles ( CV_32FC1 ).

See also: cartToPolar

Python prototype (for reference only):

cartToPolar(x, y[, magnitude[, angle[, angleInDegrees[, stream]]]]) -> magnitude, angle
@spec cartToPolar(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [angleInDegrees: term(), stream: term()] | nil
) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
@spec cartToPolar(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [angleInDegrees: term(), stream: term()] | nil
) :: {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

Variant 1:

Converts Cartesian coordinates into polar.

Positional Arguments
  • x: Evision.Mat.

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.Mat.

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • angleInDegrees: bool.

Flag indicating whether the angles should be measured in degrees (otherwise radians).

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.Mat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

  • angle: Evision.Mat.t().

    Destination matrix of angles ( CV_32FC1 ).

See also: cartToPolar

Python prototype (for reference only):

cartToPolar(x, y[, magnitude[, angle[, angleInDegrees[, stream]]]]) -> magnitude, angle

Variant 2:

Converts Cartesian coordinates into polar.

Positional Arguments
  • x: Evision.CUDA.GpuMat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.CUDA.GpuMat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • angleInDegrees: bool.

Flag indicating whether the angles should be measured in degrees (otherwise radians).

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.CUDA.GpuMat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

  • angle: Evision.CUDA.GpuMat.t().

    Destination matrix of angles ( CV_32FC1 ).

See also: cartToPolar

Python prototype (for reference only):

cartToPolar(x, y[, magnitude[, angle[, angleInDegrees[, stream]]]]) -> magnitude, angle
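The conversion computes magnitude = sqrt(x^2 + y^2) and angle = atan2(y, x) per element. A NumPy sketch of the angleInDegrees behaviour — an illustration, not the Evision API:

```python
import numpy as np

x = np.array([1.0, 0.0, -1.0], dtype=np.float32)
y = np.array([1.0, 2.0, 0.0], dtype=np.float32)

magnitude = np.hypot(x, y)                    # sqrt(x^2 + y^2)
angle = np.degrees(np.arctan2(y, x)) % 360.0  # angleInDegrees=true: [0, 360)
print(np.round(magnitude, 3), np.round(angle, 1))
```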
@spec compare(Keyword.t()) :: any() | {:error, String.t()}
compare(src1, src2, cmpop)

Variant 1:

Compares elements of two matrices (or of a matrix and scalar).

Positional Arguments
  • src1: Evision.Mat.

    First source matrix or scalar.

  • src2: Evision.Mat.

    Second source matrix or scalar.

  • cmpop: integer().

    Flag specifying the relation between the elements to be checked:

    • CMP_EQ: a(.) == b(.)
    • CMP_GT: a(.) > b(.)
    • CMP_GE: a(.) >= b(.)
    • CMP_LT: a(.) < b(.)
    • CMP_LE: a(.) <= b(.)
    • CMP_NE: a(.) != b(.)
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size as the input array(s) and type CV_8U.

@sa compare

Python prototype (for reference only):

compare(src1, src2, cmpop[, dst[, stream]]) -> dst

Variant 2:

Compares elements of two matrices (or of a matrix and scalar).

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

  • cmpop: integer().

    Flag specifying the relation between the elements to be checked:

    • CMP_EQ: a(.) == b(.)
    • CMP_GT: a(.) > b(.)
    • CMP_GE: a(.) >= b(.)
    • CMP_LT: a(.) < b(.)
    • CMP_LE: a(.) <= b(.)
    • CMP_NE: a(.) != b(.)
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size as the input array(s) and type CV_8U.

@sa compare

Python prototype (for reference only):

compare(src1, src2, cmpop[, dst[, stream]]) -> dst
compare(src1, src2, cmpop, opts)
@spec compare(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  [{:stream, term()}] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec compare(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  integer(),
  [{:stream, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Compares elements of two matrices (or of a matrix and scalar).

Positional Arguments
  • src1: Evision.Mat.

    First source matrix or scalar.

  • src2: Evision.Mat.

    Second source matrix or scalar.

  • cmpop: integer().

    Flag specifying the relation between the elements to be checked:

    • CMP_EQ: a(.) == b(.)
    • CMP_GT: a(.) > b(.)
    • CMP_GE: a(.) >= b(.)
    • CMP_LT: a(.) < b(.)
    • CMP_LE: a(.) <= b(.)
    • CMP_NE: a(.) != b(.)
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size as the input array(s) and type CV_8U.

@sa compare

Python prototype (for reference only):

compare(src1, src2, cmpop[, dst[, stream]]) -> dst

Variant 2:

Compares elements of two matrices (or of a matrix and scalar).

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

  • cmpop: integer().

    Flag specifying the relation between the elements to be checked:

    • CMP_EQ: a(.) == b(.)
    • CMP_GT: a(.) > b(.)
    • CMP_GE: a(.) >= b(.)
    • CMP_LT: a(.) < b(.)
    • CMP_LE: a(.) <= b(.)
    • CMP_NE: a(.) != b(.)
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size as the input array(s) and type CV_8U.

@sa compare

Python prototype (for reference only):

compare(src1, src2, cmpop[, dst[, stream]]) -> dst
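A minimal sketch of an element-wise comparison (assuming a CUDA-enabled build; the comparison-flag constant is assumed to come from `Evision.Constant`):

```elixir
src1 = Evision.Mat.literal([[1, 5, 3]], :u8)
src2 = Evision.Mat.literal([[2, 2, 3]], :u8)

# dst is a CV_8U matrix: 255 where src1 > src2 holds, 0 elsewhere.
dst = Evision.CUDA.compare(src1, src2, Evision.Constant.cv_CMP_GT())
```

An optional `stream:` keyword selects the asynchronous version.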
connectedComponents(named_args)
@spec connectedComponents(Keyword.t()) :: any() | {:error, String.t()}
@spec connectedComponents(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec connectedComponents(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

connectedComponents

Positional Arguments
  • image: Evision.Mat.

    The 8-bit single-channel image to be labeled.

Keyword Arguments
  • connectivity: integer().

    Connectivity to use for the labeling procedure. Only 8-way connectivity (8) is currently supported.

  • ltype: integer().

    Output image label type. Currently CV_32S is supported.

Return
  • labels: Evision.Mat.t().

    Destination labeled image.

Has overloading in C++

Python prototype (for reference only):

connectedComponents(image[, labels[, connectivity[, ltype]]]) -> labels

Variant 2:

connectedComponents

Positional Arguments
  • image: Evision.CUDA.GpuMat.t().

    The 8-bit single-channel image to be labeled.

Keyword Arguments
  • connectivity: integer().

    Connectivity to use for the labeling procedure. Only 8-way connectivity (8) is currently supported.

  • ltype: integer().

    Output image label type. Currently CV_32S is supported.

Return
  • labels: Evision.CUDA.GpuMat.t().

    Destination labeled image.

Has overloading in C++

Python prototype (for reference only):

connectedComponents(image[, labels[, connectivity[, ltype]]]) -> labels
connectedComponents(image, opts)
@spec connectedComponents(
  Evision.Mat.maybe_mat_in(),
  [connectivity: term(), ltype: term()] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec connectedComponents(
  Evision.CUDA.GpuMat.t(),
  [connectivity: term(), ltype: term()] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

connectedComponents

Positional Arguments
  • image: Evision.Mat.

    The 8-bit single-channel image to be labeled.

Keyword Arguments
  • connectivity: integer().

    Connectivity to use for the labeling procedure. Only 8-way connectivity (8) is currently supported.

  • ltype: integer().

    Output image label type. Currently CV_32S is supported.

Return
  • labels: Evision.Mat.t().

    Destination labeled image.

Has overloading in C++

Python prototype (for reference only):

connectedComponents(image[, labels[, connectivity[, ltype]]]) -> labels

Variant 2:

connectedComponents

Positional Arguments
  • image: Evision.CUDA.GpuMat.t().

    The 8-bit single-channel image to be labeled.

Keyword Arguments
  • connectivity: integer().

    Connectivity to use for the labeling procedure. Only 8-way connectivity (8) is currently supported.

  • ltype: integer().

    Output image label type. Currently CV_32S is supported.

Return
  • labels: Evision.CUDA.GpuMat.t().

    Destination labeled image.

Has overloading in C++

Python prototype (for reference only):

connectedComponents(image[, labels[, connectivity[, ltype]]]) -> labels
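A short labeling sketch (assuming a CUDA-enabled build; the defaults already match the only supported options, 8-way connectivity and CV_32S labels):

```elixir
# A binary 8-bit image: non-zero pixels are foreground.
image =
  Evision.Mat.literal(
    [[255, 255, 0],
     [0, 0, 0],
     [0, 0, 255]],
    :u8
  )

# labels is a CV_32S image; each connected component gets a unique
# (not necessarily sequential) integer label.
labels = Evision.CUDA.connectedComponents(image)
```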
connectedComponentsWithAlgorithm(named_args)
@spec connectedComponentsWithAlgorithm(Keyword.t()) :: any() | {:error, String.t()}
connectedComponentsWithAlgorithm(image, connectivity, ltype, ccltype)
@spec connectedComponentsWithAlgorithm(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  Evision.ConnectedComponentsAlgorithmsTypes.enum()
) :: Evision.Mat.t() | {:error, String.t()}
@spec connectedComponentsWithAlgorithm(
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  Evision.ConnectedComponentsAlgorithmsTypes.enum()
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes the Connected Components Labeled image of a binary image.

Positional Arguments
  • image: Evision.Mat.

    The 8-bit single-channel image to be labeled.

  • connectivity: integer().

    Connectivity to use for the labeling procedure. Only 8-way connectivity (8) is currently supported.

  • ltype: integer().

    Output image label type. Currently CV_32S is supported.

  • ccltype: cuda_ConnectedComponentsAlgorithmsTypes.

    Connected components algorithm type (see ConnectedComponentsAlgorithmsTypes).

Return
  • labels: Evision.Mat.t().

    Destination labeled image.

The function takes as input a binary image and performs Connected Components Labeling. The output is an image where each Connected Component is assigned a unique label (integer value). ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently only BKE [Allegretti2019] is supported, see ConnectedComponentsAlgorithmsTypes for details. Note that labels in the output are not required to be sequential.

Note: A sample program demonstrating Connected Components Labeling in CUDA can be found at opencv_contrib_source_code/modules/cudaimgproc/samples/connected_components.cpp

Python prototype (for reference only):

connectedComponentsWithAlgorithm(image, connectivity, ltype, ccltype[, labels]) -> labels

Variant 2:

Computes the Connected Components Labeled image of a binary image.

Positional Arguments
  • image: Evision.CUDA.GpuMat.t().

    The 8-bit single-channel image to be labeled.

  • connectivity: integer().

    Connectivity to use for the labeling procedure. Only 8-way connectivity (8) is currently supported.

  • ltype: integer().

    Output image label type. Currently CV_32S is supported.

  • ccltype: cuda_ConnectedComponentsAlgorithmsTypes.

    Connected components algorithm type (see ConnectedComponentsAlgorithmsTypes).

Return
  • labels: Evision.CUDA.GpuMat.t().

    Destination labeled image.

The function takes as input a binary image and performs Connected Components Labeling. The output is an image where each Connected Component is assigned a unique label (integer value). ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently only BKE [Allegretti2019] is supported, see ConnectedComponentsAlgorithmsTypes for details. Note that labels in the output are not required to be sequential.

Note: A sample program demonstrating Connected Components Labeling in CUDA can be found at opencv_contrib_source_code/modules/cudaimgproc/samples/connected_components.cpp

Python prototype (for reference only):

connectedComponentsWithAlgorithm(image, connectivity, ltype, ccltype[, labels]) -> labels
connectedComponentsWithAlgorithm(image, connectivity, ltype, ccltype, opts)
@spec connectedComponentsWithAlgorithm(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  Evision.ConnectedComponentsAlgorithmsTypes.enum(),
  [{atom(), term()}, ...] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec connectedComponentsWithAlgorithm(
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  Evision.ConnectedComponentsAlgorithmsTypes.enum(),
  [{atom(), term()}, ...] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes the Connected Components Labeled image of a binary image.

Positional Arguments
  • image: Evision.Mat.

    The 8-bit single-channel image to be labeled.

  • connectivity: integer().

    Connectivity to use for the labeling procedure. Only 8-way connectivity (8) is currently supported.

  • ltype: integer().

    Output image label type. Currently CV_32S is supported.

  • ccltype: cuda_ConnectedComponentsAlgorithmsTypes.

    Connected components algorithm type (see ConnectedComponentsAlgorithmsTypes).

Return
  • labels: Evision.Mat.t().

    Destination labeled image.

The function takes as input a binary image and performs Connected Components Labeling. The output is an image where each Connected Component is assigned a unique label (integer value). ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently only BKE [Allegretti2019] is supported, see ConnectedComponentsAlgorithmsTypes for details. Note that labels in the output are not required to be sequential.

Note: A sample program demonstrating Connected Components Labeling in CUDA can be found at opencv_contrib_source_code/modules/cudaimgproc/samples/connected_components.cpp

Python prototype (for reference only):

connectedComponentsWithAlgorithm(image, connectivity, ltype, ccltype[, labels]) -> labels

Variant 2:

Computes the Connected Components Labeled image of a binary image.

Positional Arguments
  • image: Evision.CUDA.GpuMat.t().

    The 8-bit single-channel image to be labeled.

  • connectivity: integer().

    Connectivity to use for the labeling procedure. Only 8-way connectivity (8) is currently supported.

  • ltype: integer().

    Output image label type. Currently CV_32S is supported.

  • ccltype: cuda_ConnectedComponentsAlgorithmsTypes.

    Connected components algorithm type (see ConnectedComponentsAlgorithmsTypes).

Return
  • labels: Evision.CUDA.GpuMat.t().

    Destination labeled image.

The function takes as input a binary image and performs Connected Components Labeling. The output is an image where each Connected Component is assigned a unique label (integer value). ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently only BKE [Allegretti2019] is supported, see ConnectedComponentsAlgorithmsTypes for details. Note that labels in the output are not required to be sequential.

Note: A sample program demonstrating Connected Components Labeling in CUDA can be found at opencv_contrib_source_code/modules/cudaimgproc/samples/connected_components.cpp

Python prototype (for reference only):

connectedComponentsWithAlgorithm(image, connectivity, ltype, ccltype[, labels]) -> labels
convertSpatialMoments(named_args)
@spec convertSpatialMoments(Keyword.t()) :: any() | {:error, String.t()}
convertSpatialMoments(spatialMoments, order, momentsType)
@spec convertSpatialMoments(
  Evision.Mat.maybe_mat_in(),
  Evision.CUDA.MomentsOrder.t(),
  integer()
) ::
  map() | {:error, String.t()}

Converts the spatial image moments returned from cuda::spatialMoments to cv::Moments.

Positional Arguments
  • spatialMoments: Evision.Mat.

    Spatial moments returned from cuda::spatialMoments.

  • order: MomentsOrder.

    Order used when calculating image moments with cuda::spatialMoments.

  • momentsType: integer().

    Precision used when calculating image moments with cuda::spatialMoments.

Return
  • retval: Moments

@returns cv::Moments. @sa cuda::spatialMoments, cuda::moments, cuda::convertSpatialMoments, cuda::numMoments, cuda::MomentsOrder

Python prototype (for reference only):

convertSpatialMoments(spatialMoments, order, momentsType) -> retval
copyMakeBorder(named_args)
@spec copyMakeBorder(Keyword.t()) :: any() | {:error, String.t()}
copyMakeBorder(src, top, bottom, left, right, borderType)
@spec copyMakeBorder(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  integer(),
  integer(),
  integer()
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec copyMakeBorder(
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  integer(),
  integer(),
  integer()
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Forms a border around an image.

Positional Arguments
  • src: Evision.Mat.

    Source image. CV_8UC1 , CV_8UC4 , CV_32SC1 , and CV_32FC1 types are supported.

  • top: integer().

    Number of top pixels

  • bottom: integer().

    Number of bottom pixels

  • left: integer().

    Number of left pixels

  • right: integer().

    Number of pixels in each direction from the source image rectangle to extrapolate. For example: top=1, bottom=1, left=1, right=1 mean that a 1-pixel-wide border needs to be built.

  • borderType: integer().

    Border type. See borderInterpolate for details. BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

Keyword Arguments
  • value: Evision.scalar().

    Border value.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image with the same type as src. The size is Size(src.cols+left+right, src.rows+top+bottom) .

Python prototype (for reference only):

copyMakeBorder(src, top, bottom, left, right, borderType[, dst[, value[, stream]]]) -> dst

Variant 2:

Forms a border around an image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. CV_8UC1 , CV_8UC4 , CV_32SC1 , and CV_32FC1 types are supported.

  • top: integer().

    Number of top pixels

  • bottom: integer().

    Number of bottom pixels

  • left: integer().

    Number of left pixels

  • right: integer().

    Number of pixels in each direction from the source image rectangle to extrapolate. For example: top=1, bottom=1, left=1, right=1 mean that a 1-pixel-wide border needs to be built.

  • borderType: integer().

    Border type. See borderInterpolate for details. BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

Keyword Arguments
  • value: Evision.scalar().

    Border value.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image with the same type as src. The size is Size(src.cols+left+right, src.rows+top+bottom) .

Python prototype (for reference only):

copyMakeBorder(src, top, bottom, left, right, borderType[, dst[, value[, stream]]]) -> dst
copyMakeBorder(src, top, bottom, left, right, borderType, opts)
@spec copyMakeBorder(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  integer(),
  integer(),
  integer(),
  [stream: term(), value: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec copyMakeBorder(
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  integer(),
  integer(),
  integer(),
  [stream: term(), value: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Forms a border around an image.

Positional Arguments
  • src: Evision.Mat.

    Source image. CV_8UC1 , CV_8UC4 , CV_32SC1 , and CV_32FC1 types are supported.

  • top: integer().

    Number of top pixels

  • bottom: integer().

    Number of bottom pixels

  • left: integer().

    Number of left pixels

  • right: integer().

    Number of pixels in each direction from the source image rectangle to extrapolate. For example: top=1, bottom=1, left=1, right=1 mean that a 1-pixel-wide border needs to be built.

  • borderType: integer().

    Border type. See borderInterpolate for details. BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

Keyword Arguments
  • value: Evision.scalar().

    Border value.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image with the same type as src. The size is Size(src.cols+left+right, src.rows+top+bottom) .

Python prototype (for reference only):

copyMakeBorder(src, top, bottom, left, right, borderType[, dst[, value[, stream]]]) -> dst

Variant 2:

Forms a border around an image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. CV_8UC1 , CV_8UC4 , CV_32SC1 , and CV_32FC1 types are supported.

  • top: integer().

    Number of top pixels

  • bottom: integer().

    Number of bottom pixels

  • left: integer().

    Number of left pixels

  • right: integer().

    Number of pixels in each direction from the source image rectangle to extrapolate. For example: top=1, bottom=1, left=1, right=1 mean that a 1-pixel-wide border needs to be built.

  • borderType: integer().

    Border type. See borderInterpolate for details. BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

Keyword Arguments
  • value: Evision.scalar().

    Border value.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image with the same type as src. The size is Size(src.cols+left+right, src.rows+top+bottom) .

Python prototype (for reference only):

copyMakeBorder(src, top, bottom, left, right, borderType[, dst[, value[, stream]]]) -> dst
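A padding sketch (assuming a CUDA-enabled build; the `cv_BORDER_CONSTANT` constant is assumed to come from `Evision.Constant`):

```elixir
src = Evision.Mat.literal([[10, 20], [30, 40]], :u8)

# Pad by one pixel on every side with a constant zero border.
# The result is Size(cols + left + right, rows + top + bottom), i.e. 4x4.
dst =
  Evision.CUDA.copyMakeBorder(src, 1, 1, 1, 1,
    Evision.Constant.cv_BORDER_CONSTANT(),
    value: {0, 0, 0, 0}
  )
```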
countNonZero(named_args)
@spec countNonZero(Keyword.t()) :: any() | {:error, String.t()}
@spec countNonZero(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec countNonZero(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

countNonZero

Positional Arguments
  • src: Evision.Mat.
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

countNonZero(src[, dst[, stream]]) -> dst

Variant 2:

countNonZero

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

countNonZero(src[, dst[, stream]]) -> dst
@spec countNonZero(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec countNonZero(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

countNonZero

Positional Arguments
  • src: Evision.Mat.
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

countNonZero(src[, dst[, stream]]) -> dst

Variant 2:

countNonZero

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

countNonZero(src[, dst[, stream]]) -> dst
createBackgroundSubtractorMOG2()
@spec createBackgroundSubtractorMOG2() ::
  Evision.CUDA.BackgroundSubtractorMOG2.t() | {:error, String.t()}

Creates a MOG2 background subtractor.

Keyword Arguments
  • history: integer().

    Length of the history.

  • varThreshold: double.

    Threshold on the squared Mahalanobis distance between the pixel and the model to decide whether a pixel is well described by the background model. This parameter does not affect the background update.

  • detectShadows: bool.

    If true, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to false.

Return
  • retval: Evision.CUDA.BackgroundSubtractorMOG2.t()

Python prototype (for reference only):

createBackgroundSubtractorMOG2([, history[, varThreshold[, detectShadows]]]) -> retval
createBackgroundSubtractorMOG2(named_args)
@spec createBackgroundSubtractorMOG2(Keyword.t()) :: any() | {:error, String.t()}
@spec createBackgroundSubtractorMOG2(
  [detectShadows: term(), history: term(), varThreshold: term()]
  | nil
) :: Evision.CUDA.BackgroundSubtractorMOG2.t() | {:error, String.t()}

Creates a MOG2 background subtractor.

Keyword Arguments
  • history: integer().

    Length of the history.

  • varThreshold: double.

    Threshold on the squared Mahalanobis distance between the pixel and the model to decide whether a pixel is well described by the background model. This parameter does not affect the background update.

  • detectShadows: bool.

    If true, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to false.

Return
  • retval: Evision.CUDA.BackgroundSubtractorMOG2.t()

Python prototype (for reference only):

createBackgroundSubtractorMOG2([, history[, varThreshold[, detectShadows]]]) -> retval
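A construction sketch (assuming a CUDA-enabled build; the per-frame `apply` call shown in the comment is an assumption about the generated binding and may differ in arity):

```elixir
bg = Evision.CUDA.createBackgroundSubtractorMOG2(history: 300, detectShadows: false)

# For each incoming video frame, the subtractor is expected to yield a
# foreground mask; a learning rate of -1 usually means "choose automatically":
# fg_mask = Evision.CUDA.BackgroundSubtractorMOG2.apply(bg, frame, -1)
```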
createBackgroundSubtractorMOG()
@spec createBackgroundSubtractorMOG() ::
  Evision.CUDA.BackgroundSubtractorMOG.t() | {:error, String.t()}

Creates a mixture-of-Gaussians background subtractor.

Keyword Arguments
  • history: integer().

    Length of the history.

  • nmixtures: integer().

    Number of Gaussian mixtures.

  • backgroundRatio: double.

    Background ratio.

  • noiseSigma: double.

    Noise strength (standard deviation of the brightness or of each color channel). A value of 0 selects an automatic value.

Return
  • retval: Evision.CUDA.BackgroundSubtractorMOG.t()

Python prototype (for reference only):

createBackgroundSubtractorMOG([, history[, nmixtures[, backgroundRatio[, noiseSigma]]]]) -> retval
createBackgroundSubtractorMOG(named_args)
@spec createBackgroundSubtractorMOG(Keyword.t()) :: any() | {:error, String.t()}
@spec createBackgroundSubtractorMOG(
  [
    backgroundRatio: term(),
    history: term(),
    nmixtures: term(),
    noiseSigma: term()
  ]
  | nil
) :: Evision.CUDA.BackgroundSubtractorMOG.t() | {:error, String.t()}

Creates a mixture-of-Gaussians background subtractor.

Keyword Arguments
  • history: integer().

    Length of the history.

  • nmixtures: integer().

    Number of Gaussian mixtures.

  • backgroundRatio: double.

    Background ratio.

  • noiseSigma: double.

    Noise strength (standard deviation of the brightness or of each color channel). A value of 0 selects an automatic value.

Return
  • retval: Evision.CUDA.BackgroundSubtractorMOG.t()

Python prototype (for reference only):

createBackgroundSubtractorMOG([, history[, nmixtures[, backgroundRatio[, noiseSigma]]]]) -> retval
createBoxFilter(named_args)
@spec createBoxFilter(Keyword.t()) :: any() | {:error, String.t()}
createBoxFilter(srcType, dstType, ksize)
@spec createBoxFilter(integer(), integer(), {number(), number()}) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a normalized 2D box filter.

Positional Arguments
  • srcType: integer().

    Input image type. Only CV_8UC1, CV_8UC4 and CV_32FC1 are supported for now.

  • dstType: integer().

    Output image type. Only the same type as src is supported for now.

  • ksize: Size.

    Kernel size.

Keyword Arguments
  • anchor: Point.

    Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

@sa boxFilter

Python prototype (for reference only):

createBoxFilter(srcType, dstType, ksize[, anchor[, borderMode[, borderVal]]]) -> retval
createBoxFilter(srcType, dstType, ksize, opts)
@spec createBoxFilter(
  integer(),
  integer(),
  {number(), number()},
  [anchor: term(), borderMode: term(), borderVal: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a normalized 2D box filter.

Positional Arguments
  • srcType: integer().

    Input image type. Only CV_8UC1, CV_8UC4 and CV_32FC1 are supported for now.

  • dstType: integer().

    Output image type. Only the same type as src is supported for now.

  • ksize: Size.

    Kernel size.

Keyword Arguments
  • anchor: Point.

    Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

@sa boxFilter

Python prototype (for reference only):

createBoxFilter(srcType, dstType, ksize[, anchor[, borderMode[, borderVal]]]) -> retval
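A filter-object sketch (assuming a CUDA-enabled build; the `cv_8UC1` constant and the `Evision.CUDA.Filter.apply/2` call are assumptions about the generated bindings):

```elixir
# A 3x3 normalized box (mean) filter over 8-bit single-channel images.
box =
  Evision.CUDA.createBoxFilter(
    Evision.Constant.cv_8UC1(),
    Evision.Constant.cv_8UC1(),
    {3, 3}
  )

# Filters operate on device matrices, so upload the source first.
gpu_src = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.literal([[0, 255, 0]], :u8))
blurred = Evision.CUDA.Filter.apply(box, gpu_src)
```

Creating the filter once and reusing it across frames avoids repeated kernel setup, which is the point of the filter-object API.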
createBoxMaxFilter(named_args)
@spec createBoxMaxFilter(Keyword.t()) :: any() | {:error, String.t()}
createBoxMaxFilter(srcType, ksize)
@spec createBoxMaxFilter(
  integer(),
  {number(), number()}
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates the maximum filter.

Positional Arguments
  • srcType: integer().

    Input/output image type. Only CV_8UC1 and CV_8UC4 are supported.

  • ksize: Size.

    Kernel size.

Keyword Arguments
  • anchor: Point.

    Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

Python prototype (for reference only):

createBoxMaxFilter(srcType, ksize[, anchor[, borderMode[, borderVal]]]) -> retval
createBoxMaxFilter(srcType, ksize, opts)
@spec createBoxMaxFilter(
  integer(),
  {number(), number()},
  [anchor: term(), borderMode: term(), borderVal: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates the maximum filter.

Positional Arguments
  • srcType: integer().

    Input/output image type. Only CV_8UC1 and CV_8UC4 are supported.

  • ksize: Size.

    Kernel size.

Keyword Arguments
  • anchor: Point.

    Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

Python prototype (for reference only):

createBoxMaxFilter(srcType, ksize[, anchor[, borderMode[, borderVal]]]) -> retval
createBoxMinFilter(named_args)
@spec createBoxMinFilter(Keyword.t()) :: any() | {:error, String.t()}
createBoxMinFilter(srcType, ksize)
@spec createBoxMinFilter(
  integer(),
  {number(), number()}
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates the minimum filter.

Positional Arguments
  • srcType: integer().

    Input/output image type. Only CV_8UC1 and CV_8UC4 are supported.

  • ksize: Size.

    Kernel size.

Keyword Arguments
  • anchor: Point.

    Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

Python prototype (for reference only):

createBoxMinFilter(srcType, ksize[, anchor[, borderMode[, borderVal]]]) -> retval
createBoxMinFilter(srcType, ksize, opts)
@spec createBoxMinFilter(
  integer(),
  {number(), number()},
  [anchor: term(), borderMode: term(), borderVal: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates the minimum filter.

Positional Arguments
  • srcType: integer().

    Input/output image type. Only CV_8UC1 and CV_8UC4 are supported.

  • ksize: Size.

    Kernel size.

Keyword Arguments
  • anchor: Point.

    Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

Python prototype (for reference only):

createBoxMinFilter(srcType, ksize[, anchor[, borderMode[, borderVal]]]) -> retval
createCannyEdgeDetector(named_args)
@spec createCannyEdgeDetector(Keyword.t()) :: any() | {:error, String.t()}
createCannyEdgeDetector(low_thresh, high_thresh)
@spec createCannyEdgeDetector(number(), number()) ::
  Evision.CUDA.CannyEdgeDetector.t() | {:error, String.t()}

Creates implementation for cuda::CannyEdgeDetector .

Positional Arguments
  • low_thresh: double.

    First threshold for the hysteresis procedure.

  • high_thresh: double.

    Second threshold for the hysteresis procedure.

Keyword Arguments
  • apperture_size: integer().

    Aperture size for the Sobel operator.

  • l2gradient: bool.

    Flag indicating whether the more accurate L2 norm = sqrt((dI/dx)^2 + (dI/dy)^2) should be used to compute the image gradient magnitude (L2gradient=true), or whether the faster default L1 norm = |dI/dx| + |dI/dy| is sufficient (L2gradient=false).

Return
  • retval: CannyEdgeDetector

Python prototype (for reference only):

createCannyEdgeDetector(low_thresh, high_thresh[, apperture_size[, L2gradient]]) -> retval
createCannyEdgeDetector(low_thresh, high_thresh, opts)
@spec createCannyEdgeDetector(
  number(),
  number(),
  [apperture_size: term(), l2gradient: term()] | nil
) ::
  Evision.CUDA.CannyEdgeDetector.t() | {:error, String.t()}

Creates implementation for cuda::CannyEdgeDetector .

Positional Arguments
  • low_thresh: double.

    First threshold for the hysteresis procedure.

  • high_thresh: double.

    Second threshold for the hysteresis procedure.

Keyword Arguments
  • apperture_size: integer().

    Aperture size for the Sobel operator.

  • l2gradient: bool.

    Flag indicating whether the more accurate L2 norm = sqrt((dI/dx)^2 + (dI/dy)^2) should be used to compute the image gradient magnitude (L2gradient=true), or whether the faster default L1 norm = |dI/dx| + |dI/dy| is sufficient (L2gradient=false).

Return
  • retval: CannyEdgeDetector

Python prototype (for reference only):

createCannyEdgeDetector(low_thresh, high_thresh[, apperture_size[, L2gradient]]) -> retval
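A hedged usage sketch (assumes a CUDA device; the `detect/2` call mirrors the underlying `cuda::CannyEdgeDetector::detect` and its exact Evision arity may differ):

```elixir
# Hysteresis thresholds 50/150, with the more accurate L2 gradient norm.
canny = Evision.CUDA.createCannyEdgeDetector(50.0, 150.0, l2gradient: true)
# gray_gpu is assumed to be a CV_8UC1 Evision.CUDA.GpuMat.
edges = Evision.CUDA.CannyEdgeDetector.detect(canny, gray_gpu)
```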
@spec createCLAHE() :: Evision.CUDA.CLAHE.t() | {:error, String.t()}

Creates implementation for cuda::CLAHE .

Keyword Arguments
  • clipLimit: double.

    Threshold for contrast limiting.

  • tileGridSize: Size.

    Size of the grid for histogram equalization. The input image will be divided into equally sized rectangular tiles; tileGridSize defines the number of tiles per row and column.

Return
  • retval: Evision.CUDA.CLAHE.t()

Python prototype (for reference only):

createCLAHE([, clipLimit[, tileGridSize]]) -> retval
@spec createCLAHE(Keyword.t()) :: any() | {:error, String.t()}
@spec createCLAHE([clipLimit: term(), tileGridSize: term()] | nil) ::
  Evision.CUDA.CLAHE.t() | {:error, String.t()}

Creates implementation for cuda::CLAHE .

Keyword Arguments
  • clipLimit: double.

    Threshold for contrast limiting.

  • tileGridSize: Size.

    Size of the grid for histogram equalization. The input image will be divided into equally sized rectangular tiles; tileGridSize defines the number of tiles per row and column.

Return
  • retval: Evision.CUDA.CLAHE.t()

Python prototype (for reference only):

createCLAHE([, clipLimit[, tileGridSize]]) -> retval
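A sketch for orientation, under the assumption that (as in OpenCV's Python bindings) the CUDA CLAHE `apply` takes an explicit stream argument; `Evision.CUDA.Stream.null/0` is an assumed name for the default stream and may differ in Evision:

```elixir
# Contrast-limited adaptive histogram equalization on an 8-bit GPU image.
clahe = Evision.CUDA.createCLAHE(clipLimit: 2.0, tileGridSize: {8, 8})
equalized = Evision.CUDA.CLAHE.apply(clahe, gray_gpu, Evision.CUDA.Stream.null())
```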
createColumnSumFilter(named_args)
@spec createColumnSumFilter(Keyword.t()) :: any() | {:error, String.t()}
createColumnSumFilter(srcType, dstType, ksize)
@spec createColumnSumFilter(integer(), integer(), integer()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a vertical 1D box filter.

Positional Arguments
  • srcType: integer().

    Input image type. Only CV_8UC1 type is supported for now.

  • dstType: integer().

    Output image type. Only CV_32FC1 type is supported for now.

  • ksize: integer().

    Kernel size.

Keyword Arguments
  • anchor: integer().

    Anchor point. The default value (-1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

Python prototype (for reference only):

createColumnSumFilter(srcType, dstType, ksize[, anchor[, borderMode[, borderVal]]]) -> retval
createColumnSumFilter(srcType, dstType, ksize, opts)
@spec createColumnSumFilter(
  integer(),
  integer(),
  integer(),
  [anchor: term(), borderMode: term(), borderVal: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a vertical 1D box filter.

Positional Arguments
  • srcType: integer().

    Input image type. Only CV_8UC1 type is supported for now.

  • dstType: integer().

    Output image type. Only CV_32FC1 type is supported for now.

  • ksize: integer().

    Kernel size.

Keyword Arguments
  • anchor: integer().

    Anchor point. The default value (-1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

Python prototype (for reference only):

createColumnSumFilter(srcType, dstType, ksize[, anchor[, borderMode[, borderVal]]]) -> retval
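A brief sketch under the same assumptions as above (CUDA device, Evision's generated constant helpers and `Evision.CUDA.Filter.apply/2`):

```elixir
# Each CV_32FC1 output pixel holds the sum of 5 vertically adjacent
# CV_8UC1 input pixels from the same column.
colsum =
  Evision.CUDA.createColumnSumFilter(
    Evision.Constant.cv_8UC1(),
    Evision.Constant.cv_32FC1(),
    5
  )
sums = Evision.CUDA.Filter.apply(colsum, gray_gpu)
```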
createContinuous(named_args)
@spec createContinuous(Keyword.t()) :: any() | {:error, String.t()}
createContinuous(rows, cols, type)
@spec createContinuous(integer(), integer(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

Creates a continuous matrix.

Positional Arguments
  • rows: integer().

    Row count.

  • cols: integer().

    Column count.

  • type: integer().

    Type of the matrix.

Return
  • arr: Evision.Mat.t().

    Destination matrix. This parameter changes only if it has a proper type and area (rows × cols).

A matrix is called continuous if its elements are stored contiguously, that is, without gaps at the end of each row.

Python prototype (for reference only):

createContinuous(rows, cols, type[, arr]) -> arr
createContinuous(rows, cols, type, opts)
@spec createContinuous(integer(), integer(), integer(), [{atom(), term()}, ...] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

Creates a continuous matrix.

Positional Arguments
  • rows: integer().

    Row count.

  • cols: integer().

    Column count.

  • type: integer().

    Type of the matrix.

Return
  • arr: Evision.Mat.t().

    Destination matrix. This parameter changes only if it has a proper type and area (rows × cols).

A matrix is called continuous if its elements are stored contiguously, that is, without gaps at the end of each row.

Python prototype (for reference only):

createContinuous(rows, cols, type[, arr]) -> arr
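The distinction matters when downstream code expects one contiguous buffer; a minimal sketch (assuming a CUDA device and Evision's constant helpers):

```elixir
# A 480x640 single-channel float matrix allocated without row padding,
# so its 480 * 640 elements form one contiguous block in memory.
mat = Evision.CUDA.createContinuous(480, 640, Evision.Constant.cv_32FC1())
```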
@spec createConvolution() :: Evision.CUDA.Convolution.t() | {:error, String.t()}

Creates implementation for cuda::Convolution .

Keyword Arguments
  • user_block_size: Size.

    Block size. If you leave the default value Size(0,0), the block size is estimated automatically (optimized for speed). By varying user_block_size you can reduce memory requirements at the cost of speed.

Return
  • retval: Convolution

Python prototype (for reference only):

createConvolution([, user_block_size]) -> retval
createConvolution(named_args)
@spec createConvolution(Keyword.t()) :: any() | {:error, String.t()}
@spec createConvolution([{:user_block_size, term()}] | nil) ::
  Evision.CUDA.Convolution.t() | {:error, String.t()}

Creates implementation for cuda::Convolution .

Keyword Arguments
  • user_block_size: Size.

    Block size. If you leave the default value Size(0,0), the block size is estimated automatically (optimized for speed). By varying user_block_size you can reduce memory requirements at the cost of speed.

Return
  • retval: Convolution

Python prototype (for reference only):

createConvolution([, user_block_size]) -> retval
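A hedged sketch; the `convolve` call mirrors `cuda::Convolution::convolve`, where the `ccorr` option requests cross-correlation instead of true convolution (exact Evision keyword names are assumptions):

```elixir
# Non-separable 2D convolution of two CV_32FC1 GPU images.
conv = Evision.CUDA.createConvolution()
result = Evision.CUDA.Convolution.convolve(conv, image_gpu, templ_gpu, ccorr: true)
```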
createDerivFilter(named_args)
@spec createDerivFilter(Keyword.t()) :: any() | {:error, String.t()}
createDerivFilter(srcType, dstType, dx, dy, ksize)
@spec createDerivFilter(integer(), integer(), integer(), integer(), integer()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a generalized Deriv operator.

Positional Arguments
  • srcType: integer().

    Source image type.

  • dstType: integer().

    Destination array type.

  • dx: integer().

    Derivative order with respect to x.

  • dy: integer().

    Derivative order with respect to y.

  • ksize: integer().

    Aperture size. See getDerivKernels for details.

Keyword Arguments
  • normalize: bool.

    Flag indicating whether to normalize (scale down) the filter coefficients or not. See getDerivKernels for details.

  • scale: double.

    Optional scale factor for the computed derivative values. By default, no scaling is applied. For details, see getDerivKernels .

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

Python prototype (for reference only):

createDerivFilter(srcType, dstType, dx, dy, ksize[, normalize[, scale[, rowBorderMode[, columnBorderMode]]]]) -> retval
createDerivFilter(srcType, dstType, dx, dy, ksize, opts)
@spec createDerivFilter(
  integer(),
  integer(),
  integer(),
  integer(),
  integer(),
  [
    columnBorderMode: term(),
    normalize: term(),
    rowBorderMode: term(),
    scale: term()
  ]
  | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a generalized Deriv operator.

Positional Arguments
  • srcType: integer().

    Source image type.

  • dstType: integer().

    Destination array type.

  • dx: integer().

    Derivative order with respect to x.

  • dy: integer().

    Derivative order with respect to y.

  • ksize: integer().

    Aperture size. See getDerivKernels for details.

Keyword Arguments
  • normalize: bool.

    Flag indicating whether to normalize (scale down) the filter coefficients or not. See getDerivKernels for details.

  • scale: double.

    Optional scale factor for the computed derivative values. By default, no scaling is applied. For details, see getDerivKernels .

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

Python prototype (for reference only):

createDerivFilter(srcType, dstType, dx, dy, ksize[, normalize[, scale[, rowBorderMode[, columnBorderMode]]]]) -> retval
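A usage sketch under the usual assumptions (CUDA device, Evision constant helpers); `CV_16SC1` output is chosen here to avoid overflowing the 8-bit range:

```elixir
# Normalized first derivative along x (dx = 1, dy = 0) with a 3x3 aperture,
# 8-bit input mapped to 16-bit signed output.
deriv =
  Evision.CUDA.createDerivFilter(
    Evision.Constant.cv_8UC1(),
    Evision.Constant.cv_16SC1(),
    1, 0, 3,
    normalize: true
  )
dx = Evision.CUDA.Filter.apply(deriv, gray_gpu)
```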
@spec createDFT(Keyword.t()) :: any() | {:error, String.t()}
createDFT(dft_size, flags)
@spec createDFT(
  {number(), number()},
  integer()
) :: Evision.CUDA.DFT.t() | {:error, String.t()}

Creates implementation for cuda::DFT.

Positional Arguments
  • dft_size: Size.

    The image size.

  • flags: integer().

    Optional flags:

    • DFT_ROWS transforms each individual row of the source matrix.
    • DFT_SCALE scales the result: divides it by the number of elements in the transform (obtained from dft_size ).
    • DFT_INVERSE inverts DFT. Use for complex-complex cases (real-complex and complex-real cases are always forward and inverse, respectively).
    • DFT_COMPLEX_INPUT Specifies that inputs will be complex with 2 channels.
    • DFT_REAL_OUTPUT specifies the output as real. The source matrix is the result of real-complex transform, so the destination matrix must be real.
Return
  • retval: DFT

Python prototype (for reference only):

createDFT(dft_size, flags) -> retval
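A hedged sketch; the `compute` call mirrors `cuda::DFT::compute` and its exact Evision arity is an assumption:

```elixir
# Forward DFT of a 512x512 CV_32FC1 image (flags = 0: full 2D transform,
# no scaling, real input).
dft = Evision.CUDA.createDFT({512, 512}, 0)
spectrum = Evision.CUDA.DFT.compute(dft, src_gpu)
```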
createDisparityBilateralFilter()
@spec createDisparityBilateralFilter() ::
  Evision.CUDA.DisparityBilateralFilter.t() | {:error, String.t()}

Creates DisparityBilateralFilter object.

Keyword Arguments
  • ndisp: integer().

    Number of disparities.

  • radius: integer().

    Filter radius.

  • iters: integer().

    Number of iterations.

Return
  • retval: Evision.CUDA.DisparityBilateralFilter.t()

Python prototype (for reference only):

createDisparityBilateralFilter([, ndisp[, radius[, iters]]]) -> retval
createDisparityBilateralFilter(named_args)
@spec createDisparityBilateralFilter(Keyword.t()) :: any() | {:error, String.t()}
@spec createDisparityBilateralFilter(
  [iters: term(), ndisp: term(), radius: term()]
  | nil
) ::
  Evision.CUDA.DisparityBilateralFilter.t() | {:error, String.t()}

Creates DisparityBilateralFilter object.

Keyword Arguments
  • ndisp: integer().

    Number of disparities.

  • radius: integer().

    Filter radius.

  • iters: integer().

    Number of iterations.

Return
  • retval: Evision.CUDA.DisparityBilateralFilter.t()

Python prototype (for reference only):

createDisparityBilateralFilter([, ndisp[, radius[, iters]]]) -> retval
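A sketch for orientation; the `apply` call mirrors `cuda::DisparityBilateralFilter::apply`, which refines a disparity map using the corresponding source image as guidance (Evision call shape is an assumption):

```elixir
# Refine a raw 64-level disparity map, guided by the left source image.
dbf = Evision.CUDA.createDisparityBilateralFilter(ndisp: 64, radius: 3, iters: 1)
refined = Evision.CUDA.DisparityBilateralFilter.apply(dbf, disparity_gpu, left_gpu)
```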
createGaussianFilter(named_args)
@spec createGaussianFilter(Keyword.t()) :: any() | {:error, String.t()}
createGaussianFilter(srcType, dstType, ksize, sigma1)
@spec createGaussianFilter(integer(), integer(), {number(), number()}, number()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a Gaussian filter.

Positional Arguments
  • srcType: integer().

    Source image type.

  • dstType: integer().

    Destination array type.

  • ksize: Size.

    Aperture size. See getGaussianKernel for details.

  • sigma1: double.

    Gaussian sigma in the horizontal direction. See getGaussianKernel for details.

Keyword Arguments
  • sigma2: double.

    Gaussian sigma in the vertical direction. If 0, then sigma2 is set equal to sigma1.

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

@sa GaussianBlur

Python prototype (for reference only):

createGaussianFilter(srcType, dstType, ksize, sigma1[, sigma2[, rowBorderMode[, columnBorderMode]]]) -> retval
createGaussianFilter(srcType, dstType, ksize, sigma1, opts)
@spec createGaussianFilter(
  integer(),
  integer(),
  {number(), number()},
  number(),
  [columnBorderMode: term(), rowBorderMode: term(), sigma2: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a Gaussian filter.

Positional Arguments
  • srcType: integer().

    Source image type.

  • dstType: integer().

    Destination array type.

  • ksize: Size.

    Aperture size. See getGaussianKernel for details.

  • sigma1: double.

    Gaussian sigma in the horizontal direction. See getGaussianKernel for details.

Keyword Arguments
  • sigma2: double.

    Gaussian sigma in the vertical direction. If 0, then sigma2 is set equal to sigma1.

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

@sa GaussianBlur

Python prototype (for reference only):

createGaussianFilter(srcType, dstType, ksize, sigma1[, sigma2[, rowBorderMode[, columnBorderMode]]]) -> retval
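A minimal sketch under the usual assumptions (CUDA device, Evision constant helpers, `Evision.CUDA.Filter.apply/2`):

```elixir
# 5x5 Gaussian blur with sigma 1.5 (sigma2 defaults to sigma1).
gauss =
  Evision.CUDA.createGaussianFilter(
    Evision.Constant.cv_8UC1(),
    Evision.Constant.cv_8UC1(),
    {5, 5},
    1.5
  )
blurred = Evision.CUDA.Filter.apply(gauss, gray_gpu)
```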
createGeneralizedHoughBallard()
@spec createGeneralizedHoughBallard() ::
  Evision.GeneralizedHoughBallard.t() | {:error, String.t()}

Creates an implementation of the generalized Hough transform from @cite Ballard1981 .

Return
  • retval: Evision.GeneralizedHoughBallard.t()

Python prototype (for reference only):

createGeneralizedHoughBallard() -> retval
createGeneralizedHoughBallard(named_args)
@spec createGeneralizedHoughBallard(Keyword.t()) :: any() | {:error, String.t()}
createGeneralizedHoughGuil()
@spec createGeneralizedHoughGuil() ::
  Evision.GeneralizedHoughGuil.t() | {:error, String.t()}

Creates an implementation of the generalized Hough transform from @cite Guil1999 .

Return
  • retval: Evision.GeneralizedHoughGuil.t()

Python prototype (for reference only):

createGeneralizedHoughGuil() -> retval
createGeneralizedHoughGuil(named_args)
@spec createGeneralizedHoughGuil(Keyword.t()) :: any() | {:error, String.t()}
createGoodFeaturesToTrackDetector(named_args)
@spec createGoodFeaturesToTrackDetector(Keyword.t()) :: any() | {:error, String.t()}
@spec createGoodFeaturesToTrackDetector(integer()) ::
  Evision.CUDA.CornersDetector.t() | {:error, String.t()}

Creates implementation for cuda::CornersDetector .

Positional Arguments
  • srcType: integer().

    Input source type. Only CV_8UC1 and CV_32FC1 are supported for now.

Keyword Arguments
  • maxCorners: integer().

    Maximum number of corners to return. If more corners are found than this limit, the strongest of them are returned.

  • qualityLevel: double.

    Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue (see cornerMinEigenVal ) or the Harris function response (see cornerHarris ). The corners with the quality measure less than the product are rejected. For example, if the best corner has the quality measure = 1500, and the qualityLevel=0.01 , then all the corners with the quality measure less than 15 are rejected.

  • minDistance: double.

    Minimum possible Euclidean distance between the returned corners.

  • blockSize: integer().

    Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. See cornerEigenValsAndVecs .

  • useHarrisDetector: bool.

    Parameter indicating whether to use a Harris detector (see cornerHarris) or cornerMinEigenVal.

  • harrisK: double.

    Free parameter of the Harris detector.

Return
  • retval: CornersDetector

Python prototype (for reference only):

createGoodFeaturesToTrackDetector(srcType[, maxCorners[, qualityLevel[, minDistance[, blockSize[, useHarrisDetector[, harrisK]]]]]]) -> retval
createGoodFeaturesToTrackDetector(srcType, opts)
@spec createGoodFeaturesToTrackDetector(
  integer(),
  [
    blockSize: term(),
    harrisK: term(),
    maxCorners: term(),
    minDistance: term(),
    qualityLevel: term(),
    useHarrisDetector: term()
  ]
  | nil
) :: Evision.CUDA.CornersDetector.t() | {:error, String.t()}

Creates implementation for cuda::CornersDetector .

Positional Arguments
  • srcType: integer().

    Input source type. Only CV_8UC1 and CV_32FC1 are supported for now.

Keyword Arguments
  • maxCorners: integer().

    Maximum number of corners to return. If more corners are found than this limit, the strongest of them are returned.

  • qualityLevel: double.

    Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue (see cornerMinEigenVal ) or the Harris function response (see cornerHarris ). The corners with the quality measure less than the product are rejected. For example, if the best corner has the quality measure = 1500, and the qualityLevel=0.01 , then all the corners with the quality measure less than 15 are rejected.

  • minDistance: double.

    Minimum possible Euclidean distance between the returned corners.

  • blockSize: integer().

    Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. See cornerEigenValsAndVecs .

  • useHarrisDetector: bool.

    Parameter indicating whether to use a Harris detector (see cornerHarris) or cornerMinEigenVal.

  • harrisK: double.

    Free parameter of the Harris detector.

Return
  • retval: CornersDetector

Python prototype (for reference only):

createGoodFeaturesToTrackDetector(srcType[, maxCorners[, qualityLevel[, minDistance[, blockSize[, useHarrisDetector[, harrisK]]]]]]) -> retval
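A usage sketch (hedged as before; `CornersDetector.detect/2` mirrors the underlying `cuda::CornersDetector::detect`):

```elixir
# Up to 500 corners, rejecting any weaker than 1% of the strongest one
# and any closer than 10 px to a stronger corner.
detector =
  Evision.CUDA.createGoodFeaturesToTrackDetector(
    Evision.Constant.cv_8UC1(),
    maxCorners: 500,
    qualityLevel: 0.01,
    minDistance: 10.0
  )
corners = Evision.CUDA.CornersDetector.detect(detector, gray_gpu)
```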
createGpuMatFromCudaMemory(named_args)
@spec createGpuMatFromCudaMemory(Keyword.t()) :: any() | {:error, String.t()}
createGpuMatFromCudaMemory(size, type, cudaMemoryAddress)
@spec createGpuMatFromCudaMemory({number(), number()}, integer(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

createGpuMatFromCudaMemory

Positional Arguments
  • size: Size.

    2D array size: Size(cols, rows). In the Size() constructor, the number of rows and the number of columns go in the reverse order.

  • type: integer().

    Type of the matrix.

  • cudaMemoryAddress: size_t.

    Address of the allocated GPU memory on the device. This does not allocate matrix data. Instead, it just initializes the matrix header that points to the specified \a cudaMemoryAddress, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it.

Keyword Arguments
  • step: size_t.

    Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to Mat::AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize(). See GpuMat::elemSize.

Return
  • retval: Evision.CUDA.GpuMat.t()

Has overloading in C++

Note: Overload for generation of bindings only, not exported or intended for use internally from C++.

Python prototype (for reference only):

createGpuMatFromCudaMemory(size, type, cudaMemoryAddress[, step]) -> retval
createGpuMatFromCudaMemory(size, type, cudaMemoryAddress, opts)
@spec createGpuMatFromCudaMemory(
  {number(), number()},
  integer(),
  integer(),
  [{:step, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}
@spec createGpuMatFromCudaMemory(integer(), integer(), integer(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Bindings overload to create a GpuMat from existing GPU memory.

Positional Arguments
  • rows: integer().

    Row count.

  • cols: integer().

    Column count.

  • type: integer().

    Type of the matrix.

  • cudaMemoryAddress: size_t.

    Address of the allocated GPU memory on the device. This does not allocate matrix data. Instead, it just initializes the matrix header that points to the specified \a cudaMemoryAddress, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it.

Keyword Arguments
  • step: size_t.

    Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to Mat::AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize(). See GpuMat::elemSize.

Return
  • retval: Evision.CUDA.GpuMat.t()

Note: Overload for generation of bindings only, not exported or intended for use internally from C++.

Python prototype (for reference only):

createGpuMatFromCudaMemory(rows, cols, type, cudaMemoryAddress[, step]) -> retval

Variant 2:

createGpuMatFromCudaMemory

Positional Arguments
  • size: Size.

    2D array size: Size(cols, rows). In the Size() constructor, the number of rows and the number of columns go in the reverse order.

  • type: integer().

    Type of the matrix.

  • cudaMemoryAddress: size_t.

    Address of the allocated GPU memory on the device. This does not allocate matrix data. Instead, it just initializes the matrix header that points to the specified \a cudaMemoryAddress, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it.

Keyword Arguments
  • step: size_t.

    Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to Mat::AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize(). See GpuMat::elemSize.

Return
  • retval: Evision.CUDA.GpuMat.t()

Has overloading in C++

Note: Overload for generation of bindings only, not exported or intended for use internally from C++.

Python prototype (for reference only):

createGpuMatFromCudaMemory(size, type, cudaMemoryAddress[, step]) -> retval
createGpuMatFromCudaMemory(rows, cols, type, cudaMemoryAddress, opts)
@spec createGpuMatFromCudaMemory(
  integer(),
  integer(),
  integer(),
  integer(),
  [{:step, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Bindings overload to create a GpuMat from existing GPU memory.

Positional Arguments
  • rows: integer().

    Row count.

  • cols: integer().

    Column count.

  • type: integer().

    Type of the matrix.

  • cudaMemoryAddress: size_t.

    Address of the allocated GPU memory on the device. This does not allocate matrix data. Instead, it just initializes the matrix header that points to the specified \a cudaMemoryAddress, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it.

Keyword Arguments
  • step: size_t.

    Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to Mat::AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize(). See GpuMat::elemSize.

Return
  • retval: Evision.CUDA.GpuMat.t()

Note: Overload for generation of bindings only, not exported or intended for use internally from C++.

Python prototype (for reference only):

createGpuMatFromCudaMemory(rows, cols, type, cudaMemoryAddress[, step]) -> retval
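A hedged sketch of the zero-copy interop case described above; `device_ptr` is a hypothetical integer device address, not something this function allocates:

```elixir
# Wrap an existing device allocation (e.g. handed over from another CUDA
# library) in a GpuMat header. No data is copied, and the caller keeps
# ownership: the memory must outlive the GpuMat and be freed manually.
# `device_ptr` must point to at least 640 * 480 * 1 bytes of valid GPU
# memory for a CV_8UC1 image of Size(640, 480).
gpu =
  Evision.CUDA.createGpuMatFromCudaMemory(
    {640, 480},
    Evision.Constant.cv_8UC1(),
    device_ptr
  )
```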
createHarrisCorner(named_args)
@spec createHarrisCorner(Keyword.t()) :: any() | {:error, String.t()}
createHarrisCorner(srcType, blockSize, ksize, k)
@spec createHarrisCorner(integer(), integer(), integer(), number()) ::
  Evision.CUDA.CornernessCriteria.t() | {:error, String.t()}

Creates implementation for Harris cornerness criteria.

Positional Arguments
  • srcType: integer().

    Input source type. Only CV_8UC1 and CV_32FC1 are supported for now.

  • blockSize: integer().

    Neighborhood size.

  • ksize: integer().

    Aperture parameter for the Sobel operator.

  • k: double.

    Harris detector free parameter.

Keyword Arguments
  • borderType: integer().

    Pixel extrapolation method. Only BORDER_REFLECT101 and BORDER_REPLICATE are supported for now.

Return
  • retval: CornernessCriteria

@sa cornerHarris

Python prototype (for reference only):

createHarrisCorner(srcType, blockSize, ksize, k[, borderType]) -> retval
createHarrisCorner(srcType, blockSize, ksize, k, opts)
@spec createHarrisCorner(
  integer(),
  integer(),
  integer(),
  number(),
  [{:borderType, term()}] | nil
) ::
  Evision.CUDA.CornernessCriteria.t() | {:error, String.t()}

Creates implementation for Harris cornerness criteria.

Positional Arguments
  • srcType: integer().

    Input source type. Only CV_8UC1 and CV_32FC1 are supported for now.

  • blockSize: integer().

    Neighborhood size.

  • ksize: integer().

    Aperture parameter for the Sobel operator.

  • k: double.

    Harris detector free parameter.

Keyword Arguments
  • borderType: integer().

    Pixel extrapolation method. Only BORDER_REFLECT101 and BORDER_REPLICATE are supported for now.

Return
  • retval: CornernessCriteria

@sa cornerHarris

Python prototype (for reference only):

createHarrisCorner(srcType, blockSize, ksize, k[, borderType]) -> retval
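A sketch under the usual assumptions; `CornernessCriteria.compute/2` mirrors `cuda::CornernessCriteria::compute`, which yields a per-pixel response map rather than a corner list:

```elixir
# Per-pixel Harris response: 3x3 neighborhood, 3x3 Sobel aperture, k = 0.04.
harris = Evision.CUDA.createHarrisCorner(Evision.Constant.cv_8UC1(), 3, 3, 0.04)
response = Evision.CUDA.CornernessCriteria.compute(harris, gray_gpu)
```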
createHoughCirclesDetector(named_args)
@spec createHoughCirclesDetector(Keyword.t()) :: any() | {:error, String.t()}
createHoughCirclesDetector(dp, minDist, cannyThreshold, votesThreshold, minRadius, maxRadius)
@spec createHoughCirclesDetector(
  number(),
  number(),
  integer(),
  integer(),
  integer(),
  integer()
) ::
  Evision.CUDA.HoughCirclesDetector.t() | {:error, String.t()}

Creates implementation for cuda::HoughCirclesDetector .

Positional Arguments
  • dp: float.

    Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1 , the accumulator has the same resolution as the input image. If dp=2 , the accumulator has half the width and height of the input image.

  • minDist: float.

    Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.

  • cannyThreshold: integer().

    The higher of the two thresholds passed to the Canny edge detector (the lower one is half of it).

  • votesThreshold: integer().

    The accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected.

  • minRadius: integer().

    Minimum circle radius.

  • maxRadius: integer().

    Maximum circle radius.

Keyword Arguments
  • maxCircles: integer().

    Maximum number of output circles.

Return
  • retval: HoughCirclesDetector

Python prototype (for reference only):

createHoughCirclesDetector(dp, minDist, cannyThreshold, votesThreshold, minRadius, maxRadius[, maxCircles]) -> retval
createHoughCirclesDetector(dp, minDist, cannyThreshold, votesThreshold, minRadius, maxRadius, opts)
@spec createHoughCirclesDetector(
  number(),
  number(),
  integer(),
  integer(),
  integer(),
  integer(),
  [{:maxCircles, term()}] | nil
) :: Evision.CUDA.HoughCirclesDetector.t() | {:error, String.t()}

Creates implementation for cuda::HoughCirclesDetector .

Positional Arguments
  • dp: float.

    Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1 , the accumulator has the same resolution as the input image. If dp=2 , the accumulator has half the width and height of the input image.

  • minDist: float.

    Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.

  • cannyThreshold: integer().

    The higher of the two thresholds passed to the Canny edge detector (the lower one is half of it).

  • votesThreshold: integer().

    The accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected.

  • minRadius: integer().

    Minimum circle radius.

  • maxRadius: integer().

    Maximum circle radius.

Keyword Arguments
  • maxCircles: integer().

    Maximum number of output circles.

Return
  • retval: HoughCirclesDetector

Python prototype (for reference only):

createHoughCirclesDetector(dp, minDist, cannyThreshold, votesThreshold, minRadius, maxRadius[, maxCircles]) -> retval
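A brief sketch (hedged as before; `detect/2` mirrors `cuda::HoughCirclesDetector::detect`):

```elixir
# Circles with radii of 20..100 px whose centers are at least 30 px apart;
# full-resolution accumulator (dp = 1), Canny threshold 100, 50-vote minimum.
hough = Evision.CUDA.createHoughCirclesDetector(1.0, 30.0, 100, 50, 20, 100)
circles = Evision.CUDA.HoughCirclesDetector.detect(hough, gray_gpu)
```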
createHoughLinesDetector(named_args)
@spec createHoughLinesDetector(Keyword.t()) :: any() | {:error, String.t()}
createHoughLinesDetector(rho, theta, threshold)
@spec createHoughLinesDetector(number(), number(), integer()) ::
  Evision.CUDA.HoughLinesDetector.t() | {:error, String.t()}

Creates an implementation of cuda::HoughLinesDetector.

Positional Arguments
  • rho: float.

    Distance resolution of the accumulator in pixels.

  • theta: float.

    Angle resolution of the accumulator in radians.

  • threshold: integer().

    Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold).

Keyword Arguments
  • doSort: bool.

    Whether to sort the output lines by votes.

  • maxLines: integer().

    Maximum number of output lines.

Return
  • retval: HoughLinesDetector

Python prototype (for reference only):

createHoughLinesDetector(rho, theta, threshold[, doSort[, maxLines]]) -> retval
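A short sketch under similar assumptions: `gpu_edges` is a binary edge image already uploaded as an `Evision.CUDA.GpuMat`, and `Evision.CUDA.HoughLinesDetector.detect/2` is assumed to perform the transform:

```elixir
# rho = 1 px, theta = 1 degree, 100 votes -- illustrative settings.
detector =
  Evision.CUDA.createHoughLinesDetector(1.0, :math.pi() / 180.0, 100,
    doSort: true,
    maxLines: 200
  )

lines = Evision.CUDA.HoughLinesDetector.detect(detector, gpu_edges)
```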
Link to this function

createHoughLinesDetector(rho, theta, threshold, opts)

View Source
@spec createHoughLinesDetector(
  number(),
  number(),
  integer(),
  [doSort: term(), maxLines: term()] | nil
) ::
  Evision.CUDA.HoughLinesDetector.t() | {:error, String.t()}

Creates an implementation of cuda::HoughLinesDetector.

Positional Arguments
  • rho: float.

    Distance resolution of the accumulator in pixels.

  • theta: float.

    Angle resolution of the accumulator in radians.

  • threshold: integer().

    Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold).

Keyword Arguments
  • doSort: bool.

    Whether to sort the output lines by votes.

  • maxLines: integer().

    Maximum number of output lines.

Return
  • retval: HoughLinesDetector

Python prototype (for reference only):

createHoughLinesDetector(rho, theta, threshold[, doSort[, maxLines]]) -> retval
Link to this function

createHoughSegmentDetector(named_args)

View Source
@spec createHoughSegmentDetector(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createHoughSegmentDetector(rho, theta, minLineLength, maxLineGap)

View Source
@spec createHoughSegmentDetector(number(), number(), integer(), integer()) ::
  Evision.CUDA.HoughSegmentDetector.t() | {:error, String.t()}

Creates an implementation of cuda::HoughSegmentDetector.

Positional Arguments
  • rho: float.

    Distance resolution of the accumulator in pixels.

  • theta: float.

    Angle resolution of the accumulator in radians.

  • minLineLength: integer().

    Minimum line length. Line segments shorter than that are rejected.

  • maxLineGap: integer().

    Maximum allowed gap between points on the same line to link them.

Keyword Arguments
  • maxLines: integer().

    Maximum number of output lines.

  • threshold: integer().

    Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold).

Return
  • retval: HoughSegmentDetector

Python prototype (for reference only):

createHoughSegmentDetector(rho, theta, minLineLength, maxLineGap[, maxLines[, threshold]]) -> retval
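A hedged sketch of the probabilistic variant, again assuming a binary edge image `gpu_edges` on the device and an `Evision.CUDA.HoughSegmentDetector.detect/2` function:

```elixir
# Segments at least 50 px long, bridging gaps up to 5 px; values are illustrative.
detector =
  Evision.CUDA.createHoughSegmentDetector(1.0, :math.pi() / 180.0, 50, 5,
    maxLines: 300
  )

segments = Evision.CUDA.HoughSegmentDetector.detect(detector, gpu_edges)
```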
Link to this function

createHoughSegmentDetector(rho, theta, minLineLength, maxLineGap, opts)

View Source
@spec createHoughSegmentDetector(
  number(),
  number(),
  integer(),
  integer(),
  [maxLines: term(), threshold: term()] | nil
) :: Evision.CUDA.HoughSegmentDetector.t() | {:error, String.t()}

Creates an implementation of cuda::HoughSegmentDetector.

Positional Arguments
  • rho: float.

    Distance resolution of the accumulator in pixels.

  • theta: float.

    Angle resolution of the accumulator in radians.

  • minLineLength: integer().

    Minimum line length. Line segments shorter than that are rejected.

  • maxLineGap: integer().

    Maximum allowed gap between points on the same line to link them.

Keyword Arguments
  • maxLines: integer().

    Maximum number of output lines.

  • threshold: integer().

    Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold).

Return
  • retval: HoughSegmentDetector

Python prototype (for reference only):

createHoughSegmentDetector(rho, theta, minLineLength, maxLineGap[, maxLines[, threshold]]) -> retval
Link to this function

createLaplacianFilter(named_args)

View Source
@spec createLaplacianFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createLaplacianFilter(srcType, dstType)

View Source
@spec createLaplacianFilter(integer(), integer()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a Laplacian operator.

Positional Arguments
  • srcType: integer().

    Input image type. Supports CV_8U, CV_16U and CV_32F one- and four-channel images.

  • dstType: integer().

    Output image type. Only the same type as src is supported for now.

Keyword Arguments
  • ksize: integer().

    Aperture size used to compute the second-derivative filters (see getDerivKernels). It must be positive and odd. Only ksize = 1 and ksize = 3 are supported.

  • scale: double.

    Optional scale factor for the computed Laplacian values. By default, no scaling is applied (see getDerivKernels ).

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

@sa Laplacian

Python prototype (for reference only):

createLaplacianFilter(srcType, dstType[, ksize[, scale[, borderMode[, borderVal]]]]) -> retval
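A minimal sketch, assuming `gpu_img` is a CV_8UC1 `Evision.CUDA.GpuMat`, that type constants are exposed as `Evision.Constant.cv_8UC1/0`, and that the returned filter is applied via `Evision.CUDA.Filter.apply/2`:

```elixir
# CV_8UC1 in, CV_8UC1 out; ksize: 3 selects the 3x3 second-derivative kernel.
src_type = Evision.Constant.cv_8UC1()

laplacian = Evision.CUDA.createLaplacianFilter(src_type, src_type, ksize: 3)
edges = Evision.CUDA.Filter.apply(laplacian, gpu_img)
```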
Link to this function

createLaplacianFilter(srcType, dstType, opts)

View Source
@spec createLaplacianFilter(
  integer(),
  integer(),
  [borderMode: term(), borderVal: term(), ksize: term(), scale: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a Laplacian operator.

Positional Arguments
  • srcType: integer().

    Input image type. Supports CV_8U, CV_16U and CV_32F one- and four-channel images.

  • dstType: integer().

    Output image type. Only the same type as src is supported for now.

Keyword Arguments
  • ksize: integer().

    Aperture size used to compute the second-derivative filters (see getDerivKernels). It must be positive and odd. Only ksize = 1 and ksize = 3 are supported.

  • scale: double.

    Optional scale factor for the computed Laplacian values. By default, no scaling is applied (see getDerivKernels ).

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

@sa Laplacian

Python prototype (for reference only):

createLaplacianFilter(srcType, dstType[, ksize[, scale[, borderMode[, borderVal]]]]) -> retval
Link to this function

createLinearFilter(named_args)

View Source
@spec createLinearFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createLinearFilter(srcType, dstType, kernel)

View Source
@spec createLinearFilter(integer(), integer(), Evision.Mat.maybe_mat_in()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}
@spec createLinearFilter(integer(), integer(), Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Variant 1:

Creates a non-separable linear 2D filter.

Positional Arguments
  • srcType: integer().

    Input image type. Supports CV_8U, CV_16U and CV_32F one- and four-channel images.

  • dstType: integer().

    Output image type. Only the same type as src is supported for now.

  • kernel: Evision.Mat.

    2D array of filter coefficients.

Keyword Arguments
  • anchor: Point.

    Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

@sa filter2D

Python prototype (for reference only):

createLinearFilter(srcType, dstType, kernel[, anchor[, borderMode[, borderVal]]]) -> retval

Variant 2:

Creates a non-separable linear 2D filter.

Positional Arguments
  • srcType: integer().

    Input image type. Supports CV_8U, CV_16U and CV_32F one- and four-channel images.

  • dstType: integer().

    Output image type. Only the same type as src is supported for now.

  • kernel: Evision.CUDA.GpuMat.t().

    2D array of filter coefficients.

Keyword Arguments
  • anchor: Point.

    Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

@sa filter2D

Python prototype (for reference only):

createLinearFilter(srcType, dstType, kernel[, anchor[, borderMode[, borderVal]]]) -> retval
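A sketch of Variant 1 with a CPU-side kernel. It assumes `Evision.Mat.from_nx/1`, `Evision.Constant.cv_8UC1/0`, and `Evision.CUDA.Filter.apply/2` are available, and that `gpu_img` is a CV_8UC1 `Evision.CUDA.GpuMat`:

```elixir
# 3x3 box (averaging) kernel built on the CPU via Nx, then passed as a Mat.
kernel =
  Nx.broadcast(1.0 / 9.0, {3, 3})
  |> Nx.as_type(:f32)
  |> Evision.Mat.from_nx()

src_type = Evision.Constant.cv_8UC1()
box = Evision.CUDA.createLinearFilter(src_type, src_type, kernel)
smoothed = Evision.CUDA.Filter.apply(box, gpu_img)
```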
Link to this function

createLinearFilter(srcType, dstType, kernel, opts)

View Source
@spec createLinearFilter(
  integer(),
  integer(),
  Evision.Mat.maybe_mat_in(),
  [anchor: term(), borderMode: term(), borderVal: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}
@spec createLinearFilter(
  integer(),
  integer(),
  Evision.CUDA.GpuMat.t(),
  [anchor: term(), borderMode: term(), borderVal: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Variant 1:

Creates a non-separable linear 2D filter.

Positional Arguments
  • srcType: integer().

    Input image type. Supports CV_8U, CV_16U and CV_32F one- and four-channel images.

  • dstType: integer().

    Output image type. Only the same type as src is supported for now.

  • kernel: Evision.Mat.

    2D array of filter coefficients.

Keyword Arguments
  • anchor: Point.

    Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

@sa filter2D

Python prototype (for reference only):

createLinearFilter(srcType, dstType, kernel[, anchor[, borderMode[, borderVal]]]) -> retval

Variant 2:

Creates a non-separable linear 2D filter.

Positional Arguments
  • srcType: integer().

    Input image type. Supports CV_8U, CV_16U and CV_32F one- and four-channel images.

  • dstType: integer().

    Output image type. Only the same type as src is supported for now.

  • kernel: Evision.CUDA.GpuMat.t().

    2D array of filter coefficients.

Keyword Arguments
  • anchor: Point.

    Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

@sa filter2D

Python prototype (for reference only):

createLinearFilter(srcType, dstType, kernel[, anchor[, borderMode[, borderVal]]]) -> retval
Link to this function

createLookUpTable(named_args)

View Source
@spec createLookUpTable(Keyword.t()) :: any() | {:error, String.t()}
@spec createLookUpTable(Evision.Mat.maybe_mat_in()) ::
  Evision.CUDA.LookUpTable.t() | {:error, String.t()}
@spec createLookUpTable(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.LookUpTable.t() | {:error, String.t()}

Variant 1:

Creates an implementation of cuda::LookUpTable.

Positional Arguments
  • lut: Evision.Mat.

    Look-up table of 256 elements. It is a continuous CV_8U matrix.

Return
  • retval: LookUpTable

Python prototype (for reference only):

createLookUpTable(lut) -> retval

Variant 2:

Creates an implementation of cuda::LookUpTable.

Positional Arguments
  • lut: Evision.CUDA.GpuMat.t().

    Look-up table of 256 elements. It is a continuous CV_8U matrix.

Return
  • retval: LookUpTable

Python prototype (for reference only):

createLookUpTable(lut) -> retval
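A sketch of Variant 1 building the 256-element table on the CPU. It assumes `Evision.Mat.from_nx/1` exists, `gpu_img` is a CV_8U `Evision.CUDA.GpuMat`, and the table is applied via an assumed `Evision.CUDA.LookUpTable.transform/2`:

```elixir
# 256-entry CV_8U table that inverts intensities: v -> 255 - v.
lut_mat =
  255..0//-1
  |> Enum.to_list()
  |> Nx.tensor(type: :u8)
  |> Evision.Mat.from_nx()

lut = Evision.CUDA.createLookUpTable(lut_mat)
inverted = Evision.CUDA.LookUpTable.transform(lut, gpu_img)
```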
Link to this function

createMedianFilter(named_args)

View Source
@spec createMedianFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createMedianFilter(srcType, windowSize)

View Source
@spec createMedianFilter(integer(), integer()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Performs median filtering for each point of the source image.

Positional Arguments
  • srcType: integer().

    Type of the source image. Only CV_8UC1 images are supported for now.

  • windowSize: integer().

    Size of the kernel used for the filtering. Uses a (windowSize x windowSize) filter.

Keyword Arguments
  • partition: integer().

    Specifies the parallel granularity of the workload. This parameter should be tuned by GPU experts when optimizing performance.

Return
  • retval: Filter

Outputs an image that has been filtered using a median-filtering formulation. Details on this algorithm can be found in: Green, O., 2017. "Efficient scalable median filtering using histogram-based operations", IEEE Transactions on Image Processing, 27(5), pp.2217-2228.

Python prototype (for reference only):

createMedianFilter(srcType, windowSize[, partition]) -> retval
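A short sketch, under the same assumptions as above (`Evision.Constant.cv_8UC1/0`, `Evision.CUDA.Filter.apply/2`, and a CV_8UC1 `gpu_img` on the device):

```elixir
# 5x5 median filter on a CV_8UC1 image.
median = Evision.CUDA.createMedianFilter(Evision.Constant.cv_8UC1(), 5)
denoised = Evision.CUDA.Filter.apply(median, gpu_img)
```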
Link to this function

createMedianFilter(srcType, windowSize, opts)

View Source
@spec createMedianFilter(integer(), integer(), [{:partition, term()}] | nil) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Performs median filtering for each point of the source image.

Positional Arguments
  • srcType: integer().

    Type of the source image. Only CV_8UC1 images are supported for now.

  • windowSize: integer().

    Size of the kernel used for the filtering. Uses a (windowSize x windowSize) filter.

Keyword Arguments
  • partition: integer().

    Specifies the parallel granularity of the workload. This parameter should be tuned by GPU experts when optimizing performance.

Return
  • retval: Filter

Outputs an image that has been filtered using a median-filtering formulation. Details on this algorithm can be found in: Green, O., 2017. "Efficient scalable median filtering using histogram-based operations", IEEE Transactions on Image Processing, 27(5), pp.2217-2228.

Python prototype (for reference only):

createMedianFilter(srcType, windowSize[, partition]) -> retval
Link to this function

createMinEigenValCorner(named_args)

View Source
@spec createMinEigenValCorner(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createMinEigenValCorner(srcType, blockSize, ksize)

View Source
@spec createMinEigenValCorner(integer(), integer(), integer()) ::
  Evision.CUDA.CornernessCriteria.t() | {:error, String.t()}

Creates an implementation for the minimum eigenvalue of a 2x2 derivative covariance matrix (the cornerness criteria).

Positional Arguments
  • srcType: integer().

    Input source type. Only CV_8UC1 and CV_32FC1 are supported for now.

  • blockSize: integer().

    Neighborhood size.

  • ksize: integer().

    Aperture parameter for the Sobel operator.

Keyword Arguments
  • borderType: integer().

    Pixel extrapolation method. Only BORDER_REFLECT101 and BORDER_REPLICATE are supported for now.

Return
  • retval: CornernessCriteria

@sa cornerMinEigenVal

Python prototype (for reference only):

createMinEigenValCorner(srcType, blockSize, ksize[, borderType]) -> retval
Link to this function

createMinEigenValCorner(srcType, blockSize, ksize, opts)

View Source
@spec createMinEigenValCorner(
  integer(),
  integer(),
  integer(),
  [{:borderType, term()}] | nil
) ::
  Evision.CUDA.CornernessCriteria.t() | {:error, String.t()}

Creates an implementation for the minimum eigenvalue of a 2x2 derivative covariance matrix (the cornerness criteria).

Positional Arguments
  • srcType: integer().

    Input source type. Only CV_8UC1 and CV_32FC1 are supported for now.

  • blockSize: integer().

    Neighborhood size.

  • ksize: integer().

    Aperture parameter for the Sobel operator.

Keyword Arguments
  • borderType: integer().

    Pixel extrapolation method. Only BORDER_REFLECT101 and BORDER_REPLICATE are supported for now.

Return
  • retval: CornernessCriteria

@sa cornerMinEigenVal

Python prototype (for reference only):

createMinEigenValCorner(srcType, blockSize, ksize[, borderType]) -> retval
Link to this function

createMorphologyFilter(named_args)

View Source
@spec createMorphologyFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createMorphologyFilter(op, srcType, kernel)

View Source
@spec createMorphologyFilter(integer(), integer(), Evision.Mat.maybe_mat_in()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}
@spec createMorphologyFilter(integer(), integer(), Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Variant 1:

Creates a 2D morphological filter.

Positional Arguments
  • op: integer().

    Type of morphological operation. The following types are possible:

    • MORPH_ERODE erode
    • MORPH_DILATE dilate
    • MORPH_OPEN opening
    • MORPH_CLOSE closing
    • MORPH_GRADIENT morphological gradient
    • MORPH_TOPHAT "top hat"
    • MORPH_BLACKHAT "black hat"
  • srcType: integer().

    Input/output image type. Only CV_8UC1, CV_8UC4, CV_32FC1 and CV_32FC4 are supported.

  • kernel: Evision.Mat.

    2D 8-bit structuring element for the morphological operation.

Keyword Arguments
  • anchor: Point.

    Anchor position within the structuring element. Negative values mean that the anchor is at the center.

  • iterations: integer().

    Number of times erosion and dilation are applied.

Return
  • retval: Filter

@sa morphologyEx

Python prototype (for reference only):

createMorphologyFilter(op, srcType, kernel[, anchor[, iterations]]) -> retval

Variant 2:

Creates a 2D morphological filter.

Positional Arguments
  • op: integer().

    Type of morphological operation. The following types are possible:

    • MORPH_ERODE erode
    • MORPH_DILATE dilate
    • MORPH_OPEN opening
    • MORPH_CLOSE closing
    • MORPH_GRADIENT morphological gradient
    • MORPH_TOPHAT "top hat"
    • MORPH_BLACKHAT "black hat"
  • srcType: integer().

    Input/output image type. Only CV_8UC1, CV_8UC4, CV_32FC1 and CV_32FC4 are supported.

  • kernel: Evision.CUDA.GpuMat.t().

    2D 8-bit structuring element for the morphological operation.

Keyword Arguments
  • anchor: Point.

    Anchor position within the structuring element. Negative values mean that the anchor is at the center.

  • iterations: integer().

    Number of times erosion and dilation are applied.

Return
  • retval: Filter

@sa morphologyEx

Python prototype (for reference only):

createMorphologyFilter(op, srcType, kernel[, anchor[, iterations]]) -> retval
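A sketch of a morphological opening. It assumes `Evision.getStructuringElement/2` and the constants `Evision.Constant.cv_MORPH_RECT/0`, `cv_MORPH_OPEN/0`, and `cv_8UC1/0` are available, plus `Evision.CUDA.Filter.apply/2` and a CV_8UC1 `gpu_img`:

```elixir
# Morphological opening (erode then dilate) with a 3x3 rectangular element.
kernel =
  Evision.getStructuringElement(Evision.Constant.cv_MORPH_RECT(), {3, 3})

opening =
  Evision.CUDA.createMorphologyFilter(
    Evision.Constant.cv_MORPH_OPEN(),
    Evision.Constant.cv_8UC1(),
    kernel,
    iterations: 1
  )

cleaned = Evision.CUDA.Filter.apply(opening, gpu_img)
```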
Link to this function

createMorphologyFilter(op, srcType, kernel, opts)

View Source
@spec createMorphologyFilter(
  integer(),
  integer(),
  Evision.Mat.maybe_mat_in(),
  [anchor: term(), iterations: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}
@spec createMorphologyFilter(
  integer(),
  integer(),
  Evision.CUDA.GpuMat.t(),
  [anchor: term(), iterations: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Variant 1:

Creates a 2D morphological filter.

Positional Arguments
  • op: integer().

    Type of morphological operation. The following types are possible:

    • MORPH_ERODE erode
    • MORPH_DILATE dilate
    • MORPH_OPEN opening
    • MORPH_CLOSE closing
    • MORPH_GRADIENT morphological gradient
    • MORPH_TOPHAT "top hat"
    • MORPH_BLACKHAT "black hat"
  • srcType: integer().

    Input/output image type. Only CV_8UC1, CV_8UC4, CV_32FC1 and CV_32FC4 are supported.

  • kernel: Evision.Mat.

    2D 8-bit structuring element for the morphological operation.

Keyword Arguments
  • anchor: Point.

    Anchor position within the structuring element. Negative values mean that the anchor is at the center.

  • iterations: integer().

    Number of times erosion and dilation are applied.

Return
  • retval: Filter

@sa morphologyEx

Python prototype (for reference only):

createMorphologyFilter(op, srcType, kernel[, anchor[, iterations]]) -> retval

Variant 2:

Creates a 2D morphological filter.

Positional Arguments
  • op: integer().

    Type of morphological operation. The following types are possible:

    • MORPH_ERODE erode
    • MORPH_DILATE dilate
    • MORPH_OPEN opening
    • MORPH_CLOSE closing
    • MORPH_GRADIENT morphological gradient
    • MORPH_TOPHAT "top hat"
    • MORPH_BLACKHAT "black hat"
  • srcType: integer().

    Input/output image type. Only CV_8UC1, CV_8UC4, CV_32FC1 and CV_32FC4 are supported.

  • kernel: Evision.CUDA.GpuMat.t().

    2D 8-bit structuring element for the morphological operation.

Keyword Arguments
  • anchor: Point.

    Anchor position within the structuring element. Negative values mean that the anchor is at the center.

  • iterations: integer().

    Number of times erosion and dilation are applied.

Return
  • retval: Filter

@sa morphologyEx

Python prototype (for reference only):

createMorphologyFilter(op, srcType, kernel[, anchor[, iterations]]) -> retval
Link to this function

createRowSumFilter(named_args)

View Source
@spec createRowSumFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createRowSumFilter(srcType, dstType, ksize)

View Source
@spec createRowSumFilter(integer(), integer(), integer()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a horizontal 1D box filter.

Positional Arguments
  • srcType: integer().

    Input image type. Only CV_8UC1 type is supported for now.

  • dstType: integer().

    Output image type. Only CV_32FC1 type is supported for now.

  • ksize: integer().

    Kernel size.

Keyword Arguments
  • anchor: integer().

    Anchor point. The default value (-1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

Python prototype (for reference only):

createRowSumFilter(srcType, dstType, ksize[, anchor[, borderMode[, borderVal]]]) -> retval
Link to this function

createRowSumFilter(srcType, dstType, ksize, opts)

View Source
@spec createRowSumFilter(
  integer(),
  integer(),
  integer(),
  [anchor: term(), borderMode: term(), borderVal: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a horizontal 1D box filter.

Positional Arguments
  • srcType: integer().

    Input image type. Only CV_8UC1 type is supported for now.

  • dstType: integer().

    Output image type. Only CV_32FC1 type is supported for now.

  • ksize: integer().

    Kernel size.

Keyword Arguments
  • anchor: integer().

    Anchor point. The default value (-1) means that the anchor is at the kernel center.

  • borderMode: integer().

    Pixel extrapolation method. For details, see borderInterpolate .

  • borderVal: Evision.scalar().

    Default border value.

Return
  • retval: Filter

Python prototype (for reference only):

createRowSumFilter(srcType, dstType, ksize[, anchor[, borderMode[, borderVal]]]) -> retval
Link to this function

createScharrFilter(named_args)

View Source
@spec createScharrFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createScharrFilter(srcType, dstType, dx, dy)

View Source
@spec createScharrFilter(integer(), integer(), integer(), integer()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a vertical or horizontal Scharr operator.

Positional Arguments
  • srcType: integer().

    Source image type.

  • dstType: integer().

    Destination array type.

  • dx: integer().

    Order of the derivative in x.

  • dy: integer().

    Order of the derivative in y.

Keyword Arguments
  • scale: double.

    Optional scale factor for the computed derivative values. By default, no scaling is applied. See getDerivKernels for details.

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

@sa Scharr

Python prototype (for reference only):

createScharrFilter(srcType, dstType, dx, dy[, scale[, rowBorderMode[, columnBorderMode]]]) -> retval
Link to this function

createScharrFilter(srcType, dstType, dx, dy, opts)

View Source
@spec createScharrFilter(
  integer(),
  integer(),
  integer(),
  integer(),
  [columnBorderMode: term(), rowBorderMode: term(), scale: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a vertical or horizontal Scharr operator.

Positional Arguments
  • srcType: integer().

    Source image type.

  • dstType: integer().

    Destination array type.

  • dx: integer().

    Order of the derivative in x.

  • dy: integer().

    Order of the derivative in y.

Keyword Arguments
  • scale: double.

    Optional scale factor for the computed derivative values. By default, no scaling is applied. See getDerivKernels for details.

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

@sa Scharr

Python prototype (for reference only):

createScharrFilter(srcType, dstType, dx, dy[, scale[, rowBorderMode[, columnBorderMode]]]) -> retval
Link to this function

createSeparableLinearFilter(named_args)

View Source
@spec createSeparableLinearFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createSeparableLinearFilter(srcType, dstType, rowKernel, columnKernel)

View Source
@spec createSeparableLinearFilter(
  integer(),
  integer(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in()
) :: Evision.CUDA.Filter.t() | {:error, String.t()}
@spec createSeparableLinearFilter(
  integer(),
  integer(),
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t()
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Variant 1:

Creates a separable linear filter.

Positional Arguments
  • srcType: integer().

    Source array type.

  • dstType: integer().

    Destination array type.

  • rowKernel: Evision.Mat.

    Horizontal filter coefficients. Supports kernels with size <= 32.

  • columnKernel: Evision.Mat.

    Vertical filter coefficients. Supports kernels with size <= 32.

Keyword Arguments
  • anchor: Point.

    Anchor position within the kernel. Negative values mean that anchor is positioned at the aperture center.

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

@sa sepFilter2D

Python prototype (for reference only):

createSeparableLinearFilter(srcType, dstType, rowKernel, columnKernel[, anchor[, rowBorderMode[, columnBorderMode]]]) -> retval

Variant 2:

Creates a separable linear filter.

Positional Arguments
  • srcType: integer().

    Source array type.

  • dstType: integer().

    Destination array type.

  • rowKernel: Evision.CUDA.GpuMat.t().

    Horizontal filter coefficients. Supports kernels with size <= 32.

  • columnKernel: Evision.CUDA.GpuMat.t().

    Vertical filter coefficients. Supports kernels with size <= 32.

Keyword Arguments
  • anchor: Point.

    Anchor position within the kernel. Negative values mean that anchor is positioned at the aperture center.

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

@sa sepFilter2D

Python prototype (for reference only):

createSeparableLinearFilter(srcType, dstType, rowKernel, columnKernel[, anchor[, rowBorderMode[, columnBorderMode]]]) -> retval
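A sketch of Variant 1 using the same 1D kernel for both passes. It assumes `Evision.Mat.from_nx/1`, `Evision.Constant.cv_32FC1/0`, and `Evision.CUDA.Filter.apply/2`, with `gpu_f32` a CV_32FC1 `Evision.CUDA.GpuMat`:

```elixir
# 1x5 binomial kernel [1 4 6 4 1] / 16, used for both row and column passes.
k =
  Nx.tensor([[1.0, 4.0, 6.0, 4.0, 1.0]], type: :f32)
  |> Nx.divide(16.0)
  |> Evision.Mat.from_nx()

blur =
  Evision.CUDA.createSeparableLinearFilter(
    Evision.Constant.cv_32FC1(),
    Evision.Constant.cv_32FC1(),
    k,
    k
  )

smoothed = Evision.CUDA.Filter.apply(blur, gpu_f32)
```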
Link to this function

createSeparableLinearFilter(srcType, dstType, rowKernel, columnKernel, opts)

View Source
@spec createSeparableLinearFilter(
  integer(),
  integer(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [anchor: term(), columnBorderMode: term(), rowBorderMode: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}
@spec createSeparableLinearFilter(
  integer(),
  integer(),
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [anchor: term(), columnBorderMode: term(), rowBorderMode: term()] | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Variant 1:

Creates a separable linear filter.

Positional Arguments
  • srcType: integer().

    Source array type.

  • dstType: integer().

    Destination array type.

  • rowKernel: Evision.Mat.

    Horizontal filter coefficients. Supports kernels with size <= 32.

  • columnKernel: Evision.Mat.

    Vertical filter coefficients. Supports kernels with size <= 32.

Keyword Arguments
  • anchor: Point.

    Anchor position within the kernel. Negative values mean that anchor is positioned at the aperture center.

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

@sa sepFilter2D

Python prototype (for reference only):

createSeparableLinearFilter(srcType, dstType, rowKernel, columnKernel[, anchor[, rowBorderMode[, columnBorderMode]]]) -> retval

Variant 2:

Creates a separable linear filter.

Positional Arguments
  • srcType: integer().

    Source array type.

  • dstType: integer().

    Destination array type.

  • rowKernel: Evision.CUDA.GpuMat.t().

    Horizontal filter coefficients. Supports kernels with size <= 32.

  • columnKernel: Evision.CUDA.GpuMat.t().

    Vertical filter coefficients. Supports kernels with size <= 32.

Keyword Arguments
  • anchor: Point.

    Anchor position within the kernel. Negative values mean that anchor is positioned at the aperture center.

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

@sa sepFilter2D

Python prototype (for reference only):

createSeparableLinearFilter(srcType, dstType, rowKernel, columnKernel[, anchor[, rowBorderMode[, columnBorderMode]]]) -> retval
Link to this function

createSobelFilter(named_args)

View Source
@spec createSobelFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createSobelFilter(srcType, dstType, dx, dy)

View Source
@spec createSobelFilter(integer(), integer(), integer(), integer()) ::
  Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a Sobel operator.

Positional Arguments
  • srcType: integer().

    Source image type.

  • dstType: integer().

    Destination array type.

  • dx: integer().

    Derivative order with respect to x.

  • dy: integer().

    Derivative order with respect to y.

Keyword Arguments
  • ksize: integer().

    Size of the extended Sobel kernel. Possible values are 1, 3, 5 or 7.

  • scale: double.

    Optional scale factor for the computed derivative values. By default, no scaling is applied. For details, see getDerivKernels .

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

@sa Sobel

Python prototype (for reference only):

createSobelFilter(srcType, dstType, dx, dy[, ksize[, scale[, rowBorderMode[, columnBorderMode]]]]) -> retval
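A sketch computing the x-gradient only. It assumes `Evision.Constant.cv_8UC1/0` and `cv_16SC1/0` exist and that the filter is applied via `Evision.CUDA.Filter.apply/2` to a CV_8UC1 `gpu_img`:

```elixir
# First derivative in x only; CV_16SC1 output avoids clipping negative values.
sobel_x =
  Evision.CUDA.createSobelFilter(
    Evision.Constant.cv_8UC1(),
    Evision.Constant.cv_16SC1(),
    1,
    0,
    ksize: 3
  )

gx = Evision.CUDA.Filter.apply(sobel_x, gpu_img)
```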
Link to this function

createSobelFilter(srcType, dstType, dx, dy, opts)

View Source
@spec createSobelFilter(
  integer(),
  integer(),
  integer(),
  integer(),
  [
    columnBorderMode: term(),
    ksize: term(),
    rowBorderMode: term(),
    scale: term()
  ]
  | nil
) :: Evision.CUDA.Filter.t() | {:error, String.t()}

Creates a Sobel operator.

Positional Arguments
  • srcType: integer().

    Source image type.

  • dstType: integer().

    Destination array type.

  • dx: integer().

    Derivative order with respect to x.

  • dy: integer().

    Derivative order with respect to y.

Keyword Arguments
  • ksize: integer().

    Size of the extended Sobel kernel. Possible values are 1, 3, 5 or 7.

  • scale: double.

    Optional scale factor for the computed derivative values. By default, no scaling is applied. For details, see getDerivKernels.

  • rowBorderMode: integer().

    Pixel extrapolation method in the vertical direction. For details, see borderInterpolate.

  • columnBorderMode: integer().

    Pixel extrapolation method in the horizontal direction.

Return
  • retval: Filter

@sa Sobel

Python prototype (for reference only):

createSobelFilter(srcType, dstType, dx, dy[, ksize[, scale[, rowBorderMode[, columnBorderMode]]]]) -> retval
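A minimal usage sketch in Elixir (untested here; it assumes a CUDA-enabled OpenCV build, and the file name, the GpuMat upload step, and the chosen types are illustrative):

```elixir
# Build a first-order Sobel filter in x for 8-bit single-channel input.
# A CV_16SC1 destination avoids overflow of the derivative values.
filter =
  Evision.CUDA.createSobelFilter(
    Evision.Constant.cv_8UC1(),
    Evision.Constant.cv_16SC1(),
    1, 0,
    ksize: 3
  )

gray = Evision.imread("input.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
gpu  = Evision.CUDA.GpuMat.gpuMat(gray)        # upload to the GPU
dx   = Evision.CUDA.Filter.apply(filter, gpu)  # GpuMat holding the x-derivative
```

Creating the filter once and applying it to many frames is the intended usage pattern; construction is comparatively expensive, while `apply` is cheap.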
createStereoBeliefPropagation()
@spec createStereoBeliefPropagation() ::
  Evision.CUDA.StereoBeliefPropagation.t() | {:error, String.t()}

Creates StereoBeliefPropagation object.

Keyword Arguments
  • ndisp: integer().

    Number of disparities.

  • iters: integer().

    Number of BP iterations on each level.

  • levels: integer().

    Number of levels.

  • msg_type: integer().

    Type for messages. CV_16SC1 and CV_32FC1 types are supported.

Return
  • retval: Evision.CUDA.StereoBeliefPropagation.t()

Python prototype (for reference only):

createStereoBeliefPropagation([, ndisp[, iters[, levels[, msg_type]]]]) -> retval
createStereoBeliefPropagation(named_args)
@spec createStereoBeliefPropagation(Keyword.t()) :: any() | {:error, String.t()}
@spec createStereoBeliefPropagation(
  [iters: term(), levels: term(), msg_type: term(), ndisp: term()]
  | nil
) :: Evision.CUDA.StereoBeliefPropagation.t() | {:error, String.t()}

Creates StereoBeliefPropagation object.

Keyword Arguments
  • ndisp: integer().

    Number of disparities.

  • iters: integer().

    Number of BP iterations on each level.

  • levels: integer().

    Number of levels.

  • msg_type: integer().

    Type for messages. CV_16SC1 and CV_32FC1 types are supported.

Return
  • retval: Evision.CUDA.StereoBeliefPropagation.t()

Python prototype (for reference only):

createStereoBeliefPropagation([, ndisp[, iters[, levels[, msg_type]]]]) -> retval
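A sketch with the keyword arguments spelled out (values and file names are illustrative, and `compute/3` is assumed from the inherited StereoMatcher interface):

```elixir
left  = Evision.imread("left.png",  flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
right = Evision.imread("right.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())

# CV_32FC1 messages trade memory for precision; CV_16SC1 is the other option.
bp =
  Evision.CUDA.createStereoBeliefPropagation(
    ndisp: 64, iters: 5, levels: 5,
    msg_type: Evision.Constant.cv_32FC1()
  )

disp = Evision.CUDA.StereoBeliefPropagation.compute(bp, left, right)
```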
createStereoBM()
@spec createStereoBM() :: Evision.CUDA.StereoBM.t() | {:error, String.t()}

Creates StereoBM object.

Keyword Arguments
  • numDisparities: integer().

    The disparity search range. For each pixel, the algorithm finds the best disparity from 0 (the default minimum disparity) up to numDisparities. The search range can then be shifted by changing the minimum disparity.

  • blockSize: integer().

    The linear size of the blocks compared by the algorithm. The size should be odd (as the block is centered at the current pixel). A larger block size implies a smoother, though less accurate, disparity map; a smaller block size gives a more detailed disparity map, but there is a higher chance of the algorithm finding a wrong correspondence.

Return
  • retval: Evision.CUDA.StereoBM.t()

Python prototype (for reference only):

createStereoBM([, numDisparities[, blockSize]]) -> retval
createStereoBM(named_args)
@spec createStereoBM(Keyword.t()) :: any() | {:error, String.t()}
@spec createStereoBM([blockSize: term(), numDisparities: term()] | nil) ::
  Evision.CUDA.StereoBM.t() | {:error, String.t()}

Creates StereoBM object.

Keyword Arguments
  • numDisparities: integer().

    The disparity search range. For each pixel, the algorithm finds the best disparity from 0 (the default minimum disparity) up to numDisparities. The search range can then be shifted by changing the minimum disparity.

  • blockSize: integer().

    The linear size of the blocks compared by the algorithm. The size should be odd (as the block is centered at the current pixel). A larger block size implies a smoother, though less accurate, disparity map; a smaller block size gives a more detailed disparity map, but there is a higher chance of the algorithm finding a wrong correspondence.

Return
  • retval: Evision.CUDA.StereoBM.t()

Python prototype (for reference only):

createStereoBM([, numDisparities[, blockSize]]) -> retval
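Block matching is the fastest of the CUDA stereo matchers; a sketch (file names and parameter values are illustrative, and `compute/3` is assumed from the StereoMatcher interface):

```elixir
left  = Evision.imread("left.png",  flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
right = Evision.imread("right.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())

# blockSize must be odd, since the block is centered at the current pixel
bm   = Evision.CUDA.createStereoBM(numDisparities: 64, blockSize: 19)
disp = Evision.CUDA.StereoBM.compute(bm, left, right)
```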
createStereoConstantSpaceBP()
@spec createStereoConstantSpaceBP() ::
  Evision.CUDA.StereoConstantSpaceBP.t() | {:error, String.t()}

Creates StereoConstantSpaceBP object.

Keyword Arguments
  • ndisp: integer().

    Number of disparities.

  • iters: integer().

    Number of BP iterations on each level.

  • levels: integer().

    Number of levels.

  • nr_plane: integer().

    Number of disparity levels on the first level.

  • msg_type: integer().

    Type for messages. CV_16SC1 and CV_32FC1 types are supported.

Return
  • retval: Evision.CUDA.StereoConstantSpaceBP.t()

Python prototype (for reference only):

createStereoConstantSpaceBP([, ndisp[, iters[, levels[, nr_plane[, msg_type]]]]]) -> retval
createStereoConstantSpaceBP(named_args)
@spec createStereoConstantSpaceBP(Keyword.t()) :: any() | {:error, String.t()}
@spec createStereoConstantSpaceBP(
  [
    iters: term(),
    levels: term(),
    msg_type: term(),
    ndisp: term(),
    nr_plane: term()
  ]
  | nil
) :: Evision.CUDA.StereoConstantSpaceBP.t() | {:error, String.t()}

Creates StereoConstantSpaceBP object.

Keyword Arguments
  • ndisp: integer().

    Number of disparities.

  • iters: integer().

    Number of BP iterations on each level.

  • levels: integer().

    Number of levels.

  • nr_plane: integer().

    Number of disparity levels on the first level.

  • msg_type: integer().

    Type for messages. CV_16SC1 and CV_32FC1 types are supported.

Return
  • retval: Evision.CUDA.StereoConstantSpaceBP.t()

Python prototype (for reference only):

createStereoConstantSpaceBP([, ndisp[, iters[, levels[, nr_plane[, msg_type]]]]]) -> retval
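Compared to plain belief propagation, the constant-space variant bounds memory by limiting the disparity planes kept per level via nr_plane. A constructor sketch (the values are illustrative, not recommendations):

```elixir
# nr_plane caps the number of disparity levels retained on the first level,
# which is what keeps the memory footprint constant in ndisp.
csbp =
  Evision.CUDA.createStereoConstantSpaceBP(
    ndisp: 128, iters: 8, levels: 4, nr_plane: 4
  )
```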
createStereoSGM()
@spec createStereoSGM() :: Evision.CUDA.StereoSGM.t() | {:error, String.t()}

Creates StereoSGM object.

Keyword Arguments
  • minDisparity: integer().

    Minimum possible disparity value. Normally, it is zero but sometimes rectification algorithms can shift images, so this parameter needs to be adjusted accordingly.

  • numDisparities: integer().

    Maximum disparity minus minimum disparity. The value must be 64, 128 or 256.

  • p1: integer().

    The first parameter controlling the disparity smoothness. This parameter is used for the case of slanted surfaces (not fronto-parallel).

  • p2: integer().

    The second parameter controlling the disparity smoothness. This parameter is used for "solving" the depth discontinuities problem.

  • uniquenessRatio: integer().

    Margin in percentage by which the best (minimum) computed cost function value should "win" the second best value to consider the found match correct. Normally, a value within the 5-15 range is good enough.

  • mode: integer().

    Set it to StereoSGM::MODE_HH to run the full-scale two-pass dynamic programming algorithm. It will consume O(W*H*numDisparities) bytes. By default, it is set to StereoSGM::MODE_HH4.

Return
  • retval: Evision.CUDA.StereoSGM.t()

Python prototype (for reference only):

createStereoSGM([, minDisparity[, numDisparities[, P1[, P2[, uniquenessRatio[, mode]]]]]]) -> retval
createStereoSGM(named_args)
@spec createStereoSGM(Keyword.t()) :: any() | {:error, String.t()}
@spec createStereoSGM(
  [
    minDisparity: term(),
    mode: term(),
    numDisparities: term(),
    p1: term(),
    p2: term(),
    uniquenessRatio: term()
  ]
  | nil
) :: Evision.CUDA.StereoSGM.t() | {:error, String.t()}

Creates StereoSGM object.

Keyword Arguments
  • minDisparity: integer().

    Minimum possible disparity value. Normally, it is zero but sometimes rectification algorithms can shift images, so this parameter needs to be adjusted accordingly.

  • numDisparities: integer().

    Maximum disparity minus minimum disparity. The value must be 64, 128 or 256.

  • p1: integer().

    The first parameter controlling the disparity smoothness. This parameter is used for the case of slanted surfaces (not fronto-parallel).

  • p2: integer().

    The second parameter controlling the disparity smoothness. This parameter is used for "solving" the depth discontinuities problem.

  • uniquenessRatio: integer().

    Margin in percentage by which the best (minimum) computed cost function value should "win" the second best value to consider the found match correct. Normally, a value within the 5-15 range is good enough.

  • mode: integer().

    Set it to StereoSGM::MODE_HH to run the full-scale two-pass dynamic programming algorithm. It will consume O(W*H*numDisparities) bytes. By default, it is set to StereoSGM::MODE_HH4.

Return
  • retval: Evision.CUDA.StereoSGM.t()

Python prototype (for reference only):

createStereoSGM([, minDisparity[, numDisparities[, P1[, P2[, uniquenessRatio[, mode]]]]]]) -> retval
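A constructor sketch (parameter values are illustrative; the default MODE_HH4 is kept, since MODE_HH trades O(W*H*numDisparities) bytes of memory for the full two-pass algorithm):

```elixir
# numDisparities must be 64, 128 or 256 for this CUDA implementation;
# p2 > p1 penalizes large disparity jumps more than small ones.
sgm =
  Evision.CUDA.createStereoSGM(
    minDisparity: 0,
    numDisparities: 128,
    p1: 10,
    p2: 120,
    uniquenessRatio: 5
  )
```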
createTemplateMatching(named_args)
@spec createTemplateMatching(Keyword.t()) :: any() | {:error, String.t()}
createTemplateMatching(srcType, method)
@spec createTemplateMatching(integer(), integer()) ::
  Evision.CUDA.TemplateMatching.t() | {:error, String.t()}

Creates implementation for cuda::TemplateMatching .

Positional Arguments
  • srcType: integer().

    Input source type. CV_32F and CV_8U depth images (1..4 channels) are supported for now.

  • method: integer().

    Specifies the way to compare the template with the image.

Keyword Arguments
  • user_block_size: Size.

    You can use the field user_block_size to set a specific block size. If you leave it at its default value Size(0,0), the block size is estimated automatically (optimized for speed). By varying user_block_size you can reduce memory requirements at the cost of speed.

Return
  • retval: TemplateMatching

The following methods are supported for the CV_8U depth images for now:

  • CV_TM_SQDIFF
  • CV_TM_SQDIFF_NORMED
  • CV_TM_CCORR
  • CV_TM_CCORR_NORMED
  • CV_TM_CCOEFF
  • CV_TM_CCOEFF_NORMED

The following methods are supported for the CV_32F images for now:

  • CV_TM_SQDIFF
  • CV_TM_CCORR

@sa matchTemplate

Python prototype (for reference only):

createTemplateMatching(srcType, method[, user_block_size]) -> retval
createTemplateMatching(srcType, method, opts)
@spec createTemplateMatching(integer(), integer(), [{:user_block_size, term()}] | nil) ::
  Evision.CUDA.TemplateMatching.t() | {:error, String.t()}

Creates implementation for cuda::TemplateMatching .

Positional Arguments
  • srcType: integer().

    Input source type. CV_32F and CV_8U depth images (1..4 channels) are supported for now.

  • method: integer().

    Specifies the way to compare the template with the image.

Keyword Arguments
  • user_block_size: Size.

    You can use the field user_block_size to set a specific block size. If you leave it at its default value Size(0,0), the block size is estimated automatically (optimized for speed). By varying user_block_size you can reduce memory requirements at the cost of speed.

Return
  • retval: TemplateMatching

The following methods are supported for the CV_8U depth images for now:

  • CV_TM_SQDIFF
  • CV_TM_SQDIFF_NORMED
  • CV_TM_CCORR
  • CV_TM_CCORR_NORMED
  • CV_TM_CCOEFF
  • CV_TM_CCOEFF_NORMED

The following methods are supported for the CV_32F images for now:

  • CV_TM_SQDIFF
  • CV_TM_CCORR

@sa matchTemplate

Python prototype (for reference only):

createTemplateMatching(srcType, method[, user_block_size]) -> retval
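A sketch of matching a template against a scene (file names are illustrative; `match/3` on the returned object is assumed from the cuda::TemplateMatching interface):

```elixir
image = Evision.imread("scene.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
templ = Evision.imread("patch.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())

# CV_TM_CCORR_NORMED is among the methods supported for CV_8U depth inputs
matcher =
  Evision.CUDA.createTemplateMatching(
    Evision.Constant.cv_8UC1(),
    Evision.Constant.cv_TM_CCORR_NORMED()
  )

# result is a single-channel response map; its maximum locates the best match
result = Evision.CUDA.TemplateMatching.match(matcher, image, templ)
```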
@spec cvtColor(Keyword.t()) :: any() | {:error, String.t()}
@spec cvtColor(Evision.Mat.maybe_mat_in(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec cvtColor(Evision.CUDA.GpuMat.t(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Converts an image from one color space to another.

Positional Arguments
  • src: Evision.Mat.

    Source image with CV_8U , CV_16U , or CV_32F depth and 1, 3, or 4 channels.

  • code: integer().

    Color space conversion code. For details, see cvtColor .

Keyword Arguments
  • dcn: integer().

    Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from src and the code .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image.

3-channel color spaces (like HSV, XYZ, and so on) can be stored in a 4-channel image for better performance. @sa cvtColor

Python prototype (for reference only):

cvtColor(src, code[, dst[, dcn[, stream]]]) -> dst

Variant 2:

Converts an image from one color space to another.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image with CV_8U , CV_16U , or CV_32F depth and 1, 3, or 4 channels.

  • code: integer().

    Color space conversion code. For details, see cvtColor .

Keyword Arguments
  • dcn: integer().

    Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from src and the code .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

3-channel color spaces (like HSV, XYZ, and so on) can be stored in a 4-channel image for better performance. @sa cvtColor

Python prototype (for reference only):

cvtColor(src, code[, dst[, dcn[, stream]]]) -> dst
Link to this function

cvtColor(src, code, opts)

View Source
@spec cvtColor(
  Evision.Mat.maybe_mat_in(),
  integer(),
  [dcn: term(), stream: term()] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec cvtColor(
  Evision.CUDA.GpuMat.t(),
  integer(),
  [dcn: term(), stream: term()] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Converts an image from one color space to another.

Positional Arguments
  • src: Evision.Mat.

    Source image with CV_8U , CV_16U , or CV_32F depth and 1, 3, or 4 channels.

  • code: integer().

    Color space conversion code. For details, see cvtColor .

Keyword Arguments
  • dcn: integer().

    Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from src and the code .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image.

3-channel color spaces (like HSV, XYZ, and so on) can be stored in a 4-channel image for better performance. @sa cvtColor

Python prototype (for reference only):

cvtColor(src, code[, dst[, dcn[, stream]]]) -> dst

Variant 2:

Converts an image from one color space to another.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image with CV_8U , CV_16U , or CV_32F depth and 1, 3, or 4 channels.

  • code: integer().

    Color space conversion code. For details, see cvtColor .

Keyword Arguments
  • dcn: integer().

    Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from src and the code .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

3-channel color spaces (like HSV, XYZ, and so on) can be stored in a 4-channel image for better performance. @sa cvtColor

Python prototype (for reference only):

cvtColor(src, code[, dst[, dcn[, stream]]]) -> dst
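A conversion sketch (the file name and explicit upload are illustrative; per Variant 1 above, a plain Evision.Mat can also be passed directly):

```elixir
img  = Evision.imread("photo.jpg")           # BGR by OpenCV convention, CV_8UC3
gpu  = Evision.CUDA.GpuMat.gpuMat(img)       # upload to the GPU
gray = Evision.CUDA.cvtColor(gpu, Evision.Constant.cv_COLOR_BGR2GRAY())
```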
@spec demosaicing(Keyword.t()) :: any() | {:error, String.t()}
@spec demosaicing(Evision.Mat.maybe_mat_in(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec demosaicing(Evision.CUDA.GpuMat.t(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Converts an image from Bayer pattern to RGB or grayscale.

Positional Arguments
  • src: Evision.Mat.

    Source image (8-bit or 16-bit single channel).

  • code: integer().

    Color space conversion code (see the description below).

Keyword Arguments
  • dcn: integer().

    Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from src and the code .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image.

The function can do the following transformations:

  • Demosaicing using bilinear interpolation
  • COLOR_BayerBG2GRAY , COLOR_BayerGB2GRAY , COLOR_BayerRG2GRAY , COLOR_BayerGR2GRAY
  • COLOR_BayerBG2BGR , COLOR_BayerGB2BGR , COLOR_BayerRG2BGR , COLOR_BayerGR2BGR
  • Demosaicing using Malvar-He-Cutler algorithm (@cite MHT2011)
  • COLOR_BayerBG2GRAY_MHT , COLOR_BayerGB2GRAY_MHT , COLOR_BayerRG2GRAY_MHT , COLOR_BayerGR2GRAY_MHT
  • COLOR_BayerBG2BGR_MHT , COLOR_BayerGB2BGR_MHT , COLOR_BayerRG2BGR_MHT , COLOR_BayerGR2BGR_MHT @sa cvtColor

Python prototype (for reference only):

demosaicing(src, code[, dst[, dcn[, stream]]]) -> dst

Variant 2:

Converts an image from Bayer pattern to RGB or grayscale.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image (8-bit or 16-bit single channel).

  • code: integer().

    Color space conversion code (see the description below).

Keyword Arguments
  • dcn: integer().

    Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from src and the code .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

The function can do the following transformations:

  • Demosaicing using bilinear interpolation
  • COLOR_BayerBG2GRAY , COLOR_BayerGB2GRAY , COLOR_BayerRG2GRAY , COLOR_BayerGR2GRAY
  • COLOR_BayerBG2BGR , COLOR_BayerGB2BGR , COLOR_BayerRG2BGR , COLOR_BayerGR2BGR
  • Demosaicing using Malvar-He-Cutler algorithm (@cite MHT2011)
  • COLOR_BayerBG2GRAY_MHT , COLOR_BayerGB2GRAY_MHT , COLOR_BayerRG2GRAY_MHT , COLOR_BayerGR2GRAY_MHT
  • COLOR_BayerBG2BGR_MHT , COLOR_BayerGB2BGR_MHT , COLOR_BayerRG2BGR_MHT , COLOR_BayerGR2BGR_MHT @sa cvtColor

Python prototype (for reference only):

demosaicing(src, code[, dst[, dcn[, stream]]]) -> dst
Link to this function

demosaicing(src, code, opts)

View Source
@spec demosaicing(
  Evision.Mat.maybe_mat_in(),
  integer(),
  [dcn: term(), stream: term()] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec demosaicing(
  Evision.CUDA.GpuMat.t(),
  integer(),
  [dcn: term(), stream: term()] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Converts an image from Bayer pattern to RGB or grayscale.

Positional Arguments
  • src: Evision.Mat.

    Source image (8-bit or 16-bit single channel).

  • code: integer().

    Color space conversion code (see the description below).

Keyword Arguments
  • dcn: integer().

    Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from src and the code .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image.

The function can do the following transformations:

  • Demosaicing using bilinear interpolation
  • COLOR_BayerBG2GRAY , COLOR_BayerGB2GRAY , COLOR_BayerRG2GRAY , COLOR_BayerGR2GRAY
  • COLOR_BayerBG2BGR , COLOR_BayerGB2BGR , COLOR_BayerRG2BGR , COLOR_BayerGR2BGR
  • Demosaicing using Malvar-He-Cutler algorithm (@cite MHT2011)
  • COLOR_BayerBG2GRAY_MHT , COLOR_BayerGB2GRAY_MHT , COLOR_BayerRG2GRAY_MHT , COLOR_BayerGR2GRAY_MHT
  • COLOR_BayerBG2BGR_MHT , COLOR_BayerGB2BGR_MHT , COLOR_BayerRG2BGR_MHT , COLOR_BayerGR2BGR_MHT @sa cvtColor

Python prototype (for reference only):

demosaicing(src, code[, dst[, dcn[, stream]]]) -> dst

Variant 2:

Converts an image from Bayer pattern to RGB or grayscale.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image (8-bit or 16-bit single channel).

  • code: integer().

    Color space conversion code (see the description below).

Keyword Arguments
  • dcn: integer().

    Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from src and the code .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

The function can do the following transformations:

  • Demosaicing using bilinear interpolation
  • COLOR_BayerBG2GRAY , COLOR_BayerGB2GRAY , COLOR_BayerRG2GRAY , COLOR_BayerGR2GRAY
  • COLOR_BayerBG2BGR , COLOR_BayerGB2BGR , COLOR_BayerRG2BGR , COLOR_BayerGR2BGR
  • Demosaicing using Malvar-He-Cutler algorithm (@cite MHT2011)
  • COLOR_BayerBG2GRAY_MHT , COLOR_BayerGB2GRAY_MHT , COLOR_BayerRG2GRAY_MHT , COLOR_BayerGR2GRAY_MHT
  • COLOR_BayerBG2BGR_MHT , COLOR_BayerGB2BGR_MHT , COLOR_BayerRG2BGR_MHT , COLOR_BayerGR2BGR_MHT @sa cvtColor

Python prototype (for reference only):

demosaicing(src, code[, dst[, dcn[, stream]]]) -> dst
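A demosaicing sketch (the file name is illustrative; the input must be a single-channel 8- or 16-bit raw Bayer frame, and the conversion code must match the sensor's Bayer layout):

```elixir
# Read the raw frame as single-channel; bilinear demosaicing to BGR.
# For the Malvar-He-Cutler variant, the corresponding *_MHT code would be used.
raw = Evision.imread("bayer_frame.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
bgr = Evision.CUDA.demosaicing(raw, Evision.Constant.cv_COLOR_BayerBG2BGR())
```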
@spec dft(Keyword.t()) :: any() | {:error, String.t()}
@spec dft(
  Evision.Mat.maybe_mat_in(),
  {number(), number()}
) :: Evision.Mat.t() | {:error, String.t()}
@spec dft(
  Evision.CUDA.GpuMat.t(),
  {number(), number()}
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a forward or inverse discrete Fourier transform (1D or 2D) of the floating point matrix.

Positional Arguments
  • src: Evision.Mat.

    Source matrix (real or complex).

  • dft_size: Size.

    Size of a discrete Fourier transform.

Keyword Arguments
  • flags: integer().

    Optional flags:

    • DFT_ROWS transforms each individual row of the source matrix.
    • DFT_SCALE scales the result: divide it by the number of elements in the transform (obtained from dft_size ).
    • DFT_INVERSE inverts DFT. Use for complex-complex cases (real-complex and complex-real cases are always forward and inverse, respectively).
    • DFT_COMPLEX_INPUT specifies that the input is complex, with 2 channels.
    • DFT_REAL_OUTPUT specifies the output as real. The source matrix is the result of real-complex transform, so the destination matrix must be real.
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix (real or complex).

Use to handle real matrices (CV_32FC1) and complex matrices in the interleaved format (CV_32FC2). The source matrix should be continuous; otherwise, reallocation and data copying are performed. The function chooses an operation mode depending on the flags, size, and channel count of the source matrix:

  • If the source matrix is complex and the output is not specified as real, the destination matrix is complex and has the dft_size size and CV_32FC2 type. The destination matrix contains a full result of the DFT (forward or inverse).

  • If the source matrix is complex and the output is specified as real, the function assumes that its input is the result of the forward transform (see the next item). The destination matrix has the dft_size size and CV_32FC1 type. It contains the result of the inverse DFT.

  • If the source matrix is real (its type is CV_32FC1 ), forward DFT is performed. The result of the DFT is packed into complex ( CV_32FC2 ) matrix. So, the width of the destination matrix is dft_size.width / 2 + 1 . But if the source is a single column, the height is reduced instead of the width.

@sa dft

Python prototype (for reference only):

dft(src, dft_size[, dst[, flags[, stream]]]) -> dst

Variant 2:

Performs a forward or inverse discrete Fourier transform (1D or 2D) of the floating point matrix.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix (real or complex).

  • dft_size: Size.

    Size of a discrete Fourier transform.

Keyword Arguments
  • flags: integer().

    Optional flags:

    • DFT_ROWS transforms each individual row of the source matrix.
    • DFT_SCALE scales the result: divide it by the number of elements in the transform (obtained from dft_size ).
    • DFT_INVERSE inverts DFT. Use for complex-complex cases (real-complex and complex-real cases are always forward and inverse, respectively).
    • DFT_COMPLEX_INPUT specifies that the input is complex, with 2 channels.
    • DFT_REAL_OUTPUT specifies the output as real. The source matrix is the result of real-complex transform, so the destination matrix must be real.
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix (real or complex).

Use to handle real matrices (CV_32FC1) and complex matrices in the interleaved format (CV_32FC2). The source matrix should be continuous; otherwise, reallocation and data copying are performed. The function chooses an operation mode depending on the flags, size, and channel count of the source matrix:

  • If the source matrix is complex and the output is not specified as real, the destination matrix is complex and has the dft_size size and CV_32FC2 type. The destination matrix contains a full result of the DFT (forward or inverse).

  • If the source matrix is complex and the output is specified as real, the function assumes that its input is the result of the forward transform (see the next item). The destination matrix has the dft_size size and CV_32FC1 type. It contains the result of the inverse DFT.

  • If the source matrix is real (its type is CV_32FC1 ), forward DFT is performed. The result of the DFT is packed into complex ( CV_32FC2 ) matrix. So, the width of the destination matrix is dft_size.width / 2 + 1 . But if the source is a single column, the height is reduced instead of the width.

@sa dft

Python prototype (for reference only):

dft(src, dft_size[, dst[, flags[, stream]]]) -> dst
dft(src, dft_size, opts)
@spec dft(
  Evision.Mat.maybe_mat_in(),
  {number(), number()},
  [flags: term(), stream: term()] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec dft(
  Evision.CUDA.GpuMat.t(),
  {number(), number()},
  [flags: term(), stream: term()] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a forward or inverse discrete Fourier transform (1D or 2D) of the floating point matrix.

Positional Arguments
  • src: Evision.Mat.

    Source matrix (real or complex).

  • dft_size: Size.

    Size of a discrete Fourier transform.

Keyword Arguments
  • flags: integer().

    Optional flags:

    • DFT_ROWS transforms each individual row of the source matrix.
    • DFT_SCALE scales the result: divide it by the number of elements in the transform (obtained from dft_size ).
    • DFT_INVERSE inverts DFT. Use for complex-complex cases (real-complex and complex-real cases are always forward and inverse, respectively).
    • DFT_COMPLEX_INPUT specifies that the input is complex, with 2 channels.
    • DFT_REAL_OUTPUT specifies the output as real. The source matrix is the result of real-complex transform, so the destination matrix must be real.
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix (real or complex).

Use to handle real matrices (CV_32FC1) and complex matrices in the interleaved format (CV_32FC2). The source matrix should be continuous; otherwise, reallocation and data copying are performed. The function chooses an operation mode depending on the flags, size, and channel count of the source matrix:

  • If the source matrix is complex and the output is not specified as real, the destination matrix is complex and has the dft_size size and CV_32FC2 type. The destination matrix contains a full result of the DFT (forward or inverse).

  • If the source matrix is complex and the output is specified as real, the function assumes that its input is the result of the forward transform (see the next item). The destination matrix has the dft_size size and CV_32FC1 type. It contains the result of the inverse DFT.

  • If the source matrix is real (its type is CV_32FC1 ), forward DFT is performed. The result of the DFT is packed into complex ( CV_32FC2 ) matrix. So, the width of the destination matrix is dft_size.width / 2 + 1 . But if the source is a single column, the height is reduced instead of the width.

@sa dft

Python prototype (for reference only):

dft(src, dft_size[, dst[, flags[, stream]]]) -> dst

Variant 2:

Performs a forward or inverse discrete Fourier transform (1D or 2D) of the floating point matrix.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix (real or complex).

  • dft_size: Size.

    Size of a discrete Fourier transform.

Keyword Arguments
  • flags: integer().

    Optional flags:

    • DFT_ROWS transforms each individual row of the source matrix.
    • DFT_SCALE scales the result: divide it by the number of elements in the transform (obtained from dft_size ).
    • DFT_INVERSE inverts DFT. Use for complex-complex cases (real-complex and complex-real cases are always forward and inverse, respectively).
    • DFT_COMPLEX_INPUT specifies that the input is complex, with 2 channels.
    • DFT_REAL_OUTPUT specifies the output as real. The source matrix is the result of real-complex transform, so the destination matrix must be real.
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix (real or complex).

Use to handle real matrices (CV_32FC1) and complex matrices in the interleaved format (CV_32FC2). The source matrix should be continuous; otherwise, reallocation and data copying are performed. The function chooses an operation mode depending on the flags, size, and channel count of the source matrix:

  • If the source matrix is complex and the output is not specified as real, the destination matrix is complex and has the dft_size size and CV_32FC2 type. The destination matrix contains a full result of the DFT (forward or inverse).

  • If the source matrix is complex and the output is specified as real, the function assumes that its input is the result of the forward transform (see the next item). The destination matrix has the dft_size size and CV_32FC1 type. It contains the result of the inverse DFT.

  • If the source matrix is real (its type is CV_32FC1 ), forward DFT is performed. The result of the DFT is packed into complex ( CV_32FC2 ) matrix. So, the width of the destination matrix is dft_size.width / 2 + 1 . But if the source is a single column, the height is reduced instead of the width.

@sa dft

Python prototype (for reference only):

dft(src, dft_size[, dst[, flags[, stream]]]) -> dst
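A forward-transform sketch (untested; `Evision.Mat.zeros/2` with an `{rows, cols}` shape tuple is assumed here as a stand-in for real image data):

```elixir
# A real CV_32FC1 input yields a packed complex (CV_32FC2) spectrum whose
# width is dft_size.width / 2 + 1, as described in the notes above.
src      = Evision.Mat.zeros({256, 256}, :f32)
spectrum = Evision.CUDA.dft(src, {256, 256})
```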
@spec divide(Keyword.t()) :: any() | {:error, String.t()}

Variant 1:

Computes a matrix-matrix or matrix-scalar division.

Positional Arguments
  • src1: Evision.Mat.

    First source matrix or a scalar.

  • src2: Evision.Mat.

    Second source matrix or scalar.

Keyword Arguments
  • scale: double.

    Optional scale factor.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

This function, in contrast to divide, uses a round-down rounding mode. @sa divide

Python prototype (for reference only):

divide(src1, src2[, dst[, scale[, dtype[, stream]]]]) -> dst

Variant 2:

Computes a matrix-matrix or matrix-scalar division.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or a scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

Keyword Arguments
  • scale: double.

    Optional scale factor.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

This function, in contrast to divide, uses a round-down rounding mode. @sa divide

Python prototype (for reference only):

divide(src1, src2[, dst[, scale[, dtype[, stream]]]]) -> dst
divide(src1, src2, opts)
@spec divide(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [dtype: term(), scale: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec divide(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [dtype: term(), scale: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes a matrix-matrix or matrix-scalar division.

Positional Arguments
Keyword Arguments
  • scale: double.

    Optional scale factor.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

This function, in contrast to divide, uses a round-down rounding mode. @sa divide

Python prototype (for reference only):

divide(src1, src2[, dst[, scale[, dtype[, stream]]]]) -> dst

Variant 2:

Computes a matrix-matrix or matrix-scalar division.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or a scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

Keyword Arguments
  • scale: double.

    Optional scale factor.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

This function, in contrast to divide, uses a round-down rounding mode. @sa divide

Python prototype (for reference only):

divide(src1, src2[, dst[, scale[, dtype[, stream]]]]) -> dst
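A minimal Elixir sketch of the matrix-matrix variant (assumes a CUDA-capable GPU, a CUDA-enabled OpenCV build of `:evision`, and the optional `:nx` dependency for constructing test matrices — all hypothetical setup, not part of this doc):

```elixir
# Element-wise division of two single-channel f32 matrices on the GPU.
src1 = Evision.Mat.from_nx(Nx.tensor([[10.0, 20.0], [30.0, 40.0]], type: :f32))
src2 = Evision.Mat.from_nx(Nx.tensor([[2.0, 4.0], [5.0, 8.0]], type: :f32))

# scale and dtype are optional; here the result keeps the source depth.
dst = Evision.CUDA.divide(src1, src2, scale: 1.0)
```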
drawColorDisp(named_args)
@spec drawColorDisp(Keyword.t()) :: any() | {:error, String.t()}
drawColorDisp(src_disp, ndisp)
@spec drawColorDisp(Evision.Mat.maybe_mat_in(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec drawColorDisp(Evision.CUDA.GpuMat.t(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Colors a disparity image.

Positional Arguments
  • src_disp: Evision.Mat.

    Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image. If 16-bit signed format is used, the values are assumed to have no fractional bits.

  • ndisp: integer().

    Number of disparities.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst_disp: Evision.Mat.t().

    Output disparity image. It has the same size as src_disp. The type is CV_8UC4 in BGRA format (alpha = 255).

This function draws a colored disparity map by first converting disparity values from the [0..ndisp) interval to HSV color space (where different disparity values correspond to different hues) and then converting the pixels to RGB for visualization.

Python prototype (for reference only):

drawColorDisp(src_disp, ndisp[, dst_disp[, stream]]) -> dst_disp

Variant 2:

Colors a disparity image.

Positional Arguments
  • src_disp: Evision.CUDA.GpuMat.t().

    Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image. If 16-bit signed format is used, the values are assumed to have no fractional bits.

  • ndisp: integer().

    Number of disparities.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst_disp: Evision.CUDA.GpuMat.t().

    Output disparity image. It has the same size as src_disp. The type is CV_8UC4 in BGRA format (alpha = 255).

This function draws a colored disparity map by first converting disparity values from the [0..ndisp) interval to HSV color space (where different disparity values correspond to different hues) and then converting the pixels to RGB for visualization.

Python prototype (for reference only):

drawColorDisp(src_disp, ndisp[, dst_disp[, stream]]) -> dst_disp
drawColorDisp(src_disp, ndisp, opts)
@spec drawColorDisp(Evision.Mat.maybe_mat_in(), integer(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec drawColorDisp(Evision.CUDA.GpuMat.t(), integer(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Colors a disparity image.

Positional Arguments
  • src_disp: Evision.Mat.

    Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image. If 16-bit signed format is used, the values are assumed to have no fractional bits.

  • ndisp: integer().

    Number of disparities.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst_disp: Evision.Mat.t().

    Output disparity image. It has the same size as src_disp. The type is CV_8UC4 in BGRA format (alpha = 255).

This function draws a colored disparity map by first converting disparity values from the [0..ndisp) interval to HSV color space (where different disparity values correspond to different hues) and then converting the pixels to RGB for visualization.

Python prototype (for reference only):

drawColorDisp(src_disp, ndisp[, dst_disp[, stream]]) -> dst_disp

Variant 2:

Colors a disparity image.

Positional Arguments
  • src_disp: Evision.CUDA.GpuMat.t().

    Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image. If 16-bit signed format is used, the values are assumed to have no fractional bits.

  • ndisp: integer().

    Number of disparities.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst_disp: Evision.CUDA.GpuMat.t().

    Output disparity image. It has the same size as src_disp. The type is CV_8UC4 in BGRA format (alpha = 255).

This function draws a colored disparity map by first converting disparity values from the [0..ndisp) interval to HSV color space (where different disparity values correspond to different hues) and then converting the pixels to RGB for visualization.

Python prototype (for reference only):

drawColorDisp(src_disp, ndisp[, dst_disp[, stream]]) -> dst_disp
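A minimal sketch (assumes a CUDA-capable GPU; the disparity map would normally come from a stereo matcher, and "disparity.png" is a hypothetical placeholder path):

```elixir
# Load an 8-bit single-channel disparity image (hypothetical file).
disp = Evision.imread("disparity.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())

# Color-code disparities in the [0..64) range; output is CV_8UC4 (BGRA).
colored = Evision.CUDA.drawColorDisp(disp, 64)
```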
ensureSizeIsEnough(named_args)
@spec ensureSizeIsEnough(Keyword.t()) :: any() | {:error, String.t()}
ensureSizeIsEnough(rows, cols, type)
@spec ensureSizeIsEnough(integer(), integer(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

Ensures that the size of a matrix is big enough and the matrix has a proper type.

Positional Arguments
  • rows: integer().

    Minimum desired number of rows.

  • cols: integer().

    Minimum desired number of columns.

  • type: integer().

    Desired matrix type.

Return
  • arr: Evision.Mat.t().

    Destination matrix.

The function does not reallocate memory if the matrix has proper attributes already.

Python prototype (for reference only):

ensureSizeIsEnough(rows, cols, type[, arr]) -> arr
ensureSizeIsEnough(rows, cols, type, opts)
@spec ensureSizeIsEnough(
  integer(),
  integer(),
  integer(),
  [{atom(), term()}, ...] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

Ensures that the size of a matrix is big enough and the matrix has a proper type.

Positional Arguments
  • rows: integer().

    Minimum desired number of rows.

  • cols: integer().

    Minimum desired number of columns.

  • type: integer().

    Desired matrix type.

Return
  • arr: Evision.Mat.t().

    Destination matrix.

The function does not reallocate memory if the matrix has proper attributes already.

Python prototype (for reference only):

ensureSizeIsEnough(rows, cols, type[, arr]) -> arr
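A short sketch (assuming `Evision.Constant.cv_8UC3/0` provides the OpenCV type code, as the constant helpers do elsewhere in Evision):

```elixir
# Pre-allocate (or verify) a 480x640 3-channel 8-bit buffer.
# No reallocation happens if arr already has these attributes.
arr = Evision.CUDA.ensureSizeIsEnough(480, 640, Evision.Constant.cv_8UC3())
```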
equalizeHist(named_args)
@spec equalizeHist(Keyword.t()) :: any() | {:error, String.t()}
@spec equalizeHist(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec equalizeHist(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Equalizes the histogram of a grayscale image.

Positional Arguments
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image.

@sa equalizeHist

Python prototype (for reference only):

equalizeHist(src[, dst[, stream]]) -> dst

Variant 2:

Equalizes the histogram of a grayscale image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image with CV_8UC1 type.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

@sa equalizeHist

Python prototype (for reference only):

equalizeHist(src[, dst[, stream]]) -> dst
@spec equalizeHist(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec equalizeHist(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Equalizes the histogram of a grayscale image.

Positional Arguments
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image.

@sa equalizeHist

Python prototype (for reference only):

equalizeHist(src[, dst[, stream]]) -> dst

Variant 2:

Equalizes the histogram of a grayscale image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image with CV_8UC1 type.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

@sa equalizeHist

Python prototype (for reference only):

equalizeHist(src[, dst[, stream]]) -> dst
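A minimal sketch (assumes a CUDA-capable GPU; "photo.png" is a hypothetical input path):

```elixir
# Histogram equalization requires a CV_8UC1 (grayscale) source.
src = Evision.imread("photo.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
dst = Evision.CUDA.equalizeHist(src)
```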
@spec evenLevels(Keyword.t()) :: any() | {:error, String.t()}
evenLevels(nLevels, lowerLevel, upperLevel)
@spec evenLevels(integer(), integer(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

Computes levels with even distribution.

Positional Arguments
  • nLevels: integer().

    Number of computed levels. nLevels must be at least 2.

  • lowerLevel: integer().

    Lower boundary value of the lowest level.

  • upperLevel: integer().

    Upper boundary value of the greatest level.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • levels: Evision.Mat.t().

    Destination array. levels has 1 row, nLevels columns, and the CV_32SC1 type.

Python prototype (for reference only):

evenLevels(nLevels, lowerLevel, upperLevel[, levels[, stream]]) -> levels
evenLevels(nLevels, lowerLevel, upperLevel, opts)
@spec evenLevels(integer(), integer(), integer(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

Computes levels with even distribution.

Positional Arguments
  • nLevels: integer().

    Number of computed levels. nLevels must be at least 2.

  • lowerLevel: integer().

    Lower boundary value of the lowest level.

  • upperLevel: integer().

    Upper boundary value of the greatest level.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • levels: Evision.Mat.t().

    Destination array. levels has 1 row, nLevels columns, and the CV_32SC1 type.

Python prototype (for reference only):

evenLevels(nLevels, lowerLevel, upperLevel[, levels[, stream]]) -> levels
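A minimal sketch of computing level boundaries for a subsequent histogram call (assumes a CUDA-capable GPU):

```elixir
# Five evenly spaced histogram levels over the [0, 255] range.
# The result is a 1 x 5 matrix of CV_32SC1 type.
levels = Evision.CUDA.evenLevels(5, 0, 255)
```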
@spec exp(Keyword.t()) :: any() | {:error, String.t()}
@spec exp(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec exp(Evision.CUDA.GpuMat.t()) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes an exponent of each matrix element.

Positional Arguments
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src .

@sa exp

Python prototype (for reference only):

exp(src[, dst[, stream]]) -> dst

Variant 2:

Computes an exponent of each matrix element.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

@sa exp

Python prototype (for reference only):

exp(src[, dst[, stream]]) -> dst
@spec exp(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec exp(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes an exponent of each matrix element.

Positional Arguments
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src .

@sa exp

Python prototype (for reference only):

exp(src[, dst[, stream]]) -> dst

Variant 2:

Computes an exponent of each matrix element.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

@sa exp

Python prototype (for reference only):

exp(src[, dst[, stream]]) -> dst
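A minimal sketch (assumes a CUDA-capable GPU and the optional `:nx` dependency for constructing the input):

```elixir
src = Evision.Mat.from_nx(Nx.tensor([[0.0, 1.0], [2.0, 3.0]], type: :f32))

# dst holds e raised to each element of src, same size and type as src.
dst = Evision.CUDA.exp(src)
```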
fastNlMeansDenoising(named_args)
@spec fastNlMeansDenoising(Keyword.t()) :: any() | {:error, String.t()}
fastNlMeansDenoising(src, h)
@spec fastNlMeansDenoising(Evision.CUDA.GpuMat.t(), number()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Performs image denoising using the Non-local Means Denoising algorithm (http://www.ipol.im/pub/algo/bcm_non_local_means_denoising) with several computational optimizations. The noise is expected to be Gaussian white noise.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Input 8-bit 1-channel, 2-channel or 3-channel image.

  • h: float.

    Parameter regulating filter strength. A larger h value removes noise more thoroughly but also removes image details; a smaller h value preserves details but also preserves some noise.

Keyword Arguments
  • search_window: integer().

    Size in pixels of the window used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: a greater search_window means a greater denoising time. Recommended value: 21 pixels.

  • block_size: integer().

    Size in pixels of the template patch used to compute weights. Should be odd. Recommended value: 7 pixels.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous invocations.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Output image with the same size and type as src .

This function is expected to be applied to grayscale images. For colored images, look at FastNonLocalMeansDenoising::labMethod. @sa fastNlMeansDenoising

Python prototype (for reference only):

fastNlMeansDenoising(src, h[, dst[, search_window[, block_size[, stream]]]]) -> dst
fastNlMeansDenoising(src, h, opts)
@spec fastNlMeansDenoising(
  Evision.CUDA.GpuMat.t(),
  number(),
  [block_size: term(), search_window: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Performs image denoising using the Non-local Means Denoising algorithm (http://www.ipol.im/pub/algo/bcm_non_local_means_denoising) with several computational optimizations. The noise is expected to be Gaussian white noise.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Input 8-bit 1-channel, 2-channel or 3-channel image.

  • h: float.

    Parameter regulating filter strength. A larger h value removes noise more thoroughly but also removes image details; a smaller h value preserves details but also preserves some noise.

Keyword Arguments
  • search_window: integer().

    Size in pixels of the window used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: a greater search_window means a greater denoising time. Recommended value: 21 pixels.

  • block_size: integer().

    Size in pixels of the template patch used to compute weights. Should be odd. Recommended value: 7 pixels.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous invocations.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Output image with the same size and type as src .

This function is expected to be applied to grayscale images. For colored images, look at FastNonLocalMeansDenoising::labMethod. @sa fastNlMeansDenoising

Python prototype (for reference only):

fastNlMeansDenoising(src, h[, dst[, search_window[, block_size[, stream]]]]) -> dst
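A minimal sketch: this function takes a GpuMat, so the image is uploaded first and downloaded afterwards (assumes a CUDA-capable GPU and `Evision.CUDA.GpuMat.gpuMat/1` / `download/1` for the transfer; "noisy.png" is a hypothetical path):

```elixir
# Upload a noisy grayscale image to the GPU, denoise, download the result.
gray = Evision.imread("noisy.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
gpu = Evision.CUDA.GpuMat.gpuMat(gray)

denoised = Evision.CUDA.fastNlMeansDenoising(gpu, 10.0, search_window: 21, block_size: 7)
result = Evision.CUDA.GpuMat.download(denoised)
```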
fastNlMeansDenoisingColored(named_args)
@spec fastNlMeansDenoisingColored(Keyword.t()) :: any() | {:error, String.t()}
fastNlMeansDenoisingColored(src, h_luminance, photo_render)
@spec fastNlMeansDenoisingColored(Evision.CUDA.GpuMat.t(), number(), number()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Modification of fastNlMeansDenoising function for colored images

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Input 8-bit 3-channel image.

  • h_luminance: float.

    Parameter regulating filter strength. A larger h value removes noise more thoroughly but also removes image details; a smaller h value preserves details but also preserves some noise.

  • photo_render: float.

    The same as h but for color components. For most images, a value of 10 is enough to remove colored noise without distorting colors.

Keyword Arguments
  • search_window: integer().

    Size in pixels of the window used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: a greater search_window means a greater denoising time. Recommended value: 21 pixels.

  • block_size: integer().

    Size in pixels of the template patch used to compute weights. Should be odd. Recommended value: 7 pixels.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous invocations.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Output image with the same size and type as src .

The function converts the image to the CIELAB colorspace and then separately denoises the L and AB components with the given h parameters using the FastNonLocalMeansDenoising::simpleMethod function. @sa fastNlMeansDenoisingColored

Python prototype (for reference only):

fastNlMeansDenoisingColored(src, h_luminance, photo_render[, dst[, search_window[, block_size[, stream]]]]) -> dst
fastNlMeansDenoisingColored(src, h_luminance, photo_render, opts)
@spec fastNlMeansDenoisingColored(
  Evision.CUDA.GpuMat.t(),
  number(),
  number(),
  [block_size: term(), search_window: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Modification of fastNlMeansDenoising function for colored images

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Input 8-bit 3-channel image.

  • h_luminance: float.

    Parameter regulating filter strength. A larger h value removes noise more thoroughly but also removes image details; a smaller h value preserves details but also preserves some noise.

  • photo_render: float.

    The same as h but for color components. For most images, a value of 10 is enough to remove colored noise without distorting colors.

Keyword Arguments
  • search_window: integer().

    Size in pixels of the window used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: a greater search_window means a greater denoising time. Recommended value: 21 pixels.

  • block_size: integer().

    Size in pixels of the template patch used to compute weights. Should be odd. Recommended value: 7 pixels.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous invocations.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Output image with the same size and type as src .

The function converts the image to the CIELAB colorspace and then separately denoises the L and AB components with the given h parameters using the FastNonLocalMeansDenoising::simpleMethod function. @sa fastNlMeansDenoisingColored

Python prototype (for reference only):

fastNlMeansDenoisingColored(src, h_luminance, photo_render[, dst[, search_window[, block_size[, stream]]]]) -> dst
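A minimal sketch of the colored variant (same hypothetical GPU setup and upload/download helpers as above; "noisy_color.png" is a placeholder path):

```elixir
bgr = Evision.imread("noisy_color.png")
gpu = Evision.CUDA.GpuMat.gpuMat(bgr)

# h_luminance = 10.0 and photo_render = 10.0 are common starting points.
denoised = Evision.CUDA.fastNlMeansDenoisingColored(gpu, 10.0, 10.0)
result = Evision.CUDA.GpuMat.download(denoised)
```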
@spec findMinMax(Keyword.t()) :: any() | {:error, String.t()}
@spec findMinMax(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec findMinMax(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

findMinMax

Positional Arguments
Keyword Arguments
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

findMinMax(src[, dst[, mask[, stream]]]) -> dst

Variant 2:

findMinMax

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

findMinMax(src[, dst[, mask[, stream]]]) -> dst
@spec findMinMax(Evision.Mat.maybe_mat_in(), [mask: term(), stream: term()] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec findMinMax(Evision.CUDA.GpuMat.t(), [mask: term(), stream: term()] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

findMinMax

Positional Arguments
Keyword Arguments
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

findMinMax(src[, dst[, mask[, stream]]]) -> dst

Variant 2:

findMinMax

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

findMinMax(src[, dst[, mask[, stream]]]) -> dst
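A minimal sketch (assumes a CUDA-capable GPU and the optional `:nx` dependency):

```elixir
src = Evision.Mat.from_nx(Nx.tensor([[3, 7], [1, 9]], type: :u8))

# dst is a small matrix holding the minimum and maximum values of src.
vals = Evision.CUDA.findMinMax(src)
```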
findMinMaxLoc(named_args)
@spec findMinMaxLoc(Keyword.t()) :: any() | {:error, String.t()}
@spec findMinMaxLoc(Evision.Mat.maybe_mat_in()) ::
  {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
@spec findMinMaxLoc(Evision.CUDA.GpuMat.t()) ::
  {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

Variant 1:

findMinMaxLoc

Positional Arguments
Keyword Arguments
Return
  • minMaxVals: Evision.Mat.t().
  • loc: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

findMinMaxLoc(src[, minMaxVals[, loc[, mask[, stream]]]]) -> minMaxVals, loc

Variant 2:

findMinMaxLoc

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • minMaxVals: Evision.CUDA.GpuMat.t().
  • loc: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

findMinMaxLoc(src[, minMaxVals[, loc[, mask[, stream]]]]) -> minMaxVals, loc
findMinMaxLoc(src, opts)
@spec findMinMaxLoc(Evision.Mat.maybe_mat_in(), [mask: term(), stream: term()] | nil) ::
  {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
@spec findMinMaxLoc(Evision.CUDA.GpuMat.t(), [mask: term(), stream: term()] | nil) ::
  {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

Variant 1:

findMinMaxLoc

Positional Arguments
Keyword Arguments
Return
  • minMaxVals: Evision.Mat.t().
  • loc: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

findMinMaxLoc(src[, minMaxVals[, loc[, mask[, stream]]]]) -> minMaxVals, loc

Variant 2:

findMinMaxLoc

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().
  • stream: Evision.CUDA.Stream.t().
Return
  • minMaxVals: Evision.CUDA.GpuMat.t().
  • loc: Evision.CUDA.GpuMat.t().

Has overloading in C++

Python prototype (for reference only):

findMinMaxLoc(src[, minMaxVals[, loc[, mask[, stream]]]]) -> minMaxVals, loc
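A minimal sketch: unlike findMinMax, this variant also returns the locations of the extrema (same hypothetical setup as the other examples):

```elixir
src = Evision.Mat.from_nx(Nx.tensor([[3, 7], [1, 9]], type: :u8))

# min_max_vals holds the extreme values; loc holds their positions.
{min_max_vals, loc} = Evision.CUDA.findMinMaxLoc(src)
```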
@spec flip(Keyword.t()) :: any() | {:error, String.t()}
@spec flip(Evision.Mat.maybe_mat_in(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec flip(Evision.CUDA.GpuMat.t(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Flips a 2D matrix around vertical, horizontal, or both axes.

Positional Arguments
  • src: Evision.Mat.

    Source matrix. Supports 1, 3 and 4 channels images with CV_8U, CV_16U, CV_32S or CV_32F depth.

  • flipCode: integer().

    Flip mode for the source:

    • 0 Flips around x-axis.
    • > 0 Flips around y-axis.
    • < 0 Flips around both axes.
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix.

@sa flip

Python prototype (for reference only):

flip(src, flipCode[, dst[, stream]]) -> dst

Variant 2:

Flips a 2D matrix around vertical, horizontal, or both axes.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix. Supports 1, 3 and 4 channels images with CV_8U, CV_16U, CV_32S or CV_32F depth.

  • flipCode: integer().

    Flip mode for the source:

    • 0 Flips around x-axis.
    • > 0 Flips around y-axis.
    • < 0 Flips around both axes.
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix.

@sa flip

Python prototype (for reference only):

flip(src, flipCode[, dst[, stream]]) -> dst
flip(src, flipCode, opts)
@spec flip(Evision.Mat.maybe_mat_in(), integer(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec flip(Evision.CUDA.GpuMat.t(), integer(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Flips a 2D matrix around vertical, horizontal, or both axes.

Positional Arguments
  • src: Evision.Mat.

    Source matrix. Supports 1, 3 and 4 channels images with CV_8U, CV_16U, CV_32S or CV_32F depth.

  • flipCode: integer().

    Flip mode for the source:

    • 0 Flips around x-axis.
    • > 0 Flips around y-axis.
    • < 0 Flips around both axes.
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix.

@sa flip

Python prototype (for reference only):

flip(src, flipCode[, dst[, stream]]) -> dst

Variant 2:

Flips a 2D matrix around vertical, horizontal, or both axes.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix. Supports 1, 3 and 4 channels images with CV_8U, CV_16U, CV_32S or CV_32F depth.

  • flipCode: integer().

    Flip mode for the source:

    • 0 Flips around x-axis.
    • > 0 Flips around y-axis.
    • < 0 Flips around both axes.
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix.

@sa flip

Python prototype (for reference only):

flip(src, flipCode[, dst[, stream]]) -> dst
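A minimal sketch showing the flipCode convention (assumes a CUDA-capable GPU; "photo.png" is a hypothetical path):

```elixir
src = Evision.imread("photo.png")

# flipCode > 0 flips around the y-axis (horizontal mirror);
# 0 flips around the x-axis; < 0 flips around both.
mirrored = Evision.CUDA.flip(src, 1)
```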
gammaCorrection(named_args)
@spec gammaCorrection(Keyword.t()) :: any() | {:error, String.t()}
@spec gammaCorrection(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec gammaCorrection(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Routines for correcting image color gamma.

Positional Arguments
Keyword Arguments
  • forward: bool.

    true for forward gamma correction or false for inverse gamma correction.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image.

Python prototype (for reference only):

gammaCorrection(src[, dst[, forward[, stream]]]) -> dst

Variant 2:

Routines for correcting image color gamma.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image (3- or 4-channel 8 bit).

Keyword Arguments
  • forward: bool.

    true for forward gamma correction or false for inverse gamma correction.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

Python prototype (for reference only):

gammaCorrection(src[, dst[, forward[, stream]]]) -> dst
gammaCorrection(src, opts)
@spec gammaCorrection(
  Evision.Mat.maybe_mat_in(),
  [forward: term(), stream: term()] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec gammaCorrection(
  Evision.CUDA.GpuMat.t(),
  [forward: term(), stream: term()] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Routines for correcting image color gamma.

Positional Arguments
Keyword Arguments
  • forward: bool.

    true for forward gamma correction or false for inverse gamma correction.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image.

Python prototype (for reference only):

gammaCorrection(src[, dst[, forward[, stream]]]) -> dst

Variant 2:

Routines for correcting image color gamma.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image (3- or 4-channel 8 bit).

Keyword Arguments
  • forward: bool.

    true for forward gamma correction or false for inverse gamma correction.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

Python prototype (for reference only):

gammaCorrection(src[, dst[, forward[, stream]]]) -> dst
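A minimal sketch (assumes a CUDA-capable GPU; "photo.png" is a hypothetical 3- or 4-channel 8-bit image):

```elixir
src = Evision.imread("photo.png")

# forward: true applies forward gamma correction; false inverts it.
corrected = Evision.CUDA.gammaCorrection(src, forward: true)
```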
@spec gemm(Keyword.t()) :: any() | {:error, String.t()}
gemm(src1, src2, alpha, src3, beta)

Variant 1:

Performs generalized matrix multiplication.

Positional Arguments
  • src1: Evision.Mat.

    First multiplied input matrix that should have CV_32FC1 , CV_64FC1 , CV_32FC2 , or CV_64FC2 type.

  • src2: Evision.Mat.

    Second multiplied input matrix of the same type as src1 .

  • alpha: double.

    Weight of the matrix product.

  • src3: Evision.Mat.

    Third optional delta matrix added to the matrix product. It should have the same type as src1 and src2 .

  • beta: double.

    Weight of src3 .

Keyword Arguments
  • flags: integer().

    Operation flags:

    • GEMM_1_T transpose src1
    • GEMM_2_T transpose src2
    • GEMM_3_T transpose src3
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix. It has the proper size and the same type as input matrices.

The function performs generalized matrix multiplication similar to the gemm functions in BLAS level 3. For example, gemm(src1, src2, alpha, src3, beta, dst, GEMM_1_T + GEMM_3_T) corresponds to \f[\texttt{dst} = \texttt{alpha} \cdot \texttt{src1} ^T \cdot \texttt{src2} + \texttt{beta} \cdot \texttt{src3} ^T\f] Note: The transposition operation doesn't support the CV_64FC2 input type. @sa gemm

Python prototype (for reference only):

gemm(src1, src2, alpha, src3, beta[, dst[, flags[, stream]]]) -> dst

Variant 2:

Performs generalized matrix multiplication.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First multiplied input matrix that should have CV_32FC1 , CV_64FC1 , CV_32FC2 , or CV_64FC2 type.

  • src2: Evision.CUDA.GpuMat.t().

    Second multiplied input matrix of the same type as src1 .

  • alpha: double.

    Weight of the matrix product.

  • src3: Evision.CUDA.GpuMat.t().

    Third optional delta matrix added to the matrix product. It should have the same type as src1 and src2 .

  • beta: double.

    Weight of src3 .

Keyword Arguments
  • flags: integer().

    Operation flags:

    • GEMM_1_T transpose src1
    • GEMM_2_T transpose src2
    • GEMM_3_T transpose src3
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix. It has the proper size and the same type as input matrices.

The function performs generalized matrix multiplication similar to the gemm functions in BLAS level 3. For example, gemm(src1, src2, alpha, src3, beta, dst, GEMM_1_T + GEMM_3_T) corresponds to \f[\texttt{dst} = \texttt{alpha} \cdot \texttt{src1} ^T \cdot \texttt{src2} + \texttt{beta} \cdot \texttt{src3} ^T\f] Note: Transposition operation doesn't support CV_64FC2 input type. @sa gemm

Python prototype (for reference only):

gemm(src1, src2, alpha, src3, beta[, dst[, flags[, stream]]]) -> dst
gemm(src1, src2, alpha, src3, beta, opts)

@spec gemm(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  number(),
  Evision.Mat.maybe_mat_in(),
  number(),
  [flags: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec gemm(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  number(),
  Evision.CUDA.GpuMat.t(),
  number(),
  [flags: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs generalized matrix multiplication.

Positional Arguments
  • src1: Evision.Mat.

    First multiplied input matrix that should have CV_32FC1 , CV_64FC1 , CV_32FC2 , or CV_64FC2 type.

  • src2: Evision.Mat.

    Second multiplied input matrix of the same type as src1 .

  • alpha: double.

    Weight of the matrix product.

  • src3: Evision.Mat.

    Third optional delta matrix added to the matrix product. It should have the same type as src1 and src2 .

  • beta: double.

    Weight of src3 .

Keyword Arguments
  • flags: integer().

    Operation flags:

    • GEMM_1_T transpose src1
    • GEMM_2_T transpose src2
    • GEMM_3_T transpose src3
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix. It has the proper size and the same type as input matrices.

The function performs generalized matrix multiplication similar to the gemm functions in BLAS level 3. For example, gemm(src1, src2, alpha, src3, beta, dst, GEMM_1_T + GEMM_3_T) corresponds to \f[\texttt{dst} = \texttt{alpha} \cdot \texttt{src1} ^T \cdot \texttt{src2} + \texttt{beta} \cdot \texttt{src3} ^T\f] Note: Transposition operation doesn't support CV_64FC2 input type. @sa gemm

Python prototype (for reference only):

gemm(src1, src2, alpha, src3, beta[, dst[, flags[, stream]]]) -> dst

Variant 2:

Performs generalized matrix multiplication.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First multiplied input matrix that should have CV_32FC1 , CV_64FC1 , CV_32FC2 , or CV_64FC2 type.

  • src2: Evision.CUDA.GpuMat.t().

    Second multiplied input matrix of the same type as src1 .

  • alpha: double.

    Weight of the matrix product.

  • src3: Evision.CUDA.GpuMat.t().

    Third optional delta matrix added to the matrix product. It should have the same type as src1 and src2 .

  • beta: double.

    Weight of src3 .

Keyword Arguments
  • flags: integer().

    Operation flags:

    • GEMM_1_T transpose src1
    • GEMM_2_T transpose src2
    • GEMM_3_T transpose src3
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix. It has the proper size and the same type as input matrices.

The function performs generalized matrix multiplication similar to the gemm functions in BLAS level 3. For example, gemm(src1, src2, alpha, src3, beta, dst, GEMM_1_T + GEMM_3_T) corresponds to \f[\texttt{dst} = \texttt{alpha} \cdot \texttt{src1} ^T \cdot \texttt{src2} + \texttt{beta} \cdot \texttt{src3} ^T\f] Note: Transposition operation doesn't support CV_64FC2 input type. @sa gemm

Python prototype (for reference only):

gemm(src1, src2, alpha, src3, beta[, dst[, flags[, stream]]]) -> dst
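The formula above can be checked against a plain NumPy sketch (illustrating the semantics only; the CUDA call itself requires a CUDA-enabled OpenCV build):

```python
import numpy as np

# Semantics of gemm(src1, src2, alpha, src3, beta, flags=GEMM_1_T + GEMM_3_T):
#   dst = alpha * src1^T @ src2 + beta * src3^T
src1 = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
src2 = np.array([[5.0, 6.0], [7.0, 8.0]], dtype=np.float32)
src3 = np.ones((2, 2), dtype=np.float32)
alpha, beta = 2.0, 0.5

dst = alpha * (src1.T @ src2) + beta * src3.T
```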
getCudaEnabledDeviceCount()

@spec getCudaEnabledDeviceCount() :: integer() | {:error, String.t()}

Returns the number of installed CUDA-enabled devices.

Return
  • retval: integer()

Use this function before any other CUDA function calls. If OpenCV is compiled without CUDA support, this function returns 0. If the CUDA driver is not installed, or is incompatible, this function returns -1.

Python prototype (for reference only):

getCudaEnabledDeviceCount() -> retval
getCudaEnabledDeviceCount(named_args)

@spec getCudaEnabledDeviceCount(Keyword.t()) :: any() | {:error, String.t()}
@spec getDevice() :: integer() | {:error, String.t()}

Returns the current device index set by cuda::setDevice or initialized by default.

Return
  • retval: integer()

Python prototype (for reference only):

getDevice() -> retval
@spec getDevice(Keyword.t()) :: any() | {:error, String.t()}
@spec histEven(Keyword.t()) :: any() | {:error, String.t()}
histEven(src, histSize, lowerLevel, upperLevel)

@spec histEven(Evision.Mat.maybe_mat_in(), integer(), integer(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec histEven(Evision.CUDA.GpuMat.t(), integer(), integer(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Calculates a histogram with evenly distributed bins.

Positional Arguments
  • src: Evision.Mat.

    Source image. CV_8U, CV_16U, or CV_16S depth and 1 or 4 channels are supported. For a four-channel image, all channels are processed separately.

  • histSize: integer().

    Size of the histogram.

  • lowerLevel: integer().

    Lower boundary of lowest-level bin.

  • upperLevel: integer().

    Upper boundary of highest-level bin.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.Mat.t().

    Destination histogram with one row, histSize columns, and the CV_32S type.

Python prototype (for reference only):

histEven(src, histSize, lowerLevel, upperLevel[, hist[, stream]]) -> hist

Variant 2:

Calculates a histogram with evenly distributed bins.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. CV_8U, CV_16U, or CV_16S depth and 1 or 4 channels are supported. For a four-channel image, all channels are processed separately.

  • histSize: integer().

    Size of the histogram.

  • lowerLevel: integer().

    Lower boundary of lowest-level bin.

  • upperLevel: integer().

    Upper boundary of highest-level bin.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.CUDA.GpuMat.t().

    Destination histogram with one row, histSize columns, and the CV_32S type.

Python prototype (for reference only):

histEven(src, histSize, lowerLevel, upperLevel[, hist[, stream]]) -> hist
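histEven bins values evenly between lowerLevel and upperLevel; the result layout (one row, histSize columns, 32-bit counts) can be reproduced with a NumPy CPU sketch (edge handling at the boundaries may differ slightly from the CUDA kernel):

```python
import numpy as np

def hist_even(src, hist_size, lower, upper):
    # Evenly spaced bin edges over [lower, upper); int32 counts in one row,
    # mirroring the documented CV_32S, 1 x histSize result.
    hist, _ = np.histogram(src, bins=hist_size, range=(lower, upper))
    return hist.astype(np.int32).reshape(1, hist_size)

img = np.array([[0, 10, 20], [30, 40, 250]], dtype=np.uint8)
h = hist_even(img, 4, 0, 256)  # bins: [0,64), [64,128), [128,192), [192,256)
```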
histEven(src, histSize, lowerLevel, upperLevel, opts)

@spec histEven(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  integer(),
  [{:stream, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec histEven(
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  integer(),
  [{:stream, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}
@spec histEven(
  Evision.Mat.maybe_mat_in(),
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  integer()
) ::
  :ok | {:error, String.t()}
@spec histEven(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  integer()
) ::
  :ok | {:error, String.t()}

Variant 1:

Calculates a histogram with evenly distributed bins.

Positional Arguments
  • src: Evision.Mat.

    Source image. CV_8U, CV_16U, or CV_16S depth and 1 or 4 channels are supported. For a four-channel image, all channels are processed separately.

  • histSize: integer().

    Size of the histogram.

  • lowerLevel: integer().

    Lower boundary of lowest-level bin.

  • upperLevel: integer().

    Upper boundary of highest-level bin.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.Mat.t().

    Destination histogram with one row, histSize columns, and the CV_32S type.

Python prototype (for reference only):

histEven(src, histSize, lowerLevel, upperLevel[, hist[, stream]]) -> hist

Variant 2:

Calculates a histogram with evenly distributed bins.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. CV_8U, CV_16U, or CV_16S depth and 1 or 4 channels are supported. For a four-channel image, all channels are processed separately.

  • histSize: integer().

    Size of the histogram.

  • lowerLevel: integer().

    Lower boundary of lowest-level bin.

  • upperLevel: integer().

    Upper boundary of highest-level bin.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.CUDA.GpuMat.t().

    Destination histogram with one row, histSize columns, and the CV_32S type.

Python prototype (for reference only):

histEven(src, histSize, lowerLevel, upperLevel[, hist[, stream]]) -> hist

Variant 3:

histEven

Positional Arguments
  • src: Evision.Mat
  • hist: GpuMat*
  • histSize: int*
  • lowerLevel: int*
  • upperLevel: int*
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

Has overloading in C++

Python prototype (for reference only):

histEven(src, hist, histSize, lowerLevel, upperLevel[, stream]) -> None

Variant 4:

histEven

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
  • hist: GpuMat*
  • histSize: int*
  • lowerLevel: int*
  • upperLevel: int*
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

Has overloading in C++

Python prototype (for reference only):

histEven(src, hist, histSize, lowerLevel, upperLevel[, stream]) -> None
histEven(src, hist, histSize, lowerLevel, upperLevel, opts)

@spec histEven(
  Evision.Mat.maybe_mat_in(),
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  integer(),
  [{:stream, term()}] | nil
) :: :ok | {:error, String.t()}
@spec histEven(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  integer(),
  [{:stream, term()}] | nil
) :: :ok | {:error, String.t()}

Variant 1:

histEven

Positional Arguments
  • src: Evision.Mat
  • hist: GpuMat*
  • histSize: int*
  • lowerLevel: int*
  • upperLevel: int*
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

Has overloading in C++

Python prototype (for reference only):

histEven(src, hist, histSize, lowerLevel, upperLevel[, stream]) -> None

Variant 2:

histEven

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
  • hist: GpuMat*
  • histSize: int*
  • lowerLevel: int*
  • upperLevel: int*
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

Has overloading in C++

Python prototype (for reference only):

histEven(src, hist, histSize, lowerLevel, upperLevel[, stream]) -> None
@spec histRange(Keyword.t()) :: any() | {:error, String.t()}

Variant 1:

Calculates a histogram with bins determined by the levels array.

Positional Arguments
  • src: Evision.Mat.

    Source image. CV_8U , CV_16U , or CV_16S depth and 1 or 4 channels are supported. For a four-channel image, all channels are processed separately.

  • levels: Evision.Mat.

    Number of levels in the histogram.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.Mat.t().

    Destination histogram with one row, (levels.cols-1) columns, and the CV_32SC1 type.

Python prototype (for reference only):

histRange(src, levels[, hist[, stream]]) -> hist

Variant 2:

Calculates a histogram with bins determined by the levels array.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. CV_8U , CV_16U , or CV_16S depth and 1 or 4 channels are supported. For a four-channel image, all channels are processed separately.

  • levels: Evision.CUDA.GpuMat.t().

    Number of levels in the histogram.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.CUDA.GpuMat.t().

    Destination histogram with one row, (levels.cols-1) columns, and the CV_32SC1 type.

Python prototype (for reference only):

histRange(src, levels[, hist[, stream]]) -> hist
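With explicit levels, the bins are the intervals between consecutive entries, which is why the result has levels.cols - 1 columns. A NumPy CPU sketch of the same computation:

```python
import numpy as np

src = np.array([[0, 5, 10, 200, 255]], dtype=np.uint8)
levels = np.array([0, 8, 64, 256])  # bin edges; hist gets len(levels) - 1 bins

hist, _ = np.histogram(src, bins=levels)
hist = hist.astype(np.int32).reshape(1, -1)  # one row, CV_32SC1-like layout
```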
histRange(src, levels, opts)

@spec histRange(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:stream, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec histRange(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [{:stream, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}
@spec histRange(
  Evision.Mat.maybe_mat_in(),
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t()
) ::
  :ok | {:error, String.t()}
@spec histRange(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t()
) ::
  :ok | {:error, String.t()}

Variant 1:

Calculates a histogram with bins determined by the levels array.

Positional Arguments
  • src: Evision.Mat.

    Source image. CV_8U , CV_16U , or CV_16S depth and 1 or 4 channels are supported. For a four-channel image, all channels are processed separately.

  • levels: Evision.Mat.

    Number of levels in the histogram.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.Mat.t().

    Destination histogram with one row, (levels.cols-1) columns, and the CV_32SC1 type.

Python prototype (for reference only):

histRange(src, levels[, hist[, stream]]) -> hist

Variant 2:

Calculates a histogram with bins determined by the levels array.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. CV_8U , CV_16U , or CV_16S depth and 1 or 4 channels are supported. For a four-channel image, all channels are processed separately.

  • levels: Evision.CUDA.GpuMat.t().

    Number of levels in the histogram.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • hist: Evision.CUDA.GpuMat.t().

    Destination histogram with one row, (levels.cols-1) columns, and the CV_32SC1 type.

Python prototype (for reference only):

histRange(src, levels[, hist[, stream]]) -> hist

Variant 3:

histRange

Positional Arguments
  • src: Evision.Mat
  • hist: GpuMat*
  • levels: GpuMat*
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

Has overloading in C++

Python prototype (for reference only):

histRange(src, hist, levels[, stream]) -> None

Variant 4:

histRange

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
  • hist: GpuMat*
  • levels: GpuMat*
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

Has overloading in C++

Python prototype (for reference only):

histRange(src, hist, levels[, stream]) -> None
histRange(src, hist, levels, opts)

@spec histRange(
  Evision.Mat.maybe_mat_in(),
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [{:stream, term()}] | nil
) :: :ok | {:error, String.t()}
@spec histRange(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [{:stream, term()}] | nil
) :: :ok | {:error, String.t()}

Variant 1:

histRange

Positional Arguments
  • src: Evision.Mat
  • hist: GpuMat*
  • levels: GpuMat*
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

Has overloading in C++

Python prototype (for reference only):

histRange(src, hist, levels[, stream]) -> None

Variant 2:

histRange

Positional Arguments
  • src: Evision.CUDA.GpuMat.t()
  • hist: GpuMat*
  • levels: GpuMat*
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

Has overloading in C++

Python prototype (for reference only):

histRange(src, hist, levels[, stream]) -> None
@spec inRange(Keyword.t()) :: any() | {:error, String.t()}
inRange(src, lowerb, upperb)

Variant 1:

Checks if array elements lie between two scalars.

Positional Arguments
  • src: Evision.Mat.

    first input array.

  • lowerb: Evision.scalar().

    inclusive lower boundary cv::Scalar.

  • upperb: Evision.scalar().

    inclusive upper boundary cv::Scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    output array of the same size as src and CV_8U type.

The function checks the range as follows:

  • For every element of a single-channel input array: \f[\texttt{dst} (I)= \texttt{lowerb}_0 \leq \texttt{src} (I)_0 \leq \texttt{upperb}_0\f]

  • For two-channel arrays: \f[\texttt{dst} (I)= \texttt{lowerb}_0 \leq \texttt{src} (I)_0 \leq \texttt{upperb}_0 \land \texttt{lowerb}_1 \leq \texttt{src} (I)_1 \leq \texttt{upperb}_1\f]

  • and so forth.

That is, dst(I) is set to 255 (all 1-bits) if src(I) is within the specified 1D, 2D, 3D, ... box and 0 otherwise. Note that unlike the CPU inRange, this does NOT accept an array for lowerb or upperb, only a cv::Scalar.

@sa cv::inRange

Python prototype (for reference only):

inRange(src, lowerb, upperb[, dst[, stream]]) -> dst

Variant 2:

Checks if array elements lie between two scalars.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    first input array.

  • lowerb: Evision.scalar().

    inclusive lower boundary cv::Scalar.

  • upperb: Evision.scalar().

    inclusive upper boundary cv::Scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    output array of the same size as src and CV_8U type.

The function checks the range as follows:

  • For every element of a single-channel input array: \f[\texttt{dst} (I)= \texttt{lowerb}_0 \leq \texttt{src} (I)_0 \leq \texttt{upperb}_0\f]

  • For two-channel arrays: \f[\texttt{dst} (I)= \texttt{lowerb}_0 \leq \texttt{src} (I)_0 \leq \texttt{upperb}_0 \land \texttt{lowerb}_1 \leq \texttt{src} (I)_1 \leq \texttt{upperb}_1\f]

  • and so forth.

That is, dst(I) is set to 255 (all 1-bits) if src(I) is within the specified 1D, 2D, 3D, ... box and 0 otherwise. Note that unlike the CPU inRange, this does NOT accept an array for lowerb or upperb, only a cv::Scalar.

@sa cv::inRange

Python prototype (for reference only):

inRange(src, lowerb, upperb[, dst[, stream]]) -> dst
inRange(src, lowerb, upperb, opts)

@spec inRange(
  Evision.Mat.maybe_mat_in(),
  Evision.scalar(),
  Evision.scalar(),
  [{:stream, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec inRange(
  Evision.CUDA.GpuMat.t(),
  Evision.scalar(),
  Evision.scalar(),
  [{:stream, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Checks if array elements lie between two scalars.

Positional Arguments
  • src: Evision.Mat.

    first input array.

  • lowerb: Evision.scalar().

    inclusive lower boundary cv::Scalar.

  • upperb: Evision.scalar().

    inclusive upper boundary cv::Scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    output array of the same size as src and CV_8U type.

The function checks the range as follows:

  • For every element of a single-channel input array: \f[\texttt{dst} (I)= \texttt{lowerb}_0 \leq \texttt{src} (I)_0 \leq \texttt{upperb}_0\f]

  • For two-channel arrays: \f[\texttt{dst} (I)= \texttt{lowerb}_0 \leq \texttt{src} (I)_0 \leq \texttt{upperb}_0 \land \texttt{lowerb}_1 \leq \texttt{src} (I)_1 \leq \texttt{upperb}_1\f]

  • and so forth.

That is, dst(I) is set to 255 (all 1-bits) if src(I) is within the specified 1D, 2D, 3D, ... box and 0 otherwise. Note that unlike the CPU inRange, this does NOT accept an array for lowerb or upperb, only a cv::Scalar.

@sa cv::inRange

Python prototype (for reference only):

inRange(src, lowerb, upperb[, dst[, stream]]) -> dst

Variant 2:

Checks if array elements lie between two scalars.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    first input array.

  • lowerb: Evision.scalar().

    inclusive lower boundary cv::Scalar.

  • upperb: Evision.scalar().

    inclusive upper boundary cv::Scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    output array of the same size as src and CV_8U type.

The function checks the range as follows:

  • For every element of a single-channel input array: \f[\texttt{dst} (I)= \texttt{lowerb}_0 \leq \texttt{src} (I)_0 \leq \texttt{upperb}_0\f]

  • For two-channel arrays: \f[\texttt{dst} (I)= \texttt{lowerb}_0 \leq \texttt{src} (I)_0 \leq \texttt{upperb}_0 \land \texttt{lowerb}_1 \leq \texttt{src} (I)_1 \leq \texttt{upperb}_1\f]

  • and so forth.

That is, dst(I) is set to 255 (all 1-bits) if src(I) is within the specified 1D, 2D, 3D, ... box and 0 otherwise. Note that unlike the CPU inRange, this does NOT accept an array for lowerb or upperb, only a cv::Scalar.

@sa cv::inRange

Python prototype (for reference only):

inRange(src, lowerb, upperb[, dst[, stream]]) -> dst
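The per-channel AND described above can be reproduced on the CPU with NumPy, here for a two-channel array:

```python
import numpy as np

src = np.array([[[10, 200], [50, 50], [300, 40]]], dtype=np.float32)  # 1x3, 2 channels
lowerb = (0, 30)
upperb = (100, 100)

# dst(I) = 255 iff every channel of src(I) lies within [lowerb_c, upperb_c]
in_box = np.logical_and.reduce(
    [(src[..., c] >= lowerb[c]) & (src[..., c] <= upperb[c]) for c in range(2)]
)
dst = np.where(in_box, 255, 0).astype(np.uint8)
```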
@spec integral(Keyword.t()) :: any() | {:error, String.t()}
@spec integral(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec integral(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes an integral image.

Positional Arguments
  • src: Evision.Mat.

    Source image. Only CV_8UC1 images are supported for now.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • sum: Evision.Mat.t().

    Integral image containing 32-bit unsigned integer values packed into CV_32SC1 .

@sa integral

Python prototype (for reference only):

integral(src[, sum[, stream]]) -> sum

Variant 2:

Computes an integral image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Only CV_8UC1 images are supported for now.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • sum: Evision.CUDA.GpuMat.t().

    Integral image containing 32-bit unsigned integer values packed into CV_32SC1 .

@sa integral

Python prototype (for reference only):

integral(src[, sum[, stream]]) -> sum
@spec integral(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec integral(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes an integral image.

Positional Arguments
  • src: Evision.Mat.

    Source image. Only CV_8UC1 images are supported for now.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • sum: Evision.Mat.t().

    Integral image containing 32-bit unsigned integer values packed into CV_32SC1 .

@sa integral

Python prototype (for reference only):

integral(src[, sum[, stream]]) -> sum

Variant 2:

Computes an integral image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Only CV_8UC1 images are supported for now.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • sum: Evision.CUDA.GpuMat.t().

    Integral image containing 32-bit unsigned integer values packed into CV_32SC1 .

@sa integral

Python prototype (for reference only):

integral(src[, sum[, stream]]) -> sum
@spec log(Keyword.t()) :: any() | {:error, String.t()}
@spec log(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec log(Evision.CUDA.GpuMat.t()) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes a natural logarithm of absolute value of each matrix element.

Positional Arguments
  • src: Evision.Mat.

    Source matrix.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src .

@sa log

Python prototype (for reference only):

log(src[, dst[, stream]]) -> dst

Variant 2:

Computes a natural logarithm of absolute value of each matrix element.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

@sa log

Python prototype (for reference only):

log(src[, dst[, stream]]) -> dst
@spec log(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec log(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes a natural logarithm of absolute value of each matrix element.

Positional Arguments
  • src: Evision.Mat.

    Source matrix.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src .

@sa log

Python prototype (for reference only):

log(src[, dst[, stream]]) -> dst

Variant 2:

Computes a natural logarithm of absolute value of each matrix element.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

@sa log

Python prototype (for reference only):

log(src[, dst[, stream]]) -> dst
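Note that the operation is log(|src(I)|), not a plain logarithm; a NumPy sketch of the same element-wise computation:

```python
import numpy as np

src = np.array([-np.e, 1.0, np.e ** 2], dtype=np.float32)

# Natural logarithm of the absolute value of each element
dst = np.log(np.abs(src))
```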
@spec lshift(Keyword.t()) :: any() | {:error, String.t()}
@spec lshift(Evision.Mat.maybe_mat_in(), Evision.scalar()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec lshift(Evision.CUDA.GpuMat.t(), Evision.scalar()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a pixel-by-pixel left shift of an image by a constant value.

Positional Arguments
  • src: Evision.Mat.

    Source matrix. Supports 1, 3 and 4 channels images with CV_8U , CV_16U or CV_32S depth.

  • val: Evision.scalar().

    Constant values, one per channel.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src .

Python prototype (for reference only):

lshift(src, val[, dst[, stream]]) -> dst

Variant 2:

Performs a pixel-by-pixel left shift of an image by a constant value.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix. Supports 1, 3 and 4 channels images with CV_8U , CV_16U or CV_32S depth.

  • val: Evision.scalar().

    Constant values, one per channel.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

Python prototype (for reference only):

lshift(src, val[, dst[, stream]]) -> dst
@spec lshift(Evision.Mat.maybe_mat_in(), Evision.scalar(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec lshift(Evision.CUDA.GpuMat.t(), Evision.scalar(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a pixel-by-pixel left shift of an image by a constant value.

Positional Arguments
  • src: Evision.Mat.

    Source matrix. Supports 1, 3 and 4 channels images with CV_8U , CV_16U or CV_32S depth.

  • val: Evision.scalar().

    Constant values, one per channel.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src .

Python prototype (for reference only):

lshift(src, val[, dst[, stream]]) -> dst

Variant 2:

Performs a pixel-by-pixel left shift of an image by a constant value.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix. Supports 1, 3 and 4 channels images with CV_8U , CV_16U or CV_32S depth.

  • val: Evision.scalar().

    Constant values, one per channel.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

Python prototype (for reference only):

lshift(src, val[, dst[, stream]]) -> dst
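Each channel is shifted left by its own constant, i.e. multiplied by 2^val with wraparound in the integer type. A NumPy sketch for a single-channel CV_8U case:

```python
import numpy as np

src = np.array([[1, 2, 200]], dtype=np.uint8)
val = 2  # shift amount for this channel

# Left shift; results wrap modulo 256 in uint8 (200 << 2 = 800 -> 32)
dst = np.left_shift(src, val)
```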
@spec magnitude(Keyword.t()) :: any() | {:error, String.t()}
@spec magnitude(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec magnitude(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes magnitudes of complex matrix elements.

Positional Arguments
  • xy: Evision.Mat.

    Source complex matrix in the interleaved format ( CV_32FC2 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.Mat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

@sa magnitude

Python prototype (for reference only):

magnitude(xy[, magnitude[, stream]]) -> magnitude

Variant 2:

Computes magnitudes of complex matrix elements.

Positional Arguments
  • xy: Evision.CUDA.GpuMat.t().

    Source complex matrix in the interleaved format ( CV_32FC2 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.CUDA.GpuMat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

@sa magnitude

Python prototype (for reference only):

magnitude(xy[, magnitude[, stream]]) -> magnitude
@spec magnitude(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec magnitude(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}
@spec magnitude(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec magnitude(Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

magnitude

Positional Arguments
  • x: Evision.Mat.

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.Mat.

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.Mat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

Has overloading in C++

Computes the magnitude of each (x(i), y(i)) vector. Supports only a floating-point source.

Python prototype (for reference only):

magnitude(x, y[, magnitude[, stream]]) -> magnitude

Variant 2:

magnitude

Positional Arguments
  • x: Evision.CUDA.GpuMat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.CUDA.GpuMat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.CUDA.GpuMat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

Has overloading in C++

Computes the magnitude of each (x(i), y(i)) vector. Supports only a floating-point source.

Python prototype (for reference only):

magnitude(x, y[, magnitude[, stream]]) -> magnitude

Variant 3:

Computes magnitudes of complex matrix elements.

Positional Arguments
  • xy: Evision.Mat.

    Source complex matrix in the interleaved format ( CV_32FC2 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.Mat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

@sa magnitude

Python prototype (for reference only):

magnitude(xy[, magnitude[, stream]]) -> magnitude

Variant 4:

Computes magnitudes of complex matrix elements.

Positional Arguments
  • xy: Evision.CUDA.GpuMat.t().

    Source complex matrix in the interleaved format ( CV_32FC2 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.CUDA.GpuMat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

@sa magnitude

Python prototype (for reference only):

magnitude(xy[, magnitude[, stream]]) -> magnitude
@spec magnitude(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:stream, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec magnitude(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [{:stream, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

magnitude

Positional Arguments
  • x: Evision.Mat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.Mat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.Mat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

Has overloading in C++

Computes the magnitude of each (x(i), y(i)) vector. Supports only floating-point sources.

Python prototype (for reference only):

magnitude(x, y[, magnitude[, stream]]) -> magnitude

Variant 2:

magnitude

Positional Arguments
  • x: Evision.CUDA.GpuMat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.CUDA.GpuMat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.CUDA.GpuMat.t().

    Destination matrix of float magnitudes ( CV_32FC1 ).

Has overloading in C++

Computes the magnitude of each (x(i), y(i)) vector. Supports only floating-point sources.

Python prototype (for reference only):

magnitude(x, y[, magnitude[, stream]]) -> magnitude
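A minimal usage sketch of the Mat variant above. `Evision.Mat.literal/2` and a CUDA-capable runtime are assumed; error tuples are not handled:

```elixir
# Real and imaginary components as CV_32FC1 matrices.
x = Evision.Mat.literal([[3.0, 0.0]], :f32)
y = Evision.Mat.literal([[4.0, 1.0]], :f32)

# Per-element sqrt(x*x + y*y) -- 5.0 and 1.0 for the values above.
mag = Evision.CUDA.magnitude(x, y)
```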
Link to this function

magnitudeSqr(named_args)

View Source
@spec magnitudeSqr(Keyword.t()) :: any() | {:error, String.t()}
@spec magnitudeSqr(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec magnitudeSqr(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes squared magnitudes of complex matrix elements.

Positional Arguments
  • xy: Evision.Mat.t().

    Source complex matrix in the interleaved format ( CV_32FC2 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.Mat.t().

    Destination matrix of float magnitude squares ( CV_32FC1 ).

Python prototype (for reference only):

magnitudeSqr(xy[, magnitude[, stream]]) -> magnitude

Variant 2:

Computes squared magnitudes of complex matrix elements.

Positional Arguments
  • xy: Evision.CUDA.GpuMat.t().

    Source complex matrix in the interleaved format ( CV_32FC2 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.CUDA.GpuMat.t().

    Destination matrix of float magnitude squares ( CV_32FC1 ).

Python prototype (for reference only):

magnitudeSqr(xy[, magnitude[, stream]]) -> magnitude
@spec magnitudeSqr(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec magnitudeSqr(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}
@spec magnitudeSqr(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec magnitudeSqr(Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

magnitudeSqr

Positional Arguments
  • x: Evision.Mat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.Mat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.Mat.t().

    Destination matrix of float magnitude squares ( CV_32FC1 ).

Has overloading in C++

Computes the squared magnitude of each (x(i), y(i)) vector. Supports only floating-point sources.

Python prototype (for reference only):

magnitudeSqr(x, y[, magnitude[, stream]]) -> magnitude

Variant 2:

magnitudeSqr

Positional Arguments
  • x: Evision.CUDA.GpuMat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.CUDA.GpuMat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.CUDA.GpuMat.t().

    Destination matrix of float magnitude squares ( CV_32FC1 ).

Has overloading in C++

Computes the squared magnitude of each (x(i), y(i)) vector. Supports only floating-point sources.

Python prototype (for reference only):

magnitudeSqr(x, y[, magnitude[, stream]]) -> magnitude

Variant 3:

Computes squared magnitudes of complex matrix elements.

Positional Arguments
  • xy: Evision.Mat.t().

    Source complex matrix in the interleaved format ( CV_32FC2 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.Mat.t().

    Destination matrix of float magnitude squares ( CV_32FC1 ).

Python prototype (for reference only):

magnitudeSqr(xy[, magnitude[, stream]]) -> magnitude

Variant 4:

Computes squared magnitudes of complex matrix elements.

Positional Arguments
  • xy: Evision.CUDA.GpuMat.t().

    Source complex matrix in the interleaved format ( CV_32FC2 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.CUDA.GpuMat.t().

    Destination matrix of float magnitude squares ( CV_32FC1 ).

Python prototype (for reference only):

magnitudeSqr(xy[, magnitude[, stream]]) -> magnitude
Link to this function

magnitudeSqr(x, y, opts)

View Source
@spec magnitudeSqr(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:stream, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec magnitudeSqr(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [{:stream, term()}] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

magnitudeSqr

Positional Arguments
  • x: Evision.Mat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.Mat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.Mat.t().

    Destination matrix of float magnitude squares ( CV_32FC1 ).

Has overloading in C++

Computes the squared magnitude of each (x(i), y(i)) vector. Supports only floating-point sources.

Python prototype (for reference only):

magnitudeSqr(x, y[, magnitude[, stream]]) -> magnitude

Variant 2:

magnitudeSqr

Positional Arguments
  • x: Evision.CUDA.GpuMat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.CUDA.GpuMat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • magnitude: Evision.CUDA.GpuMat.t().

    Destination matrix of float magnitude squares ( CV_32FC1 ).

Has overloading in C++

Computes the squared magnitude of each (x(i), y(i)) vector. Supports only floating-point sources.

Python prototype (for reference only):

magnitudeSqr(x, y[, magnitude[, stream]]) -> magnitude
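When only the relative ordering of magnitudes matters (e.g. thresholding or peak picking), the squared variant skips the square root. A hedged sketch under the same assumptions as above (`Evision.Mat.literal/2`, CUDA device available):

```elixir
x = Evision.Mat.literal([[3.0]], :f32)
y = Evision.Mat.literal([[4.0]], :f32)

# Per-element x*x + y*y -- 25.0 here, with no sqrt performed.
sqr = Evision.CUDA.magnitudeSqr(x, y)
```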
@spec max(Keyword.t()) :: any() | {:error, String.t()}

Variant 1:

Computes the per-element maximum of two matrices (or a matrix and a scalar).

Positional Arguments
  • src1: Evision.Mat.t().

    First source matrix or scalar.

  • src2: Evision.Mat.t().

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa max

Python prototype (for reference only):

max(src1, src2[, dst[, stream]]) -> dst

Variant 2:

Computes the per-element maximum of two matrices (or a matrix and a scalar).

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa max

Python prototype (for reference only):

max(src1, src2[, dst[, stream]]) -> dst
@spec max(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:stream, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec max(Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes the per-element maximum of two matrices (or a matrix and a scalar).

Positional Arguments
  • src1: Evision.Mat.t().

    First source matrix or scalar.

  • src2: Evision.Mat.t().

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa max

Python prototype (for reference only):

max(src1, src2[, dst[, stream]]) -> dst

Variant 2:

Computes the per-element maximum of two matrices (or a matrix and a scalar).

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa max

Python prototype (for reference only):

max(src1, src2[, dst[, stream]]) -> dst
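A short sketch of the per-element maximum. `Evision.Mat.literal/2` and a CUDA runtime are assumed; this is an illustration, not a definitive recipe:

```elixir
a = Evision.Mat.literal([[1, 7], [5, 2]], :u8)
b = Evision.Mat.literal([[4, 3], [6, 0]], :u8)

# Element-wise maximum: [[4, 7], [6, 2]].
dst = Evision.CUDA.max(a, b)
```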
Link to this function

meanShiftFiltering(named_args)

View Source
@spec meanShiftFiltering(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

meanShiftFiltering(src, sp, sr)

View Source
@spec meanShiftFiltering(Evision.Mat.maybe_mat_in(), integer(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec meanShiftFiltering(Evision.CUDA.GpuMat.t(), integer(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs mean-shift filtering for each point of the source image.

Positional Arguments
  • src: Evision.Mat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image containing the color of mapped points. It has the same size and type as src.

It maps each point of the source image into another point. As a result, you have a new color and new position of each point.

Python prototype (for reference only):

meanShiftFiltering(src, sp, sr[, dst[, criteria[, stream]]]) -> dst

Variant 2:

Performs mean-shift filtering for each point of the source image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image containing the color of mapped points. It has the same size and type as src.

It maps each point of the source image into another point. As a result, you have a new color and new position of each point.

Python prototype (for reference only):

meanShiftFiltering(src, sp, sr[, dst[, criteria[, stream]]]) -> dst
Link to this function

meanShiftFiltering(src, sp, sr, opts)

View Source
@spec meanShiftFiltering(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  [criteria: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec meanShiftFiltering(
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  [criteria: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs mean-shift filtering for each point of the source image.

Positional Arguments
  • src: Evision.Mat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image containing the color of mapped points. It has the same size and type as src.

It maps each point of the source image into another point. As a result, you have a new color and new position of each point.

Python prototype (for reference only):

meanShiftFiltering(src, sp, sr[, dst[, criteria[, stream]]]) -> dst

Variant 2:

Performs mean-shift filtering for each point of the source image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image containing the color of mapped points. It has the same size and type as src.

It maps each point of the source image into another point. As a result, you have a new color and new position of each point.

Python prototype (for reference only):

meanShiftFiltering(src, sp, sr[, dst[, criteria[, stream]]]) -> dst
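Since only CV_8UC4 input is supported, a BGR image typically needs an alpha channel added first. A hedged sketch: `Evision.cvtColor/2`, `Evision.Constant.cv_COLOR_BGR2BGRA/0`, and `Evision.CUDA.GpuMat.gpuMat/1` are assumed names, and `img` is a hypothetical CV_8UC3 image:

```elixir
# Pad BGR to BGRA, then upload to the GPU.
bgra = Evision.cvtColor(img, Evision.Constant.cv_COLOR_BGR2BGRA())
gpu  = Evision.CUDA.GpuMat.gpuMat(bgra)

# sp = 20 (spatial window radius), sr = 30 (color window radius).
filtered = Evision.CUDA.meanShiftFiltering(gpu, 20, 30)
```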
Link to this function

meanShiftProc(named_args)

View Source
@spec meanShiftProc(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

meanShiftProc(src, sp, sr)

View Source
@spec meanShiftProc(Evision.Mat.maybe_mat_in(), integer(), integer()) ::
  {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
@spec meanShiftProc(Evision.CUDA.GpuMat.t(), integer(), integer()) ::
  {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

Variant 1:

Performs a mean-shift procedure and stores information about processed points (their colors and positions) in two images.

Positional Arguments
  • src: Evision.Mat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dstr: Evision.Mat.t().

    Destination image containing the color of mapped points. The size and type are the same as src.

  • dstsp: Evision.Mat.t().

    Destination image containing the position of mapped points. The size is the same as the src size. The type is CV_16SC2.

@sa cuda::meanShiftFiltering

Python prototype (for reference only):

meanShiftProc(src, sp, sr[, dstr[, dstsp[, criteria[, stream]]]]) -> dstr, dstsp

Variant 2:

Performs a mean-shift procedure and stores information about processed points (their colors and positions) in two images.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dstr: Evision.CUDA.GpuMat.t().

    Destination image containing the color of mapped points. The size and type are the same as src.

  • dstsp: Evision.CUDA.GpuMat.t().

    Destination image containing the position of mapped points. The size is the same as the src size. The type is CV_16SC2.

@sa cuda::meanShiftFiltering

Python prototype (for reference only):

meanShiftProc(src, sp, sr[, dstr[, dstsp[, criteria[, stream]]]]) -> dstr, dstsp
Link to this function

meanShiftProc(src, sp, sr, opts)

View Source
@spec meanShiftProc(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  [criteria: term(), stream: term()] | nil
) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
@spec meanShiftProc(
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  [criteria: term(), stream: term()] | nil
) :: {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

Variant 1:

Performs a mean-shift procedure and stores information about processed points (their colors and positions) in two images.

Positional Arguments
  • src: Evision.Mat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dstr: Evision.Mat.t().

    Destination image containing the color of mapped points. The size and type are the same as src.

  • dstsp: Evision.Mat.t().

    Destination image containing the position of mapped points. The size is the same as the src size. The type is CV_16SC2.

@sa cuda::meanShiftFiltering

Python prototype (for reference only):

meanShiftProc(src, sp, sr[, dstr[, dstsp[, criteria[, stream]]]]) -> dstr, dstsp

Variant 2:

Performs a mean-shift procedure and stores information about processed points (their colors and positions) in two images.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dstr: Evision.CUDA.GpuMat.t().

    Destination image containing the color of mapped points. The size and type are the same as src.

  • dstsp: Evision.CUDA.GpuMat.t().

    Destination image containing the position of mapped points. The size is the same as the src size. The type is CV_16SC2.

@sa cuda::meanShiftFiltering

Python prototype (for reference only):

meanShiftProc(src, sp, sr[, dstr[, dstsp[, criteria[, stream]]]]) -> dstr, dstsp
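Compared with meanShiftFiltering, this variant also returns where each pixel converged, which is useful as input to custom clustering. A sketch in which `gpu_bgra` is a hypothetical CV_8UC4 GpuMat on a CUDA device:

```elixir
{colors, positions} = Evision.CUDA.meanShiftProc(gpu_bgra, 20, 30)
# `colors` matches meanShiftFiltering's output; `positions` is a
# CV_16SC2 map of the (x, y) point each pixel converged to.
```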
Link to this function

meanShiftSegmentation(named_args)

View Source
@spec meanShiftSegmentation(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

meanShiftSegmentation(src, sp, sr, minsize)

View Source
@spec meanShiftSegmentation(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  integer()
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec meanShiftSegmentation(Evision.CUDA.GpuMat.t(), integer(), integer(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a mean-shift segmentation of the source image and eliminates small segments.

Positional Arguments
  • src: Evision.Mat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

  • minsize: integer().

    Minimum segment size. Smaller segments are merged.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Segmented image with the same size and type as src (host or gpu memory).

Python prototype (for reference only):

meanShiftSegmentation(src, sp, sr, minsize[, dst[, criteria[, stream]]]) -> dst

Variant 2:

Performs a mean-shift segmentation of the source image and eliminates small segments.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

  • minsize: integer().

    Minimum segment size. Smaller segments are merged.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Segmented image with the same size and type as src (host or gpu memory).

Python prototype (for reference only):

meanShiftSegmentation(src, sp, sr, minsize[, dst[, criteria[, stream]]]) -> dst
Link to this function

meanShiftSegmentation(src, sp, sr, minsize, opts)

View Source
@spec meanShiftSegmentation(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  integer(),
  [criteria: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec meanShiftSegmentation(
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  integer(),
  [criteria: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a mean-shift segmentation of the source image and eliminates small segments.

Positional Arguments
  • src: Evision.Mat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

  • minsize: integer().

    Minimum segment size. Smaller segments are merged.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Segmented image with the same size and type as src (host or gpu memory).

Python prototype (for reference only):

meanShiftSegmentation(src, sp, sr, minsize[, dst[, criteria[, stream]]]) -> dst

Variant 2:

Performs a mean-shift segmentation of the source image and eliminates small segments.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Only CV_8UC4 images are supported for now.

  • sp: integer().

    Spatial window radius.

  • sr: integer().

    Color window radius.

  • minsize: integer().

    Minimum segment size. Smaller segments are merged.

Keyword Arguments
  • criteria: TermCriteria.

    Termination criteria. See TermCriteria.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Segmented image with the same size and type as src (host or gpu memory).

Python prototype (for reference only):

meanShiftSegmentation(src, sp, sr, minsize[, dst[, criteria[, stream]]]) -> dst
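A sketch of segmentation with small-segment merging; `gpu_bgra` is a hypothetical CV_8UC4 GpuMat and a CUDA device is assumed:

```elixir
# Segments smaller than minsize = 50 pixels are merged into neighbors.
segmented = Evision.CUDA.meanShiftSegmentation(gpu_bgra, 20, 30, 50)
```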
@spec meanStdDev(Keyword.t()) :: any() | {:error, String.t()}
@spec meanStdDev(Evision.Mat.maybe_mat_in()) ::
  {Evision.scalar(), Evision.scalar()} | {:error, String.t()}
@spec meanStdDev(Evision.CUDA.GpuMat.t()) ::
  {Evision.scalar(), Evision.scalar()} | {:error, String.t()}

Variant 1:

meanStdDev

Positional Arguments
  • mtx: Evision.Mat.t().

    Source matrix. CV_8UC1 and CV_32FC1 matrices are supported for now.

Return
  • mean: Evision.scalar().

    Mean value.

  • stddev: Evision.scalar().

    Standard deviation value.

Has overloading in C++

Python prototype (for reference only):

meanStdDev(mtx) -> mean, stddev

Variant 2:

meanStdDev

Positional Arguments
  • mtx: Evision.CUDA.GpuMat.t().

    Source matrix. CV_8UC1 and CV_32FC1 matrices are supported for now.

Return
  • mean: Evision.scalar().

    Mean value.

  • stddev: Evision.scalar().

    Standard deviation value.

Has overloading in C++

Python prototype (for reference only):

meanStdDev(mtx) -> mean, stddev

Variant 1:

meanStdDev

Positional Arguments
  • src: Evision.Mat.t().

    Source matrix. CV_8UC1 and CV_32FC1 matrices are supported for now.

  • mask: Evision.Mat.t().

    Operation mask.

Return
  • mean: Evision.scalar().

    Mean value.

  • stddev: Evision.scalar().

    Standard deviation value.

Has overloading in C++

Python prototype (for reference only):

meanStdDev(src, mask) -> mean, stddev

Variant 2:

meanStdDev

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix. CV_8UC1 and CV_32FC1 matrices are supported for now.

  • mask: Evision.CUDA.GpuMat.t().

    Operation mask.

Return
  • mean: Evision.scalar().

    Mean value.

  • stddev: Evision.scalar().

    Standard deviation value.

Has overloading in C++

Python prototype (for reference only):

meanStdDev(src, mask) -> mean, stddev
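A minimal sketch of the single-argument overload (assumes `Evision.Mat.literal/2` and a CUDA runtime):

```elixir
m = Evision.Mat.literal([[0, 255], [255, 0]], :u8)

# For this input the mean and the standard deviation are both 127.5.
{mean, stddev} = Evision.CUDA.meanStdDev(m)
```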
@spec merge(Keyword.t()) :: any() | {:error, String.t()}
@spec merge([Evision.CUDA.GpuMat.t()]) :: Evision.Mat.t() | {:error, String.t()}

merge

Positional Arguments
  • src: [Evision.CUDA.GpuMat.t()]
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

merge(src[, dst[, stream]]) -> dst
@spec merge([Evision.CUDA.GpuMat.t()], [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

merge

Positional Arguments
  • src: [Evision.CUDA.GpuMat.t()]
Keyword Arguments
  • stream: Evision.CUDA.Stream.t().
Return
  • dst: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

merge(src[, dst[, stream]]) -> dst
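merge is the inverse of a channel split. A hedged sketch, assuming `Evision.CUDA.split/1` exists with the obvious meaning and `gpu_bgr` is a hypothetical 3-channel GpuMat:

```elixir
[b, g, r] = Evision.CUDA.split(gpu_bgr)

# Recombine the single-channel planes into one 3-channel matrix.
merged = Evision.CUDA.merge([b, g, r])
```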
@spec min(Keyword.t()) :: any() | {:error, String.t()}

Variant 1:

Computes the per-element minimum of two matrices (or a matrix and a scalar).

Positional Arguments
  • src1: Evision.Mat.t().

    First source matrix or scalar.

  • src2: Evision.Mat.t().

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa min

Python prototype (for reference only):

min(src1, src2[, dst[, stream]]) -> dst

Variant 2:

Computes the per-element minimum of two matrices (or a matrix and a scalar).

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa min

Python prototype (for reference only):

min(src1, src2[, dst[, stream]]) -> dst
@spec min(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:stream, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec min(Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes the per-element minimum of two matrices (or a matrix and a scalar).

Positional Arguments
  • src1: Evision.Mat.t().

    First source matrix or scalar.

  • src2: Evision.Mat.t().

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa min

Python prototype (for reference only):

min(src1, src2[, dst[, stream]]) -> dst

Variant 2:

Computes the per-element minimum of two matrices (or a matrix and a scalar).

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and type as the input array(s).

@sa min

Python prototype (for reference only):

min(src1, src2[, dst[, stream]]) -> dst
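The per-element minimum mirrors max above. Same assumptions (`Evision.Mat.literal/2`, CUDA device available):

```elixir
a = Evision.Mat.literal([[1, 7], [5, 2]], :u8)
b = Evision.Mat.literal([[4, 3], [6, 0]], :u8)

# Element-wise minimum: [[1, 3], [5, 0]].
dst = Evision.CUDA.min(a, b)
```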
@spec minMax(Keyword.t()) :: any() | {:error, String.t()}
@spec minMax(Evision.Mat.maybe_mat_in()) ::
  {number(), number()} | {:error, String.t()}
@spec minMax(Evision.CUDA.GpuMat.t()) :: {number(), number()} | {:error, String.t()}

Variant 1:

Finds global minimum and maximum matrix elements and returns their values.

Positional Arguments
  • src: Evision.Mat.t().

    Single-channel source image.

Keyword Arguments
  • mask: Evision.Mat.t().

    Optional mask to select a sub-matrix.

Return
  • minVal: double*.

    Pointer to the returned minimum value. Use NULL if not required.

  • maxVal: double*.

    Pointer to the returned maximum value. Use NULL if not required.

The function does not work with CV_64F images on GPUs with the compute capability < 1.3. @sa minMaxLoc

Python prototype (for reference only):

minMax(src[, mask]) -> minVal, maxVal

Variant 2:

Finds global minimum and maximum matrix elements and returns their values.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Single-channel source image.

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Optional mask to select a sub-matrix.

Return
  • minVal: double*.

    Pointer to the returned minimum value. Use NULL if not required.

  • maxVal: double*.

    Pointer to the returned maximum value. Use NULL if not required.

The function does not work with CV_64F images on GPUs with the compute capability < 1.3. @sa minMaxLoc

Python prototype (for reference only):

minMax(src[, mask]) -> minVal, maxVal
@spec minMax(Evision.Mat.maybe_mat_in(), [{:mask, term()}] | nil) ::
  {number(), number()} | {:error, String.t()}
@spec minMax(Evision.CUDA.GpuMat.t(), [{:mask, term()}] | nil) ::
  {number(), number()} | {:error, String.t()}

Variant 1:

Finds global minimum and maximum matrix elements and returns their values.

Positional Arguments
  • src: Evision.Mat.t().

    Single-channel source image.

Keyword Arguments
  • mask: Evision.Mat.t().

    Optional mask to select a sub-matrix.

Return
  • minVal: double*.

    Pointer to the returned minimum value. Use NULL if not required.

  • maxVal: double*.

    Pointer to the returned maximum value. Use NULL if not required.

The function does not work with CV_64F images on GPUs with the compute capability < 1.3. @sa minMaxLoc

Python prototype (for reference only):

minMax(src[, mask]) -> minVal, maxVal

Variant 2:

Finds global minimum and maximum matrix elements and returns their values.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Single-channel source image.

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Optional mask to select a sub-matrix.

Return
  • minVal: double*.

    Pointer to the returned minimum value. Use NULL if not required.

  • maxVal: double*.

    Pointer to the returned maximum value. Use NULL if not required.

The function does not work with CV_64F images on GPUs with the compute capability < 1.3. @sa minMaxLoc

Python prototype (for reference only):

minMax(src[, mask]) -> minVal, maxVal
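Unlike the C++ API's out-pointers, the Elixir binding returns both values directly. A sketch under the same assumptions as the examples above:

```elixir
src = Evision.Mat.literal([[3, 9, 1]], :u8)

# Returns {min_val, max_val} as doubles: {1.0, 9.0} here.
{min_val, max_val} = Evision.CUDA.minMax(src)
```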
@spec minMaxLoc(Keyword.t()) :: any() | {:error, String.t()}
@spec minMaxLoc(Evision.Mat.maybe_mat_in()) ::
  {number(), number(), {number(), number()}, {number(), number()}}
  | {:error, String.t()}
@spec minMaxLoc(Evision.CUDA.GpuMat.t()) ::
  {number(), number(), {number(), number()}, {number(), number()}}
  | {:error, String.t()}

Variant 1:

Finds global minimum and maximum matrix elements and returns their values with locations.

Positional Arguments
  • src: Evision.Mat.t().

    Single-channel source image.

Keyword Arguments
  • mask: Evision.Mat.t().

    Optional mask to select a sub-matrix.

Return
  • minVal: double*.

    Pointer to the returned minimum value. Use NULL if not required.

  • maxVal: double*.

    Pointer to the returned maximum value. Use NULL if not required.

  • minLoc: Point*.

    Pointer to the returned minimum location. Use NULL if not required.

  • maxLoc: Point*.

    Pointer to the returned maximum location. Use NULL if not required.

The function does not work with CV_64F images on GPUs with the compute capability < 1.3. @sa minMaxLoc

Python prototype (for reference only):

minMaxLoc(src[, mask]) -> minVal, maxVal, minLoc, maxLoc

Variant 2:

Finds global minimum and maximum matrix elements and returns their values with locations.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Single-channel source image.

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Optional mask to select a sub-matrix.

Return
  • minVal: double*.

    Pointer to the returned minimum value. Use NULL if not required.

  • maxVal: double*.

    Pointer to the returned maximum value. Use NULL if not required.

  • minLoc: Point*.

    Pointer to the returned minimum location. Use NULL if not required.

  • maxLoc: Point*.

    Pointer to the returned maximum location. Use NULL if not required.

The function does not work with CV_64F images on GPUs with compute capability < 1.3. @sa minMaxLoc

Python prototype (for reference only):

minMaxLoc(src[, mask]) -> minVal, maxVal, minLoc, maxLoc
@spec minMaxLoc(Evision.Mat.maybe_mat_in(), [{:mask, term()}] | nil) ::
  {number(), number(), {number(), number()}, {number(), number()}}
  | {:error, String.t()}
@spec minMaxLoc(Evision.CUDA.GpuMat.t(), [{:mask, term()}] | nil) ::
  {number(), number(), {number(), number()}, {number(), number()}}
  | {:error, String.t()}

Variant 1:

Finds global minimum and maximum matrix elements and returns their values with locations.

Positional Arguments
  • src: Evision.Mat.

    Single-channel source image.

Keyword Arguments
  • mask: Evision.Mat.

    Optional mask to select a sub-matrix.

Return
  • minVal: double*.

    Pointer to the returned minimum value. Use NULL if not required.

  • maxVal: double*.

    Pointer to the returned maximum value. Use NULL if not required.

  • minLoc: Point*.

    Pointer to the returned minimum location. Use NULL if not required.

  • maxLoc: Point*.

    Pointer to the returned maximum location. Use NULL if not required.

The function does not work with CV_64F images on GPUs with compute capability < 1.3. @sa minMaxLoc

Python prototype (for reference only):

minMaxLoc(src[, mask]) -> minVal, maxVal, minLoc, maxLoc

Variant 2:

Finds global minimum and maximum matrix elements and returns their values with locations.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Single-channel source image.

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Optional mask to select a sub-matrix.

Return
  • minVal: double*.

    Pointer to the returned minimum value. Use NULL if not required.

  • maxVal: double*.

    Pointer to the returned maximum value. Use NULL if not required.

  • minLoc: Point*.

    Pointer to the returned minimum location. Use NULL if not required.

  • maxLoc: Point*.

    Pointer to the returned maximum location. Use NULL if not required.

The function does not work with CV_64F images on GPUs with compute capability < 1.3. @sa minMaxLoc

Python prototype (for reference only):

minMaxLoc(src[, mask]) -> minVal, maxVal, minLoc, maxLoc
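A sketch of locating the brightest pixel (hypothetical file name; assumes a CUDA-enabled build and `Evision.CUDA.GpuMat.gpuMat/1` for upload), e.g. to find a template-matching peak:

```elixir
# Find value extrema and their locations in a single-channel image on the GPU.
img = Evision.imread("scene.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
gpu_img = Evision.CUDA.GpuMat.gpuMat(img)

{_min_val, max_val, _min_loc, {max_x, max_y}} = Evision.CUDA.minMaxLoc(gpu_img)
```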
@spec moments(Keyword.t()) :: any() | {:error, String.t()}
@spec moments(Evision.Mat.maybe_mat_in()) :: map() | {:error, String.t()}
@spec moments(Evision.CUDA.GpuMat.t()) :: map() | {:error, String.t()}

Variant 1:

Calculates all of the moments up to the 3rd order of a rasterized shape.

Positional Arguments
  • src: Evision.Mat.

    Raster image (single-channel 2D array).

Keyword Arguments
  • binaryImage: bool.

    If it is true, all non-zero image pixels are treated as 1's.

  • order: MomentsOrder.

    Order of largest moments to calculate with lower order moments requiring less computation.

  • momentsType: integer().

    Precision to use when calculating moments. Available types are CV_32F and CV_64F, with the performance of CV_32F an order of magnitude greater than CV_64F. If the image is small, the accuracy from CV_32F can be equal or very close to CV_64F.

Return
  • retval: Moments

The function computes moments, up to the 3rd order, of a rasterized shape. The results are returned in the structure cv::Moments.

Note: For maximum performance use the asynchronous version cuda::spatialMoments(), as this version internally allocates and deallocates both GpuMat and HostMem to perform the calculation on the device and download the result to the host, respectively. The costly HostMem allocation cannot be avoided; however, the GpuMat device allocation can be, by using BufferPool, e.g.

setBufferPoolUsage(true);
setBufferPoolConfig(getDevice(), numMoments(order) * ((momentsType == CV_64F) ? sizeof(double) : sizeof(float)), 1);

See the CUDA_TEST_P(Moments, Accuracy) test inside opencv_contrib_source_code/modules/cudaimgproc/test/test_moments.cpp for an example. @returns cv::Moments. @sa cuda::spatialMoments, cuda::convertSpatialMoments, cuda::numMoments, cuda::MomentsOrder

Python prototype (for reference only):

moments(src[, binaryImage[, order[, momentsType]]]) -> retval

Variant 2:

Calculates all of the moments up to the 3rd order of a rasterized shape.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Raster image (single-channel 2D array).

Keyword Arguments
  • binaryImage: bool.

    If it is true, all non-zero image pixels are treated as 1's.

  • order: MomentsOrder.

    Order of largest moments to calculate with lower order moments requiring less computation.

  • momentsType: integer().

    Precision to use when calculating moments. Available types are CV_32F and CV_64F, with the performance of CV_32F an order of magnitude greater than CV_64F. If the image is small, the accuracy from CV_32F can be equal or very close to CV_64F.

Return
  • retval: Moments

The function computes moments, up to the 3rd order, of a rasterized shape. The results are returned in the structure cv::Moments.

Note: For maximum performance use the asynchronous version cuda::spatialMoments(), as this version internally allocates and deallocates both GpuMat and HostMem to perform the calculation on the device and download the result to the host, respectively. The costly HostMem allocation cannot be avoided; however, the GpuMat device allocation can be, by using BufferPool, e.g.

setBufferPoolUsage(true);
setBufferPoolConfig(getDevice(), numMoments(order) * ((momentsType == CV_64F) ? sizeof(double) : sizeof(float)), 1);

See the CUDA_TEST_P(Moments, Accuracy) test inside opencv_contrib_source_code/modules/cudaimgproc/test/test_moments.cpp for an example. @returns cv::Moments. @sa cuda::spatialMoments, cuda::convertSpatialMoments, cuda::numMoments, cuda::MomentsOrder

Python prototype (for reference only):

moments(src[, binaryImage[, order[, momentsType]]]) -> retval
@spec moments(
  Evision.Mat.maybe_mat_in(),
  [binaryImage: term(), momentsType: term(), order: term()] | nil
) :: map() | {:error, String.t()}
@spec moments(
  Evision.CUDA.GpuMat.t(),
  [binaryImage: term(), momentsType: term(), order: term()] | nil
) ::
  map() | {:error, String.t()}

Variant 1:

Calculates all of the moments up to the 3rd order of a rasterized shape.

Positional Arguments
  • src: Evision.Mat.

    Raster image (single-channel 2D array).

Keyword Arguments
  • binaryImage: bool.

    If it is true, all non-zero image pixels are treated as 1's.

  • order: MomentsOrder.

    Order of largest moments to calculate with lower order moments requiring less computation.

  • momentsType: integer().

    Precision to use when calculating moments. Available types are CV_32F and CV_64F, with the performance of CV_32F an order of magnitude greater than CV_64F. If the image is small, the accuracy from CV_32F can be equal or very close to CV_64F.

Return
  • retval: Moments

The function computes moments, up to the 3rd order, of a rasterized shape. The results are returned in the structure cv::Moments.

Note: For maximum performance use the asynchronous version cuda::spatialMoments(), as this version internally allocates and deallocates both GpuMat and HostMem to perform the calculation on the device and download the result to the host, respectively. The costly HostMem allocation cannot be avoided; however, the GpuMat device allocation can be, by using BufferPool, e.g.

setBufferPoolUsage(true);
setBufferPoolConfig(getDevice(), numMoments(order) * ((momentsType == CV_64F) ? sizeof(double) : sizeof(float)), 1);

See the CUDA_TEST_P(Moments, Accuracy) test inside opencv_contrib_source_code/modules/cudaimgproc/test/test_moments.cpp for an example. @returns cv::Moments. @sa cuda::spatialMoments, cuda::convertSpatialMoments, cuda::numMoments, cuda::MomentsOrder

Python prototype (for reference only):

moments(src[, binaryImage[, order[, momentsType]]]) -> retval

Variant 2:

Calculates all of the moments up to the 3rd order of a rasterized shape.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Raster image (single-channel 2D array).

Keyword Arguments
  • binaryImage: bool.

    If it is true, all non-zero image pixels are treated as 1's.

  • order: MomentsOrder.

    Order of largest moments to calculate with lower order moments requiring less computation.

  • momentsType: integer().

    Precision to use when calculating moments. Available types are CV_32F and CV_64F, with the performance of CV_32F an order of magnitude greater than CV_64F. If the image is small, the accuracy from CV_32F can be equal or very close to CV_64F.

Return
  • retval: Moments

The function computes moments, up to the 3rd order, of a rasterized shape. The results are returned in the structure cv::Moments.

Note: For maximum performance use the asynchronous version cuda::spatialMoments(), as this version internally allocates and deallocates both GpuMat and HostMem to perform the calculation on the device and download the result to the host, respectively. The costly HostMem allocation cannot be avoided; however, the GpuMat device allocation can be, by using BufferPool, e.g.

setBufferPoolUsage(true);
setBufferPoolConfig(getDevice(), numMoments(order) * ((momentsType == CV_64F) ? sizeof(double) : sizeof(float)), 1);

See the CUDA_TEST_P(Moments, Accuracy) test inside opencv_contrib_source_code/modules/cudaimgproc/test/test_moments.cpp for an example. @returns cv::Moments. @sa cuda::spatialMoments, cuda::convertSpatialMoments, cuda::numMoments, cuda::MomentsOrder

Python prototype (for reference only):

moments(src[, binaryImage[, order[, momentsType]]]) -> retval
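A sketch of computing a shape's centroid from its raw moments (hypothetical file name; assumes a CUDA-enabled build and that the returned map exposes the cv::Moments fields m00, m10, m01):

```elixir
# Compute moments of a binary shape on the GPU and derive its centroid
# as (m10 / m00, m01 / m00).
img = Evision.imread("shape.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
gpu_shape = Evision.CUDA.GpuMat.gpuMat(img)

%{m00: m00, m10: m10, m01: m01} = Evision.CUDA.moments(gpu_shape, binaryImage: true)
centroid = {m10 / m00, m01 / m00}
```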
Link to this function

mulAndScaleSpectrums(named_args)

View Source
@spec mulAndScaleSpectrums(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

mulAndScaleSpectrums(src1, src2, flags, scale)

View Source
@spec mulAndScaleSpectrums(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  number()
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec mulAndScaleSpectrums(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  integer(),
  number()
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a per-element multiplication of two Fourier spectrums and scales the result.

Positional Arguments
  • src1: Evision.Mat.

    First spectrum.

  • src2: Evision.Mat.

    Second spectrum with the same size and type as src1.

  • flags: integer().

    Mock parameter kept for CPU/CUDA interface similarity; simply pass 0.

  • scale: float.

    Scale constant.

Keyword Arguments
  • conjB: bool.

    Optional flag to specify if the second spectrum needs to be conjugated before the multiplication.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination spectrum.

Only full (not packed) CV_32FC2 complex spectrums in the interleaved format are supported for now. @sa mulSpectrums

Python prototype (for reference only):

mulAndScaleSpectrums(src1, src2, flags, scale[, dst[, conjB[, stream]]]) -> dst

Variant 2:

Performs a per-element multiplication of two Fourier spectrums and scales the result.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First spectrum.

  • src2: Evision.CUDA.GpuMat.t().

    Second spectrum with the same size and type as src1.

  • flags: integer().

    Mock parameter kept for CPU/CUDA interface similarity; simply pass 0.

  • scale: float.

    Scale constant.

Keyword Arguments
  • conjB: bool.

    Optional flag to specify if the second spectrum needs to be conjugated before the multiplication.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination spectrum.

Only full (not packed) CV_32FC2 complex spectrums in the interleaved format are supported for now. @sa mulSpectrums

Python prototype (for reference only):

mulAndScaleSpectrums(src1, src2, flags, scale[, dst[, conjB[, stream]]]) -> dst
Link to this function

mulAndScaleSpectrums(src1, src2, flags, scale, opts)

View Source
@spec mulAndScaleSpectrums(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  number(),
  [conjB: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec mulAndScaleSpectrums(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  integer(),
  number(),
  [conjB: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a per-element multiplication of two Fourier spectrums and scales the result.

Positional Arguments
  • src1: Evision.Mat.

    First spectrum.

  • src2: Evision.Mat.

    Second spectrum with the same size and type as src1.

  • flags: integer().

    Mock parameter kept for CPU/CUDA interface similarity; simply pass 0.

  • scale: float.

    Scale constant.

Keyword Arguments
  • conjB: bool.

    Optional flag to specify if the second spectrum needs to be conjugated before the multiplication.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination spectrum.

Only full (not packed) CV_32FC2 complex spectrums in the interleaved format are supported for now. @sa mulSpectrums

Python prototype (for reference only):

mulAndScaleSpectrums(src1, src2, flags, scale[, dst[, conjB[, stream]]]) -> dst

Variant 2:

Performs a per-element multiplication of two Fourier spectrums and scales the result.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First spectrum.

  • src2: Evision.CUDA.GpuMat.t().

    Second spectrum with the same size and type as src1.

  • flags: integer().

    Mock parameter kept for CPU/CUDA interface similarity; simply pass 0.

  • scale: float.

    Scale constant.

Keyword Arguments
  • conjB: bool.

    Optional flag to specify if the second spectrum needs to be conjugated before the multiplication.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination spectrum.

Only full (not packed) CV_32FC2 complex spectrums in the interleaved format are supported for now. @sa mulSpectrums

Python prototype (for reference only):

mulAndScaleSpectrums(src1, src2, flags, scale[, dst[, conjB[, stream]]]) -> dst
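A sketch of the pointwise-multiply-and-scale step of DFT-based convolution (hypothetical variables: `spec_a` and `spec_b` are assumed to be same-size, full CV_32FC2 spectra already on the GPU, e.g. produced by a forward DFT of 256x256 inputs):

```elixir
# Multiply two Fourier spectra element-wise and fold in the 1/N scaling
# normally applied when normalizing the inverse DFT. With conjB: true the
# second spectrum is conjugated, turning the product into a correlation.
rows = 256
cols = 256
scale = 1.0 / (rows * cols)

product = Evision.CUDA.mulAndScaleSpectrums(spec_a, spec_b, 0, scale, conjB: true)
```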
Link to this function

mulSpectrums(named_args)

View Source
@spec mulSpectrums(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

mulSpectrums(src1, src2, flags)

View Source
@spec mulSpectrums(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec mulSpectrums(Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a per-element multiplication of two Fourier spectrums.

Positional Arguments
  • src1: Evision.Mat.

    First spectrum.

  • src2: Evision.Mat.

    Second spectrum with the same size and type as src1.

  • flags: integer().

    Mock parameter kept for CPU/CUDA interface similarity; simply pass 0.

Keyword Arguments
  • conjB: bool.

    Optional flag to specify if the second spectrum needs to be conjugated before the multiplication.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination spectrum.

Only full (not packed) CV_32FC2 complex spectrums in the interleaved format are supported for now. @sa mulSpectrums

Python prototype (for reference only):

mulSpectrums(src1, src2, flags[, dst[, conjB[, stream]]]) -> dst

Variant 2:

Performs a per-element multiplication of two Fourier spectrums.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First spectrum.

  • src2: Evision.CUDA.GpuMat.t().

    Second spectrum with the same size and type as src1.

  • flags: integer().

    Mock parameter kept for CPU/CUDA interface similarity; simply pass 0.

Keyword Arguments
  • conjB: bool.

    Optional flag to specify if the second spectrum needs to be conjugated before the multiplication.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination spectrum.

Only full (not packed) CV_32FC2 complex spectrums in the interleaved format are supported for now. @sa mulSpectrums

Python prototype (for reference only):

mulSpectrums(src1, src2, flags[, dst[, conjB[, stream]]]) -> dst
Link to this function

mulSpectrums(src1, src2, flags, opts)

View Source
@spec mulSpectrums(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  [conjB: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec mulSpectrums(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  integer(),
  [conjB: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a per-element multiplication of two Fourier spectrums.

Positional Arguments
  • src1: Evision.Mat.

    First spectrum.

  • src2: Evision.Mat.

    Second spectrum with the same size and type as src1.

  • flags: integer().

    Mock parameter kept for CPU/CUDA interface similarity; simply pass 0.

Keyword Arguments
  • conjB: bool.

    Optional flag to specify if the second spectrum needs to be conjugated before the multiplication.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination spectrum.

Only full (not packed) CV_32FC2 complex spectrums in the interleaved format are supported for now. @sa mulSpectrums

Python prototype (for reference only):

mulSpectrums(src1, src2, flags[, dst[, conjB[, stream]]]) -> dst

Variant 2:

Performs a per-element multiplication of two Fourier spectrums.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First spectrum.

  • src2: Evision.CUDA.GpuMat.t().

    Second spectrum with the same size and type as src1.

  • flags: integer().

    Mock parameter kept for CPU/CUDA interface similarity; simply pass 0.

Keyword Arguments
  • conjB: bool.

    Optional flag to specify if the second spectrum needs to be conjugated before the multiplication.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination spectrum.

Only full (not packed) CV_32FC2 complex spectrums in the interleaved format are supported for now. @sa mulSpectrums

Python prototype (for reference only):

mulSpectrums(src1, src2, flags[, dst[, conjB[, stream]]]) -> dst
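A sketch of frequency-domain cross-correlation (hypothetical variables: `spec_a` and `spec_b` are assumed to be equal-size, full CV_32FC2 spectra already uploaded to the GPU):

```elixir
# Per-element spectrum product; conjugating the second operand (conjB: true)
# makes the inverse DFT of the result a cross-correlation instead of a
# convolution. Pass 0 for the mock flags parameter.
corr_spectrum = Evision.CUDA.mulSpectrums(spec_a, spec_b, 0, conjB: true)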
@spec multiply(Keyword.t()) :: any() | {:error, String.t()}

Variant 1:

Computes a matrix-matrix or matrix-scalar per-element product.

Positional Arguments
  • src1: Evision.Mat.

    First source matrix or scalar.

  • src2: Evision.Mat.

    Second source matrix or scalar.

Keyword Arguments
  • scale: double.

    Optional scale factor.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

@sa multiply

Python prototype (for reference only):

multiply(src1, src2[, dst[, scale[, dtype[, stream]]]]) -> dst

Variant 2:

Computes a matrix-matrix or matrix-scalar per-element product.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

Keyword Arguments
  • scale: double.

    Optional scale factor.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

@sa multiply

Python prototype (for reference only):

multiply(src1, src2[, dst[, scale[, dtype[, stream]]]]) -> dst
Link to this function

multiply(src1, src2, opts)

View Source
@spec multiply(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [dtype: term(), scale: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec multiply(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [dtype: term(), scale: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes a matrix-matrix or matrix-scalar per-element product.

Positional Arguments
  • src1: Evision.Mat.

    First source matrix or scalar.

  • src2: Evision.Mat.

    Second source matrix or scalar.

Keyword Arguments
  • scale: double.

    Optional scale factor.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

@sa multiply

Python prototype (for reference only):

multiply(src1, src2[, dst[, scale[, dtype[, stream]]]]) -> dst

Variant 2:

Computes a matrix-matrix or matrix-scalar per-element product.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    First source matrix or scalar.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix or scalar.

Keyword Arguments
  • scale: double.

    Optional scale factor.

  • dtype: integer().

    Optional depth of the output array.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix that has the same size and number of channels as the input array(s). The depth is defined by dtype or src1 depth.

@sa multiply

Python prototype (for reference only):

multiply(src1, src2[, dst[, scale[, dtype[, stream]]]]) -> dst
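A sketch of a scaled per-element product (hypothetical file names; assumes a CUDA-enabled build and that `Evision.Constant.cv_32F/0` exposes the CV_32F depth constant):

```elixir
# Multiply two images element-wise, rescale by 1/255, and force a 32-bit
# float result regardless of the inputs' depth.
a = Evision.CUDA.GpuMat.gpuMat(Evision.imread("a.png"))
b = Evision.CUDA.GpuMat.gpuMat(Evision.imread("b.png"))

scaled_product =
  Evision.CUDA.multiply(a, b, scale: 1.0 / 255.0, dtype: Evision.Constant.cv_32F())
```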
Link to this function

nonLocalMeans(named_args)

View Source
@spec nonLocalMeans(Keyword.t()) :: any() | {:error, String.t()}
@spec nonLocalMeans(Evision.CUDA.GpuMat.t(), number()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Performs pure non-local means denoising without any simplification, and is therefore not fast.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Supports only CV_8UC1, CV_8UC2 and CV_8UC3.

  • h: float.

    Filter sigma regulating filter strength for color.

Keyword Arguments
  • search_window: integer().

    Size of search window.

  • block_size: integer().

    Size of block used for computing weights.

  • borderMode: integer().

    Border type. See borderInterpolate for details. BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

@sa fastNlMeansDenoising

Python prototype (for reference only):

nonLocalMeans(src, h[, dst[, search_window[, block_size[, borderMode[, stream]]]]]) -> dst
Link to this function

nonLocalMeans(src, h, opts)

View Source
@spec nonLocalMeans(
  Evision.CUDA.GpuMat.t(),
  number(),
  [
    block_size: term(),
    borderMode: term(),
    search_window: term(),
    stream: term()
  ]
  | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Performs pure non-local means denoising without any simplification, and is therefore not fast.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Supports only CV_8UC1, CV_8UC2 and CV_8UC3.

  • h: float.

    Filter sigma regulating filter strength for color.

Keyword Arguments
  • search_window: integer().

    Size of search window.

  • block_size: integer().

    Size of block used for computing weights.

  • borderMode: integer().

    Border type. See borderInterpolate for details. BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image.

@sa fastNlMeansDenoising

Python prototype (for reference only):

nonLocalMeans(src, h[, dst[, search_window[, block_size[, borderMode[, stream]]]]]) -> dst
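A sketch of denoising a CV_8UC3 image (hypothetical file name; a larger h removes more noise but also more detail; the window/block sizes here are illustrative assumptions, not defaults stated by this doc):

```elixir
# Pure (slow but accurate) non-local means denoising on the GPU.
noisy = Evision.CUDA.GpuMat.gpuMat(Evision.imread("noisy.png"))

denoised =
  Evision.CUDA.nonLocalMeans(noisy, 10.0, search_window: 21, block_size: 7)
```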
@spec norm(Keyword.t()) :: any() | {:error, String.t()}
@spec norm(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  number() | {:error, String.t()}
@spec norm(Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()) ::
  number() | {:error, String.t()}
@spec norm(Evision.Mat.maybe_mat_in(), integer()) :: number() | {:error, String.t()}
@spec norm(Evision.CUDA.GpuMat.t(), integer()) :: number() | {:error, String.t()}

Variant 1:

Returns the norm of the difference of two matrices.

Positional Arguments
  • src1: Evision.Mat.

    Source matrix. Any matrices except 64F are supported.

  • src2: Evision.Mat.

    Second source matrix (if any) with the same size and type as src1.

Keyword Arguments
  • normType: integer().

    Norm type. NORM_L1 , NORM_L2 , and NORM_INF are supported for now.

Return
  • retval: double

@sa norm

Python prototype (for reference only):

norm(src1, src2[, normType]) -> retval

Variant 2:

Returns the norm of the difference of two matrices.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    Source matrix. Any matrices except 64F are supported.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix (if any) with the same size and type as src1.

Keyword Arguments
  • normType: integer().

    Norm type. NORM_L1 , NORM_L2 , and NORM_INF are supported for now.

Return
  • retval: double

@sa norm

Python prototype (for reference only):

norm(src1, src2[, normType]) -> retval

Variant 3:

Returns the norm of a matrix (or difference of two matrices).

Positional Arguments
  • src1: Evision.Mat.

    Source matrix. Any matrices except 64F are supported.

  • normType: integer().

    Norm type. NORM_L1 , NORM_L2 , and NORM_INF are supported for now.

Keyword Arguments
  • mask: Evision.Mat.

    optional operation mask; it must have the same size as src1 and CV_8UC1 type.

Return
  • retval: double

@sa norm

Python prototype (for reference only):

norm(src1, normType[, mask]) -> retval

Variant 4:

Returns the norm of a matrix (or difference of two matrices).

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    Source matrix. Any matrices except 64F are supported.

  • normType: integer().

    Norm type. NORM_L1 , NORM_L2 , and NORM_INF are supported for now.

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    optional operation mask; it must have the same size as src1 and CV_8UC1 type.

Return
  • retval: double

@sa norm

Python prototype (for reference only):

norm(src1, normType[, mask]) -> retval
@spec norm(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:normType, term()}] | nil
) ::
  number() | {:error, String.t()}
@spec norm(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [{:normType, term()}] | nil
) ::
  number() | {:error, String.t()}
@spec norm(Evision.Mat.maybe_mat_in(), integer(), [{:mask, term()}] | nil) ::
  number() | {:error, String.t()}
@spec norm(Evision.CUDA.GpuMat.t(), integer(), [{:mask, term()}] | nil) ::
  number() | {:error, String.t()}

Variant 1:

Returns the norm of the difference of two matrices.

Positional Arguments
  • src1: Evision.Mat.

    Source matrix. Any matrices except 64F are supported.

  • src2: Evision.Mat.

    Second source matrix (if any) with the same size and type as src1.

Keyword Arguments
  • normType: integer().

    Norm type. NORM_L1 , NORM_L2 , and NORM_INF are supported for now.

Return
  • retval: double

@sa norm

Python prototype (for reference only):

norm(src1, src2[, normType]) -> retval

Variant 2:

Returns the norm of the difference of two matrices.

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    Source matrix. Any matrices except 64F are supported.

  • src2: Evision.CUDA.GpuMat.t().

    Second source matrix (if any) with the same size and type as src1.

Keyword Arguments
  • normType: integer().

    Norm type. NORM_L1 , NORM_L2 , and NORM_INF are supported for now.

Return
  • retval: double

@sa norm

Python prototype (for reference only):

norm(src1, src2[, normType]) -> retval

Variant 3:

Returns the norm of a matrix (or difference of two matrices).

Positional Arguments
  • src1: Evision.Mat.

    Source matrix. Any matrices except 64F are supported.

  • normType: integer().

    Norm type. NORM_L1 , NORM_L2 , and NORM_INF are supported for now.

Keyword Arguments
  • mask: Evision.Mat.

    optional operation mask; it must have the same size as src1 and CV_8UC1 type.

Return
  • retval: double

@sa norm

Python prototype (for reference only):

norm(src1, normType[, mask]) -> retval

Variant 4:

Returns the norm of a matrix (or difference of two matrices).

Positional Arguments
  • src1: Evision.CUDA.GpuMat.t().

    Source matrix. Any matrices except 64F are supported.

  • normType: integer().

    Norm type. NORM_L1 , NORM_L2 , and NORM_INF are supported for now.

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    optional operation mask; it must have the same size as src1 and CV_8UC1 type.

Return
  • retval: double

@sa norm

Python prototype (for reference only):

norm(src1, normType[, mask]) -> retval
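A sketch covering both call shapes (hypothetical file names; assumes a CUDA-enabled build and the `cv_NORM_L2` constant in `Evision.Constant`):

```elixir
# Variant 1/2: L2 norm of the difference between two frames (a simple
# change-detection score). Variant 3/4: L2 norm of a single matrix.
frame_a =
  Evision.CUDA.GpuMat.gpuMat(
    Evision.imread("frame_a.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
  )

frame_b =
  Evision.CUDA.GpuMat.gpuMat(
    Evision.imread("frame_b.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
  )

diff_l2 = Evision.CUDA.norm(frame_a, frame_b, normType: Evision.Constant.cv_NORM_L2())
magnitude = Evision.CUDA.norm(frame_a, Evision.Constant.cv_NORM_L2())
```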
@spec normalize(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

normalize(src, alpha, beta, norm_type, dtype)

View Source
@spec normalize(Evision.Mat.maybe_mat_in(), number(), number(), integer(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec normalize(Evision.CUDA.GpuMat.t(), number(), number(), integer(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Normalizes the norm or value range of an array.

Positional Arguments
  • src: Evision.Mat.

    Input array.

  • alpha: double.

    Norm value to normalize to or the lower range boundary in case of the range normalization.

  • beta: double.

    Upper range boundary in case of the range normalization; it is not used for the norm normalization.

  • norm_type: integer().

    Normalization type ( NORM_MINMAX , NORM_L2 , NORM_L1 or NORM_INF ).

  • dtype: integer().

    When negative, the output array has the same type as src; otherwise, it has the same number of channels as src and the depth =CV_MAT_DEPTH(dtype).

Keyword Arguments
  • mask: Evision.Mat.

    Optional operation mask.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Output array of the same size as src .

@sa normalize

Python prototype (for reference only):

normalize(src, alpha, beta, norm_type, dtype[, dst[, mask[, stream]]]) -> dst

Variant 2:

Normalizes the norm or value range of an array.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Input array.

  • alpha: double.

    Norm value to normalize to or the lower range boundary in case of the range normalization.

  • beta: double.

    Upper range boundary in case of the range normalization; it is not used for the norm normalization.

  • norm_type: integer().

    Normalization type ( NORM_MINMAX , NORM_L2 , NORM_L1 or NORM_INF ).

  • dtype: integer().

    When negative, the output array has the same type as src; otherwise, it has the same number of channels as src and the depth =CV_MAT_DEPTH(dtype).

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Optional operation mask.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Output array of the same size as src .

@sa normalize

Python prototype (for reference only):

normalize(src, alpha, beta, norm_type, dtype[, dst[, mask[, stream]]]) -> dst
normalize(src, alpha, beta, norm_type, dtype, opts)

@spec normalize(
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  integer(),
  integer(),
  [mask: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec normalize(
  Evision.CUDA.GpuMat.t(),
  number(),
  number(),
  integer(),
  integer(),
  [mask: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Normalizes the norm or value range of an array.

Positional Arguments
  • src: Evision.Mat.

    Input array.

  • alpha: double.

    Norm value to normalize to or the lower range boundary in case of the range normalization.

  • beta: double.

    Upper range boundary in case of the range normalization; it is not used for the norm normalization.

  • norm_type: integer().

    Normalization type (NORM_MINMAX, NORM_L2, NORM_L1 or NORM_INF).

  • dtype: integer().

    When negative, the output array has the same type as src; otherwise, it has the same number of channels as src and the depth = CV_MAT_DEPTH(dtype).

Keyword Arguments
  • mask: Evision.Mat.

    Optional operation mask.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Output array of the same size as src .

@sa normalize

Python prototype (for reference only):

normalize(src, alpha, beta, norm_type, dtype[, dst[, mask[, stream]]]) -> dst

Variant 2:

Normalizes the norm or value range of an array.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Input array.

  • alpha: double.

    Norm value to normalize to or the lower range boundary in case of the range normalization.

  • beta: double.

    Upper range boundary in case of the range normalization; it is not used for the norm normalization.

  • norm_type: integer().

    Normalization type (NORM_MINMAX, NORM_L2, NORM_L1 or NORM_INF).

  • dtype: integer().

    When negative, the output array has the same type as src; otherwise, it has the same number of channels as src and the depth = CV_MAT_DEPTH(dtype).

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Optional operation mask.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Output array of the same size as src .

@sa normalize

Python prototype (for reference only):

normalize(src, alpha, beta, norm_type, dtype[, dst[, mask[, stream]]]) -> dst
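As a usage sketch (hedged: assumes a CUDA-enabled Evision build, an NVIDIA GPU, an `input.png` on disk, and the usual `Evision.Constant` accessor naming):

```elixir
alias Evision.CUDA.GpuMat

# Upload a grayscale image to the GPU and stretch its values to [0, 255].
src = Evision.imread("input.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
gpu = GpuMat.gpuMat(src)

dst =
  Evision.CUDA.normalize(
    gpu,
    0,
    255,
    Evision.Constant.cv_NORM_MINMAX(),
    Evision.Constant.cv_8U()
  )

# Bring the result back to host memory as an Evision.Mat.
result = GpuMat.download(dst)
```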
@spec numMoments(Keyword.t()) :: any() | {:error, String.t()}
@spec numMoments(Evision.CUDA.MomentsOrder.t()) :: integer() | {:error, String.t()}

Returns the number of image moments less than or equal to the largest image moments order.

Positional Arguments
  • order: MomentsOrder.

Order of the largest moments to calculate, with lower-order moments requiring less computation.

Return
  • retval: integer()

@returns number of image moments. @sa cuda::spatialMoments, cuda::moments, cuda::MomentsOrder

Python prototype (for reference only):

numMoments(order) -> retval
@spec phase(Keyword.t()) :: any() | {:error, String.t()}

Variant 1:

Computes polar angles of complex matrix elements.

Positional Arguments
  • x: Evision.Mat.

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.Mat.

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • angleInDegrees: bool.

    Flag for angles that must be evaluated in degrees.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • angle: Evision.Mat.t().

    Destination matrix of angles ( CV_32FC1 ).

@sa phase

Python prototype (for reference only):

phase(x, y[, angle[, angleInDegrees[, stream]]]) -> angle

Variant 2:

Computes polar angles of complex matrix elements.

Positional Arguments
  • x: Evision.CUDA.GpuMat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.CUDA.GpuMat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • angleInDegrees: bool.

    Flag for angles that must be evaluated in degrees.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • angle: Evision.CUDA.GpuMat.t().

    Destination matrix of angles ( CV_32FC1 ).

@sa phase

Python prototype (for reference only):

phase(x, y[, angle[, angleInDegrees[, stream]]]) -> angle
@spec phase(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [angleInDegrees: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec phase(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [angleInDegrees: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes polar angles of complex matrix elements.

Positional Arguments
  • x: Evision.Mat.

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.Mat.

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • angleInDegrees: bool.

    Flag for angles that must be evaluated in degrees.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • angle: Evision.Mat.t().

    Destination matrix of angles ( CV_32FC1 ).

@sa phase

Python prototype (for reference only):

phase(x, y[, angle[, angleInDegrees[, stream]]]) -> angle

Variant 2:

Computes polar angles of complex matrix elements.

Positional Arguments
  • x: Evision.CUDA.GpuMat.t().

    Source matrix containing real components ( CV_32FC1 ).

  • y: Evision.CUDA.GpuMat.t().

    Source matrix containing imaginary components ( CV_32FC1 ).

Keyword Arguments
  • angleInDegrees: bool.

    Flag for angles that must be evaluated in degrees.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • angle: Evision.CUDA.GpuMat.t().

    Destination matrix of angles ( CV_32FC1 ).

@sa phase

Python prototype (for reference only):

phase(x, y[, angle[, angleInDegrees[, stream]]]) -> angle
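A minimal sketch (hedged: assumes a CUDA-enabled Evision build and that `Evision.Mat.literal/2` is available for building small test matrices):

```elixir
alias Evision.CUDA.GpuMat

# Real (x) and imaginary (y) components as CV_32FC1 matrices.
x = GpuMat.gpuMat(Evision.Mat.literal([[1.0, 0.0]], :f32))
y = GpuMat.gpuMat(Evision.Mat.literal([[1.0, 1.0]], :f32))

# Angles of the complex numbers 1+1i and 0+1i: 45 and 90 degrees.
angle = Evision.CUDA.phase(x, y, angleInDegrees: true)
```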
@spec polarToCart(Keyword.t()) :: any() | {:error, String.t()}
polarToCart(magnitude, angle)


Variant 1:

Converts polar coordinates into Cartesian.

Positional Arguments
  • magnitude: Evision.Mat.

    Source matrix containing magnitudes ( CV_32FC1 or CV_64FC1 ).

  • angle: Evision.Mat.

    Source matrix containing angles ( same type as magnitude ).

Keyword Arguments
  • angleInDegrees: bool.

    Flag that indicates angles in degrees.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • x: Evision.Mat.t().

    Destination matrix of real components ( same type as magnitude ).

  • y: Evision.Mat.t().

    Destination matrix of imaginary components ( same type as magnitude ).

Python prototype (for reference only):

polarToCart(magnitude, angle[, x[, y[, angleInDegrees[, stream]]]]) -> x, y

Variant 2:

Converts polar coordinates into Cartesian.

Positional Arguments
  • magnitude: Evision.CUDA.GpuMat.t().

    Source matrix containing magnitudes ( CV_32FC1 or CV_64FC1 ).

  • angle: Evision.CUDA.GpuMat.t().

    Source matrix containing angles ( same type as magnitude ).

Keyword Arguments
  • angleInDegrees: bool.

    Flag that indicates angles in degrees.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • x: Evision.CUDA.GpuMat.t().

    Destination matrix of real components ( same type as magnitude ).

  • y: Evision.CUDA.GpuMat.t().

    Destination matrix of imaginary components ( same type as magnitude ).

Python prototype (for reference only):

polarToCart(magnitude, angle[, x[, y[, angleInDegrees[, stream]]]]) -> x, y
polarToCart(magnitude, angle, opts)

@spec polarToCart(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [angleInDegrees: term(), stream: term()] | nil
) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
@spec polarToCart(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [angleInDegrees: term(), stream: term()] | nil
) :: {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

Variant 1:

Converts polar coordinates into Cartesian.

Positional Arguments
  • magnitude: Evision.Mat.

    Source matrix containing magnitudes ( CV_32FC1 or CV_64FC1 ).

  • angle: Evision.Mat.

    Source matrix containing angles ( same type as magnitude ).

Keyword Arguments
  • angleInDegrees: bool.

    Flag that indicates angles in degrees.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • x: Evision.Mat.t().

    Destination matrix of real components ( same type as magnitude ).

  • y: Evision.Mat.t().

    Destination matrix of imaginary components ( same type as magnitude ).

Python prototype (for reference only):

polarToCart(magnitude, angle[, x[, y[, angleInDegrees[, stream]]]]) -> x, y

Variant 2:

Converts polar coordinates into Cartesian.

Positional Arguments
  • magnitude: Evision.CUDA.GpuMat.t().

    Source matrix containing magnitudes ( CV_32FC1 or CV_64FC1 ).

  • angle: Evision.CUDA.GpuMat.t().

    Source matrix containing angles ( same type as magnitude ).

Keyword Arguments
  • angleInDegrees: bool.

    Flag that indicates angles in degrees.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • x: Evision.CUDA.GpuMat.t().

    Destination matrix of real components ( same type as magnitude ).

  • y: Evision.CUDA.GpuMat.t().

    Destination matrix of imaginary components ( same type as magnitude ).

Python prototype (for reference only):

polarToCart(magnitude, angle[, x[, y[, angleInDegrees[, stream]]]]) -> x, y
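The inverse of cartToPolar, sketched below (hedged: assumes a CUDA-enabled Evision build and `Evision.Mat.literal/2`):

```elixir
alias Evision.CUDA.GpuMat

magnitude = GpuMat.gpuMat(Evision.Mat.literal([[1.0, 2.0]], :f32))
angle = GpuMat.gpuMat(Evision.Mat.literal([[0.0, 90.0]], :f32))

# With angleInDegrees: true, (r = 1, 0 deg) -> (1, 0) and (r = 2, 90 deg) -> (0, 2).
{x, y} = Evision.CUDA.polarToCart(magnitude, angle, angleInDegrees: true)
```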
@spec pow(Keyword.t()) :: any() | {:error, String.t()}
@spec pow(Evision.Mat.maybe_mat_in(), number()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec pow(Evision.CUDA.GpuMat.t(), number()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Raises every matrix element to a power.

Positional Arguments
  • src: Evision.Mat.

    Source matrix.

  • power: double.

    Exponent of power.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src .

The function pow raises every element of the input matrix to power: \f[\texttt{dst}(I) = \begin{cases}\texttt{src}(I)^{power} & \text{if power is an integer}\\ |\texttt{src}(I)|^{power} & \text{otherwise}\end{cases}\f] @sa pow

Python prototype (for reference only):

pow(src, power[, dst[, stream]]) -> dst

Variant 2:

Raises every matrix element to a power.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix.

  • power: double.

    Exponent of power.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

The function pow raises every element of the input matrix to power: \f[\texttt{dst}(I) = \begin{cases}\texttt{src}(I)^{power} & \text{if power is an integer}\\ |\texttt{src}(I)|^{power} & \text{otherwise}\end{cases}\f] @sa pow

Python prototype (for reference only):

pow(src, power[, dst[, stream]]) -> dst
@spec pow(Evision.Mat.maybe_mat_in(), number(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec pow(Evision.CUDA.GpuMat.t(), number(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Raises every matrix element to a power.

Positional Arguments
  • src: Evision.Mat.

    Source matrix.

  • power: double.

    Exponent of power.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src .

The function pow raises every element of the input matrix to power: \f[\texttt{dst}(I) = \begin{cases}\texttt{src}(I)^{power} & \text{if power is an integer}\\ |\texttt{src}(I)|^{power} & \text{otherwise}\end{cases}\f] @sa pow

Python prototype (for reference only):

pow(src, power[, dst[, stream]]) -> dst

Variant 2:

Raises every matrix element to a power.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix.

  • power: double.

    Exponent of power.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

The function pow raises every element of the input matrix to power: \f[\texttt{dst}(I) = \begin{cases}\texttt{src}(I)^{power} & \text{if power is an integer}\\ |\texttt{src}(I)|^{power} & \text{otherwise}\end{cases}\f] @sa pow

Python prototype (for reference only):

pow(src, power[, dst[, stream]]) -> dst
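A short sketch (hedged: assumes a CUDA-enabled Evision build and `Evision.Mat.literal/2`):

```elixir
alias Evision.CUDA.GpuMat

src = GpuMat.gpuMat(Evision.Mat.literal([[1.0, 2.0, 3.0]], :f32))

# Square every element on the GPU; the result holds [1.0, 4.0, 9.0].
squared = Evision.CUDA.pow(src, 2)
host = GpuMat.download(squared)
```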
printCudaDeviceInfo(named_args)

@spec printCudaDeviceInfo(Keyword.t()) :: any() | {:error, String.t()}
@spec printCudaDeviceInfo(integer()) :: :ok | {:error, String.t()}

printCudaDeviceInfo

Positional Arguments
  • device: integer()

Python prototype (for reference only):

printCudaDeviceInfo(device) -> None
printShortCudaDeviceInfo(named_args)

@spec printShortCudaDeviceInfo(Keyword.t()) :: any() | {:error, String.t()}
@spec printShortCudaDeviceInfo(integer()) :: :ok | {:error, String.t()}

printShortCudaDeviceInfo

Positional Arguments
  • device: integer()

Python prototype (for reference only):

printShortCudaDeviceInfo(device) -> None
@spec pyrDown(Keyword.t()) :: any() | {:error, String.t()}
@spec pyrDown(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec pyrDown(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Smoothes an image and downsamples it.

Positional Arguments
  • src: Evision.Mat.

    Source image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image. Will have Size((src.cols+1)/2, (src.rows+1)/2) size and the same type as src .

@sa pyrDown

Python prototype (for reference only):

pyrDown(src[, dst[, stream]]) -> dst

Variant 2:

Smoothes an image and downsamples it.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image. Will have Size((src.cols+1)/2, (src.rows+1)/2) size and the same type as src .

@sa pyrDown

Python prototype (for reference only):

pyrDown(src[, dst[, stream]]) -> dst
@spec pyrDown(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec pyrDown(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Smoothes an image and downsamples it.

Positional Arguments
  • src: Evision.Mat.

    Source image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image. Will have Size((src.cols+1)/2, (src.rows+1)/2) size and the same type as src .

@sa pyrDown

Python prototype (for reference only):

pyrDown(src[, dst[, stream]]) -> dst

Variant 2:

Smoothes an image and downsamples it.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image. Will have Size((src.cols+1)/2, (src.rows+1)/2) size and the same type as src .

@sa pyrDown

Python prototype (for reference only):

pyrDown(src[, dst[, stream]]) -> dst
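A usage sketch (hedged: assumes a CUDA-enabled Evision build and an `input.png` on disk):

```elixir
alias Evision.CUDA.GpuMat

gpu = GpuMat.gpuMat(Evision.imread("input.png"))

# One pyramid level down: Gaussian smoothing followed by 2x downsampling,
# so a 640x480 input becomes roughly 320x240.
half = Evision.CUDA.pyrDown(gpu)
```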
@spec pyrUp(Keyword.t()) :: any() | {:error, String.t()}
@spec pyrUp(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
@spec pyrUp(Evision.CUDA.GpuMat.t()) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Upsamples an image and then smoothes it.

Positional Arguments
  • src: Evision.Mat.

    Source image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image. Will have Size(src.cols*2, src.rows*2) size and the same type as src .

Python prototype (for reference only):

pyrUp(src[, dst[, stream]]) -> dst

Variant 2:

Upsamples an image and then smoothes it.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image. Will have Size(src.cols*2, src.rows*2) size and the same type as src .

Python prototype (for reference only):

pyrUp(src[, dst[, stream]]) -> dst
@spec pyrUp(Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec pyrUp(Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Upsamples an image and then smoothes it.

Positional Arguments
  • src: Evision.Mat.

    Source image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image. Will have Size(src.cols*2, src.rows*2) size and the same type as src .

Python prototype (for reference only):

pyrUp(src[, dst[, stream]]) -> dst

Variant 2:

Upsamples an image and then smoothes it.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image. Will have Size(src.cols*2, src.rows*2) size and the same type as src .

Python prototype (for reference only):

pyrUp(src[, dst[, stream]]) -> dst
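A usage sketch mirroring pyrDown (hedged: assumes a CUDA-enabled Evision build and an `input.png` on disk):

```elixir
alias Evision.CUDA.GpuMat

gpu = GpuMat.gpuMat(Evision.imread("input.png"))

# One pyramid level up: 2x upsampling followed by Gaussian smoothing.
# Note that pyrUp(pyrDown(img)) only approximates img; high frequencies are lost.
doubled = Evision.CUDA.pyrUp(gpu)
```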
@spec rectStdDev(Keyword.t()) :: any() | {:error, String.t()}
rectStdDev(src, sqr, rect)

@spec rectStdDev(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  {number(), number(), number(), number()}
) :: Evision.Mat.t() | {:error, String.t()}
@spec rectStdDev(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  {number(), number(), number(), number()}
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes a standard deviation of integral images.

Positional Arguments
  • src: Evision.Mat.

    Source image. Only the CV_32SC1 type is supported.

  • sqr: Evision.Mat.

    Squared source image. Only the CV_32FC1 type is supported.

  • rect: Rect.

    Rectangular window.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image with the same type and size as src.

Python prototype (for reference only):

rectStdDev(src, sqr, rect[, dst[, stream]]) -> dst

Variant 2:

Computes a standard deviation of integral images.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Only the CV_32SC1 type is supported.

  • sqr: Evision.CUDA.GpuMat.t().

    Squared source image. Only the CV_32FC1 type is supported.

  • rect: Rect.

    Rectangular window.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image with the same type and size as src.

Python prototype (for reference only):

rectStdDev(src, sqr, rect[, dst[, stream]]) -> dst
rectStdDev(src, sqr, rect, opts)

@spec rectStdDev(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  {number(), number(), number(), number()},
  [{:stream, term()}] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec rectStdDev(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  {number(), number(), number(), number()},
  [{:stream, term()}] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Computes a standard deviation of integral images.

Positional Arguments
  • src: Evision.Mat.

    Source image. Only the CV_32SC1 type is supported.

  • sqr: Evision.Mat.

    Squared source image. Only the CV_32FC1 type is supported.

  • rect: Rect.

    Rectangular window.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image with the same type and size as src.

Python prototype (for reference only):

rectStdDev(src, sqr, rect[, dst[, stream]]) -> dst

Variant 2:

Computes a standard deviation of integral images.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Only the CV_32SC1 type is supported.

  • sqr: Evision.CUDA.GpuMat.t().

    Squared source image. Only the CV_32FC1 type is supported.

  • rect: Rect.

    Rectangular window.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image with the same type and size as src.

Python prototype (for reference only):

rectStdDev(src, sqr, rect[, dst[, stream]]) -> dst
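A sketch of the intended pipeline (hedged: assumes a CUDA-enabled Evision build, an `input.png` on disk, and that the CUDA `integral` and `sqrIntegral` functions are exposed as `Evision.CUDA.integral/1` and `Evision.CUDA.sqrIntegral/1`):

```elixir
alias Evision.CUDA.GpuMat

gray =
  "input.png"
  |> Evision.imread(flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
  |> GpuMat.gpuMat()

# rectStdDev consumes the CV_32SC1 integral image and the CV_32FC1
# squared integral image of the source.
sum = Evision.CUDA.integral(gray)
sqsum = Evision.CUDA.sqrIntegral(gray)

# Standard deviation over a 100x100 sliding window.
std = Evision.CUDA.rectStdDev(sum, sqsum, {0, 0, 100, 100})
```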
@spec reduce(Keyword.t()) :: any() | {:error, String.t()}
reduce(mtx, dim, reduceOp)

@spec reduce(Evision.Mat.maybe_mat_in(), integer(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec reduce(Evision.CUDA.GpuMat.t(), integer(), integer()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Reduces a matrix to a vector.

Positional Arguments
  • mtx: Evision.Mat.

    Source 2D matrix.

  • dim: integer().

    Dimension index along which the matrix is reduced. 0 means that the matrix is reduced to a single row. 1 means that the matrix is reduced to a single column.

  • reduceOp: integer().

    Reduction operation that could be one of the following:

    • REDUCE_SUM The output is the sum of all rows/columns of the matrix.
    • REDUCE_AVG The output is the mean vector of all rows/columns of the matrix.
    • REDUCE_MAX The output is the maximum (column/row-wise) of all rows/columns of the matrix.
    • REDUCE_MIN The output is the minimum (column/row-wise) of all rows/columns of the matrix.
Keyword Arguments
  • dtype: integer().

    When it is negative, the destination vector will have the same type as the source matrix. Otherwise, its type will be CV_MAKE_TYPE(CV_MAT_DEPTH(dtype), mtx.channels()) .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • vec: Evision.Mat.t().

    Destination vector. Its size and type are defined by the dim and dtype parameters.

The function reduce reduces the matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. For example, the function can be used to compute horizontal and vertical projections of a raster image. In case of REDUCE_SUM and REDUCE_AVG, the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes. @sa reduce

Python prototype (for reference only):

reduce(mtx, dim, reduceOp[, vec[, dtype[, stream]]]) -> vec

Variant 2:

Reduces a matrix to a vector.

Positional Arguments
  • mtx: Evision.CUDA.GpuMat.t().

    Source 2D matrix.

  • dim: integer().

    Dimension index along which the matrix is reduced. 0 means that the matrix is reduced to a single row. 1 means that the matrix is reduced to a single column.

  • reduceOp: integer().

    Reduction operation that could be one of the following:

    • REDUCE_SUM The output is the sum of all rows/columns of the matrix.
    • REDUCE_AVG The output is the mean vector of all rows/columns of the matrix.
    • REDUCE_MAX The output is the maximum (column/row-wise) of all rows/columns of the matrix.
    • REDUCE_MIN The output is the minimum (column/row-wise) of all rows/columns of the matrix.
Keyword Arguments
  • dtype: integer().

    When it is negative, the destination vector will have the same type as the source matrix. Otherwise, its type will be CV_MAKE_TYPE(CV_MAT_DEPTH(dtype), mtx.channels()) .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • vec: Evision.CUDA.GpuMat.t().

    Destination vector. Its size and type are defined by the dim and dtype parameters.

The function reduce reduces the matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. For example, the function can be used to compute horizontal and vertical projections of a raster image. In case of REDUCE_SUM and REDUCE_AVG, the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes. @sa reduce

Python prototype (for reference only):

reduce(mtx, dim, reduceOp[, vec[, dtype[, stream]]]) -> vec
reduce(mtx, dim, reduceOp, opts)

@spec reduce(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  [dtype: term(), stream: term()] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec reduce(
  Evision.CUDA.GpuMat.t(),
  integer(),
  integer(),
  [dtype: term(), stream: term()] | nil
) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Reduces a matrix to a vector.

Positional Arguments
  • mtx: Evision.Mat.

    Source 2D matrix.

  • dim: integer().

    Dimension index along which the matrix is reduced. 0 means that the matrix is reduced to a single row. 1 means that the matrix is reduced to a single column.

  • reduceOp: integer().

    Reduction operation that could be one of the following:

    • REDUCE_SUM The output is the sum of all rows/columns of the matrix.
    • REDUCE_AVG The output is the mean vector of all rows/columns of the matrix.
    • REDUCE_MAX The output is the maximum (column/row-wise) of all rows/columns of the matrix.
    • REDUCE_MIN The output is the minimum (column/row-wise) of all rows/columns of the matrix.
Keyword Arguments
  • dtype: integer().

    When it is negative, the destination vector will have the same type as the source matrix. Otherwise, its type will be CV_MAKE_TYPE(CV_MAT_DEPTH(dtype), mtx.channels()) .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • vec: Evision.Mat.t().

    Destination vector. Its size and type are defined by the dim and dtype parameters.

The function reduce reduces the matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. For example, the function can be used to compute horizontal and vertical projections of a raster image. In case of REDUCE_SUM and REDUCE_AVG, the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes. @sa reduce

Python prototype (for reference only):

reduce(mtx, dim, reduceOp[, vec[, dtype[, stream]]]) -> vec

Variant 2:

Reduces a matrix to a vector.

Positional Arguments
  • mtx: Evision.CUDA.GpuMat.t().

    Source 2D matrix.

  • dim: integer().

    Dimension index along which the matrix is reduced. 0 means that the matrix is reduced to a single row. 1 means that the matrix is reduced to a single column.

  • reduceOp: integer().

    Reduction operation that could be one of the following:

    • REDUCE_SUM The output is the sum of all rows/columns of the matrix.
    • REDUCE_AVG The output is the mean vector of all rows/columns of the matrix.
    • REDUCE_MAX The output is the maximum (column/row-wise) of all rows/columns of the matrix.
    • REDUCE_MIN The output is the minimum (column/row-wise) of all rows/columns of the matrix.
Keyword Arguments
  • dtype: integer().

    When it is negative, the destination vector will have the same type as the source matrix. Otherwise, its type will be CV_MAKE_TYPE(CV_MAT_DEPTH(dtype), mtx.channels()) .

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • vec: Evision.CUDA.GpuMat.t().

    Destination vector. Its size and type are defined by the dim and dtype parameters.

The function reduce reduces the matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. For example, the function can be used to compute horizontal and vertical projections of a raster image. In case of REDUCE_SUM and REDUCE_AVG, the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes. @sa reduce

Python prototype (for reference only):

reduce(mtx, dim, reduceOp[, vec[, dtype[, stream]]]) -> vec
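A small sketch of the two dim settings (hedged: assumes a CUDA-enabled Evision build, `Evision.Mat.literal/2`, and the `Evision.Constant.cv_REDUCE_SUM/0` accessor):

```elixir
alias Evision.CUDA.GpuMat

m = GpuMat.gpuMat(Evision.Mat.literal([[1.0, 2.0], [3.0, 4.0]], :f32))

# dim: 0 collapses the rows, leaving one row of column sums: [4.0, 6.0].
col_sums = Evision.CUDA.reduce(m, 0, Evision.Constant.cv_REDUCE_SUM())

# dim: 1 collapses the columns, leaving one column of row sums: [3.0, 7.0].
row_sums = Evision.CUDA.reduce(m, 1, Evision.Constant.cv_REDUCE_SUM())
```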
registerPageLocked(named_args)

@spec registerPageLocked(Keyword.t()) :: any() | {:error, String.t()}
@spec registerPageLocked(Evision.Mat.maybe_mat_in()) :: :ok | {:error, String.t()}

Page-locks the memory of a matrix and maps it for the device(s).

Positional Arguments
  • m: Evision.Mat.

    Input matrix.
Python prototype (for reference only):

registerPageLocked(m) -> None
@spec remap(Keyword.t()) :: any() | {:error, String.t()}
remap(src, xmap, ymap, interpolation)


Variant 1:

Applies a generic geometrical transformation to an image.

Positional Arguments
  • src: Evision.Mat.

    Source image.

  • xmap: Evision.Mat.

    X values. Only CV_32FC1 type is supported.

  • ymap: Evision.Mat.

    Y values. Only CV_32FC1 type is supported.

  • interpolation: integer().

    Interpolation method (see resize ). INTER_NEAREST , INTER_LINEAR and INTER_CUBIC are supported for now.

Keyword Arguments
  • borderMode: integer().

    Pixel extrapolation method (see borderInterpolate ). BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

  • borderValue: Evision.scalar().

    Value used in case of a constant border. By default, it is 0.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image with the size the same as xmap and the type the same as src .

The function transforms the source image using the specified map: \f[\texttt{dst}(x,y) = \texttt{src}(xmap(x,y), ymap(x,y))\f] Values of pixels with non-integer coordinates are computed using bilinear interpolation. @sa remap

Python prototype (for reference only):

remap(src, xmap, ymap, interpolation[, dst[, borderMode[, borderValue[, stream]]]]) -> dst

Variant 2:

Applies a generic geometrical transformation to an image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image.

  • xmap: Evision.CUDA.GpuMat.t().

    X values. Only CV_32FC1 type is supported.

  • ymap: Evision.CUDA.GpuMat.t().

    Y values. Only CV_32FC1 type is supported.

  • interpolation: integer().

    Interpolation method (see resize ). INTER_NEAREST , INTER_LINEAR and INTER_CUBIC are supported for now.

Keyword Arguments
  • borderMode: integer().

    Pixel extrapolation method (see borderInterpolate ). BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

  • borderValue: Evision.scalar().

    Value used in case of a constant border. By default, it is 0.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image with the size the same as xmap and the type the same as src .

The function transforms the source image using the specified map: \f[\texttt{dst}(x,y) = \texttt{src}(xmap(x,y), ymap(x,y))\f] Values of pixels with non-integer coordinates are computed using bilinear interpolation. @sa remap

Python prototype (for reference only):

remap(src, xmap, ymap, interpolation[, dst[, borderMode[, borderValue[, stream]]]]) -> dst
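As a non-authoritative sketch of how this reads in Elixir: a horizontal flip can be expressed as a remap with CV_32FC1 coordinate maps. Helper names such as Evision.Mat.shape/1, Evision.Mat.from_nx_2d/1, Evision.CUDA.GpuMat.gpuMat/1, Evision.CUDA.GpuMat.download/1, and Evision.Constant.cv_INTER_LINEAR/0 are taken from the wider Evision API and should be checked against your installed version; a CUDA-enabled build is assumed.

```elixir
# Hedged sketch: flip an image horizontally on the GPU via remap.
# xmap/ymap must be CV_32FC1 GpuMats of the destination size.
img = Evision.imread("input.png")
{h, w, _c} = Evision.Mat.shape(img)

# x' = (w - 1) - x reverses the columns; y' = y keeps the rows.
xmap_nx = Nx.subtract(w - 1, Nx.iota({h, w}, axis: 1)) |> Nx.as_type(:f32)
ymap_nx = Nx.iota({h, w}, axis: 0) |> Nx.as_type(:f32)

gpu_src = Evision.CUDA.GpuMat.gpuMat(img)
xmap = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.from_nx_2d(xmap_nx))
ymap = Evision.CUDA.GpuMat.gpuMat(Evision.Mat.from_nx_2d(ymap_nx))

flipped =
  Evision.CUDA.remap(gpu_src, xmap, ymap, Evision.Constant.cv_INTER_LINEAR())

# Download the result back to host memory for further processing.
result = Evision.CUDA.GpuMat.download(flipped)
```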
Link to this function

remap(src, xmap, ymap, interpolation, opts)

View Source
@spec remap(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  [borderMode: term(), borderValue: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec remap(
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  integer(),
  [borderMode: term(), borderValue: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Applies a generic geometrical transformation to an image.

Positional Arguments
  • src: Evision.Mat.t().

    Source image.

  • xmap: Evision.Mat.t().

    X values. Only CV_32FC1 type is supported.

  • ymap: Evision.Mat.t().

    Y values. Only CV_32FC1 type is supported.

  • interpolation: integer().

    Interpolation method (see resize ). INTER_NEAREST , INTER_LINEAR and INTER_CUBIC are supported for now.

Keyword Arguments
  • borderMode: integer().

    Pixel extrapolation method (see borderInterpolate ). BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

  • borderValue: Evision.scalar().

    Value used in case of a constant border. By default, it is 0.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image with the same size as xmap and the same type as src.

The function transforms the source image using the specified map: dst(x,y) = src(xmap(x,y), ymap(x,y)). Values of pixels with non-integer coordinates are computed using bilinear interpolation. @sa remap

Python prototype (for reference only):

remap(src, xmap, ymap, interpolation[, dst[, borderMode[, borderValue[, stream]]]]) -> dst

Variant 2:

Applies a generic geometrical transformation to an image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image.

  • xmap: Evision.CUDA.GpuMat.t().

    X values. Only CV_32FC1 type is supported.

  • ymap: Evision.CUDA.GpuMat.t().

    Y values. Only CV_32FC1 type is supported.

  • interpolation: integer().

    Interpolation method (see resize ). INTER_NEAREST , INTER_LINEAR and INTER_CUBIC are supported for now.

Keyword Arguments
  • borderMode: integer().

    Pixel extrapolation method (see borderInterpolate ). BORDER_REFLECT101 , BORDER_REPLICATE , BORDER_CONSTANT , BORDER_REFLECT and BORDER_WRAP are supported for now.

  • borderValue: Evision.scalar().

    Value used in case of a constant border. By default, it is 0.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image with the same size as xmap and the same type as src.

The function transforms the source image using the specified map: dst(x,y) = src(xmap(x,y), ymap(x,y)). Values of pixels with non-integer coordinates are computed using bilinear interpolation. @sa remap

Python prototype (for reference only):

remap(src, xmap, ymap, interpolation[, dst[, borderMode[, borderValue[, stream]]]]) -> dst
Link to this function

reprojectImageTo3D(named_args)

View Source
@spec reprojectImageTo3D(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

reprojectImageTo3D(disp, q)

View Source
@spec reprojectImageTo3D(Evision.CUDA.GpuMat.t(), Evision.Mat.maybe_mat_in()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Reprojects a disparity image to 3D space.

Positional Arguments
  • disp: Evision.CUDA.GpuMat.t().

    Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image. If 16-bit signed format is used, the values are assumed to have no fractional bits.

  • q: Evision.Mat.t().

    4 x 4 perspective transformation matrix that can be obtained via stereoRectify.

Keyword Arguments
  • dst_cn: integer().

    The number of channels for the output image. Can be 3 or 4.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • xyzw: Evision.CUDA.GpuMat.t().

    Output 3- or 4-channel floating-point image of the same size as disp . Each element of xyzw(x,y) contains 3D coordinates (x,y,z) or (x,y,z,1) of the point (x,y) , computed from the disparity map.

@sa reprojectImageTo3D

Python prototype (for reference only):

reprojectImageTo3D(disp, Q[, xyzw[, dst_cn[, stream]]]) -> xyzw
Link to this function

reprojectImageTo3D(disp, q, opts)

View Source
@spec reprojectImageTo3D(
  Evision.CUDA.GpuMat.t(),
  Evision.Mat.maybe_mat_in(),
  [dst_cn: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Reprojects a disparity image to 3D space.

Positional Arguments
  • disp: Evision.CUDA.GpuMat.t().

    Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image. If 16-bit signed format is used, the values are assumed to have no fractional bits.

  • q: Evision.Mat.t().

    4 x 4 perspective transformation matrix that can be obtained via stereoRectify.

Keyword Arguments
  • dst_cn: integer().

    The number of channels for the output image. Can be 3 or 4.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • xyzw: Evision.CUDA.GpuMat.t().

    Output 3- or 4-channel floating-point image of the same size as disp . Each element of xyzw(x,y) contains 3D coordinates (x,y,z) or (x,y,z,1) of the point (x,y) , computed from the disparity map.

@sa reprojectImageTo3D

Python prototype (for reference only):

reprojectImageTo3D(disp, Q[, xyzw[, dst_cn[, stream]]]) -> xyzw
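For illustration only (a hedged sketch, not the canonical usage): gpu_disp is assumed to be a disparity GpuMat produced by a CUDA stereo matcher, q the 4 x 4 matrix from stereo rectification, and Evision.CUDA.GpuMat.download/1 is assumed to exist in your Evision version.

```elixir
# Hypothetical sketch: reproject a GPU disparity map to 3D points.
# dst_cn: 3 requests a 3-channel (x, y, z) point image.
xyzw = Evision.CUDA.reprojectImageTo3D(gpu_disp, q, dst_cn: 3)

# Bring the floating-point point cloud back to the host.
points = Evision.CUDA.GpuMat.download(xyzw)
```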
@spec resetDevice() :: :ok | {:error, String.t()}

Explicitly destroys and cleans up all resources associated with the current device in the current process.

Any subsequent API call to this device will reinitialize the device.

Python prototype (for reference only):

resetDevice() -> None
@spec resetDevice(Keyword.t()) :: any() | {:error, String.t()}
@spec resize(Keyword.t()) :: any() | {:error, String.t()}
@spec resize(
  Evision.Mat.maybe_mat_in(),
  {number(), number()}
) :: Evision.Mat.t() | {:error, String.t()}
@spec resize(
  Evision.CUDA.GpuMat.t(),
  {number(), number()}
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Resizes an image.

Positional Arguments
  • src: Evision.Mat.t().

    Source image.

  • dsize: Size.

    Destination image size. If it is zero, it is computed as: dsize = Size(round(fx*src.cols), round(fy*src.rows)). Either dsize or both fx and fy must be non-zero.

Keyword Arguments
  • fx: double.

    Scale factor along the horizontal axis. If it is zero, it is computed as: (double)dsize.width/src.cols

  • fy: double.

    Scale factor along the vertical axis. If it is zero, it is computed as: (double)dsize.height/src.rows

  • interpolation: integer().

    Interpolation method. INTER_NEAREST , INTER_LINEAR and INTER_CUBIC are supported for now.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image with the same type as src . The size is dsize (when it is non-zero) or the size is computed from src.size() , fx , and fy .

@sa resize

Python prototype (for reference only):

resize(src, dsize[, dst[, fx[, fy[, interpolation[, stream]]]]]) -> dst

Variant 2:

Resizes an image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image.

  • dsize: Size.

    Destination image size. If it is zero, it is computed as: dsize = Size(round(fx*src.cols), round(fy*src.rows)). Either dsize or both fx and fy must be non-zero.

Keyword Arguments
  • fx: double.

    Scale factor along the horizontal axis. If it is zero, it is computed as: (double)dsize.width/src.cols

  • fy: double.

    Scale factor along the vertical axis. If it is zero, it is computed as: (double)dsize.height/src.rows

  • interpolation: integer().

    Interpolation method. INTER_NEAREST , INTER_LINEAR and INTER_CUBIC are supported for now.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image with the same type as src . The size is dsize (when it is non-zero) or the size is computed from src.size() , fx , and fy .

@sa resize

Python prototype (for reference only):

resize(src, dsize[, dst[, fx[, fy[, interpolation[, stream]]]]]) -> dst
Link to this function

resize(src, dsize, opts)

View Source
@spec resize(
  Evision.Mat.maybe_mat_in(),
  {number(), number()},
  [fx: term(), fy: term(), interpolation: term(), stream: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec resize(
  Evision.CUDA.GpuMat.t(),
  {number(), number()},
  [fx: term(), fy: term(), interpolation: term(), stream: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Resizes an image.

Positional Arguments
  • src: Evision.Mat.t().

    Source image.

  • dsize: Size.

    Destination image size. If it is zero, it is computed as: dsize = Size(round(fx*src.cols), round(fy*src.rows)). Either dsize or both fx and fy must be non-zero.

Keyword Arguments
  • fx: double.

    Scale factor along the horizontal axis. If it is zero, it is computed as: (double)dsize.width/src.cols

  • fy: double.

    Scale factor along the vertical axis. If it is zero, it is computed as: (double)dsize.height/src.rows

  • interpolation: integer().

    Interpolation method. INTER_NEAREST , INTER_LINEAR and INTER_CUBIC are supported for now.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image with the same type as src . The size is dsize (when it is non-zero) or the size is computed from src.size() , fx , and fy .

@sa resize

Python prototype (for reference only):

resize(src, dsize[, dst[, fx[, fy[, interpolation[, stream]]]]]) -> dst

Variant 2:

Resizes an image.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image.

  • dsize: Size.

    Destination image size. If it is zero, it is computed as: dsize = Size(round(fx*src.cols), round(fy*src.rows)). Either dsize or both fx and fy must be non-zero.

Keyword Arguments
  • fx: double.

    Scale factor along the horizontal axis. If it is zero, it is computed as: (double)dsize.width/src.cols

  • fy: double.

    Scale factor along the vertical axis. If it is zero, it is computed as: (double)dsize.height/src.rows

  • interpolation: integer().

    Interpolation method. INTER_NEAREST , INTER_LINEAR and INTER_CUBIC are supported for now.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image with the same type as src . The size is dsize (when it is non-zero) or the size is computed from src.size() , fx , and fy .

@sa resize

Python prototype (for reference only):

resize(src, dsize[, dst[, fx[, fy[, interpolation[, stream]]]]]) -> dst
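A minimal usage sketch, assuming a CUDA-enabled Evision build; Evision.CUDA.GpuMat.gpuMat/1, Evision.CUDA.GpuMat.download/1, and Evision.Constant.cv_INTER_LINEAR/0 are taken from the wider Evision API and may differ by version:

```elixir
# Resize on the GPU, once with an explicit size and once with scale factors.
gpu = Evision.CUDA.GpuMat.gpuMat(Evision.imread("input.png"))

# Explicit destination size:
small =
  Evision.CUDA.resize(gpu, {320, 240},
    interpolation: Evision.Constant.cv_INTER_LINEAR()
  )

# Zero size: dsize is derived from fx/fy (here, half in each dimension).
half = Evision.CUDA.resize(gpu, {0, 0}, fx: 0.5, fy: 0.5)

result = Evision.CUDA.GpuMat.download(half)
```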
@spec rotate(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

rotate(src, dsize, angle)

View Source
@spec rotate(Evision.Mat.maybe_mat_in(), {number(), number()}, number()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec rotate(Evision.CUDA.GpuMat.t(), {number(), number()}, number()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Rotates an image around the origin (0,0) and then shifts it.

Positional Arguments
  • src: Evision.Mat.t().

    Source image. Supports 1-, 3-, or 4-channel images with CV_8U, CV_16U, or CV_32F depth.

  • dsize: Size.

    Size of the destination image.

  • angle: double.

    Angle of rotation in degrees.

Keyword Arguments
  • xShift: double.

    Shift along the horizontal axis.

  • yShift: double.

    Shift along the vertical axis.

  • interpolation: integer().

    Interpolation method. Only INTER_NEAREST , INTER_LINEAR , and INTER_CUBIC are supported.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image with the same type as src . The size is dsize .

@sa cuda::warpAffine

Python prototype (for reference only):

rotate(src, dsize, angle[, dst[, xShift[, yShift[, interpolation[, stream]]]]]) -> dst

Variant 2:

Rotates an image around the origin (0,0) and then shifts it.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Supports 1-, 3-, or 4-channel images with CV_8U, CV_16U, or CV_32F depth.

  • dsize: Size.

    Size of the destination image.

  • angle: double.

    Angle of rotation in degrees.

Keyword Arguments
  • xShift: double.

    Shift along the horizontal axis.

  • yShift: double.

    Shift along the vertical axis.

  • interpolation: integer().

    Interpolation method. Only INTER_NEAREST , INTER_LINEAR , and INTER_CUBIC are supported.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image with the same type as src . The size is dsize .

@sa cuda::warpAffine

Python prototype (for reference only):

rotate(src, dsize, angle[, dst[, xShift[, yShift[, interpolation[, stream]]]]]) -> dst
Link to this function

rotate(src, dsize, angle, opts)

View Source
@spec rotate(
  Evision.Mat.maybe_mat_in(),
  {number(), number()},
  number(),
  [interpolation: term(), stream: term(), xShift: term(), yShift: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}
@spec rotate(
  Evision.CUDA.GpuMat.t(),
  {number(), number()},
  number(),
  [interpolation: term(), stream: term(), xShift: term(), yShift: term()] | nil
) :: Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Rotates an image around the origin (0,0) and then shifts it.

Positional Arguments
  • src: Evision.Mat.t().

    Source image. Supports 1-, 3-, or 4-channel images with CV_8U, CV_16U, or CV_32F depth.

  • dsize: Size.

    Size of the destination image.

  • angle: double.

    Angle of rotation in degrees.

Keyword Arguments
  • xShift: double.

    Shift along the horizontal axis.

  • yShift: double.

    Shift along the vertical axis.

  • interpolation: integer().

    Interpolation method. Only INTER_NEAREST , INTER_LINEAR , and INTER_CUBIC are supported.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination image with the same type as src . The size is dsize .

@sa cuda::warpAffine

Python prototype (for reference only):

rotate(src, dsize, angle[, dst[, xShift[, yShift[, interpolation[, stream]]]]]) -> dst

Variant 2:

Rotates an image around the origin (0,0) and then shifts it.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source image. Supports 1-, 3-, or 4-channel images with CV_8U, CV_16U, or CV_32F depth.

  • dsize: Size.

    Size of the destination image.

  • angle: double.

    Angle of rotation in degrees.

Keyword Arguments
  • xShift: double.

    Shift along the horizontal axis.

  • yShift: double.

    Shift along the vertical axis.

  • interpolation: integer().

    Interpolation method. Only INTER_NEAREST , INTER_LINEAR , and INTER_CUBIC are supported.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination image with the same type as src . The size is dsize .

@sa cuda::warpAffine

Python prototype (for reference only):

rotate(src, dsize, angle[, dst[, xShift[, yShift[, interpolation[, stream]]]]]) -> dst
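Because the rotation is about the origin (0,0), content typically rotates out of the visible canvas unless it is shifted back in. A hedged sketch (Evision.CUDA.GpuMat.gpuMat/1 and Evision.Constant.cv_INTER_LINEAR/0 assumed from the wider Evision API):

```elixir
# Rotate 45 degrees about the origin, then shift right so the rotated
# content lands inside the 800x800 destination canvas.
gpu = Evision.CUDA.GpuMat.gpuMat(Evision.imread("input.png"))

rotated =
  Evision.CUDA.rotate(gpu, {800, 800}, 45.0,
    xShift: 400.0,
    yShift: 0.0,
    interpolation: Evision.Constant.cv_INTER_LINEAR()
  )
```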
@spec rshift(Keyword.t()) :: any() | {:error, String.t()}
@spec rshift(Evision.Mat.maybe_mat_in(), Evision.scalar()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec rshift(Evision.CUDA.GpuMat.t(), Evision.scalar()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a pixel-by-pixel right shift of an image by a constant value.

Positional Arguments
  • src: Evision.Mat.t().

    Source matrix. Supports 1-, 3-, and 4-channel images with integer elements.

  • val: Evision.scalar().

    Constant values, one per channel.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src .

Python prototype (for reference only):

rshift(src, val[, dst[, stream]]) -> dst

Variant 2:

Performs a pixel-by-pixel right shift of an image by a constant value.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix. Supports 1-, 3-, and 4-channel images with integer elements.

  • val: Evision.scalar().

    Constant values, one per channel.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

Python prototype (for reference only):

rshift(src, val[, dst[, stream]]) -> dst
@spec rshift(Evision.Mat.maybe_mat_in(), Evision.scalar(), [{:stream, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec rshift(Evision.CUDA.GpuMat.t(), Evision.scalar(), [{:stream, term()}] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Performs a pixel-by-pixel right shift of an image by a constant value.

Positional Arguments
  • src: Evision.Mat.t().

    Source matrix. Supports 1-, 3-, and 4-channel images with integer elements.

  • val: Evision.scalar().

    Constant values, one per channel.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.Mat.t().

    Destination matrix with the same size and type as src .

Python prototype (for reference only):

rshift(src, val[, dst[, stream]]) -> dst

Variant 2:

Performs a pixel-by-pixel right shift of an image by a constant value.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Source matrix. Supports 1-, 3-, and 4-channel images with integer elements.

  • val: Evision.scalar().

    Constant values, one per channel.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • dst: Evision.CUDA.GpuMat.t().

    Destination matrix with the same size and type as src .

Python prototype (for reference only):

rshift(src, val[, dst[, stream]]) -> dst
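A right shift by n divides each integer pixel value by 2^n, discarding the low bits; for an 8-bit pixel, shifting by 4 keeps only the high nibble (200 = 0b11001000 becomes 0b00001100 = 12). A hedged sketch, where gpu_src is assumed to be an integer-typed GpuMat:

```elixir
# Hypothetical sketch: drop the 4 low bits of every pixel in each channel.
# One shift amount per channel, passed as a scalar tuple.
dst = Evision.CUDA.rshift(gpu_src, {4, 4, 4, 4})
```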
Link to this function

setBufferPoolConfig(named_args)

View Source
@spec setBufferPoolConfig(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

setBufferPoolConfig(deviceId, stackSize, stackCount)

View Source
@spec setBufferPoolConfig(integer(), integer(), integer()) ::
  :ok | {:error, String.t()}

setBufferPoolConfig

Positional Arguments
  • deviceId: integer()
  • stackSize: size_t
  • stackCount: integer()

Python prototype (for reference only):

setBufferPoolConfig(deviceId, stackSize, stackCount) -> None
Link to this function

setBufferPoolUsage(named_args)

View Source
@spec setBufferPoolUsage(Keyword.t()) :: any() | {:error, String.t()}
@spec setBufferPoolUsage(boolean()) :: :ok | {:error, String.t()}

setBufferPoolUsage

Positional Arguments
  • on: bool

Python prototype (for reference only):

setBufferPoolUsage(on) -> None
@spec setDevice(Keyword.t()) :: any() | {:error, String.t()}
@spec setDevice(integer()) :: :ok | {:error, String.t()}

Sets a device and initializes it for the current thread.

Positional Arguments
  • device: integer().

    System index of a CUDA device starting with 0.

If this function call is omitted, a default device is initialized at the first CUDA usage.

Python prototype (for reference only):

setDevice(device) -> None
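A hedged multi-GPU sketch; Evision.CUDA.getCudaEnabledDeviceCount/0 is assumed to be exposed by Evision (it mirrors the OpenCV CUDA API) and should be verified against your version:

```elixir
# Prefer the second GPU when more than one is available; otherwise
# device 0 is initialized implicitly on the first CUDA call.
if Evision.CUDA.getCudaEnabledDeviceCount() > 1 do
  Evision.CUDA.setDevice(1)
end
```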
Link to this function

spatialMoments(named_args)

View Source
@spec spatialMoments(Keyword.t()) :: any() | {:error, String.t()}
@spec spatialMoments(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec spatialMoments(Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Calculates all of the spatial moments up to the 3rd order of a rasterized shape.

Positional Arguments
  • src: Evision.Mat.t().

    Raster image (single-channel 2D array).

Keyword Arguments
  • binaryImage: bool.

    If it is true, all non-zero image pixels are treated as 1's.

  • order: MomentsOrder.

    Order of largest moments to calculate with lower order moments requiring less computation.

  • momentsType: integer().

    Precision to use when calculating moments. Available types are \ref CV_32F and \ref CV_64F with the performance of \ref CV_32F an order of magnitude greater than \ref CV_64F. If the image is small the accuracy from \ref CV_32F can be equal or very close to \ref CV_64F.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • moments: Evision.Mat.t().

Asynchronous version of cuda::moments() which only calculates the spatial (not centralized or normalized) moments, up to the 3rd order, of a rasterized shape. Each moment is returned as a column entry in the 1D \a moments array.

Note: For maximum performance, pre-allocate a 1D GpuMat for \a moments of the correct type, large enough to store all the image moments up to the desired \a order. E.g., with \a order == MomentsOrder::SECOND_ORDER_MOMENTS and \a momentsType == \ref CV_32F, \a moments can be allocated as

GpuMat momentsDevice(1, numMoments(MomentsOrder::SECOND_ORDER_MOMENTS), CV_32F);

The central and normalized moments can easily be calculated on the host by downloading the \a moments array and using the cuda::convertSpatialMoments helper function. e.g.

HostMem spatialMomentsHostMem(1, numMoments(MomentsOrder::SECOND_ORDER_MOMENTS), CV_32F);
spatialMomentsDevice.download(spatialMomentsHostMem, stream);
stream.waitForCompletion();
Mat spatialMoments = spatialMomentsHostMem.createMatHeader();
cv::Moments cvMoments = convertSpatialMoments<float>(spatialMoments, order);

see the \a CUDA_TEST_P(Moments, Async) test inside opencv_contrib_source_code/modules/cudaimgproc/test/test_moments.cpp for an example. @sa cuda::moments, cuda::convertSpatialMoments, cuda::numMoments, cuda::MomentsOrder

Python prototype (for reference only):

spatialMoments(src[, moments[, binaryImage[, order[, momentsType[, stream]]]]]) -> moments

Variant 2:

Calculates all of the spatial moments up to the 3rd order of a rasterized shape.

Positional Arguments
  • src: Evision.CUDA.GpuMat.t().

    Raster image (single-channel 2D array).

Keyword Arguments
  • binaryImage: bool.

    If it is true, all non-zero image pixels are treated as 1's.

  • order: MomentsOrder.

    Order of largest moments to calculate with lower order moments requiring less computation.

  • momentsType: integer().

    Precision to use when calculating moments. Available types are \ref CV_32F and \ref CV_64F with the performance of \ref CV_32F an order of magnitude greater than \ref CV_64F. If the image is small the accuracy from \ref CV_32F can be equal or very close to \ref CV_64F.

  • stream: Evision.CUDA.Stream.t().

    Stream for the asynchronous version.

Return
  • moments: Evision.CUDA.GpuMat.t().

Asynchronous version of cuda::moments() which only calculates the spatial (not centralized or normalized) moments, up to the 3rd order, of a rasterized shape. Each moment is returned as a column entry in the 1D \a moments array.

Note: For maximum performance, pre-allocate a 1D GpuMat for \a moments of the correct type, large enough to store all the image moments up to the desired \a order. E.g., with \a order == MomentsOrder::SECOND_ORDER_MOMENTS and \a momentsType == \ref CV_32F, \a moments can be allocated as

GpuMat momentsDevice(1,