Evision.XImgProc (Evision v0.2.9)

Summary

Types

t()

Type that represents an XImgProc struct.

Functions

Simple one-line Adaptive Manifold Filter call.

Simple one-line Adaptive Manifold Filter call.

Performs anisotropic diffusion on an image.

Performs anisotropic diffusion on an image.

Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter, see @cite Cho2014.

Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter, see @cite Cho2014.

Compares a color template against overlapped color image regions.

Compares a color template against overlapped color image regions.

Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold)

Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold)

Function for computing mean square error for disparity maps

Contour sampling.

Computes the estimated covariance matrix of an image using the sliding window formulation.

Computes the estimated covariance matrix of an image using the sliding window formulation.

Factory method that creates an instance of AdaptiveManifoldFilter and performs some initialization routines.

Factory method that creates an instance of AdaptiveManifoldFilter and performs some initialization routines.

Creates a ContourFitting algorithm object

Creates a ContourFitting algorithm object

Convenience factory method that creates an instance of DisparityWLSFilter and sets up all the relevant filter parameters automatically based on the matcher instance. Currently supports only StereoBM and StereoSGBM.

More generic factory method that creates an instance of DisparityWLSFilter and executes basic initialization routines. When using this method you will need to set up the ROI, matchers and other parameters yourself.

Factory method that creates an instance of DTFilter and performs initialization routines.

Factory method that creates an instance of DTFilter and performs initialization routines.

Factory method that creates an instance of the EdgeAwareInterpolator.

Creates an EdgeBoxes object

Creates an EdgeBoxes object

Creates a smart pointer to an EdgeDrawing object and initializes it

Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.

Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.

Factory method that creates an instance of FastGlobalSmootherFilter and executes the initialization routines.

Factory method that creates an instance of FastGlobalSmootherFilter and executes the initialization routines.

Creates a smart pointer to a FastLineDetector object and initializes it

Creates a smart pointer to a FastLineDetector object and initializes it

Creates a graph-based segmentor

Creates a graph-based segmentor

Factory method that creates an instance of GuidedFilter and performs initialization routines.

Factory method that creates an instance of GuidedFilter and performs initialization routines.

creates a quaternion image.

creates a quaternion image.

createRFFeatureGetter

Factory method that creates an instance of the RICInterpolator.

Convenience method to set up the matcher for computing the right-view disparity map that is required in case of filtering with confidence.

Create a new SelectiveSearchSegmentation class.

Create a new color-based strategy

Create a new fill-based strategy

Create a new multiple strategy and set one substrategy

Create a new multiple strategy and set two substrategies, with equal weights

Create a new multiple strategy and set three substrategies, with equal weights

Create a new multiple strategy and set four substrategies, with equal weights

Create a new size-based strategy

Create a new texture-based strategy

createStructuredEdgeDetection

createStructuredEdgeDetection

Class implementing the LSC (Linear Spectral Clustering) superpixels

Class implementing the LSC (Linear Spectral Clustering) superpixels

Initialize a SuperpixelSLIC object

Initialize a SuperpixelSLIC object

Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use the DTFilter interface to avoid extra computations on the initialization stage.

Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use the DTFilter interface to avoid extra computations on the initialization stage.

Smoothes an image using the Edge-Preserving filter.

Smoothes an image using the Edge-Preserving filter.

Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.

Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.

Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.

Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.

Calculates 2D Fast Hough transform of an image.

Calculates 2D Fast Hough transform of an image.

Finds ellipses rapidly in an image using projective invariant pruning.

Finds ellipses rapidly in an image using projective invariant pruning.

Fourier descriptors for planar closed curves

Fourier descriptors for planar closed curves

Function for creating a disparity map visualization (clamped CV_8U image)

Function for creating a disparity map visualization (clamped CV_8U image)

Applies X Deriche filter to an image.

Applies X Deriche filter to an image.

Applies Y Deriche filter to an image.

Applies Y Deriche filter to an image.

Simple one-line (Fast) Guided Filter call.

Simple one-line (Fast) Guided Filter call.

Calculates the coordinates of the line segment that corresponds to a point in Hough space.

Calculates the coordinates of the line segment that corresponds to a point in Hough space.

Applies the joint bilateral filter to an image.

Applies the joint bilateral filter to an image.

Global image smoothing via L0 gradient minimization.

Global image smoothing via L0 gradient minimization.

Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.

Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.

PeiLinNormalization

PeiLinNormalization

calculates conjugate of a quaternion image.

calculates conjugate of a quaternion image.

Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.

Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.

Calculates the per-element quaternion product of two arrays

Calculates the per-element quaternion product of two arrays

divides each element by its modulus.

divides each element by its modulus.

Calculate Radon Transform of an image.

Calculate Radon Transform of an image.

Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16.

Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16.

Applies the rolling guidance filter to an image.

Applies the rolling guidance filter to an image.

Applies a binary blob thinning operation to achieve a skeletonization of the input image.

Applies a binary blob thinning operation to achieve a skeletonization of the input image.

transform a contour

transform a contour

Applies weighted median filter to an image.

Applies weighted median filter to an image.

Types

@type t() :: %Evision.XImgProc{ref: reference()}

Type that represents an XImgProc struct.

  • ref. reference()

    The underlying Erlang resource variable.

Functions

@spec amFilter(Keyword.t()) :: any() | {:error, String.t()}

amFilter(joint, src, sigma_s, sigma_r)

@spec amFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  number(),
  number()
) ::
  Evision.Mat.t() | {:error, String.t()}

Simple one-line Adaptive Manifold Filter call.

Positional Arguments
  • joint: Evision.Mat.

    joint (also called guide) image or array of images with any number of channels.

  • src: Evision.Mat.

    image to filter, with any number of channels.

  • sigma_s: double.

    spatial standard deviation.

  • sigma_r: double.

    color space standard deviation; it is similar to the sigma in the color space in bilateralFilter.

Keyword Arguments
  • adjust_outliers: bool.

    optional; specifies whether or not to perform the outlier adjustment operation (Eq. 9 in the original paper).

Return
  • dst: Evision.Mat.t().

    output image.

Note: joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in the bilateralFilter and dtFilter functions. @sa bilateralFilter, dtFilter, guidedFilter

Python prototype (for reference only):

amFilter(joint, src, sigma_s, sigma_r[, dst[, adjust_outliers]]) -> dst
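
Example (Elixir): a minimal usage sketch. The image paths and sigma values below are illustrative placeholders.

    guide = Evision.imread("guide.png")
    src = Evision.imread("src.png")
    # sigma_r must be in [0; 1], per the note above
    dst = Evision.XImgProc.amFilter(guide, src, 16.0, 0.2)
    # the same call with the optional outlier adjustment enabled
    dst = Evision.XImgProc.amFilter(guide, src, 16.0, 0.2, adjust_outliers: true)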

amFilter(joint, src, sigma_s, sigma_r, opts)

@spec amFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  [{:adjust_outliers, term()}] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Simple one-line Adaptive Manifold Filter call.

Positional Arguments
  • joint: Evision.Mat.

    joint (also called guide) image or array of images with any number of channels.

  • src: Evision.Mat.

    image to filter, with any number of channels.

  • sigma_s: double.

    spatial standard deviation.

  • sigma_r: double.

    color space standard deviation; it is similar to the sigma in the color space in bilateralFilter.

Keyword Arguments
  • adjust_outliers: bool.

    optional; specifies whether or not to perform the outlier adjustment operation (Eq. 9 in the original paper).

Return
  • dst: Evision.Mat.t().

    output image.

Note: joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in the bilateralFilter and dtFilter functions. @sa bilateralFilter, dtFilter, guidedFilter

Python prototype (for reference only):

amFilter(joint, src, sigma_s, sigma_r[, dst[, adjust_outliers]]) -> dst

anisotropicDiffusion(named_args)

@spec anisotropicDiffusion(Keyword.t()) :: any() | {:error, String.t()}

anisotropicDiffusion(src, alpha, k, niters)

@spec anisotropicDiffusion(Evision.Mat.maybe_mat_in(), number(), number(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

Performs anisotropic diffusion on an image.

Positional Arguments
  • src: Evision.Mat.

    Source image with 3 channels.

  • alpha: float.

    The amount of time to step forward by on each iteration (normally, it's between 0 and 1).

  • k: float.

    sensitivity to the edges

  • niters: integer().

    The number of iterations

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and the same number of channels as src .

The function applies Perona-Malik anisotropic diffusion to an image. This is the solution to the partial differential equation:

$$\frac{\partial I}{\partial t} = \mathrm{div}\left(c(x,y,t)\,\nabla I\right) = \nabla c \cdot \nabla I + c(x,y,t)\,\Delta I$$

Suggested functions for $c(x,y,t)$ are:

$$c\left(\|\nabla I\|\right) = e^{-\left(\|\nabla I\|/K\right)^{2}}$$

or

$$c\left(\|\nabla I\|\right) = \frac{1}{1 + \left(\frac{\|\nabla I\|}{K}\right)^{2}}$$

Python prototype (for reference only):

anisotropicDiffusion(src, alpha, K, niters[, dst]) -> dst
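
Example (Elixir): a minimal sketch; the parameter values are illustrative only, and src must be a 3-channel image.

    src = Evision.imread("photo.png")
    # alpha = 0.15 (time step), k = 0.02 (edge sensitivity), 10 iterations
    dst = Evision.XImgProc.anisotropicDiffusion(src, 0.15, 0.02, 10)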

anisotropicDiffusion(src, alpha, k, niters, opts)

@spec anisotropicDiffusion(
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  integer(),
  [{atom(), term()}, ...] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Performs anisotropic diffusion on an image.

Positional Arguments
  • src: Evision.Mat.

    Source image with 3 channels.

  • alpha: float.

    The amount of time to step forward by on each iteration (normally, it's between 0 and 1).

  • k: float.

    sensitivity to the edges

  • niters: integer().

    The number of iterations

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and the same number of channels as src .

The function applies Perona-Malik anisotropic diffusion to an image. This is the solution to the partial differential equation:

$$\frac{\partial I}{\partial t} = \mathrm{div}\left(c(x,y,t)\,\nabla I\right) = \nabla c \cdot \nabla I + c(x,y,t)\,\Delta I$$

Suggested functions for $c(x,y,t)$ are:

$$c\left(\|\nabla I\|\right) = e^{-\left(\|\nabla I\|/K\right)^{2}}$$

or

$$c\left(\|\nabla I\|\right) = \frac{1}{1 + \left(\frac{\|\nabla I\|}{K}\right)^{2}}$$

Python prototype (for reference only):

anisotropicDiffusion(src, alpha, K, niters[, dst]) -> dst

bilateralTextureFilter(named_args)

@spec bilateralTextureFilter(Keyword.t()) :: any() | {:error, String.t()}
@spec bilateralTextureFilter(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter, see @cite Cho2014.

Positional Arguments
  • src: Evision.Mat.

    Source image whose depth is 8-bit UINT or 32-bit FLOAT

Keyword Arguments
  • fr: integer().

    Radius of kernel to be used for filtering. It should be a positive integer.

  • numIter: integer().

    Number of iterations of the algorithm. It should be a positive integer.

  • sigmaAlpha: double.

    Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means sharper transition. When the value is negative, it is automatically calculated.

  • sigmaAvg: double.

    Range blur parameter for texture blurring. A larger value makes the result more blurred. When the value is negative, it is automatically calculated as described in the paper.

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and type as src.

@sa rollingGuidanceFilter, bilateralFilter

Python prototype (for reference only):

bilateralTextureFilter(src[, dst[, fr[, numIter[, sigmaAlpha[, sigmaAvg]]]]]) -> dst
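
Example (Elixir): a minimal sketch with illustrative parameters; leaving sigmaAlpha and sigmaAvg at their negative defaults lets them be computed automatically, as described above.

    src = Evision.imread("texture.png")
    dst = Evision.XImgProc.bilateralTextureFilter(src, fr: 3, numIter: 1)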

bilateralTextureFilter(src, opts)

@spec bilateralTextureFilter(
  Evision.Mat.maybe_mat_in(),
  [fr: term(), numIter: term(), sigmaAlpha: term(), sigmaAvg: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter, see @cite Cho2014.

Positional Arguments
  • src: Evision.Mat.

    Source image whose depth is 8-bit UINT or 32-bit FLOAT

Keyword Arguments
  • fr: integer().

    Radius of kernel to be used for filtering. It should be a positive integer.

  • numIter: integer().

    Number of iterations of the algorithm. It should be a positive integer.

  • sigmaAlpha: double.

    Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means sharper transition. When the value is negative, it is automatically calculated.

  • sigmaAvg: double.

    Range blur parameter for texture blurring. A larger value makes the result more blurred. When the value is negative, it is automatically calculated as described in the paper.

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and type as src.

@sa rollingGuidanceFilter, bilateralFilter

Python prototype (for reference only):

bilateralTextureFilter(src[, dst[, fr[, numIter[, sigmaAlpha[, sigmaAvg]]]]]) -> dst

colorMatchTemplate(named_args)

@spec colorMatchTemplate(Keyword.t()) :: any() | {:error, String.t()}

colorMatchTemplate(img, templ)

@spec colorMatchTemplate(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

Compares a color template against overlapped color image regions.

Positional Arguments
Return
  • result: Evision.Mat.t().

Python prototype (for reference only):

colorMatchTemplate(img, templ[, result]) -> result

colorMatchTemplate(img, templ, opts)

@spec colorMatchTemplate(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{atom(), term()}, ...] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Compares a color template against overlapped color image regions.

Positional Arguments
Return
  • result: Evision.Mat.t().

Python prototype (for reference only):

colorMatchTemplate(img, templ[, result]) -> result

computeBadPixelPercent(named_args)

@spec computeBadPixelPercent(Keyword.t()) :: any() | {:error, String.t()}

computeBadPixelPercent(gT, src, rOI)

@spec computeBadPixelPercent(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  {number(), number(), number(), number()}
) :: number() | {:error, String.t()}

Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold)

Positional Arguments
  • gT: Evision.Mat.

    ground truth disparity map

  • src: Evision.Mat.

    disparity map to evaluate

  • rOI: Rect.

    region of interest

Keyword Arguments
  • thresh: integer().

    threshold used to determine "bad" pixels

Return
  • retval: double

Returns the percent of "bad" pixels in src relative to GT.

Python prototype (for reference only):

computeBadPixelPercent(GT, src, ROI[, thresh]) -> retval
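
Example (Elixir): a minimal sketch. It assumes gt and disp are disparity maps of the same size (e.g. gt loaded via readGT, which scales by 16); the ROI rectangle is {x, y, width, height}.

    percent = Evision.XImgProc.computeBadPixelPercent(gt, disp, {0, 0, 640, 480}, thresh: 24)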

computeBadPixelPercent(gT, src, rOI, opts)

@spec computeBadPixelPercent(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  {number(), number(), number(), number()},
  [{:thresh, term()}] | nil
) :: number() | {:error, String.t()}

Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold)

Positional Arguments
  • gT: Evision.Mat.

    ground truth disparity map

  • src: Evision.Mat.

    disparity map to evaluate

  • rOI: Rect.

    region of interest

Keyword Arguments
  • thresh: integer().

    threshold used to determine "bad" pixels

Return
  • retval: double

Returns the percent of "bad" pixels in src relative to GT.

Python prototype (for reference only):

computeBadPixelPercent(GT, src, ROI[, thresh]) -> retval
@spec computeMSE(Keyword.t()) :: any() | {:error, String.t()}

computeMSE(gT, src, rOI)

@spec computeMSE(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  {number(), number(), number(), number()}
) :: number() | {:error, String.t()}

Function for computing mean square error for disparity maps

Positional Arguments
  • gT: Evision.Mat.

    ground truth disparity map

  • src: Evision.Mat.

    disparity map to evaluate

  • rOI: Rect.

    region of interest

Return
  • retval: double

Returns the mean square error between GT and src.

Python prototype (for reference only):

computeMSE(GT, src, ROI) -> retval

contourSampling(named_args)

@spec contourSampling(Keyword.t()) :: any() | {:error, String.t()}

contourSampling(src, nbElt)

@spec contourSampling(Evision.Mat.maybe_mat_in(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

Contour sampling.

Positional Arguments
Return
  • out: Evision.Mat.t().

Python prototype (for reference only):

contourSampling(src, nbElt[, out]) -> out

contourSampling(src, nbElt, opts)

@spec contourSampling(
  Evision.Mat.maybe_mat_in(),
  integer(),
  [{atom(), term()}, ...] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

Contour sampling.

Positional Arguments
Return
  • out: Evision.Mat.t().

Python prototype (for reference only):

contourSampling(src, nbElt[, out]) -> out

covarianceEstimation(named_args)

@spec covarianceEstimation(Keyword.t()) :: any() | {:error, String.t()}

covarianceEstimation(src, windowRows, windowCols)

@spec covarianceEstimation(Evision.Mat.maybe_mat_in(), integer(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

Computes the estimated covariance matrix of an image using the sliding window formulation.

Positional Arguments
  • src: Evision.Mat.

    The source image. Input image must be of a complex type.

  • windowRows: integer().

    The number of rows in the window.

  • windowCols: integer().

    The number of cols in the window. The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom right corner. Each location of the window represents a sample. If the window is the size of the image, then this gives the exact covariance matrix. For all other cases, the sizes of the window will impact the number of samples and the number of elements in the estimated covariance matrix.

Return
  • dst: Evision.Mat.t().

    The destination estimated covariance matrix. Output matrix will be size (windowRows*windowCols, windowRows*windowCols).

Python prototype (for reference only):

covarianceEstimation(src, windowRows, windowCols[, dst]) -> dst
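
Example (Elixir): a sketch of building the complex-typed input from a single-channel float image plus a zero imaginary plane. Evision.Mat.as_type/2, Evision.subtract/2 and Evision.merge/1 are assumed helpers from the core Evision API, and the window size is illustrative.

    gray = Evision.imread("img.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
    re = Evision.Mat.as_type(gray, {:f, 32})
    im = Evision.subtract(re, re)          # zero imaginary plane, same size/type
    complex = Evision.merge([re, im])
    # 5x5 sliding window; the output covariance matrix will be 25x25
    cov = Evision.XImgProc.covarianceEstimation(complex, 5, 5)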

covarianceEstimation(src, windowRows, windowCols, opts)

@spec covarianceEstimation(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  [{atom(), term()}, ...] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Computes the estimated covariance matrix of an image using the sliding window formulation.

Positional Arguments
  • src: Evision.Mat.

    The source image. Input image must be of a complex type.

  • windowRows: integer().

    The number of rows in the window.

  • windowCols: integer().

    The number of cols in the window. The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom right corner. Each location of the window represents a sample. If the window is the size of the image, then this gives the exact covariance matrix. For all other cases, the sizes of the window will impact the number of samples and the number of elements in the estimated covariance matrix.

Return
  • dst: Evision.Mat.t().

    The destination estimated covariance matrix. Output matrix will be size (windowRows*windowCols, windowRows*windowCols).

Python prototype (for reference only):

covarianceEstimation(src, windowRows, windowCols[, dst]) -> dst

createAMFilter(named_args)

@spec createAMFilter(Keyword.t()) :: any() | {:error, String.t()}

createAMFilter(sigma_s, sigma_r)

@spec createAMFilter(number(), number()) ::
  Evision.XImgProc.AdaptiveManifoldFilter.t() | {:error, String.t()}

Factory method that creates an instance of AdaptiveManifoldFilter and performs some initialization routines.

Positional Arguments
  • sigma_s: double.

    spatial standard deviation.

  • sigma_r: double.

    color space standard deviation; it is similar to the sigma in the color space in bilateralFilter.

Keyword Arguments
  • adjust_outliers: bool.

    optional; specifies whether or not to perform the outlier adjustment operation (Eq. 9 in the original paper).

Return
  • retval: Evision.XImgProc.AdaptiveManifoldFilter.t()

For more details about Adaptive Manifold Filter parameters, see the original article @cite Gastal12. Note: joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in the bilateralFilter and dtFilter functions.

Python prototype (for reference only):

createAMFilter(sigma_s, sigma_r[, adjust_outliers]) -> retval

createAMFilter(sigma_s, sigma_r, opts)

@spec createAMFilter(number(), number(), [{:adjust_outliers, term()}] | nil) ::
  Evision.XImgProc.AdaptiveManifoldFilter.t() | {:error, String.t()}

Factory method that creates an instance of AdaptiveManifoldFilter and performs some initialization routines.

Positional Arguments
  • sigma_s: double.

    spatial standard deviation.

  • sigma_r: double.

    color space standard deviation; it is similar to the sigma in the color space in bilateralFilter.

Keyword Arguments
  • adjust_outliers: bool.

    optional; specifies whether or not to perform the outlier adjustment operation (Eq. 9 in the original paper).

Return
  • retval: Evision.XImgProc.AdaptiveManifoldFilter.t()

For more details about Adaptive Manifold Filter parameters, see the original article @cite Gastal12. Note: joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in the bilateralFilter and dtFilter functions.

Python prototype (for reference only):

createAMFilter(sigma_s, sigma_r[, adjust_outliers]) -> retval
@spec createContourFitting() ::
  Evision.XImgProc.ContourFitting.t() | {:error, String.t()}

Creates a ContourFitting algorithm object

Keyword Arguments
  • ctr: integer().

    number of contour points after resampling.

  • fd: integer().

    number of Fourier descriptors.

Return
  • retval: Evision.XImgProc.ContourFitting.t()

Python prototype (for reference only):

createContourFitting([, ctr[, fd]]) -> retval

createContourFitting(named_args)

@spec createContourFitting(Keyword.t()) :: any() | {:error, String.t()}
@spec createContourFitting([ctr: term(), fd: term()] | nil) ::
  Evision.XImgProc.ContourFitting.t() | {:error, String.t()}

Creates a ContourFitting algorithm object

Keyword Arguments
  • ctr: integer().

    number of contour points after resampling.

  • fd: integer().

    number of Fourier descriptors.

Return
  • retval: Evision.XImgProc.ContourFitting.t()

Python prototype (for reference only):

createContourFitting([, ctr[, fd]]) -> retval

createDisparityWLSFilter(named_args)

@spec createDisparityWLSFilter(Keyword.t()) :: any() | {:error, String.t()}
@spec createDisparityWLSFilter(Evision.StereoMatcher.t()) ::
  Evision.XImgProc.DisparityWLSFilter.t() | {:error, String.t()}

Convenience factory method that creates an instance of DisparityWLSFilter and sets up all the relevant filter parameters automatically based on the matcher instance. Currently supports only StereoBM and StereoSGBM.

Positional Arguments
Return
  • retval: Evision.XImgProc.DisparityWLSFilter.t()

Python prototype (for reference only):

createDisparityWLSFilter(matcher_left) -> retval
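
Example (Elixir): a sketch of the typical confidence-based disparity filtering pipeline. The generated names used below (Evision.StereoBM.create/1, Evision.StereoMatcher.compute/3, Evision.XImgProc.DisparityWLSFilter.filter/4 with a disparity_map_right option) and all parameter values are assumptions about the binding, not verified signatures.

    left = Evision.imread("left.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
    right = Evision.imread("right.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
    left_matcher = Evision.StereoBM.create(numDisparities: 64, blockSize: 15)
    wls = Evision.XImgProc.createDisparityWLSFilter(left_matcher)
    right_matcher = Evision.XImgProc.createRightMatcher(left_matcher)
    disp_left = Evision.StereoMatcher.compute(left_matcher, left, right)
    disp_right = Evision.StereoMatcher.compute(right_matcher, right, left)
    filtered =
      Evision.XImgProc.DisparityWLSFilter.filter(wls, disp_left, left,
        disparity_map_right: disp_right)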

createDisparityWLSFilterGeneric(named_args)

@spec createDisparityWLSFilterGeneric(Keyword.t()) :: any() | {:error, String.t()}
@spec createDisparityWLSFilterGeneric(boolean()) ::
  Evision.XImgProc.DisparityWLSFilter.t() | {:error, String.t()}

More generic factory method that creates an instance of DisparityWLSFilter and executes basic initialization routines. When using this method you will need to set up the ROI, matchers and other parameters yourself.

Positional Arguments
  • use_confidence: bool.

    filtering with confidence requires two disparity maps (for the left and right views) and is approximately two times slower. However, quality is typically significantly better.

Return
  • retval: Evision.XImgProc.DisparityWLSFilter.t()

Python prototype (for reference only):

createDisparityWLSFilterGeneric(use_confidence) -> retval

createDTFilter(named_args)

@spec createDTFilter(Keyword.t()) :: any() | {:error, String.t()}

createDTFilter(guide, sigmaSpatial, sigmaColor)

@spec createDTFilter(Evision.Mat.maybe_mat_in(), number(), number()) ::
  Evision.XImgProc.DTFilter.t() | {:error, String.t()}

Factory method that creates an instance of DTFilter and performs initialization routines.

Positional Arguments
  • guide: Evision.Mat.

    guided image (used to build transformed distance, which describes edge structure of guided image).

  • sigmaSpatial: double.

    $\sigma_H$ parameter in the original article; it's similar to the sigma in the coordinate space in bilateralFilter.

  • sigmaColor: double.

    $\sigma_r$ parameter in the original article; it's similar to the sigma in the color space in bilateralFilter.

Keyword Arguments
  • mode: integer().

    one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article.

  • numIters: integer().

    optional number of iterations used for filtering, 3 is quite enough.

Return
  • retval: Evision.XImgProc.DTFilter.t()

For more details about Domain Transform filter parameters, see the original article @cite Gastal11 and Domain Transform filter homepage.

Python prototype (for reference only):

createDTFilter(guide, sigmaSpatial, sigmaColor[, mode[, numIters]]) -> retval
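
Example (Elixir): a minimal sketch; Evision.XImgProc.DTFilter.filter/2 is the assumed generated name for applying the returned filter, and the sigma values are illustrative.

    guide = Evision.imread("guide.png")
    src = Evision.imread("src.png")
    dt = Evision.XImgProc.createDTFilter(guide, 30.0, 10.0)
    out = Evision.XImgProc.DTFilter.filter(dt, src)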

createDTFilter(guide, sigmaSpatial, sigmaColor, opts)

@spec createDTFilter(
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  [mode: term(), numIters: term()] | nil
) :: Evision.XImgProc.DTFilter.t() | {:error, String.t()}

Factory method that creates an instance of DTFilter and performs initialization routines.

Positional Arguments
  • guide: Evision.Mat.

    guided image (used to build transformed distance, which describes edge structure of guided image).

  • sigmaSpatial: double.

    $\sigma_H$ parameter in the original article; it's similar to the sigma in the coordinate space in bilateralFilter.

  • sigmaColor: double.

    $\sigma_r$ parameter in the original article; it's similar to the sigma in the color space in bilateralFilter.

Keyword Arguments
  • mode: integer().

    one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article.

  • numIters: integer().

    optional number of iterations used for filtering, 3 is quite enough.

Return
  • retval: Evision.XImgProc.DTFilter.t()

For more details about Domain Transform filter parameters, see the original article @cite Gastal11 and Domain Transform filter homepage.

Python prototype (for reference only):

createDTFilter(guide, sigmaSpatial, sigmaColor[, mode[, numIters]]) -> retval

createEdgeAwareInterpolator()

@spec createEdgeAwareInterpolator() ::
  Evision.XImgProc.EdgeAwareInterpolator.t() | {:error, String.t()}

Factory method that creates an instance of the EdgeAwareInterpolator.

Return
  • retval: Evision.XImgProc.EdgeAwareInterpolator.t()

Python prototype (for reference only):

createEdgeAwareInterpolator() -> retval

createEdgeAwareInterpolator(named_args)

@spec createEdgeAwareInterpolator(Keyword.t()) :: any() | {:error, String.t()}
@spec createEdgeBoxes() :: Evision.XImgProc.EdgeBoxes.t() | {:error, String.t()}

Creates an EdgeBoxes object

Keyword Arguments
  • alpha: float.

    step size of sliding window search.

  • beta: float.

    nms threshold for object proposals.

  • eta: float.

    adaptation rate for nms threshold.

  • minScore: float.

    min score of boxes to detect.

  • maxBoxes: integer().

    max number of boxes to detect.

  • edgeMinMag: float.

    edge min magnitude. Increase to trade off accuracy for speed.

  • edgeMergeThr: float.

    edge merge threshold. Increase to trade off accuracy for speed.

  • clusterMinMag: float.

    cluster min magnitude. Increase to trade off accuracy for speed.

  • maxAspectRatio: float.

    max aspect ratio of boxes.

  • minBoxArea: float.

    minimum area of boxes.

  • gamma: float.

    affinity sensitivity.

  • kappa: float.

    scale sensitivity.

Return
  • retval: Evision.XImgProc.EdgeBoxes.t()

Python prototype (for reference only):

createEdgeBoxes([, alpha[, beta[, eta[, minScore[, maxBoxes[, edgeMinMag[, edgeMergeThr[, clusterMinMag[, maxAspectRatio[, minBoxArea[, gamma[, kappa]]]]]]]]]]]]) -> retval
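
Example (Elixir): a minimal sketch with illustrative parameters. The returned object is then fed an edge map and an orientation map (typically produced by StructuredEdgeDetection, not shown); Evision.XImgProc.EdgeBoxes.getBoundingBoxes/3 and its {boxes, scores} return shape are assumptions about the binding.

    eb = Evision.XImgProc.createEdgeBoxes(maxBoxes: 50, minScore: 0.03)
    # edges and orientation_map come from a StructuredEdgeDetection pass
    {boxes, scores} = Evision.XImgProc.EdgeBoxes.getBoundingBoxes(eb, edges, orientation_map)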

createEdgeBoxes(named_args)

@spec createEdgeBoxes(Keyword.t()) :: any() | {:error, String.t()}
@spec createEdgeBoxes(
  [
    alpha: term(),
    beta: term(),
    clusterMinMag: term(),
    edgeMergeThr: term(),
    edgeMinMag: term(),
    eta: term(),
    gamma: term(),
    kappa: term(),
    maxAspectRatio: term(),
    maxBoxes: term(),
    minBoxArea: term(),
    minScore: term()
  ]
  | nil
) :: Evision.XImgProc.EdgeBoxes.t() | {:error, String.t()}

Creates an EdgeBoxes object

Keyword Arguments
  • alpha: float.

    step size of sliding window search.

  • beta: float.

    nms threshold for object proposals.

  • eta: float.

    adaptation rate for nms threshold.

  • minScore: float.

    min score of boxes to detect.

  • maxBoxes: integer().

    max number of boxes to detect.

  • edgeMinMag: float.

    edge min magnitude. Increase to trade off accuracy for speed.

  • edgeMergeThr: float.

    edge merge threshold. Increase to trade off accuracy for speed.

  • clusterMinMag: float.

    cluster min magnitude. Increase to trade off accuracy for speed.

  • maxAspectRatio: float.

    max aspect ratio of boxes.

  • minBoxArea: float.

    minimum area of boxes.

  • gamma: float.

    affinity sensitivity.

  • kappa: float.

    scale sensitivity.

Return
  • retval: Evision.XImgProc.EdgeBoxes.t()

Python prototype (for reference only):

createEdgeBoxes([, alpha[, beta[, eta[, minScore[, maxBoxes[, edgeMinMag[, edgeMergeThr[, clusterMinMag[, maxAspectRatio[, minBoxArea[, gamma[, kappa]]]]]]]]]]]]) -> retval
@spec createEdgeDrawing() :: Evision.XImgProc.EdgeDrawing.t() | {:error, String.t()}

Creates a smart pointer to an EdgeDrawing object and initializes it

Return
  • retval: Evision.XImgProc.EdgeDrawing.t()

Python prototype (for reference only):

createEdgeDrawing() -> retval

createEdgeDrawing(named_args)

@spec createEdgeDrawing(Keyword.t()) :: any() | {:error, String.t()}

createFastBilateralSolverFilter(named_args)

@spec createFastBilateralSolverFilter(Keyword.t()) :: any() | {:error, String.t()}

createFastBilateralSolverFilter(guide, sigma_spatial, sigma_luma, sigma_chroma)

@spec createFastBilateralSolverFilter(
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  number()
) ::
  Evision.XImgProc.FastBilateralSolverFilter.t() | {:error, String.t()}

Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.

Positional Arguments
  • guide: Evision.Mat.

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

  • sigma_spatial: double.

    parameter similar to the spatial space sigma (bandwidth) in bilateralFilter.

  • sigma_luma: double.

    parameter similar to the luma space sigma (bandwidth) in bilateralFilter.

  • sigma_chroma: double.

    parameter similar to the chroma space sigma (bandwidth) in bilateralFilter.

Keyword Arguments
  • lambda: double.

    smoothness strength parameter for solver.

  • num_iter: integer().

    number of iterations used for solver, 25 is usually enough.

  • max_tol: double.

    convergence tolerance used for solver.

Return
  • retval: Evision.XImgProc.FastBilateralSolverFilter.t()

For more details about the Fast Bilateral Solver parameters, see the original paper @cite BarronPoole2016.

Python prototype (for reference only):

createFastBilateralSolverFilter(guide, sigma_spatial, sigma_luma, sigma_chroma[, lambda[, num_iter[, max_tol]]]) -> retval

createFastBilateralSolverFilter(guide, sigma_spatial, sigma_luma, sigma_chroma, opts)

@spec createFastBilateralSolverFilter(
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  number(),
  [lambda: term(), max_tol: term(), num_iter: term()] | nil
) :: Evision.XImgProc.FastBilateralSolverFilter.t() | {:error, String.t()}

Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.

Positional Arguments
  • guide: Evision.Mat.

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

  • sigma_spatial: double.

    parameter similar to the spatial space sigma (bandwidth) in bilateralFilter.

  • sigma_luma: double.

    parameter similar to the luma space sigma (bandwidth) in bilateralFilter.

  • sigma_chroma: double.

    parameter similar to the chroma space sigma (bandwidth) in bilateralFilter.

Keyword Arguments
  • lambda: double.

    smoothness strength parameter for solver.

  • num_iter: integer().

    number of iterations used for solver, 25 is usually enough.

  • max_tol: double.

    convergence tolerance used for solver.

Return
  • retval: Evision.XImgProc.FastBilateralSolverFilter.t()

For more details about the Fast Bilateral Solver parameters, see the original paper @cite BarronPoole2016.

Python prototype (for reference only):

createFastBilateralSolverFilter(guide, sigma_spatial, sigma_luma, sigma_chroma[, lambda[, num_iter[, max_tol]]]) -> retval

createFastGlobalSmootherFilter(named_args)

@spec createFastGlobalSmootherFilter(Keyword.t()) :: any() | {:error, String.t()}

createFastGlobalSmootherFilter(guide, lambda, sigma_color)

@spec createFastGlobalSmootherFilter(Evision.Mat.maybe_mat_in(), number(), number()) ::
  Evision.XImgProc.FastGlobalSmootherFilter.t() | {:error, String.t()}

Factory method that creates an instance of FastGlobalSmootherFilter and executes the initialization routines.

Positional Arguments
  • guide: Evision.Mat.

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

  • lambda: double.

    parameter defining the amount of regularization

  • sigma_color: double.

    parameter similar to the color space sigma in bilateralFilter.

Keyword Arguments
  • lambda_attenuation: double.

    internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.

  • num_iter: integer().

    number of iterations used for filtering, 3 is usually enough.

Return
  • retval: Evision.XImgProc.FastGlobalSmootherFilter.t()

For more details about Fast Global Smoother parameters, see the original paper @cite Min2014. However, please note that there are several differences. Lambda attenuation described in the paper is implemented a bit differently so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, in case of image filtering where source and guide image are the same, authors propose to dynamically update the guide image after each iteration. To maximize the performance this feature was not implemented here.

Python prototype (for reference only):

createFastGlobalSmootherFilter(guide, lambda, sigma_color[, lambda_attenuation[, num_iter]]) -> retval
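
Example (Elixir): a minimal sketch with illustrative lambda and sigma_color values; Evision.XImgProc.FastGlobalSmootherFilter.filter/2 is the assumed generated name for applying the returned filter.

    guide = Evision.imread("guide.png")
    src = Evision.imread("src.png")
    fgs = Evision.XImgProc.createFastGlobalSmootherFilter(guide, 100.0, 1.5)
    out = Evision.XImgProc.FastGlobalSmootherFilter.filter(fgs, src)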

createFastGlobalSmootherFilter(guide, lambda, sigma_color, opts)

@spec createFastGlobalSmootherFilter(
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  [lambda_attenuation: term(), num_iter: term()] | nil
) :: Evision.XImgProc.FastGlobalSmootherFilter.t() | {:error, String.t()}

Factory method that creates an instance of FastGlobalSmootherFilter and executes the initialization routines.

Positional Arguments
  • guide: Evision.Mat.

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

  • lambda: double.

    parameter defining the amount of regularization

  • sigma_color: double.

    parameter similar to the color space sigma in bilateralFilter.

Keyword Arguments
  • lambda_attenuation: double.

    internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.

  • num_iter: integer().

    number of iterations used for filtering, 3 is usually enough.

Return
  • retval: Evision.XImgProc.FastGlobalSmootherFilter.t()

For more details about Fast Global Smoother parameters, see the original paper @cite Min2014. However, please note that there are several differences. Lambda attenuation described in the paper is implemented a bit differently so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, in case of image filtering where source and guide image are the same, authors propose to dynamically update the guide image after each iteration. To maximize the performance this feature was not implemented here.

Python prototype (for reference only):

createFastGlobalSmootherFilter(guide, lambda, sigma_color[, lambda_attenuation[, num_iter]]) -> retval

createFastLineDetector()

@spec createFastLineDetector() ::
  Evision.XImgProc.FastLineDetector.t() | {:error, String.t()}

Creates a smart pointer to a FastLineDetector object and initializes it

Keyword Arguments
  • length_threshold: integer().

    Segments shorter than this will be discarded

  • distance_threshold: float.

    A point lying farther from a hypothesized line segment than this will be regarded as an outlier

  • canny_th1: double.

    First threshold for hysteresis procedure in Canny()

  • canny_th2: double.

    Second threshold for hysteresis procedure in Canny()

  • canny_aperture_size: integer().

    Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image.

  • do_merge: bool.

    If true, incremental merging of segments will be performed

Return
  • retval: Evision.XImgProc.FastLineDetector.t()

Python prototype (for reference only):

createFastLineDetector([, length_threshold[, distance_threshold[, canny_th1[, canny_th2[, canny_aperture_size[, do_merge]]]]]]) -> retval
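
Example (Elixir): a minimal sketch; Evision.XImgProc.FastLineDetector.detect/2 is the assumed generated name for FastLineDetector's detect method, and the thresholds are illustrative.

    gray = Evision.imread("building.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
    fld = Evision.XImgProc.createFastLineDetector(length_threshold: 10, do_merge: true)
    lines = Evision.XImgProc.FastLineDetector.detect(fld, gray)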

createFastLineDetector(named_args)

@spec createFastLineDetector(Keyword.t()) :: any() | {:error, String.t()}
@spec createFastLineDetector(
  [
    canny_aperture_size: term(),
    canny_th1: term(),
    canny_th2: term(),
    distance_threshold: term(),
    do_merge: term(),
    length_threshold: term()
  ]
  | nil
) :: Evision.XImgProc.FastLineDetector.t() | {:error, String.t()}

Creates a smart pointer to a FastLineDetector object and initializes it

Keyword Arguments
  • length_threshold: integer().

    Segments shorter than this will be discarded

  • distance_threshold: float.

    A point lying farther from a hypothesized line segment than this will be regarded as an outlier

  • canny_th1: double.

    First threshold for hysteresis procedure in Canny()

  • canny_th2: double.

    Second threshold for hysteresis procedure in Canny()

  • canny_aperture_size: integer().

    Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image.

  • do_merge: bool.

    If true, incremental merging of segments will be performed

Return
  • retval: Evision.XImgProc.FastLineDetector.t()

Python prototype (for reference only):

createFastLineDetector([, length_threshold[, distance_threshold[, canny_th1[, canny_th2[, canny_aperture_size[, do_merge]]]]]]) -> retval

createGraphSegmentation()

@spec createGraphSegmentation() ::
  Evision.XImgProc.GraphSegmentation.t() | {:error, String.t()}

Creates a graph-based segmentor

Keyword Arguments
  • sigma: double.

    The sigma parameter, used to smooth the image

  • k: float.

    The k parameter of the algorithm

  • min_size: integer().

    The minimum size of segments

Return
  • retval: Evision.XImgProc.GraphSegmentation.t()

Python prototype (for reference only):

createGraphSegmentation([, sigma[, k[, min_size]]]) -> retval
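
Example (Elixir): a minimal sketch with illustrative parameters; Evision.XImgProc.GraphSegmentation.processImage/2 is the assumed generated name for running the segmentor, returning a per-pixel label map.

    img = Evision.imread("scene.png")
    gs = Evision.XImgProc.createGraphSegmentation(sigma: 0.8, k: 300.0, min_size: 100)
    labels = Evision.XImgProc.GraphSegmentation.processImage(gs, img)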

createGraphSegmentation(named_args)

@spec createGraphSegmentation(Keyword.t()) :: any() | {:error, String.t()}
@spec createGraphSegmentation([k: term(), min_size: term(), sigma: term()] | nil) ::
  Evision.XImgProc.GraphSegmentation.t() | {:error, String.t()}

Creates a graph-based segmentor

Keyword Arguments
  • sigma: double.

    The sigma parameter, used to smooth the image

  • k: float.

    The k parameter of the algorithm

  • min_size: integer().

    The minimum size of segments

Return
  • retval: Evision.XImgProc.GraphSegmentation.t()

Python prototype (for reference only):

createGraphSegmentation([, sigma[, k[, min_size]]]) -> retval

createGuidedFilter(named_args)

@spec createGuidedFilter(Keyword.t()) :: any() | {:error, String.t()}

createGuidedFilter(guide, radius, eps)

@spec createGuidedFilter(Evision.Mat.maybe_mat_in(), integer(), number()) ::
  Evision.XImgProc.GuidedFilter.t() | {:error, String.t()}

Factory method that creates an instance of GuidedFilter and performs initialization routines.

Positional Arguments
  • guide: Evision.Mat.

    guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.

  • radius: integer().

    radius of Guided Filter.

  • eps: double.

    regularization term of Guided Filter. eps^2 is similar to the sigma in the color space in bilateralFilter.

Keyword Arguments
  • scale: double.

    subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale == 0.5 shrinks the image by 2x inside the filter).

Return
  • retval: Evision.XImgProc.GuidedFilter.t()

For more details about (Fast) Guided Filter parameters, see the original articles @cite Kaiming10 @cite Kaiming15 .

Python prototype (for reference only):

createGuidedFilter(guide, radius, eps[, scale]) -> retval
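
Example (Elixir): a minimal sketch with illustrative radius and eps values; Evision.XImgProc.GuidedFilter.filter/2 is the assumed generated name for applying the returned filter.

    guide = Evision.imread("guide.png")
    src = Evision.imread("src.png")
    gf = Evision.XImgProc.createGuidedFilter(guide, 8, 100.0)
    out = Evision.XImgProc.GuidedFilter.filter(gf, src)
    # pass scale: 0.5 as a keyword argument to createGuidedFilter for the fast variant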

createGuidedFilter(guide, radius, eps, opts)

@spec createGuidedFilter(
  Evision.Mat.maybe_mat_in(),
  integer(),
  number(),
  [{:scale, term()}] | nil
) ::
  Evision.XImgProc.GuidedFilter.t() | {:error, String.t()}

Factory method that creates an instance of GuidedFilter and performs initialization routines.

Positional Arguments
  • guide: Evision.Mat.

    guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.

  • radius: integer().

    radius of Guided Filter.

  • eps: double.

    regularization term of Guided Filter. eps^2 is similar to the sigma in the color space in bilateralFilter.

Keyword Arguments
  • scale: double.

    subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale == 0.5 shrinks the image by 2x inside the filter).

Return
  • retval: Evision.XImgProc.GuidedFilter.t()

For more details about (Fast) Guided Filter parameters, see the original articles @cite Kaiming10 @cite Kaiming15 .

Python prototype (for reference only):

createGuidedFilter(guide, radius, eps[, scale]) -> retval

createQuaternionImage(named_args)

@spec createQuaternionImage(Keyword.t()) :: any() | {:error, String.t()}
@spec createQuaternionImage(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

creates a quaternion image.

Positional Arguments
Return
  • qimg: Evision.Mat.t().

Python prototype (for reference only):

createQuaternionImage(img[, qimg]) -> qimg

createQuaternionImage(img, opts)

@spec createQuaternionImage(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

creates a quaternion image.

Positional Arguments
Return
  • qimg: Evision.Mat.t().

Python prototype (for reference only):

createQuaternionImage(img[, qimg]) -> qimg
@spec createRFFeatureGetter() ::
  Evision.XImgProc.RFFeatureGetter.t() | {:error, String.t()}

createRFFeatureGetter

Return
  • retval: Evision.XImgProc.RFFeatureGetter.t()

Python prototype (for reference only):

createRFFeatureGetter() -> retval

createRFFeatureGetter(named_args)

@spec createRFFeatureGetter(Keyword.t()) :: any() | {:error, String.t()}
@spec createRICInterpolator() ::
  Evision.XImgProc.RICInterpolator.t() | {:error, String.t()}

Factory method that creates an instance of the RICInterpolator.

Return
  • retval: Evision.XImgProc.RICInterpolator.t()

Python prototype (for reference only):

createRICInterpolator() -> retval

createRICInterpolator(named_args)

@spec createRICInterpolator(Keyword.t()) :: any() | {:error, String.t()}

createRightMatcher(named_args)

@spec createRightMatcher(Keyword.t()) :: any() | {:error, String.t()}
@spec createRightMatcher(Evision.StereoMatcher.t()) ::
  Evision.StereoMatcher.t() | {:error, String.t()}

Convenience method to set up the matcher for computing the right-view disparity map that is required in case of filtering with confidence.

Positional Arguments
Return
  • retval: Evision.StereoMatcher.t()

Python prototype (for reference only):

createRightMatcher(matcher_left) -> retval

createScanSegment(named_args)

@spec createScanSegment(Keyword.t()) :: any() | {:error, String.t()}

createScanSegment(image_width, image_height, num_superpixels)

@spec createScanSegment(integer(), integer(), integer()) ::
  Evision.XImgProc.ScanSegment.t() | {:error, String.t()}

Initializes a ScanSegment object.

Positional Arguments
  • image_width: integer().

    Image width.

  • image_height: integer().

    Image height.

  • num_superpixels: integer().

    Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size). Use getNumberOfSuperpixels() to get the actual number.

Keyword Arguments
  • slices: integer().

    Number of processing threads for parallelisation. Setting -1 uses the maximum number of threads. In practice, four threads is enough for smaller images and eight threads for larger ones.

  • merge_small: bool.

    merge small segments to give the desired number of superpixels. Processing is much faster without merging, but many small segments will be left in the image.

Return
  • retval: Evision.XImgProc.ScanSegment.t()

The function initializes a ScanSegment object for the input image. It stores the parameters of the image: image_width and image_height. It also sets the parameters of the F-DBSCAN superpixel algorithm, which are: num_superpixels, threads, and merge_small.

Python prototype (for reference only):

createScanSegment(image_width, image_height, num_superpixels[, slices[, merge_small]]) -> retval
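
Example (Elixir): a minimal sketch; Evision.XImgProc.ScanSegment.iterate/2 and getLabels/1 are the assumed generated names for the corresponding ScanSegment methods, and the image size and superpixel count are illustrative.

    img = Evision.imread("scene.png")  # must match the declared 640x480 size
    ss = Evision.XImgProc.createScanSegment(640, 480, 400, slices: 4, merge_small: true)
    ss = Evision.XImgProc.ScanSegment.iterate(ss, img)
    labels = Evision.XImgProc.ScanSegment.getLabels(ss)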

createScanSegment(image_width, image_height, num_superpixels, opts)

@spec createScanSegment(
  integer(),
  integer(),
  integer(),
  [merge_small: term(), slices: term()] | nil
) ::
  Evision.XImgProc.ScanSegment.t() | {:error, String.t()}

Initializes a ScanSegment object.

Positional Arguments
  • image_width: integer().

    Image width.

  • image_height: integer().

    Image height.

  • num_superpixels: integer().

    Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size). Use getNumberOfSuperpixels() to get the actual number.

Keyword Arguments
  • slices: integer().

    Number of processing threads for parallelisation. Setting -1 uses the maximum number of threads. In practice, four threads is enough for smaller images and eight threads for larger ones.

  • merge_small: bool.

    merge small segments to give the desired number of superpixels. Processing is much faster without merging, but many small segments will be left in the image.

Return
  • retval: Evision.XImgProc.ScanSegment.t()

The function initializes a ScanSegment object for the input image. It stores the parameters of the image: image_width and image_height. It also sets the parameters of the F-DBSCAN superpixel algorithm, which are: num_superpixels, threads, and merge_small.

Python prototype (for reference only):

createScanSegment(image_width, image_height, num_superpixels[, slices[, merge_small]]) -> retval

createSelectiveSearchSegmentation()

@spec createSelectiveSearchSegmentation() ::
  Evision.XImgProc.SelectiveSearchSegmentation.t() | {:error, String.t()}

Create a new SelectiveSearchSegmentation class.

Return
  • retval: Evision.XImgProc.SelectiveSearchSegmentation.t()

Python prototype (for reference only):

createSelectiveSearchSegmentation() -> retval

createSelectiveSearchSegmentation(named_args)

@spec createSelectiveSearchSegmentation(Keyword.t()) :: any() | {:error, String.t()}

createSelectiveSearchSegmentationStrategyColor()

@spec createSelectiveSearchSegmentationStrategyColor() ::
  Evision.XImgProc.SelectiveSearchSegmentationStrategyColor.t()
  | {:error, String.t()}

Create a new color-based strategy

Return
  • retval: Evision.XImgProc.SelectiveSearchSegmentationStrategyColor.t()

Python prototype (for reference only):

createSelectiveSearchSegmentationStrategyColor() -> retval

createSelectiveSearchSegmentationStrategyColor(named_args)

@spec createSelectiveSearchSegmentationStrategyColor(Keyword.t()) ::
  any() | {:error, String.t()}

createSelectiveSearchSegmentationStrategyFill()

@spec createSelectiveSearchSegmentationStrategyFill() ::
  Evision.XImgProc.SelectiveSearchSegmentationStrategyFill.t()
  | {:error, String.t()}

Create a new fill-based strategy

Return
  • retval: Evision.XImgProc.SelectiveSearchSegmentationStrategyFill.t()

Python prototype (for reference only):

createSelectiveSearchSegmentationStrategyFill() -> retval

createSelectiveSearchSegmentationStrategyFill(named_args)

@spec createSelectiveSearchSegmentationStrategyFill(Keyword.t()) ::
  any() | {:error, String.t()}

createSelectiveSearchSegmentationStrategyMultiple()

@spec createSelectiveSearchSegmentationStrategyMultiple() ::
  Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()
  | {:error, String.t()}

Create a new multiple strategy

Return
  • retval: Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()

Python prototype (for reference only):

createSelectiveSearchSegmentationStrategyMultiple() -> retval

createSelectiveSearchSegmentationStrategyMultiple(named_args)

@spec createSelectiveSearchSegmentationStrategyMultiple(Keyword.t()) ::
  any() | {:error, String.t()}
@spec createSelectiveSearchSegmentationStrategyMultiple(
  Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
) ::
  Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()
  | {:error, String.t()}

Create a new multiple strategy and set one substrategy

Positional Arguments
  • s1: Evision.XImgProc.SelectiveSearchSegmentationStrategy.t().

    The first strategy

Return
  • retval: Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()

Python prototype (for reference only):

createSelectiveSearchSegmentationStrategyMultiple(s1) -> retval

createSelectiveSearchSegmentationStrategyMultiple(s1, s2)

Create a new multiple strategy and set two substrategies, with equal weights

Positional Arguments
  • s1: Evision.XImgProc.SelectiveSearchSegmentationStrategy.t().

    The first strategy

  • s2: Evision.XImgProc.SelectiveSearchSegmentationStrategy.t().

    The second strategy

Return
  • retval: Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()

Python prototype (for reference only):

createSelectiveSearchSegmentationStrategyMultiple(s1, s2) -> retval
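
For example, a minimal sketch that combines a color-based and a fill-based strategy with equal weights, using only the factory functions documented above:

    color = Evision.XImgProc.createSelectiveSearchSegmentationStrategyColor()
    fill = Evision.XImgProc.createSelectiveSearchSegmentationStrategyFill()
    multi = Evision.XImgProc.createSelectiveSearchSegmentationStrategyMultiple(color, fill)
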
Link to this function

createSelectiveSearchSegmentationStrategyMultiple(s1, s2, s3)

View Source

Create a new multiple strategy and set three substrategies, with equal weights

Positional Arguments
  • s1: Evision.XImgProc.SelectiveSearchSegmentationStrategy.t().

    The first strategy

  • s2: Evision.XImgProc.SelectiveSearchSegmentationStrategy.t().

    The second strategy

  • s3: Evision.XImgProc.SelectiveSearchSegmentationStrategy.t().

    The third strategy

Return
  • retval: Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()

Python prototype (for reference only):

createSelectiveSearchSegmentationStrategyMultiple(s1, s2, s3) -> retval
Link to this function

createSelectiveSearchSegmentationStrategyMultiple(s1, s2, s3, s4)

View Source

Create a new multiple strategy and set four substrategies, with equal weights

Positional Arguments
  • s1: Evision.XImgProc.SelectiveSearchSegmentationStrategy.t().

    The first strategy

  • s2: Evision.XImgProc.SelectiveSearchSegmentationStrategy.t().

    The second strategy

  • s3: Evision.XImgProc.SelectiveSearchSegmentationStrategy.t().

    The third strategy

  • s4: Evision.XImgProc.SelectiveSearchSegmentationStrategy.t().

The fourth strategy

Return
  • retval: Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()

Python prototype (for reference only):

createSelectiveSearchSegmentationStrategyMultiple(s1, s2, s3, s4) -> retval
Link to this function

createSelectiveSearchSegmentationStrategySize()

View Source
@spec createSelectiveSearchSegmentationStrategySize() ::
  Evision.XImgProc.SelectiveSearchSegmentationStrategySize.t()
  | {:error, String.t()}

Create a new size-based strategy

Return
  • retval: Evision.XImgProc.SelectiveSearchSegmentationStrategySize.t()

Python prototype (for reference only):

createSelectiveSearchSegmentationStrategySize() -> retval
Link to this function

createSelectiveSearchSegmentationStrategySize(named_args)

View Source
@spec createSelectiveSearchSegmentationStrategySize(Keyword.t()) ::
  any() | {:error, String.t()}
Link to this function

createSelectiveSearchSegmentationStrategyTexture()

View Source
@spec createSelectiveSearchSegmentationStrategyTexture() ::
  Evision.XImgProc.SelectiveSearchSegmentationStrategyTexture.t()
  | {:error, String.t()}

Create a new texture-based strategy

Return
  • retval: Evision.XImgProc.SelectiveSearchSegmentationStrategyTexture.t()

Python prototype (for reference only):

createSelectiveSearchSegmentationStrategyTexture() -> retval
Link to this function

createSelectiveSearchSegmentationStrategyTexture(named_args)

View Source
@spec createSelectiveSearchSegmentationStrategyTexture(Keyword.t()) ::
  any() | {:error, String.t()}
Link to this function

createStructuredEdgeDetection(named_args)

View Source
@spec createStructuredEdgeDetection(Keyword.t()) :: any() | {:error, String.t()}
@spec createStructuredEdgeDetection(binary()) ::
  Evision.XImgProc.StructuredEdgeDetection.t() | {:error, String.t()}

createStructuredEdgeDetection

Positional Arguments
Keyword Arguments
  • howToGetFeatures: Evision.XImgProc.RFFeatureGetter.t().
Return
  • retval: Evision.XImgProc.StructuredEdgeDetection.t()

Python prototype (for reference only):

createStructuredEdgeDetection(model[, howToGetFeatures]) -> retval
Link to this function

createStructuredEdgeDetection(model, opts)

View Source
@spec createStructuredEdgeDetection(binary(), [{:howToGetFeatures, term()}] | nil) ::
  Evision.XImgProc.StructuredEdgeDetection.t() | {:error, String.t()}

createStructuredEdgeDetection

Positional Arguments
Keyword Arguments
  • howToGetFeatures: Evision.XImgProc.RFFeatureGetter.t().
Return
  • retval: Evision.XImgProc.StructuredEdgeDetection.t()

Python prototype (for reference only):

createStructuredEdgeDetection(model[, howToGetFeatures]) -> retval
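
A minimal sketch; the model path below is a placeholder for a pretrained structured-forest model file (such models are distributed alongside OpenCV's extra test data):

    # the model path is illustrative, not a bundled asset
    sed = Evision.XImgProc.createStructuredEdgeDetection("model.yml.gz")

Detection itself then goes through the returned StructuredEdgeDetection struct (detectEdges in the C++ API, which expects a CV_32FC3 image scaled to [0, 1]).
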
Link to this function

createSuperpixelLSC(named_args)

View Source
@spec createSuperpixelLSC(Keyword.t()) :: any() | {:error, String.t()}
@spec createSuperpixelLSC(Evision.Mat.maybe_mat_in()) ::
  Evision.XImgProc.SuperpixelLSC.t() | {:error, String.t()}

Class implementing the LSC (Linear Spectral Clustering) superpixels

Positional Arguments
Keyword Arguments
  • region_size: integer().

    Chooses an average superpixel size measured in pixels

  • ratio: float.

Chooses the enforcement of the superpixel compactness factor

Return
  • retval: Evision.XImgProc.SuperpixelLSC.t()

The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results on color images, it is recommended to preprocess the image with a light Gaussian blur using a small 3 x 3 kernel and to convert it into the CIELab color space.

Python prototype (for reference only):

createSuperpixelLSC(image[, region_size[, ratio]]) -> retval
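
A minimal usage sketch; region_size is illustrative, and the iterate/2 and getLabelContourMask/1 calls on the returned struct are assumed to mirror the C++ SuperpixelLSC API:

    img = Evision.imread("input.png")
    lsc = Evision.XImgProc.createSuperpixelLSC(img, region_size: 16)
    # run the clustering, then fetch a contour mask for visualization
    Evision.XImgProc.SuperpixelLSC.iterate(lsc, num_iterations: 10)
    mask = Evision.XImgProc.SuperpixelLSC.getLabelContourMask(lsc)
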
Link to this function

createSuperpixelLSC(image, opts)

View Source
@spec createSuperpixelLSC(
  Evision.Mat.maybe_mat_in(),
  [ratio: term(), region_size: term()] | nil
) ::
  Evision.XImgProc.SuperpixelLSC.t() | {:error, String.t()}

Class implementing the LSC (Linear Spectral Clustering) superpixels

Positional Arguments
Keyword Arguments
  • region_size: integer().

    Chooses an average superpixel size measured in pixels

  • ratio: float.

Chooses the enforcement of the superpixel compactness factor

Return
  • retval: Evision.XImgProc.SuperpixelLSC.t()

The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results on color images, it is recommended to preprocess the image with a light Gaussian blur using a small 3 x 3 kernel and to convert it into the CIELab color space.

Python prototype (for reference only):

createSuperpixelLSC(image[, region_size[, ratio]]) -> retval
Link to this function

createSuperpixelSEEDS(named_args)

View Source
@spec createSuperpixelSEEDS(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

createSuperpixelSEEDS(image_width, image_height, image_channels, num_superpixels, num_levels)

View Source
@spec createSuperpixelSEEDS(integer(), integer(), integer(), integer(), integer()) ::
  Evision.XImgProc.SuperpixelSEEDS.t() | {:error, String.t()}

Initializes a SuperpixelSEEDS object.

Positional Arguments
  • image_width: integer().

    Image width.

  • image_height: integer().

    Image height.

  • image_channels: integer().

    Number of channels of the image.

  • num_superpixels: integer().

    Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.

  • num_levels: integer().

    Number of block levels. The more levels, the more accurate is the segmentation, but needs more memory and CPU time.

Keyword Arguments
  • prior: integer().

enables a 3x3 shape smoothing term if > 0. A larger value leads to smoother shapes. prior must be in the range [0, 5].

  • histogram_bins: integer().

    Number of histogram bins.

  • double_step: bool.

    If true, iterate each block level twice for higher accuracy.

Return
  • retval: Evision.XImgProc.SuperpixelSEEDS.t()

The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step. The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed across the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively, down to the smallest block level. An example of an initialization with 4 block levels is illustrated in the following figure.

Python prototype (for reference only):

createSuperpixelSEEDS(image_width, image_height, image_channels, num_superpixels, num_levels[, prior[, histogram_bins[, double_step]]]) -> retval
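
A minimal sketch, with illustrative values, sized for a 640 x 480 BGR image and asking for 400 superpixels over 4 block levels:

    seeds = Evision.XImgProc.createSuperpixelSEEDS(640, 480, 3, 400, 4, prior: 2)
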
Link to this function

createSuperpixelSEEDS(image_width, image_height, image_channels, num_superpixels, num_levels, opts)

View Source
@spec createSuperpixelSEEDS(
  integer(),
  integer(),
  integer(),
  integer(),
  integer(),
  [double_step: term(), histogram_bins: term(), prior: term()] | nil
) :: Evision.XImgProc.SuperpixelSEEDS.t() | {:error, String.t()}

Initializes a SuperpixelSEEDS object.

Positional Arguments
  • image_width: integer().

    Image width.

  • image_height: integer().

    Image height.

  • image_channels: integer().

    Number of channels of the image.

  • num_superpixels: integer().

    Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.

  • num_levels: integer().

    Number of block levels. The more levels, the more accurate is the segmentation, but needs more memory and CPU time.

Keyword Arguments
  • prior: integer().

enables a 3x3 shape smoothing term if > 0. A larger value leads to smoother shapes. prior must be in the range [0, 5].

  • histogram_bins: integer().

    Number of histogram bins.

  • double_step: bool.

    If true, iterate each block level twice for higher accuracy.

Return
  • retval: Evision.XImgProc.SuperpixelSEEDS.t()

The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step. The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed across the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively, down to the smallest block level. An example of an initialization with 4 block levels is illustrated in the following figure.

Python prototype (for reference only):

createSuperpixelSEEDS(image_width, image_height, image_channels, num_superpixels, num_levels[, prior[, histogram_bins[, double_step]]]) -> retval
Link to this function

createSuperpixelSLIC(named_args)

View Source
@spec createSuperpixelSLIC(Keyword.t()) :: any() | {:error, String.t()}
@spec createSuperpixelSLIC(Evision.Mat.maybe_mat_in()) ::
  Evision.XImgProc.SuperpixelSLIC.t() | {:error, String.t()}

Initialize a SuperpixelSLIC object

Positional Arguments
Keyword Arguments
  • algorithm: integer().

Chooses the algorithm variant to use: SLIC segments the image using a desired region_size; SLICO additionally optimizes using an adaptive compactness factor, while MSLIC optimizes using manifold methods, resulting in more content-sensitive superpixels.

  • region_size: integer().

    Chooses an average superpixel size measured in pixels

  • ruler: float.

Chooses the enforcement of the superpixel smoothness factor

Return
  • retval: Evision.XImgProc.SuperpixelSLIC.t()

The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future iterations over the given image. For enhanced results on color images, it is recommended to preprocess the image with a light Gaussian blur using a small 3 x 3 kernel and to convert it into the CIELab color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.

Python prototype (for reference only):

createSuperpixelSLIC(image[, algorithm[, region_size[, ruler]]]) -> retval
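
A minimal sketch with illustrative parameter values (the algorithm keyword is left at its default here):

    img = Evision.imread("input.png")
    slic = Evision.XImgProc.createSuperpixelSLIC(img, region_size: 20, ruler: 10.0)
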
Link to this function

createSuperpixelSLIC(image, opts)

View Source
@spec createSuperpixelSLIC(
  Evision.Mat.maybe_mat_in(),
  [algorithm: term(), region_size: term(), ruler: term()] | nil
) :: Evision.XImgProc.SuperpixelSLIC.t() | {:error, String.t()}

Initialize a SuperpixelSLIC object

Positional Arguments
Keyword Arguments
  • algorithm: integer().

Chooses the algorithm variant to use: SLIC segments the image using a desired region_size; SLICO additionally optimizes using an adaptive compactness factor, while MSLIC optimizes using manifold methods, resulting in more content-sensitive superpixels.

  • region_size: integer().

    Chooses an average superpixel size measured in pixels

  • ruler: float.

Chooses the enforcement of the superpixel smoothness factor

Return
  • retval: Evision.XImgProc.SuperpixelSLIC.t()

The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future iterations over the given image. For enhanced results on color images, it is recommended to preprocess the image with a light Gaussian blur using a small 3 x 3 kernel and to convert it into the CIELab color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.

Python prototype (for reference only):

createSuperpixelSLIC(image[, algorithm[, region_size[, ruler]]]) -> retval
@spec dtFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

dtFilter(guide, src, sigmaSpatial, sigmaColor)

View Source
@spec dtFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  number(),
  number()
) ::
  Evision.Mat.t() | {:error, String.t()}

Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guide image, use the DTFilter interface to avoid extra computations at the initialization stage.

Positional Arguments
  • guide: Evision.Mat.

guided image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

  • src: Evision.Mat.

    filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

  • sigmaSpatial: double.

sigma_H parameter in the original article; it's similar to the sigma in the coordinate space of bilateralFilter.

  • sigmaColor: double.

sigma_r parameter in the original article; it's similar to the sigma in the color space of bilateralFilter.

Keyword Arguments
  • mode: integer().

one of three modes, DTF_NC, DTF_RF or DTF_IC, which correspond to the three modes for filtering 2D signals described in the article.

  • numIters: integer().

    optional number of iterations used for filtering, 3 is quite enough.

Return
  • dst: Evision.Mat.t().

    destination image

@sa bilateralFilter, guidedFilter, amFilter

Python prototype (for reference only):

dtFilter(guide, src, sigmaSpatial, sigmaColor[, dst[, mode[, numIters]]]) -> dst
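
A minimal sketch, using the image as its own guide; the sigma values are illustrative:

    src = Evision.imread("input.png")
    dst = Evision.XImgProc.dtFilter(src, src, 10.0, 25.0)
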
Link to this function

dtFilter(guide, src, sigmaSpatial, sigmaColor, opts)

View Source
@spec dtFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  [mode: term(), numIters: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guide image, use the DTFilter interface to avoid extra computations at the initialization stage.

Positional Arguments
  • guide: Evision.Mat.

guided image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

  • src: Evision.Mat.

    filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

  • sigmaSpatial: double.

sigma_H parameter in the original article; it's similar to the sigma in the coordinate space of bilateralFilter.

  • sigmaColor: double.

sigma_r parameter in the original article; it's similar to the sigma in the color space of bilateralFilter.

Keyword Arguments
  • mode: integer().

one of three modes, DTF_NC, DTF_RF or DTF_IC, which correspond to the three modes for filtering 2D signals described in the article.

  • numIters: integer().

    optional number of iterations used for filtering, 3 is quite enough.

Return
  • dst: Evision.Mat.t().

    destination image

@sa bilateralFilter, guidedFilter, amFilter

Python prototype (for reference only):

dtFilter(guide, src, sigmaSpatial, sigmaColor[, dst[, mode[, numIters]]]) -> dst
Link to this function

edgePreservingFilter(named_args)

View Source
@spec edgePreservingFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

edgePreservingFilter(src, d, threshold)

View Source
@spec edgePreservingFilter(Evision.Mat.maybe_mat_in(), integer(), number()) ::
  Evision.Mat.t() | {:error, String.t()}

Smoothes an image using the Edge-Preserving filter.

Positional Arguments
  • src: Evision.Mat.

    Source 8-bit 3-channel image.

  • d: integer().

Diameter of each pixel neighborhood that is used during filtering. Must be greater than or equal to 3.

  • threshold: double.

    Threshold, which distinguishes between noise, outliers, and data.

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and type as src.

The function smoothes Gaussian noise as well as salt & pepper noise. For more details about this implementation, please see [ReiWoe18] Reich, S. and Wörgötter, F. and Dellen, B. (2018). A Real-Time Edge-Preserving Denoising Filter. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp, 85-94, 4. DOI: 10.5220/0006509000850094.

Python prototype (for reference only):

edgePreservingFilter(src, d, threshold[, dst]) -> dst
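
A minimal sketch with illustrative values (the input must be 8-bit and 3-channel, and d must be at least 3):

    src = Evision.imread("input.png")
    dst = Evision.XImgProc.edgePreservingFilter(src, 9, 20.0)
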
Link to this function

edgePreservingFilter(src, d, threshold, opts)

View Source
@spec edgePreservingFilter(
  Evision.Mat.maybe_mat_in(),
  integer(),
  number(),
  [{atom(), term()}, ...] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Smoothes an image using the Edge-Preserving filter.

Positional Arguments
  • src: Evision.Mat.

    Source 8-bit 3-channel image.

  • d: integer().

Diameter of each pixel neighborhood that is used during filtering. Must be greater than or equal to 3.

  • threshold: double.

    Threshold, which distinguishes between noise, outliers, and data.

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and type as src.

The function smoothes Gaussian noise as well as salt & pepper noise. For more details about this implementation, please see [ReiWoe18] Reich, S. and Wörgötter, F. and Dellen, B. (2018). A Real-Time Edge-Preserving Denoising Filter. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp, 85-94, 4. DOI: 10.5220/0006509000850094.

Python prototype (for reference only):

edgePreservingFilter(src, d, threshold[, dst]) -> dst
Link to this function

fastBilateralSolverFilter(named_args)

View Source
@spec fastBilateralSolverFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

fastBilateralSolverFilter(guide, src, confidence)

View Source
@spec fastBilateralSolverFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in()
) :: Evision.Mat.t() | {:error, String.t()}

Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide, use the FastBilateralSolverFilter interface to avoid extra computations.

Positional Arguments
  • guide: Evision.Mat.

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

  • src: Evision.Mat.

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

  • confidence: Evision.Mat.

    confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.

Keyword Arguments
  • sigma_spatial: double.

    parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.

  • sigma_luma: double.

    parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.

  • sigma_chroma: double.

    parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.

  • lambda: double.

    smoothness strength parameter for solver.

  • num_iter: integer().

    number of iterations used for solver, 25 is usually enough.

  • max_tol: double.

    convergence tolerance used for solver.

Return
  • dst: Evision.Mat.t().

    destination image.

For more details about the Fast Bilateral Solver parameters, see the original paper @cite BarronPoole2016. Note: confidence images with CV_8U depth are expected to be in the [0, 255] range, and CV_32F ones in the [0, 1] range.

Python prototype (for reference only):

fastBilateralSolverFilter(guide, src, confidence[, dst[, sigma_spatial[, sigma_luma[, sigma_chroma[, lambda[, num_iter[, max_tol]]]]]]]) -> dst
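
A minimal sketch using a uniform full-confidence map; Evision.Mat.shape/1 and Evision.Mat.ones/2 are assumed helpers here, and a CV_32F confidence image of ones means full confidence per the note above:

    guide = Evision.imread("guide.png")
    src = Evision.imread("src.png")
    {h, w, _channels} = Evision.Mat.shape(src)
    conf = Evision.Mat.ones({h, w}, {:f, 32})  # full confidence everywhere
    dst = Evision.XImgProc.fastBilateralSolverFilter(guide, src, conf)
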
Link to this function

fastBilateralSolverFilter(guide, src, confidence, opts)

View Source
@spec fastBilateralSolverFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [
    lambda: term(),
    max_tol: term(),
    num_iter: term(),
    sigma_chroma: term(),
    sigma_luma: term(),
    sigma_spatial: term()
  ]
  | nil
) :: Evision.Mat.t() | {:error, String.t()}

Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide, use the FastBilateralSolverFilter interface to avoid extra computations.

Positional Arguments
  • guide: Evision.Mat.

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

  • src: Evision.Mat.

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

  • confidence: Evision.Mat.

    confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.

Keyword Arguments
  • sigma_spatial: double.

    parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.

  • sigma_luma: double.

    parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.

  • sigma_chroma: double.

    parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.

  • lambda: double.

    smoothness strength parameter for solver.

  • num_iter: integer().

    number of iterations used for solver, 25 is usually enough.

  • max_tol: double.

    convergence tolerance used for solver.

Return
  • dst: Evision.Mat.t().

    destination image.

For more details about the Fast Bilateral Solver parameters, see the original paper @cite BarronPoole2016. Note: confidence images with CV_8U depth are expected to be in the [0, 255] range, and CV_32F ones in the [0, 1] range.

Python prototype (for reference only):

fastBilateralSolverFilter(guide, src, confidence[, dst[, sigma_spatial[, sigma_luma[, sigma_chroma[, lambda[, num_iter[, max_tol]]]]]]]) -> dst
Link to this function

fastGlobalSmootherFilter(named_args)

View Source
@spec fastGlobalSmootherFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

fastGlobalSmootherFilter(guide, src, lambda, sigma_color)

View Source
@spec fastGlobalSmootherFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  number(),
  number()
) :: Evision.Mat.t() | {:error, String.t()}

Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide, use the FastGlobalSmootherFilter interface to avoid extra computations.

Positional Arguments
  • guide: Evision.Mat.

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

  • src: Evision.Mat.

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

  • lambda: double.

    parameter defining the amount of regularization

  • sigma_color: double.

    parameter, that is similar to color space sigma in bilateralFilter.

Keyword Arguments
  • lambda_attenuation: double.

    internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.

  • num_iter: integer().

    number of iterations used for filtering, 3 is usually enough.

Return
  • dst: Evision.Mat.t().

    destination image.

Python prototype (for reference only):

fastGlobalSmootherFilter(guide, src, lambda, sigma_color[, dst[, lambda_attenuation[, num_iter]]]) -> dst
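
A minimal sketch, using the image as its own guide; the lambda and sigma_color values are illustrative:

    src = Evision.imread("input.png")
    dst = Evision.XImgProc.fastGlobalSmootherFilter(src, src, 100.0, 4.0)
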
Link to this function

fastGlobalSmootherFilter(guide, src, lambda, sigma_color, opts)

View Source
@spec fastGlobalSmootherFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  [lambda_attenuation: term(), num_iter: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide, use the FastGlobalSmootherFilter interface to avoid extra computations.

Positional Arguments
  • guide: Evision.Mat.

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

  • src: Evision.Mat.

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

  • lambda: double.

    parameter defining the amount of regularization

  • sigma_color: double.

    parameter, that is similar to color space sigma in bilateralFilter.

Keyword Arguments
  • lambda_attenuation: double.

    internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.

  • num_iter: integer().

    number of iterations used for filtering, 3 is usually enough.

Return
  • dst: Evision.Mat.t().

    destination image.

Python prototype (for reference only):

fastGlobalSmootherFilter(guide, src, lambda, sigma_color[, dst[, lambda_attenuation[, num_iter]]]) -> dst
Link to this function

fastHoughTransform(named_args)

View Source
@spec fastHoughTransform(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

fastHoughTransform(src, dstMatDepth)

View Source
@spec fastHoughTransform(Evision.Mat.maybe_mat_in(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

Calculates the 2D Fast Hough transform of an image.

Positional Arguments
Keyword Arguments
  • angleRange: integer().
  • op: integer().
  • makeSkew: integer().
Return
  • dst: Evision.Mat.t().

The function calculates the fast Hough transform for the full, half or quarter range of angles.

Python prototype (for reference only):

FastHoughTransform(src, dstMatDepth[, dst[, angleRange[, op[, makeSkew]]]]) -> dst
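
A minimal sketch; dstMatDepth takes an OpenCV depth code, passed here as a plain integer (CV_32S == 4, and IMREAD_GRAYSCALE == 0):

    src = Evision.imread("edges.png", flags: 0)
    fht = Evision.XImgProc.fastHoughTransform(src, 4)
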
Link to this function

fastHoughTransform(src, dstMatDepth, opts)

View Source
@spec fastHoughTransform(
  Evision.Mat.maybe_mat_in(),
  integer(),
  [angleRange: term(), makeSkew: term(), op: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Calculates the 2D Fast Hough transform of an image.

Positional Arguments
Keyword Arguments
  • angleRange: integer().
  • op: integer().
  • makeSkew: integer().
Return
  • dst: Evision.Mat.t().

The function calculates the fast Hough transform for the full, half or quarter range of angles.

Python prototype (for reference only):

FastHoughTransform(src, dstMatDepth[, dst[, angleRange[, op[, makeSkew]]]]) -> dst
Link to this function

findEllipses(named_args)

View Source
@spec findEllipses(Keyword.t()) :: any() | {:error, String.t()}
@spec findEllipses(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

Quickly finds ellipses in an image using projective invariant pruning.

Positional Arguments
  • image: Evision.Mat.

input image; can be grayscale or color.

Keyword Arguments
  • scoreThreshold: float.

    float, the threshold of ellipse score.

  • reliabilityThreshold: float.

    float, the threshold of reliability.

  • centerDistanceThreshold: float.

    float, the threshold of center distance.

Return
  • ellipses: Evision.Mat.t().

output vector of found ellipses; each ellipse is encoded as six floats: x, y, a, b, radius, score.

The function detects ellipses in images using projective invariant pruning. For more details about this implementation, please see @cite jia2017fast Jia, Qi et al. (2017). A Fast Ellipse Detector using Projective Invariant Pruning. IEEE Transactions on Image Processing.

Python prototype (for reference only):

findEllipses(image[, ellipses[, scoreThreshold[, reliabilityThreshold[, centerDistanceThreshold]]]]) -> ellipses
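
A minimal sketch; the threshold value is illustrative:

    img = Evision.imread("coins.png")
    ellipses = Evision.XImgProc.findEllipses(img, scoreThreshold: 0.7)
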
Link to this function

findEllipses(image, opts)

View Source
@spec findEllipses(
  Evision.Mat.maybe_mat_in(),
  [
    centerDistanceThreshold: term(),
    reliabilityThreshold: term(),
    scoreThreshold: term()
  ]
  | nil
) :: Evision.Mat.t() | {:error, String.t()}

Quickly finds ellipses in an image using projective invariant pruning.

Positional Arguments
  • image: Evision.Mat.

input image; can be grayscale or color.

Keyword Arguments
  • scoreThreshold: float.

    float, the threshold of ellipse score.

  • reliabilityThreshold: float.

    float, the threshold of reliability.

  • centerDistanceThreshold: float.

    float, the threshold of center distance.

Return
  • ellipses: Evision.Mat.t().

output vector of found ellipses; each ellipse is encoded as six floats: x, y, a, b, radius, score.

The function detects ellipses in images using projective invariant pruning. For more details about this implementation, please see @cite jia2017fast Jia, Qi et al. (2017). A Fast Ellipse Detector using Projective Invariant Pruning. IEEE Transactions on Image Processing.

Python prototype (for reference only):

findEllipses(image[, ellipses[, scoreThreshold[, reliabilityThreshold[, centerDistanceThreshold]]]]) -> ellipses
Link to this function

fourierDescriptor(named_args)

View Source
@spec fourierDescriptor(Keyword.t()) :: any() | {:error, String.t()}
@spec fourierDescriptor(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

Fourier descriptors for planar closed curves

Positional Arguments
Keyword Arguments
  • nbElt: integer().
  • nbFD: integer().
Return
  • dst: Evision.Mat.t().

For more details about this implementation, please see @cite PersoonFu1977

Python prototype (for reference only):

fourierDescriptor(src[, dst[, nbElt[, nbFD]]]) -> dst
Link to this function

fourierDescriptor(src, opts)

View Source
@spec fourierDescriptor(
  Evision.Mat.maybe_mat_in(),
  [nbElt: term(), nbFD: term()] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

Fourier descriptors for planar closed curves

Positional Arguments
Keyword Arguments
  • nbElt: integer().
  • nbFD: integer().
Return
  • dst: Evision.Mat.t().

For more details about this implementation, please see @cite PersoonFu1977

Python prototype (for reference only):

fourierDescriptor(src[, dst[, nbElt[, nbFD]]]) -> dst
Link to this function

getDisparityVis(named_args)

View Source
@spec getDisparityVis(Keyword.t()) :: any() | {:error, String.t()}
@spec getDisparityVis(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

Function for creating a disparity map visualization (clamped CV_8U image)

Positional Arguments
Keyword Arguments
  • scale: double.

    disparity map will be multiplied by this value for visualization

Return
  • dst: Evision.Mat.t().

    output visualization

Python prototype (for reference only):

getDisparityVis(src[, dst[, scale]]) -> dst
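
A minimal sketch, pairing this with readGT (documented below), which yields a CV_16S disparity map scaled by 16; the scale value is illustrative:

    # matching on 0 assumes the ground truth was read successfully
    {0, disp} = Evision.XImgProc.readGT("disp_gt.png")
    vis = Evision.XImgProc.getDisparityVis(disp, scale: 2.0)
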
Link to this function

getDisparityVis(src, opts)

View Source
@spec getDisparityVis(Evision.Mat.maybe_mat_in(), [{:scale, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

Function for creating a disparity map visualization (clamped CV_8U image)

Positional Arguments
Keyword Arguments
  • scale: double.

    disparity map will be multiplied by this value for visualization

Return
  • dst: Evision.Mat.t().

    output visualization

Python prototype (for reference only):

getDisparityVis(src[, dst[, scale]]) -> dst
Link to this function

gradientDericheX(named_args)

View Source
@spec gradientDericheX(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

gradientDericheX(op, alpha, omega)

View Source
@spec gradientDericheX(Evision.Mat.maybe_mat_in(), number(), number()) ::
  Evision.Mat.t() | {:error, String.t()}

Applies the X Deriche filter to an image.

Positional Arguments
Return
  • dst: Evision.Mat.t().

For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf

Python prototype (for reference only):

GradientDericheX(op, alpha, omega[, dst]) -> dst
Link to this function

gradientDericheX(op, alpha, omega, opts)

View Source
@spec gradientDericheX(
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  [{atom(), term()}, ...] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

Applies the X Deriche filter to an image.

Positional Arguments
Return
  • dst: Evision.Mat.t().

For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf

Python prototype (for reference only):

GradientDericheX(op, alpha, omega[, dst]) -> dst
Link to this function

gradientDericheY(named_args)

View Source
@spec gradientDericheY(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

gradientDericheY(op, alpha, omega)

View Source
@spec gradientDericheY(Evision.Mat.maybe_mat_in(), number(), number()) ::
  Evision.Mat.t() | {:error, String.t()}

Applies the Y Deriche filter to an image.

Positional Arguments
Return
  • dst: Evision.Mat.t().

For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf

Python prototype (for reference only):

GradientDericheY(op, alpha, omega[, dst]) -> dst
Link to this function

gradientDericheY(op, alpha, omega, opts)

View Source
@spec gradientDericheY(
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  [{atom(), term()}, ...] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

Applies the Y Deriche filter to an image.

Positional Arguments
Return
  • dst: Evision.Mat.t().

For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf

Python prototype (for reference only):

GradientDericheY(op, alpha, omega[, dst]) -> dst
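
A minimal sketch computing both gradient components; the alpha and omega values are illustrative, not tuned:

    img = Evision.imread("input.png")
    gx = Evision.XImgProc.gradientDericheX(img, 1.0, 0.005)
    gy = Evision.XImgProc.gradientDericheY(img, 1.0, 0.005)
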
Link to this function

guidedFilter(named_args)

View Source
@spec guidedFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

guidedFilter(guide, src, radius, eps)

View Source
@spec guidedFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  number()
) ::
  Evision.Mat.t() | {:error, String.t()}

Simple one-line (Fast) Guided Filter call.

Positional Arguments
  • guide: Evision.Mat.

guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.

  • src: Evision.Mat.

    filtering image with any numbers of channels.

  • radius: integer().

    radius of Guided Filter.

  • eps: double.

regularization term of the Guided Filter. eps^2 is similar to the sigma in the color space of bilateralFilter.

Keyword Arguments
  • dDepth: integer().

    optional depth of the output image.

  • scale: double.

subsample factor of the Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale == 0.5 shrinks the image by 2x inside the filter).

Return
  • dst: Evision.Mat.t().

    output image.

If you have multiple images to filter with the same guide image, use the GuidedFilter interface to avoid extra computations at the initialization stage.

@sa bilateralFilter, dtFilter, amFilter

Python prototype (for reference only):

guidedFilter(guide, src, radius, eps[, dst[, dDepth[, scale]]]) -> dst
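
A minimal sketch; for 8-bit images eps is on a squared-intensity scale, so a value around (0.1 * 255)^2 ≈ 650 is a common starting point (illustrative, not prescriptive):

    guide = Evision.imread("guide.png")
    src = Evision.imread("src.png")
    dst = Evision.XImgProc.guidedFilter(guide, src, 8, 650.0)
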
Link to this function

guidedFilter(guide, src, radius, eps, opts)

View Source
@spec guidedFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  number(),
  [dDepth: term(), scale: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Simple one-line (Fast) Guided Filter call.

Positional Arguments
  • guide: Evision.Mat.

guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.

  • src: Evision.Mat.

    filtering image with any numbers of channels.

  • radius: integer().

    radius of Guided Filter.

  • eps: double.

regularization term of the Guided Filter. eps^2 is similar to the sigma in the color space of bilateralFilter.

Keyword Arguments
  • dDepth: integer().

    optional depth of the output image.

  • scale: double.

subsample factor of the Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale == 0.5 shrinks the image by 2x inside the filter).

Return
  • dst: Evision.Mat.t().

    output image.

If you have multiple images to filter with the same guide image, use the GuidedFilter interface to avoid extra computations at the initialization stage.

@sa bilateralFilter, dtFilter, amFilter

Python prototype (for reference only):

guidedFilter(guide, src, radius, eps[, dst[, dDepth[, scale]]]) -> dst
Link to this function

houghPoint2Line(named_args)

View Source
@spec houghPoint2Line(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

houghPoint2Line(houghPoint, srcImgInfo)

View Source
@spec houghPoint2Line(
  {number(), number()},
  Evision.Mat.maybe_mat_in()
) :: {integer(), integer(), integer(), integer()} | {:error, String.t()}

Calculates the coordinates of the line segment corresponding to a point in Hough space.

Positional Arguments
Keyword Arguments
  • angleRange: integer().
  • makeSkew: integer().
  • rules: integer().
Return
  • retval: Vec4i

@retval [Vec4i] Coordinates of the line segment corresponding to the point in Hough space. @remarks If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. @remarks If the rules parameter is set to RO_WEAK then, for a point that belongs to an incorrect part of the Hough image, the returned line will not intersect the source image. The function calculates the coordinates of the line segment corresponding to the point in Hough space.

Python prototype (for reference only):

HoughPoint2Line(houghPoint, srcImgInfo[, angleRange[, makeSkew[, rules]]]) -> retval
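
A minimal sketch; the Hough-space point is illustrative, and IMREAD_GRAYSCALE == 0 is passed as a plain integer:

    src = Evision.imread("edges.png", flags: 0)
    {x1, y1, x2, y2} = Evision.XImgProc.houghPoint2Line({100, 50}, src)
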
Link to this function

houghPoint2Line(houghPoint, srcImgInfo, opts)

View Source
@spec houghPoint2Line(
  {number(), number()},
  Evision.Mat.maybe_mat_in(),
  [angleRange: term(), makeSkew: term(), rules: term()] | nil
) :: {integer(), integer(), integer(), integer()} | {:error, String.t()}

Calculates the coordinates of the line segment corresponding to a point in Hough space.

Positional Arguments
Keyword Arguments
  • angleRange: integer().
  • makeSkew: integer().
  • rules: integer().
Return
  • retval: Vec4i

@retval [Vec4i] Coordinates of the line segment corresponding to the point in Hough space. @remarks If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. @remarks If the rules parameter is set to RO_WEAK then, for a point that belongs to an incorrect part of the Hough image, the returned line will not intersect the source image. The function calculates the coordinates of the line segment corresponding to the point in Hough space.

Python prototype (for reference only):

HoughPoint2Line(houghPoint, srcImgInfo[, angleRange[, makeSkew[, rules]]]) -> retval
Link to this function

jointBilateralFilter(named_args)

View Source
@spec jointBilateralFilter(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

jointBilateralFilter(joint, src, d, sigmaColor, sigmaSpace)

View Source
@spec jointBilateralFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  number(),
  number()
) :: Evision.Mat.t() | {:error, String.t()}

Applies the joint bilateral filter to an image.

Positional Arguments
  • joint: Evision.Mat.

    Joint 8-bit or floating-point, 1-channel or 3-channel image.

  • src: Evision.Mat.

    Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image.

  • d: integer().

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .

  • sigmaColor: double.

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.

  • sigmaSpace: double.

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .

Keyword Arguments
  • borderType: integer().
Return
  • dst: Evision.Mat.t().

    Destination image of the same size and type as src .

Note: bilateralFilter and jointBilateralFilter use the L1 norm to compute the difference between colors. @sa bilateralFilter, amFilter

Python prototype (for reference only):

jointBilateralFilter(joint, src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) -> dst
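
A minimal sketch; the diameter and sigma values are illustrative, and src must have the same depth as joint:

    joint = Evision.imread("guide.png")
    src = Evision.imread("noisy.png")
    dst = Evision.XImgProc.jointBilateralFilter(joint, src, 9, 25.0, 9.0)
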
Link to this function

jointBilateralFilter(joint, src, d, sigmaColor, sigmaSpace, opts)

View Source
@spec jointBilateralFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  number(),
  number(),
  [{:borderType, term()}] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Applies the joint bilateral filter to an image.

Positional Arguments
  • joint: Evision.Mat.

    Joint 8-bit or floating-point, 1-channel or 3-channel image.

  • src: Evision.Mat.

    Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image.

  • d: integer().

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .

  • sigmaColor: double.

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.

  • sigmaSpace: double.

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .

Keyword Arguments
  • borderType: integer().
Return
  • dst: Evision.Mat.t().

    Destination image of the same size and type as src .

Note: bilateralFilter and jointBilateralFilter use the L1 norm to compute the difference between colors. @sa bilateralFilter, amFilter

Python prototype (for reference only):

jointBilateralFilter(joint, src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) -> dst
@spec l0Smooth(Keyword.t()) :: any() | {:error, String.t()}
@spec l0Smooth(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}

Global image smoothing via L0 gradient minimization.

Positional Arguments
  • src: Evision.Mat.

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.

Keyword Arguments
  • lambda: double.

    parameter defining the smooth term weight.

  • kappa: double.

    parameter defining the increasing factor of the weight of the gradient data term.

Return
  • dst: Evision.Mat.t().

    destination image.

For more details about L0 Smoother, see the original paper @cite xu2011image.

Python prototype (for reference only):

l0Smooth(src[, dst[, lambda[, kappa]]]) -> dst
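
A minimal sketch; the lambda and kappa values below are commonly suggested defaults for this smoother, used here as illustrative values:

    src = Evision.imread("input.png")
    dst = Evision.XImgProc.l0Smooth(src, lambda: 0.02, kappa: 2.0)
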
@spec l0Smooth(Evision.Mat.maybe_mat_in(), [kappa: term(), lambda: term()] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

Global image smoothing via L0 gradient minimization.

Positional Arguments
  • src: Evision.Mat.

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.

Keyword Arguments
  • lambda: double.

    parameter defining the smooth term weight.

  • kappa: double.

    parameter defining the increasing factor of the weight of the gradient data term.

Return
  • dst: Evision.Mat.t().

    destination image.

For more details about L0 Smoother, see the original paper @cite xu2011image.

Python prototype (for reference only):

l0Smooth(src[, dst[, lambda[, kappa]]]) -> dst
Link to this function

niBlackThreshold(named_args)

View Source
@spec niBlackThreshold(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

niBlackThreshold(src, maxValue, type, blockSize, k)

View Source
@spec niBlackThreshold(
  Evision.Mat.maybe_mat_in(),
  number(),
  integer(),
  integer(),
  number()
) ::
  Evision.Mat.t() | {:error, String.t()}

Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.

Positional Arguments
  • src: Evision.Mat.

    Source 8-bit single-channel image.

  • maxValue: double.

    Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.

  • type: integer().

    Thresholding type, see cv::ThresholdTypes.

  • blockSize: integer().

    Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.

  • k: double.

    The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean.

Keyword Arguments
  • binarizationMethod: integer().

    Binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods.

  • r: double.

    The user-adjustable parameter used by Sauvola's technique. This is the dynamic range of standard deviation.

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and the same type as src.

The function transforms a grayscale image to a binary image according to the formulae:

  • THRESH_BINARY: dst(x, y) = maxValue if src(x, y) > T(x, y), and 0 otherwise

  • THRESH_BINARY_INV: dst(x, y) = 0 if src(x, y) > T(x, y), and maxValue otherwise

where T(x, y) is a threshold calculated individually for each pixel. The threshold value T(x, y) is determined based on the binarization method chosen. For classic Niblack, it is the mean minus k times the standard deviation of the blockSize x blockSize neighborhood of (x, y). The function can't process the image in-place. @sa threshold, adaptiveThreshold

Python prototype (for reference only):

niBlackThreshold(_src, maxValue, type, blockSize, k[, _dst[, binarizationMethod[, r]]]) -> _dst
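
A minimal sketch of classic Niblack binarization; THRESH_BINARY == 0 and IMREAD_GRAYSCALE == 0 are passed as plain integers, and k = 0.2 means "mean minus 0.2 times the standard deviation" per the parameter description above:

    gray = Evision.imread("text.png", flags: 0)
    binarized = Evision.XImgProc.niBlackThreshold(gray, 255.0, 0, 25, 0.2)
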
Link to this function

niBlackThreshold(src, maxValue, type, blockSize, k, opts)

View Source
@spec niBlackThreshold(
  Evision.Mat.maybe_mat_in(),
  number(),
  integer(),
  integer(),
  number(),
  [binarizationMethod: term(), r: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.

Positional Arguments
  • src: Evision.Mat.

    Source 8-bit single-channel image.

  • maxValue: double.

    Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.

  • type: integer().

    Thresholding type, see cv::ThresholdTypes.

  • blockSize: integer().

    Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.

  • k: double.

    The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean.

Keyword Arguments
  • binarizationMethod: integer().

    Binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods.

  • r: double.

    The user-adjustable parameter used by Sauvola's technique. This is the dynamic range of standard deviation.

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and the same type as src.

The function transforms a grayscale image to a binary image according to the formulae:

  • THRESH_BINARY: dst(x, y) = maxValue if src(x, y) > T(x, y), and 0 otherwise

  • THRESH_BINARY_INV: dst(x, y) = 0 if src(x, y) > T(x, y), and maxValue otherwise

where T(x, y) is a threshold calculated individually for each pixel. The threshold value T(x, y) is determined based on the binarization method chosen. For classic Niblack, it is the mean minus k times the standard deviation of the blockSize x blockSize neighborhood of (x, y). The function can't process the image in-place. @sa threshold, adaptiveThreshold

Python prototype (for reference only):

niBlackThreshold(_src, maxValue, type, blockSize, k[, _dst[, binarizationMethod[, r]]]) -> _dst
Link to this function

peiLinNormalization(named_args)

View Source
@spec peiLinNormalization(Keyword.t()) :: any() | {:error, String.t()}
@spec peiLinNormalization(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

PeiLinNormalization

Positional Arguments
Return
  • t: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

PeiLinNormalization(I[, T]) -> T
Link to this function

peiLinNormalization(i, opts)

View Source
@spec peiLinNormalization(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

PeiLinNormalization

Positional Arguments
Return
  • t: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

PeiLinNormalization(I[, T]) -> T
@spec qconj(Keyword.t()) :: any() | {:error, String.t()}
@spec qconj(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}

calculates the conjugate of a quaternion image.

Positional Arguments
Return
  • qcimg: Evision.Mat.t().

Python prototype (for reference only):

qconj(qimg[, qcimg]) -> qcimg
@spec qconj(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

calculates the conjugate of a quaternion image.

Positional Arguments
Return
  • qcimg: Evision.Mat.t().

Python prototype (for reference only):

qconj(qimg[, qcimg]) -> qcimg
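
A minimal sketch; createQuaternionImage/1 (listed in this module's summary) is assumed to build the 4-channel quaternion image that qconj and the other quaternion functions expect:

    img = Evision.imread("input.png")
    qimg = Evision.XImgProc.createQuaternionImage(img)
    qc = Evision.XImgProc.qconj(qimg)
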
@spec qdft(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

qdft(img, flags, sideLeft)

View Source
@spec qdft(Evision.Mat.maybe_mat_in(), integer(), boolean()) ::
  Evision.Mat.t() | {:error, String.t()}

Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.

Positional Arguments
Return
  • qimg: Evision.Mat.t().

Python prototype (for reference only):

qdft(img, flags, sideLeft[, qimg]) -> qimg
Link to this function

qdft(img, flags, sideLeft, opts)

View Source
@spec qdft(
  Evision.Mat.maybe_mat_in(),
  integer(),
  boolean(),
  [{atom(), term()}, ...] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.

Positional Arguments
Return
  • qimg: Evision.Mat.t().

Python prototype (for reference only):

qdft(img, flags, sideLeft[, qimg]) -> qimg
@spec qmultiply(Keyword.t()) :: any() | {:error, String.t()}
@spec qmultiply(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

Calculates the per-element quaternion product of two arrays.

Positional Arguments
Return
  • dst: Evision.Mat.t().

Python prototype (for reference only):

qmultiply(src1, src2[, dst]) -> dst
Link to this function

qmultiply(src1, src2, opts)

View Source
@spec qmultiply(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{atom(), term()}, ...] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

Calculates the per-element quaternion product of two arrays.

Positional Arguments
Return
  • dst: Evision.Mat.t().

Python prototype (for reference only):

qmultiply(src1, src2[, dst]) -> dst
@spec qunitary(Keyword.t()) :: any() | {:error, String.t()}
@spec qunitary(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}

divides each element by its modulus.

Positional Arguments
Return
  • qnimg: Evision.Mat.t().

Python prototype (for reference only):

qunitary(qimg[, qnimg]) -> qnimg
@spec qunitary(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

divides each element by its modulus.

Positional Arguments
Return
  • qnimg: Evision.Mat.t().

Python prototype (for reference only):

qunitary(qimg[, qnimg]) -> qnimg
Link to this function

radonTransform(named_args)

View Source
@spec radonTransform(Keyword.t()) :: any() | {:error, String.t()}
@spec radonTransform(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

Calculates the Radon transform of an image.

Positional Arguments
Keyword Arguments
  • theta: double.
  • start_angle: double.
  • end_angle: double.
  • crop: bool.
  • norm: bool.
Return
  • dst: Evision.Mat.t().

This function calculates the Radon transform of a given image in any range. See https://engineering.purdue.edu/~malcolm/pct/CTI_Ch03.pdf for details. If the input type is CV_8U, the output will be CV_32S. If the input type is CV_32F or CV_64F, the output will be CV_64F. The output size will be num_of_integral x src_diagonal_length. If crop is selected, the input image will be cropped first to a square and then to a circle, and the output size will be num_of_integral x min_edge.

Python prototype (for reference only):

RadonTransform(src[, dst[, theta[, start_angle[, end_angle[, crop[, norm]]]]]]) -> dst
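
A minimal sketch on a grayscale input (IMREAD_GRAYSCALE == 0, passed as a plain integer); the angular step is illustrative:

    src = Evision.imread("input.png", flags: 0)
    sinogram = Evision.XImgProc.radonTransform(src, theta: 1.0, norm: true)
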
Link to this function

radonTransform(src, opts)

View Source
@spec radonTransform(
  Evision.Mat.maybe_mat_in(),
  [
    crop: term(),
    end_angle: term(),
    norm: term(),
    start_angle: term(),
    theta: term()
  ]
  | nil
) :: Evision.Mat.t() | {:error, String.t()}

Calculates the Radon transform of an image.

Positional Arguments
Keyword Arguments
  • theta: double.
  • start_angle: double.
  • end_angle: double.
  • crop: bool.
  • norm: bool.
Return
  • dst: Evision.Mat.t().

This function calculates the Radon transform of a given image in any range. See https://engineering.purdue.edu/~malcolm/pct/CTI_Ch03.pdf for details. If the input type is CV_8U, the output will be CV_32S. If the input type is CV_32F or CV_64F, the output will be CV_64F. The output size will be num_of_integral x src_diagonal_length. If crop is selected, the input image will be cropped first to a square and then to a circle, and the output size will be num_of_integral x min_edge.

Python prototype (for reference only):

RadonTransform(src[, dst[, theta[, start_angle[, end_angle[, crop[, norm]]]]]]) -> dst
@spec readGT(Keyword.t()) :: any() | {:error, String.t()}
@spec readGT(binary()) :: {integer(), Evision.Mat.t()} | {:error, String.t()}

Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16.

Positional Arguments
  • src_path: String.

path to the image containing the ground-truth disparity map

Return
  • retval: integer()

  • dst: Evision.Mat.t().

    output disparity map, CV_16S depth

@result returns zero if the ground truth was read successfully

Python prototype (for reference only):

readGT(src_path[, dst]) -> retval, dst
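
A minimal sketch of the return-code convention (zero means the ground truth was read):

    case Evision.XImgProc.readGT("disp_gt.png") do
      {0, gt} -> gt
      other -> raise "failed to read ground truth: #{inspect(other)}"
    end
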
@spec readGT(binary(), [{atom(), term()}, ...] | nil) ::
  {integer(), Evision.Mat.t()} | {:error, String.t()}

Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16.

Positional Arguments
  • src_path: String.

path to the image containing the ground-truth disparity map

Return
  • retval: integer()

  • dst: Evision.Mat.t().

    output disparity map, CV_16S depth

@result Returns zero if the ground truth was read successfully.

Python prototype (for reference only):

readGT(src_path[, dst]) -> retval, dst
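
A sketch of reading a Middlebury-style ground-truth file (the path is a placeholder):

{ret, gt_disp} = Evision.XImgProc.readGT("teddy/disp2.png")
# ret == 0 on success; gt_disp is a CV_16S disparity map scaled by 16
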
rollingGuidanceFilter(named_args)
@spec rollingGuidanceFilter(Keyword.t()) :: any() | {:error, String.t()}
@spec rollingGuidanceFilter(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

Applies the rolling guidance filter to an image.

Positional Arguments
  • src: Evision.Mat.

    Source 8-bit or floating-point, 1-channel or 3-channel image.

Keyword Arguments
  • d: integer().

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace.

  • sigmaColor: double.

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.

  • sigmaSpace: double.

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d > 0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.

  • numOfIter: integer().

    Number of iterations of joint edge-preserving filtering applied on the source image.

  • borderType: integer().

    pixel extrapolation method, see cv::BorderTypes

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and type as src.

For more details, please see @cite zhang2014rolling

Note: rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter. @sa jointBilateralFilter, bilateralFilter, amFilter

Python prototype (for reference only):

rollingGuidanceFilter(src[, dst[, d[, sigmaColor[, sigmaSpace[, numOfIter[, borderType]]]]]]) -> dst
rollingGuidanceFilter(src, opts)
@spec rollingGuidanceFilter(
  Evision.Mat.maybe_mat_in(),
  [
    borderType: term(),
    d: term(),
    numOfIter: term(),
    sigmaColor: term(),
    sigmaSpace: term()
  ]
  | nil
) :: Evision.Mat.t() | {:error, String.t()}

Applies the rolling guidance filter to an image.

Positional Arguments
  • src: Evision.Mat.

    Source 8-bit or floating-point, 1-channel or 3-channel image.

Keyword Arguments
  • d: integer().

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace.

  • sigmaColor: double.

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.

  • sigmaSpace: double.

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d > 0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.

  • numOfIter: integer().

    Number of iterations of joint edge-preserving filtering applied on the source image.

  • borderType: integer().

    pixel extrapolation method, see cv::BorderTypes

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and type as src.

For more details, please see @cite zhang2014rolling

Note: rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter. @sa jointBilateralFilter, bilateralFilter, amFilter

Python prototype (for reference only):

rollingGuidanceFilter(src[, dst[, d[, sigmaColor[, sigmaSpace[, numOfIter[, borderType]]]]]]) -> dst
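
A minimal sketch with a placeholder file name and illustrative parameter values:

src = Evision.imread("photo.jpg")
dst = Evision.XImgProc.rollingGuidanceFilter(src)
dst = Evision.XImgProc.rollingGuidanceFilter(src, d: -1, sigmaColor: 25.0, sigmaSpace: 3.0, numOfIter: 4)
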
thinning(src)

@spec thinning(Keyword.t()) :: any() | {:error, String.t()}
@spec thinning(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}

Applies a binary blob thinning operation to achieve a skeletonization of the input image.

Positional Arguments
  • src: Evision.Mat.

    Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values.

Keyword Arguments
  • thinningType: integer().

    Value that defines which thinning algorithm should be used. See cv::ximgproc::ThinningTypes

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and the same type as src. The function can work in-place.

The function transforms a binary blob image into a skeletonized form using the Zhang-Suen technique.

Python prototype (for reference only):

thinning(src[, dst[, thinningType]]) -> dst
thinning(src, opts)

@spec thinning(Evision.Mat.maybe_mat_in(), [{:thinningType, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

Applies a binary blob thinning operation to achieve a skeletonization of the input image.

Positional Arguments
  • src: Evision.Mat.

    Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values.

Keyword Arguments
  • thinningType: integer().

    Value that defines which thinning algorithm should be used. See cv::ximgproc::ThinningTypes

Return
  • dst: Evision.Mat.t().

    Destination image of the same size and the same type as src. The function can work in-place.

The function transforms a binary blob image into a skeletonized form using the Zhang-Suen technique.

Python prototype (for reference only):

thinning(src[, dst[, thinningType]]) -> dst
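
A sketch assuming a grayscale image that is first thresholded into 0/255 blobs; the thinningType value 1 corresponds to THINNING_GUOHALL in cv::ximgproc::ThinningTypes (0 is the default Zhang-Suen):

gray = Evision.imread("blobs.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
{_thresh, binary} = Evision.threshold(gray, 127, 255, Evision.Constant.cv_THRESH_BINARY())
skeleton = Evision.XImgProc.thinning(binary)
skeleton = Evision.XImgProc.thinning(binary, thinningType: 1)
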
transformFD(src, t)

@spec transformFD(Keyword.t()) :: any() | {:error, String.t()}
@spec transformFD(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

Transform a contour.

Positional Arguments
  • src: Evision.Mat.

    contour, or Fourier descriptors if fdContour is true

  • t: Evision.Mat.

    transform Mat, as produced by ContourFitting's estimateTransformation

Keyword Arguments
  • fdContour: bool.

    if true, src contains Fourier descriptors; if false, src is a contour
Return
  • dst: Evision.Mat.t().

Python prototype (for reference only):

transformFD(src, t[, dst[, fdContour]]) -> dst
transformFD(src, t, opts)
@spec transformFD(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:fdContour, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

Transform a contour.

Positional Arguments
  • src: Evision.Mat.

    contour, or Fourier descriptors if fdContour is true

  • t: Evision.Mat.

    transform Mat, as produced by ContourFitting's estimateTransformation

Keyword Arguments
  • fdContour: bool.

    if true, src contains Fourier descriptors; if false, src is a contour
Return
  • dst: Evision.Mat.t().

Python prototype (for reference only):

transformFD(src, t[, dst[, fdContour]]) -> dst
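
A hedged sketch of the typical flow (the exact return shape of estimateTransformation is assumed here; src_contour and ref_contour are placeholder contours):

fitting = Evision.XImgProc.createContourFitting()
{t, _dist} = Evision.XImgProc.ContourFitting.estimateTransformation(fitting, src_contour, ref_contour)
dst_contour = Evision.XImgProc.transformFD(src_contour, t, fdContour: false)
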
weightedMedianFilter(named_args)
@spec weightedMedianFilter(Keyword.t()) :: any() | {:error, String.t()}
weightedMedianFilter(joint, src, r)
@spec weightedMedianFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer()
) ::
  Evision.Mat.t() | {:error, String.t()}

Applies the weighted median filter to an image.

Positional Arguments
  • joint: Evision.Mat.

    joint (guidance) 8-bit, 1-channel or 3-channel image

  • src: Evision.Mat.

    source 8-bit or floating-point, 1-channel or multi-channel image

  • r: integer().

    radius of the filtering kernel; should be a positive integer

Keyword Arguments
  • sigma: double.

    filter range standard deviation for the joint image

  • weightType: integer().

    type of weight definition, see WMFWeightType

  • mask: Evision.Mat.

    a 0-1 mask of the same size as src; pixels where the mask is 0 are ignored during filtering

Return
  • dst: Evision.Mat.t().

For more details about this implementation, please see @cite zhang2014100+

@sa medianBlur, jointBilateralFilter

Python prototype (for reference only):

weightedMedianFilter(joint, src, r[, dst[, sigma[, weightType[, mask]]]]) -> dst
weightedMedianFilter(joint, src, r, opts)
@spec weightedMedianFilter(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  [mask: term(), sigma: term(), weightType: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Applies the weighted median filter to an image.

Positional Arguments
  • joint: Evision.Mat.

    joint (guidance) 8-bit, 1-channel or 3-channel image

  • src: Evision.Mat.

    source 8-bit or floating-point, 1-channel or multi-channel image

  • r: integer().

    radius of the filtering kernel; should be a positive integer

Keyword Arguments
  • sigma: double.

    filter range standard deviation for the joint image

  • weightType: integer().

    type of weight definition, see WMFWeightType

  • mask: Evision.Mat.

    a 0-1 mask of the same size as src; pixels where the mask is 0 are ignored during filtering

Return
  • dst: Evision.Mat.t().

For more details about this implementation, please see @cite zhang2014100+

@sa medianBlur, jointBilateralFilter

Python prototype (for reference only):

weightedMedianFilter(joint, src, r[, dst[, sigma[, weightType[, mask]]]]) -> dst
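
A minimal sketch with placeholder file names and an illustrative radius:

joint = Evision.imread("guide.png")
src = Evision.imread("input.png")
dst = Evision.XImgProc.weightedMedianFilter(joint, src, 7)
dst = Evision.XImgProc.weightedMedianFilter(joint, src, 7, sigma: 25.5)
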