Evision.XImgProc (Evision v0.2.9)
Summary
Functions
Simple one-line Adaptive Manifold Filter call.
Simple one-line Adaptive Manifold Filter call.
Performs anisotropic diffusion on an image.
Performs anisotropic diffusion on an image.
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter, see @cite Cho2014.
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter, see @cite Cho2014.
Compares a color template against overlapped color image regions.
Compares a color template against overlapped color image regions.
Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold)
Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold)
Function for computing mean square error for disparity maps
Contour sampling.
Contour sampling.
Computes the estimated covariance matrix of an image using the sliding window formulation.
Computes the estimated covariance matrix of an image using the sliding window formulation.
Factory method, create instance of AdaptiveManifoldFilter and produce some initialization routines.
Factory method, create instance of AdaptiveManifoldFilter and produce some initialization routines.
create ContourFitting algorithm object
create ContourFitting algorithm object
Convenience factory method that creates an instance of DisparityWLSFilter and sets up all the relevant filter parameters automatically based on the matcher instance. Currently supports only StereoBM and StereoSGBM.
More generic factory method that creates an instance of DisparityWLSFilter and executes basic initialization routines. When using this method you will need to set up the ROI, matchers, and other parameters yourself.
Factory method, create instance of DTFilter and produce initialization routines.
Factory method, create instance of DTFilter and produce initialization routines.
Factory method that creates an instance of the EdgeAwareInterpolator.
Creates an EdgeBoxes object
Creates an EdgeBoxes object
Creates a smart pointer to an EdgeDrawing object and initializes it
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines.
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines.
Factory method, create instance of FastGlobalSmootherFilter and execute the initialization routines.
Factory method, create instance of FastGlobalSmootherFilter and execute the initialization routines.
Creates a smart pointer to a FastLineDetector object and initializes it
Creates a smart pointer to a FastLineDetector object and initializes it
Creates a graph-based segmentor
Creates a graph-based segmentor
Factory method, create instance of GuidedFilter and produce initialization routines.
Factory method, create instance of GuidedFilter and produce initialization routines.
creates a quaternion image.
creates a quaternion image.
createRFFeatureGetter
Factory method that creates an instance of the RICInterpolator.
Convenience method to set up the matcher for computing the right-view disparity map that is required in case of filtering with confidence.
Initializes a ScanSegment object.
Initializes a ScanSegment object.
Create a new SelectiveSearchSegmentation class.
Create a new color-based strategy
Create a new fill-based strategy
Create a new multiple strategy
Create a new multiple strategy and set one substrategy
Create a new multiple strategy and set two substrategies, with equal weights
Create a new multiple strategy and set three substrategies, with equal weights
Create a new multiple strategy and set four substrategies, with equal weights
Create a new size-based strategy
Create a new texture-based strategy
createStructuredEdgeDetection
createStructuredEdgeDetection
Class implementing the LSC (Linear Spectral Clustering) superpixels
Class implementing the LSC (Linear Spectral Clustering) superpixels
Initializes a SuperpixelSEEDS object.
Initializes a SuperpixelSEEDS object.
Initialize a SuperpixelSLIC object
Initialize a SuperpixelSLIC object
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use DTFilter interface to avoid extra computations on initialization stage.
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use DTFilter interface to avoid extra computations on initialization stage.
Smoothes an image using the Edge-Preserving filter.
Smoothes an image using the Edge-Preserving filter.
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.
Calculates 2D Fast Hough transform of an image.
Calculates 2D Fast Hough transform of an image.
Finds ellipses rapidly in an image using projective invariant pruning.
Finds ellipses rapidly in an image using projective invariant pruning.
Fourier descriptors for planar closed curves
Fourier descriptors for planar closed curves
Function for creating a disparity map visualization (clamped CV_8U image)
Function for creating a disparity map visualization (clamped CV_8U image)
Applies X Deriche filter to an image.
Applies X Deriche filter to an image.
Applies Y Deriche filter to an image.
Applies Y Deriche filter to an image.
Simple one-line (Fast) Guided Filter call.
Simple one-line (Fast) Guided Filter call.
Calculates the coordinates of the line segment corresponding to a point in Hough space.
Calculates the coordinates of the line segment corresponding to a point in Hough space.
Applies the joint bilateral filter to an image.
Applies the joint bilateral filter to an image.
Global image smoothing via L0 gradient minimization.
Global image smoothing via L0 gradient minimization.
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
PeiLinNormalization
PeiLinNormalization
calculates conjugate of a quaternion image.
calculates conjugate of a quaternion image.
Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.
Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.
Calculates the per-element quaternion product of two arrays
Calculates the per-element quaternion product of two arrays
divides each element by its modulus.
divides each element by its modulus.
Calculate Radon Transform of an image.
Calculate Radon Transform of an image.
Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16.
Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16.
Applies the rolling guidance filter to an image.
Applies the rolling guidance filter to an image.
Applies a binary blob thinning operation to achieve skeletonization of the input image.
Applies a binary blob thinning operation to achieve skeletonization of the input image.
transform a contour
transform a contour
Applies weighted median filter to an image.
Applies weighted median filter to an image.
Types
@type t() :: %Evision.XImgProc{ref: reference()}
Type that represents an XImgProc
struct.
ref.
reference()
The underlying erlang resource variable.
Functions
@spec amFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), number(), number() ) :: Evision.Mat.t() | {:error, String.t()}
Simple one-line Adaptive Manifold Filter call.
Positional Arguments
joint:
Evision.Mat
.joint (also called guide) image or array of images with any number of channels.
src:
Evision.Mat
.filtering image with any number of channels.
sigma_s:
double
.spatial standard deviation.
sigma_r:
double
.color space standard deviation; it is similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
adjust_outliers:
bool
.optional, specifies whether or not to perform the outlier adjustment operation ((Eq. 9) in the original paper).
Return
dst:
Evision.Mat.t()
.output image.
Note: Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in the bilateralFilter and dtFilter functions. @sa bilateralFilter, dtFilter, guidedFilter
Python prototype (for reference only):
amFilter(joint, src, sigma_s, sigma_r[, dst[, adjust_outliers]]) -> dst
@spec amFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), number(), number(), [{:adjust_outliers, term()}] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Simple one-line Adaptive Manifold Filter call.
Positional Arguments
joint:
Evision.Mat
.joint (also called guide) image or array of images with any number of channels.
src:
Evision.Mat
.filtering image with any number of channels.
sigma_s:
double
.spatial standard deviation.
sigma_r:
double
.color space standard deviation; it is similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
adjust_outliers:
bool
.optional, specifies whether or not to perform the outlier adjustment operation ((Eq. 9) in the original paper).
Return
dst:
Evision.Mat.t()
.output image.
Note: Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in the bilateralFilter and dtFilter functions. @sa bilateralFilter, dtFilter, guidedFilter
Python prototype (for reference only):
amFilter(joint, src, sigma_s, sigma_r[, dst[, adjust_outliers]]) -> dst
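Example (illustrative Elixir sketch, not part of the upstream OpenCV docs; "guide.png" and "src.png" are hypothetical files and the sigma values are placeholders):
guide = Evision.imread("guide.png")
src = Evision.imread("src.png")
# sigma_r must lie in [0, 1] because the inputs are converted to CV_32F internally
dst = Evision.XImgProc.amFilter(guide, src, 16.0, 0.2, adjust_outliers: true)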
@spec anisotropicDiffusion(Evision.Mat.maybe_mat_in(), number(), number(), integer()) :: Evision.Mat.t() | {:error, String.t()}
Performs anisotropic diffusion on an image.
Positional Arguments
src:
Evision.Mat
.Source image with 3 channels.
alpha:
float
.The amount of time to step forward by on each iteration (normally, it's between 0 and 1).
k:
float
.sensitivity to the edges
niters:
integer()
.The number of iterations
Return
dst:
Evision.Mat.t()
.Destination image of the same size and the same number of channels as src .
The function applies Perona-Malik anisotropic diffusion to an image. This is the solution to the partial differential equation: \f[{\frac {\partial I}{\partial t}}={\mathrm {div}}\left(c(x,y,t)\nabla I\right)=\nabla c\cdot \nabla I+c(x,y,t)\Delta I\f] Suggested functions for c(x,y,t) are: \f[c\left(\|\nabla I\|\right)=e^{{-\left(\|\nabla I\|/K\right)^{2}}}\f] or \f[ c\left(\|\nabla I\|\right)={\frac {1}{1+\left({\frac {\|\nabla I\|}{K}}\right)^{2}}} \f]
Python prototype (for reference only):
anisotropicDiffusion(src, alpha, K, niters[, dst]) -> dst
@spec anisotropicDiffusion( Evision.Mat.maybe_mat_in(), number(), number(), integer(), [{atom(), term()}, ...] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Performs anisotropic diffusion on an image.
Positional Arguments
src:
Evision.Mat
.Source image with 3 channels.
alpha:
float
.The amount of time to step forward by on each iteration (normally, it's between 0 and 1).
k:
float
.sensitivity to the edges
niters:
integer()
.The number of iterations
Return
dst:
Evision.Mat.t()
.Destination image of the same size and the same number of channels as src .
The function applies Perona-Malik anisotropic diffusion to an image. This is the solution to the partial differential equation: \f[{\frac {\partial I}{\partial t}}={\mathrm {div}}\left(c(x,y,t)\nabla I\right)=\nabla c\cdot \nabla I+c(x,y,t)\Delta I\f] Suggested functions for c(x,y,t) are: \f[c\left(\|\nabla I\|\right)=e^{{-\left(\|\nabla I\|/K\right)^{2}}}\f] or \f[ c\left(\|\nabla I\|\right)={\frac {1}{1+\left({\frac {\|\nabla I\|}{K}}\right)^{2}}} \f]
Python prototype (for reference only):
anisotropicDiffusion(src, alpha, K, niters[, dst]) -> dst
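Example (illustrative Elixir sketch; src is assumed to be a 3-channel 8-bit Evision.Mat and the alpha/K/niters values are arbitrary placeholders):
# small time step, edge-sensitivity K, 5 iterations
dst = Evision.XImgProc.anisotropicDiffusion(src, 0.15, 10.0, 5)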
@spec bilateralTextureFilter(Keyword.t()) :: any() | {:error, String.t()}
@spec bilateralTextureFilter(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter, see @cite Cho2014.
Positional Arguments
src:
Evision.Mat
.Source image whose depth is 8-bit UINT or 32-bit FLOAT
Keyword Arguments
fr:
integer()
.Radius of the kernel to be used for filtering. It should be a positive integer.
numIter:
integer()
.Number of iterations of the algorithm. It should be a positive integer.
sigmaAlpha:
double
.Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means sharper transition. When the value is negative, it is automatically calculated.
sigmaAvg:
double
.Range blur parameter for texture blurring. A larger value makes the result more blurred. When the value is negative, it is automatically calculated as described in the paper.
Return
dst:
Evision.Mat.t()
.Destination image of the same size and type as src.
@sa rollingGuidanceFilter, bilateralFilter
Python prototype (for reference only):
bilateralTextureFilter(src[, dst[, fr[, numIter[, sigmaAlpha[, sigmaAvg]]]]]) -> dst
@spec bilateralTextureFilter( Evision.Mat.maybe_mat_in(), [fr: term(), numIter: term(), sigmaAlpha: term(), sigmaAvg: term()] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter, see @cite Cho2014.
Positional Arguments
src:
Evision.Mat
.Source image whose depth is 8-bit UINT or 32-bit FLOAT
Keyword Arguments
fr:
integer()
.Radius of the kernel to be used for filtering. It should be a positive integer.
numIter:
integer()
.Number of iterations of the algorithm. It should be a positive integer.
sigmaAlpha:
double
.Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means sharper transition. When the value is negative, it is automatically calculated.
sigmaAvg:
double
.Range blur parameter for texture blurring. A larger value makes the result more blurred. When the value is negative, it is automatically calculated as described in the paper.
Return
dst:
Evision.Mat.t()
.Destination image of the same size and type as src.
@sa rollingGuidanceFilter, bilateralFilter
Python prototype (for reference only):
bilateralTextureFilter(src[, dst[, fr[, numIter[, sigmaAlpha[, sigmaAvg]]]]]) -> dst
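Example (illustrative Elixir sketch; src is assumed to be an 8-bit or 32-bit float Evision.Mat and the keyword values are placeholders):
dst = Evision.XImgProc.bilateralTextureFilter(src, fr: 3, numIter: 1)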
@spec colorMatchTemplate(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
Compares a color template against overlapped color image regions.
Positional Arguments
- img:
Evision.Mat
- templ:
Evision.Mat
Return
- result:
Evision.Mat.t()
.
Python prototype (for reference only):
colorMatchTemplate(img, templ[, result]) -> result
@spec colorMatchTemplate( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Compares a color template against overlapped color image regions.
Positional Arguments
- img:
Evision.Mat
- templ:
Evision.Mat
Return
- result:
Evision.Mat.t()
.
Python prototype (for reference only):
colorMatchTemplate(img, templ[, result]) -> result
@spec computeBadPixelPercent( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), {number(), number(), number(), number()} ) :: number() | {:error, String.t()}
Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold)
Positional Arguments
gT:
Evision.Mat
.ground truth disparity map
src:
Evision.Mat
.disparity map to evaluate
rOI:
Rect
.region of interest
Keyword Arguments
thresh:
integer()
.threshold used to determine "bad" pixels
Return
- retval:
double
@result returns the percentage of "bad" pixels between GT and src
Python prototype (for reference only):
computeBadPixelPercent(GT, src, ROI[, thresh]) -> retval
@spec computeBadPixelPercent( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), {number(), number(), number(), number()}, [{:thresh, term()}] | nil ) :: number() | {:error, String.t()}
Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold)
Positional Arguments
gT:
Evision.Mat
.ground truth disparity map
src:
Evision.Mat
.disparity map to evaluate
rOI:
Rect
.region of interest
Keyword Arguments
thresh:
integer()
.threshold used to determine "bad" pixels
Return
- retval:
double
@result returns the percentage of "bad" pixels between GT and src
Python prototype (for reference only):
computeBadPixelPercent(GT, src, ROI[, thresh]) -> retval
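Example (illustrative Elixir sketch; gt and disp are hypothetical ground-truth and computed disparity maps on the same scale, the ROI tuple is {x, y, width, height}, and the threshold value is a placeholder):
percent_bad = Evision.XImgProc.computeBadPixelPercent(gt, disp, {0, 0, 640, 480}, thresh: 24)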
@spec computeMSE( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), {number(), number(), number(), number()} ) :: number() | {:error, String.t()}
Function for computing mean square error for disparity maps
Positional Arguments
gT:
Evision.Mat
.ground truth disparity map
src:
Evision.Mat
.disparity map to evaluate
rOI:
Rect
.region of interest
Return
- retval:
double
@result returns mean square error between GT and src
Python prototype (for reference only):
computeMSE(GT, src, ROI) -> retval
@spec contourSampling(Evision.Mat.maybe_mat_in(), integer()) :: Evision.Mat.t() | {:error, String.t()}
Contour sampling.
Positional Arguments
- src:
Evision.Mat
- nbElt:
integer()
Return
- out:
Evision.Mat.t()
.
Python prototype (for reference only):
contourSampling(src, nbElt[, out]) -> out
@spec contourSampling( Evision.Mat.maybe_mat_in(), integer(), [{atom(), term()}, ...] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Contour sampling.
Positional Arguments
- src:
Evision.Mat
- nbElt:
integer()
Return
- out:
Evision.Mat.t()
.
Python prototype (for reference only):
contourSampling(src, nbElt[, out]) -> out
@spec covarianceEstimation(Evision.Mat.maybe_mat_in(), integer(), integer()) :: Evision.Mat.t() | {:error, String.t()}
Computes the estimated covariance matrix of an image using the sliding window formulation.
Positional Arguments
src:
Evision.Mat
.The source image. Input image must be of a complex type.
windowRows:
integer()
.The number of rows in the window.
windowCols:
integer()
.The number of cols in the window. The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom right corner. Each location of the window represents a sample. If the window is the size of the image, then this gives the exact covariance matrix. For all other cases, the sizes of the window will impact the number of samples and the number of elements in the estimated covariance matrix.
Return
dst:
Evision.Mat.t()
.The destination estimated covariance matrix. Output matrix will be size (windowRows*windowCols, windowRows*windowCols).
Python prototype (for reference only):
covarianceEstimation(src, windowRows, windowCols[, dst]) -> dst
@spec covarianceEstimation( Evision.Mat.maybe_mat_in(), integer(), integer(), [{atom(), term()}, ...] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Computes the estimated covariance matrix of an image using the sliding window formulation.
Positional Arguments
src:
Evision.Mat
.The source image. Input image must be of a complex type.
windowRows:
integer()
.The number of rows in the window.
windowCols:
integer()
.The number of cols in the window. The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom right corner. Each location of the window represents a sample. If the window is the size of the image, then this gives the exact covariance matrix. For all other cases, the sizes of the window will impact the number of samples and the number of elements in the estimated covariance matrix.
Return
dst:
Evision.Mat.t()
.The destination estimated covariance matrix. Output matrix will be size (windowRows*windowCols, windowRows*windowCols).
Python prototype (for reference only):
covarianceEstimation(src, windowRows, windowCols[, dst]) -> dst
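Example (illustrative Elixir sketch; complex_src stands for a hypothetical two-channel (complex-valued) Evision.Mat prepared beforehand, and the 7 x 7 window is an arbitrary choice):
cov = Evision.XImgProc.covarianceEstimation(complex_src, 7, 7)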
@spec createAMFilter(number(), number()) :: Evision.XImgProc.AdaptiveManifoldFilter.t() | {:error, String.t()}
Factory method, create instance of AdaptiveManifoldFilter and produce some initialization routines.
Positional Arguments
sigma_s:
double
.spatial standard deviation.
sigma_r:
double
.color space standard deviation; it is similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
adjust_outliers:
bool
.optional, specifies whether or not to perform the outlier adjustment operation ((Eq. 9) in the original paper).
Return
- retval:
Evision.XImgProc.AdaptiveManifoldFilter.t()
For more details about Adaptive Manifold Filter parameters, see the original article @cite Gastal12. Note: Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in the bilateralFilter and dtFilter functions.
Python prototype (for reference only):
createAMFilter(sigma_s, sigma_r[, adjust_outliers]) -> retval
@spec createAMFilter(number(), number(), [{:adjust_outliers, term()}] | nil) :: Evision.XImgProc.AdaptiveManifoldFilter.t() | {:error, String.t()}
Factory method, create instance of AdaptiveManifoldFilter and produce some initialization routines.
Positional Arguments
sigma_s:
double
.spatial standard deviation.
sigma_r:
double
.color space standard deviation; it is similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
adjust_outliers:
bool
.optional, specifies whether or not to perform the outlier adjustment operation ((Eq. 9) in the original paper).
Return
- retval:
Evision.XImgProc.AdaptiveManifoldFilter.t()
For more details about Adaptive Manifold Filter parameters, see the original article @cite Gastal12. Note: Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in the bilateralFilter and dtFilter functions.
Python prototype (for reference only):
createAMFilter(sigma_s, sigma_r[, adjust_outliers]) -> retval
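Example (illustrative Elixir sketch; the subsequent per-image filtering call on the returned instance is an assumption about the generated AdaptiveManifoldFilter module, not something documented on this page):
am = Evision.XImgProc.createAMFilter(16.0, 0.2, adjust_outliers: true)
# the instance can then be reused to filter several images with the same sigmas,
# e.g. Evision.XImgProc.AdaptiveManifoldFilter.filter(am, src)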
@spec createContourFitting() :: Evision.XImgProc.ContourFitting.t() | {:error, String.t()}
create ContourFitting algorithm object
Keyword Arguments
ctr:
integer()
.number of contour points after resampling.
fd:
integer()
.number of Fourier descriptors.
Return
- retval:
Evision.XImgProc.ContourFitting.t()
Python prototype (for reference only):
createContourFitting([, ctr[, fd]]) -> retval
@spec createContourFitting(Keyword.t()) :: any() | {:error, String.t()}
@spec createContourFitting([ctr: term(), fd: term()] | nil) :: Evision.XImgProc.ContourFitting.t() | {:error, String.t()}
create ContourFitting algorithm object
Keyword Arguments
ctr:
integer()
.number of contour points after resampling.
fd:
integer()
.number of Fourier descriptors.
Return
- retval:
Evision.XImgProc.ContourFitting.t()
Python prototype (for reference only):
createContourFitting([, ctr[, fd]]) -> retval
@spec createDisparityWLSFilter(Keyword.t()) :: any() | {:error, String.t()}
@spec createDisparityWLSFilter(Evision.StereoMatcher.t()) :: Evision.XImgProc.DisparityWLSFilter.t() | {:error, String.t()}
Convenience factory method that creates an instance of DisparityWLSFilter and sets up all the relevant filter parameters automatically based on the matcher instance. Currently supports only StereoBM and StereoSGBM.
Positional Arguments
matcher_left:
Evision.StereoMatcher
.stereo matcher instance that will be used with the filter
Return
- retval:
Evision.XImgProc.DisparityWLSFilter.t()
Python prototype (for reference only):
createDisparityWLSFilter(matcher_left) -> retval
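Example (illustrative Elixir sketch; Evision.StereoBM.create/0 is assumed to be available from the core bindings, and running the matchers and the filter on an actual stereo pair is omitted):
left_matcher  = Evision.StereoBM.create()
wls_filter    = Evision.XImgProc.createDisparityWLSFilter(left_matcher)
right_matcher = Evision.XImgProc.createRightMatcher(left_matcher)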
@spec createDisparityWLSFilterGeneric(Keyword.t()) :: any() | {:error, String.t()}
@spec createDisparityWLSFilterGeneric(boolean()) :: Evision.XImgProc.DisparityWLSFilter.t() | {:error, String.t()}
More generic factory method that creates an instance of DisparityWLSFilter and executes basic initialization routines. When using this method you will need to set up the ROI, matchers, and other parameters yourself.
Positional Arguments
use_confidence:
bool
.filtering with confidence requires two disparity maps (for the left and right views) and is approximately two times slower. However, quality is typically significantly better.
Return
- retval:
Evision.XImgProc.DisparityWLSFilter.t()
Python prototype (for reference only):
createDisparityWLSFilterGeneric(use_confidence) -> retval
@spec createDTFilter(Evision.Mat.maybe_mat_in(), number(), number()) :: Evision.XImgProc.DTFilter.t() | {:error, String.t()}
Factory method, create instance of DTFilter and produce initialization routines.
Positional Arguments
guide:
Evision.Mat
.guided image (used to build transformed distance, which describes edge structure of guided image).
sigmaSpatial:
double
.\f${\sigma}_H\f$ parameter in the original article; it's similar to the sigma in the coordinate space of bilateralFilter.
sigmaColor:
double
.\f${\sigma}_r\f$ parameter in the original article; it's similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
mode:
integer()
.one of three modes, DTF_NC, DTF_RF or DTF_IC, which correspond to the three modes for filtering 2D signals described in the article.
numIters:
integer()
.optional number of iterations used for filtering, 3 is quite enough.
Return
- retval:
Evision.XImgProc.DTFilter.t()
For more details about Domain Transform filter parameters, see the original article @cite Gastal11 and Domain Transform filter homepage.
Python prototype (for reference only):
createDTFilter(guide, sigmaSpatial, sigmaColor[, mode[, numIters]]) -> retval
@spec createDTFilter( Evision.Mat.maybe_mat_in(), number(), number(), [mode: term(), numIters: term()] | nil ) :: Evision.XImgProc.DTFilter.t() | {:error, String.t()}
Factory method, create instance of DTFilter and produce initialization routines.
Positional Arguments
guide:
Evision.Mat
.guided image (used to build transformed distance, which describes edge structure of guided image).
sigmaSpatial:
double
.\f${\sigma}_H\f$ parameter in the original article; it's similar to the sigma in the coordinate space of bilateralFilter.
sigmaColor:
double
.\f${\sigma}_r\f$ parameter in the original article; it's similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
mode:
integer()
.one of three modes, DTF_NC, DTF_RF or DTF_IC, which correspond to the three modes for filtering 2D signals described in the article.
numIters:
integer()
.optional number of iterations used for filtering, 3 is quite enough.
Return
- retval:
Evision.XImgProc.DTFilter.t()
For more details about Domain Transform filter parameters, see the original article @cite Gastal11 and Domain Transform filter homepage.
Python prototype (for reference only):
createDTFilter(guide, sigmaSpatial, sigmaColor[, mode[, numIters]]) -> retval
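Example (illustrative Elixir sketch; guide is a hypothetical Evision.Mat and the sigma values are placeholders; the returned DTFilter can then be reused to filter several images sharing the same guide):
dt = Evision.XImgProc.createDTFilter(guide, 10.0, 25.0, numIters: 3)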
@spec createEdgeAwareInterpolator() :: Evision.XImgProc.EdgeAwareInterpolator.t() | {:error, String.t()}
Factory method that creates an instance of the EdgeAwareInterpolator.
Return
- retval:
Evision.XImgProc.EdgeAwareInterpolator.t()
Python prototype (for reference only):
createEdgeAwareInterpolator() -> retval
@spec createEdgeBoxes() :: Evision.XImgProc.EdgeBoxes.t() | {:error, String.t()}
Creates an EdgeBoxes object
Keyword Arguments
alpha:
float
.step size of sliding window search.
beta:
float
.nms threshold for object proposals.
eta:
float
.adaptation rate for nms threshold.
minScore:
float
.min score of boxes to detect.
maxBoxes:
integer()
.max number of boxes to detect.
edgeMinMag:
float
.edge min magnitude. Increase to trade off accuracy for speed.
edgeMergeThr:
float
.edge merge threshold. Increase to trade off accuracy for speed.
clusterMinMag:
float
.cluster min magnitude. Increase to trade off accuracy for speed.
maxAspectRatio:
float
.max aspect ratio of boxes.
minBoxArea:
float
.minimum area of boxes.
gamma:
float
.affinity sensitivity.
kappa:
float
.scale sensitivity.
Return
- retval:
Evision.XImgProc.EdgeBoxes.t()
Python prototype (for reference only):
createEdgeBoxes([, alpha[, beta[, eta[, minScore[, maxBoxes[, edgeMinMag[, edgeMergeThr[, clusterMinMag[, maxAspectRatio[, minBoxArea[, gamma[, kappa]]]]]]]]]]]]) -> retval
@spec createEdgeBoxes(Keyword.t()) :: any() | {:error, String.t()}
@spec createEdgeBoxes( [ alpha: term(), beta: term(), clusterMinMag: term(), edgeMergeThr: term(), edgeMinMag: term(), eta: term(), gamma: term(), kappa: term(), maxAspectRatio: term(), maxBoxes: term(), minBoxArea: term(), minScore: term() ] | nil ) :: Evision.XImgProc.EdgeBoxes.t() | {:error, String.t()}
Creates an EdgeBoxes object
Keyword Arguments
alpha:
float
.step size of sliding window search.
beta:
float
.nms threshold for object proposals.
eta:
float
.adaptation rate for nms threshold.
minScore:
float
.min score of boxes to detect.
maxBoxes:
integer()
.max number of boxes to detect.
edgeMinMag:
float
.edge min magnitude. Increase to trade off accuracy for speed.
edgeMergeThr:
float
.edge merge threshold. Increase to trade off accuracy for speed.
clusterMinMag:
float
.cluster min magnitude. Increase to trade off accuracy for speed.
maxAspectRatio:
float
.max aspect ratio of boxes.
minBoxArea:
float
.minimum area of boxes.
gamma:
float
.affinity sensitivity.
kappa:
float
.scale sensitivity.
Return
- retval:
Evision.XImgProc.EdgeBoxes.t()
Python prototype (for reference only):
createEdgeBoxes([, alpha[, beta[, eta[, minScore[, maxBoxes[, edgeMinMag[, edgeMergeThr[, clusterMinMag[, maxAspectRatio[, minBoxArea[, gamma[, kappa]]]]]]]]]]]]) -> retval
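Example (illustrative Elixir sketch; the keyword values are placeholders, and producing box proposals afterwards additionally requires an edge map, e.g. from StructuredEdgeDetection, which is not shown here):
edge_boxes = Evision.XImgProc.createEdgeBoxes(maxBoxes: 100, minScore: 0.03)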
@spec createEdgeDrawing() :: Evision.XImgProc.EdgeDrawing.t() | {:error, String.t()}
Creates a smart pointer to an EdgeDrawing object and initializes it
Return
- retval:
Evision.XImgProc.EdgeDrawing.t()
Python prototype (for reference only):
createEdgeDrawing() -> retval
createFastBilateralSolverFilter(guide, sigma_spatial, sigma_luma, sigma_chroma)
@spec createFastBilateralSolverFilter( Evision.Mat.maybe_mat_in(), number(), number(), number() ) :: Evision.XImgProc.FastBilateralSolverFilter.t() | {:error, String.t()}
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines.
Positional Arguments
guide:
Evision.Mat
.image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
sigma_spatial:
double
.parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.
sigma_luma:
double
.parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.
sigma_chroma:
double
.parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.
Keyword Arguments
lambda:
double
.smoothness strength parameter for solver.
num_iter:
integer()
.number of iterations used for solver, 25 is usually enough.
max_tol:
double
.convergence tolerance used for solver.
Return
- retval:
Evision.XImgProc.FastBilateralSolverFilter.t()
For more details about the Fast Bilateral Solver parameters, see the original paper @cite BarronPoole2016.
Python prototype (for reference only):
createFastBilateralSolverFilter(guide, sigma_spatial, sigma_luma, sigma_chroma[, lambda[, num_iter[, max_tol]]]) -> retval
createFastBilateralSolverFilter(guide, sigma_spatial, sigma_luma, sigma_chroma, opts)
@spec createFastBilateralSolverFilter( Evision.Mat.maybe_mat_in(), number(), number(), number(), [lambda: term(), max_tol: term(), num_iter: term()] | nil ) :: Evision.XImgProc.FastBilateralSolverFilter.t() | {:error, String.t()}
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines.
Positional Arguments
guide:
Evision.Mat
.image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
sigma_spatial:
double
.parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.
sigma_luma:
double
.parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.
sigma_chroma:
double
.parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.
Keyword Arguments
lambda:
double
.smoothness strength parameter for solver.
num_iter:
integer()
.number of iterations used for solver, 25 is usually enough.
max_tol:
double
.convergence tolerance used for solver.
Return
- retval:
Evision.XImgProc.FastBilateralSolverFilter.t()
For more details about the Fast Bilateral Solver parameters, see the original paper @cite BarronPoole2016.
Python prototype (for reference only):
createFastBilateralSolverFilter(guide, sigma_spatial, sigma_luma, sigma_chroma[, lambda[, num_iter[, max_tol]]]) -> retval
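Example (illustrative Elixir sketch; guide is a hypothetical 8-bit Evision.Mat and the sigma/lambda values are placeholders):
fbs = Evision.XImgProc.createFastBilateralSolverFilter(guide, 8.0, 8.0, 8.0, lambda: 128.0, num_iter: 25)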
@spec createFastGlobalSmootherFilter(Evision.Mat.maybe_mat_in(), number(), number()) :: Evision.XImgProc.FastGlobalSmootherFilter.t() | {:error, String.t()}
Factory method, create instance of FastGlobalSmootherFilter and execute the initialization routines.
Positional Arguments
guide:
Evision.Mat
.image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
lambda:
double
.parameter defining the amount of regularization
sigma_color:
double
.parameter, that is similar to color space sigma in bilateralFilter.
Keyword Arguments
lambda_attenuation:
double
.internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.
num_iter:
integer()
.number of iterations used for filtering, 3 is usually enough.
Return
- retval:
Evision.XImgProc.FastGlobalSmootherFilter.t()
For more details about Fast Global Smoother parameters, see the original paper @cite Min2014. However, please note that there are several differences. Lambda attenuation described in the paper is implemented a bit differently so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, in case of image filtering where source and guide image are the same, authors propose to dynamically update the guide image after each iteration. To maximize the performance this feature was not implemented here.
Python prototype (for reference only):
createFastGlobalSmootherFilter(guide, lambda, sigma_color[, lambda_attenuation[, num_iter]]) -> retval
@spec createFastGlobalSmootherFilter( Evision.Mat.maybe_mat_in(), number(), number(), [lambda_attenuation: term(), num_iter: term()] | nil ) :: Evision.XImgProc.FastGlobalSmootherFilter.t() | {:error, String.t()}
Factory method, create instance of FastGlobalSmootherFilter and execute the initialization routines.
Positional Arguments
guide:
Evision.Mat
.image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
lambda:
double
.parameter defining the amount of regularization
sigma_color:
double
.parameter, that is similar to color space sigma in bilateralFilter.
Keyword Arguments
lambda_attenuation:
double
.internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.
num_iter:
integer()
.number of iterations used for filtering, 3 is usually enough.
Return
- retval:
Evision.XImgProc.FastGlobalSmootherFilter.t()
For more details about Fast Global Smoother parameters, see the original paper @cite Min2014. However, please note that there are several differences. Lambda attenuation described in the paper is implemented a bit differently so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, in case of image filtering where source and guide image are the same, authors propose to dynamically update the guide image after each iteration. To maximize the performance this feature was not implemented here.
Python prototype (for reference only):
createFastGlobalSmootherFilter(guide, lambda, sigma_color[, lambda_attenuation[, num_iter]]) -> retval
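Example (illustrative Elixir sketch; guide is a hypothetical 8-bit Evision.Mat, and note the remark above that sigma_color is on a 0..255 scale):
fgs = Evision.XImgProc.createFastGlobalSmootherFilter(guide, 8000.0, 1.5)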
@spec createFastLineDetector() :: Evision.XImgProc.FastLineDetector.t() | {:error, String.t()}
Creates a smart pointer to a FastLineDetector object and initializes it
Keyword Arguments
length_threshold:
integer()
.Segments shorter than this will be discarded
distance_threshold:
float
.A point farther from a hypothesized line segment than this will be regarded as an outlier
canny_th1:
double
.First threshold for hysteresis procedure in Canny()
canny_th2:
double
.Second threshold for hysteresis procedure in Canny()
canny_aperture_size:
integer()
.Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image.
do_merge:
bool
.If true, incremental merging of segments will be performed
Return
- retval:
Evision.XImgProc.FastLineDetector.t()
Python prototype (for reference only):
createFastLineDetector([, length_threshold[, distance_threshold[, canny_th1[, canny_th2[, canny_aperture_size[, do_merge]]]]]]) -> retval
@spec createFastLineDetector(Keyword.t()) :: any() | {:error, String.t()}
@spec createFastLineDetector( [ canny_aperture_size: term(), canny_th1: term(), canny_th2: term(), distance_threshold: term(), do_merge: term(), length_threshold: term() ] | nil ) :: Evision.XImgProc.FastLineDetector.t() | {:error, String.t()}
Creates a smart pointer to a FastLineDetector object and initializes it
Keyword Arguments
length_threshold:
integer()
.Segments shorter than this will be discarded
distance_threshold:
float
.A point farther from a hypothesized line segment than this will be regarded as an outlier
canny_th1:
double
.First threshold for hysteresis procedure in Canny()
canny_th2:
double
.Second threshold for hysteresis procedure in Canny()
canny_aperture_size:
integer()
.Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image.
do_merge:
bool
.If true, incremental merging of segments will be performed
Return
- retval:
Evision.XImgProc.FastLineDetector.t()
Python prototype (for reference only):
createFastLineDetector([, length_threshold[, distance_threshold[, canny_th1[, canny_th2[, canny_aperture_size[, do_merge]]]]]]) -> retval
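Example (illustrative Elixir sketch; the keyword values are placeholders, and detecting segments on a grayscale image afterwards goes through the returned FastLineDetector instance, which is not documented on this page):
fld = Evision.XImgProc.createFastLineDetector(length_threshold: 10, do_merge: false)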
@spec createGraphSegmentation() :: Evision.XImgProc.GraphSegmentation.t() | {:error, String.t()}
Creates a graph-based segmentor
Keyword Arguments
sigma:
double
.The sigma parameter, used to smooth the image
k:
float
.The k parameter of the algorithm
min_size:
integer()
.The minimum size of segments
Return
- retval:
Evision.XImgProc.GraphSegmentation.t()
Python prototype (for reference only):
createGraphSegmentation([, sigma[, k[, min_size]]]) -> retval
@spec createGraphSegmentation(Keyword.t()) :: any() | {:error, String.t()}
@spec createGraphSegmentation([k: term(), min_size: term(), sigma: term()] | nil) :: Evision.XImgProc.GraphSegmentation.t() | {:error, String.t()}
Creates a graph-based segmentor
Keyword Arguments
sigma:
double
.The sigma parameter, used to smooth the image
k:
float
.The k parameter of the algorithm
min_size:
integer()
.The minimum size of segments
Return
- retval:
Evision.XImgProc.GraphSegmentation.t()
Python prototype (for reference only):
createGraphSegmentation([, sigma[, k[, min_size]]]) -> retval
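Example (illustrative Elixir sketch; the keyword values mirror the usual defaults, and segmenting an image afterwards goes through the returned GraphSegmentation instance):
gs = Evision.XImgProc.createGraphSegmentation(sigma: 0.5, k: 300.0, min_size: 100)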
@spec createGuidedFilter(Evision.Mat.maybe_mat_in(), integer(), number()) :: Evision.XImgProc.GuidedFilter.t() | {:error, String.t()}
Factory method, create instance of GuidedFilter and produce initialization routines.
Positional Arguments
guide:
Evision.Mat
.guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.
radius:
integer()
.radius of Guided Filter.
eps:
double
.regularization term of Guided Filter. \f${eps}^2\f$ is similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
scale:
double
.subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale == 0.5 shrinks the image by 2x inside the filter).
Return
- retval:
Evision.XImgProc.GuidedFilter.t()
For more details about (Fast) Guided Filter parameters, see the original articles @cite Kaiming10 @cite Kaiming15 .
Python prototype (for reference only):
createGuidedFilter(guide, radius, eps[, scale]) -> retval
@spec createGuidedFilter( Evision.Mat.maybe_mat_in(), integer(), number(), [{:scale, term()}] | nil ) :: Evision.XImgProc.GuidedFilter.t() | {:error, String.t()}
Factory method, create instance of GuidedFilter and produce initialization routines.
Positional Arguments
guide:
Evision.Mat
.guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.
radius:
integer()
.radius of Guided Filter.
eps:
double
.regularization term of Guided Filter. \f${eps}^2\f$ is similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
scale:
double
.subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale == 0.5 shrinks the image by 2x inside the filter).
Return
- retval:
Evision.XImgProc.GuidedFilter.t()
For more details about (Fast) Guided Filter parameters, see the original articles @cite Kaiming10 @cite Kaiming15 .
Python prototype (for reference only):
createGuidedFilter(guide, radius, eps[, scale]) -> retval
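Example (illustrative Elixir sketch; guide is a hypothetical Evision.Mat and the radius/eps/scale values are placeholders for an 8-bit guide):
gf = Evision.XImgProc.createGuidedFilter(guide, 8, 100.0, scale: 0.5)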
@spec createQuaternionImage(Keyword.t()) :: any() | {:error, String.t()}
@spec createQuaternionImage(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
creates a quaternion image.
Positional Arguments
- img:
Evision.Mat
Return
- qimg:
Evision.Mat.t()
.
Python prototype (for reference only):
createQuaternionImage(img[, qimg]) -> qimg
@spec createQuaternionImage(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) :: Evision.Mat.t() | {:error, String.t()}
creates a quaternion image.
Positional Arguments
- img:
Evision.Mat
Return
- qimg:
Evision.Mat.t()
.
Python prototype (for reference only):
createQuaternionImage(img[, qimg]) -> qimg
@spec createRFFeatureGetter() :: Evision.XImgProc.RFFeatureGetter.t() | {:error, String.t()}
createRFFeatureGetter
Return
- retval:
Evision.XImgProc.RFFeatureGetter.t()
Python prototype (for reference only):
createRFFeatureGetter() -> retval
@spec createRICInterpolator() :: Evision.XImgProc.RICInterpolator.t() | {:error, String.t()}
Factory method that creates an instance of the RICInterpolator.
Return
- retval:
Evision.XImgProc.RICInterpolator.t()
Python prototype (for reference only):
createRICInterpolator() -> retval
@spec createRightMatcher(Keyword.t()) :: any() | {:error, String.t()}
@spec createRightMatcher(Evision.StereoMatcher.t()) :: Evision.StereoMatcher.t() | {:error, String.t()}
Convenience method to set up the matcher for computing the right-view disparity map that is required in case of filtering with confidence.
Positional Arguments
matcher_left:
Evision.StereoMatcher
.main stereo matcher instance that will be used with the filter
Return
- retval:
Evision.StereoMatcher.t()
Python prototype (for reference only):
createRightMatcher(matcher_left) -> retval
@spec createScanSegment(integer(), integer(), integer()) :: Evision.XImgProc.ScanSegment.t() | {:error, String.t()}
Initializes a ScanSegment object.
Positional Arguments
image_width:
integer()
.Image width.
image_height:
integer()
.Image height.
num_superpixels:
integer()
.Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size). Use getNumberOfSuperpixels() to get the actual number.
Keyword Arguments
slices:
integer()
.Number of processing threads for parallelisation. Setting -1 uses the maximum number of threads. In practice, four threads is enough for smaller images and eight threads for larger ones.
merge_small:
bool
.merge small segments to give the desired number of superpixels. Processing is much faster without merging, but many small segments will be left in the image.
Return
- retval:
Evision.XImgProc.ScanSegment.t()
The function initializes a ScanSegment object for the input image. It stores the parameters of the image: image_width and image_height. It also sets the parameters of the F-DBSCAN superpixel algorithm, which are: num_superpixels, threads, and merge_small.
Python prototype (for reference only):
createScanSegment(image_width, image_height, num_superpixels[, slices[, merge_small]]) -> retval
createScanSegment(image_width, image_height, num_superpixels, opts)
@spec createScanSegment( integer(), integer(), integer(), [merge_small: term(), slices: term()] | nil ) :: Evision.XImgProc.ScanSegment.t() | {:error, String.t()}
Initializes a ScanSegment object.
Positional Arguments
image_width:
integer()
.Image width.
image_height:
integer()
.Image height.
num_superpixels:
integer()
.Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size). Use getNumberOfSuperpixels() to get the actual number.
Keyword Arguments
slices:
integer()
.Number of processing threads for parallelisation. Setting -1 uses the maximum number of threads. In practice, four threads is enough for smaller images and eight threads for larger ones.
merge_small:
bool
.merge small segments to give the desired number of superpixels. Processing is much faster without merging, but many small segments will be left in the image.
Return
- retval:
Evision.XImgProc.ScanSegment.t()
The function initializes a ScanSegment object for the input image. It stores the parameters of the image: image_width and image_height. It also sets the parameters of the F-DBSCAN superpixel algorithm, which are: num_superpixels, threads, and merge_small.
Python prototype (for reference only):
createScanSegment(image_width, image_height, num_superpixels[, slices[, merge_small]]) -> retval
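Example (illustrative Elixir sketch; the 640 x 480 size and the superpixel count are placeholders and should match the image that will later be processed by the ScanSegment instance):
ss = Evision.XImgProc.createScanSegment(640, 480, 400, slices: 8, merge_small: true)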
@spec createSelectiveSearchSegmentation() :: Evision.XImgProc.SelectiveSearchSegmentation.t() | {:error, String.t()}
Create a new SelectiveSearchSegmentation class.
Return
- retval:
Evision.XImgProc.SelectiveSearchSegmentation.t()
Python prototype (for reference only):
createSelectiveSearchSegmentation() -> retval
@spec createSelectiveSearchSegmentationStrategyColor() :: Evision.XImgProc.SelectiveSearchSegmentationStrategyColor.t() | {:error, String.t()}
Create a new color-based strategy
Return
- retval:
Evision.XImgProc.SelectiveSearchSegmentationStrategyColor.t()
Python prototype (for reference only):
createSelectiveSearchSegmentationStrategyColor() -> retval
@spec createSelectiveSearchSegmentationStrategyFill() :: Evision.XImgProc.SelectiveSearchSegmentationStrategyFill.t() | {:error, String.t()}
Create a new fill-based strategy
Return
- retval:
Evision.XImgProc.SelectiveSearchSegmentationStrategyFill.t()
Python prototype (for reference only):
createSelectiveSearchSegmentationStrategyFill() -> retval
@spec createSelectiveSearchSegmentationStrategyMultiple() :: Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t() | {:error, String.t()}
Create a new multiple strategy
Return
- retval:
Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()
Python prototype (for reference only):
createSelectiveSearchSegmentationStrategyMultiple() -> retval
@spec createSelectiveSearchSegmentationStrategyMultiple(Keyword.t()) :: any() | {:error, String.t()}
@spec createSelectiveSearchSegmentationStrategyMultiple( Evision.XImgProc.SelectiveSearchSegmentationStrategy.t() ) :: Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t() | {:error, String.t()}
Create a new multiple strategy and set one substrategy
Positional Arguments
s1:
Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
.The first strategy
Return
- retval:
Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()
Python prototype (for reference only):
createSelectiveSearchSegmentationStrategyMultiple(s1) -> retval
@spec createSelectiveSearchSegmentationStrategyMultiple( Evision.XImgProc.SelectiveSearchSegmentationStrategy.t(), Evision.XImgProc.SelectiveSearchSegmentationStrategy.t() ) :: Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t() | {:error, String.t()}
Create a new multiple strategy and set two substrategies, with equal weights
Positional Arguments
s1:
Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
.The first strategy
s2:
Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
.The second strategy
Return
- retval:
Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()
Python prototype (for reference only):
createSelectiveSearchSegmentationStrategyMultiple(s1, s2) -> retval
@spec createSelectiveSearchSegmentationStrategyMultiple( Evision.XImgProc.SelectiveSearchSegmentationStrategy.t(), Evision.XImgProc.SelectiveSearchSegmentationStrategy.t(), Evision.XImgProc.SelectiveSearchSegmentationStrategy.t() ) :: Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t() | {:error, String.t()}
Create a new multiple strategy and set three substrategies, with equal weights
Positional Arguments
s1:
Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
.The first strategy
s2:
Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
.The second strategy
s3:
Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
.The third strategy
Return
- retval:
Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()
Python prototype (for reference only):
createSelectiveSearchSegmentationStrategyMultiple(s1, s2, s3) -> retval
@spec createSelectiveSearchSegmentationStrategyMultiple( Evision.XImgProc.SelectiveSearchSegmentationStrategy.t(), Evision.XImgProc.SelectiveSearchSegmentationStrategy.t(), Evision.XImgProc.SelectiveSearchSegmentationStrategy.t(), Evision.XImgProc.SelectiveSearchSegmentationStrategy.t() ) :: Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t() | {:error, String.t()}
Create a new multiple strategy and set four substrategies, with equal weights
Positional Arguments
s1:
Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
.The first strategy
s2:
Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
.The second strategy
s3:
Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
.The third strategy
s4:
Evision.XImgProc.SelectiveSearchSegmentationStrategy.t()
.The fourth strategy
Return
- retval:
Evision.XImgProc.SelectiveSearchSegmentationStrategyMultiple.t()
Python prototype (for reference only):
createSelectiveSearchSegmentationStrategyMultiple(s1, s2, s3, s4) -> retval
@spec createSelectiveSearchSegmentationStrategySize() :: Evision.XImgProc.SelectiveSearchSegmentationStrategySize.t() | {:error, String.t()}
Create a new size-based strategy
Return
- retval:
Evision.XImgProc.SelectiveSearchSegmentationStrategySize.t()
Python prototype (for reference only):
createSelectiveSearchSegmentationStrategySize() -> retval
@spec createSelectiveSearchSegmentationStrategyTexture() :: Evision.XImgProc.SelectiveSearchSegmentationStrategyTexture.t() | {:error, String.t()}
Create a new texture-based strategy
Return
- retval:
Evision.XImgProc.SelectiveSearchSegmentationStrategyTexture.t()
Python prototype (for reference only):
createSelectiveSearchSegmentationStrategyTexture() -> retval
@spec createStructuredEdgeDetection(Keyword.t()) :: any() | {:error, String.t()}
@spec createStructuredEdgeDetection(binary()) :: Evision.XImgProc.StructuredEdgeDetection.t() | {:error, String.t()}
createStructuredEdgeDetection
Positional Arguments
- model:
String
Keyword Arguments
- howToGetFeatures:
Evision.XImgProc.RFFeatureGetter.t()
.
Return
- retval:
Evision.XImgProc.StructuredEdgeDetection.t()
Python prototype (for reference only):
createStructuredEdgeDetection(model[, howToGetFeatures]) -> retval
@spec createStructuredEdgeDetection(binary(), [{:howToGetFeatures, term()}] | nil) :: Evision.XImgProc.StructuredEdgeDetection.t() | {:error, String.t()}
createStructuredEdgeDetection
Positional Arguments
- model:
String
Keyword Arguments
- howToGetFeatures:
Evision.XImgProc.RFFeatureGetter.t()
.
Return
- retval:
Evision.XImgProc.StructuredEdgeDetection.t()
Python prototype (for reference only):
createStructuredEdgeDetection(model[, howToGetFeatures]) -> retval
@spec createSuperpixelLSC(Keyword.t()) :: any() | {:error, String.t()}
@spec createSuperpixelLSC(Evision.Mat.maybe_mat_in()) :: Evision.XImgProc.SuperpixelLSC.t() | {:error, String.t()}
Class implementing the LSC (Linear Spectral Clustering) superpixels
Positional Arguments
image:
Evision.Mat
.Image to segment
Keyword Arguments
region_size:
integer()
.Chooses an average superpixel size measured in pixels
ratio:
float
.Chooses the enforcement of the superpixel compactness factor
Return
- retval:
Evision.XImgProc.SuperpixelLSC.t()
The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into the CIELAB color space.
Python prototype (for reference only):
createSuperpixelLSC(image[, region_size[, ratio]]) -> retval
@spec createSuperpixelLSC( Evision.Mat.maybe_mat_in(), [ratio: term(), region_size: term()] | nil ) :: Evision.XImgProc.SuperpixelLSC.t() | {:error, String.t()}
Class implementing the LSC (Linear Spectral Clustering) superpixels
Positional Arguments
image:
Evision.Mat
.Image to segment
Keyword Arguments
region_size:
integer()
.Chooses an average superpixel size measured in pixels
ratio:
float
.Chooses the enforcement of the superpixel compactness factor
Return
- retval:
Evision.XImgProc.SuperpixelLSC.t()
The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into the CIELAB color space.
Python prototype (for reference only):
createSuperpixelLSC(image[, region_size[, ratio]]) -> retval
createSuperpixelSEEDS(image_width, image_height, image_channels, num_superpixels, num_levels)
@spec createSuperpixelSEEDS(integer(), integer(), integer(), integer(), integer()) :: Evision.XImgProc.SuperpixelSEEDS.t() | {:error, String.t()}
Initializes a SuperpixelSEEDS object.
Positional Arguments
image_width:
integer()
.Image width.
image_height:
integer()
.Image height.
image_channels:
integer()
.Number of channels of the image.
num_superpixels:
integer()
.Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.
num_levels:
integer()
.Number of block levels. The more levels, the more accurate is the segmentation, but needs more memory and CPU time.
Keyword Arguments
prior:
integer()
.enable 3x3 shape smoothing term if >0. A larger value leads to smoother shapes. prior must be in the range [0, 5].
histogram_bins:
integer()
.Number of histogram bins.
double_step:
bool
.If true, iterate each block level twice for higher accuracy.
Return
- retval:
Evision.XImgProc.SuperpixelSEEDS.t()
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step. The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid, in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure.
Python prototype (for reference only):
createSuperpixelSEEDS(image_width, image_height, image_channels, num_superpixels, num_levels[, prior[, histogram_bins[, double_step]]]) -> retval
createSuperpixelSEEDS(image_width, image_height, image_channels, num_superpixels, num_levels, opts)
@spec createSuperpixelSEEDS( integer(), integer(), integer(), integer(), integer(), [double_step: term(), histogram_bins: term(), prior: term()] | nil ) :: Evision.XImgProc.SuperpixelSEEDS.t() | {:error, String.t()}
Initializes a SuperpixelSEEDS object.
Positional Arguments
image_width:
integer()
.Image width.
image_height:
integer()
.Image height.
image_channels:
integer()
.Number of channels of the image.
num_superpixels:
integer()
.Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.
num_levels:
integer()
.Number of block levels. The more levels, the more accurate is the segmentation, but needs more memory and CPU time.
Keyword Arguments
prior:
integer()
.enable 3x3 shape smoothing term if >0. A larger value leads to smoother shapes. prior must be in the range [0, 5].
histogram_bins:
integer()
.Number of histogram bins.
double_step:
bool
.If true, iterate each block level twice for higher accuracy.
Return
- retval:
Evision.XImgProc.SuperpixelSEEDS.t()
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step. The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid, in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure.
Python prototype (for reference only):
createSuperpixelSEEDS(image_width, image_height, image_channels, num_superpixels, num_levels[, prior[, histogram_bins[, double_step]]]) -> retval
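Elixir usage sketch (illustrative only; img is assumed to be a 640 x 480 3-channel Evision.Mat loaded elsewhere, and the Evision.XImgProc.SuperpixelSEEDS.iterate, getNumberOfSuperpixels and getLabels helpers are assumed to mirror the OpenCV class API):
# The SEEDS object is created from the image geometry only; the pixel data is
# passed later to iterate. All parameter values are arbitrary examples.
seeds = Evision.XImgProc.createSuperpixelSEEDS(640, 480, 3, 400, 4, prior: 2, histogram_bins: 5)
Evision.XImgProc.SuperpixelSEEDS.iterate(seeds, img, num_iterations: 4)
n = Evision.XImgProc.SuperpixelSEEDS.getNumberOfSuperpixels(seeds)
labels = Evision.XImgProc.SuperpixelSEEDS.getLabels(seeds)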
@spec createSuperpixelSLIC(Keyword.t()) :: any() | {:error, String.t()}
@spec createSuperpixelSLIC(Evision.Mat.maybe_mat_in()) :: Evision.XImgProc.SuperpixelSLIC.t() | {:error, String.t()}
Initialize a SuperpixelSLIC object
Positional Arguments
image:
Evision.Mat
.Image to segment
Keyword Arguments
algorithm:
integer()
.Chooses the algorithm variant to use: SLIC segments the image using a desired region_size; SLICO additionally optimizes using an adaptive compactness factor, while MSLIC optimizes using manifold methods, resulting in more content-sensitive superpixels.
region_size:
integer()
.Chooses an average superpixel size measured in pixels
ruler:
float
.Chooses the enforcement of superpixel smoothness factor of superpixel
Return
- retval:
Evision.XImgProc.SuperpixelSLIC.t()
The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results on color images, it is recommended to preprocess the image with a light Gaussian blur using a small 3 x 3 kernel and an additional conversion into CIELAB color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.
Python prototype (for reference only):
createSuperpixelSLIC(image[, algorithm[, region_size[, ruler]]]) -> retval
@spec createSuperpixelSLIC( Evision.Mat.maybe_mat_in(), [algorithm: term(), region_size: term(), ruler: term()] | nil ) :: Evision.XImgProc.SuperpixelSLIC.t() | {:error, String.t()}
Initialize a SuperpixelSLIC object
Positional Arguments
image:
Evision.Mat
.Image to segment
Keyword Arguments
algorithm:
integer()
.Chooses the algorithm variant to use: SLIC segments the image using a desired region_size; SLICO additionally optimizes using an adaptive compactness factor, while MSLIC optimizes using manifold methods, resulting in more content-sensitive superpixels.
region_size:
integer()
.Chooses an average superpixel size measured in pixels
ruler:
float
.Chooses the enforcement of superpixel smoothness factor of superpixel
Return
- retval:
Evision.XImgProc.SuperpixelSLIC.t()
The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results on color images, it is recommended to preprocess the image with a light Gaussian blur using a small 3 x 3 kernel and an additional conversion into CIELAB color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.
Python prototype (for reference only):
createSuperpixelSLIC(image[, algorithm[, region_size[, ruler]]]) -> retval
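Elixir usage sketch (illustrative only; the image path, the Evision.Constant.cv_SLICO/0 and cv_COLOR_BGR2Lab/0 constant names, and the SuperpixelSLIC.iterate/1 and getLabelContourMask/1 helpers are assumptions):
# Preprocess as recommended above, run the SLICO variant, then fetch a contour
# mask that can be overlaid on the original image for inspection.
lab =
  Evision.imread("input.png")
  |> Evision.gaussianBlur({3, 3}, 0)
  |> Evision.cvtColor(Evision.Constant.cv_COLOR_BGR2Lab())
slic =
  Evision.XImgProc.createSuperpixelSLIC(lab,
    algorithm: Evision.Constant.cv_SLICO(), region_size: 20, ruler: 10.0)
Evision.XImgProc.SuperpixelSLIC.iterate(slic)
mask = Evision.XImgProc.SuperpixelSLIC.getLabelContourMask(slic)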
@spec dtFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), number(), number() ) :: Evision.Mat.t() | {:error, String.t()}
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guide image, use the DTFilter interface to avoid extra computations on the initialization stage.
Positional Arguments
guide:
Evision.Mat
.guided image (also called as joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.
src:
Evision.Mat
.filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.
sigmaSpatial:
double
.\f${\sigma}_H\f$ parameter in the original article; it is similar to the sigma in the coordinate space of bilateralFilter.
sigmaColor:
double
.\f${\sigma}_r\f$ parameter in the original article; it is similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
mode:
integer()
.one of three modes, DTF_NC, DTF_RF or DTF_IC, which correspond to the three modes for filtering 2D signals in the article.
numIters:
integer()
.optional number of iterations used for filtering, 3 is quite enough.
Return
dst:
Evision.Mat.t()
.destination image
@sa bilateralFilter, guidedFilter, amFilter
Python prototype (for reference only):
dtFilter(guide, src, sigmaSpatial, sigmaColor[, dst[, mode[, numIters]]]) -> dst
@spec dtFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), number(), number(), [mode: term(), numIters: term()] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guide image, use the DTFilter interface to avoid extra computations on the initialization stage.
Positional Arguments
guide:
Evision.Mat
.guided image (also called as joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.
src:
Evision.Mat
.filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.
sigmaSpatial:
double
.\f${\sigma}_H\f$ parameter in the original article; it is similar to the sigma in the coordinate space of bilateralFilter.
sigmaColor:
double
.\f${\sigma}_r\f$ parameter in the original article; it is similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
mode:
integer()
.one of three modes, DTF_NC, DTF_RF or DTF_IC, which correspond to the three modes for filtering 2D signals in the article.
numIters:
integer()
.optional number of iterations used for filtering, 3 is quite enough.
Return
dst:
Evision.Mat.t()
.destination image
@sa bilateralFilter, guidedFilter, amFilter
Python prototype (for reference only):
dtFilter(guide, src, sigmaSpatial, sigmaColor[, dst[, mode[, numIters]]]) -> dst
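Elixir usage sketch (illustrative only; the image path and the sigma values are placeholders):
# Self-guided edge-preserving smoothing: the image serves as its own guide.
img = Evision.imread("input.png")
smoothed = Evision.XImgProc.dtFilter(img, img, 10.0, 30.0, numIters: 3)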
@spec edgePreservingFilter(Evision.Mat.maybe_mat_in(), integer(), number()) :: Evision.Mat.t() | {:error, String.t()}
Smoothes an image using the Edge-Preserving filter.
Positional Arguments
src:
Evision.Mat
.Source 8-bit 3-channel image.
d:
integer()
.Diameter of each pixel neighborhood that is used during filtering. Must be greater than or equal to 3.
threshold:
double
.Threshold, which distinguishes between noise, outliers, and data.
Return
dst:
Evision.Mat.t()
.Destination image of the same size and type as src.
The function smoothes Gaussian noise as well as salt & pepper noise. For more details about this implementation, please see [ReiWoe18] Reich, S. and Wörgötter, F. and Dellen, B. (2018). A Real-Time Edge-Preserving Denoising Filter. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp, 85-94, 4. DOI: 10.5220/0006509000850094.
Python prototype (for reference only):
edgePreservingFilter(src, d, threshold[, dst]) -> dst
@spec edgePreservingFilter( Evision.Mat.maybe_mat_in(), integer(), number(), [{atom(), term()}, ...] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Smoothes an image using the Edge-Preserving filter.
Positional Arguments
src:
Evision.Mat
.Source 8-bit 3-channel image.
d:
integer()
.Diameter of each pixel neighborhood that is used during filtering. Must be greater than or equal to 3.
threshold:
double
.Threshold, which distinguishes between noise, outliers, and data.
Return
dst:
Evision.Mat.t()
.Destination image of the same size and type as src.
The function smoothes Gaussian noise as well as salt & pepper noise. For more details about this implementation, please see [ReiWoe18] Reich, S. and Wörgötter, F. and Dellen, B. (2018). A Real-Time Edge-Preserving Denoising Filter. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp, 85-94, 4. DOI: 10.5220/0006509000850094.
Python prototype (for reference only):
edgePreservingFilter(src, d, threshold[, dst]) -> dst
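Elixir usage sketch (illustrative only; the image path and parameter values are placeholders):
# Denoise an 8-bit 3-channel image with a 9-pixel neighborhood and threshold 20.
src = Evision.imread("noisy.png")
dst = Evision.XImgProc.edgePreservingFilter(src, 9, 20.0)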
@spec fastBilateralSolverFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in() ) :: Evision.Mat.t() | {:error, String.t()}
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide, use the FastBilateralSolverFilter interface to avoid extra computations.
Positional Arguments
guide:
Evision.Mat
.image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
src:
Evision.Mat
.source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.
confidence:
Evision.Mat
.confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.
Keyword Arguments
sigma_spatial:
double
.parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.
sigma_luma:
double
.parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.
sigma_chroma:
double
.parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.
lambda:
double
.smoothness strength parameter for solver.
num_iter:
integer()
.number of iterations used for solver, 25 is usually enough.
max_tol:
double
.convergence tolerance used for solver.
Return
dst:
Evision.Mat.t()
.destination image.
For more details about the Fast Bilateral Solver parameters, see the original paper @cite BarronPoole2016. Note: Confidence images with CV_8U depth are expected to be in the [0, 255] range and CV_32F in the [0, 1] range.
Python prototype (for reference only):
fastBilateralSolverFilter(guide, src, confidence[, dst[, sigma_spatial[, sigma_luma[, sigma_chroma[, lambda[, num_iter[, max_tol]]]]]]]) -> dst
@spec fastBilateralSolverFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), [ lambda: term(), max_tol: term(), num_iter: term(), sigma_chroma: term(), sigma_luma: term(), sigma_spatial: term() ] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide, use the FastBilateralSolverFilter interface to avoid extra computations.
Positional Arguments
guide:
Evision.Mat
.image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
src:
Evision.Mat
.source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.
confidence:
Evision.Mat
.confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.
Keyword Arguments
sigma_spatial:
double
.parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter.
sigma_luma:
double
.parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter.
sigma_chroma:
double
.parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter.
lambda:
double
.smoothness strength parameter for solver.
num_iter:
integer()
.number of iterations used for solver, 25 is usually enough.
max_tol:
double
.convergence tolerance used for solver.
Return
dst:
Evision.Mat.t()
.destination image.
For more details about the Fast Bilateral Solver parameters, see the original paper @cite BarronPoole2016. Note: Confidence images with CV_8U depth are expected to be in the [0, 255] range and CV_32F in the [0, 1] range.
Python prototype (for reference only):
fastBilateralSolverFilter(guide, src, confidence[, dst[, sigma_spatial[, sigma_luma[, sigma_chroma[, lambda[, num_iter[, max_tol]]]]]]]) -> dst
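Elixir usage sketch (illustrative only; guide, src and confidence are placeholder Evision.Mat values prepared elsewhere, e.g. an 8-bit color guide, the image to be filtered, and a single-channel CV_8U confidence map):
# One-shot solve; the sigma and lambda values are arbitrary examples.
dst =
  Evision.XImgProc.fastBilateralSolverFilter(guide, src, confidence,
    sigma_spatial: 8.0, sigma_luma: 8.0, sigma_chroma: 8.0, lambda: 128.0)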
@spec fastGlobalSmootherFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), number(), number() ) :: Evision.Mat.t() | {:error, String.t()}
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide, use the FastGlobalSmootherFilter interface to avoid extra computations.
Positional Arguments
guide:
Evision.Mat
.image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
src:
Evision.Mat
.source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.
lambda:
double
.parameter defining the amount of regularization
sigma_color:
double
.parameter, that is similar to color space sigma in bilateralFilter.
Keyword Arguments
lambda_attenuation:
double
.internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.
num_iter:
integer()
.number of iterations used for filtering, 3 is usually enough.
Return
dst:
Evision.Mat.t()
.destination image.
Python prototype (for reference only):
fastGlobalSmootherFilter(guide, src, lambda, sigma_color[, dst[, lambda_attenuation[, num_iter]]]) -> dst
@spec fastGlobalSmootherFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), number(), number(), [lambda_attenuation: term(), num_iter: term()] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide, use the FastGlobalSmootherFilter interface to avoid extra computations.
Positional Arguments
guide:
Evision.Mat
.image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
src:
Evision.Mat
.source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.
lambda:
double
.parameter defining the amount of regularization
sigma_color:
double
.parameter, that is similar to color space sigma in bilateralFilter.
Keyword Arguments
lambda_attenuation:
double
.internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.
num_iter:
integer()
.number of iterations used for filtering, 3 is usually enough.
Return
dst:
Evision.Mat.t()
.destination image.
Python prototype (for reference only):
fastGlobalSmootherFilter(guide, src, lambda, sigma_color[, dst[, lambda_attenuation[, num_iter]]]) -> dst
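Elixir usage sketch (illustrative only; the image path and parameter values are placeholders):
# Self-guided global smoothing; lambda sets the regularization strength and
# sigma_color the edge sensitivity.
img = Evision.imread("input.png")
dst = Evision.XImgProc.fastGlobalSmootherFilter(img, img, 100.0, 10.0, num_iter: 3)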
@spec fastHoughTransform(Evision.Mat.maybe_mat_in(), integer()) :: Evision.Mat.t() | {:error, String.t()}
Calculates 2D Fast Hough transform of an image.
Positional Arguments
- src:
Evision.Mat
- dstMatDepth:
integer()
Keyword Arguments
- angleRange:
integer()
. - op:
integer()
. - makeSkew:
integer()
.
Return
- dst:
Evision.Mat.t()
.
The function calculates the fast Hough transform for full, half or quarter range of angles.
Python prototype (for reference only):
FastHoughTransform(src, dstMatDepth[, dst[, angleRange[, op[, makeSkew]]]]) -> dst
@spec fastHoughTransform( Evision.Mat.maybe_mat_in(), integer(), [angleRange: term(), makeSkew: term(), op: term()] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Calculates 2D Fast Hough transform of an image.
Positional Arguments
- src:
Evision.Mat
- dstMatDepth:
integer()
Keyword Arguments
- angleRange:
integer()
. - op:
integer()
. - makeSkew:
integer()
.
Return
- dst:
Evision.Mat.t()
.
The function calculates the fast Hough transform for full, half or quarter range of angles.
Python prototype (for reference only):
FastHoughTransform(src, dstMatDepth[, dst[, angleRange[, op[, makeSkew]]]]) -> dst
@spec findEllipses(Keyword.t()) :: any() | {:error, String.t()}
@spec findEllipses(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
Quickly finds ellipses in an image using projective invariant pruning.
Positional Arguments
image:
Evision.Mat
.input image, could be gray or color.
Keyword Arguments
scoreThreshold:
float
.float, the threshold of ellipse score.
reliabilityThreshold:
float
.float, the threshold of reliability.
centerDistanceThreshold:
float
.float, the threshold of center distance.
Return
ellipses:
Evision.Mat.t()
.output vector of found ellipses. Each ellipse is encoded as the floats $x, y, a, b, radius, score$.
The function detects ellipses in images using projective invariant pruning. For more details about this implementation, please see @cite jia2017fast Jia, Qi et al, (2017). A Fast Ellipse Detector using Projective Invariant Pruning. IEEE Transactions on Image Processing.
Python prototype (for reference only):
findEllipses(image[, ellipses[, scoreThreshold[, reliabilityThreshold[, centerDistanceThreshold]]]]) -> ellipses
@spec findEllipses( Evision.Mat.maybe_mat_in(), [ centerDistanceThreshold: term(), reliabilityThreshold: term(), scoreThreshold: term() ] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Quickly finds ellipses in an image using projective invariant pruning.
Positional Arguments
image:
Evision.Mat
.input image, could be gray or color.
Keyword Arguments
scoreThreshold:
float
.float, the threshold of ellipse score.
reliabilityThreshold:
float
.float, the threshold of reliability.
centerDistanceThreshold:
float
.float, the threshold of center distance.
Return
ellipses:
Evision.Mat.t()
.output vector of found ellipses. Each ellipse is encoded as the floats $x, y, a, b, radius, score$.
The function detects ellipses in images using projective invariant pruning. For more details about this implementation, please see @cite jia2017fast Jia, Qi et al, (2017). A Fast Ellipse Detector using Projective Invariant Pruning. IEEE Transactions on Image Processing.
Python prototype (for reference only):
findEllipses(image[, ellipses[, scoreThreshold[, reliabilityThreshold[, centerDistanceThreshold]]]]) -> ellipses
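Elixir usage sketch (illustrative only; the image path and threshold values are placeholders):
# Detect ellipses in a gray or color image; each row of the result encodes one
# ellipse as described above.
img = Evision.imread("coins.png")
ellipses = Evision.XImgProc.findEllipses(img, scoreThreshold: 0.7, reliabilityThreshold: 0.5)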
@spec fourierDescriptor(Keyword.t()) :: any() | {:error, String.t()}
@spec fourierDescriptor(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
Fourier descriptors for planar closed curves
Positional Arguments
- src:
Evision.Mat
Keyword Arguments
- nbElt:
integer()
. - nbFD:
integer()
.
Return
- dst:
Evision.Mat.t()
.
For more details about this implementation, please see @cite PersoonFu1977
Python prototype (for reference only):
fourierDescriptor(src[, dst[, nbElt[, nbFD]]]) -> dst
@spec fourierDescriptor( Evision.Mat.maybe_mat_in(), [nbElt: term(), nbFD: term()] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Fourier descriptors for planar closed curves
Positional Arguments
- src:
Evision.Mat
Keyword Arguments
- nbElt:
integer()
. - nbFD:
integer()
.
Return
- dst:
Evision.Mat.t()
.
For more details about this implementation, please see @cite PersoonFu1977
Python prototype (for reference only):
fourierDescriptor(src[, dst[, nbElt[, nbFD]]]) -> dst
@spec getDisparityVis(Keyword.t()) :: any() | {:error, String.t()}
@spec getDisparityVis(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
Function for creating a disparity map visualization (clamped CV_8U image)
Positional Arguments
src:
Evision.Mat
.input disparity map (CV_16S depth)
Keyword Arguments
scale:
double
.disparity map will be multiplied by this value for visualization
Return
dst:
Evision.Mat.t()
.output visualization
Python prototype (for reference only):
getDisparityVis(src[, dst[, scale]]) -> dst
@spec getDisparityVis(Evision.Mat.maybe_mat_in(), [{:scale, term()}] | nil) :: Evision.Mat.t() | {:error, String.t()}
Function for creating a disparity map visualization (clamped CV_8U image)
Positional Arguments
src:
Evision.Mat
.input disparity map (CV_16S depth)
Keyword Arguments
scale:
double
.disparity map will be multiplied by this value for visualization
Return
dst:
Evision.Mat.t()
.output visualization
Python prototype (for reference only):
getDisparityVis(src[, dst[, scale]]) -> dst
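Elixir usage sketch (illustrative only; disparity is a placeholder CV_16S disparity map produced by a stereo matcher elsewhere):
# Scale the fixed-point disparity into a clamped 8-bit image for display.
vis = Evision.XImgProc.getDisparityVis(disparity, scale: 2.0)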
@spec gradientDericheX(Evision.Mat.maybe_mat_in(), number(), number()) :: Evision.Mat.t() | {:error, String.t()}
Applies X Deriche filter to an image.
Positional Arguments
- op:
Evision.Mat
- alpha:
double
- omega:
double
Return
- dst:
Evision.Mat.t()
.
For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf
Python prototype (for reference only):
GradientDericheX(op, alpha, omega[, dst]) -> dst
@spec gradientDericheX( Evision.Mat.maybe_mat_in(), number(), number(), [{atom(), term()}, ...] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Applies X Deriche filter to an image.
Positional Arguments
- op:
Evision.Mat
- alpha:
double
- omega:
double
Return
- dst:
Evision.Mat.t()
.
For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf
Python prototype (for reference only):
GradientDericheX(op, alpha, omega[, dst]) -> dst
@spec gradientDericheY(Evision.Mat.maybe_mat_in(), number(), number()) :: Evision.Mat.t() | {:error, String.t()}
Applies Y Deriche filter to an image.
Positional Arguments
- op:
Evision.Mat
- alpha:
double
- omega:
double
Return
- dst:
Evision.Mat.t()
.
For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf
Python prototype (for reference only):
GradientDericheY(op, alpha, omega[, dst]) -> dst
@spec gradientDericheY( Evision.Mat.maybe_mat_in(), number(), number(), [{atom(), term()}, ...] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Applies Y Deriche filter to an image.
Positional Arguments
- op:
Evision.Mat
- alpha:
double
- omega:
double
Return
- dst:
Evision.Mat.t()
.
For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf
Python prototype (for reference only):
GradientDericheY(op, alpha, omega[, dst]) -> dst
@spec guidedFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), number() ) :: Evision.Mat.t() | {:error, String.t()}
Simple one-line (Fast) Guided Filter call.
Positional Arguments
guide:
Evision.Mat
.guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.
src:
Evision.Mat
.filtering image with any numbers of channels.
radius:
integer()
.radius of Guided Filter.
eps:
double
.regularization term of Guided Filter. \f${eps}^2\f$ is similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
dDepth:
integer()
.optional depth of the output image.
scale:
double
.subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale==0.5 shrinks the image by 2x inside the filter).
Return
dst:
Evision.Mat.t()
.output image.
If you have multiple images to filter with the same guide image, use the GuidedFilter interface to avoid extra computations on the initialization stage.
@sa bilateralFilter, dtFilter, amFilter
Python prototype (for reference only):
guidedFilter(guide, src, radius, eps[, dst[, dDepth[, scale]]]) -> dst
@spec guidedFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), number(), [dDepth: term(), scale: term()] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Simple one-line (Fast) Guided Filter call.
Positional Arguments
guide:
Evision.Mat
.guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.
src:
Evision.Mat
.filtering image with any numbers of channels.
radius:
integer()
.radius of Guided Filter.
eps:
double
.regularization term of Guided Filter. \f${eps}^2\f$ is similar to the sigma in the color space of bilateralFilter.
Keyword Arguments
dDepth:
integer()
.optional depth of the output image.
scale:
double
.subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale==0.5 shrinks the image by 2x inside the filter).
Return
dst:
Evision.Mat.t()
.output image.
If you have multiple images to filter with the same guide image, use the GuidedFilter interface to avoid extra computations on the initialization stage.
@sa bilateralFilter, dtFilter, amFilter
Python prototype (for reference only):
guidedFilter(guide, src, radius, eps[, dst[, dDepth[, scale]]]) -> dst
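Elixir usage sketch (illustrative only; guide and src are placeholder Evision.Mat values of matching size, e.g. a color guide and a noisy image; radius and eps are arbitrary example values):
# One-shot guided filtering; when filtering many images with the same guide,
# prefer the GuidedFilter factory interface mentioned above.
dst = Evision.XImgProc.guidedFilter(guide, src, 8, 100.0)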
@spec houghPoint2Line( {number(), number()}, Evision.Mat.maybe_mat_in() ) :: {integer(), integer(), integer(), integer()} | {:error, String.t()}
Calculates the coordinates of the line segment corresponding to a point in Hough space.
Positional Arguments
- houghPoint:
Point
- srcImgInfo:
Evision.Mat
Keyword Arguments
- angleRange:
integer()
. - makeSkew:
integer()
. - rules:
integer()
.
Return
- retval:
Vec4i
@retval [Vec4i] Coordinates of the line segment corresponding to the point in Hough space. @remarks If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. @remarks If the rules parameter is set to RO_WEAK and the point belongs to the incorrect part of the Hough image, the returned line will not intersect the source image. The function calculates the coordinates of the line segment corresponding to the point in Hough space.
Python prototype (for reference only):
HoughPoint2Line(houghPoint, srcImgInfo[, angleRange[, makeSkew[, rules]]]) -> retval
@spec houghPoint2Line( {number(), number()}, Evision.Mat.maybe_mat_in(), [angleRange: term(), makeSkew: term(), rules: term()] | nil ) :: {integer(), integer(), integer(), integer()} | {:error, String.t()}
Calculates the coordinates of the line segment corresponding to a point in Hough space.
Positional Arguments
- houghPoint:
Point
- srcImgInfo:
Evision.Mat
Keyword Arguments
- angleRange:
integer()
. - makeSkew:
integer()
. - rules:
integer()
.
Return
- retval:
Vec4i
@retval [Vec4i] Coordinates of the line segment corresponding to the point in Hough space. @remarks If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. @remarks If the rules parameter is set to RO_WEAK and the point belongs to the incorrect part of the Hough image, the returned line will not intersect the source image. The function calculates the coordinates of the line segment corresponding to the point in Hough space.
Python prototype (for reference only):
HoughPoint2Line(houghPoint, srcImgInfo[, angleRange[, makeSkew[, rules]]]) -> retval
@spec jointBilateralFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), number(), number() ) :: Evision.Mat.t() | {:error, String.t()}
Applies the joint bilateral filter to an image.
Positional Arguments
joint:
Evision.Mat
.Joint 8-bit or floating-point, 1-channel or 3-channel image.
src:
Evision.Mat
.Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image.
d:
integer()
.Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .
sigmaColor:
double
.Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace:
double
.Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .
Keyword Arguments
- borderType:
integer()
.
Return
dst:
Evision.Mat.t()
.Destination image of the same size and type as src .
Note: bilateralFilter and jointBilateralFilter use L1 norm to compute difference between colors. @sa bilateralFilter, amFilter
Python prototype (for reference only):
jointBilateralFilter(joint, src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) -> dst
@spec jointBilateralFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), number(), number(), [{:borderType, term()}] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Applies the joint bilateral filter to an image.
Positional Arguments
joint:
Evision.Mat
.Joint 8-bit or floating-point, 1-channel or 3-channel image.
src:
Evision.Mat
.Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image.
d:
integer()
.Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .
sigmaColor:
double
.Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace:
double
.Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .
Keyword Arguments
- borderType:
integer()
.
Return
dst:
Evision.Mat.t()
.Destination image of the same size and type as src .
Note: bilateralFilter and jointBilateralFilter use L1 norm to compute difference between colors. @sa bilateralFilter, amFilter
Python prototype (for reference only):
jointBilateralFilter(joint, src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) -> dst
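Elixir usage sketch (illustrative only; joint and src are placeholder Evision.Mat values of the same depth; the parameter values are arbitrary examples):
# Filter src while taking edges from the joint image.
dst = Evision.XImgProc.jointBilateralFilter(joint, src, 9, 25.0, 9.0)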
@spec l0Smooth(Keyword.t()) :: any() | {:error, String.t()}
@spec l0Smooth(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
Global image smoothing via L0 gradient minimization.
Positional Arguments
src:
Evision.Mat
.source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.
Keyword Arguments
lambda:
double
.parameter defining the smooth term weight.
kappa:
double
.parameter defining the increasing factor of the weight of the gradient data term.
Return
dst:
Evision.Mat.t()
.destination image.
For more details about L0 Smoother, see the original paper @cite xu2011image.
Python prototype (for reference only):
l0Smooth(src[, dst[, lambda[, kappa]]]) -> dst
@spec l0Smooth(Evision.Mat.maybe_mat_in(), [kappa: term(), lambda: term()] | nil) :: Evision.Mat.t() | {:error, String.t()}
Global image smoothing via L0 gradient minimization.
Positional Arguments
src:
Evision.Mat
.source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.
Keyword Arguments
lambda:
double
.parameter defining the smooth term weight.
kappa:
double
.parameter defining the increasing factor of the weight of the gradient data term.
Return
dst:
Evision.Mat.t()
.destination image.
For more details about L0 Smoother, see the original paper @cite xu2011image.
Python prototype (for reference only):
l0Smooth(src[, dst[, lambda[, kappa]]]) -> dst
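Elixir usage sketch (illustrative only; the image path is a placeholder; lambda: 0.02 and kappa: 2.0 are commonly used values):
# Piecewise-constant smoothing via L0 gradient minimization.
img = Evision.imread("input.png")
dst = Evision.XImgProc.l0Smooth(img, lambda: 0.02, kappa: 2.0)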
@spec niBlackThreshold( Evision.Mat.maybe_mat_in(), number(), integer(), integer(), number() ) :: Evision.Mat.t() | {:error, String.t()}
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
Positional Arguments
src:
Evision.Mat
.Source 8-bit single-channel image.
maxValue:
double
.Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.
type:
integer()
.Thresholding type, see cv::ThresholdTypes.
blockSize:
integer()
.Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.
k:
double
.The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean.
Keyword Arguments
binarizationMethod:
integer()
.Binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods.
r:
double
.The user-adjustable parameter used by Sauvola's technique. This is the dynamic range of standard deviation.
Return
dst:
Evision.Mat.t()
.Destination image of the same size and the same type as src.
The function transforms a grayscale image to a binary image according to the formulae:
THRESH_BINARY \f[dst(x,y) = \fork{\texttt{maxValue}}{if (src(x,y) > T(x,y))}{0}{otherwise}\f]
THRESH_BINARY_INV \f[dst(x,y) = \fork{0}{if (src(x,y) > T(x,y))}{\texttt{maxValue}}{otherwise}\f] where \f$T(x,y)\f$ is a threshold calculated individually for each pixel.
The threshold value \f$T(x, y)\f$ is determined based on the binarization method chosen. For classic Niblack, it is the mean minus \f$ k \f$ times standard deviation of \f$\texttt{blockSize} \times\texttt{blockSize}\f$ neighborhood of \f$(x, y)\f$. The function can't process the image in-place. @sa threshold, adaptiveThreshold
Python prototype (for reference only):
niBlackThreshold(_src, maxValue, type, blockSize, k[, _dst[, binarizationMethod[, r]]]) -> _dst
@spec niBlackThreshold( Evision.Mat.maybe_mat_in(), number(), integer(), integer(), number(), [binarizationMethod: term(), r: term()] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
Positional Arguments
src:
Evision.Mat
.Source 8-bit single-channel image.
maxValue:
double
.Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.
type:
integer()
.Thresholding type, see cv::ThresholdTypes.
blockSize:
integer()
.Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.
k:
double
.The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean.
Keyword Arguments
binarizationMethod:
integer()
.Binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods.
r:
double
.The user-adjustable parameter used by Sauvola's technique. This is the dynamic range of standard deviation.
Return
dst:
Evision.Mat.t()
.Destination image of the same size and the same type as src.
The function transforms a grayscale image to a binary image according to the formulae:
THRESH_BINARY \f[dst(x,y) = \fork{\texttt{maxValue}}{if (src(x,y) > T(x,y))}{0}{otherwise}\f]
THRESH_BINARY_INV \f[dst(x,y) = \fork{0}{if (src(x,y) > T(x,y))}{\texttt{maxValue}}{otherwise}\f] where \f$T(x,y)\f$ is a threshold calculated individually for each pixel.
The threshold value \f$T(x, y)\f$ is determined based on the binarization method chosen. For classic Niblack, it is the mean minus \f$ k \f$ times standard deviation of \f$\texttt{blockSize} \times\texttt{blockSize}\f$ neighborhood of \f$(x, y)\f$. The function can't process the image in-place. @sa threshold, adaptiveThreshold
Python prototype (for reference only):
niBlackThreshold(_src, maxValue, type, blockSize, k[, _dst[, binarizationMethod[, r]]]) -> _dst
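Elixir usage sketch (illustrative only; the image path, the Evision.Constant.cv_THRESH_BINARY/0 and cv_COLOR_BGR2GRAY/0 constant names, and the parameter values are placeholders/assumptions):
# Local Niblack-style binarization of a grayscale image with a 25 x 25 window.
gray =
  Evision.imread("page.png")
  |> Evision.cvtColor(Evision.Constant.cv_COLOR_BGR2GRAY())
binary = Evision.XImgProc.niBlackThreshold(gray, 255, Evision.Constant.cv_THRESH_BINARY(), 25, 0.2)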
@spec peiLinNormalization(Keyword.t()) :: any() | {:error, String.t()}
@spec peiLinNormalization(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
PeiLinNormalization
Positional Arguments
- i:
Evision.Mat
Return
- t:
Evision.Mat.t()
.
Has overloading in C++
Python prototype (for reference only):
PeiLinNormalization(I[, T]) -> T
@spec peiLinNormalization(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) :: Evision.Mat.t() | {:error, String.t()}
PeiLinNormalization
Positional Arguments
- i:
Evision.Mat
Return
- t:
Evision.Mat.t()
.
Has overloading in C++
Python prototype (for reference only):
PeiLinNormalization(I[, T]) -> T
@spec qconj(Keyword.t()) :: any() | {:error, String.t()}
@spec qconj(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
calculates the conjugate of a quaternion image.
Positional Arguments
- qimg:
Evision.Mat
Return
- qcimg:
Evision.Mat.t()
.
Python prototype (for reference only):
qconj(qimg[, qcimg]) -> qcimg
@spec qconj(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) :: Evision.Mat.t() | {:error, String.t()}
calculates the conjugate of a quaternion image.
Positional Arguments
- qimg:
Evision.Mat
Return
- qcimg:
Evision.Mat.t()
.
Python prototype (for reference only):
qconj(qimg[, qcimg]) -> qcimg
@spec qdft(Evision.Mat.maybe_mat_in(), integer(), boolean()) :: Evision.Mat.t() | {:error, String.t()}
Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.
Positional Arguments
- img:
Evision.Mat
- flags:
integer()
- sideLeft:
bool
Return
- qimg:
Evision.Mat.t()
.
Python prototype (for reference only):
qdft(img, flags, sideLeft[, qimg]) -> qimg
@spec qdft( Evision.Mat.maybe_mat_in(), integer(), boolean(), [{atom(), term()}, ...] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.
Positional Arguments
- img:
Evision.Mat
- flags:
integer()
- sideLeft:
bool
Return
- qimg:
Evision.Mat.t()
.
Python prototype (for reference only):
qdft(img, flags, sideLeft[, qimg]) -> qimg
@spec qmultiply(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
Calculates the per-element quaternion product of two arrays
Positional Arguments
- src1:
Evision.Mat
- src2:
Evision.Mat
Return
- dst:
Evision.Mat.t()
.
Python prototype (for reference only):
qmultiply(src1, src2[, dst]) -> dst
@spec qmultiply( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Calculates the per-element quaternion product of two arrays
Positional Arguments
- src1:
Evision.Mat
- src2:
Evision.Mat
Return
- dst:
Evision.Mat.t()
.
Python prototype (for reference only):
qmultiply(src1, src2[, dst]) -> dst
@spec qunitary(Keyword.t()) :: any() | {:error, String.t()}
@spec qunitary(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
divides each element by its modulus.
Positional Arguments
- qimg:
Evision.Mat
Return
- qnimg:
Evision.Mat.t()
.
Python prototype (for reference only):
qunitary(qimg[, qnimg]) -> qnimg
@spec qunitary(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) :: Evision.Mat.t() | {:error, String.t()}
divides each element by its modulus.
Positional Arguments
- qimg:
Evision.Mat
Return
- qnimg:
Evision.Mat.t()
.
Python prototype (for reference only):
qunitary(qimg[, qnimg]) -> qnimg
@spec radonTransform(Keyword.t()) :: any() | {:error, String.t()}
@spec radonTransform(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
Calculate Radon Transform of an image.
Positional Arguments
- src:
Evision.Mat
Keyword Arguments
- theta:
double
. - start_angle:
double
. - end_angle:
double
. - crop:
bool
. - norm:
bool
.
Return
- dst:
Evision.Mat.t()
.
This function calculates the Radon Transform of a given image in any range. See https://engineering.purdue.edu/~malcolm/pct/CTI_Ch03.pdf for details. If the input type is CV_8U, the output will be CV_32S. If the input type is CV_32F or CV_64F, the output will be CV_64F. The output size will be num_of_integral x src_diagonal_length. If crop is selected, the input image will be cropped into a square and then a circle, and the output size will be num_of_integral x min_edge.
Python prototype (for reference only):
RadonTransform(src[, dst[, theta[, start_angle[, end_angle[, crop[, norm]]]]]]) -> dst
@spec radonTransform( Evision.Mat.maybe_mat_in(), [ crop: term(), end_angle: term(), norm: term(), start_angle: term(), theta: term() ] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Calculate Radon Transform of an image.
Positional Arguments
- src:
Evision.Mat
Keyword Arguments
- theta:
double
. - start_angle:
double
. - end_angle:
double
. - crop:
bool
. - norm:
bool
.
Return
- dst:
Evision.Mat.t()
.
This function calculates the Radon Transform of a given image in any range. See https://engineering.purdue.edu/~malcolm/pct/CTI_Ch03.pdf for details. If the input type is CV_8U, the output will be CV_32S. If the input type is CV_32F or CV_64F, the output will be CV_64F. The output size will be num_of_integral x src_diagonal_length. If crop is selected, the input image will be cropped into a square and then a circle, and the output size will be num_of_integral x min_edge.
Python prototype (for reference only):
RadonTransform(src[, dst[, theta[, start_angle[, end_angle[, crop[, norm]]]]]]) -> dst
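Elixir usage sketch (illustrative only; the image path and the cv_COLOR_BGR2GRAY/0 constant name are placeholders/assumptions; default keyword arguments are used):
# Radon transform (sinogram) of a grayscale image over the default angle range.
gray =
  Evision.imread("phantom.png")
  |> Evision.cvtColor(Evision.Constant.cv_COLOR_BGR2GRAY())
sinogram = Evision.XImgProc.radonTransform(gray)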
@spec readGT(Keyword.t()) :: any() | {:error, String.t()}
@spec readGT(binary()) :: {integer(), Evision.Mat.t()} | {:error, String.t()}
Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16.
Positional Arguments
src_path:
String
.path to the image, containing ground-truth disparity map
Return
retval:
integer()
dst:
Evision.Mat.t()
.output disparity map, CV_16S depth
@result returns zero if successfully read the ground truth
Python prototype (for reference only):
readGT(src_path[, dst]) -> retval, dst
@spec readGT(binary(), [{atom(), term()}, ...] | nil) :: {integer(), Evision.Mat.t()} | {:error, String.t()}
Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16.
Positional Arguments
src_path:
String
.path to the image, containing ground-truth disparity map
Return
retval:
integer()
dst:
Evision.Mat.t()
.output disparity map, CV_16S depth
@result returns zero if successfully read the ground truth
Python prototype (for reference only):
readGT(src_path[, dst]) -> retval, dst
@spec rollingGuidanceFilter(Keyword.t()) :: any() | {:error, String.t()}
@spec rollingGuidanceFilter(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
Applies the rolling guidance filter to an image.
Positional Arguments
src:
Evision.Mat
.Source 8-bit or floating-point, 1-channel or 3-channel image.
Keyword Arguments
d:
integer()
.Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .
sigmaColor:
double
.Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace:
double
.Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .
numOfIter:
integer()
.Number of iterations of joint edge-preserving filtering applied on the source image.
borderType:
integer()
.
Return
dst:
Evision.Mat.t()
.Destination image of the same size and type as src.
For more details, please see @cite zhang2014rolling
Note: rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter. @sa jointBilateralFilter, bilateralFilter, amFilter
Python prototype (for reference only):
rollingGuidanceFilter(src[, dst[, d[, sigmaColor[, sigmaSpace[, numOfIter[, borderType]]]]]]) -> dst
@spec rollingGuidanceFilter( Evision.Mat.maybe_mat_in(), [ borderType: term(), d: term(), numOfIter: term(), sigmaColor: term(), sigmaSpace: term() ] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Applies the rolling guidance filter to an image.
Positional Arguments
src:
Evision.Mat
.Source 8-bit or floating-point, 1-channel or 3-channel image.
Keyword Arguments
d:
integer()
.Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .
sigmaColor:
double
.Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace:
double
.Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .
numOfIter:
integer()
.Number of iterations of joint edge-preserving filtering applied on the source image.
borderType:
integer()
.
Return
dst:
Evision.Mat.t()
.Destination image of the same size and type as src.
For more details, please see @cite zhang2014rolling
Note: rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter. @sa jointBilateralFilter, bilateralFilter, amFilter
Python prototype (for reference only):
rollingGuidanceFilter(src[, dst[, d[, sigmaColor[, sigmaSpace[, numOfIter[, borderType]]]]]]) -> dst
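Elixir usage sketch (illustrative only; the image path and parameter values are placeholders):
# Remove small-scale texture while preserving large structures; more iterations
# remove progressively larger details.
img = Evision.imread("texture.png")
dst = Evision.XImgProc.rollingGuidanceFilter(img, d: 9, sigmaColor: 25.5, sigmaSpace: 3.0, numOfIter: 4)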
@spec thinning(Keyword.t()) :: any() | {:error, String.t()}
@spec thinning(Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
Applies a binary blob thinning operation to achieve a skeletonization of the input image.
Positional Arguments
src:
Evision.Mat
.Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values.
Keyword Arguments
thinningType:
integer()
.Value that defines which thinning algorithm should be used. See cv::ximgproc::ThinningTypes
Return
dst:
Evision.Mat.t()
.Destination image of the same size and the same type as src. The function can work in-place.
The function transforms a binary blob image into a skeletonized form using the Zhang-Suen technique.
Python prototype (for reference only):
thinning(src[, dst[, thinningType]]) -> dst
@spec thinning(Evision.Mat.maybe_mat_in(), [{:thinningType, term()}] | nil) :: Evision.Mat.t() | {:error, String.t()}
Applies a binary blob thinning operation to achieve a skeletonization of the input image.
Positional Arguments
src:
Evision.Mat
.Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values.
Keyword Arguments
thinningType:
integer()
.Value that defines which thinning algorithm should be used. See cv::ximgproc::ThinningTypes
Return
dst:
Evision.Mat.t()
.Destination image of the same size and the same type as src. The function can work in-place.
The function transforms a binary blob image into a skeletonized form using the Zhang-Suen technique.
Python prototype (for reference only):
thinning(src[, dst[, thinningType]]) -> dst
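Elixir usage sketch (illustrative only; binary is a placeholder 8-bit single-channel image whose foreground blobs have the value 255, e.g. produced by a threshold step elsewhere):
# Reduce binary blobs to their one-pixel-wide skeleton (Zhang-Suen by default).
skeleton = Evision.XImgProc.thinning(binary)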
@spec transformFD(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) :: Evision.Mat.t() | {:error, String.t()}
transform a contour
Positional Arguments
- src:
Evision.Mat
- t:
Evision.Mat
Keyword Arguments
- fdContour:
bool
.
Return
- dst:
Evision.Mat.t()
.
Python prototype (for reference only):
transformFD(src, t[, dst[, fdContour]]) -> dst
@spec transformFD( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), [{:fdContour, term()}] | nil ) :: Evision.Mat.t() | {:error, String.t()}
transform a contour
Positional Arguments
- src:
Evision.Mat
- t:
Evision.Mat
Keyword Arguments
- fdContour:
bool
.
Return
- dst:
Evision.Mat.t()
.
Python prototype (for reference only):
transformFD(src, t[, dst[, fdContour]]) -> dst
@spec weightedMedianFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer() ) :: Evision.Mat.t() | {:error, String.t()}
Applies weighted median filter to an image.
Positional Arguments
- joint:
Evision.Mat
- src:
Evision.Mat
- r:
integer()
Keyword Arguments
- sigma:
double
. - weightType:
integer()
. - mask:
Evision.Mat
.
Return
- dst:
Evision.Mat.t()
.
For more details about this implementation, please see @cite zhang2014100+
@sa medianBlur, jointBilateralFilter
Python prototype (for reference only):
weightedMedianFilter(joint, src, r[, dst[, sigma[, weightType[, mask]]]]) -> dst
@spec weightedMedianFilter( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), [mask: term(), sigma: term(), weightType: term()] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Applies weighted median filter to an image.
Positional Arguments
- joint:
Evision.Mat
- src:
Evision.Mat
- r:
integer()
Keyword Arguments
- sigma:
double
. - weightType:
integer()
. - mask:
Evision.Mat
.
Return
- dst:
Evision.Mat.t()
.
For more details about this implementation, please see @cite zhang2014100+
@sa medianBlur, jointBilateralFilter
Python prototype (for reference only):
weightedMedianFilter(joint, src, r[, dst[, sigma[, weightType[, mask]]]]) -> dst