Evision.XPhoto (Evision v0.1.38)

Summary

Types

t()

Type that represents an XPhoto struct.

Functions

applyChannelGains(src, gainB, gainG, gainR)
applyChannelGains(src, gainB, gainG, gainR, opts)

  Implements an efficient fixed-point approximation for applying channel gains, which is the last step of multiple white balance algorithms.

bm3dDenoising(src)
bm3dDenoising(src, opts)
bm3dDenoising(src, dstStep1, opts)

  Performs image denoising using the Block-Matching and 3D-filtering algorithm (http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf) with several computational optimizations. Noise is expected to be Gaussian white noise.

createGrayworldWB()

  Creates an instance of GrayworldWB.

createLearningBasedWB()
createLearningBasedWB(opts)

  Creates an instance of LearningBasedWB.

createSimpleWB()

  Creates an instance of SimpleWB.

createTonemapDurand()
createTonemapDurand(opts)

  Creates a TonemapDurand object.

dctDenoising(src, dst, sigma)
dctDenoising(src, dst, sigma, opts)

  The function implements simple DCT-based denoising.

inpaint(src, mask, dst, algorithmType)

  The function implements different single-image inpainting algorithms.

oilPainting(src, size, dynRatio)
oilPainting(src, size, dynRatio, opts)
oilPainting(src, size, dynRatio, code, opts)

  Applies an oil-painting effect to the image. See the book @cite Holzmann1988 for details.

Types

@type t() :: %Evision.XPhoto{ref: reference()}

Type that represents an XPhoto struct.

  • ref: reference()

    The underlying erlang resource variable.

Functions

applyChannelGains(src, gainB, gainG, gainR)

@spec applyChannelGains(Evision.Mat.maybe_mat_in(), number(), number(), number()) ::
  Evision.Mat.t() | {:error, String.t()}

Implements an efficient fixed-point approximation for applying channel gains, which is the last step of multiple white balance algorithms.

Positional Arguments
  • src: Evision.Mat.t().

    Input three-channel image in the BGR color space (either CV_8UC3 or CV_16UC3)

  • gainB: float.

    gain for the B channel

  • gainG: float.

    gain for the G channel

  • gainR: float.

    gain for the R channel

Return
  • dst: Evision.Mat.t().

    Output image of the same size and type as src.

Python prototype (for reference only):

applyChannelGains(src, gainB, gainG, gainR[, dst]) -> dst
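
Elixir example (for illustration only; the file name and gain values are placeholders):

img = Evision.imread("photo.png")

# Scale the B, G and R channels of the 8-bit BGR image by fixed gains.
case Evision.XPhoto.applyChannelGains(img, 1.2, 1.0, 0.8) do
  %Evision.Mat{} = dst -> Evision.imwrite("photo_gains.png", dst)
  {:error, reason} -> IO.puts("applyChannelGains failed: #{reason}")
end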

applyChannelGains(src, gainB, gainG, gainR, opts)

@spec applyChannelGains(
  Evision.Mat.maybe_mat_in(),
  number(),
  number(),
  number(),
  [{atom(), term()}, ...] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Implements an efficient fixed-point approximation for applying channel gains, which is the last step of multiple white balance algorithms.

Positional Arguments
  • src: Evision.Mat.t().

    Input three-channel image in the BGR color space (either CV_8UC3 or CV_16UC3)

  • gainB: float.

    gain for the B channel

  • gainG: float.

    gain for the G channel

  • gainR: float.

    gain for the R channel

Return
  • dst: Evision.Mat.t().

    Output image of the same size and type as src.

Python prototype (for reference only):

applyChannelGains(src, gainB, gainG, gainR[, dst]) -> dst

bm3dDenoising(src)

@spec bm3dDenoising(Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

Performs image denoising using the Block-Matching and 3D-filtering algorithm (http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf) with several computational optimizations. Noise is expected to be Gaussian white noise.

Positional Arguments
  • src: Evision.Mat.t().

    Input 8-bit or 16-bit 1-channel image.

Keyword Arguments
  • h: float.

    Parameter regulating filter strength. A big h value removes noise well but also removes image details; a smaller h value preserves details but also preserves some noise.

  • templateWindowSize: int.

    Size in pixels of the template patch that is used for block-matching. Should be a power of 2.

  • searchWindowSize: int.

    Size in pixels of the window that is used to perform block-matching. Affects performance linearly: the greater the searchWindowSize, the greater the denoising time. Must be larger than templateWindowSize.

  • blockMatchingStep1: int.

    Block matching threshold for the first step of BM3D (hard thresholding), i.e. the maximum distance for which two blocks are considered similar. The value is expressed in Euclidean distance.

  • blockMatchingStep2: int.

    Block matching threshold for the second step of BM3D (Wiener filtering), i.e. the maximum distance for which two blocks are considered similar. The value is expressed in Euclidean distance.

  • groupSize: int.

    Maximum size of the 3D group for collaborative filtering.

  • slidingStep: int.

    Sliding step to process every next reference block.

  • beta: float.

    Kaiser window parameter that affects the sidelobe attenuation of the transform of the window. Kaiser window is used in order to reduce border effects. To prevent usage of the window, set beta to zero.

  • normType: int.

    Norm used to calculate distance between blocks. L2 is slower than L1 but yields more accurate results.

  • step: int.

    Step of BM3D to be executed. Only BM3D_STEP1 and BM3D_STEPALL are allowed; BM3D_STEP2 is not allowed, as it requires the basic estimate to be present.

  • transformType: int.

    Type of the orthogonal transform used in collaborative filtering step. Currently only Haar transform is supported.

Return
  • dst: Evision.Mat.t().

    Output image with the same size and type as src.

This function is expected to be applied to grayscale images. For advanced usage, a colored image can be denoised manually in a different colorspace. @sa fastNlMeansDenoising

Python prototype (for reference only):

bm3dDenoising(src[, dst[, h[, templateWindowSize[, searchWindowSize[, blockMatchingStep1[, blockMatchingStep2[, groupSize[, slidingStep[, beta[, normType[, step[, transformType]]]]]]]]]]]]) -> dst
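
Elixir example (for illustration only; the file name, flag helper and parameter values are placeholders, and Evision.Constant is assumed to carry the OpenCV imread flags in this Evision version):

# Load an 8-bit, single-channel image and denoise it with BM3D,
# overriding a few of the keyword arguments listed above.
gray = Evision.imread("noisy.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())

denoised = Evision.XPhoto.bm3dDenoising(gray, h: 10.0, templateWindowSize: 4, searchWindowSize: 16)
Evision.imwrite("denoised.png", denoised)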

bm3dDenoising(src, opts)

@spec bm3dDenoising(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  Evision.Mat.t() | {:error, String.t()}
@spec bm3dDenoising(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}

Variant 1:

Performs image denoising using the Block-Matching and 3D-filtering algorithm (http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf) with several computational optimizations. Noise is expected to be Gaussian white noise.

Positional Arguments
  • src: Evision.Mat.t().

    Input 8-bit or 16-bit 1-channel image.

Keyword Arguments
  • h: float.

    Parameter regulating filter strength. A big h value removes noise well but also removes image details; a smaller h value preserves details but also preserves some noise.

  • templateWindowSize: int.

    Size in pixels of the template patch that is used for block-matching. Should be a power of 2.

  • searchWindowSize: int.

    Size in pixels of the window that is used to perform block-matching. Affects performance linearly: the greater the searchWindowSize, the greater the denoising time. Must be larger than templateWindowSize.

  • blockMatchingStep1: int.

    Block matching threshold for the first step of BM3D (hard thresholding), i.e. the maximum distance for which two blocks are considered similar. The value is expressed in Euclidean distance.

  • blockMatchingStep2: int.

    Block matching threshold for the second step of BM3D (Wiener filtering), i.e. the maximum distance for which two blocks are considered similar. The value is expressed in Euclidean distance.

  • groupSize: int.

    Maximum size of the 3D group for collaborative filtering.

  • slidingStep: int.

    Sliding step to process every next reference block.

  • beta: float.

    Kaiser window parameter that affects the sidelobe attenuation of the transform of the window. Kaiser window is used in order to reduce border effects. To prevent usage of the window, set beta to zero.

  • normType: int.

    Norm used to calculate distance between blocks. L2 is slower than L1 but yields more accurate results.

  • step: int.

    Step of BM3D to be executed. Possible variants are: step 1, step 2, both steps.

  • transformType: int.

    Type of the orthogonal transform used in collaborative filtering step. Currently only Haar transform is supported.

Return
  • dstStep1: Evision.Mat.t().

    Output image of the first step of BM3D with the same size and type as src.

  • dstStep2: Evision.Mat.t().

    Output image of the second step of BM3D with the same size and type as src.

This function is expected to be applied to grayscale images. For advanced usage, a colored image can be denoised manually in a different colorspace. @sa fastNlMeansDenoising

Python prototype (for reference only):

bm3dDenoising(src, dstStep1[, dstStep2[, h[, templateWindowSize[, searchWindowSize[, blockMatchingStep1[, blockMatchingStep2[, groupSize[, slidingStep[, beta[, normType[, step[, transformType]]]]]]]]]]]]) -> dstStep1, dstStep2

Variant 2:

Performs image denoising using the Block-Matching and 3D-filtering algorithm (http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf) with several computational optimizations. Noise is expected to be Gaussian white noise.

Positional Arguments
  • src: Evision.Mat.t().

    Input 8-bit or 16-bit 1-channel image.

Keyword Arguments
  • h: float.

    Parameter regulating filter strength. A big h value removes noise well but also removes image details; a smaller h value preserves details but also preserves some noise.

  • templateWindowSize: int.

    Size in pixels of the template patch that is used for block-matching. Should be a power of 2.

  • searchWindowSize: int.

    Size in pixels of the window that is used to perform block-matching. Affects performance linearly: the greater the searchWindowSize, the greater the denoising time. Must be larger than templateWindowSize.

  • blockMatchingStep1: int.

    Block matching threshold for the first step of BM3D (hard thresholding), i.e. the maximum distance for which two blocks are considered similar. The value is expressed in Euclidean distance.

  • blockMatchingStep2: int.

    Block matching threshold for the second step of BM3D (Wiener filtering), i.e. the maximum distance for which two blocks are considered similar. The value is expressed in Euclidean distance.

  • groupSize: int.

    Maximum size of the 3D group for collaborative filtering.

  • slidingStep: int.

    Sliding step to process every next reference block.

  • beta: float.

    Kaiser window parameter that affects the sidelobe attenuation of the transform of the window. Kaiser window is used in order to reduce border effects. To prevent usage of the window, set beta to zero.

  • normType: int.

    Norm used to calculate distance between blocks. L2 is slower than L1 but yields more accurate results.

  • step: int.

    Step of BM3D to be executed. Only BM3D_STEP1 and BM3D_STEPALL are allowed; BM3D_STEP2 is not allowed, as it requires the basic estimate to be present.

  • transformType: int.

    Type of the orthogonal transform used in collaborative filtering step. Currently only Haar transform is supported.

Return
  • dst: Evision.Mat.t().

    Output image with the same size and type as src.

This function is expected to be applied to grayscale images. For advanced usage, a colored image can be denoised manually in a different colorspace. @sa fastNlMeansDenoising

Python prototype (for reference only):

bm3dDenoising(src[, dst[, h[, templateWindowSize[, searchWindowSize[, blockMatchingStep1[, blockMatchingStep2[, groupSize[, slidingStep[, beta[, normType[, step[, transformType]]]]]]]]]]]]) -> dst

bm3dDenoising(src, dstStep1, opts)

@spec bm3dDenoising(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{atom(), term()}, ...] | nil
) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}

Performs image denoising using the Block-Matching and 3D-filtering algorithm (http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf) with several computational optimizations. Noise is expected to be Gaussian white noise.

Positional Arguments
  • src: Evision.Mat.t().

    Input 8-bit or 16-bit 1-channel image.

Keyword Arguments
  • h: float.

    Parameter regulating filter strength. A big h value removes noise well but also removes image details; a smaller h value preserves details but also preserves some noise.

  • templateWindowSize: int.

    Size in pixels of the template patch that is used for block-matching. Should be a power of 2.

  • searchWindowSize: int.

    Size in pixels of the window that is used to perform block-matching. Affects performance linearly: the greater the searchWindowSize, the greater the denoising time. Must be larger than templateWindowSize.

  • blockMatchingStep1: int.

    Block matching threshold for the first step of BM3D (hard thresholding), i.e. the maximum distance for which two blocks are considered similar. The value is expressed in Euclidean distance.

  • blockMatchingStep2: int.

    Block matching threshold for the second step of BM3D (Wiener filtering), i.e. the maximum distance for which two blocks are considered similar. The value is expressed in Euclidean distance.

  • groupSize: int.

    Maximum size of the 3D group for collaborative filtering.

  • slidingStep: int.

    Sliding step to process every next reference block.

  • beta: float.

    Kaiser window parameter that affects the sidelobe attenuation of the transform of the window. Kaiser window is used in order to reduce border effects. To prevent usage of the window, set beta to zero.

  • normType: int.

    Norm used to calculate distance between blocks. L2 is slower than L1 but yields more accurate results.

  • step: int.

    Step of BM3D to be executed. Possible variants are: step 1, step 2, both steps.

  • transformType: int.

    Type of the orthogonal transform used in collaborative filtering step. Currently only Haar transform is supported.

Return
  • dstStep1: Evision.Mat.t().

    Output image of the first step of BM3D with the same size and type as src.

  • dstStep2: Evision.Mat.t().

    Output image of the second step of BM3D with the same size and type as src.

This function is expected to be applied to grayscale images. For advanced usage, a colored image can be denoised manually in a different colorspace. @sa fastNlMeansDenoising

Python prototype (for reference only):

bm3dDenoising(src, dstStep1[, dstStep2[, h[, templateWindowSize[, searchWindowSize[, blockMatchingStep1[, blockMatchingStep2[, groupSize[, slidingStep[, beta[, normType[, step[, transformType]]]]]]]]]]]]) -> dstStep1, dstStep2
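
Elixir example (for illustration only; a sketch of the two-output variant, assuming Evision.Mat.clone/1 is available to preallocate the dstStep1 output and that the imread flag helper exists in Evision.Constant):

gray = Evision.imread("noisy.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
basic = Evision.Mat.clone(gray)  # placeholder output Mat for the step-1 (basic) estimate

# Returns both the hard-thresholding estimate and the Wiener-filtered result.
{dstStep1, dstStep2} = Evision.XPhoto.bm3dDenoising(gray, basic, h: 10.0)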

createGrayworldWB()

@spec createGrayworldWB() :: Evision.XPhoto.GrayworldWB.t() | {:error, String.t()}

Creates an instance of GrayworldWB

Return
  • retval: Evision.XPhoto.GrayworldWB.t()

Python prototype (for reference only):

createGrayworldWB() -> retval
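
Elixir example (for illustration only; assumes the generated Evision.XPhoto.GrayworldWB module exposes setSaturationThreshold/2 and balanceWhite/2, the latter inherited from the WhiteBalancer interface):

img = Evision.imread("photo.png")

wb = Evision.XPhoto.createGrayworldWB()
wb = Evision.XPhoto.GrayworldWB.setSaturationThreshold(wb, 0.95)

balanced = Evision.XPhoto.GrayworldWB.balanceWhite(wb, img)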

createLearningBasedWB()

@spec createLearningBasedWB() ::
  Evision.XPhoto.LearningBasedWB.t() | {:error, String.t()}

Creates an instance of LearningBasedWB

Keyword Arguments
  • path_to_model: String.

    Path to a .yml file with the model. If not specified, the default model is used

Return
  • retval: Evision.XPhoto.LearningBasedWB.t()

Python prototype (for reference only):

createLearningBasedWB([, path_to_model]) -> retval
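
Elixir example (for illustration only; the model path is a placeholder, and balanceWhite/2 is assumed to be exposed on the generated Evision.XPhoto.LearningBasedWB module via the WhiteBalancer interface):

img = Evision.imread("photo.png")

wb = Evision.XPhoto.createLearningBasedWB(path_to_model: "color_balance_model.yml")
balanced = Evision.XPhoto.LearningBasedWB.balanceWhite(wb, img)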

createLearningBasedWB(opts)

@spec createLearningBasedWB([{atom(), term()}, ...] | nil) ::
  Evision.XPhoto.LearningBasedWB.t() | {:error, String.t()}

Creates an instance of LearningBasedWB

Keyword Arguments
  • path_to_model: String.

    Path to a .yml file with the model. If not specified, the default model is used

Return
  • retval: Evision.XPhoto.LearningBasedWB.t()

Python prototype (for reference only):

createLearningBasedWB([, path_to_model]) -> retval

createSimpleWB()

@spec createSimpleWB() :: Evision.XPhoto.SimpleWB.t() | {:error, String.t()}

Creates an instance of SimpleWB

Return
  • retval: Evision.XPhoto.SimpleWB.t()

Python prototype (for reference only):

createSimpleWB() -> retval
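
Elixir example (for illustration only; balanceWhite/2 is assumed to be exposed on the generated Evision.XPhoto.SimpleWB module via the WhiteBalancer interface):

img = Evision.imread("photo.png")

wb = Evision.XPhoto.createSimpleWB()
balanced = Evision.XPhoto.SimpleWB.balanceWhite(wb, img)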

createTonemapDurand()

@spec createTonemapDurand() :: Evision.XPhoto.TonemapDurand.t() | {:error, String.t()}

Creates a TonemapDurand object

Keyword Arguments
  • gamma: float.

    gamma value for gamma correction. See createTonemap

  • contrast: float.

    resulting contrast on a logarithmic scale, i.e. log(max / min), where max and min are the maximum and minimum luminance values of the resulting image.

  • saturation: float.

    saturation enhancement value. See createTonemapDrago

  • sigma_color: float.

    bilateral filter sigma in color space

  • sigma_space: float.

    bilateral filter sigma in coordinate space

Return
  • retval: Evision.XPhoto.TonemapDurand.t()

You need to set the OPENCV_ENABLE_NONFREE option in CMake to use this function. Use it at your own risk.

Python prototype (for reference only):

createTonemapDurand([, gamma[, contrast[, saturation[, sigma_color[, sigma_space]]]]]) -> retval
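
Elixir example (for illustration only; the file name and parameter values are placeholders, the imread flag helper is assumed to exist in Evision.Constant, process/2 is assumed to be exposed on the generated Evision.XPhoto.TonemapDurand module via the Tonemap interface, and OpenCV must be built with OPENCV_ENABLE_NONFREE):

# Load a 32-bit floating-point HDR image unchanged.
hdr = Evision.imread("scene.hdr", flags: Evision.Constant.cv_IMREAD_UNCHANGED())

tonemapper = Evision.XPhoto.createTonemapDurand(gamma: 2.2, contrast: 4.0, saturation: 1.0)

# `ldr` is a 32-bit floating-point BGR image with values roughly in [0, 1].
ldr = Evision.XPhoto.TonemapDurand.process(tonemapper, hdr)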

createTonemapDurand(opts)

@spec createTonemapDurand([{atom(), term()}, ...] | nil) ::
  Evision.XPhoto.TonemapDurand.t() | {:error, String.t()}

Creates a TonemapDurand object

Keyword Arguments
  • gamma: float.

    gamma value for gamma correction. See createTonemap

  • contrast: float.

    resulting contrast on a logarithmic scale, i.e. log(max / min), where max and min are the maximum and minimum luminance values of the resulting image.

  • saturation: float.

    saturation enhancement value. See createTonemapDrago

  • sigma_color: float.

    bilateral filter sigma in color space

  • sigma_space: float.

    bilateral filter sigma in coordinate space

Return
  • retval: Evision.XPhoto.TonemapDurand.t()

You need to set the OPENCV_ENABLE_NONFREE option in CMake to use this function. Use it at your own risk.

Python prototype (for reference only):

createTonemapDurand([, gamma[, contrast[, saturation[, sigma_color[, sigma_space]]]]]) -> retval

dctDenoising(src, dst, sigma)

@spec dctDenoising(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), number()) ::
  :ok | {:error, String.t()}

The function implements simple DCT-based denoising

Positional Arguments
  • src: Evision.Mat.t().

    source image

  • dst: Evision.Mat.t().

    destination image

  • sigma: double.

    expected noise standard deviation

Keyword Arguments
  • psize: int.

    size of the block side where the DCT is computed

http://www.ipol.im/pub/art/2011/ys-dct/.

@sa fastNlMeansDenoising

Python prototype (for reference only):

dctDenoising(src, dst, sigma[, psize]) -> None
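
Elixir example (for illustration only; the file name and sigma are placeholders. Since the spec above returns :ok, dst is treated here as an output Mat that the binding fills in place; it is preallocated by cloning the source, assuming Evision.Mat.clone/1 is available):

src = Evision.imread("noisy.png")
dst = Evision.Mat.clone(src)  # placeholder output Mat

:ok = Evision.XPhoto.dctDenoising(src, dst, 25.0, psize: 16)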

dctDenoising(src, dst, sigma, opts)

@spec dctDenoising(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  number(),
  [{atom(), term()}, ...] | nil
) :: :ok | {:error, String.t()}

The function implements simple DCT-based denoising

Positional Arguments
  • src: Evision.Mat.t().

    source image

  • dst: Evision.Mat.t().

    destination image

  • sigma: double.

    expected noise standard deviation

Keyword Arguments
  • psize: int.

    size of the block side where the DCT is computed

http://www.ipol.im/pub/art/2011/ys-dct/.

@sa fastNlMeansDenoising

Python prototype (for reference only):

dctDenoising(src, dst, sigma[, psize]) -> None

inpaint(src, mask, dst, algorithmType)

The function implements different single-image inpainting algorithms.

Positional Arguments
  • src: Evision.Mat.t().

    source image

    • #INPAINT_SHIFTMAP: it can be of any type and have any number of channels from 1 to 4. In case of 3- and 4-channel images the function expects them in a CIELab-like colorspace, where the first color component represents intensity, while the second and third represent color. Nonetheless, you can try any colorspace.
    • #INPAINT_FSR_BEST or #INPAINT_FSR_FAST: 1-channel grayscale or 3-channel BGR image.
  • mask: Evision.Mat.t().

    mask (#CV_8UC1), where non-zero pixels indicate valid image area, while zero pixels indicate area to be inpainted

  • dst: Evision.Mat.t().

    destination image

  • algorithmType: int.

    see xphoto::InpaintTypes

See the original papers @cite He2012 (Shiftmap) or @cite GenserPCS2018 and @cite SeilerTIP2015 (FSR) for details.

Python prototype (for reference only):

inpaint(src, mask, dst, algorithmType) -> None
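
Elixir example (for illustration only; file names are placeholders, dst is assumed to be an output Mat filled in place to match the `-> None` prototype above, and the integer 0 stands for xphoto::INPAINT_SHIFTMAP in xphoto::InpaintTypes, with 1 = INPAINT_FSR_BEST and 2 = INPAINT_FSR_FAST):

src = Evision.imread("damaged.png")
# 8-bit single-channel mask: non-zero = valid pixels, zero = area to be inpainted.
mask = Evision.imread("valid_mask.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
dst = Evision.Mat.clone(src)  # placeholder output Mat

Evision.XPhoto.inpaint(src, mask, dst, 0)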

oilPainting(src, size, dynRatio)

@spec oilPainting(Evision.Mat.maybe_mat_in(), integer(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

Applies an oil-painting effect to the image. See the book @cite Holzmann1988 for details.

Positional Arguments
  • src: Evision.Mat.t().

    Input three-channel or one channel image (either CV_8UC3 or CV_8UC1)

  • size: int.

    the neighbouring area size is 2*size+1

  • dynRatio: int.

    the image is divided by dynRatio before histogram processing

Return
  • dst: Evision.Mat.t().

    Output image of the same size and type as src.

Python prototype (for reference only):

oilPainting(src, size, dynRatio[, dst]) -> dst
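
Elixir example (for illustration only; the file name and parameter values are placeholders):

img = Evision.imread("photo.png")

# size = 7 gives a 15x15 neighbourhood; the image is divided by dynRatio = 1
# before histogram processing.
painted = Evision.XPhoto.oilPainting(img, 7, 1)
Evision.imwrite("photo_oil.png", painted)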

oilPainting(src, size, dynRatio, opts)

@spec oilPainting(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  [{atom(), term()}, ...] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec oilPainting(Evision.Mat.maybe_mat_in(), integer(), integer(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

Variant 1:

Applies an oil-painting effect to the image. See the book @cite Holzmann1988 for details.

Positional Arguments
  • src: Evision.Mat.t().

    Input three-channel or one channel image (either CV_8UC3 or CV_8UC1)

  • size: int.

    the neighbouring area size is 2*size+1

  • dynRatio: int.

    the image is divided by dynRatio before histogram processing

  • code: int

Return
  • dst: Evision.Mat.t().

    Output image of the same size and type as src.

Python prototype (for reference only):

oilPainting(src, size, dynRatio, code[, dst]) -> dst

Variant 2:

Applies an oil-painting effect to the image. See the book @cite Holzmann1988 for details.

Positional Arguments
  • src: Evision.Mat.t().

    Input three-channel or one channel image (either CV_8UC3 or CV_8UC1)

  • size: int.

    the neighbouring area size is 2*size+1

  • dynRatio: int.

    the image is divided by dynRatio before histogram processing

Return
  • dst: Evision.Mat.t().

    Output image of the same size and type as src.

Python prototype (for reference only):

oilPainting(src, size, dynRatio[, dst]) -> dst

oilPainting(src, size, dynRatio, code, opts)

@spec oilPainting(
  Evision.Mat.maybe_mat_in(),
  integer(),
  integer(),
  integer(),
  [{atom(), term()}, ...] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Applies an oil-painting effect to the image. See the book @cite Holzmann1988 for details.

Positional Arguments
  • src: Evision.Mat.t().

    Input three-channel or one channel image (either CV_8UC3 or CV_8UC1)

  • size: int.

    the neighbouring area size is 2*size+1

  • dynRatio: int.

    the image is divided by dynRatio before histogram processing

  • code: int

Return
  • dst: Evision.Mat.t().

    Output image of the same size and type as src.

Python prototype (for reference only):

oilPainting(src, size, dynRatio, code[, dst]) -> dst