Evision.Segmentation.IntelligentScissorsMB (Evision v0.2.9)

Summary

Types

t()

Type that represents a Segmentation.IntelligentScissorsMB struct.

Functions

applyImage(self, image)

  Specify input image and extract image features

applyImageFeatures(self, non_edge, gradient_direction, gradient_magnitude)

  Specify custom features of input image

applyImageFeatures(self, non_edge, gradient_direction, gradient_magnitude, opts)

  Specify custom features of input image

buildMap(self, sourcePt)

  Prepares a map of optimal paths for the given source point on the image

getContour(self, targetPt)

  Extracts optimal contour for the given target point on the image

getContour(self, targetPt, opts)

  Extracts optimal contour for the given target point on the image

intelligentScissorsMB()

  IntelligentScissorsMB

setEdgeFeatureCannyParameters(self, threshold1, threshold2)

  Switch edge feature extractor to use Canny edge detector

setEdgeFeatureCannyParameters(self, threshold1, threshold2, opts)

  Switch edge feature extractor to use Canny edge detector

setEdgeFeatureZeroCrossingParameters(self)

  Switch to "Laplacian Zero-Crossing" edge feature extractor and specify its parameters

setEdgeFeatureZeroCrossingParameters(self, opts)

  Switch to "Laplacian Zero-Crossing" edge feature extractor and specify its parameters

setGradientMagnitudeMaxLimit(self)

  Specify gradient magnitude max value threshold

setGradientMagnitudeMaxLimit(self, opts)

  Specify gradient magnitude max value threshold

setWeights(self, weight_non_edge, weight_gradient_direction, weight_gradient_magnitude)

  Specify weights of feature functions

Types

@type t() :: %Evision.Segmentation.IntelligentScissorsMB{ref: reference()}

Type that represents a Segmentation.IntelligentScissorsMB struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

@spec applyImage(Keyword.t()) :: any() | {:error, String.t()}
@spec applyImage(t(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}

Specify input image and extract image features

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()

  • image: Evision.Mat.

    Input image. Type is CV_8UC1 / CV_8UC3

Return
  • retval: Evision.Segmentation.IntelligentScissorsMB.t()

Python prototype (for reference only):

applyImage(image) -> retval
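
A minimal usage sketch (the file name below is a placeholder; any CV_8UC1 / CV_8UC3 image loaded with Evision.imread/1 works):

alias Evision.Segmentation.IntelligentScissorsMB

# load a source image as an Evision.Mat
image = Evision.imread("input.png")

# create the tool and extract image features using the current parameters
tool =
  IntelligentScissorsMB.intelligentScissorsMB()
  |> IntelligentScissorsMB.applyImage(image)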

applyImageFeatures(named_args)

@spec applyImageFeatures(Keyword.t()) :: any() | {:error, String.t()}

applyImageFeatures(self, non_edge, gradient_direction, gradient_magnitude)

@spec applyImageFeatures(
  t(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in()
) :: t() | {:error, String.t()}

Specify custom features of input image

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()

  • non_edge: Evision.Mat.

    Specify cost of non-edge pixels. Type is CV_8UC1. Expected values are {0, 1}.

  • gradient_direction: Evision.Mat.

    Specify gradient direction feature. Type is CV_32FC2. Values are expected to be normalized: x^2 + y^2 == 1

  • gradient_magnitude: Evision.Mat.

    Specify cost of gradient magnitude function. Type is CV_32FC1. Values should be in range [0, 1].

Keyword Arguments
  • image: Evision.Mat.

    Optional parameter. Must be specified if only a subset of features is provided (non-specified features are calculated internally)

Return
  • retval: Evision.Segmentation.IntelligentScissorsMB.t()

Customized advanced variant of applyImage() call.

Python prototype (for reference only):

applyImageFeatures(non_edge, gradient_direction, gradient_magnitude[, image]) -> retval

applyImageFeatures(self, non_edge, gradient_direction, gradient_magnitude, opts)

@spec applyImageFeatures(
  t(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:image, term()}] | nil
) :: t() | {:error, String.t()}

Specify custom features of input image

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()

  • non_edge: Evision.Mat.

    Specify cost of non-edge pixels. Type is CV_8UC1. Expected values are {0, 1}.

  • gradient_direction: Evision.Mat.

    Specify gradient direction feature. Type is CV_32FC2. Values are expected to be normalized: x^2 + y^2 == 1

  • gradient_magnitude: Evision.Mat.

    Specify cost of gradient magnitude function. Type is CV_32FC1. Values should be in range [0, 1].

Keyword Arguments
  • image: Evision.Mat.

    Optional parameter. Must be specified if only a subset of features is provided (non-specified features are calculated internally)

Return
  • retval: Evision.Segmentation.IntelligentScissorsMB.t()

Customized advanced variant of applyImage() call.

Python prototype (for reference only):

applyImageFeatures(non_edge, gradient_direction, gradient_magnitude[, image]) -> retval
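
A hedged sketch, assuming the three feature maps were computed elsewhere and match the types listed above (non_edge: CV_8UC1 with values in {0, 1}; gradient_direction: CV_32FC2 unit vectors; gradient_magnitude: CV_32FC1 in [0, 1]):

# non_edge, gradient_direction, gradient_magnitude and image are assumed to be
# precomputed Evision.Mat values, and tool was created with intelligentScissorsMB();
# the names are illustrative.
tool =
  IntelligentScissorsMB.applyImageFeatures(
    tool,
    non_edge,
    gradient_direction,
    gradient_magnitude,
    image: image
  )
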
@spec buildMap(Keyword.t()) :: any() | {:error, String.t()}

buildMap(self, sourcePt)

@spec buildMap(
  t(),
  {number(), number()}
) :: t() | {:error, String.t()}

Prepares a map of optimal paths for the given source point on the image

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()

  • sourcePt: Point.

    The source point used to find the paths

Note: applyImage() / applyImageFeatures() must be called before this call

Python prototype (for reference only):

buildMap(sourcePt) -> None
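
For example, assuming applyImage/2 (or applyImageFeatures/4) has already been called on tool, and {120, 80} is an arbitrary seed point:

tool = IntelligentScissorsMB.buildMap(tool, {120, 80})
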
@spec getContour(Keyword.t()) :: any() | {:error, String.t()}

getContour(self, targetPt)

@spec getContour(
  t(),
  {number(), number()}
) :: Evision.Mat.t() | {:error, String.t()}

Extracts optimal contour for the given target point on the image

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()

  • targetPt: Point.

    The target point

Keyword Arguments
  • backward: bool.

    Flag to indicate reverse order of retrieved pixels (use "true" to fetch points from the target to the source point)

Return
  • contour: Evision.Mat.t().

    The list of pixels containing the optimal path between the source and target points on the image. Type is CV_32SC2 (compatible with std::vector<Point>)

Note: buildMap() must be called before this call

Python prototype (for reference only):

getContour(targetPt[, contour[, backward]]) -> contour

getContour(self, targetPt, opts)

@spec getContour(t(), {number(), number()}, [{:backward, term()}] | nil) ::
  Evision.Mat.t() | {:error, String.t()}

Extracts optimal contour for the given target point on the image

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()

  • targetPt: Point.

    The target point

Keyword Arguments
  • backward: bool.

    Flag to indicate reverse order of retrieved pixels (use "true" to fetch points from the target to the source point)

Return
  • contour: Evision.Mat.t().

    The list of pixels containing the optimal path between the source and target points on the image. Type is CV_32SC2 (compatible with std::vector<Point>)

Note: buildMap() must be called before this call

Python prototype (for reference only):

getContour(targetPt[, contour[, backward]]) -> contour
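
A short sketch, assuming buildMap/2 has already been called on tool; the target point is arbitrary:

# contour is an Evision.Mat of type CV_32SC2 (one row per path point)
contour = IntelligentScissorsMB.getContour(tool, {300, 200})

# fetch the points from the target back to the source instead
reversed = IntelligentScissorsMB.getContour(tool, {300, 200}, backward: true)
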
@spec intelligentScissorsMB() :: t() | {:error, String.t()}

IntelligentScissorsMB

Return
  • self: Evision.Segmentation.IntelligentScissorsMB.t()

Python prototype (for reference only):

IntelligentScissorsMB() -> <segmentation_IntelligentScissorsMB object>

intelligentScissorsMB(named_args)

@spec intelligentScissorsMB(Keyword.t()) :: any() | {:error, String.t()}

setEdgeFeatureCannyParameters(named_args)

@spec setEdgeFeatureCannyParameters(Keyword.t()) :: any() | {:error, String.t()}

setEdgeFeatureCannyParameters(self, threshold1, threshold2)

@spec setEdgeFeatureCannyParameters(t(), number(), number()) ::
  t() | {:error, String.t()}

Switch edge feature extractor to use Canny edge detector

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()
  • threshold1: double
  • threshold2: double
Keyword Arguments
  • apertureSize: integer().
  • l2gradient: bool.
Return
  • retval: Evision.Segmentation.IntelligentScissorsMB.t()

Note: "Laplacian Zero-Crossing" feature extractor is used by default (following to original article) @sa Canny

Python prototype (for reference only):

setEdgeFeatureCannyParameters(threshold1, threshold2[, apertureSize[, L2gradient]]) -> retval

setEdgeFeatureCannyParameters(self, threshold1, threshold2, opts)

@spec setEdgeFeatureCannyParameters(
  t(),
  number(),
  number(),
  [apertureSize: term(), l2gradient: term()] | nil
) :: t() | {:error, String.t()}

Switch edge feature extractor to use Canny edge detector

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()
  • threshold1: double
  • threshold2: double
Keyword Arguments
  • apertureSize: integer().
  • l2gradient: bool.
Return
  • retval: Evision.Segmentation.IntelligentScissorsMB.t()

Note: "Laplacian Zero-Crossing" feature extractor is used by default (following to original article) @sa Canny

Python prototype (for reference only):

setEdgeFeatureCannyParameters(threshold1, threshold2[, apertureSize[, L2gradient]]) -> retval
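
For example (the thresholds and keyword values are illustrative, not recommended defaults; extractor parameters are typically set before applyImage/2 so they take effect during feature extraction):

# image is an Evision.Mat loaded earlier (see applyImage/2 above)
tool =
  IntelligentScissorsMB.intelligentScissorsMB()
  |> IntelligentScissorsMB.setEdgeFeatureCannyParameters(32, 100, apertureSize: 3, l2gradient: true)
  |> IntelligentScissorsMB.applyImage(image)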

setEdgeFeatureZeroCrossingParameters(named_args)

@spec setEdgeFeatureZeroCrossingParameters(Keyword.t()) ::
  any() | {:error, String.t()}
@spec setEdgeFeatureZeroCrossingParameters(t()) :: t() | {:error, String.t()}

Switch to "Laplacian Zero-Crossing" edge feature extractor and specify its parameters

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()
Keyword Arguments
  • gradient_magnitude_min_value: float.

    Minimal gradient magnitude value for edge pixels (default: 0, check is disabled)

Return
  • retval: Evision.Segmentation.IntelligentScissorsMB.t()

This feature extractor is used by default (following the original article). The implementation has additional filtering for regions with low-amplitude noise; this filtering is enabled through the minimal gradient magnitude parameter (use a small value such as 4, 8, or 16). Note: the current implementation of this feature extractor works on grayscale images (a color image is converted to grayscale first). Note: the Canny edge detector is a bit slower, but provides better results (especially on color images): use setEdgeFeatureCannyParameters().

Python prototype (for reference only):

setEdgeFeatureZeroCrossingParameters([, gradient_magnitude_min_value]) -> retval

setEdgeFeatureZeroCrossingParameters(self, opts)

@spec setEdgeFeatureZeroCrossingParameters(
  t(),
  [{:gradient_magnitude_min_value, term()}] | nil
) ::
  t() | {:error, String.t()}

Switch to "Laplacian Zero-Crossing" edge feature extractor and specify its parameters

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()
Keyword Arguments
  • gradient_magnitude_min_value: float.

    Minimal gradient magnitude value for edge pixels (default: 0, check is disabled)

Return
  • retval: Evision.Segmentation.IntelligentScissorsMB.t()

This feature extractor is used by default (following the original article). The implementation has additional filtering for regions with low-amplitude noise; this filtering is enabled through the minimal gradient magnitude parameter (use a small value such as 4, 8, or 16). Note: the current implementation of this feature extractor works on grayscale images (a color image is converted to grayscale first). Note: the Canny edge detector is a bit slower, but provides better results (especially on color images): use setEdgeFeatureCannyParameters().

Python prototype (for reference only):

setEdgeFeatureZeroCrossingParameters([, gradient_magnitude_min_value]) -> retval
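
For example, enabling the low-amplitude noise filtering (8.0 is one of the small values suggested above):

tool = IntelligentScissorsMB.setEdgeFeatureZeroCrossingParameters(tool, gradient_magnitude_min_value: 8.0)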

setGradientMagnitudeMaxLimit(named_args)

@spec setGradientMagnitudeMaxLimit(Keyword.t()) :: any() | {:error, String.t()}
@spec setGradientMagnitudeMaxLimit(t()) :: t() | {:error, String.t()}

Specify gradient magnitude max value threshold

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()
Keyword Arguments
  • gradient_magnitude_threshold_max: float.

    Specify gradient magnitude max value threshold (default: 0, disabled)

Return
  • retval: Evision.Segmentation.IntelligentScissorsMB.t()

A zero limit value disables gradient magnitude thresholding (the default behavior, as described in the original article). Otherwise, pixels with gradient magnitude >= threshold have zero cost. Note: thresholding should be used for images with irregular regions (to avoid getting stuck on high-contrast areas, like embedded logos).

Python prototype (for reference only):

setGradientMagnitudeMaxLimit([, gradient_magnitude_threshold_max]) -> retval

setGradientMagnitudeMaxLimit(self, opts)

@spec setGradientMagnitudeMaxLimit(
  t(),
  [{:gradient_magnitude_threshold_max, term()}] | nil
) ::
  t() | {:error, String.t()}

Specify gradient magnitude max value threshold

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()
Keyword Arguments
  • gradient_magnitude_threshold_max: float.

    Specify gradient magnitude max value threshold (default: 0, disabled)

Return
  • retval: Evision.Segmentation.IntelligentScissorsMB.t()

A zero limit value disables gradient magnitude thresholding (the default behavior, as described in the original article). Otherwise, pixels with gradient magnitude >= threshold have zero cost. Note: thresholding should be used for images with irregular regions (to avoid getting stuck on high-contrast areas, like embedded logos).

Python prototype (for reference only):

setGradientMagnitudeMaxLimit([, gradient_magnitude_threshold_max]) -> retval
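
For example (the threshold value is illustrative; 0 keeps the default disabled behavior):

tool = IntelligentScissorsMB.setGradientMagnitudeMaxLimit(tool, gradient_magnitude_threshold_max: 200.0)
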
@spec setWeights(Keyword.t()) :: any() | {:error, String.t()}

setWeights(self, weight_non_edge, weight_gradient_direction, weight_gradient_magnitude)

@spec setWeights(t(), number(), number(), number()) :: t() | {:error, String.t()}

Specify weights of feature functions

Positional Arguments
  • self: Evision.Segmentation.IntelligentScissorsMB.t()

  • weight_non_edge: float.

    Specify cost of non-edge pixels (default: 0.43f)

  • weight_gradient_direction: float.

    Specify cost of gradient direction function (default: 0.43f)

  • weight_gradient_magnitude: float.

    Specify cost of gradient magnitude function (default: 0.14f)

Return
  • retval: Evision.Segmentation.IntelligentScissorsMB.t()

Consider keeping the weights normalized (sum of weights equals 1.0). The goal of the discrete dynamic programming (DP) is the minimization of costs between pixels.

Python prototype (for reference only):

setWeights(weight_non_edge, weight_gradient_direction, weight_gradient_magnitude) -> retval
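
For example, setting the documented default weights explicitly (they sum to 1.0):

tool = IntelligentScissorsMB.setWeights(tool, 0.43, 0.43, 0.14)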