Evision.CUDA.FastFeatureDetector (Evision v0.2.9)

Summary

Types

t()

Type that represents a CUDA.FastFeatureDetector struct.

Functions

compute

Computes the descriptors for a set of keypoints detected in an image (or image set).

computeAsync

Computes the descriptors for a set of keypoints detected in an image.

convert

Converts keypoints array from internal representation to standard vector.

create

defaultNorm

descriptorSize

descriptorType

detect

Detects keypoints in an image (or image set).

detectAndCompute

Detects keypoints and computes the descriptors.

detectAndComputeAsync

Detects keypoints and computes the descriptors.

detectAsync

Detects keypoints in an image.

empty

getDefaultName

getMaxNumPoints

read

setMaxNumPoints

setThreshold

write
Types

@type t() :: %Evision.CUDA.FastFeatureDetector{ref: reference()}

Type that represents a CUDA.FastFeatureDetector struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

@spec compute(Keyword.t()) :: any() | {:error, String.t()}

compute(self, images, keypoints)

@spec compute(t(), [Evision.Mat.maybe_mat_in()], [[Evision.KeyPoint.t()]]) ::
  {[[Evision.KeyPoint.t()]], [Evision.Mat.t()]} | {:error, String.t()}
@spec compute(t(), Evision.Mat.maybe_mat_in(), [Evision.KeyPoint.t()]) ::
  {[Evision.KeyPoint.t()], Evision.Mat.t()} | {:error, String.t()}

Variant 1:

compute

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • images: [Evision.Mat].

    Image set.

Return
  • keypoints: [[Evision.KeyPoint]].

    Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added; for example, SIFT duplicates a keypoint with several dominant orientations (one for each orientation).

  • descriptors: [Evision.Mat].

    Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j of descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.

Has overloading in C++

Python prototype (for reference only):

compute(images, keypoints[, descriptors]) -> keypoints, descriptors

Variant 2:

Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant).

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.Mat.

    Image.

Return
  • keypoints: [Evision.KeyPoint].

    Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added; for example, SIFT duplicates a keypoint with several dominant orientations (one for each orientation).

  • descriptors: Evision.Mat.t().

    Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j of descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.

Python prototype (for reference only):

compute(image, keypoints[, descriptors]) -> keypoints, descriptors

compute(self, images, keypoints, opts)

@spec compute(
  t(),
  [Evision.Mat.maybe_mat_in()],
  [[Evision.KeyPoint.t()]],
  [{atom(), term()}, ...] | nil
) :: {[[Evision.KeyPoint.t()]], [Evision.Mat.t()]} | {:error, String.t()}
@spec compute(
  t(),
  Evision.Mat.maybe_mat_in(),
  [Evision.KeyPoint.t()],
  [{atom(), term()}, ...] | nil
) ::
  {[Evision.KeyPoint.t()], Evision.Mat.t()} | {:error, String.t()}

Variant 1:

compute

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • images: [Evision.Mat].

    Image set.

Return
  • keypoints: [[Evision.KeyPoint]].

    Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added; for example, SIFT duplicates a keypoint with several dominant orientations (one for each orientation).

  • descriptors: [Evision.Mat].

    Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j of descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.

Has overloading in C++

Python prototype (for reference only):

compute(images, keypoints[, descriptors]) -> keypoints, descriptors

Variant 2:

Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant).

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.Mat.

    Image.

Return
  • keypoints: [Evision.KeyPoint].

    Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added; for example, SIFT duplicates a keypoint with several dominant orientations (one for each orientation).

  • descriptors: Evision.Mat.t().

    Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j of descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.

Python prototype (for reference only):

compute(image, keypoints[, descriptors]) -> keypoints, descriptors

computeAsync(named_args)

@spec computeAsync(Keyword.t()) :: any() | {:error, String.t()}

computeAsync(self, image)

@spec computeAsync(t(), Evision.Mat.maybe_mat_in()) ::
  {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
@spec computeAsync(t(), Evision.CUDA.GpuMat.t()) ::
  {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

Variant 1:

Computes the descriptors for a set of keypoints detected in an image.

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.Mat.

    Image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    CUDA stream.

Return
  • keypoints: Evision.Mat.t().

    Input collection of keypoints.

  • descriptors: Evision.Mat.t().

    Computed descriptors. Row j is the descriptor for j-th keypoint.

Python prototype (for reference only):

computeAsync(image[, keypoints[, descriptors[, stream]]]) -> keypoints, descriptors

Variant 2:

Computes the descriptors for a set of keypoints detected in an image.

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.CUDA.GpuMat.t().

    Image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    CUDA stream.

Return
  • keypoints: Evision.CUDA.GpuMat.t().

    Input collection of keypoints.

  • descriptors: Evision.CUDA.GpuMat.t().

    Computed descriptors. Row j is the descriptor for j-th keypoint.

Python prototype (for reference only):

computeAsync(image[, keypoints[, descriptors[, stream]]]) -> keypoints, descriptors

computeAsync(self, image, opts)

@spec computeAsync(t(), Evision.Mat.maybe_mat_in(), [{:stream, term()}] | nil) ::
  {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
@spec computeAsync(t(), Evision.CUDA.GpuMat.t(), [{:stream, term()}] | nil) ::
  {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

Variant 1:

Computes the descriptors for a set of keypoints detected in an image.

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.Mat.

    Image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    CUDA stream.

Return
  • keypoints: Evision.Mat.t().

    Input collection of keypoints.

  • descriptors: Evision.Mat.t().

    Computed descriptors. Row j is the descriptor for j-th keypoint.

Python prototype (for reference only):

computeAsync(image[, keypoints[, descriptors[, stream]]]) -> keypoints, descriptors

Variant 2:

Computes the descriptors for a set of keypoints detected in an image.

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.CUDA.GpuMat.t().

    Image.

Keyword Arguments
  • stream: Evision.CUDA.Stream.t().

    CUDA stream.

Return
  • keypoints: Evision.CUDA.GpuMat.t().

    Input collection of keypoints.

  • descriptors: Evision.CUDA.GpuMat.t().

    Computed descriptors. Row j is the descriptor for j-th keypoint.

Python prototype (for reference only):

computeAsync(image[, keypoints[, descriptors[, stream]]]) -> keypoints, descriptors
@spec convert(Keyword.t()) :: any() | {:error, String.t()}

convert(self, gpu_keypoints)

@spec convert(t(), Evision.Mat.maybe_mat_in()) ::
  [Evision.KeyPoint.t()] | {:error, String.t()}
@spec convert(t(), Evision.CUDA.GpuMat.t()) ::
  [Evision.KeyPoint.t()] | {:error, String.t()}

Variant 1:

convert

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • gpu_keypoints: Evision.Mat
Return
  • keypoints: [Evision.KeyPoint]

Converts keypoints array from internal representation to standard vector.

Python prototype (for reference only):

convert(gpu_keypoints) -> keypoints

Variant 2:

convert

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • gpu_keypoints: Evision.CUDA.GpuMat.t()
Return
  • keypoints: [Evision.KeyPoint]

Converts keypoints array from internal representation to standard vector.

Python prototype (for reference only):

convert(gpu_keypoints) -> keypoints
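For illustration, a minimal hedged sketch of how convert/2 is typically combined with detectAsync/2 (documented further down): detectAsync returns the keypoints in the detector's internal GpuMat layout, and convert turns them into a plain list of Evision.KeyPoint structs. The detector and gpu_image variables are assumed to exist already (see the create and detectAsync entries).

# gpu_image: an Evision.CUDA.GpuMat holding a grayscale image (assumed to exist)
gpu_keypoints = Evision.CUDA.FastFeatureDetector.detectAsync(detector, gpu_image)
# convert the internal GPU representation into a standard keypoint list
keypoints = Evision.CUDA.FastFeatureDetector.convert(detector, gpu_keypoints)
# keypoints is now a list of %Evision.KeyPoint{} structs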
@spec create() :: t() | {:error, String.t()}

create

Keyword Arguments
  • threshold: integer().
  • nonmaxSuppression: bool.
  • type: integer().
  • max_npoints: integer().
Return
  • retval: Evision.CUDA.FastFeatureDetector.t()

Python prototype (for reference only):

create([, threshold[, nonmaxSuppression[, type[, max_npoints]]]]) -> retval
@spec create(Keyword.t()) :: any() | {:error, String.t()}
@spec create(
  [
    max_npoints: term(),
    nonmaxSuppression: term(),
    threshold: term(),
    type: term()
  ]
  | nil
) ::
  t() | {:error, String.t()}

create

Keyword Arguments
  • threshold: integer().
  • nonmaxSuppression: bool.
  • type: integer().
  • max_npoints: integer().
Return
  • retval: Evision.CUDA.FastFeatureDetector.t()

Python prototype (for reference only):

create([, threshold[, nonmaxSuppression[, type[, max_npoints]]]]) -> retval
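As a hedged illustration, a detector can be created with the keyword options listed above; the option values below are arbitrary examples, not documented defaults.

# create a CUDA FAST detector with explicit options (values are examples only)
detector =
  Evision.CUDA.FastFeatureDetector.create(
    threshold: 20,
    nonmaxSuppression: true,
    max_npoints: 5000
  )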
@spec defaultNorm(Keyword.t()) :: any() | {:error, String.t()}
@spec defaultNorm(t()) :: integer() | {:error, String.t()}

defaultNorm

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
Return
  • retval: integer()

Python prototype (for reference only):

defaultNorm() -> retval

descriptorSize(named_args)

@spec descriptorSize(Keyword.t()) :: any() | {:error, String.t()}
@spec descriptorSize(t()) :: integer() | {:error, String.t()}

descriptorSize

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
Return
  • retval: integer()

Python prototype (for reference only):

descriptorSize() -> retval

descriptorType(named_args)

@spec descriptorType(Keyword.t()) :: any() | {:error, String.t()}
@spec descriptorType(t()) :: integer() | {:error, String.t()}

descriptorType

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
Return
  • retval: integer()

Python prototype (for reference only):

descriptorType() -> retval
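A small sketch querying the introspection getters above on a freshly created detector; each takes only the detector struct and returns an integer.

detector = Evision.CUDA.FastFeatureDetector.create()

# descriptor-related metadata reported by the detector
Evision.CUDA.FastFeatureDetector.descriptorSize(detector)
Evision.CUDA.FastFeatureDetector.descriptorType(detector)
Evision.CUDA.FastFeatureDetector.defaultNorm(detector)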
@spec detect(Keyword.t()) :: any() | {:error, String.t()}
@spec detect(t(), [Evision.Mat.maybe_mat_in()]) ::
  [[Evision.KeyPoint.t()]] | {:error, String.t()}
@spec detect(t(), Evision.Mat.maybe_mat_in()) ::
  [Evision.KeyPoint.t()] | {:error, String.t()}

Variant 1:

detect

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • images: [Evision.Mat].

    Image set.

Keyword Arguments
  • masks: [Evision.Mat].

    Masks for each input image specifying where to look for keypoints (optional). masks[i] is a mask for images[i].

Return
  • keypoints: [[Evision.KeyPoint]].

    The detected keypoints. In the second variant of the method, keypoints[i] is a set of keypoints detected in images[i].

Has overloading in C++

Python prototype (for reference only):

detect(images[, masks]) -> keypoints

Variant 2:

Detects keypoints in an image (first variant) or image set (second variant).

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.Mat.

    Image.

Keyword Arguments
  • mask: Evision.Mat.

    Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.

Return
  • keypoints: [Evision.KeyPoint].

    The detected keypoints. In the second variant of the method, keypoints[i] is a set of keypoints detected in images[i].

Python prototype (for reference only):

detect(image[, mask]) -> keypoints
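A minimal sketch of the single-image path of detect, assuming the usual Evision.imread/1 and Evision.cvtColor/2 helpers and the Evision.Constant.cv_COLOR_BGR2GRAY/0 constant; FAST operates on a single-channel image, so the input is converted to grayscale first. The file path is a placeholder.

img  = Evision.imread("image.png")
gray = Evision.cvtColor(img, Evision.Constant.cv_COLOR_BGR2GRAY())

detector  = Evision.CUDA.FastFeatureDetector.create(threshold: 20)
keypoints = Evision.CUDA.FastFeatureDetector.detect(detector, gray)
# keypoints is a list of %Evision.KeyPoint{} structs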

detect(self, images, opts)

@spec detect(t(), [Evision.Mat.maybe_mat_in()], [{:masks, term()}] | nil) ::
  [[Evision.KeyPoint.t()]] | {:error, String.t()}
@spec detect(t(), Evision.Mat.maybe_mat_in(), [{:mask, term()}] | nil) ::
  [Evision.KeyPoint.t()] | {:error, String.t()}

Variant 1:

detect

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • images: [Evision.Mat].

    Image set.

Keyword Arguments
  • masks: [Evision.Mat].

    Masks for each input image specifying where to look for keypoints (optional). masks[i] is a mask for images[i].

Return
  • keypoints: [[Evision.KeyPoint]].

    The detected keypoints. In the second variant of the method, keypoints[i] is a set of keypoints detected in images[i].

Has overloading in C++

Python prototype (for reference only):

detect(images[, masks]) -> keypoints

Variant 2:

Detects keypoints in an image (first variant) or image set (second variant).

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.Mat.

    Image.

Keyword Arguments
  • mask: Evision.Mat.

    Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.

Return
  • keypoints: [Evision.KeyPoint].

    The detected keypoints. In the second variant of the method, keypoints[i] is a set of keypoints detected in images[i].

Python prototype (for reference only):

detect(image[, mask]) -> keypoints

detectAndCompute(named_args)

@spec detectAndCompute(Keyword.t()) :: any() | {:error, String.t()}

detectAndCompute(self, image, mask)

@spec detectAndCompute(t(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  {[Evision.KeyPoint.t()], Evision.Mat.t()} | {:error, String.t()}

detectAndCompute

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • image: Evision.Mat.t()
  • mask: Evision.Mat.t()
Keyword Arguments
  • useProvidedKeypoints: bool.
Return
  • keypoints: [Evision.KeyPoint]
  • descriptors: Evision.Mat.t().

Detects keypoints and computes the descriptors.

Python prototype (for reference only):

detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors

detectAndCompute(self, image, mask, opts)

@spec detectAndCompute(
  t(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:useProvidedKeypoints, term()}] | nil
) :: {[Evision.KeyPoint.t()], Evision.Mat.t()} | {:error, String.t()}

detectAndCompute

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • image: Evision.Mat.t()
  • mask: Evision.Mat.t()
Keyword Arguments
  • useProvidedKeypoints: bool.
Return
  • keypoints: [Evision.KeyPoint]
  • descriptors: Evision.Mat.t().

Detects keypoints and computes the descriptors.

Python prototype (for reference only):

detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors

detectAndComputeAsync(named_args)

@spec detectAndComputeAsync(Keyword.t()) :: any() | {:error, String.t()}

detectAndComputeAsync(self, image, mask)

@spec detectAndComputeAsync(
  t(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in()
) ::
  {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
@spec detectAndComputeAsync(t(), Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()) ::
  {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

Variant 1:

detectAndComputeAsync

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • image: Evision.Mat.t()
  • mask: Evision.Mat.t()
Keyword Arguments
  • useProvidedKeypoints: bool.
  • stream: Evision.CUDA.Stream.t().
Return
  • keypoints: Evision.Mat.t().
  • descriptors: Evision.Mat.t().

Detects keypoints and computes the descriptors.

Python prototype (for reference only):

detectAndComputeAsync(image, mask[, keypoints[, descriptors[, useProvidedKeypoints[, stream]]]]) -> keypoints, descriptors

Variant 2:

detectAndComputeAsync

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • image: Evision.CUDA.GpuMat.t()
  • mask: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • useProvidedKeypoints: bool.
  • stream: Evision.CUDA.Stream.t().
Return
  • keypoints: Evision.CUDA.GpuMat.t().
  • descriptors: Evision.CUDA.GpuMat.t().

Detects keypoints and computes the descriptors.

Python prototype (for reference only):

detectAndComputeAsync(image, mask[, keypoints[, descriptors[, useProvidedKeypoints[, stream]]]]) -> keypoints, descriptors

detectAndComputeAsync(self, image, mask, opts)

@spec detectAndComputeAsync(
  t(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [stream: term(), useProvidedKeypoints: term()] | nil
) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
@spec detectAndComputeAsync(
  t(),
  Evision.CUDA.GpuMat.t(),
  Evision.CUDA.GpuMat.t(),
  [stream: term(), useProvidedKeypoints: term()] | nil
) :: {Evision.CUDA.GpuMat.t(), Evision.CUDA.GpuMat.t()} | {:error, String.t()}

Variant 1:

detectAndComputeAsync

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • image: Evision.Mat.t()
  • mask: Evision.Mat.t()
Keyword Arguments
  • useProvidedKeypoints: bool.
  • stream: Evision.CUDA.Stream.t().
Return
  • keypoints: Evision.Mat.t().
  • descriptors: Evision.Mat.t().

Detects keypoints and computes the descriptors.

Python prototype (for reference only):

detectAndComputeAsync(image, mask[, keypoints[, descriptors[, useProvidedKeypoints[, stream]]]]) -> keypoints, descriptors

Variant 2:

detectAndComputeAsync

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • image: Evision.CUDA.GpuMat.t()
  • mask: Evision.CUDA.GpuMat.t()
Keyword Arguments
  • useProvidedKeypoints: bool.
  • stream: Evision.CUDA.Stream.t().
Return
  • keypoints: Evision.CUDA.GpuMat.t().
  • descriptors: Evision.CUDA.GpuMat.t().

Detects keypoints and computes the descriptors.

Python prototype (for reference only):

detectAndComputeAsync(image, mask[, keypoints[, descriptors[, useProvidedKeypoints[, stream]]]]) -> keypoints, descriptors
@spec detectAsync(Keyword.t()) :: any() | {:error, String.t()}

detectAsync(self, image)

@spec detectAsync(t(), Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}
@spec detectAsync(t(), Evision.CUDA.GpuMat.t()) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Detects keypoints in an image.

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.Mat.

    Image.

Keyword Arguments
  • mask: Evision.Mat.

    Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.

  • stream: Evision.CUDA.Stream.t().

    CUDA stream.

Return
  • keypoints: Evision.Mat.t().

    The detected keypoints.

Python prototype (for reference only):

detectAsync(image[, keypoints[, mask[, stream]]]) -> keypoints

Variant 2:

Detects keypoints in an image.

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.CUDA.GpuMat.t().

    Image.

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.

  • stream: Evision.CUDA.Stream.t().

    CUDA stream.

Return
  • keypoints: Evision.CUDA.GpuMat.t().

    The detected keypoints.

Python prototype (for reference only):

detectAsync(image[, keypoints[, mask[, stream]]]) -> keypoints
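A hedged sketch of the GPU variant: the grayscale image is uploaded to a GpuMat, detectAsync returns the keypoints in the detector's internal GpuMat layout, and convert/2 (documented above) turns them into Evision.KeyPoint structs. Evision.CUDA.GpuMat.gpuMat/1 is assumed here to upload a CPU Evision.Mat to the GPU, and the file path is a placeholder.

gray =
  Evision.imread("image.png")
  |> Evision.cvtColor(Evision.Constant.cv_COLOR_BGR2GRAY())

# upload to the GPU (gpuMat/1 accepting a CPU Mat is an assumption)
gpu_gray = Evision.CUDA.GpuMat.gpuMat(gray)

detector      = Evision.CUDA.FastFeatureDetector.create(threshold: 20)
gpu_keypoints = Evision.CUDA.FastFeatureDetector.detectAsync(detector, gpu_gray)
keypoints     = Evision.CUDA.FastFeatureDetector.convert(detector, gpu_keypoints)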

detectAsync(self, image, opts)

@spec detectAsync(
  t(),
  Evision.Mat.maybe_mat_in(),
  [mask: term(), stream: term()] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}
@spec detectAsync(t(), Evision.CUDA.GpuMat.t(), [mask: term(), stream: term()] | nil) ::
  Evision.CUDA.GpuMat.t() | {:error, String.t()}

Variant 1:

Detects keypoints in an image.

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.Mat.

    Image.

Keyword Arguments
  • mask: Evision.Mat.

    Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.

  • stream: Evision.CUDA.Stream.t().

    CUDA stream.

Return
  • keypoints: Evision.Mat.t().

    The detected keypoints.

Python prototype (for reference only):

detectAsync(image[, keypoints[, mask[, stream]]]) -> keypoints

Variant 2:

Detects keypoints in an image.

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()

  • image: Evision.CUDA.GpuMat.t().

    Image.

Keyword Arguments
  • mask: Evision.CUDA.GpuMat.t().

    Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.

  • stream: Evision.CUDA.Stream.t().

    CUDA stream.

Return
  • keypoints: Evision.CUDA.GpuMat.t().

    The detected keypoints.

Python prototype (for reference only):

detectAsync(image[, keypoints[, mask[, stream]]]) -> keypoints
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}

empty

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
Return
  • retval: bool

Python prototype (for reference only):

empty() -> retval

getDefaultName(named_args)

@spec getDefaultName(Keyword.t()) :: any() | {:error, String.t()}
@spec getDefaultName(t()) :: binary() | {:error, String.t()}

getDefaultName

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
Return
  • retval: String

Python prototype (for reference only):

getDefaultName() -> retval

getMaxNumPoints(named_args)

@spec getMaxNumPoints(Keyword.t()) :: any() | {:error, String.t()}
@spec getMaxNumPoints(t()) :: integer() | {:error, String.t()}

getMaxNumPoints

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
Return
  • retval: integer()

Python prototype (for reference only):

getMaxNumPoints() -> retval
@spec read(Keyword.t()) :: any() | {:error, String.t()}
@spec read(t(), Evision.FileNode.t()) :: t() | {:error, String.t()}
@spec read(t(), binary()) :: t() | {:error, String.t()}

Variant 1:

read

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • arg1: Evision.FileNode.t()

Python prototype (for reference only):

read(arg1) -> None

Variant 2:

read

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • fileName: String

Python prototype (for reference only):

read(fileName) -> None

setMaxNumPoints(named_args)

@spec setMaxNumPoints(Keyword.t()) :: any() | {:error, String.t()}

setMaxNumPoints(self, max_npoints)

@spec setMaxNumPoints(t(), integer()) :: t() | {:error, String.t()}

setMaxNumPoints

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • max_npoints: integer()

Python prototype (for reference only):

setMaxNumPoints(max_npoints) -> None

setThreshold(named_args)

@spec setThreshold(Keyword.t()) :: any() | {:error, String.t()}

setThreshold(self, threshold)

@spec setThreshold(t(), integer()) :: t() | {:error, String.t()}

setThreshold

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • threshold: integer()

Python prototype (for reference only):

setThreshold(threshold) -> None
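Per the @spec entries above, the setters return the detector struct itself, so the calls can be chained with the pipe operator; a short sketch with arbitrary values follows.

detector =
  Evision.CUDA.FastFeatureDetector.create()
  |> Evision.CUDA.FastFeatureDetector.setThreshold(30)
  |> Evision.CUDA.FastFeatureDetector.setMaxNumPoints(10_000)

# expected to reflect the value set just above
Evision.CUDA.FastFeatureDetector.getMaxNumPoints(detector)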
@spec write(Keyword.t()) :: any() | {:error, String.t()}
@spec write(t(), binary()) :: t() | {:error, String.t()}

write

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • fileName: String

Python prototype (for reference only):

write(fileName) -> None
@spec write(t(), Evision.FileStorage.t(), binary()) :: t() | {:error, String.t()}

write

Positional Arguments
  • self: Evision.CUDA.FastFeatureDetector.t()
  • fs: Evision.FileStorage.t()
  • name: String

Python prototype (for reference only):

write(fs, name) -> None
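For completeness, a hedged sketch of persisting detector parameters with write/2 and loading them back with read/2 as specced above. The file name is a placeholder, and the on-disk format is typically OpenCV's XML/YAML FileStorage format.

detector = Evision.CUDA.FastFeatureDetector.create(threshold: 25)
Evision.CUDA.FastFeatureDetector.write(detector, "fast_params.yml")

restored =
  Evision.CUDA.FastFeatureDetector.create()
  |> Evision.CUDA.FastFeatureDetector.read("fast_params.yml")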