Evision.SIFT (Evision v0.2.9)
Summary
Functions
compute
compute
create
create
Create SIFT with specified descriptorType.
Create SIFT with specified descriptorType.
defaultNorm
descriptorSize
descriptorType
detect
detect
detectAndCompute
detectAndCompute
empty
getContrastThreshold
getDefaultName
getEdgeThreshold
getNFeatures
getNOctaveLayers
getSigma
read
setContrastThreshold
setEdgeThreshold
setNFeatures
setNOctaveLayers
setSigma
write
write
Types
@type t() :: %Evision.SIFT{ref: reference()}
Type that represents an Evision.SIFT struct.
ref: reference(). The underlying Erlang resource variable.
Functions
@spec compute(t(), [Evision.Mat.maybe_mat_in()], [[Evision.KeyPoint.t()]]) :: {[[Evision.KeyPoint.t()]], [Evision.Mat.t()]} | {:error, String.t()}
@spec compute(t(), Evision.Mat.maybe_mat_in(), [Evision.KeyPoint.t()]) :: {[Evision.KeyPoint.t()], Evision.Mat.t()} | {:error, String.t()}
Variant 1:
compute
Positional Arguments
self:
Evision.SIFT.t()
images:
[Evision.Mat]
.Image set.
Return
keypoints:
[[Evision.KeyPoint]]
.Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: SIFT duplicates a keypoint that has several dominant orientations (one keypoint per orientation).
descriptors:
[Evision.Mat]
.Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j in descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.
Has overloading in C++
Python prototype (for reference only):
compute(images, keypoints[, descriptors]) -> keypoints, descriptors
Variant 2:
Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant).
Positional Arguments
self:
Evision.SIFT.t()
image:
Evision.Mat
.Image.
Return
keypoints:
[Evision.KeyPoint]
.Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: SIFT duplicates a keypoint that has several dominant orientations (one keypoint per orientation).
descriptors:
Evision.Mat.t()
.Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j in descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.
Python prototype (for reference only):
compute(image, keypoints[, descriptors]) -> keypoints, descriptors
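Example (for illustration only):
A minimal sketch of the single-image variant; the image path and loading via Evision.imread/1 are assumptions, not part of this function's contract.
# Load an image (illustrative path) and build a SIFT detector.
img = Evision.imread("image.png")
sift = Evision.SIFT.create()

# Detect keypoints first, then compute a descriptor for each of them.
keypoints = Evision.SIFT.detect(sift, img)
{keypoints, descriptors} = Evision.SIFT.compute(sift, img, keypoints)

# descriptors is an Evision.Mat with one 128-element row per keypoint.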
@spec compute( t(), [Evision.Mat.maybe_mat_in()], [[Evision.KeyPoint.t()]], [{atom(), term()}, ...] | nil ) :: {[[Evision.KeyPoint.t()]], [Evision.Mat.t()]} | {:error, String.t()}
@spec compute( t(), Evision.Mat.maybe_mat_in(), [Evision.KeyPoint.t()], [{atom(), term()}, ...] | nil ) :: {[Evision.KeyPoint.t()], Evision.Mat.t()} | {:error, String.t()}
Variant 1:
compute
Positional Arguments
self:
Evision.SIFT.t()
images:
[Evision.Mat]
.Image set.
Return
keypoints:
[[Evision.KeyPoint]]
.Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: SIFT duplicates a keypoint that has several dominant orientations (one keypoint per orientation).
descriptors:
[Evision.Mat]
.Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j in descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.
Has overloading in C++
Python prototype (for reference only):
compute(images, keypoints[, descriptors]) -> keypoints, descriptors
Variant 2:
Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant).
Positional Arguments
self:
Evision.SIFT.t()
image:
Evision.Mat
.Image.
Return
keypoints:
[Evision.KeyPoint]
.Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: SIFT duplicates a keypoint that has several dominant orientations (one keypoint per orientation).
descriptors:
Evision.Mat.t()
.Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j in descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.
Python prototype (for reference only):
compute(image, keypoints[, descriptors]) -> keypoints, descriptors
create
Keyword Arguments
nfeatures:
integer()
.The number of best features to retain. The features are ranked by their scores (measured in the SIFT algorithm as the local contrast).
nOctaveLayers:
integer()
.The number of layers in each octave. 3 is the value used in D. Lowe's paper. The number of octaves is computed automatically from the image resolution.
contrastThreshold:
double
.The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the fewer features are produced by the detector.
edgeThreshold:
double
.The threshold used to filter out edge-like features. Note that its meaning is different from that of contrastThreshold, i.e. the larger the edgeThreshold, the fewer features are filtered out (more features are retained).
sigma:
double
.The sigma of the Gaussian applied to the input image at octave #0. If your image is captured with a weak camera with soft lenses, you might want to reduce this value.
enable_precise_upscale:
bool
.Whether to enable precise upscaling in the scale pyramid, which maps index x to 2x. This prevents localization bias. The option is disabled by default.
Return
- retval:
Evision.SIFT.t()
Note: The contrast threshold will be divided by nOctaveLayers when the filtering is applied. When nOctaveLayers is set to its default and you want to use the value from D. Lowe's paper, 0.03, set this argument to 0.09.
Python prototype (for reference only):
create([, nfeatures[, nOctaveLayers[, contrastThreshold[, edgeThreshold[, sigma[, enable_precise_upscale]]]]]]) -> retval
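Example (for illustration only):
A minimal sketch using all defaults; on failure the call returns {:error, reason}.
# SIFT detector with default parameters (nfeatures: 0 keeps all features).
sift = Evision.SIFT.create()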
@spec create(Keyword.t()) :: any() | {:error, String.t()}
@spec create( [ contrastThreshold: term(), edgeThreshold: term(), enable_precise_upscale: term(), nOctaveLayers: term(), nfeatures: term(), sigma: term() ] | nil ) :: t() | {:error, String.t()}
create
Keyword Arguments
nfeatures:
integer()
.The number of best features to retain. The features are ranked by their scores (measured in the SIFT algorithm as the local contrast).
nOctaveLayers:
integer()
.The number of layers in each octave. 3 is the value used in D. Lowe's paper. The number of octaves is computed automatically from the image resolution.
contrastThreshold:
double
.The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the fewer features are produced by the detector.
edgeThreshold:
double
.The threshold used to filter out edge-like features. Note that its meaning is different from that of contrastThreshold, i.e. the larger the edgeThreshold, the fewer features are filtered out (more features are retained).
sigma:
double
.The sigma of the Gaussian applied to the input image at octave #0. If your image is captured with a weak camera with soft lenses, you might want to reduce this value.
enable_precise_upscale:
bool
.Whether to enable precise upscaling in the scale pyramid, which maps index x to 2x. This prevents localization bias. The option is disabled by default.
Return
- retval:
Evision.SIFT.t()
Note: The contrast threshold will be divided by nOctaveLayers when the filtering is applied. When nOctaveLayers is set to its default and you want to use the value from D. Lowe's paper, 0.03, set this argument to 0.09.
Python prototype (for reference only):
create([, nfeatures[, nOctaveLayers[, contrastThreshold[, edgeThreshold[, sigma[, enable_precise_upscale]]]]]]) -> retval
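Example (for illustration only):
A sketch of the keyword form; the values shown are the OpenCV defaults except nfeatures, and are illustrative rather than recommendations.
sift =
  Evision.SIFT.create(
    nfeatures: 500,          # keep only the 500 strongest features
    nOctaveLayers: 3,
    contrastThreshold: 0.04,
    edgeThreshold: 10,
    sigma: 1.6
  )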
create(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma, descriptorType)
@spec create(integer(), integer(), number(), number(), number(), integer()) :: t() | {:error, String.t()}
Create SIFT with specified descriptorType.
Positional Arguments
nfeatures:
integer()
.The number of best features to retain. The features are ranked by their scores (measured in the SIFT algorithm as the local contrast).
nOctaveLayers:
integer()
.The number of layers in each octave. 3 is the value used in D. Lowe's paper. The number of octaves is computed automatically from the image resolution.
contrastThreshold:
double
.The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the fewer features are produced by the detector.
edgeThreshold:
double
.The threshold used to filter out edge-like features. Note that its meaning is different from that of contrastThreshold, i.e. the larger the edgeThreshold, the fewer features are filtered out (more features are retained).
sigma:
double
.The sigma of the Gaussian applied to the input image at octave #0. If your image is captured with a weak camera with soft lenses, you might want to reduce this value.
descriptorType:
integer()
.The type of descriptors. Only CV_32F and CV_8U are supported.
Keyword Arguments
enable_precise_upscale:
bool
.Whether to enable precise upscaling in the scale pyramid, which maps index x to 2x. This prevents localization bias. The option is disabled by default.
Return
- retval:
Evision.SIFT.t()
Note: The contrast threshold will be divided by nOctaveLayers when the filtering is applied. When nOctaveLayers is set to its default and you want to use the value from D. Lowe's paper, 0.03, set this argument to 0.09.
Python prototype (for reference only):
create(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma, descriptorType[, enable_precise_upscale]) -> retval
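Example (for illustration only):
A sketch of the positional form with an explicit descriptorType. Raw OpenCV type codes are used here (CV_8U is 0, CV_32F is 5); if your build exposes them, constants such as Evision.Constant.cv_8U/0 are the safer choice.
descriptor_type = 0  # CV_8U; use 5 for CV_32F

sift =
  Evision.SIFT.create(
    0,     # nfeatures (0 keeps all features)
    3,     # nOctaveLayers
    0.04,  # contrastThreshold
    10,    # edgeThreshold
    1.6,   # sigma
    descriptor_type
  )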
create(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma, descriptorType, opts)
@spec create( integer(), integer(), number(), number(), number(), integer(), [{:enable_precise_upscale, term()}] | nil ) :: t() | {:error, String.t()}
Create SIFT with specified descriptorType.
Positional Arguments
nfeatures:
integer()
.The number of best features to retain. The features are ranked by their scores (measured in the SIFT algorithm as the local contrast).
nOctaveLayers:
integer()
.The number of layers in each octave. 3 is the value used in D. Lowe's paper. The number of octaves is computed automatically from the image resolution.
contrastThreshold:
double
.The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the fewer features are produced by the detector.
edgeThreshold:
double
.The threshold used to filter out edge-like features. Note that its meaning is different from that of contrastThreshold, i.e. the larger the edgeThreshold, the fewer features are filtered out (more features are retained).
sigma:
double
.The sigma of the Gaussian applied to the input image at octave #0. If your image is captured with a weak camera with soft lenses, you might want to reduce this value.
descriptorType:
integer()
.The type of descriptors. Only CV_32F and CV_8U are supported.
Keyword Arguments
enable_precise_upscale:
bool
.Whether to enable precise upscaling in the scale pyramid, which maps index x to 2x. This prevents localization bias. The option is disabled by default.
Return
- retval:
Evision.SIFT.t()
Note: The contrast threshold will be divided by nOctaveLayers when the filtering is applied. When nOctaveLayers is set to its default and you want to use the value from D. Lowe's paper, 0.03, set this argument to 0.09.
Python prototype (for reference only):
create(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma, descriptorType[, enable_precise_upscale]) -> retval
@spec defaultNorm(Keyword.t()) :: any() | {:error, String.t()}
@spec defaultNorm(t()) :: integer() | {:error, String.t()}
defaultNorm
Positional Arguments
- self:
Evision.SIFT.t()
Return
- retval:
integer()
Python prototype (for reference only):
defaultNorm() -> retval
@spec descriptorSize(Keyword.t()) :: any() | {:error, String.t()}
@spec descriptorSize(t()) :: integer() | {:error, String.t()}
descriptorSize
Positional Arguments
- self:
Evision.SIFT.t()
Return
- retval:
integer()
Python prototype (for reference only):
descriptorSize() -> retval
@spec descriptorType(Keyword.t()) :: any() | {:error, String.t()}
@spec descriptorType(t()) :: integer() | {:error, String.t()}
descriptorType
Positional Arguments
- self:
Evision.SIFT.t()
Return
- retval:
integer()
Python prototype (for reference only):
descriptorType() -> retval
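Example (for illustration only):
A sketch reading back the descriptor properties; the commented values are what stock OpenCV SIFT reports.
sift = Evision.SIFT.create()

Evision.SIFT.descriptorSize(sift)  # => 128 (elements per descriptor)
Evision.SIFT.descriptorType(sift)  # => 5, i.e. CV_32F, unless created with CV_8U
Evision.SIFT.defaultNorm(sift)     # => 4, i.e. NORM_L2, the norm to use when matching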
@spec detect(t(), [Evision.Mat.maybe_mat_in()]) :: [[Evision.KeyPoint.t()]] | {:error, String.t()}
@spec detect(t(), Evision.Mat.maybe_mat_in()) :: [Evision.KeyPoint.t()] | {:error, String.t()}
Variant 1:
detect
Positional Arguments
self:
Evision.SIFT.t()
images:
[Evision.Mat]
.Image set.
Keyword Arguments
masks:
[Evision.Mat]
.Masks for each input image specifying where to look for keypoints (optional). masks[i] is a mask for images[i].
Return
keypoints:
[[Evision.KeyPoint]]
.The detected keypoints. In the second variant of the method, keypoints[i] is the set of keypoints detected in images[i].
Has overloading in C++
Python prototype (for reference only):
detect(images[, masks]) -> keypoints
Variant 2:
Detects keypoints in an image (first variant) or image set (second variant).
Positional Arguments
self:
Evision.SIFT.t()
image:
Evision.Mat
.Image.
Keyword Arguments
mask:
Evision.Mat
.Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.
Return
keypoints:
[Evision.KeyPoint]
.The detected keypoints. In the second variant of the method, keypoints[i] is the set of keypoints detected in images[i].
Python prototype (for reference only):
detect(image[, mask]) -> keypoints
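Example (for illustration only):
A sketch of both call shapes; the image path is an assumption.
img = Evision.imread("scene.png")
sift = Evision.SIFT.create()

# Single image: a flat list of Evision.KeyPoint structs.
keypoints = Evision.SIFT.detect(sift, img)

# Image set: one keypoint list per input image.
[kps_a, kps_b] = Evision.SIFT.detect(sift, [img, img])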
@spec detect(t(), [Evision.Mat.maybe_mat_in()], [{:masks, term()}] | nil) :: [[Evision.KeyPoint.t()]] | {:error, String.t()}
@spec detect(t(), Evision.Mat.maybe_mat_in(), [{:mask, term()}] | nil) :: [Evision.KeyPoint.t()] | {:error, String.t()}
Variant 1:
detect
Positional Arguments
self:
Evision.SIFT.t()
images:
[Evision.Mat]
.Image set.
Keyword Arguments
masks:
[Evision.Mat]
.Masks for each input image specifying where to look for keypoints (optional). masks[i] is a mask for images[i].
Return
keypoints:
[[Evision.KeyPoint]]
.The detected keypoints. In the second variant of the method, keypoints[i] is the set of keypoints detected in images[i].
Has overloading in C++
Python prototype (for reference only):
detect(images[, masks]) -> keypoints
Variant 2:
Detects keypoints in an image (first variant) or image set (second variant).
Positional Arguments
self:
Evision.SIFT.t()
image:
Evision.Mat
.Image.
Keyword Arguments
mask:
Evision.Mat
.Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.
Return
keypoints:
[Evision.KeyPoint]
.The detected keypoints. In the second variant of the method, keypoints[i] is the set of keypoints detected in images[i].
Python prototype (for reference only):
detect(image[, mask]) -> keypoints
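Example (for illustration only):
A sketch of the mask keyword. Building the mask with Evision.Mat.ones/2 and Evision.Mat.shape/1 is an assumption about the Mat helpers available in your build; any 8-bit, image-sized matrix with non-zero values in the region of interest works.
img = Evision.imread("scene.png")
sift = Evision.SIFT.create()

# Assumed helpers: an all-ones 8-bit mask covering the whole image.
{rows, cols, _channels} = Evision.Mat.shape(img)
mask = Evision.Mat.ones({rows, cols}, :u8)

keypoints = Evision.SIFT.detect(sift, img, mask: mask)
A real region-of-interest mask would keep non-zero values only where detection is wanted.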
@spec detectAndCompute(t(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) :: {[Evision.KeyPoint.t()], Evision.Mat.t()} | {:error, String.t()}
detectAndCompute
Positional Arguments
- self:
Evision.SIFT.t()
- image:
Evision.Mat
- mask:
Evision.Mat
Keyword Arguments
- useProvidedKeypoints:
bool
Return
- keypoints:
[Evision.KeyPoint]
- descriptors:
Evision.Mat.t()
Detects keypoints and computes the descriptors.
Python prototype (for reference only):
detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors
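Example (for illustration only):
A sketch of the one-step detect-and-describe call. Here the required mask argument is an all-ones 8-bit matrix ("use the whole image"); Evision.Mat.ones/2 and Evision.Mat.shape/1 are assumed helpers from your build.
img = Evision.imread("scene.png")
sift = Evision.SIFT.create()

{rows, cols, _channels} = Evision.Mat.shape(img)
mask = Evision.Mat.ones({rows, cols}, :u8)

{keypoints, descriptors} = Evision.SIFT.detectAndCompute(sift, img, mask)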
@spec detectAndCompute( t(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), [{:useProvidedKeypoints, term()}] | nil ) :: {[Evision.KeyPoint.t()], Evision.Mat.t()} | {:error, String.t()}
detectAndCompute
Positional Arguments
- self:
Evision.SIFT.t()
- image:
Evision.Mat
- mask:
Evision.Mat
Keyword Arguments
- useProvidedKeypoints:
bool
Return
- keypoints:
[Evision.KeyPoint]
- descriptors:
Evision.Mat.t()
Detects keypoints and computes the descriptors.
Python prototype (for reference only):
detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}
empty
Positional Arguments
- self:
Evision.SIFT.t()
Return
- retval:
bool
Python prototype (for reference only):
empty() -> retval
@spec getContrastThreshold(Keyword.t()) :: any() | {:error, String.t()}
@spec getContrastThreshold(t()) :: number() | {:error, String.t()}
getContrastThreshold
Positional Arguments
- self:
Evision.SIFT.t()
Return
- retval:
double
Python prototype (for reference only):
getContrastThreshold() -> retval
@spec getDefaultName(Keyword.t()) :: any() | {:error, String.t()}
@spec getDefaultName(t()) :: binary() | {:error, String.t()}
getDefaultName
Positional Arguments
- self:
Evision.SIFT.t()
Return
- retval:
String
Python prototype (for reference only):
getDefaultName() -> retval
@spec getEdgeThreshold(Keyword.t()) :: any() | {:error, String.t()}
@spec getEdgeThreshold(t()) :: number() | {:error, String.t()}
getEdgeThreshold
Positional Arguments
- self:
Evision.SIFT.t()
Return
- retval:
double
Python prototype (for reference only):
getEdgeThreshold() -> retval
@spec getNFeatures(Keyword.t()) :: any() | {:error, String.t()}
@spec getNFeatures(t()) :: integer() | {:error, String.t()}
getNFeatures
Positional Arguments
- self:
Evision.SIFT.t()
Return
- retval:
integer()
Python prototype (for reference only):
getNFeatures() -> retval
@spec getNOctaveLayers(Keyword.t()) :: any() | {:error, String.t()}
@spec getNOctaveLayers(t()) :: integer() | {:error, String.t()}
getNOctaveLayers
Positional Arguments
- self:
Evision.SIFT.t()
Return
- retval:
integer()
Python prototype (for reference only):
getNOctaveLayers() -> retval
@spec getSigma(Keyword.t()) :: any() | {:error, String.t()}
@spec getSigma(t()) :: number() | {:error, String.t()}
getSigma
Positional Arguments
- self:
Evision.SIFT.t()
Return
- retval:
double
Python prototype (for reference only):
getSigma() -> retval
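Example (for illustration only):
A sketch reading the current parameters back from a detector; the commented values assume stock OpenCV defaults for everything not set explicitly.
sift = Evision.SIFT.create(contrastThreshold: 0.06)

Evision.SIFT.getNFeatures(sift)          # => 0
Evision.SIFT.getNOctaveLayers(sift)      # => 3
Evision.SIFT.getContrastThreshold(sift)  # => 0.06
Evision.SIFT.getEdgeThreshold(sift)      # => 10.0
Evision.SIFT.getSigma(sift)              # => 1.6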
@spec read(t(), Evision.FileNode.t()) :: t() | {:error, String.t()}
@spec read(t(), binary()) :: t() | {:error, String.t()}
Variant 1:
read
Positional Arguments
- self:
Evision.SIFT.t()
- arg1:
Evision.FileNode
Python prototype (for reference only):
read(arg1) -> None
Variant 2:
read
Positional Arguments
- self:
Evision.SIFT.t()
- fileName:
String
Python prototype (for reference only):
read(fileName) -> None
setContrastThreshold
Positional Arguments
- self:
Evision.SIFT.t()
- contrastThreshold:
double
Python prototype (for reference only):
setContrastThreshold(contrastThreshold) -> None
setEdgeThreshold
Positional Arguments
- self:
Evision.SIFT.t()
- edgeThreshold:
double
Python prototype (for reference only):
setEdgeThreshold(edgeThreshold) -> None
setNFeatures
Positional Arguments
- self:
Evision.SIFT.t()
- maxFeatures:
integer()
Python prototype (for reference only):
setNFeatures(maxFeatures) -> None
setNOctaveLayers
Positional Arguments
- self:
Evision.SIFT.t()
- nOctaveLayers:
integer()
Python prototype (for reference only):
setNOctaveLayers(nOctaveLayers) -> None
setSigma
Positional Arguments
- self:
Evision.SIFT.t()
- sigma:
double
Python prototype (for reference only):
setSigma(sigma) -> None
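Example (for illustration only):
A sketch of tuning an existing detector. This assumes each setter returns the updated Evision.SIFT struct on success (the usual Evision convention), which makes the calls pipeable.
sift =
  Evision.SIFT.create()
  |> Evision.SIFT.setNFeatures(1000)
  |> Evision.SIFT.setContrastThreshold(0.03)
  |> Evision.SIFT.setEdgeThreshold(12.0)
  |> Evision.SIFT.setSigma(1.2)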
write
Positional Arguments
- self:
Evision.SIFT.t()
- fileName:
String
Python prototype (for reference only):
write(fileName) -> None
@spec write(t(), Evision.FileStorage.t(), binary()) :: t() | {:error, String.t()}
write
Positional Arguments
- self:
Evision.SIFT.t()
- fs:
Evision.FileStorage
- name:
String
Python prototype (for reference only):
write(fs, name) -> None
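Example (for illustration only):
A sketch of the file-based forms with an illustrative path; how much state SIFT actually serializes is determined by the underlying OpenCV implementation.
sift = Evision.SIFT.create()

# Write this object's state to a file, then load it back into a detector.
Evision.SIFT.write(sift, "sift_state.yml")
sift = Evision.SIFT.read(sift, "sift_state.yml")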