Evision.XFeatures2D (Evision v0.2.9)
Summary
Functions
matchGMS/5: GMS (Grid-based Motion Statistics) feature matching strategy described in @cite Bian2017gms.
matchGMS/6: GMS (Grid-based Motion Statistics) feature matching strategy described in @cite Bian2017gms.
matchLOGOS/4: LOGOS (Local geometric support for high-outlier spatial verification) feature matching strategy described in @cite Lowry2018LOGOSLG.
Types
@type t() :: %Evision.XFeatures2D{ref: reference()}
Type that represents an XFeatures2D struct.

ref: reference(). The underlying Erlang resource variable.
Functions
@spec matchGMS(
  {number(), number()},
  {number(), number()},
  [Evision.KeyPoint.t()],
  [Evision.KeyPoint.t()],
  [Evision.DMatch.t()]
) :: [Evision.DMatch.t()] | {:error, String.t()}
GMS (Grid-based Motion Statistics) feature matching strategy described in @cite Bian2017gms.
Positional Arguments

size1: Size. Input size of image1.
size2: Size. Input size of image2.
keypoints1: [Evision.KeyPoint]. Input keypoints of image1.
keypoints2: [Evision.KeyPoint]. Input keypoints of image2.
matches1to2: [Evision.DMatch]. Input 1-nearest neighbor matches.
Keyword Arguments

withRotation: bool. Take rotation transformation into account.
withScale: bool. Take scale transformation into account.
thresholdFactor: double. The higher the value, the fewer matches.
Return

matchesGMS: [Evision.DMatch]. Matches returned by the GMS matching strategy.
Note: Since GMS works well when the number of features is large, we recommend using the ORB detector with FastThreshold set to 0 to obtain as many features as possible quickly. If the matching results are not satisfactory, add more features (we use 10,000 for 640 × 480 images). If your images have large rotation or scale changes, please set withRotation or withScale to true.
Python prototype (for reference only):
matchGMS(size1, size2, keypoints1, keypoints2, matches1to2[, withRotation[, withScale[, thresholdFactor]]]) -> matchesGMS
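For orientation, here is a minimal Elixir sketch of the workflow the note above recommends: detect a large number of ORB features, compute 1-nearest-neighbor matches with a brute-force Hamming matcher, then filter them with matchGMS. The image paths and the create option names (nfeatures, fastThreshold, normType) are assumptions based on the underlying OpenCV API, not something this page specifies.

```elixir
# Sketch only: file paths and option names below are assumptions.
img1 = Evision.imread("left.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
img2 = Evision.imread("right.png", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())

# Many features help GMS: a large feature budget and FastThreshold 0,
# as the note above recommends.
orb = Evision.ORB.create(nfeatures: 10_000, fastThreshold: 0)

kps1 = Evision.ORB.detect(orb, img1)
{kps1, desc1} = Evision.ORB.compute(orb, img1, kps1)
kps2 = Evision.ORB.detect(orb, img2)
{kps2, desc2} = Evision.ORB.compute(orb, img2, kps2)

# 1-nearest-neighbor matches on binary ORB descriptors (Hamming norm).
matcher = Evision.BFMatcher.create(normType: Evision.Constant.cv_NORM_HAMMING())
matches1to2 = Evision.BFMatcher.match(matcher, desc1, desc2)

# size1/size2 are {width, height}; Evision.Mat.shape/1 returns {rows, cols}
# for a grayscale Mat, so swap the order.
{h1, w1} = Evision.Mat.shape(img1)
{h2, w2} = Evision.Mat.shape(img2)

matches_gms = Evision.XFeatures2D.matchGMS({w1, h1}, {w2, h2}, kps1, kps2, matches1to2)
```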
@spec matchGMS(
  {number(), number()},
  {number(), number()},
  [Evision.KeyPoint.t()],
  [Evision.KeyPoint.t()],
  [Evision.DMatch.t()],
  [thresholdFactor: term(), withRotation: term(), withScale: term()] | nil
) :: [Evision.DMatch.t()] | {:error, String.t()}
GMS (Grid-based Motion Statistics) feature matching strategy described in @cite Bian2017gms.
Positional Arguments

size1: Size. Input size of image1.
size2: Size. Input size of image2.
keypoints1: [Evision.KeyPoint]. Input keypoints of image1.
keypoints2: [Evision.KeyPoint]. Input keypoints of image2.
matches1to2: [Evision.DMatch]. Input 1-nearest neighbor matches.
Keyword Arguments

withRotation: bool. Take rotation transformation into account.
withScale: bool. Take scale transformation into account.
thresholdFactor: double. The higher the value, the fewer matches.
Return

matchesGMS: [Evision.DMatch]. Matches returned by the GMS matching strategy.
Note: Since GMS works well when the number of features is large, we recommend using the ORB detector with FastThreshold set to 0 to obtain as many features as possible quickly. If the matching results are not satisfactory, add more features (we use 10,000 for 640 × 480 images). If your images have large rotation or scale changes, please set withRotation or withScale to true.
Python prototype (for reference only):
matchGMS(size1, size2, keypoints1, keypoints2, matches1to2[, withRotation[, withScale[, thresholdFactor]]]) -> matchesGMS
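The 6-arity variant takes the keyword options. Continuing the sketch shown after matchGMS/5 above, for image pairs with large rotation or scale changes the note suggests enabling both flags; 6.0 is OpenCV's documented default for thresholdFactor.

```elixir
matches_gms =
  Evision.XFeatures2D.matchGMS({w1, h1}, {w2, h2}, kps1, kps2, matches1to2,
    withRotation: true,
    withScale: true,
    thresholdFactor: 6.0
  )
```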
@spec matchLOGOS(
  [Evision.KeyPoint.t()],
  [Evision.KeyPoint.t()],
  [integer()],
  [integer()]
) :: [Evision.DMatch.t()] | {:error, String.t()}
LOGOS (Local geometric support for high-outlier spatial verification) feature matching strategy described in @cite Lowry2018LOGOSLG.
Positional Arguments

keypoints1: [Evision.KeyPoint]. Input keypoints of image1.
keypoints2: [Evision.KeyPoint]. Input keypoints of image2.
nn1: [integer()]. Index of the closest BoW centroid for each descriptor of image1.
nn2: [integer()]. Index of the closest BoW centroid for each descriptor of image2.
Return

matches1to2: [Evision.DMatch]. Matches returned by the LOGOS matching strategy.
Note: This matching strategy is suitable for matching features against a large-scale database. The first step consists of constructing a bag-of-words (BoW) vocabulary from a representative image database. Image descriptors are then represented by their closest codevector (nearest BoW centroid).
Python prototype (for reference only):
matchLOGOS(keypoints1, keypoints2, nn1, nn2) -> matches1to2
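A hedged sketch of how the nn1/nn2 inputs can be produced. It assumes float descriptors desc1/desc2 (e.g. from SIFT) for the two keypoint lists, plus a precomputed vocabulary vocab, a k x d Float32 Evision.Mat of BoW centroids built offline; constructing that vocabulary is outside the scope of this page. One way to get each descriptor's nearest-centroid index is to brute-force match the descriptors against the vocabulary and read the trainIdx of each resulting DMatch.

```elixir
# `vocab` is an assumed, precomputed k x d Float32 Mat of BoW centroids.
matcher = Evision.BFMatcher.create()  # L2 norm by default, suits float descriptors

# match/3 returns one DMatch per query row, in row order; trainIdx is the
# index of the nearest vocabulary centroid for that descriptor.
nn1 = Evision.BFMatcher.match(matcher, desc1, vocab) |> Enum.map(& &1.trainIdx)
nn2 = Evision.BFMatcher.match(matcher, desc2, vocab) |> Enum.map(& &1.trainIdx)

matches1to2 = Evision.XFeatures2D.matchLOGOS(kps1, kps2, nn1, nn2)
```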