Evision.ArUco.ArucoDetector (Evision v0.2.9)
Summary
Functions
Basic ArucoDetector constructor
Basic ArucoDetector constructor
Clears the algorithm state
Basic marker detection
Basic marker detection
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read)
getDefaultName
getDetectorParameters
getDictionary
getRefineParameters
Reads algorithm parameters from a file storage
Refine markers that were not detected, based on the already detected markers and the board layout
Refine markers that were not detected, based on the already detected markers and the board layout
save
setDetectorParameters
setDictionary
setRefineParameters
simplified API for language bindings
Types
@type t() :: %Evision.ArUco.ArucoDetector{ref: reference()}
Type that represents an ArUco.ArucoDetector struct.
- ref: reference()
  The underlying Erlang resource variable.
Functions
Basic ArucoDetector constructor
Keyword Arguments
- dictionary: Dictionary
  Indicates the type of markers that will be searched.
- detectorParams: DetectorParameters
  Marker detection parameters.
- refineParams: RefineParameters
  Marker refine detection parameters.
Return
- self:
Evision.ArUco.ArucoDetector.t()
Python prototype (for reference only):
ArucoDetector([, dictionary[, detectorParams[, refineParams]]]) -> <aruco_ArucoDetector object>
@spec arucoDetector(Keyword.t()) :: any() | {:error, String.t()}
@spec arucoDetector( [detectorParams: term(), dictionary: term(), refineParams: term()] | nil ) :: t() | {:error, String.t()}
Basic ArucoDetector constructor
Keyword Arguments
- dictionary: Dictionary
  Indicates the type of markers that will be searched.
- detectorParams: DetectorParameters
  Marker detection parameters.
- refineParams: RefineParameters
  Marker refine detection parameters.
Return
- self:
Evision.ArUco.ArucoDetector.t()
Python prototype (for reference only):
ArucoDetector([, dictionary[, detectorParams[, refineParams]]]) -> <aruco_ArucoDetector object>
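Example (a minimal construction sketch; the getPredefinedDictionary helper, the cv_DICT_4X4_50 constant, and the DetectorParameters.detectorParameters constructor are assumptions about the surrounding Evision API rather than functions documented in this module):

    # Dictionary and detection parameters (assumed helpers, see note above).
    dictionary = Evision.ArUco.getPredefinedDictionary(Evision.Constant.cv_DICT_4X4_50())
    params = Evision.ArUco.DetectorParameters.detectorParameters()

    # Detector with all defaults.
    default_detector = Evision.ArUco.ArucoDetector.arucoDetector()

    # Detector with an explicit dictionary and detection parameters.
    detector =
      Evision.ArUco.ArucoDetector.arucoDetector(dictionary: dictionary, detectorParams: params)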
@spec clear(Keyword.t()) :: any() | {:error, String.t()}
@spec clear(t()) :: t() | {:error, String.t()}
Clears the algorithm state
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
Python prototype (for reference only):
clear() -> None
@spec detectMarkers(t(), Evision.Mat.maybe_mat_in()) :: {[Evision.Mat.t()], Evision.Mat.t(), [Evision.Mat.t()]} | {:error, String.t()}
Basic marker detection
Positional Arguments
- self: Evision.ArUco.ArucoDetector.t()
- image: Evision.Mat
  Input image.
Return
- corners: [Evision.Mat]
  Vector of detected marker corners. For each marker, its four corners are provided (e.g. std::vector<std::vector<cv::Point2f>>). For N detected markers, the dimensions of this array are Nx4. The order of the corners is clockwise.
- ids: Evision.Mat.t()
  Vector of identifiers of the detected markers. The identifier is of type int (e.g. std::vector<int>). For N detected markers, the size of ids is also N. The identifiers have the same order as the markers in the imgPoints array.
- rejectedImgPoints: [Evision.Mat]
  Contains the imgPoints of those squares whose inner code does not have a correct codification. Useful for debugging purposes.
Performs marker detection in the input image. Only markers included in the specific dictionary are searched. For each detected marker, it returns the 2D position of its corners in the image and its corresponding identifier. Note that this function does not perform pose estimation. Note: the function does not correct lens distortion or take it into account. It is recommended to undistort the input image with the corresponding camera model if the camera parameters are known. See also: undistort, estimatePoseSingleMarkers, estimatePoseBoard.
Python prototype (for reference only):
detectMarkers(image[, corners[, ids[, rejectedImgPoints]]]) -> corners, ids, rejectedImgPoints
@spec detectMarkers(t(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) :: {[Evision.Mat.t()], Evision.Mat.t(), [Evision.Mat.t()]} | {:error, String.t()}
Basic marker detection
Positional Arguments
- self: Evision.ArUco.ArucoDetector.t()
- image: Evision.Mat
  Input image.
Return
- corners: [Evision.Mat]
  Vector of detected marker corners. For each marker, its four corners are provided (e.g. std::vector<std::vector<cv::Point2f>>). For N detected markers, the dimensions of this array are Nx4. The order of the corners is clockwise.
- ids: Evision.Mat.t()
  Vector of identifiers of the detected markers. The identifier is of type int (e.g. std::vector<int>). For N detected markers, the size of ids is also N. The identifiers have the same order as the markers in the imgPoints array.
- rejectedImgPoints: [Evision.Mat]
  Contains the imgPoints of those squares whose inner code does not have a correct codification. Useful for debugging purposes.
Performs marker detection in the input image. Only markers included in the specific dictionary are searched. For each detected marker, it returns the 2D position of its corners in the image and its corresponding identifier. Note that this function does not perform pose estimation. Note: the function does not correct lens distortion or take it into account. It is recommended to undistort the input image with the corresponding camera model if the camera parameters are known. See also: undistort, estimatePoseSingleMarkers, estimatePoseBoard.
Python prototype (for reference only):
detectMarkers(image[, corners[, ids[, rejectedImgPoints]]]) -> corners, ids, rejectedImgPoints
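Example (a detection sketch using the detector constructed above; the image path is illustrative, and Evision.Mat.to_nx/1 is assumed to be available for inspecting the ids matrix):

    image = Evision.imread("markers.jpg")

    case Evision.ArUco.ArucoDetector.detectMarkers(detector, image) do
      {corners, ids, rejected} ->
        # corners and rejected are lists of Evision.Mat; ids is an Evision.Mat of integer identifiers.
        IO.puts("detected #{length(corners)} markers, rejected #{length(rejected)} candidates")
        ids |> Evision.Mat.to_nx() |> IO.inspect(label: "marker ids")

      {:error, reason} ->
        IO.puts("marker detection failed: #{reason}")
    end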
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read)
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
Return
- retval:
bool
Python prototype (for reference only):
empty() -> retval
@spec getDefaultName(Keyword.t()) :: any() | {:error, String.t()}
@spec getDefaultName(t()) :: binary() | {:error, String.t()}
getDefaultName
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
Return
- retval:
String
Returns the algorithm string identifier. This string is used as the top-level xml/yml node tag when the object is saved to a file or string.
Python prototype (for reference only):
getDefaultName() -> retval
@spec getDetectorParameters(Keyword.t()) :: any() | {:error, String.t()}
@spec getDetectorParameters(t()) :: Evision.ArUco.DetectorParameters.t() | {:error, String.t()}
getDetectorParameters
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
Return
- retval:
DetectorParameters
Python prototype (for reference only):
getDetectorParameters() -> retval
@spec getDictionary(Keyword.t()) :: any() | {:error, String.t()}
@spec getDictionary(t()) :: Evision.ArUco.Dictionary.t() | {:error, String.t()}
getDictionary
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
Return
- retval:
Dictionary
Python prototype (for reference only):
getDictionary() -> retval
@spec getRefineParameters(Keyword.t()) :: any() | {:error, String.t()}
@spec getRefineParameters(t()) :: Evision.ArUco.RefineParameters.t() | {:error, String.t()}
getRefineParameters
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
Return
- retval:
RefineParameters
Python prototype (for reference only):
getRefineParameters() -> retval
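Example (reading the detector's current configuration back; a minimal sketch using the detector from the constructor example above):

    dictionary = Evision.ArUco.ArucoDetector.getDictionary(detector)
    detector_params = Evision.ArUco.ArucoDetector.getDetectorParameters(detector)
    refine_params = Evision.ArUco.ArucoDetector.getRefineParameters(detector)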
@spec read(t(), Evision.FileNode.t()) :: t() | {:error, String.t()}
Reads algorithm parameters from a file storage
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
- func:
Evision.FileNode
Python prototype (for reference only):
read(fn) -> None
refineDetectedMarkers(self, image, board, detectedCorners, detectedIds, rejectedCorners)
@spec refineDetectedMarkers( t(), Evision.Mat.maybe_mat_in(), Evision.ArUco.Board.t(), [Evision.Mat.maybe_mat_in()], Evision.Mat.maybe_mat_in(), [Evision.Mat.maybe_mat_in()] ) :: {[Evision.Mat.t()], Evision.Mat.t(), [Evision.Mat.t()], Evision.Mat.t()} | {:error, String.t()}
Refine markers that were not detected, based on the already detected markers and the board layout
Positional Arguments
- self: Evision.ArUco.ArucoDetector.t()
- image: Evision.Mat
  Input image.
- board: Board
  Layout of markers in the board.
Keyword Arguments
- cameraMatrix: Evision.Mat
  Optional input 3x3 floating-point camera matrix $A = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.
- distCoeffs: Evision.Mat
  Optional vector of distortion coefficients $(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6][, s_1, s_2, s_3, s_4]])$ of 4, 5, 8 or 12 elements.
Return
- detectedCorners: [Evision.Mat]
  Vector of already detected marker corners.
- detectedIds: Evision.Mat.t()
  Vector of already detected marker identifiers.
- rejectedCorners: [Evision.Mat]
  Vector of rejected candidates during the marker detection process.
- recoveredIdxs: Evision.Mat.t()
  Optional array that returns the indexes of the recovered candidates in the original rejectedCorners array.
This function tries to find markers that were not detected by the basic detectMarkers function. First, based on the currently detected markers and the board layout, the function interpolates the positions of the missing markers. Then it tries to find correspondences between the reprojected markers and the rejected candidates, based on the minRepDistance and errorCorrectionRate parameters. If camera parameters and distortion coefficients are provided, missing markers are reprojected using the projectPoint function. If not, missing marker projections are interpolated using a global homography, and all the marker corners in the board must have the same Z coordinate.
Python prototype (for reference only):
refineDetectedMarkers(image, board, detectedCorners, detectedIds, rejectedCorners[, cameraMatrix[, distCoeffs[, recoveredIdxs]]]) -> detectedCorners, detectedIds, rejectedCorners, recoveredIdxs
refineDetectedMarkers(self, image, board, detectedCorners, detectedIds, rejectedCorners, opts)
@spec refineDetectedMarkers( t(), Evision.Mat.maybe_mat_in(), Evision.ArUco.Board.t(), [Evision.Mat.maybe_mat_in()], Evision.Mat.maybe_mat_in(), [Evision.Mat.maybe_mat_in()], [cameraMatrix: term(), distCoeffs: term()] | nil ) :: {[Evision.Mat.t()], Evision.Mat.t(), [Evision.Mat.t()], Evision.Mat.t()} | {:error, String.t()}
Refine markers that were not detected, based on the already detected markers and the board layout
Positional Arguments
- self: Evision.ArUco.ArucoDetector.t()
- image: Evision.Mat
  Input image.
- board: Board
  Layout of markers in the board.
Keyword Arguments
- cameraMatrix: Evision.Mat
  Optional input 3x3 floating-point camera matrix $A = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.
- distCoeffs: Evision.Mat
  Optional vector of distortion coefficients $(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6][, s_1, s_2, s_3, s_4]])$ of 4, 5, 8 or 12 elements.
Return
- detectedCorners: [Evision.Mat]
  Vector of already detected marker corners.
- detectedIds: Evision.Mat.t()
  Vector of already detected marker identifiers.
- rejectedCorners: [Evision.Mat]
  Vector of rejected candidates during the marker detection process.
- recoveredIdxs: Evision.Mat.t()
  Optional array that returns the indexes of the recovered candidates in the original rejectedCorners array.
This function tries to find markers that were not detected by the basic detectMarkers function. First, based on the currently detected markers and the board layout, the function interpolates the positions of the missing markers. Then it tries to find correspondences between the reprojected markers and the rejected candidates, based on the minRepDistance and errorCorrectionRate parameters. If camera parameters and distortion coefficients are provided, missing markers are reprojected using the projectPoint function. If not, missing marker projections are interpolated using a global homography, and all the marker corners in the board must have the same Z coordinate.
Python prototype (for reference only):
refineDetectedMarkers(image, board, detectedCorners, detectedIds, rejectedCorners[, cameraMatrix[, distCoeffs[, recoveredIdxs]]]) -> detectedCorners, detectedIds, rejectedCorners, recoveredIdxs
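Example (a refinement sketch; board is assumed to be an Evision.ArUco.Board built elsewhere, e.g. a GridBoard matching the printed layout, and camera_matrix/dist_coeffs are assumed, optional intrinsics from a prior calibration):

    {corners, ids, rejected} = Evision.ArUco.ArucoDetector.detectMarkers(detector, image)

    {corners, ids, rejected, recovered} =
      Evision.ArUco.ArucoDetector.refineDetectedMarkers(
        detector, image, board, corners, ids, rejected,
        cameraMatrix: camera_matrix, distCoeffs: dist_coeffs
      )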
save
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
- filename:
String
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).
Python prototype (for reference only):
save(filename) -> None
@spec setDetectorParameters(t(), Evision.ArUco.DetectorParameters.t()) :: t() | {:error, String.t()}
setDetectorParameters
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
- detectorParameters:
DetectorParameters
Python prototype (for reference only):
setDetectorParameters(detectorParameters) -> None
@spec setDictionary(t(), Evision.ArUco.Dictionary.t()) :: t() | {:error, String.t()}
setDictionary
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
- dictionary:
Dictionary
Python prototype (for reference only):
setDictionary(dictionary) -> None
@spec setRefineParameters(t(), Evision.ArUco.RefineParameters.t()) :: t() | {:error, String.t()}
setRefineParameters
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
- refineParameters:
RefineParameters
Python prototype (for reference only):
setRefineParameters(refineParameters) -> None
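Example (reconfiguring an existing detector; the setters return the updated detector, so calls can be piped. The dictionary and params values reuse the assumed helpers from the constructor example above):

    detector =
      detector
      |> Evision.ArUco.ArucoDetector.setDictionary(dictionary)
      |> Evision.ArUco.ArucoDetector.setDetectorParameters(params)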
@spec write(t(), Evision.FileStorage.t(), binary()) :: t() | {:error, String.t()}
simplified API for language bindings
Positional Arguments
- self:
Evision.ArUco.ArucoDetector.t()
- fs:
Evision.FileStorage
- name:
String
Python prototype (for reference only):
write(fs, name) -> None
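Example (persisting and restoring detector parameters via write/3 and read/2; the Evision.FileStorage helpers and flag constants shown here are assumptions about the surrounding bindings, not functions documented in this module):

    # Write the detector's parameters under a named node (assumed FileStorage API).
    fs = Evision.FileStorage.fileStorage("aruco_detector.yml", Evision.Constant.cv_FileStorage_WRITE())
    Evision.ArUco.ArucoDetector.write(detector, fs, "aruco_detector")
    Evision.FileStorage.release(fs)

    # Read the parameters back into an existing detector from a FileNode.
    fs = Evision.FileStorage.fileStorage("aruco_detector.yml", Evision.Constant.cv_FileStorage_READ())
    node = Evision.FileStorage.getNode(fs, "aruco_detector")
    detector = Evision.ArUco.ArucoDetector.read(detector, node)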