Evision.Omnidir (Evision v0.2.9)
Summary
Functions
Perform omnidirectional camera calibration; the default depth of outputs is CV_64F.
Perform omnidirectional camera calibration; the default depth of outputs is CV_64F.
Computes undistortion and rectification maps for omnidirectional camera image transform by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.
Computes undistortion and rectification maps for omnidirectional camera image transform by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.
Projects points for omnidirectional camera using CMei's model
Projects points for omnidirectional camera using CMei's model
Stereo calibration for omnidirectional camera model. It computes the intrinsic parameters for two cameras and the extrinsic parameters between two cameras. The default depth of outputs is CV_64F.
Stereo calibration for omnidirectional camera model. It computes the intrinsic parameters for two cameras and the extrinsic parameters between two cameras. The default depth of outputs is CV_64F.
Stereo 3D reconstruction from a pair of images
Stereo 3D reconstruction from a pair of images
Stereo rectification for omnidirectional camera model. It computes the rectification rotations for two cameras.
Stereo rectification for omnidirectional camera model. It computes the rectification rotations for two cameras.
Undistort omnidirectional images to perspective images
Undistort omnidirectional images to perspective images
Undistort 2D image points for omnidirectional camera using CMei's model
Undistort 2D image points for omnidirectional camera using CMei's model
Enumerator
Types
Functions
calibrate(objectPoints, imagePoints, size, k, xi, d, flags, criteria)
@spec calibrate( [Evision.Mat.maybe_mat_in()], [Evision.Mat.maybe_mat_in()], {number(), number()}, Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), {integer(), integer(), number()} ) :: {number(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), [Evision.Mat.t()], [Evision.Mat.t()], Evision.Mat.t()} | {:error, String.t()}
Perform omnidirectional camera calibration; the default depth of outputs is CV_64F.
Positional Arguments
objectPoints:
[Evision.Mat]
.Vector of vectors of Vec3f object points in world (pattern) coordinates. It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth CV_64F is also acceptable.
imagePoints:
[Evision.Mat]
.Vector of vectors of Vec2f corresponding image points of objectPoints. It must be the same size and the same type as objectPoints.
size:
Size
.Image size of calibration images.
flags:
integer()
.The flags that control calibrate
criteria:
TermCriteria
.Termination criteria for optimization
Return
retval:
double
k:
Evision.Mat.t()
.Output calibrated camera matrix.
xi:
Evision.Mat.t()
.Output parameter xi for CMei's model
d:
Evision.Mat.t()
.Output distortion parameters $(k_1, k_2, p_1, p_2)$
rvecs:
[Evision.Mat]
.Output rotations for each calibration image
tvecs:
[Evision.Mat]
.Output translations for each calibration image
idx:
Evision.Mat.t()
.Indices of the images that pass initialization and are actually used in calibration, so the size of rvecs is the same as idx.total().
Python prototype (for reference only):
calibrate(objectPoints, imagePoints, size, K, xi, D, flags, criteria[, rvecs[, tvecs[, idx]]]) -> retval, K, xi, D, rvecs, tvecs, idx
calibrate(objectPoints, imagePoints, size, k, xi, d, flags, criteria, opts)
@spec calibrate( [Evision.Mat.maybe_mat_in()], [Evision.Mat.maybe_mat_in()], {number(), number()}, Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), {integer(), integer(), number()}, [{atom(), term()}, ...] | nil ) :: {number(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), [Evision.Mat.t()], [Evision.Mat.t()], Evision.Mat.t()} | {:error, String.t()}
Perform omnidirectional camera calibration; the default depth of outputs is CV_64F.
Positional Arguments
objectPoints:
[Evision.Mat]
.Vector of vectors of Vec3f object points in world (pattern) coordinates. It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth CV_64F is also acceptable.
imagePoints:
[Evision.Mat]
.Vector of vectors of Vec2f corresponding image points of objectPoints. It must be the same size and the same type as objectPoints.
size:
Size
.Image size of calibration images.
flags:
integer()
.The flags that control calibrate
criteria:
TermCriteria
.Termination criteria for optimization
Return
retval:
double
k:
Evision.Mat.t()
.Output calibrated camera matrix.
xi:
Evision.Mat.t()
.Output parameter xi for CMei's model
d:
Evision.Mat.t()
.Output distortion parameters $(k_1, k_2, p_1, p_2)$
rvecs:
[Evision.Mat]
.Output rotations for each calibration image
tvecs:
[Evision.Mat]
.Output translations for each calibration image
idx:
Evision.Mat.t()
.Indices of the images that pass initialization and are actually used in calibration, so the size of rvecs is the same as idx.total().
Python prototype (for reference only):
calibrate(objectPoints, imagePoints, size, K, xi, D, flags, criteria[, rvecs[, tvecs[, idx]]]) -> retval, K, xi, D, rvecs, tvecs, idx
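Elixir example (illustrative sketch):
The snippet below sketches a single-view call to calibrate/8. The pattern geometry, corner coordinates, image size, flags, and criteria values are placeholder assumptions, as is the {type, max_count, epsilon} encoding of TermCriteria (type 3 = COUNT + EPS, mirroring cv::TermCriteria); real corners would come from a detector such as Evision.findChessboardCorners/2, collected over many views.

    # One hypothetical 9x6 planar pattern with 30 mm squares.
    pattern_3d = for y <- 0..5, x <- 0..8, do: [x * 0.03, y * 0.03, 0.0]
    # Placeholder detected pixel coordinates for the same corners.
    corners_2d = for y <- 0..5, x <- 0..8, do: [100.0 + x * 40.0, 80.0 + y * 40.0]

    # Build 1xN CV_32FC3 / CV_32FC2 Mats; last_dim_as_channel/1 marks the
    # trailing dimension as channels (assumed available in your Evision version).
    object_points = [Evision.Mat.literal([pattern_3d], :f32) |> Evision.Mat.last_dim_as_channel()]
    image_points = [Evision.Mat.literal([corners_2d], :f32) |> Evision.Mat.last_dim_as_channel()]

    k = Evision.Mat.zeros({3, 3}, :f64)    # initial guesses, refined by the call
    xi = Evision.Mat.zeros({1, 1}, :f64)
    d = Evision.Mat.zeros({1, 4}, :f64)
    criteria = {3, 200, 1.0e-8}            # {COUNT + EPS, max iterations, epsilon}

    case Evision.Omnidir.calibrate(object_points, image_points, {640, 480}, k, xi, d, 0, criteria) do
      {rms, k, xi, d, _rvecs, _tvecs, _idx} -> IO.puts("RMS reprojection error: #{rms}")
      {:error, reason} -> IO.puts("calibration failed: #{reason}")
    end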
initUndistortRectifyMap(k, d, xi, r, p, size, m1type, flags)
@spec initUndistortRectifyMap( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), {number(), number()}, integer(), integer() ) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Computes undistortion and rectification maps for omnidirectional camera image transform by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.
Positional Arguments
k:
Evision.Mat
.Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$, with depth CV_32F or CV_64F
d:
Evision.Mat
.Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$, with depth CV_32F or CV_64F
xi:
Evision.Mat
.The parameter xi for CMei's model
r:
Evision.Mat
.Rotation transform between the original and object space: 3x3 1-channel, or vector: 3x1/1x3, with depth CV_32F or CV_64F
p:
Evision.Mat
.New camera matrix (3x3) or new projection matrix (3x4)
size:
Size
.Undistorted image size.
m1type:
integer()
.Type of the first output map that can be CV_32FC1 or CV_16SC2. See convertMaps() for details.
flags:
integer()
.Flag that indicates the rectification type; RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI, and RECTIFY_STEREOGRAPHIC are supported.
Return
map1:
Evision.Mat.t()
.The first output map.
map2:
Evision.Mat.t()
.The second output map.
Python prototype (for reference only):
initUndistortRectifyMap(K, D, xi, R, P, size, m1type, flags[, map1[, map2]]) -> map1, map2
initUndistortRectifyMap(k, d, xi, r, p, size, m1type, flags, opts)
@spec initUndistortRectifyMap( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), {number(), number()}, integer(), integer(), [{atom(), term()}, ...] | nil ) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Computes undistortion and rectification maps for omnidirectional camera image transform by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.
Positional Arguments
k:
Evision.Mat
.Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$, with depth CV_32F or CV_64F
d:
Evision.Mat
.Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$, with depth CV_32F or CV_64F
xi:
Evision.Mat
.The parameter xi for CMei's model
r:
Evision.Mat
.Rotation transform between the original and object space: 3x3 1-channel, or vector: 3x1/1x3, with depth CV_32F or CV_64F
p:
Evision.Mat
.New camera matrix (3x3) or new projection matrix (3x4)
size:
Size
.Undistorted image size.
m1type:
integer()
.Type of the first output map that can be CV_32FC1 or CV_16SC2. See convertMaps() for details.
flags:
integer()
.Flag that indicates the rectification type; RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI, and RECTIFY_STEREOGRAPHIC are supported.
Return
map1:
Evision.Mat.t()
.The first output map.
map2:
Evision.Mat.t()
.The second output map.
Python prototype (for reference only):
initUndistortRectifyMap(K, D, xi, R, P, size, m1type, flags[, map1[, map2]]) -> map1, map2
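Elixir example (illustrative sketch):
A sketch of the usual pairing with Evision.remap/4, assuming k, d, and xi come from a previous Evision.Omnidir.calibrate/8 run and distorted_img is an input frame. The numeric arguments mirror OpenCV's enum values (5 = CV_32FC1 for m1type, 1 = RECTIFY_PERSPECTIVE, 1 = INTER_LINEAR) and are assumptions; prefer the matching constants exposed by your Evision version if available.

    # Identity rotation and P = K: plain undistortion with no re-projection change.
    r = Evision.Mat.literal([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], :f64)

    {map1, map2} =
      Evision.Omnidir.initUndistortRectifyMap(k, d, xi, r, k, {640, 480}, 5, 1)

    # The maps are computed once and can be reused for every frame from the camera.
    undistorted = Evision.remap(distorted_img, map1, map2, 1)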
projectPoints(objectPoints, rvec, tvec, k, xi, d)
@spec projectPoints( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), number(), Evision.Mat.maybe_mat_in() ) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Projects points for omnidirectional camera using CMei's model
Positional Arguments
objectPoints:
Evision.Mat
.Object points in world coordinates: vector of vectors of Vec3f, or a 1xN/Nx1 3-channel Mat of type CV_32F, where N is the number of points. CV_64F is also acceptable.
rvec:
Evision.Mat
.Vector of rotation between the world coordinate system and the camera coordinate system, i.e., om
tvec:
Evision.Mat
.Vector of translation between the pattern coordinate system and the camera coordinate system
k:
Evision.Mat
.Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.
xi:
double
.The parameter xi for CMei's model
d:
Evision.Mat
.Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$.
Return
imagePoints:
Evision.Mat.t()
.Output array of image points: vector of vectors of Vec2f, or a 1xN/Nx1 2-channel Mat of type CV_32F. CV_64F is also acceptable.
jacobian:
Evision.Mat.t()
.Optional output 2Nx16 Jacobian matrix of type CV_64F, containing the derivatives of image pixel points w.r.t. the parameters $om, T, f_x, f_y, s, c_x, c_y, \xi, k_1, k_2, p_1, p_2$. This matrix is used in calibration by optimization.
The function projects 3D object points from world coordinates to image pixels, parameterized by the intrinsic and extrinsic parameters. Optionally, it also computes a by-product: the Jacobian matrix containing the derivatives of image pixel points w.r.t. the intrinsic and extrinsic parameters.
Python prototype (for reference only):
projectPoints(objectPoints, rvec, tvec, K, xi, D[, imagePoints[, jacobian]]) -> imagePoints, jacobian
projectPoints(objectPoints, rvec, tvec, k, xi, d, opts)
@spec projectPoints( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), number(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil ) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Projects points for omnidirectional camera using CMei's model
Positional Arguments
objectPoints:
Evision.Mat
.Object points in world coordinates: vector of vectors of Vec3f, or a 1xN/Nx1 3-channel Mat of type CV_32F, where N is the number of points. CV_64F is also acceptable.
rvec:
Evision.Mat
.Vector of rotation between the world coordinate system and the camera coordinate system, i.e., om
tvec:
Evision.Mat
.Vector of translation between the pattern coordinate system and the camera coordinate system
k:
Evision.Mat
.Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.
xi:
double
.The parameter xi for CMei's model
d:
Evision.Mat
.Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$.
Return
imagePoints:
Evision.Mat.t()
.Output array of image points: vector of vectors of Vec2f, or a 1xN/Nx1 2-channel Mat of type CV_32F. CV_64F is also acceptable.
jacobian:
Evision.Mat.t()
.Optional output 2Nx16 Jacobian matrix of type CV_64F, containing the derivatives of image pixel points w.r.t. the parameters $om, T, f_x, f_y, s, c_x, c_y, \xi, k_1, k_2, p_1, p_2$. This matrix is used in calibration by optimization.
The function projects 3D object points from world coordinates to image pixels, parameterized by the intrinsic and extrinsic parameters. Optionally, it also computes a by-product: the Jacobian matrix containing the derivatives of image pixel points w.r.t. the intrinsic and extrinsic parameters.
Python prototype (for reference only):
projectPoints(objectPoints, rvec, tvec, K, xi, D[, imagePoints[, jacobian]]) -> imagePoints, jacobian
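Elixir example (illustrative sketch):
A sketch of projecting a few world points, assuming k and d come from calibrate/8. Note that xi is a plain number for this function (see the spec above), unlike the Mat-valued xi elsewhere in this module; the point values and xi = 0.8 are placeholders.

    # Two object points roughly one metre in front of the camera (1x2 CV_32FC3).
    points_3d =
      Evision.Mat.literal([[[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]]], :f32)
      |> Evision.Mat.last_dim_as_channel()

    rvec = Evision.Mat.zeros({3, 1}, :f64)   # no rotation
    tvec = Evision.Mat.zeros({3, 1}, :f64)   # no translation

    {image_points, _jacobian} =
      Evision.Omnidir.projectPoints(points_3d, rvec, tvec, k, 0.8, d)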
stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imageSize1, imageSize2, k1, xi1, d1, k2, xi2, d2, flags, criteria)
@spec stereoCalibrate( [Evision.Mat.maybe_mat_in()], [Evision.Mat.maybe_mat_in()], [Evision.Mat.maybe_mat_in()], {number(), number()}, {number(), number()}, Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), {integer(), integer(), number()} ) :: {number(), [Evision.Mat.t()], [Evision.Mat.t()], [Evision.Mat.t()], Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), [Evision.Mat.t()], [Evision.Mat.t()], Evision.Mat.t()} | {:error, String.t()}
Stereo calibration for omnidirectional camera model. It computes the intrinsic parameters for two cameras and the extrinsic parameters between two cameras. The default depth of outputs is CV_64F.
Positional Arguments
imageSize1:
Size
.Image size of calibration images of the first camera.
imageSize2:
Size
.Image size of calibration images of the second camera.
flags:
integer()
.The flags that control stereoCalibrate
criteria:
TermCriteria
.Termination criteria for optimization
Return
retval:
double
objectPoints:
[Evision.Mat]
.Object points in world (pattern) coordinates. Its type is vector<vector<Vec3f>>. It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth CV_64F is also acceptable.
imagePoints1:
[Evision.Mat]
.The corresponding image points of the first camera, with type vector<vector<Vec2f> >. It must be the same size and the same type as objectPoints.
imagePoints2:
[Evision.Mat]
.The corresponding image points of the second camera, with type vector<vector<Vec2f> >. It must be the same size and the same type as objectPoints.
k1:
Evision.Mat.t()
.Output camera matrix for the first camera.
xi1:
Evision.Mat.t()
.Output parameter xi of CMei's model for the first camera
d1:
Evision.Mat.t()
.Output distortion parameters $(k_1, k_2, p_1, p_2)$ for the first camera
k2:
Evision.Mat.t()
.Output camera matrix for the second camera.
xi2:
Evision.Mat.t()
.Output parameter xi of CMei's model for the second camera
d2:
Evision.Mat.t()
.Output distortion parameters $(k_1, k_2, p_1, p_2)$ for the second camera
rvec:
Evision.Mat.t()
.Output rotation between the first and second camera
tvec:
Evision.Mat.t()
.Output translation between the first and second camera
rvecsL:
[Evision.Mat]
.Output rotation for each image of the first camera
tvecsL:
[Evision.Mat]
.Output translation for each image of the first camera
idx:
Evision.Mat.t()
.Indices of the image pairs that pass initialization and are actually used in calibration, so the size of rvecsL is the same as idx.total().
Python prototype (for reference only):
stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imageSize1, imageSize2, K1, xi1, D1, K2, xi2, D2, flags, criteria[, rvec[, tvec[, rvecsL[, tvecsL[, idx]]]]]) -> retval, objectPoints, imagePoints1, imagePoints2, K1, xi1, D1, K2, xi2, D2, rvec, tvec, rvecsL, tvecsL, idx
stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imageSize1, imageSize2, k1, xi1, d1, k2, xi2, d2, flags, criteria, opts)
@spec stereoCalibrate( [Evision.Mat.maybe_mat_in()], [Evision.Mat.maybe_mat_in()], [Evision.Mat.maybe_mat_in()], {number(), number()}, {number(), number()}, Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), {integer(), integer(), number()}, [{atom(), term()}, ...] | nil ) :: {number(), [Evision.Mat.t()], [Evision.Mat.t()], [Evision.Mat.t()], Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), [Evision.Mat.t()], [Evision.Mat.t()], Evision.Mat.t()} | {:error, String.t()}
Stereo calibration for omnidirectional camera model. It computes the intrinsic parameters for two cameras and the extrinsic parameters between two cameras. The default depth of outputs is CV_64F.
Positional Arguments
imageSize1:
Size
.Image size of calibration images of the first camera.
imageSize2:
Size
.Image size of calibration images of the second camera.
flags:
integer()
.The flags that control stereoCalibrate
criteria:
TermCriteria
.Termination criteria for optimization
Return
retval:
double
objectPoints:
[Evision.Mat]
.Object points in world (pattern) coordinates. Its type is vector<vector<Vec3f>>. It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth CV_64F is also acceptable.
imagePoints1:
[Evision.Mat]
.The corresponding image points of the first camera, with type vector<vector<Vec2f> >. It must be the same size and the same type as objectPoints.
imagePoints2:
[Evision.Mat]
.The corresponding image points of the second camera, with type vector<vector<Vec2f> >. It must be the same size and the same type as objectPoints.
k1:
Evision.Mat.t()
.Output camera matrix for the first camera.
xi1:
Evision.Mat.t()
.Output parameter xi of CMei's model for the first camera
d1:
Evision.Mat.t()
.Output distortion parameters $(k_1, k_2, p_1, p_2)$ for the first camera
k2:
Evision.Mat.t()
.Output camera matrix for the second camera.
xi2:
Evision.Mat.t()
.Output parameter xi of CMei's model for the second camera
d2:
Evision.Mat.t()
.Output distortion parameters $(k_1, k_2, p_1, p_2)$ for the second camera
rvec:
Evision.Mat.t()
.Output rotation between the first and second camera
tvec:
Evision.Mat.t()
.Output translation between the first and second camera
rvecsL:
[Evision.Mat]
.Output rotation for each image of the first camera
tvecsL:
[Evision.Mat]
.Output translation for each image of the first camera
idx:
Evision.Mat.t()
.Indices of the image pairs that pass initialization and are actually used in calibration, so the size of rvecsL is the same as idx.total().
Python prototype (for reference only):
stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imageSize1, imageSize2, K1, xi1, D1, K2, xi2, D2, flags, criteria[, rvec[, tvec[, rvecsL[, tvecsL[, idx]]]]]) -> retval, objectPoints, imagePoints1, imagePoints2, K1, xi1, D1, K2, xi2, D2, rvec, tvec, rvecsL, tvecsL, idx
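Elixir example (illustrative sketch):
A sketch of a stereo calibration call, assuming object_points, image_points1, and image_points2 are equal-length lists of per-view corner Mats gathered as in the calibrate/8 example above. The initial guesses, flags, and criteria values are placeholders; the 15-element success tuple mirrors the spec.

    zero = fn shape -> Evision.Mat.zeros(shape, :f64) end
    {k1, xi1, d1} = {zero.({3, 3}), zero.({1, 1}), zero.({1, 4})}
    {k2, xi2, d2} = {zero.({3, 3}), zero.({1, 1}), zero.({1, 4})}

    case Evision.Omnidir.stereoCalibrate(
           object_points, image_points1, image_points2,
           {640, 480}, {640, 480},
           k1, xi1, d1, k2, xi2, d2,
           0, {3, 300, 1.0e-8}
         ) do
      {rms, _obj, _im1, _im2, k1, xi1, d1, k2, xi2, d2, rvec, tvec, _rvecsL, _tvecsL, _idx} ->
        {rms, {k1, xi1, d1}, {k2, xi2, d2}, rvec, tvec}

      {:error, reason} ->
        raise "stereo calibration failed: #{reason}"
    end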
stereoReconstruct(image1, image2, k1, d1, xi1, k2, d2, xi2, r, t, flag, numDisparities, sADWindowSize)
@spec stereoReconstruct( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), integer(), integer() ) :: {Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Stereo 3D reconstruction from a pair of images
Positional Arguments
image1:
Evision.Mat
.The first input image
image2:
Evision.Mat
.The second input image
k1:
Evision.Mat
.Input camera matrix of the first camera
d1:
Evision.Mat
.Input distortion parameters $(k_1, k_2, p_1, p_2)$ for the first camera
xi1:
Evision.Mat
.Input parameter xi for the first camera for CMei's model
k2:
Evision.Mat
.Input camera matrix of the second camera
d2:
Evision.Mat
.Input distortion parameters $(k_1, k_2, p_1, p_2)$ for the second camera
xi2:
Evision.Mat
.Input parameter xi for the second camera for CMei's model
r:
Evision.Mat
.Rotation between the first and second camera
t:
Evision.Mat
.Translation between the first and second camera
flag:
integer()
.Flag of the rectification type, either RECTIFY_PERSPECTIVE or RECTIFY_LONGLATI
numDisparities:
integer()
.The parameter 'numDisparities' in StereoSGBM; see StereoSGBM for details.
sADWindowSize:
integer()
.The parameter 'SADWindowSize' in StereoSGBM; see StereoSGBM for details.
Keyword Arguments
newSize:
Size
.Image size of the rectified image; see omnidir::undistortImage
knew:
Evision.Mat
.New camera matrix of the rectified image; see omnidir::undistortImage
pointType:
integer()
.Point cloud type; it can be XYZRGB or XYZ
Return
disparity:
Evision.Mat.t()
.Disparity map generated by stereo matching
image1Rec:
Evision.Mat.t()
.Rectified image of the first image
image2Rec:
Evision.Mat.t()
.Rectified image of the second image
pointCloud:
Evision.Mat.t()
.Point cloud of 3D reconstruction, with type CV_64FC3
Python prototype (for reference only):
stereoReconstruct(image1, image2, K1, D1, xi1, K2, D2, xi2, R, T, flag, numDisparities, SADWindowSize[, disparity[, image1Rec[, image2Rec[, newSize[, Knew[, pointCloud[, pointType]]]]]]]) -> disparity, image1Rec, image2Rec, pointCloud
stereoReconstruct(image1, image2, k1, d1, xi1, k2, d2, xi2, r, t, flag, numDisparities, sADWindowSize, opts)
@spec stereoReconstruct( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), integer(), integer(), [knew: term(), newSize: term(), pointType: term()] | nil ) :: {Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Stereo 3D reconstruction from a pair of images
Positional Arguments
image1:
Evision.Mat
.The first input image
image2:
Evision.Mat
.The second input image
k1:
Evision.Mat
.Input camera matrix of the first camera
d1:
Evision.Mat
.Input distortion parameters $(k_1, k_2, p_1, p_2)$ for the first camera
xi1:
Evision.Mat
.Input parameter xi for the first camera for CMei's model
k2:
Evision.Mat
.Input camera matrix of the second camera
d2:
Evision.Mat
.Input distortion parameters $(k_1, k_2, p_1, p_2)$ for the second camera
xi2:
Evision.Mat
.Input parameter xi for the second camera for CMei's model
r:
Evision.Mat
.Rotation between the first and second camera
t:
Evision.Mat
.Translation between the first and second camera
flag:
integer()
.Flag of the rectification type, either RECTIFY_PERSPECTIVE or RECTIFY_LONGLATI
numDisparities:
integer()
.The parameter 'numDisparities' in StereoSGBM; see StereoSGBM for details.
sADWindowSize:
integer()
.The parameter 'SADWindowSize' in StereoSGBM; see StereoSGBM for details.
Keyword Arguments
newSize:
Size
.Image size of the rectified image; see omnidir::undistortImage
knew:
Evision.Mat
.New camera matrix of the rectified image; see omnidir::undistortImage
pointType:
integer()
.Point cloud type; it can be XYZRGB or XYZ
Return
disparity:
Evision.Mat.t()
.Disparity map generated by stereo matching
image1Rec:
Evision.Mat.t()
.Rectified image of the first image
image2Rec:
Evision.Mat.t()
.Rectified image of the second image
pointCloud:
Evision.Mat.t()
.Point cloud of 3D reconstruction, with type CV_64FC3
Python prototype (for reference only):
stereoReconstruct(image1, image2, K1, D1, xi1, K2, D2, xi2, R, T, flag, numDisparities, SADWindowSize[, disparity[, image1Rec[, image2Rec[, newSize[, Knew[, pointCloud[, pointType]]]]]]]) -> disparity, image1Rec, image2Rec, pointCloud
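Elixir example (illustrative sketch):
A sketch of dense reconstruction from a calibrated pair, reusing the intrinsics and extrinsics produced by stereoCalibrate above. The file names are placeholders, and the numeric enum values (3 = RECTIFY_LONGLATI, 1 = XYZRGB) are assumptions taken from OpenCV's cv::omnidir header.

    image1 = Evision.imread("left.jpg")
    image2 = Evision.imread("right.jpg")

    {disparity, _image1_rec, _image2_rec, point_cloud} =
      Evision.Omnidir.stereoReconstruct(
        image1, image2, k1, d1, xi1, k2, d2, xi2, rvec, tvec,
        3,        # flag: RECTIFY_LONGLATI (assumed value)
        16 * 4,   # numDisparities: StereoSGBM expects a multiple of 16
        5,        # SADWindowSize
        newSize: {640, 480},
        pointType: 1   # XYZRGB (assumed value)
      )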
stereoRectify(r, t)
@spec stereoRectify(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Stereo rectification for omnidirectional camera model. It computes the rectification rotations for two cameras.
Positional Arguments
r:
Evision.Mat
.Rotation between the first and second camera
t:
Evision.Mat
.Translation between the first and second camera
Return
r1:
Evision.Mat.t()
.Output 3x3 rotation matrix for the first camera
r2:
Evision.Mat.t()
.Output 3x3 rotation matrix for the second camera
Python prototype (for reference only):
stereoRectify(R, T[, R1[, R2]]) -> R1, R2
stereoRectify(r, t, opts)
@spec stereoRectify( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil ) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Stereo rectification for omnidirectional camera model. It computes the rectification rotations for two cameras.
Positional Arguments
r:
Evision.Mat
.Rotation between the first and second camera
t:
Evision.Mat
.Translation between the first and second camera
Return
r1:
Evision.Mat.t()
.Output 3x3 rotation matrix for the first camera
r2:
Evision.Mat.t()
.Output 3x3 rotation matrix for the second camera
Python prototype (for reference only):
stereoRectify(R, T[, R1[, R2]]) -> R1, R2
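Elixir example (illustrative sketch):
A sketch showing how the two rotations feed per-camera map computation, assuming rvec and tvec come from stereoCalibrate/13 and p_new is a chosen new camera (3x3) or projection (3x4) matrix; 5 and 1 are the assumed CV_32FC1 and RECTIFY_PERSPECTIVE values, as above.

    {r1, r2} = Evision.Omnidir.stereoRectify(rvec, tvec)

    # One map pair per camera; feed each pair to Evision.remap/4.
    {m1_left, m2_left} =
      Evision.Omnidir.initUndistortRectifyMap(k1, d1, xi1, r1, p_new, {640, 480}, 5, 1)

    {m1_right, m2_right} =
      Evision.Omnidir.initUndistortRectifyMap(k2, d2, xi2, r2, p_new, {640, 480}, 5, 1)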
undistortImage(distorted, k, d, xi, flags)
@spec undistortImage( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer() ) :: Evision.Mat.t() | {:error, String.t()}
Undistort omnidirectional images to perspective images
Positional Arguments
distorted:
Evision.Mat
.The input omnidirectional image.
k:
Evision.Mat
.Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.
d:
Evision.Mat
.Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$.
xi:
Evision.Mat
.The parameter xi for CMei's model.
flags:
integer()
.Flag that indicates the rectification type; RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI, and RECTIFY_STEREOGRAPHIC are supported
Keyword Arguments
knew:
Evision.Mat
.Camera matrix of the undistorted image. If it is not assigned, K is used.
new_size:
Size
.The new image size. By default, it is the size of distorted.
r:
Evision.Mat
.Rotation matrix between the input and output images. By default, it is the identity matrix.
Return
undistorted:
Evision.Mat.t()
.The output undistorted image.
Python prototype (for reference only):
undistortImage(distorted, K, D, xi, flags[, undistorted[, Knew[, new_size[, R]]]]) -> undistorted
undistortImage(distorted, k, d, xi, flags, opts)
@spec undistortImage( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), integer(), [knew: term(), new_size: term(), r: term()] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Undistort omnidirectional images to perspective images
Positional Arguments
distorted:
Evision.Mat
.The input omnidirectional image.
k:
Evision.Mat
.Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.
d:
Evision.Mat
.Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$.
xi:
Evision.Mat
.The parameter xi for CMei's model.
flags:
integer()
.Flag that indicates the rectification type; RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI, and RECTIFY_STEREOGRAPHIC are supported
Keyword Arguments
knew:
Evision.Mat
.Camera matrix of the undistorted image. If it is not assigned, K is used.
new_size:
Size
.The new image size. By default, it is the size of distorted.
r:
Evision.Mat
.Rotation matrix between the input and output images. By default, it is the identity matrix.
Return
undistorted:
Evision.Mat.t()
.The output undistorted image.
Python prototype (for reference only):
undistortImage(distorted, K, D, xi, flags[, undistorted[, Knew[, new_size[, R]]]]) -> undistorted
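Elixir example (illustrative sketch):
A sketch of undistorting one omnidirectional image to a perspective view, assuming k, d, and xi (a Mat here) come from calibrate/8. The flag value 1 = RECTIFY_PERSPECTIVE and the knew entries are assumptions; knew sets the focal length and principal point of the output view.

    distorted = Evision.imread("omni.jpg")

    # Hypothetical new camera matrix: a shorter focal length widens the view.
    knew =
      Evision.Mat.literal(
        [[320.0, 0.0, 320.0], [0.0, 320.0, 240.0], [0.0, 0.0, 1.0]],
        :f64
      )

    undistorted =
      Evision.Omnidir.undistortImage(distorted, k, d, xi, 1,
        knew: knew,
        new_size: {640, 480}
      )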
undistortPoints(distorted, k, d, xi, r)
@spec undistortPoints( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in() ) :: Evision.Mat.t() | {:error, String.t()}
Undistort 2D image points for omnidirectional camera using CMei's model
Positional Arguments
distorted:
Evision.Mat
.Array of distorted image points: vector of Vec2f, or a 1xN/Nx1 2-channel Mat of type CV_32F; CV_64F depth is also acceptable
k:
Evision.Mat
.Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.
d:
Evision.Mat
.Distortion coefficients $(k_1, k_2, p_1, p_2)$.
xi:
Evision.Mat
.The parameter xi for CMei's model
r:
Evision.Mat
.Rotation transform between the original and object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel
Return
undistorted:
Evision.Mat.t()
.Array of normalized object points: vector of Vec2f/Vec2d, or a 1xN/Nx1 2-channel Mat with the same depth as the distorted points.
Python prototype (for reference only):
undistortPoints(distorted, K, D, xi, R[, undistorted]) -> undistorted
undistortPoints(distorted, k, d, xi, r, opts)
@spec undistortPoints( Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil ) :: Evision.Mat.t() | {:error, String.t()}
Undistort 2D image points for omnidirectional camera using CMei's model
Positional Arguments
distorted:
Evision.Mat
.Array of distorted image points: vector of Vec2f, or a 1xN/Nx1 2-channel Mat of type CV_32F; CV_64F depth is also acceptable
k:
Evision.Mat
.Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.
d:
Evision.Mat
.Distortion coefficients $(k_1, k_2, p_1, p_2)$.
xi:
Evision.Mat
.The parameter xi for CMei's model
r:
Evision.Mat
.Rotation transform between the original and object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel
Return
undistorted:
Evision.Mat.t()
.Array of normalized object points: vector of Vec2f/Vec2d, or a 1xN/Nx1 2-channel Mat with the same depth as the distorted points.
Python prototype (for reference only):
undistortPoints(distorted, K, D, xi, R[, undistorted]) -> undistorted
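Elixir example (illustrative sketch):
A sketch of undistorting detected pixel coordinates to normalized coordinates, with k, d, and xi from calibrate/8; the pixel values are placeholders and R is the identity (no rectifying rotation).

    # Two placeholder pixel locations as a 1x2 CV_32FC2 Mat.
    distorted_pts =
      Evision.Mat.literal([[[320.0, 240.0], [400.0, 250.0]]], :f32)
      |> Evision.Mat.last_dim_as_channel()

    r = Evision.Mat.literal([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], :f64)

    normalized = Evision.Omnidir.undistortPoints(distorted_pts, k, d, xi, r)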