Evision.Omnidir (Evision v0.1.38)

Summary

Types

t()

Type that represents an Omnidir struct.

Functions

Performs omnidirectional camera calibration. The default depth of the outputs is CV_64F.

Performs omnidirectional camera calibration. The default depth of the outputs is CV_64F.

Computes undistortion and rectification maps for an omnidirectional camera image transformed by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.

Computes undistortion and rectification maps for an omnidirectional camera image transformed by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.

Projects points for an omnidirectional camera using CMei's model.

Projects points for an omnidirectional camera using CMei's model.

Stereo calibration for the omnidirectional camera model. It computes the intrinsic parameters for two cameras and the extrinsic parameters between the two cameras. The default depth of the outputs is CV_64F.

Stereo calibration for the omnidirectional camera model. It computes the intrinsic parameters for two cameras and the extrinsic parameters between the two cameras. The default depth of the outputs is CV_64F.

Stereo 3D reconstruction from a pair of images.

Stereo 3D reconstruction from a pair of images.

Stereo rectification for the omnidirectional camera model. It computes the rectification rotations for the two cameras.

Stereo rectification for the omnidirectional camera model. It computes the rectification rotations for the two cameras.

Undistorts omnidirectional images to perspective images.

Undistorts omnidirectional images to perspective images.

Undistorts 2D image points for an omnidirectional camera using CMei's model.

Undistorts 2D image points for an omnidirectional camera using CMei's model.

Types

@type t() :: %Evision.Omnidir{ref: reference()}

Type that represents an Omnidir struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

calibrate(objectPoints, imagePoints, size, k, xi, d, flags, criteria)

Performs omnidirectional camera calibration. The default depth of the outputs is CV_64F.

Positional Arguments
  • objectPoints: [Evision.Mat].

    Vector of vectors of Vec3f object points in world (pattern) coordinates. It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth CV_64F is also acceptable.

  • imagePoints: [Evision.Mat].

    Vector of vectors of Vec2f image points corresponding to objectPoints. It must be the same size and the same type as objectPoints.

  • size: Size.

    Image size of calibration images.

  • flags: int.

    The flags that control the calibration

  • criteria: TermCriteria.

    Termination criteria for optimization

Return
  • retval: double

  • k: Evision.Mat.t().

    Output calibrated camera matrix.

  • xi: Evision.Mat.t().

    Output parameter xi for CMei's model

  • d: Evision.Mat.t().

    Output distortion parameters $(k_1, k_2, p_1, p_2)$

  • rvecs: [Evision.Mat].

    Output rotations for each calibration image

  • tvecs: [Evision.Mat].

    Output translations for each calibration image

  • idx: Evision.Mat.t().

    Indices of the images that pass initialization and are actually used in calibration; the size of rvecs is the same as idx.total().

Python prototype (for reference only):

calibrate(objectPoints, imagePoints, size, K, xi, D, flags, criteria[, rvecs[, tvecs[, idx]]]) -> retval, K, xi, D, rvecs, tvecs, idx
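
For illustration, here is a minimal Elixir sketch of a calibration call. The corner data, image size, flag value, the {type, max_count, epsilon} tuple convention for TermCriteria, and the Evision.Constant/Evision.Mat.zeros names are assumptions of this sketch; adapt them to your project.

# `object_points` / `image_points` are placeholders: lists of Evision.Mat,
# one 1xN 3-channel (resp. 2-channel) CV_32F Mat per calibration view.
object_points = board_corners_3d
image_points = detected_corners_2d

# Termination criteria as a {type, max_count, epsilon} tuple (assumed convention).
criteria =
  {Evision.Constant.cv_TERM_CRITERIA_COUNT() + Evision.Constant.cv_TERM_CRITERIA_EPS(),
   200, 1.0e-8}

# Zero-filled K, xi and D are refined internally; flags = 0 keeps the defaults.
case Evision.Omnidir.calibrate(object_points, image_points, {1280, 960},
       Evision.Mat.zeros({3, 3}, :f64), Evision.Mat.zeros({1, 1}, :f64),
       Evision.Mat.zeros({1, 4}, :f64), 0, criteria) do
  {rms, k, xi, d, _rvecs, _tvecs, _idx} -> IO.inspect({rms, k, xi, d})
  {:error, reason} -> IO.puts("calibration failed: #{inspect(reason)}")
end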

calibrate(objectPoints, imagePoints, size, k, xi, d, flags, criteria, opts)

Performs omnidirectional camera calibration. The default depth of the outputs is CV_64F.

Positional Arguments
  • objectPoints: [Evision.Mat].

    Vector of vectors of Vec3f object points in world (pattern) coordinates. It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth CV_64F is also acceptable.

  • imagePoints: [Evision.Mat].

    Vector of vectors of Vec2f image points corresponding to objectPoints. It must be the same size and the same type as objectPoints.

  • size: Size.

    Image size of calibration images.

  • flags: int.

    The flags that control the calibration

  • criteria: TermCriteria.

    Termination criteria for optimization

Return
  • retval: double

  • k: Evision.Mat.t().

    Output calibrated camera matrix.

  • xi: Evision.Mat.t().

    Output parameter xi for CMei's model

  • d: Evision.Mat.t().

    Output distortion parameters $(k_1, k_2, p_1, p_2)$

  • rvecs: [Evision.Mat].

    Output rotations for each calibration image

  • tvecs: [Evision.Mat].

    Output translations for each calibration image

  • idx: Evision.Mat.t().

    Indices of the images that pass initialization and are actually used in calibration; the size of rvecs is the same as idx.total().

Python prototype (for reference only):

calibrate(objectPoints, imagePoints, size, K, xi, D, flags, criteria[, rvecs[, tvecs[, idx]]]) -> retval, K, xi, D, rvecs, tvecs, idx

initUndistortRectifyMap(k, d, xi, r, p, size, m1type, flags)

Computes undistortion and rectification maps for an omnidirectional camera image transformed by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.

Positional Arguments
  • k: Evision.Mat.t().

    Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$, with depth CV_32F or CV_64F

  • d: Evision.Mat.t().

    Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$, with depth CV_32F or CV_64F

  • xi: Evision.Mat.t().

    The parameter xi for CMei's model

  • r: Evision.Mat.t().

    Rotation transform between the original and object space: 3x3 1-channel, or vector: 3x1/1x3, with depth CV_32F or CV_64F

  • p: Evision.Mat.t().

    New camera matrix (3x3) or new projection matrix (3x4)

  • size: Size.

    Undistorted image size.

  • m1type: int.

    Type of the first output map; it can be CV_32FC1 or CV_16SC2. See convertMaps() for details.

  • flags: int.

    Flags indicating the rectification type; RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI and RECTIFY_STEREOGRAPHIC are supported.

Return
  • map1: Evision.Mat.t().

    The first output map.

  • map2: Evision.Mat.t().

    The second output map.

Python prototype (for reference only):

initUndistortRectifyMap(K, D, xi, R, P, size, m1type, flags[, map1[, map2]]) -> map1, map2
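
As a sketch, the maps can be built once and applied to every frame with Evision.remap/4. The raw integers 5 (CV_32FC1) and 1 (omnidir::RECTIFY_PERSPECTIVE) are OpenCV's enum values, used instead of guessing Evision constant names; k, d and xi come from a prior calibration, and Evision.Mat.literal/2 as the matrix constructor is an assumption.

# Identity rotation and a plain reuse of K as the new camera matrix.
r = Evision.Mat.literal([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], :f64)
p = k

# m1type = 5 (CV_32FC1); flags = 1 (omnidir::RECTIFY_PERSPECTIVE).
{map1, map2} =
  Evision.Omnidir.initUndistortRectifyMap(k, d, xi, r, p, {1280, 960}, 5, 1)

# Reuse the maps for each incoming frame.
undistorted = Evision.remap(frame, map1, map2, Evision.Constant.cv_INTER_LINEAR())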

initUndistortRectifyMap(k, d, xi, r, p, size, m1type, flags, opts)

Computes undistortion and rectification maps for an omnidirectional camera image transformed by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.

Positional Arguments
  • k: Evision.Mat.t().

    Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$, with depth CV_32F or CV_64F

  • d: Evision.Mat.t().

    Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$, with depth CV_32F or CV_64F

  • xi: Evision.Mat.t().

    The parameter xi for CMei's model

  • r: Evision.Mat.t().

    Rotation transform between the original and object space: 3x3 1-channel, or vector: 3x1/1x3, with depth CV_32F or CV_64F

  • p: Evision.Mat.t().

    New camera matrix (3x3) or new projection matrix (3x4)

  • size: Size.

    Undistorted image size.

  • m1type: int.

    Type of the first output map; it can be CV_32FC1 or CV_16SC2. See convertMaps() for details.

  • flags: int.

    Flags indicating the rectification type; RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI and RECTIFY_STEREOGRAPHIC are supported.

Return
  • map1: Evision.Mat.t().

    The first output map.

  • map2: Evision.Mat.t().

    The second output map.

Python prototype (for reference only):

initUndistortRectifyMap(K, D, xi, R, P, size, m1type, flags[, map1[, map2]]) -> map1, map2

projectPoints(objectPoints, rvec, tvec, k, xi, d)

Projects points for an omnidirectional camera using CMei's model.

Positional Arguments
  • objectPoints: Evision.Mat.t().

    Object points in world coordinates: vector of vectors of Vec3f, or a 1xN/Nx1 3-channel Mat of type CV_32F, where N is the number of points. CV_64F is also acceptable.

  • rvec: Evision.Mat.t().

    Rotation vector between the world coordinate frame and the camera coordinate frame, i.e., om

  • tvec: Evision.Mat.t().

    Translation vector between the pattern coordinate frame and the camera coordinate frame

  • k: Evision.Mat.t().

    Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.

  • xi: double.

    The parameter xi for CMei's model

  • d: Evision.Mat.t().

    Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$.

Return
  • imagePoints: Evision.Mat.t().

    Output array of image points: vector of vectors of Vec2f, or a 1xN/Nx1 2-channel Mat of type CV_32F. CV_64F is also acceptable.

  • jacobian: Evision.Mat.t().

    Optional output 2Nx16 CV_64F Jacobian matrix containing the derivatives of the image pixel points with respect to the parameters $om, T, f_x, f_y, s, c_x, c_y, \xi, k_1, k_2, p_1, p_2$. This matrix is used in calibration by optimization.

The function projects 3D object points from world coordinates to image pixels, parameterized by the intrinsic and extrinsic parameters. It also optionally computes a by-product: the Jacobian matrix containing the derivatives of the image pixel points with respect to the intrinsic and extrinsic parameters.

Python prototype (for reference only):

projectPoints(objectPoints, rvec, tvec, K, xi, D[, imagePoints[, jacobian]]) -> imagePoints, jacobian
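
A small forward-projection sketch. The point values and the xi value are made up, k and d come from a prior calibration, and Evision.Mat.literal/2 as the Mat constructor is an assumption.

# A 1x2 3-channel CV_32F Mat holding two object points about one metre ahead.
object_points = Evision.Mat.literal([[[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]]], :f32)
rvec = Evision.Mat.zeros({3, 1}, :f64)  # no rotation
tvec = Evision.Mat.zeros({3, 1}, :f64)  # no translation

# xi is passed as a plain double here (placeholder value 1.2).
{image_points, _jacobian} =
  Evision.Omnidir.projectPoints(object_points, rvec, tvec, k, 1.2, d)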

projectPoints(objectPoints, rvec, tvec, k, xi, d, opts)

Projects points for an omnidirectional camera using CMei's model.

Positional Arguments
  • objectPoints: Evision.Mat.t().

    Object points in world coordinates: vector of vectors of Vec3f, or a 1xN/Nx1 3-channel Mat of type CV_32F, where N is the number of points. CV_64F is also acceptable.

  • rvec: Evision.Mat.t().

    Rotation vector between the world coordinate frame and the camera coordinate frame, i.e., om

  • tvec: Evision.Mat.t().

    Translation vector between the pattern coordinate frame and the camera coordinate frame

  • k: Evision.Mat.t().

    Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.

  • xi: double.

    The parameter xi for CMei's model

  • d: Evision.Mat.t().

    Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$.

Return
  • imagePoints: Evision.Mat.t().

    Output array of image points: vector of vectors of Vec2f, or a 1xN/Nx1 2-channel Mat of type CV_32F. CV_64F is also acceptable.

  • jacobian: Evision.Mat.t().

    Optional output 2Nx16 CV_64F Jacobian matrix containing the derivatives of the image pixel points with respect to the parameters $om, T, f_x, f_y, s, c_x, c_y, \xi, k_1, k_2, p_1, p_2$. This matrix is used in calibration by optimization.

The function projects 3D object points from world coordinates to image pixels, parameterized by the intrinsic and extrinsic parameters. It also optionally computes a by-product: the Jacobian matrix containing the derivatives of the image pixel points with respect to the intrinsic and extrinsic parameters.

Python prototype (for reference only):

projectPoints(objectPoints, rvec, tvec, K, xi, D[, imagePoints[, jacobian]]) -> imagePoints, jacobian

stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imageSize1, imageSize2, k1, xi1, d1, k2, xi2, d2, flags, criteria)

Stereo calibration for the omnidirectional camera model. It computes the intrinsic parameters for two cameras and the extrinsic parameters between the two cameras. The default depth of the outputs is CV_64F.

Positional Arguments
  • imageSize1: Size.

    Image size of calibration images of the first camera.

  • imageSize2: Size.

    Image size of calibration images of the second camera.

  • flags: int.

    The flags that control the stereo calibration

  • criteria: TermCriteria.

    Termination criteria for optimization

Return
  • retval: double

  • objectPoints: [Evision.Mat].

    Object points in world (pattern) coordinates. Its type is vector<vector<Vec3f> >. It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth CV_64F is also acceptable.

  • imagePoints1: [Evision.Mat].

    The corresponding image points of the first camera, with type vector<vector<Vec2f> >. It must be the same size and the same type as objectPoints.

  • imagePoints2: [Evision.Mat].

    The corresponding image points of the second camera, with type vector<vector<Vec2f> >. It must be the same size and the same type as objectPoints.

  • k1: Evision.Mat.t().

    Output camera matrix for the first camera.

  • xi1: Evision.Mat.t().

    Output parameter xi of CMei's model for the first camera

  • d1: Evision.Mat.t().

    Output distortion parameters $(k_1, k_2, p_1, p_2)$ for the first camera

  • k2: Evision.Mat.t().

    Output camera matrix for the second camera.

  • xi2: Evision.Mat.t().

    Output parameter xi of CMei's model for the second camera

  • d2: Evision.Mat.t().

    Output distortion parameters $(k_1, k_2, p_1, p_2)$ for the second camera

  • rvec: Evision.Mat.t().

    Output rotation between the first and second camera

  • tvec: Evision.Mat.t().

    Output translation between the first and second camera

  • rvecsL: [Evision.Mat].

    Output rotation for each image of the first camera

  • tvecsL: [Evision.Mat].

    Output translation for each image of the first camera

  • idx: Evision.Mat.t().

    Indices of the image pairs that pass initialization and are actually used in calibration; the size of rvecsL is the same as idx.total().

Python prototype (for reference only):

stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imageSize1, imageSize2, K1, xi1, D1, K2, xi2, D2, flags, criteria[, rvec[, tvec[, rvecsL[, tvecsL[, idx]]]]]) -> retval, objectPoints, imagePoints1, imagePoints2, K1, xi1, D1, K2, xi2, D2, rvec, tvec, rvecsL, tvecsL, idx
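
A condensed sketch of a stereo calibration call and of unpacking its long result tuple. All point lists are placeholders, and the {type, max_count, epsilon} criteria tuple and Evision.Mat.zeros/2 follow the same assumed conventions as the single-camera sketch above.

criteria =
  {Evision.Constant.cv_TERM_CRITERIA_COUNT() + Evision.Constant.cv_TERM_CRITERIA_EPS(),
   200, 1.0e-8}

# Zero-filled in/out intrinsics; flags = 0 keeps the default behaviour.
z = fn shape -> Evision.Mat.zeros(shape, :f64) end

{rms, _obj, _img1, _img2, k1, xi1, d1, k2, xi2, d2, rvec, tvec, _rvecsL, _tvecsL, _idx} =
  Evision.Omnidir.stereoCalibrate(object_points, image_points1, image_points2,
    {1280, 960}, {1280, 960}, z.({3, 3}), z.({1, 1}), z.({1, 4}),
    z.({3, 3}), z.({1, 1}), z.({1, 4}), 0, criteria)

# rms is the reprojection error; {rvec, tvec} relate the first camera to the second.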

stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imageSize1, imageSize2, k1, xi1, d1, k2, xi2, d2, flags, criteria, opts)

Stereo calibration for the omnidirectional camera model. It computes the intrinsic parameters for two cameras and the extrinsic parameters between the two cameras. The default depth of the outputs is CV_64F.

Positional Arguments
  • imageSize1: Size.

    Image size of calibration images of the first camera.

  • imageSize2: Size.

    Image size of calibration images of the second camera.

  • flags: int.

    The flags that control the stereo calibration

  • criteria: TermCriteria.

    Termination criteria for optimization

Return
  • retval: double

  • objectPoints: [Evision.Mat].

    Object points in world (pattern) coordinates. Its type is vector<vector<Vec3f> >. It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth CV_64F is also acceptable.

  • imagePoints1: [Evision.Mat].

    The corresponding image points of the first camera, with type vector<vector<Vec2f> >. It must be the same size and the same type as objectPoints.

  • imagePoints2: [Evision.Mat].

    The corresponding image points of the second camera, with type vector<vector<Vec2f> >. It must be the same size and the same type as objectPoints.

  • k1: Evision.Mat.t().

    Output camera matrix for the first camera.

  • xi1: Evision.Mat.t().

    Output parameter xi of CMei's model for the first camera

  • d1: Evision.Mat.t().

    Output distortion parameters $(k_1, k_2, p_1, p_2)$ for the first camera

  • k2: Evision.Mat.t().

    Output camera matrix for the second camera.

  • xi2: Evision.Mat.t().

    Output parameter xi of CMei's model for the second camera

  • d2: Evision.Mat.t().

    Output distortion parameters $(k_1, k_2, p_1, p_2)$ for the second camera

  • rvec: Evision.Mat.t().

    Output rotation between the first and second camera

  • tvec: Evision.Mat.t().

    Output translation between the first and second camera

  • rvecsL: [Evision.Mat].

    Output rotation for each image of the first camera

  • tvecsL: [Evision.Mat].

    Output translation for each image of the first camera

  • idx: Evision.Mat.t().

    Indices of the image pairs that pass initialization and are actually used in calibration; the size of rvecsL is the same as idx.total().

Python prototype (for reference only):

stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imageSize1, imageSize2, K1, xi1, D1, K2, xi2, D2, flags, criteria[, rvec[, tvec[, rvecsL[, tvecsL[, idx]]]]]) -> retval, objectPoints, imagePoints1, imagePoints2, K1, xi1, D1, K2, xi2, D2, rvec, tvec, rvecsL, tvecsL, idx

stereoReconstruct(image1, image2, k1, d1, xi1, k2, d2, xi2, r, t, flag, numDisparities, sADWindowSize)

Stereo 3D reconstruction from a pair of images.

Positional Arguments
  • image1: Evision.Mat.t().

    The first input image

  • image2: Evision.Mat.t().

    The second input image

  • k1: Evision.Mat.t().

    Input camera matrix of the first camera

  • d1: Evision.Mat.t().

    Input distortion parameters $(k_1, k_2, p_1, p_2)$ for the first camera

  • xi1: Evision.Mat.t().

    Input parameter xi for the first camera for CMei's model

  • k2: Evision.Mat.t().

    Input camera matrix of the second camera

  • d2: Evision.Mat.t().

    Input distortion parameters $(k_1, k_2, p_1, p_2)$ for the second camera

  • xi2: Evision.Mat.t().

    Input parameter xi for the second camera for CMei's model

  • r: Evision.Mat.t().

    Rotation between the first and second camera

  • t: Evision.Mat.t().

    Translation between the first and second camera

  • flag: int.

    Flag of rectification type, RECTIFY_PERSPECTIVE or RECTIFY_LONGLATI

  • numDisparities: int.

    The parameter 'numDisparities' in StereoSGBM; see StereoSGBM for details.

  • sADWindowSize: int.

    The parameter 'SADWindowSize' in StereoSGBM; see StereoSGBM for details.

Keyword Arguments
  • newSize: Size.

    Image size of the rectified image; see omnidir::undistortImage

  • knew: Evision.Mat.t().

    New camera matrix of the rectified image; see omnidir::undistortImage

  • pointType: int.

    Point cloud type; it can be XYZRGB or XYZ

Return
  • disparity: Evision.Mat.t().

    Disparity map generated by stereo matching

  • image1Rec: Evision.Mat.t().

    Rectified image of the first image

  • image2Rec: Evision.Mat.t().

    Rectified image of the second image

  • pointCloud: Evision.Mat.t().

    Point cloud of 3D reconstruction, with type CV_64FC3

Python prototype (for reference only):

stereoReconstruct(image1, image2, K1, D1, xi1, K2, D2, xi2, R, T, flag, numDisparities, SADWindowSize[, disparity[, image1Rec[, image2Rec[, newSize[, Knew[, pointCloud[, pointType]]]]]]]) -> disparity, image1Rec, image2Rec, pointCloud
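
A sketch of a full reconstruction call. The flag value 3 is OpenCV's omnidir::RECTIFY_LONGLATI enum value (used instead of guessing an Evision constant name), numDisparities must be a multiple of 16, and the images and calibration inputs are placeholders.

# image1/image2: the input pair; k*/d*/xi*/r/t: results of stereoCalibrate.
{disparity, image1_rec, image2_rec, point_cloud} =
  Evision.Omnidir.stereoReconstruct(image1, image2, k1, d1, xi1, k2, d2, xi2,
    r, t, 3, 16 * 4, 7, newSize: {640, 320})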

stereoReconstruct(image1, image2, k1, d1, xi1, k2, d2, xi2, r, t, flag, numDisparities, sADWindowSize, opts)

Stereo 3D reconstruction from a pair of images.

Positional Arguments
  • image1: Evision.Mat.t().

    The first input image

  • image2: Evision.Mat.t().

    The second input image

  • k1: Evision.Mat.t().

    Input camera matrix of the first camera

  • d1: Evision.Mat.t().

    Input distortion parameters $(k_1, k_2, p_1, p_2)$ for the first camera

  • xi1: Evision.Mat.t().

    Input parameter xi for the first camera for CMei's model

  • k2: Evision.Mat.t().

    Input camera matrix of the second camera

  • d2: Evision.Mat.t().

    Input distortion parameters $(k_1, k_2, p_1, p_2)$ for the second camera

  • xi2: Evision.Mat.t().

    Input parameter xi for the second camera for CMei's model

  • r: Evision.Mat.t().

    Rotation between the first and second camera

  • t: Evision.Mat.t().

    Translation between the first and second camera

  • flag: int.

    Flag of rectification type, RECTIFY_PERSPECTIVE or RECTIFY_LONGLATI

  • numDisparities: int.

    The parameter 'numDisparities' in StereoSGBM; see StereoSGBM for details.

  • sADWindowSize: int.

    The parameter 'SADWindowSize' in StereoSGBM; see StereoSGBM for details.

Keyword Arguments
  • newSize: Size.

    Image size of the rectified image; see omnidir::undistortImage

  • knew: Evision.Mat.t().

    New camera matrix of the rectified image; see omnidir::undistortImage

  • pointType: int.

    Point cloud type; it can be XYZRGB or XYZ

Return
  • disparity: Evision.Mat.t().

    Disparity map generated by stereo matching

  • image1Rec: Evision.Mat.t().

    Rectified image of the first image

  • image2Rec: Evision.Mat.t().

    Rectified image of the second image

  • pointCloud: Evision.Mat.t().

    Point cloud of 3D reconstruction, with type CV_64FC3

Python prototype (for reference only):

stereoReconstruct(image1, image2, K1, D1, xi1, K2, D2, xi2, R, T, flag, numDisparities, SADWindowSize[, disparity[, image1Rec[, image2Rec[, newSize[, Knew[, pointCloud[, pointType]]]]]]]) -> disparity, image1Rec, image2Rec, pointCloud

stereoRectify(r, t)

@spec stereoRectify(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}

Stereo rectification for the omnidirectional camera model. It computes the rectification rotations for the two cameras.

Positional Arguments
  • r: Evision.Mat.t().

    Rotation between the first and second camera

  • t: Evision.Mat.t().

    Translation between the first and second camera

Return
  • r1: Evision.Mat.t().

    Output 3x3 rotation matrix for the first camera

  • r2: Evision.Mat.t().

    Output 3x3 rotation matrix for the second camera

Python prototype (for reference only):

stereoRectify(R, T[, R1[, R2]]) -> R1, R2
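
A sketch of how the two rotations are typically consumed: computed from the stereo extrinsics, then fed into initUndistortRectifyMap/8 once per camera. p_new and size are placeholders, and the integers 5 (CV_32FC1) and 3 (omnidir::RECTIFY_LONGLATI) are OpenCV enum values, as in the earlier sketches.

# r, t: extrinsics from Evision.Omnidir.stereoCalibrate/13.
{r1, r2} = Evision.Omnidir.stereoRectify(r, t)

# One rectification map pair per camera.
{map1_l, map2_l} = Evision.Omnidir.initUndistortRectifyMap(k1, d1, xi1, r1, p_new, size, 5, 3)
{map1_r, map2_r} = Evision.Omnidir.initUndistortRectifyMap(k2, d2, xi2, r2, p_new, size, 5, 3)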

stereoRectify(r, t, opts)

@spec stereoRectify(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{atom(), term()}, ...] | nil
) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}

Stereo rectification for the omnidirectional camera model. It computes the rectification rotations for the two cameras.

Positional Arguments
  • r: Evision.Mat.t().

    Rotation between the first and second camera

  • t: Evision.Mat.t().

    Translation between the first and second camera

Return
  • r1: Evision.Mat.t().

    Output 3x3 rotation matrix for the first camera

  • r2: Evision.Mat.t().

    Output 3x3 rotation matrix for the second camera

Python prototype (for reference only):

stereoRectify(R, T[, R1[, R2]]) -> R1, R2

undistortImage(distorted, k, d, xi, flags)

Undistorts omnidirectional images to perspective images.

Positional Arguments
  • distorted: Evision.Mat.t().

    The input omnidirectional image.

  • k: Evision.Mat.t().

    Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.

  • d: Evision.Mat.t().

    Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$.

  • xi: Evision.Mat.t().

    The parameter xi for CMei's model.

  • flags: int.

    Flags indicating the rectification type: RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI and RECTIFY_STEREOGRAPHIC

Keyword Arguments
  • knew: Evision.Mat.t().

    Camera matrix of the distorted image. If it is not assigned, it is just K.

  • new_size: Size.

    The new image size. By default, it is the size of distorted.

  • r: Evision.Mat.t().

    Rotation matrix between the input and output images. By default, it is the identity matrix.

Return
  • undistorted: Evision.Mat.t().

    The output undistorted image.

Python prototype (for reference only):

undistortImage(distorted, K, D, xi, flags[, undistorted[, Knew[, new_size[, R]]]]) -> undistorted
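
For instance, a one-shot undistortion to a perspective view. The input path and xi value are hypothetical, k and d come from calibration, the flag value 1 is OpenCV's omnidir::RECTIFY_PERSPECTIVE enum value, and Evision.Mat.literal/2 as the constructor for the 1x1 xi Mat is an assumption.

distorted = Evision.imread("omni.jpg")   # hypothetical input path
xi = Evision.Mat.literal([[1.2]], :f64)  # xi wrapped in a 1x1 Mat (placeholder value)

# flags = 1 (omnidir::RECTIFY_PERSPECTIVE); resize via the new_size keyword.
undistorted =
  Evision.Omnidir.undistortImage(distorted, k, d, xi, 1, new_size: {1280, 960})

Evision.imwrite("perspective.jpg", undistorted)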

undistortImage(distorted, k, d, xi, flags, opts)

@spec undistortImage(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  [{atom(), term()}, ...] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Undistorts omnidirectional images to perspective images.

Positional Arguments
  • distorted: Evision.Mat.t().

    The input omnidirectional image.

  • k: Evision.Mat.t().

    Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.

  • d: Evision.Mat.t().

    Input vector of distortion coefficients $(k_1, k_2, p_1, p_2)$.

  • xi: Evision.Mat.t().

    The parameter xi for CMei's model.

  • flags: int.

    Flags indicating the rectification type: RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI and RECTIFY_STEREOGRAPHIC

Keyword Arguments
  • knew: Evision.Mat.t().

    Camera matrix of the distorted image. If it is not assigned, it is just K.

  • new_size: Size.

    The new image size. By default, it is the size of distorted.

  • r: Evision.Mat.t().

    Rotation matrix between the input and output images. By default, it is the identity matrix.

Return
  • undistorted: Evision.Mat.t().

    The output undistorted image.

Python prototype (for reference only):

undistortImage(distorted, K, D, xi, flags[, undistorted[, Knew[, new_size[, R]]]]) -> undistorted

undistortPoints(distorted, k, d, xi, r)

Undistorts 2D image points for an omnidirectional camera using CMei's model.

Positional Arguments
  • distorted: Evision.Mat.t().

    Array of distorted image points: vector of Vec2f, or a 1xN/Nx1 2-channel Mat of type CV_32F; CV_64F depth is also acceptable.

  • k: Evision.Mat.t().

    Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.

  • d: Evision.Mat.t().

    Distortion coefficients $(k_1, k_2, p_1, p_2)$.

  • xi: Evision.Mat.t().

    The parameter xi for CMei's model

  • r: Evision.Mat.t().

    Rotation transform between the original and object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel

Return
  • undistorted: Evision.Mat.t().

    Array of normalized object points: vector of Vec2f/Vec2d, or a 1xN/Nx1 2-channel Mat with the same depth as the distorted points.

Python prototype (for reference only):

undistortPoints(distorted, K, D, xi, R[, undistorted]) -> undistorted
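
A sketch of normalizing two detected pixel locations. The coordinates are made up, k, d and xi come from calibration, and Evision.Mat.literal/2 as the Mat constructor is an assumption.

# A 1x2 2-channel CV_32F Mat of pixel coordinates.
distorted_points = Evision.Mat.literal([[[320.0, 240.0], [400.0, 260.0]]], :f32)

# Identity rotation: keep the normalized points in the camera's own frame.
r = Evision.Mat.literal([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], :f64)

normalized = Evision.Omnidir.undistortPoints(distorted_points, k, d, xi, r)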

undistortPoints(distorted, k, d, xi, r, opts)

Undistorts 2D image points for an omnidirectional camera using CMei's model.

Positional Arguments
  • distorted: Evision.Mat.t().

    Array of distorted image points: vector of Vec2f, or a 1xN/Nx1 2-channel Mat of type CV_32F; CV_64F depth is also acceptable.

  • k: Evision.Mat.t().

    Camera matrix $K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.

  • d: Evision.Mat.t().

    Distortion coefficients $(k_1, k_2, p_1, p_2)$.

  • xi: Evision.Mat.t().

    The parameter xi for CMei's model

  • r: Evision.Mat.t().

    Rotation transform between the original and object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel

Return
  • undistorted: Evision.Mat.t().

    Array of normalized object points: vector of Vec2f/Vec2d, or a 1xN/Nx1 2-channel Mat with the same depth as the distorted points.

Python prototype (for reference only):

undistortPoints(distorted, K, D, xi, R[, undistorted]) -> undistorted