Evision.FishEye (Evision v0.2.9)

Summary

Types

t()

Type that represents a FishEye struct.

Functions

distortPoints/3, distortPoints/4

  Distorts 2D points using the fisheye model.

estimateNewCameraMatrixForUndistortRectify/4, estimateNewCameraMatrixForUndistortRectify/5

  Estimates new camera intrinsic matrix for undistortion or rectification.

initUndistortRectifyMap/6, initUndistortRectifyMap/7

  Computes undistortion and rectification maps for image transform by #remap. If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.

solvePnP/4, solvePnP/5

  Finds an object pose from 3D-2D point correspondences for the fisheye camera model.

stereoRectify/8, stereoRectify/9

  Stereo rectification for the fisheye camera model.

undistortImage/3, undistortImage/4

  Transforms an image to compensate for fisheye lens distortion.

undistortPoints/3, undistortPoints/4

  Undistorts 2D points using the fisheye model.

Enumerator

Types

@type enum() :: integer()
@type t() :: %Evision.FishEye{ref: reference()}

Type that represents a FishEye struct.

  • ref: reference()

    The underlying erlang resource variable.

Functions

calibrate(named_args)

@spec calibrate(Keyword.t()) :: any() | {:error, String.t()}

calibrate(objectPoints, imagePoints, image_size, k, d)

Performs camera calibration

Positional Arguments
  • objectPoints: [Evision.Mat].

    vector of vectors of calibration pattern points in the calibration pattern coordinate space.

  • imagePoints: [Evision.Mat].

    vector of vectors of the projections of calibration pattern points. imagePoints.size() and objectPoints.size() must be equal, and imagePoints[i].size() must be equal to objectPoints[i].size() for each i.

  • image_size: Size.

    Size of the image used only to initialize the camera intrinsic matrix.

Keyword Arguments
  • flags: integer().

    Different flags that may be zero or a combination of the following values:

    • @ref fisheye::CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center (imageSize is used), and focal distances are computed in a least-squares fashion.
    • @ref fisheye::CALIB_RECOMPUTE_EXTRINSIC Extrinsics will be recomputed after each iteration of intrinsic optimization.
    • @ref fisheye::CALIB_CHECK_COND The functions will check the validity of the condition number.
    • @ref fisheye::CALIB_FIX_SKEW The skew coefficient (alpha) is set to zero and stays zero.
    • @ref fisheye::CALIB_FIX_K1, ..., @ref fisheye::CALIB_FIX_K4 Selected distortion coefficients are set to zero and stay zero.
    • @ref fisheye::CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global optimization. It stays at the center, or at a different location when @ref fisheye::CALIB_USE_INTRINSIC_GUESS is set, too.
    • @ref fisheye::CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global optimization. It is \f$max(width,height)/\pi\f$, or the provided \f$f_x\f$, \f$f_y\f$ when @ref fisheye::CALIB_USE_INTRINSIC_GUESS is set, too.
  • criteria: TermCriteria.

    Termination criteria for the iterative optimization algorithm.

Return
  • retval: double

  • k: Evision.Mat.t().

    Output 3x3 floating-point camera intrinsic matrix \f$\cameramatrix{A}\f$.

  • d: Evision.Mat.t().

    Output vector of distortion coefficients \f$\distcoeffsfisheye\f$.

  • rvecs: [Evision.Mat].

    Output vector of rotation vectors (see Rodrigues) estimated for each pattern view. That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view (k = 0..M-1).

  • tvecs: [Evision.Mat].

    Output vector of translation vectors estimated for each pattern view.

If @ref fisheye::CALIB_USE_INTRINSIC_GUESS is specified, some or all of fx, fy, cx, cy must be initialized before calling the function.

Python prototype (for reference only):

calibrate(objectPoints, imagePoints, image_size, K, D[, rvecs[, tvecs[, flags[, criteria]]]]) -> retval, K, D, rvecs, tvecs

calibrate(objectPoints, imagePoints, image_size, k, d, opts)

@spec calibrate(
  [Evision.Mat.maybe_mat_in()],
  [Evision.Mat.maybe_mat_in()],
  {number(), number()},
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [criteria: term(), flags: term()] | nil
) ::
  {number(), Evision.Mat.t(), Evision.Mat.t(), [Evision.Mat.t()],
   [Evision.Mat.t()]}
  | {:error, String.t()}

Performs camera calibration

Positional Arguments
  • objectPoints: [Evision.Mat].

    vector of vectors of calibration pattern points in the calibration pattern coordinate space.

  • imagePoints: [Evision.Mat].

    vector of vectors of the projections of calibration pattern points. imagePoints.size() and objectPoints.size() must be equal, and imagePoints[i].size() must be equal to objectPoints[i].size() for each i.

  • image_size: Size.

    Size of the image used only to initialize the camera intrinsic matrix.

Keyword Arguments
  • flags: integer().

    Different flags that may be zero or a combination of the following values:

    • @ref fisheye::CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center (imageSize is used), and focal distances are computed in a least-squares fashion.
    • @ref fisheye::CALIB_RECOMPUTE_EXTRINSIC Extrinsics will be recomputed after each iteration of intrinsic optimization.
    • @ref fisheye::CALIB_CHECK_COND The functions will check the validity of the condition number.
    • @ref fisheye::CALIB_FIX_SKEW The skew coefficient (alpha) is set to zero and stays zero.
    • @ref fisheye::CALIB_FIX_K1, ..., @ref fisheye::CALIB_FIX_K4 Selected distortion coefficients are set to zero and stay zero.
    • @ref fisheye::CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global optimization. It stays at the center, or at a different location when @ref fisheye::CALIB_USE_INTRINSIC_GUESS is set, too.
    • @ref fisheye::CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global optimization. It is \f$max(width,height)/\pi\f$, or the provided \f$f_x\f$, \f$f_y\f$ when @ref fisheye::CALIB_USE_INTRINSIC_GUESS is set, too.
  • criteria: TermCriteria.

    Termination criteria for the iterative optimization algorithm.

Return
  • retval: double

  • k: Evision.Mat.t().

    Output 3x3 floating-point camera intrinsic matrix \f$\cameramatrix{A}\f$.

  • d: Evision.Mat.t().

    Output vector of distortion coefficients \f$\distcoeffsfisheye\f$.

  • rvecs: [Evision.Mat].

    Output vector of rotation vectors (see Rodrigues) estimated for each pattern view. That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view (k = 0..M-1).

  • tvecs: [Evision.Mat].

    Output vector of translation vectors estimated for each pattern view.

If @ref fisheye::CALIB_USE_INTRINSIC_GUESS is specified, some or all of fx, fy, cx, cy must be initialized before calling the function.

Python prototype (for reference only):

calibrate(objectPoints, imagePoints, image_size, K, D[, rvecs[, tvecs[, flags[, criteria]]]]) -> retval, K, D, rvecs, tvecs
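A minimal Elixir usage sketch. `Evision.Mat.literal/2` is assumed for building matrices; the point data, matrix values, and image size are purely illustrative, and a real calibration needs many views of a detected calibration pattern:

```elixir
# One Mat per view: 1xN 3-channel object points and 1xN 2-channel image points.
object_points = [Evision.Mat.literal([[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]]], :f32)]
image_points  = [Evision.Mat.literal([[[10.0, 12.0], [110.0, 13.0], [11.0, 108.0], [112.0, 110.0]]], :f32)]

# Initial guesses for K (3x3) and D (4x1); the solver refines and returns them.
k0 = Evision.Mat.literal([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]], :f64)
d0 = Evision.Mat.literal([[0.0], [0.0], [0.0], [0.0]], :f64)

{rms, k, d, rvecs, tvecs} =
  Evision.FishEye.calibrate(object_points, image_points, {640, 480}, k0, d0)
```

`rms` is the reprojection error; `rvecs` and `tvecs` hold one pose per pattern view.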

distortPoints(named_args)

@spec distortPoints(Keyword.t()) :: any() | {:error, String.t()}

distortPoints(undistorted, k, d)

Distorts 2D points using the fisheye model.

Positional Arguments
  • undistorted: Evision.Mat.

    Array of object points, 1xN/Nx1 2-channel (or vector\<Point2f> ), where N is the number of points in the view.

  • k: Evision.Mat.

    Camera intrinsic matrix \f$\cameramatrix{K}\f$.

  • d: Evision.Mat.

    Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.

Keyword Arguments
  • alpha: double.

    The skew coefficient.

Return
  • distorted: Evision.Mat.t().

    Output array of image points, 1xN/Nx1 2-channel, or vector\<Point2f> .

Note that the function assumes the camera intrinsic matrix of the undistorted points to be the identity. This means that if you want to distort image points, you have to multiply them with \f$K^{-1}\f$ first.

Python prototype (for reference only):

distortPoints(undistorted, K, D[, distorted[, alpha]]) -> distorted

distortPoints(undistorted, k, d, opts)

@spec distortPoints(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:alpha, term()}] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Distorts 2D points using the fisheye model.

Positional Arguments
  • undistorted: Evision.Mat.

    Array of object points, 1xN/Nx1 2-channel (or vector\<Point2f> ), where N is the number of points in the view.

  • k: Evision.Mat.

    Camera intrinsic matrix \f$\cameramatrix{K}\f$.

  • d: Evision.Mat.

    Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.

Keyword Arguments
  • alpha: double.

    The skew coefficient.

Return
  • distorted: Evision.Mat.t().

    Output array of image points, 1xN/Nx1 2-channel, or vector\<Point2f> .

Note that the function assumes the camera intrinsic matrix of the undistorted points to be the identity. This means that if you want to distort image points, you have to multiply them with \f$K^{-1}\f$ first.

Python prototype (for reference only):

distortPoints(undistorted, K, D[, distorted[, alpha]]) -> distorted
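Because the function assumes identity intrinsics for the input, pixel coordinates must be normalized (multiplied by \f$K^{-1}\f$) before being distorted. A sketch with illustrative values, assuming `Evision.Mat.literal/2`:

```elixir
k = Evision.Mat.literal([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]], :f64)
d = Evision.Mat.literal([[0.1], [0.01], [0.0], [0.0]], :f64)

# 1xN 2-channel array of normalized (identity-intrinsics) point coordinates.
undistorted = Evision.Mat.literal([[[0.1, -0.2], [0.05, 0.3]]], :f32)

# Returns a 1xN 2-channel Mat of distorted pixel coordinates.
distorted = Evision.FishEye.distortPoints(undistorted, k, d)
```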

estimateNewCameraMatrixForUndistortRectify(named_args)

@spec estimateNewCameraMatrixForUndistortRectify(Keyword.t()) ::
  any() | {:error, String.t()}

estimateNewCameraMatrixForUndistortRectify(k, d, image_size, r)

@spec estimateNewCameraMatrixForUndistortRectify(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  {number(), number()},
  Evision.Mat.maybe_mat_in()
) :: Evision.Mat.t() | {:error, String.t()}

Estimates new camera intrinsic matrix for undistortion or rectification.

Positional Arguments
  • k: Evision.Mat.

    Camera intrinsic matrix \f$\cameramatrix{K}\f$.

  • d: Evision.Mat.

    Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.

  • image_size: Size.

    Size of the image

  • r: Evision.Mat.

    Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel

Keyword Arguments
  • balance: double.

    Sets the new focal length in the range between the minimal and the maximal focal length. Balance is in the range [0, 1].

  • new_size: Size.

    The new size.

  • fov_scale: double.

    Divisor for new focal length.

Return
  • p: Evision.Mat.t().

    New camera intrinsic matrix (3x3) or new projection matrix (3x4)

Python prototype (for reference only):

estimateNewCameraMatrixForUndistortRectify(K, D, image_size, R[, P[, balance[, new_size[, fov_scale]]]]) -> P

estimateNewCameraMatrixForUndistortRectify(k, d, image_size, r, opts)

@spec estimateNewCameraMatrixForUndistortRectify(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  {number(), number()},
  Evision.Mat.maybe_mat_in(),
  [balance: term(), fov_scale: term(), new_size: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Estimates new camera intrinsic matrix for undistortion or rectification.

Positional Arguments
  • k: Evision.Mat.

    Camera intrinsic matrix \f$\cameramatrix{K}\f$.

  • d: Evision.Mat.

    Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.

  • image_size: Size.

    Size of the image

  • r: Evision.Mat.

    Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel

Keyword Arguments
  • balance: double.

    Sets the new focal length in the range between the minimal and the maximal focal length. Balance is in the range [0, 1].

  • new_size: Size.

    The new size.

  • fov_scale: double.

    Divisor for new focal length.

Return
  • p: Evision.Mat.t().

    New camera intrinsic matrix (3x3) or new projection matrix (3x4)

Python prototype (for reference only):

estimateNewCameraMatrixForUndistortRectify(K, D, image_size, R[, P[, balance[, new_size[, fov_scale]]]]) -> P
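A sketch of estimating a rectified intrinsic matrix with no rotation. The matrix values are illustrative, and `Evision.Mat.eye/2` is assumed for the identity R:

```elixir
k = Evision.Mat.literal([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]], :f64)
d = Evision.Mat.literal([[0.1], [0.01], [0.0], [0.0]], :f64)
r = Evision.Mat.eye(3, :f64)  # identity: estimate for plain undistortion

# balance: 0.0 keeps only valid pixels; 1.0 preserves the full field of view.
p = Evision.FishEye.estimateNewCameraMatrixForUndistortRectify(
      k, d, {640, 480}, r, balance: 0.0, new_size: {640, 480})
```

`p` can then be passed on to initUndistortRectifyMap or undistortPoints.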

initUndistortRectifyMap(named_args)

@spec initUndistortRectifyMap(Keyword.t()) :: any() | {:error, String.t()}

initUndistortRectifyMap(k, d, r, p, size, m1type)

Computes undistortion and rectification maps for image transform by #remap. If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.

Positional Arguments
  • k: Evision.Mat.

    Camera intrinsic matrix \f$\cameramatrix{K}\f$.

  • d: Evision.Mat.

    Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.

  • r: Evision.Mat.

    Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel

  • p: Evision.Mat.

    New camera intrinsic matrix (3x3) or new projection matrix (3x4)

  • size: Size.

    Undistorted image size.

  • m1type: integer().

    Type of the first output map that can be CV_32FC1 or CV_16SC2 . See #convertMaps for details.

Return
  • map1: Evision.Mat.t().

    The first output map.

  • map2: Evision.Mat.t().

    The second output map.

Python prototype (for reference only):

initUndistortRectifyMap(K, D, R, P, size, m1type[, map1[, map2]]) -> map1, map2

initUndistortRectifyMap(k, d, r, p, size, m1type, opts)

@spec initUndistortRectifyMap(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  {number(), number()},
  integer(),
  [{atom(), term()}, ...] | nil
) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}

Computes undistortion and rectification maps for image transform by #remap. If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.

Positional Arguments
  • k: Evision.Mat.

    Camera intrinsic matrix \f$\cameramatrix{K}\f$.

  • d: Evision.Mat.

    Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.

  • r: Evision.Mat.

    Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel

  • p: Evision.Mat.

    New camera intrinsic matrix (3x3) or new projection matrix (3x4)

  • size: Size.

    Undistorted image size.

  • m1type: integer().

    Type of the first output map that can be CV_32FC1 or CV_16SC2 . See #convertMaps for details.

Return
  • map1: Evision.Mat.t().

    The first output map.

  • map2: Evision.Mat.t().

    The second output map.

Python prototype (for reference only):

initUndistortRectifyMap(K, D, R, P, size, m1type[, map1[, map2]]) -> map1, map2
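The maps are typically computed once and reused for every frame. A sketch with illustrative values; m1type and the interpolation flag are passed as raw integers rather than guessing constant accessor names, and `Evision.Mat.eye/2` plus the file name are assumptions:

```elixir
k = Evision.Mat.literal([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]], :f64)
d = Evision.Mat.literal([[0.1], [0.01], [0.0], [0.0]], :f64)
r = Evision.Mat.eye(3, :f64)  # identity: no rectification
p = k                         # reuse K as the new projection matrix

# 11 is the integer value of CV_16SC2 (16-bit signed, 2 channels).
{map1, map2} = Evision.FishEye.initUndistortRectifyMap(k, d, r, p, {640, 480}, 11)

frame = Evision.imread("fisheye_frame.png")
undistorted = Evision.remap(frame, map1, map2, 1)  # 1 == INTER_LINEAR
```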

projectPoints(named_args)

@spec projectPoints(Keyword.t()) :: any() | {:error, String.t()}

projectPoints(objectPoints, rvec, tvec, k, d)

projectPoints

Positional Arguments
Keyword Arguments
  • alpha: double.
Return
  • imagePoints: Evision.Mat.t().
  • jacobian: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

projectPoints(objectPoints, rvec, tvec, K, D[, imagePoints[, alpha[, jacobian]]]) -> imagePoints, jacobian

projectPoints(objectPoints, rvec, tvec, k, d, opts)

projectPoints

Positional Arguments
Keyword Arguments
  • alpha: double.
Return
  • imagePoints: Evision.Mat.t().
  • jacobian: Evision.Mat.t().

Has overloading in C++

Python prototype (for reference only):

projectPoints(objectPoints, rvec, tvec, K, D[, imagePoints[, alpha[, jacobian]]]) -> imagePoints, jacobian
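A sketch of projecting object-frame points with a zero pose; the values are illustrative, `Evision.Mat.literal/2` is assumed, and the two-element return follows the Python prototype above:

```elixir
k = Evision.Mat.literal([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]], :f64)
d = Evision.Mat.literal([[0.1], [0.01], [0.0], [0.0]], :f64)

# 1xN 3-channel object points; zero rotation and translation.
object_points = Evision.Mat.literal([[[0.0, 0.0, 1.0], [0.1, -0.1, 1.2]]], :f32)
rvec = Evision.Mat.literal([[0.0, 0.0, 0.0]], :f64)
tvec = Evision.Mat.literal([[0.0, 0.0, 0.0]], :f64)

{image_points, _jacobian} =
  Evision.FishEye.projectPoints(object_points, rvec, tvec, k, d)
```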
solvePnP(named_args)

@spec solvePnP(Keyword.t()) :: any() | {:error, String.t()}

solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs)

Finds an object pose from 3D-2D point correspondences for the fisheye camera model.

Positional Arguments
  • objectPoints: Evision.Mat.

    Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector\<Point3d> can also be passed here.

  • imagePoints: Evision.Mat.

    Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector\<Point2d> can also be passed here.

  • cameraMatrix: Evision.Mat.

    Input camera intrinsic matrix \f$\cameramatrix{A}\f$ .

  • distCoeffs: Evision.Mat.

    Input vector of distortion coefficients (4x1/1x4).

Keyword Arguments
  • useExtrinsicGuess: bool.

    Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.

  • flags: integer().

    Method for solving a PnP problem: see @ref calib3d_solvePnP_flags. This function returns the rotation and the translation vectors that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame, using different methods:

    • P3P methods (@ref SOLVEPNP_P3P, @ref SOLVEPNP_AP3P): need 4 input points to return a unique solution.
    • @ref SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
    • @ref SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation. Number of input points must be 4. Object points must be defined in the following order:
    • point 0: [-squareLength / 2, squareLength / 2, 0]
    • point 1: [ squareLength / 2, squareLength / 2, 0]
    • point 2: [ squareLength / 2, -squareLength / 2, 0]
    • point 3: [-squareLength / 2, -squareLength / 2, 0]
    • for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
  • criteria: TermCriteria.

    Termination criteria for the internal undistortPoints call. The function internally undistorts points with @ref undistortPoints and then calls @ref cv::solvePnP, so the inputs are very similar. Check there for details; the Perspective-n-Point problem is described in @ref calib3d_solvePnP.

Return
  • retval: bool

  • rvec: Evision.Mat.t().

    Output rotation vector (see @ref Rodrigues ) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.

  • tvec: Evision.Mat.t().

    Output translation vector.

Python prototype (for reference only):

solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags[, criteria]]]]]) -> retval, rvec, tvec

solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, opts)

@spec solvePnP(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [criteria: term(), flags: term(), useExtrinsicGuess: term()] | nil
) :: {Evision.Mat.t(), Evision.Mat.t()} | false | {:error, String.t()}

Finds an object pose from 3D-2D point correspondences for the fisheye camera model.

Positional Arguments
  • objectPoints: Evision.Mat.

    Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector\<Point3d> can also be passed here.

  • imagePoints: Evision.Mat.

    Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector\<Point2d> can also be passed here.

  • cameraMatrix: Evision.Mat.

    Input camera intrinsic matrix \f$\cameramatrix{A}\f$ .

  • distCoeffs: Evision.Mat.

    Input vector of distortion coefficients (4x1/1x4).

Keyword Arguments
  • useExtrinsicGuess: bool.

    Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.

  • flags: integer().

    Method for solving a PnP problem: see @ref calib3d_solvePnP_flags. This function returns the rotation and the translation vectors that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame, using different methods:

    • P3P methods (@ref SOLVEPNP_P3P, @ref SOLVEPNP_AP3P): need 4 input points to return a unique solution.
    • @ref SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
    • @ref SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation. Number of input points must be 4. Object points must be defined in the following order:
    • point 0: [-squareLength / 2, squareLength / 2, 0]
    • point 1: [ squareLength / 2, squareLength / 2, 0]
    • point 2: [ squareLength / 2, -squareLength / 2, 0]
    • point 3: [-squareLength / 2, -squareLength / 2, 0]
    • for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
  • criteria: TermCriteria.

    Termination criteria for the internal undistortPoints call. The function internally undistorts points with @ref undistortPoints and then calls @ref cv::solvePnP, so the inputs are very similar. Check there for details; the Perspective-n-Point problem is described in @ref calib3d_solvePnP.

Return
  • retval: bool

  • rvec: Evision.Mat.t().

    Output rotation vector (see @ref Rodrigues ) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.

  • tvec: Evision.Mat.t().

    Output translation vector.

Python prototype (for reference only):

solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags[, criteria]]]]]) -> retval, rvec, tvec
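A sketch of recovering a pose from illustrative correspondences (`Evision.Mat.literal/2` assumed); per the spec above, the call can also return `false` when the solver fails:

```elixir
camera_matrix = Evision.Mat.literal([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]], :f64)
dist_coeffs = Evision.Mat.literal([[0.1], [0.01], [0.0], [0.0]], :f64)

object_points = Evision.Mat.literal([[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]]], :f32)
image_points  = Evision.Mat.literal([[[320.0, 240.0], [420.0, 242.0], [322.0, 340.0], [424.0, 338.0]]], :f32)

case Evision.FishEye.solvePnP(object_points, image_points, camera_matrix, dist_coeffs) do
  {rvec, tvec} -> {rvec, tvec}   # pose found
  false -> :no_solution          # solver did not find a pose
  {:error, msg} -> raise msg
end
```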

stereoCalibrate(named_args)

@spec stereoCalibrate(Keyword.t()) :: any() | {:error, String.t()}

stereoCalibrate(objectPoints, imagePoints1, imagePoints2, k1, d1, k2, d2, imageSize)

stereoCalibrate

Positional Arguments
  • objectPoints: [Evision.Mat]
  • imagePoints1: [Evision.Mat]
  • imagePoints2: [Evision.Mat]
  • imageSize: Size
Keyword Arguments
  • flags: integer().
  • criteria: TermCriteria.
Return
  • retval: double
  • k1: Evision.Mat.t()
  • d1: Evision.Mat.t()
  • k2: Evision.Mat.t()
  • d2: Evision.Mat.t()
  • r: Evision.Mat.t().
  • t: Evision.Mat.t().

Python prototype (for reference only):

stereoCalibrate(objectPoints, imagePoints1, imagePoints2, K1, D1, K2, D2, imageSize[, R[, T[, flags[, criteria]]]]) -> retval, K1, D1, K2, D2, R, T

stereoCalibrate(objectPoints, imagePoints1, imagePoints2, k1, d1, k2, d2, imageSize, opts)

stereoCalibrate

Positional Arguments
  • objectPoints: [Evision.Mat]
  • imagePoints1: [Evision.Mat]
  • imagePoints2: [Evision.Mat]
  • imageSize: Size
Keyword Arguments
  • flags: integer().
  • criteria: TermCriteria.
Return
  • retval: double
  • k1: Evision.Mat.t()
  • d1: Evision.Mat.t()
  • k2: Evision.Mat.t()
  • d2: Evision.Mat.t()
  • r: Evision.Mat.t().
  • t: Evision.Mat.t().

Python prototype (for reference only):

stereoCalibrate(objectPoints, imagePoints1, imagePoints2, K1, D1, K2, D2, imageSize[, R[, T[, flags[, criteria]]]]) -> retval, K1, D1, K2, D2, R, T
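A fragment sketching the call shape, assuming the per-view point lists and initial per-camera guesses have already been built (see calibrate above for how such Mats can be constructed):

```elixir
# object_points, image_points1, image_points2: one Mat per view, as in calibrate.
# k1, d1, k2, d2: initial intrinsic and distortion guesses for each camera.
{rms, k1, d1, k2, d2, r, t} =
  Evision.FishEye.stereoCalibrate(
    object_points, image_points1, image_points2,
    k1, d1, k2, d2, {640, 480})
```

`r` and `t` give the rotation and translation from the first camera to the second.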

stereoRectify(named_args)

@spec stereoRectify(Keyword.t()) :: any() | {:error, String.t()}

stereoRectify(k1, d1, k2, d2, imageSize, r, tvec, flags)

Stereo rectification for fisheye camera model

Positional Arguments
  • k1: Evision.Mat.

    First camera intrinsic matrix.

  • d1: Evision.Mat.

    First camera distortion parameters.

  • k2: Evision.Mat.

    Second camera intrinsic matrix.

  • d2: Evision.Mat.

    Second camera distortion parameters.

  • imageSize: Size.

    Size of the image used for stereo calibration.

  • r: Evision.Mat.

    Rotation matrix between the coordinate systems of the first and the second cameras.

  • tvec: Evision.Mat.

    Translation vector between coordinate systems of the cameras.

  • flags: integer().

    Operation flags that may be zero or @ref fisheye::CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. If the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.

Keyword Arguments
  • newImageSize: Size.

    New image resolution after rectification. The same size should be passed to #initUndistortRectifyMap (see the stereo_calib.cpp sample in the OpenCV samples directory). When (0,0) is passed (default), it is set to the original imageSize. Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion.

  • balance: double.

    Sets the new focal length in the range between the minimal and the maximal focal length. Balance is in the range [0, 1].

  • fov_scale: double.

    Divisor for new focal length.

Return
  • r1: Evision.Mat.t().

    Output 3x3 rectification transform (rotation matrix) for the first camera.

  • r2: Evision.Mat.t().

    Output 3x3 rectification transform (rotation matrix) for the second camera.

  • p1: Evision.Mat.t().

    Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera.

  • p2: Evision.Mat.t().

    Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera.

  • q: Evision.Mat.t().

    Output \f$4 \times 4\f$ disparity-to-depth mapping matrix (see #reprojectImageTo3D ).

Python prototype (for reference only):

stereoRectify(K1, D1, K2, D2, imageSize, R, tvec, flags[, R1[, R2[, P1[, P2[, Q[, newImageSize[, balance[, fov_scale]]]]]]]]) -> R1, R2, P1, P2, Q

stereoRectify(k1, d1, k2, d2, imageSize, r, tvec, flags, opts)

Stereo rectification for fisheye camera model

Positional Arguments
  • k1: Evision.Mat.

    First camera intrinsic matrix.

  • d1: Evision.Mat.

    First camera distortion parameters.

  • k2: Evision.Mat.

    Second camera intrinsic matrix.

  • d2: Evision.Mat.

    Second camera distortion parameters.

  • imageSize: Size.

    Size of the image used for stereo calibration.

  • r: Evision.Mat.

    Rotation matrix between the coordinate systems of the first and the second cameras.

  • tvec: Evision.Mat.

    Translation vector between coordinate systems of the cameras.

  • flags: integer().

    Operation flags that may be zero or @ref fisheye::CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. If the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.

Keyword Arguments
  • newImageSize: Size.

    New image resolution after rectification. The same size should be passed to #initUndistortRectifyMap (see the stereo_calib.cpp sample in the OpenCV samples directory). When (0,0) is passed (default), it is set to the original imageSize. Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion.

  • balance: double.

    Sets the new focal length in the range between the minimal and the maximal focal length. Balance is in the range [0, 1].

  • fov_scale: double.

    Divisor for new focal length.

Return
  • r1: Evision.Mat.t().

    Output 3x3 rectification transform (rotation matrix) for the first camera.

  • r2: Evision.Mat.t().

    Output 3x3 rectification transform (rotation matrix) for the second camera.

  • p1: Evision.Mat.t().

    Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera.

  • p2: Evision.Mat.t().

    Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera.

  • q: Evision.Mat.t().

    Output \f$4 \times 4\f$ disparity-to-depth mapping matrix (see #reprojectImageTo3D ).

Python prototype (for reference only):

stereoRectify(K1, D1, K2, D2, imageSize, R, tvec, flags[, R1[, R2[, P1[, P2[, Q[, newImageSize[, balance[, fov_scale]]]]]]]]) -> R1, R2, P1, P2, Q
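A fragment sketching rectification of a calibrated pair (`k1` through `tvec` assumed bound, e.g. from stereoCalibrate above). The raw flag value 1024 is an assumption mirroring fisheye::CALIB_ZERO_DISPARITY; check `Evision.Constant` for the exact accessor:

```elixir
# 1024 is assumed to equal fisheye::CALIB_ZERO_DISPARITY (align principal
# points in the rectified views); pass 0 to let the function shift images freely.
{r1, r2, p1, p2, q} =
  Evision.FishEye.stereoRectify(k1, d1, k2, d2, {640, 480}, r, tvec, 1024)
```

Feed `r1`/`p1` and `r2`/`p2` into initUndistortRectifyMap for each camera, and `q` into #reprojectImageTo3D.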

undistortImage(named_args)

@spec undistortImage(Keyword.t()) :: any() | {:error, String.t()}

undistortImage(distorted, k, d)

Transforms an image to compensate for fisheye lens distortion.

Positional Arguments
  • distorted: Evision.Mat.

    image with fisheye lens distortion.

  • k: Evision.Mat.

    Camera intrinsic matrix \f$\cameramatrix{K}\f$.

  • d: Evision.Mat.

    Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.

Keyword Arguments
  • knew: Evision.Mat.

    Camera intrinsic matrix of the distorted image. By default, it is the identity matrix but you may additionally scale and shift the result by using a different matrix.

  • new_size: Size.

    the new size

Return
  • undistorted: Evision.Mat.t().

    Output image with compensated fisheye lens distortion.

The function transforms an image to compensate for radial and tangential lens distortion. It is simply a combination of #fisheye::initUndistortRectifyMap (with unity R) and #remap (with bilinear interpolation). See the former function for details of the transformation being performed. See below the results of undistortImage.

  • a) result of undistort of perspective camera model (all possible coefficients (k_1, k_2, k_3, k_4, k_5, k_6) of distortion were optimized under calibration)

  • b) result of #fisheye::undistortImage of fisheye camera model (all possible coefficients (k_1, k_2, k_3, k_4) of fisheye distortion were optimized under calibration)

  • c) original image was captured with fisheye lens

Pictures a) and b) are almost the same. But if we consider points of the image located far from the center, we can notice that on image a) these points are distorted.

Python prototype (for reference only):

undistortImage(distorted, K, D[, undistorted[, Knew[, new_size]]]) -> undistorted

undistortImage(distorted, k, d, opts)

@spec undistortImage(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [knew: term(), new_size: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Transforms an image to compensate for fisheye lens distortion.

Positional Arguments
  • distorted: Evision.Mat.

    image with fisheye lens distortion.

  • k: Evision.Mat.

    Camera intrinsic matrix \f$\cameramatrix{K}\f$.

  • d: Evision.Mat.

    Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.

Keyword Arguments
  • knew: Evision.Mat.

    Camera intrinsic matrix of the distorted image. By default, it is the identity matrix but you may additionally scale and shift the result by using a different matrix.

  • new_size: Size.

    the new size

Return
  • undistorted: Evision.Mat.t().

    Output image with compensated fisheye lens distortion.

The function transforms an image to compensate for radial and tangential lens distortion. It is simply a combination of #fisheye::initUndistortRectifyMap (with unity R) and #remap (with bilinear interpolation). See the former function for details of the transformation being performed. See below the results of undistortImage.

  • a) result of undistort of perspective camera model (all possible coefficients (k_1, k_2, k_3, k_4, k_5, k_6) of distortion were optimized under calibration)

  • b) result of #fisheye::undistortImage of fisheye camera model (all possible coefficients (k_1, k_2, k_3, k_4) of fisheye distortion were optimized under calibration)

  • c) original image was captured with fisheye lens

Pictures a) and b) are almost the same. But if we consider points of the image located far from the center, we can notice that on image a) these points are distorted.

Python prototype (for reference only):

undistortImage(distorted, K, D[, undistorted[, Knew[, new_size]]]) -> undistorted
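A sketch with illustrative K/D and an assumed file name. Passing `knew: k` keeps the output at roughly the original scale; the default identity Knew would produce a strongly zoomed result:

```elixir
distorted = Evision.imread("fisheye.jpg")

k = Evision.Mat.literal([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]], :f64)
d = Evision.Mat.literal([[0.1], [0.01], [0.0], [0.0]], :f64)

undistorted = Evision.FishEye.undistortImage(distorted, k, d, knew: k)
```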

undistortPoints(named_args)

@spec undistortPoints(Keyword.t()) :: any() | {:error, String.t()}

undistortPoints(distorted, k, d)

Undistorts 2D points using the fisheye model.

Positional Arguments
  • distorted: Evision.Mat.

    Array of object points, 1xN/Nx1 2-channel (or vector\<Point2f> ), where N is the number of points in the view.

  • k: Evision.Mat.

    Camera intrinsic matrix \f$\cameramatrix{K}\f$.

  • d: Evision.Mat.

    Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.

Keyword Arguments
  • r: Evision.Mat.

    Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel

  • p: Evision.Mat.

    New camera intrinsic matrix (3x3) or new projection matrix (3x4)

  • criteria: TermCriteria.

    Termination criteria

Return
  • undistorted: Evision.Mat.t().

    Output array of image points, 1xN/Nx1 2-channel, or vector\<Point2f> .

Python prototype (for reference only):

undistortPoints(distorted, K, D[, undistorted[, R[, P[, criteria]]]]) -> undistorted

undistortPoints(distorted, k, d, opts)

@spec undistortPoints(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [criteria: term(), p: term(), r: term()] | nil
) :: Evision.Mat.t() | {:error, String.t()}

Undistorts 2D points using the fisheye model.

Positional Arguments
  • distorted: Evision.Mat.

    Array of object points, 1xN/Nx1 2-channel (or vector\<Point2f> ), where N is the number of points in the view.

  • k: Evision.Mat.

    Camera intrinsic matrix \f$\cameramatrix{K}\f$.

  • d: Evision.Mat.

    Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.

Keyword Arguments
  • r: Evision.Mat.

    Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel

  • p: Evision.Mat.

    New camera intrinsic matrix (3x3) or new projection matrix (3x4)

  • criteria: TermCriteria.

    Termination criteria

Return
  • undistorted: Evision.Mat.t().

    Output array of image points, 1xN/Nx1 2-channel, or vector\<Point2f> .

Python prototype (for reference only):

undistortPoints(distorted, K, D[, undistorted[, R[, P[, criteria]]]]) -> undistorted
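A sketch with illustrative values (`Evision.Mat.literal/2` assumed). With `p: k` the result comes back in pixel coordinates; omitting `p` leaves the points in normalized (identity-intrinsics) coordinates:

```elixir
k = Evision.Mat.literal([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]], :f64)
d = Evision.Mat.literal([[0.1], [0.01], [0.0], [0.0]], :f64)

# 1xN 2-channel array of distorted pixel coordinates.
distorted = Evision.Mat.literal([[[412.5, 301.0], [98.0, 77.5]]], :f32)

undistorted = Evision.FishEye.undistortPoints(distorted, k, d, p: k)
```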