Evision.RGBD (Evision v0.2.9)

Summary

Types

t()

Type that represents an RGBD struct.

Types

@type t() :: %Evision.RGBD{ref: reference()}

Type that represents an RGBD struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

Link to this function

depthTo3d(named_args)

View Source
@spec depthTo3d(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

depthTo3d(depth, k)

View Source
@spec depthTo3d(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

depthTo3d

Positional Arguments
  • depth: Evision.Mat.

    the depth image (if given as short int CV_16U, it is assumed to be the depth in millimeters, as done with the Microsoft Kinect; if given as CV_32F or CV_64F, it is assumed to be in meters)

  • k: Evision.Mat.

    The calibration matrix

Keyword Arguments
  • mask: Evision.Mat.

    the mask of the points to consider (can be empty)

Return
  • points3d: Evision.Mat.t().

    the resulting 3d points. They have the same depth as depth if it is CV_32F or CV_64F, and the depth of K if depth is CV_16U

Converts a depth image to an organized set of 3d points. The coordinate system is x pointing left, y down, and z away from the camera.

Python prototype (for reference only):

depthTo3d(depth, K[, points3d[, mask]]) -> points3d
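
Example (for illustration only; the depth values and intrinsics below are placeholders, not calibrated data):

# 3x3 CV_16U depth patch in millimeters (Kinect convention)
depth =
  Evision.Mat.literal([[800, 805, 810], [795, 800, 805], [790, 795, 800]], :u16)

# 3x3 camera calibration matrix
k =
  Evision.Mat.literal([[525.0, 0.0, 1.0], [0.0, 525.0, 1.0], [0.0, 0.0, 1.0]], :f64)

# organized 3d points (one per pixel), or {:error, reason} on failure
points3d = Evision.RGBD.depthTo3d(depth, k)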
Link to this function

depthTo3d(depth, k, opts)

View Source
@spec depthTo3d(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{:mask, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

depthTo3d

Positional Arguments
  • depth: Evision.Mat.

    the depth image (if given as short int CV_16U, it is assumed to be the depth in millimeters, as done with the Microsoft Kinect; if given as CV_32F or CV_64F, it is assumed to be in meters)

  • k: Evision.Mat.

    The calibration matrix

Keyword Arguments
  • mask: Evision.Mat.

    the mask of the points to consider (can be empty)

Return
  • points3d: Evision.Mat.t().

    the resulting 3d points. They have the same depth as depth if it is CV_32F or CV_64F, and the depth of K if depth is CV_16U

Converts a depth image to an organized set of 3d points. The coordinate system is x pointing left, y down, and z away from the camera.

Python prototype (for reference only):

depthTo3d(depth, K[, points3d[, mask]]) -> points3d
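
Example with a mask (for illustration only; reuses depth and k from the example above, and the mask values are placeholders):

# only pixels with a non-zero mask value are converted
mask = Evision.Mat.literal([[1, 1, 0], [1, 1, 0], [0, 0, 0]], :u8)
points3d = Evision.RGBD.depthTo3d(depth, k, mask: mask)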
Link to this function

depthTo3dSparse(named_args)

View Source
@spec depthTo3dSparse(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

depthTo3dSparse(depth, in_K, in_points)

View Source

depthTo3dSparse

Positional Arguments
  • depth: Evision.Mat.

    the depth image

  • in_K: Evision.Mat.

    the calibration matrix

  • in_points: Evision.Mat.

    the list of xy coordinates of the points to convert

Return
  • points3d: Evision.Mat.t().

    the resulting 3d points

Python prototype (for reference only):

depthTo3dSparse(depth, in_K, in_points[, points3d]) -> points3d
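
Example (for illustration only; the exact layout expected for in_points, here an Nx2 CV_32F matrix of xy pixel coordinates, is an assumption):

# convert only the listed pixel coordinates, reusing depth and k from the
# depthTo3d examples above
in_points = Evision.Mat.literal([[0.0, 0.0], [1.0, 2.0]], :f32)
points3d = Evision.RGBD.depthTo3dSparse(depth, k, in_points)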
Link to this function

depthTo3dSparse(depth, in_K, in_points, opts)

View Source
@spec depthTo3dSparse(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{atom(), term()}, ...] | nil
) :: Evision.Mat.t() | {:error, String.t()}

depthTo3dSparse

Positional Arguments
  • depth: Evision.Mat.

    the depth image

  • in_K: Evision.Mat.

    the calibration matrix

  • in_points: Evision.Mat.

    the list of xy coordinates of the points to convert

Return
  • points3d: Evision.Mat.t().

    the resulting 3d points

Python prototype (for reference only):

depthTo3dSparse(depth, in_K, in_points[, points3d]) -> points3d
Link to this function

registerDepth(named_args)

View Source
@spec registerDepth(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

registerDepth(unregisteredCameraMatrix, registeredCameraMatrix, registeredDistCoeffs, rt, unregisteredDepth, outputImagePlaneSize)

View Source

registerDepth

Positional Arguments
  • unregisteredCameraMatrix: Evision.Mat.

    the camera matrix of the depth camera

  • registeredCameraMatrix: Evision.Mat.

    the camera matrix of the external camera

  • registeredDistCoeffs: Evision.Mat.

    the distortion coefficients of the external camera

  • rt: Evision.Mat.

    the rigid body transform between the cameras. Transforms points from depth camera frame to external camera frame.

  • unregisteredDepth: Evision.Mat.

    the input depth data

  • outputImagePlaneSize: Size.

    the image plane dimensions of the external camera (width, height)

Keyword Arguments
  • depthDilation: bool.

    whether or not the depth is dilated to avoid holes and occlusion errors (optional)

Return
  • registeredDepth: Evision.Mat.t().

    the result of transforming the depth into the external camera

Registers depth data to an external camera.

Registration is performed by creating a depth cloud, transforming the cloud by the rigid body transformation between the cameras, and then projecting the transformed points into the RGB camera:

uv_rgb = K_rgb * [R | t] * z * inv(K_ir) * uv_ir

Currently does not check for negative depth values.

Python prototype (for reference only):

registerDepth(unregisteredCameraMatrix, registeredCameraMatrix, registeredDistCoeffs, Rt, unregisteredDepth, outputImagePlaneSize[, registeredDepth[, depthDilation]]) -> registeredDepth
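
Example (for illustration only; all matrices are placeholders rather than real calibration data):

# intrinsics of the depth camera and of the external (RGB) camera
k_depth =
  Evision.Mat.literal([[570.0, 0.0, 319.5], [0.0, 570.0, 239.5], [0.0, 0.0, 1.0]], :f64)

k_rgb =
  Evision.Mat.literal([[525.0, 0.0, 319.5], [0.0, 525.0, 239.5], [0.0, 0.0, 1.0]], :f64)

# five zero distortion coefficients for the external camera
dist = Evision.Mat.literal([[0.0, 0.0, 0.0, 0.0, 0.0]], :f64)

# 4x4 rigid transform from the depth frame to the external camera frame
# (identity here, i.e. co-located cameras)
rt =
  Evision.Mat.literal(
    [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]],
    :f64
  )

# a tiny CV_16U depth map from the depth camera (millimeters)
unregistered_depth = Evision.Mat.literal([[900, 905], [910, 915]], :u16)

# {640, 480} is the external camera's image plane size as {width, height}
registered =
  Evision.RGBD.registerDepth(k_depth, k_rgb, dist, rt, unregistered_depth, {640, 480})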
Link to this function

registerDepth(unregisteredCameraMatrix, registeredCameraMatrix, registeredDistCoeffs, rt, unregisteredDepth, outputImagePlaneSize, opts)

View Source

registerDepth

Positional Arguments
  • unregisteredCameraMatrix: Evision.Mat.

    the camera matrix of the depth camera

  • registeredCameraMatrix: Evision.Mat.

    the camera matrix of the external camera

  • registeredDistCoeffs: Evision.Mat.

    the distortion coefficients of the external camera

  • rt: Evision.Mat.

    the rigid body transform between the cameras. Transforms points from depth camera frame to external camera frame.

  • unregisteredDepth: Evision.Mat.

    the input depth data

  • outputImagePlaneSize: Size.

    the image plane dimensions of the external camera (width, height)

Keyword Arguments
  • depthDilation: bool.

    whether or not the depth is dilated to avoid holes and occlusion errors (optional)

Return
  • registeredDepth: Evision.Mat.t().

    the result of transforming the depth into the external camera

Registers depth data to an external camera.

Registration is performed by creating a depth cloud, transforming the cloud by the rigid body transformation between the cameras, and then projecting the transformed points into the RGB camera:

uv_rgb = K_rgb * [R | t] * z * inv(K_ir) * uv_ir

Currently does not check for negative depth values.

Python prototype (for reference only):

registerDepth(unregisteredCameraMatrix, registeredCameraMatrix, registeredDistCoeffs, Rt, unregisteredDepth, outputImagePlaneSize[, registeredDepth[, depthDilation]]) -> registeredDepth
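
Example with the depthDilation option (for illustration only; reuses the inputs from the example above):

registered =
  Evision.RGBD.registerDepth(k_depth, k_rgb, dist, rt, unregistered_depth, {640, 480},
    depthDilation: true
  )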
Link to this function

rescaleDepth(named_args)

View Source
@spec rescaleDepth(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

rescaleDepth(arg_in, depth)

View Source
@spec rescaleDepth(Evision.Mat.maybe_mat_in(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

rescaleDepth

Positional Arguments
  • arg_in: Evision.Mat.

    the depth image (if given as short int CV_16U, it is assumed to be the depth in millimeters, as done with the Microsoft Kinect; if given as CV_32F or CV_64F, it is assumed to be in meters)

  • depth: integer().

    the desired output depth (float or double, i.e. CV_32F or CV_64F)

Keyword Arguments
  • depth_factor: double.

    (optional) factor by which depth is converted to distance (by default = 1000.0 for Kinect sensor)

Return
  • out: Evision.Mat.t().

    The rescaled float depth image

If the input image is of type CV_16UC1 (like the Kinect one), the image is converted to floats, divided by depth_factor to get a depth in meters, and values of 0 are converted to std::numeric_limits<float>::quiet_NaN(). Otherwise, the image is simply converted to floats.

Python prototype (for reference only):

rescaleDepth(in, depth[, out[, depth_factor]]) -> out
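
Example (for illustration only; Evision.Constant.cv_32F/0 is assumed as the way to pass the CV_32F depth constant, and depth stands for a CV_16U depth Mat such as the one built in the depthTo3d example above):

# millimeter CV_16U depth -> CV_32F meters; zero values become NaN
meters = Evision.RGBD.rescaleDepth(depth, Evision.Constant.cv_32F())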
Link to this function

rescaleDepth(arg_in, depth, opts)

View Source
@spec rescaleDepth(
  Evision.Mat.maybe_mat_in(),
  integer(),
  [{:depth_factor, term()}] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

rescaleDepth

Positional Arguments
  • arg_in: Evision.Mat.

    the depth image (if given as short int CV_16U, it is assumed to be the depth in millimeters, as done with the Microsoft Kinect; if given as CV_32F or CV_64F, it is assumed to be in meters)

  • depth: integer().

    the desired output depth (float or double, i.e. CV_32F or CV_64F)

Keyword Arguments
  • depth_factor: double.

    (optional) factor by which depth is converted to distance (by default = 1000.0 for Kinect sensor)

Return
  • out: Evision.Mat.t().

    The rescaled float depth image

If the input image is of type CV_16UC1 (like the Kinect one), the image is converted to floats, divided by depth_factor to get a depth in meters, and values of 0 are converted to std::numeric_limits<float>::quiet_NaN(). Otherwise, the image is simply converted to floats.

Python prototype (for reference only):

rescaleDepth(in, depth[, out[, depth_factor]]) -> out
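
Example with an explicit depth_factor (for illustration only; assumes a hypothetical sensor reporting depth in tenths of a millimeter):

# 10000 units per meter instead of the default 1000
meters = Evision.RGBD.rescaleDepth(depth, Evision.Constant.cv_32F(), depth_factor: 10000.0)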
Link to this function

warpFrame(named_args)

View Source
@spec warpFrame(Keyword.t()) :: any() | {:error, String.t()}
Link to this function

warpFrame(image, depth, mask, rt, cameraMatrix, distCoeff)

View Source

warpFrame

Positional Arguments
  • image: Evision.Mat.

    The image (of CV_8UC1 or CV_8UC3 type)

  • depth: Evision.Mat.

    The depth image (of the type used in the depthTo3d function)

  • mask: Evision.Mat.

    The mask of used pixels (of CV_8UC1), it can be empty

  • rt: Evision.Mat.

    The transformation that will be applied to the 3d points computed from the depth

  • cameraMatrix: Evision.Mat.

    Camera matrix

  • distCoeff: Evision.Mat.

    Distortion coefficients

Return
  • warpedImage: Evision.Mat.t().

    The warped image.

  • warpedDepth: Evision.Mat.t().

    The warped depth.

  • warpedMask: Evision.Mat.t().

    The warped mask.

Warp the image: compute 3d points from the depth, transform them using the given transformation, then project the color point cloud onto an image plane. This function can be used to visualize the results of the Odometry algorithm.

Python prototype (for reference only):

warpFrame(image, depth, mask, Rt, cameraMatrix, distCoeff[, warpedImage[, warpedDepth[, warpedMask]]]) -> warpedImage, warpedDepth, warpedMask
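
Example (for illustration only; every input below is a small placeholder):

# a tiny CV_8UC1 image, a matching CV_32F depth in meters, and a full mask
image = Evision.Mat.literal([[10, 20], [30, 40]], :u8)
depth = Evision.Mat.literal([[1.0, 1.0], [1.0, 1.0]], :f32)
mask = Evision.Mat.literal([[255, 255], [255, 255]], :u8)

# identity rigid transform, placeholder intrinsics, zero distortion
rt =
  Evision.Mat.literal(
    [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]],
    :f64
  )

k = Evision.Mat.literal([[2.0, 0.0, 0.5], [0.0, 2.0, 0.5], [0.0, 0.0, 1.0]], :f64)
dist = Evision.Mat.literal([[0.0, 0.0, 0.0, 0.0, 0.0]], :f64)

{warped_image, warped_depth, warped_mask} =
  Evision.RGBD.warpFrame(image, depth, mask, rt, k, dist)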
Link to this function

warpFrame(image, depth, mask, rt, cameraMatrix, distCoeff, opts)

View Source

warpFrame

Positional Arguments
  • image: Evision.Mat.

    The image (of CV_8UC1 or CV_8UC3 type)

  • depth: Evision.Mat.

    The depth image (of the type used in the depthTo3d function)

  • mask: Evision.Mat.

    The mask of used pixels (of CV_8UC1), it can be empty

  • rt: Evision.Mat.

    The transformation that will be applied to the 3d points computed from the depth

  • cameraMatrix: Evision.Mat.

    Camera matrix

  • distCoeff: Evision.Mat.

    Distortion coefficients

Return
  • warpedImage: Evision.Mat.t().

    The warped image.

  • warpedDepth: Evision.Mat.t().

    The warped depth.

  • warpedMask: Evision.Mat.t().

    The warped mask.

Warp the image: compute 3d points from the depth, transform them using the given transformation, then project the color point cloud onto an image plane. This function can be used to visualize the results of the Odometry algorithm.

Python prototype (for reference only):

warpFrame(image, depth, mask, Rt, cameraMatrix, distCoeff[, warpedImage[, warpedDepth[, warpedMask]]]) -> warpedImage, warpedDepth, warpedMask