Evision.RGBD (Evision v0.1.38)

Summary

Types

t()

Type that represents an RGBD struct.

Types

@type t() :: %Evision.RGBD{ref: reference()}

Type that represents an RGBD struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

depthTo3d(depth, k)

@spec depthTo3d(Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in()) ::
  Evision.Mat.t() | {:error, String.t()}

depthTo3d

Positional Arguments
  • depth: Evision.Mat.t().

    the depth image (if given as short int CV_16U, it is assumed to be the depth in millimeters, as done with the Microsoft Kinect; otherwise, if given as CV_32F or CV_64F, it is assumed to be in meters)

  • k: Evision.Mat.t().

    The calibration matrix

Keyword Arguments
  • mask: Evision.Mat.t().

    the mask of the points to consider (can be empty)

Return
  • points3d: Evision.Mat.t().

    the resulting 3D points. Their depth matches that of the input depth image if it is CV_32F or CV_64F, and the depth of K if the input depth is CV_16U

Converts a depth image to an organized set of 3D points. The coordinate system is x pointing left, y down, and z away from the camera.

Python prototype (for reference only):

depthTo3d(depth, K[, points3d[, mask]]) -> points3d

depthTo3d(depth, k, opts)

@spec depthTo3d(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{atom(), term()}, ...] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

depthTo3d

Positional Arguments
  • depth: Evision.Mat.t().

    the depth image (if given as short int CV_16U, it is assumed to be the depth in millimeters, as done with the Microsoft Kinect; otherwise, if given as CV_32F or CV_64F, it is assumed to be in meters)

  • k: Evision.Mat.t().

    The calibration matrix

Keyword Arguments
  • mask: Evision.Mat.t().

    the mask of the points to consider (can be empty)

Return
  • points3d: Evision.Mat.t().

    the resulting 3D points. Their depth matches that of the input depth image if it is CV_32F or CV_64F, and the depth of K if the input depth is CV_16U

Converts a depth image to an organized set of 3D points. The coordinate system is x pointing left, y down, and z away from the camera.

Python prototype (for reference only):

depthTo3d(depth, K[, points3d[, mask]]) -> points3d
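
For example, a minimal Elixir sketch (the depth values and intrinsics below are made up; Evision.Mat.literal/2 is used here to build small test matrices):

# 2x2 depth image in millimeters (CV_16U, Kinect-style)
depth = Evision.Mat.literal([[800, 1200], [1000, 1500]], :u16)

# placeholder 3x3 calibration matrix (fx, fy, cx, cy are made-up values)
k = Evision.Mat.literal([[525.0, 0.0, 1.0], [0.0, 525.0, 1.0], [0.0, 0.0, 1.0]], :f32)

points3d = Evision.RGBD.depthTo3d(depth, k)

# with the optional mask (non-zero marks pixels to convert)
mask = Evision.Mat.literal([[1, 0], [1, 1]], :u8)
points3d_masked = Evision.RGBD.depthTo3d(depth, k, mask: mask)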

depthTo3dSparse(depth, in_K, in_points)

@spec depthTo3dSparse(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in()
) :: Evision.Mat.t() | {:error, String.t()}

depthTo3dSparse

Positional Arguments
  • depth: Evision.Mat.t().

    the depth image

  • in_K: Evision.Mat.t().

    The calibration matrix

  • in_points: Evision.Mat.t().

    the list of xy coordinates

Return
  • points3d: Evision.Mat.t().

    the resulting 3d points

Python prototype (for reference only):

depthTo3dSparse(depth, in_K, in_points[, points3d]) -> points3d

depthTo3dSparse(depth, in_K, in_points, opts)

@spec depthTo3dSparse(
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  Evision.Mat.maybe_mat_in(),
  [{atom(), term()}, ...] | nil
) :: Evision.Mat.t() | {:error, String.t()}

depthTo3dSparse

Positional Arguments
  • depth: Evision.Mat.t().

    the depth image

  • in_K: Evision.Mat.t().

    The calibration matrix

  • in_points: Evision.Mat.t().

    the list of xy coordinates

Return
  • points3d: Evision.Mat.t().

    the resulting 3d points

Python prototype (for reference only):

depthTo3dSparse(depth, in_K, in_points[, points3d]) -> points3d
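
A hedged Elixir sketch (the in_points layout below, one {x, y} pair per row reshaped to a two-channel Mat via Evision.Mat.last_dim_as_channel/1, is an assumption to verify against your OpenCV build):

depth = Evision.Mat.literal([[800, 1200], [1000, 1500]], :u16)
in_k = Evision.Mat.literal([[525.0, 0.0, 1.0], [0.0, 525.0, 1.0], [0.0, 0.0, 1.0]], :f32)

# pixel coordinates to convert; assumed layout: N x 1, 2 channels
in_points =
  [[[0.0, 0.0]], [[1.0, 1.0]]]
  |> Evision.Mat.literal(:f32)
  |> Evision.Mat.last_dim_as_channel()

points3d = Evision.RGBD.depthTo3dSparse(depth, in_k, in_points)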

registerDepth(unregisteredCameraMatrix, registeredCameraMatrix, registeredDistCoeffs, rt, unregisteredDepth, outputImagePlaneSize)

registerDepth

Positional Arguments
  • unregisteredCameraMatrix: Evision.Mat.t().

    the camera matrix of the depth camera

  • registeredCameraMatrix: Evision.Mat.t().

    the camera matrix of the external camera

  • registeredDistCoeffs: Evision.Mat.t().

    the distortion coefficients of the external camera

  • rt: Evision.Mat.t().

    the rigid body transform between the cameras. Transforms points from depth camera frame to external camera frame.

  • unregisteredDepth: Evision.Mat.t().

    the input depth data

  • outputImagePlaneSize: Size.

    the image plane dimensions of the external camera (width, height)

Keyword Arguments
  • depthDilation: bool.

    whether or not the depth is dilated to avoid holes and occlusion errors (optional)

Return
  • registeredDepth: Evision.Mat.t().

    the result of transforming the depth into the external camera

Registers depth data to an external camera. Registration is performed by creating a depth cloud, transforming the cloud by the rigid body transformation between the cameras, and then projecting the transformed points into the RGB camera: uv_rgb = K_rgb * [R | t] * z * inv(K_ir) * uv_ir. Currently does not check for negative depth values.

Python prototype (for reference only):

registerDepth(unregisteredCameraMatrix, registeredCameraMatrix, registeredDistCoeffs, Rt, unregisteredDepth, outputImagePlaneSize[, registeredDepth[, depthDilation]]) -> registeredDepth

registerDepth(unregisteredCameraMatrix, registeredCameraMatrix, registeredDistCoeffs, rt, unregisteredDepth, outputImagePlaneSize, opts)

registerDepth

Positional Arguments
  • unregisteredCameraMatrix: Evision.Mat.t().

    the camera matrix of the depth camera

  • registeredCameraMatrix: Evision.Mat.t().

    the camera matrix of the external camera

  • registeredDistCoeffs: Evision.Mat.t().

    the distortion coefficients of the external camera

  • rt: Evision.Mat.t().

    the rigid body transform between the cameras. Transforms points from depth camera frame to external camera frame.

  • unregisteredDepth: Evision.Mat.t().

    the input depth data

  • outputImagePlaneSize: Size.

    the image plane dimensions of the external camera (width, height)

Keyword Arguments
  • depthDilation: bool.

    whether or not the depth is dilated to avoid holes and occlusion errors (optional)

Return
  • registeredDepth: Evision.Mat.t().

    the result of transforming the depth into the external camera

Registers depth data to an external camera. Registration is performed by creating a depth cloud, transforming the cloud by the rigid body transformation between the cameras, and then projecting the transformed points into the RGB camera: uv_rgb = K_rgb * [R | t] * z * inv(K_ir) * uv_ir. Currently does not check for negative depth values.

Python prototype (for reference only):

registerDepth(unregisteredCameraMatrix, registeredCameraMatrix, registeredDistCoeffs, Rt, unregisteredDepth, outputImagePlaneSize[, registeredDepth[, depthDilation]]) -> registeredDepth
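
A minimal Elixir sketch (all intrinsics, distortion coefficients, the 4x4 rt, and the 640x480 plane size are placeholder values, not a recommended calibration):

# depth camera and external (RGB) camera matrices (made-up values)
unregistered_k =
  Evision.Mat.literal([[570.0, 0.0, 320.0], [0.0, 570.0, 240.0], [0.0, 0.0, 1.0]], :f64)
registered_k =
  Evision.Mat.literal([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]], :f64)
dist_coeffs = Evision.Mat.zeros({5, 1}, :f64)

# identity rotation with a 2.5 cm baseline along x (placeholder extrinsics;
# a 4x4 homogeneous matrix is assumed here)
rt =
  Evision.Mat.literal(
    [[1.0, 0.0, 0.0, 0.025],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]],
    :f64
  )

# all-zero CV_16U depth map standing in for real sensor data
unregistered_depth = Evision.Mat.zeros({480, 640}, :u16)

registered_depth =
  Evision.RGBD.registerDepth(unregistered_k, registered_k, dist_coeffs, rt,
    unregistered_depth, {640, 480}, depthDilation: true)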

rescaleDepth(in_, depth)

@spec rescaleDepth(Evision.Mat.maybe_mat_in(), integer()) ::
  Evision.Mat.t() | {:error, String.t()}

rescaleDepth

Positional Arguments
  • in_: Evision.Mat.t()

  • depth: int.

    the desired output depth (CV_32F for float or CV_64F for double)

Keyword Arguments
  • depth_factor: double.

    (optional) factor by which depth is converted to distance (defaults to 1000.0, suitable for the Kinect sensor)

Return
  • out: Evision.Mat.t().

    The rescaled float depth image

If the input image is of type CV_16UC1 (as produced by the Kinect), it is converted to floats and divided by depth_factor to get the depth in meters, and zero values are converted to std::numeric_limits<float>::quiet_NaN(). Otherwise, the image is simply converted to floats.

Python prototype (for reference only):

rescaleDepth(in_, depth[, out[, depth_factor]]) -> out

rescaleDepth(in_, depth, opts)

@spec rescaleDepth(
  Evision.Mat.maybe_mat_in(),
  integer(),
  [{atom(), term()}, ...] | nil
) ::
  Evision.Mat.t() | {:error, String.t()}

rescaleDepth

Positional Arguments
  • in_: Evision.Mat.t()

  • depth: int.

    the desired output depth (CV_32F for float or CV_64F for double)

Keyword Arguments
  • depth_factor: double.

    (optional) factor by which depth is converted to distance (defaults to 1000.0, suitable for the Kinect sensor)

Return
  • out: Evision.Mat.t().

    The rescaled float depth image

If the input image is of type CV_16UC1 (as produced by the Kinect), it is converted to floats and divided by depth_factor to get the depth in meters, and zero values are converted to std::numeric_limits<float>::quiet_NaN(). Otherwise, the image is simply converted to floats.

Python prototype (for reference only):

rescaleDepth(in_, depth[, out[, depth_factor]]) -> out
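
A minimal Elixir sketch (assuming the target depth is passed as an OpenCV type constant from Evision.Constant):

# Kinect-style CV_16UC1 depth in millimeters; zeros mark invalid pixels
depth_u16 = Evision.Mat.literal([[0, 800], [1000, 1500]], :u16)

# rescale to CV_32F in meters; the zero entries become quiet NaNs
out = Evision.RGBD.rescaleDepth(depth_u16, Evision.Constant.cv_32F())

# with an explicit conversion factor
out = Evision.RGBD.rescaleDepth(depth_u16, Evision.Constant.cv_32F(), depth_factor: 1000.0)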

warpFrame(image, depth, mask, rt, cameraMatrix, distCoeff)

warpFrame

Positional Arguments
  • image: Evision.Mat.t().

    The image (of CV_8UC1 or CV_8UC3 type)

  • depth: Evision.Mat.t().

    The depth image (of a type accepted by the depthTo3d function)

  • mask: Evision.Mat.t().

    The mask of used pixels (of CV_8UC1), it can be empty

  • rt: Evision.Mat.t().

    The transformation that will be applied to the 3d points computed from the depth

  • cameraMatrix: Evision.Mat.t().

    Camera matrix

  • distCoeff: Evision.Mat.t().

    Distortion coefficients

Return
  • warpedImage: Evision.Mat.t().

    The warped image.

  • warpedDepth: Evision.Mat.t().

    The warped depth.

  • warpedMask: Evision.Mat.t().

    The warped mask.

Warps the image: computes 3D points from the depth, transforms them using the given transformation, and then projects the colored point cloud onto an image plane. This function can be used to visualize the results of the Odometry algorithm.

Python prototype (for reference only):

warpFrame(image, depth, mask, Rt, cameraMatrix, distCoeff[, warpedImage[, warpedDepth[, warpedMask]]]) -> warpedImage, warpedDepth, warpedMask

warpFrame(image, depth, mask, rt, cameraMatrix, distCoeff, opts)

warpFrame

Positional Arguments
  • image: Evision.Mat.t().

    The image (of CV_8UC1 or CV_8UC3 type)

  • depth: Evision.Mat.t().

    The depth image (of a type accepted by the depthTo3d function)

  • mask: Evision.Mat.t().

    The mask of used pixels (of CV_8UC1), it can be empty

  • rt: Evision.Mat.t().

    The transformation that will be applied to the 3d points computed from the depth

  • cameraMatrix: Evision.Mat.t().

    Camera matrix

  • distCoeff: Evision.Mat.t().

    Distortion coefficients

Return
  • warpedImage: Evision.Mat.t().

    The warped image.

  • warpedDepth: Evision.Mat.t().

    The warped depth.

  • warpedMask: Evision.Mat.t().

    The warped mask.

Warps the image: computes 3D points from the depth, transforms them using the given transformation, and then projects the colored point cloud onto an image plane. This function can be used to visualize the results of the Odometry algorithm.

Python prototype (for reference only):

warpFrame(image, depth, mask, Rt, cameraMatrix, distCoeff[, warpedImage[, warpedDepth[, warpedMask]]]) -> warpedImage, warpedDepth, warpedMask
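
A minimal Elixir sketch (placeholder image, depth, and pose; the identity rt leaves the frame where it is, and a 4x4 CV_64F transform is assumed):

image = Evision.Mat.zeros({480, 640}, :u8)   # CV_8UC1
depth = Evision.Mat.zeros({480, 640}, :f32)  # meters, as accepted by depthTo3d
mask = Evision.Mat.ones({480, 640}, :u8)     # use every pixel

# 4x4 identity transform (placeholder pose)
rt =
  Evision.Mat.literal(
    [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]],
    :f64
  )

camera_matrix =
  Evision.Mat.literal([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]], :f32)
dist_coeff = Evision.Mat.zeros({4, 1}, :f32)

{warped_image, warped_depth, warped_mask} =
  Evision.RGBD.warpFrame(image, depth, mask, rt, camera_matrix, dist_coeff)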