librealsense: Getting wrong values for 3D point and rs2_deproject_pixel_to_point
Using ROS1 C++. RealSense D435i, latest version of the RealSense libraries, on Ubuntu 20.04
Introduction
In the following piece of code, I have a point from a D435i camera, post-processed for object detection, and I am interested in the 3D coordinates of the point (sideA.x, sideA.y), which correspond to (column, row) in the color frame.
rs2_intrinsics intrinsics;
intrinsics = selection.get_stream(RS2_STREAM_COLOR).as<rs2::video_stream_profile>().get_intrinsics();
float pixel[2];
float my3Dpoint[3]; // x, y, z
pixel[0] = sideA.x;
pixel[1] = sideA.y;
rs2_deproject_pixel_to_point (my3Dpoint, &intrinsics, pixel, heightOfCamera);
The camera is mounted on the wrist of a 6-DOF arm and positioned at a pose double r=0., p=1.5, y=0,
which means it is aligned with the world-frame x and y axes, rotated to face the flat surface below (the x-y plane) and parallel to it, at a known camera height given by the pose's z value adjusted by the camera's actual offset from the end effector:
float heightOfCamera = nextPose.position.z + 0.03;
Before using the robot's pose z as the source of the point's depth, I used this:
float depthofPoint = depth.get_distance (sideA.x, sideA.y);
rs2_deproject_pixel_to_point (my3Dpoint, &intrinsics, pixel, depthofPoint);
(BTW, I am doing this exercise in an effort to diagnose problems with code I wrote to use an Eye-on-Hand calibration matrix…)
The Problem
The return values for x and y in my3Dpoint are smaller than the actual values by about 40%, with both methods of determining the pixel depth needed by rs2_deproject_pixel_to_point.
I've read the material pointed to in this reply but did not find a guiding light there.
Guidance is much appreciated… Thanks in advance, Dave
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 72
My support colleagues have analyzed this case and provided feedback.
It should not be assumed that the center of the image will coincide with (0, 0, 0). Instead, it depends on the sensor. The principal point may not be exactly the center of the sensor. Often, there is a small offset.
There are multiple ways to find the 3D point in the depth coordinates system from a known pixel in the color image. Scripts for three different methods are provided below.
Method 1 - align depth to color, deproject the pixel into a point in the color coordinate system, then transform the point back to the depth coordinate system.
Method 2 - do not align; use the original depth and color images with rs2_project_color_pixel_to_depth_pixel, then rs2_deproject_pixel_to_point.
Method 3 - align depth and use a point cloud.