librealsense: Pyrealsense2: Question about 3D reconstruction problem
Issue Description
Hello, I have recently been using pyrealsense2 for 3D reconstruction with the SR300, and I have the following questions:
1. Is there a better way to get a textured 3D point cloud in pyrealsense2? My method is to first get the 3D point cloud coordinates and the corresponding color image, and then map the two together into a textured point cloud (so far I am doing this step in MATLAB). However, the two are not correctly aligned in the result (the problem might be in my code).
Code in Python:
import numpy as np
import pyrealsense2 as rs

# Start the pipeline with the default depth + color streams
pipeline = rs.pipeline()
pipeline.start()

pc = rs.pointcloud()
frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()
color = frames.get_color_frame()

pc.map_to(color)
points = pc.calculate(depth)

color_img = np.asanyarray(color.get_data())
depth_img = np.asanyarray(depth.get_data())

# Copy the structured vertex buffer into a plain (N, 3) float array
vtx = np.asanyarray(points.get_vertices())
pt_vtx = np.zeros((len(vtx), 3), float)
for i in range(len(vtx)):
    pt_vtx[i][0] = float(vtx[i][0])
    pt_vtx[i][1] = float(vtx[i][1])
    pt_vtx[i][2] = float(vtx[i][2])
Code in MATLAB (the data I pass to MATLAB are color_img and vtx, saved from Python):
% readNPY comes from the npy-matlab package
pc = readNPY('data.npy');
img = imread('img_color.png');
color = double(reshape(im2double(imrotate(img, 90)), [480*640 3]));
pcshow([pc(:,1), pc(:,2), pc(:,3)], color);
However, the image is not correctly aligned with the point cloud (could the problem be the difference between the left and right image?).
The final result I want to get is similar to the result in this demo: https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud
2. Is there a direct way to get the extrinsic parameters of the camera? I want the extrinsics of the camera relative to the target object in the image. However, I do not know what to pass to these functions to get the corresponding parameters:
rs.extrinsics.translation
rs.extrinsics.rotation
rotation = property(lambda self: object(), lambda self, v: None, lambda self: None)
Sorry for the long questions, and I appreciate the help.
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 1
- Comments: 19 (4 by maintainers)
Here is my version of the realsense code for anyone who needs it:
@ravan786 Sorry, I do not have experience with such an approach or with building a point cloud from multiple frames. I imagine it will be necessary to express the points in a common reference frame, so what I would do is tie the points from multiple color+depth frame pairs to one frame of reference (e.g. a world frame) and then present them as a single point cloud in that frame, as in the sketch below.
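A minimal sketch of that idea, assuming the camera-to-world pose of each frame is already known from some external source (tracking, registration, etc.); the function and variable names here are illustrative, not part of the SDK:

import numpy as np

def merge_point_clouds(clouds, poses):
    # clouds: list of (N_i, 3) arrays, each in its own camera frame
    # poses: list of 4x4 camera-to-world transforms, one per cloud (assumed known)
    merged = []
    for cloud, pose in zip(clouds, poses):
        homog = np.hstack([cloud, np.ones((cloud.shape[0], 1))])  # (N, 4) homogeneous points
        world = (pose @ homog.T).T[:, :3]                         # transform and drop the w column
        merged.append(world)
    return np.vstack(merged)                                      # one combined cloud in the world frame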
Hi @ljc19800331 ,
tl;dr: You should use points.get_texture_coordinates().

In detail: Please note that pc.map_to(color) does not change the color image; it takes the color image and maps it onto the point cloud as its texture, using the calibration data of the streams that produced these images (which your code does not use, hence the misalignment). After calling pc.map_to(color) you should call points.get_texture_coordinates(), which returns a buffer protocol object that can be put into a numpy array. This array has the same size as points, and its i-th element contains the texture coordinates ([u, v]) of the i-th element in the vertices array. The texture coordinates are the coordinates (valued in the range [0, 1]) of the pixel in the color image you passed to map_to.
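A minimal sketch of how those texture coordinates might be turned into per-vertex colors, assuming the color, pc, and points objects from the code in the question (untested against your exact setup):

import numpy as np

vtx = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)             # (N, 3) xyz
tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)  # (N, 2) uv in [0, 1]

color_img = np.asanyarray(color.get_data())    # (H, W, 3) color image used in map_to
h, w, _ = color_img.shape

# Convert normalized texture coordinates to pixel indices, clamped to the image bounds
u = np.clip((tex[:, 0] * w).astype(int), 0, w - 1)
v = np.clip((tex[:, 1] * h).astype(int), 0, h - 1)
vtx_colors = color_img[v, u]                   # per-vertex colors aligned with vtx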
Regarding the second question: yes 😃 I assume you want to de/project by yourself, so you will also need the intrinsics parameters and the depth scale. The SDK maps a RealSense device in the following hierarchy:
Each stream profile is either a video, motion, or pose stream (at the moment). Video stream profiles provide camera intrinsics, and motion stream profiles provide motion device intrinsics. Inside the library we keep the extrinsic data between the streams, and you can ask each stream for its transformation (extrinsics) to another stream.
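As an illustration (a minimal sketch, assuming the pipeline from the question has already been started), the intrinsics, the depth-to-color extrinsics, and the depth scale could be queried like this:

import pyrealsense2 as rs

profile = pipeline.get_active_profile()
depth_profile = profile.get_stream(rs.stream.depth).as_video_stream_profile()
color_profile = profile.get_stream(rs.stream.color).as_video_stream_profile()

depth_intrinsics = depth_profile.get_intrinsics()                  # fx, fy, ppx, ppy, distortion model
depth_to_color = depth_profile.get_extrinsics_to(color_profile)    # rotation + translation between the streams
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()  # meters per depth unit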
Here’s how to get this data in python and also some additional data, and finally mapping a depth pixel to a color one: