librealsense: cannot stream depth, colour, gyro and accel all at once using the python3 wrapper
| Required Info | |
|---|---|
| Camera Model | D400 |
| Firmware Version | 05.12.03.00 |
| Operating System & Version | Ubuntu 18.04 |
| Kernel Version (Linux Only) | 4.9.140-tegra |
| Platform | Jetson Nano (JetPack 4.3) |
| SDK Version | 2.33.1 |
| Language | Python |
| Segment | Robot |
I am trying to gather depth, colour, accel and gyro data from the D435i, but the pipeline throws a timeout error. I have read through the existing posts and cannot find a solid answer on how this should be accomplished; any help would be greatly appreciated.
Here is my code. It is not actually reading the gyro data yet, just setting up the camera streams to run alongside each other, but with no luck.
```python
from threading import Thread
import pyrealsense2 as rs
import numpy as np
import cv2, sys, time


class VideoStream:
    def __init__(self, resolution=(640, 480), framerate=15):
        self.FOV = 100.4
        # point conversion
        self.num_sectors = 21  # number of sections
        self.pixel_group = resolution[0] / self.num_sectors
        self.distance_array = [0] * self.num_sectors
        self.color_image = np.zeros((resolution[0], resolution[1]))
        self.depth_image = self.color_image
        self.camera_stopped = False
        self.resolution = resolution
        self.framerate = framerate
        self.pipeline = rs.pipeline()
        self.config = rs.config()
        self.config.enable_stream(rs.stream.depth, resolution[0], resolution[1], rs.format.z16, framerate)
        self.config.enable_stream(rs.stream.color, resolution[0], resolution[1], rs.format.bgr8, framerate)
        self.config.enable_stream(rs.stream.gyro)

    def start_camera(self):
        # start the thread to read frames from the video stream
        self.pipeline.start(self.config)
        Thread(target=self.update).start()

    def update(self):
        try:
            while True:
                # Wait for a coherent pair of frames: depth and color
                frames = self.pipeline.wait_for_frames()
                depth_frame = frames.get_depth_frame()
                color_frame = frames.get_color_frame()
                if not depth_frame or not color_frame:
                    continue
                # Convert images to numpy arrays
                self.depth_image = np.asanyarray(depth_frame.get_data())
                self.color_image = np.asanyarray(color_frame.get_data())
                # Bin pixels and determine local minima
                depth_index = []
                for idx in range(self.num_sectors):
                    a = int(idx * self.pixel_group)
                    b = int((idx + 1) * self.pixel_group)
                    self.depth_image[self.depth_image == 0] = 3000
                    depth_index = np.append(depth_index, np.min(self.depth_image[:, a:b]))
                self.depth_index = depth_index
        except:
            print("Error in Vision", sys.exc_info())
        finally:
            self.pipeline.stop()

    def read(self):
        # return the frame most recently read
        return self.color_image

    def stop_camera(self):
        # indicate that the thread should be camera_stopped
        self.camera_started = False
        self.camera_stopped = True


if __name__ == '__main__':
    vs = VideoStream()
    vs.start_camera()
    time.sleep(5)
    while True:
        cv2.imshow("image", vs.color_image)
        print("array", vs.depth_index)
        cv2.waitKey(1)
```
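The sector-binning logic in `update()` can be checked without a camera attached. A minimal NumPy sketch (`sector_minima` is a hypothetical helper mirroring the loop above, with the same zero-to-3000 fill and 21 vertical strips):

```python
import numpy as np

# Parameters mirroring the VideoStream defaults above.
WIDTH, HEIGHT = 640, 480
NUM_SECTORS = 21
PIXEL_GROUP = WIDTH / NUM_SECTORS

def sector_minima(depth_image, fill=3000):
    """Replace zero (invalid) depths with `fill`, then take the
    minimum depth in each of NUM_SECTORS vertical strips."""
    depth = depth_image.copy()
    depth[depth == 0] = fill
    minima = []
    for idx in range(NUM_SECTORS):
        a = int(idx * PIXEL_GROUP)
        b = int((idx + 1) * PIXEL_GROUP)
        minima.append(int(depth[:, a:b].min()))
    return minima

# Synthetic frame: all invalid (zero) except one close pixel in the first strip.
frame = np.zeros((HEIGHT, WIDTH), dtype=np.uint16)
frame[100, 5] = 250
print(sector_minima(frame)[:3])  # → [250, 3000, 3000]
```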
This is the error I get when I leave `self.config.enable_stream(rs.stream.gyro)` uncommented:

```
Error in Vision (<class 'RuntimeError'>, RuntimeError("Frame didn't arrived within 5000",), <traceback object at 0x7f96f6e688>)
```
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 20
@BetterRobotics If you could please post a separate case explaining what you would like documented better then I can give it a ‘Documentation’ label so that it can be tracked.
In the meantime, given that there is more than one report of potential issues with the IMU in the recent SDK versions, I will add the ‘Bug’ label to this particular case so it can be tracked and investigated.