moveit: Insufficient maximum volume handling in Octomap

EDIT: The first part of this issue has been solved. See https://github.com/ros-planning/moveit/issues/2709#issuecomment-908128055 and the follow-up answer for the unaddressed parts of the issue.

Description

The parameter max_range (typically set in sensor_manager.launch.xml or sensors_3d.yaml) seems to apply only to occupied voxels, not to free voxels.
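
For reference, a minimal sensors_3d.yaml sketch of the kind referred to here (topic and values are illustrative, not taken from this issue):

```yaml
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    point_cloud_topic: /camera/depth/points  # placeholder topic
    max_range: 3.0                           # meters; the parameter in question
    point_subsample: 1
    padding_offset: 0.1
    padding_scale: 1.0
```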

Is this the intended behaviour? Updating the Octomap with scan data that reaches out to 10-15 meters makes my computer very slow, even with a max range of 3 meters. RViz also has a lot of trouble switching between “All Voxels”, “Occupied Voxels” and “Free Voxels”.

Your environment

  • ROS Distro: Noetic
  • OS Version: Ubuntu 20.04 (Pop!_OS 20.04 LTS)
  • Everything is installed via apt
$ apt search ros-noetic-moveit
Sorting... Done
Full Text Search... Done
ros-noetic-moveit/focal,now 1.1.5-1focal.20210524.225952 amd64 [installed]
  Meta package that contains all essential package of MoveIt.

About this issue

  • State: open
  • Created 3 years ago
  • Comments: 17 (14 by maintainers)

Most upvoted comments

First of all, note that I have never used the octomap feature myself, so correct me if I’m wrong. As far as I understand, it explicitly represents both occupied and free voxels.

These octomaps represent a voxel grid by storing explicit voxel nodes, each with an occupancy log probability. Every voxel not explicitly represented by such a node has no value assigned. The key feature of the tree is pruning in (aligned) powers of two: if all child nodes of an aligned block of twice the current voxel size share the same log probability, they are pruned into their single “parent” node.
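
A tiny standalone sketch of that pruning behavior (plain octomap, nothing MoveIt-specific; resolution and extents are arbitrary):

```cpp
#include <octomap/OcTree.h>
#include <iostream>

int main()
{
  // 5 cm resolution; eight children with identical log-odds values
  // are collapsed into their parent node, recursively.
  octomap::OcTree tree(0.05);

  // Mark an axis-aligned 0.8 m cube as free, one voxel center at a time.
  for (double x = 0.025; x < 0.8; x += 0.05)
    for (double y = 0.025; y < 0.8; y += 0.05)
      for (double z = 0.025; z < 0.8; z += 0.05)
        tree.updateNode(octomap::point3d(x, y, z), false);  // false = miss

  tree.prune();
  // Far fewer leaves than the 16^3 = 4096 updated voxels, because
  // aligned blocks of equal probability were merged.
  std::cout << tree.getNumLeafNodes() << " leaf nodes\n";
}
```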

I noticed that before, free voxels were represented by “a few” large voxels, while after this PR we see many small voxels along the max_range boundary.

That took me a while to think through. You are right. I believe raytracing through the whole volume of empty space will often result in pruned octomap nodes. The max_range sphere does not align with any aligned grid boundary, so any explicit voxel (with a probability assigned) on the outer hull of the sphere will almost surely be represented at the finest resolution. That’s suboptimal. At the same time, the current implementation creates nodes for a lot less volume (even though the nodes are bigger). It’s a trade-off, and either implementation might be bad for an average cloud input (if such a thing exists). I don’t think there is a valid argument either way without solid benchmarks, though, and I’m not willing to look into that.
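
To make the trade-off concrete, a small comparison of how an aligned cube of free space prunes versus a max_range-style sphere (again plain octomap; numbers are arbitrary):

```cpp
#include <octomap/OcTree.h>
#include <iostream>

int main()
{
  octomap::OcTree cube_tree(0.05), sphere_tree(0.05);

  for (double x = -0.975; x < 1.0; x += 0.05)
    for (double y = -0.975; y < 1.0; y += 0.05)
      for (double z = -0.975; z < 1.0; z += 0.05)
      {
        octomap::point3d p(x, y, z);
        cube_tree.updateNode(p, false);      // aligned cube: prunes well
        if (p.norm() <= 1.0)
          sphere_tree.updateNode(p, false);  // sphere: ragged, unaligned hull
      }

  cube_tree.prune();
  sphere_tree.prune();
  // Expect many more leaves for the sphere even though it covers less
  // volume: its hull cuts through every aligned block it touches.
  std::cout << "cube:   " << cube_tree.getNumLeafNodes() << " leaves\n"
            << "sphere: " << sphere_tree.getNumLeafNodes() << " leaves\n";
}
```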

In support of my patch, I would simply argue that max_range should behave in an intuitive way: no explicit voxels are added anywhere outside the sphere, which is exactly what the patch does.

I’m not sure whether the octomap additionally restricts the maximum represented volume somewhere. Looking through the code, I’m unsure how expanding volumes are handled. At least the max_range parameter in our templates is not used for it. (It does not seem to be used for anything right now 😕) I found that we can set a bounding box for the octomap and tried to use it, but reading octomap’s sources, this functionality is not supported by the updateNode method we use…
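
For context, a sketch of the bounding-box API mentioned here: in octomap’s sources the batch insertPointCloud() path consults it, while updateNode() does not.

```cpp
#include <octomap/OcTree.h>
#include <octomap/Pointcloud.h>

int main()
{
  octomap::OcTree tree(0.05);

  // Restrict integration to a 3 m cube around the origin.
  octomap::point3d bbx_min(-1.5, -1.5, -1.5);
  octomap::point3d bbx_max(1.5, 1.5, 1.5);
  tree.setBBXMin(bbx_min);
  tree.setBBXMax(bbx_max);
  tree.useBBXLimit(true);

  octomap::Pointcloud scan;
  scan.push_back(octomap::point3d(10.0f, 0.0f, 0.0f));  // endpoint far outside the box

  // insertPointCloud() consults the bounding box; a direct
  // tree.updateNode(point, true) would ignore it.
  tree.insertPointCloud(scan, octomap::point3d(0.f, 0.f, 0.f));
}
```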

A side point is that octomap’s own insertRay method, which we could actually use in our code if anyone were interested in looking into this, also truncates rays at a given max_range.
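
For illustration, a minimal use of that method (endpoint and range values are arbitrary):

```cpp
#include <octomap/OcTree.h>

int main()
{
  octomap::OcTree tree(0.05);
  octomap::point3d origin(0.f, 0.f, 0.f);
  octomap::point3d endpoint(15.f, 0.f, 0.f);  // measurement 15 m away

  // With maxrange = 3.0, the ray is cut at 3 m: only the first 3 m are
  // integrated as free space, and no occupied endpoint is inserted.
  tree.insertRay(origin, endpoint, /*maxrange=*/3.0);
}
```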

It is possible to have both options by adding a clip_distance parameter, in addition to max_range. Setting max_range to 0 and clip_distance to something non-zero could give the behavior implemented in your PR.

I dislike the approach because clipping should never mean that there is a different free space to consider. Either it’s the same distance or 0.0.

Looking at the code again, the whole block is already wrapped in a !isnan conditional. So, assuming invalid measured points are indicated by NaN (as is the convention in PCL, afaik), we only add valid measurements that are too far away to clip_cells.
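
A simplified sketch of that flow, with approximate names (not a verbatim MoveIt excerpt):

```cpp
#include <cmath>
#include <octomap/OcTree.h>

// Simplified paraphrase of the updater logic discussed above.
void classifyEndpoint(const octomap::point3d& sensor_origin, const octomap::point3d& point,
                      double max_range, const octomap::OcTree& tree,
                      octomap::KeySet& occupied_cells, octomap::KeySet& clip_cells)
{
  // Invalid measurements are NaN by PCL convention; the real code wraps
  // the whole block in a !isnan conditional, mirrored here.
  if (std::isnan(point.x()) || std::isnan(point.y()) || std::isnan(point.z()))
    return;

  if ((point - sensor_origin).norm() > max_range)
    clip_cells.insert(tree.coordToKey(point));      // valid hit beyond max_range
  else
    occupied_cells.insert(tree.coordToKey(point));  // hit within sensor range
}
```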

I’ll wait for feedback from @rhaschke here.