RTABMap (Real-Time Appearance-Based Mapping) SLAM is now integrated with the Kinect v2 ROS2 system, providing real-time 3D mapping and localization.
```bash
./launch_kinect_slam.sh
```

This will start:
- TF publishers (map → odom → camera_link → kinect2_rgb_optical_frame)
- Kinect v2 publisher
- RTABMap SLAM node
- RViz with SLAM visualization
Verify the setup:

```bash
./test_slam.sh
```

Published topics:

- `/rtabmap/mapData` - Complete SLAM map data
- `/rtabmap/odom` - Visual odometry (camera pose estimation)
- `/rtabmap/cloud_map` - 3D point cloud map
- `/rtabmap/odom_path` - Trajectory path
- `/rtabmap/graph_nodes` - SLAM graph nodes (for visualization)
- `/rtabmap/info` - SLAM statistics and performance metrics
Subscribed topics:

- `/kinect2/sd/image_color` - RGB images
- `/kinect2/sd/image_depth` - Depth images
- `/kinect2/sd/camera_info` - Camera calibration
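The camera pose published on `/rtabmap/odom` carries its orientation as a quaternion. As a minimal dependency-free sketch (assuming the standard `geometry_msgs` x, y, z, w field order), the heading (yaw) can be recovered like this:

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Yaw angle (rotation about Z, in radians) of a unit quaternion,
    e.g. the orientation field of an /rtabmap/odom message."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
```

For example, a quaternion representing a 90-degree turn about Z, roughly (0, 0, 0.7071, 0.7071), yields a yaw of about pi/2.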
The SLAM RViz config (kinect2_slam_rviz.rviz) shows:
- Grid - Reference grid in map frame
- TF - Transform tree visualization
- RGB Image - Live camera feed
- RTABMap Cloud - 3D point cloud map (colored)
- RTABMap Graph - SLAM graph nodes and connections
- Odometry Path - Camera trajectory (green line)
```bash
# View SLAM info
ros2 topic echo /rtabmap/info

# Check odometry rate
ros2 topic hz /rtabmap/odom

# View map statistics
ros2 topic echo /rtabmap/mapData --once

# Monitor all SLAM topics
ros2 topic list | grep rtabmap

# Check topic rates
ros2 topic hz /rtabmap/cloud_map
ros2 topic hz /rtabmap/odom
```

The launch script uses these parameters:
```bash
rtabmap_args:="--delete_db_on_start"  # Start fresh each time
rgb_topic:=/kinect2/sd/image_color
depth_topic:=/kinect2/sd/image_depth
camera_info_topic:=/kinect2/sd/camera_info
frame_id:=camera_link
approx_sync:=true                     # Sync RGB and depth approximately
wait_imu_to_init:=false               # Don't wait for IMU
qos:=2                                # QoS reliability
```

To save the map for later use, remove `--delete_db_on_start`:
```bash
ros2 launch rtabmap_launch rtabmap.launch.py \
    rgb_topic:=/kinect2/sd/image_color \
    depth_topic:=/kinect2/sd/image_depth \
    camera_info_topic:=/kinect2/sd/camera_info \
    frame_id:=camera_link \
    approx_sync:=true
```

The map will be saved to `~/.ros/rtabmap.db`.
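The `approx_sync:=true` argument pairs each RGB frame with the depth frame closest in time, since the two streams are not hardware-synchronized. A simplified pure-Python sketch of that pairing logic (the real synchronization is done by `message_filters` inside RTABMap; `slop` here is an assumed tolerance for illustration, not an RTABMap parameter name):

```python
def approx_pair(rgb_stamps, depth_stamps, slop=0.05):
    """Pair RGB and depth timestamps (seconds) whose difference is within
    `slop` seconds -- a simplified model of approximate time sync."""
    if not depth_stamps:
        return []
    pairs = []
    di = 0
    for t_rgb in rgb_stamps:
        # Advance to the depth stamp closest to this RGB stamp.
        while (di + 1 < len(depth_stamps)
               and abs(depth_stamps[di + 1] - t_rgb) <= abs(depth_stamps[di] - t_rgb)):
            di += 1
        if abs(depth_stamps[di] - t_rgb) <= slop:
            pairs.append((t_rgb, depth_stamps[di]))
    return pairs
```

For instance, `approx_pair([0.00, 0.10, 0.20], [0.01, 0.11, 0.30])` pairs the first two frames and drops the third, whose nearest depth frame is too far away in time.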
To continue mapping with the saved database across sessions:

```bash
ros2 launch rtabmap_launch rtabmap.launch.py \
    rtabmap_args:="--Rtabmap/StartNewMapOnLoopClosure true" \
    rgb_topic:=/kinect2/sd/image_color \
    depth_topic:=/kinect2/sd/image_depth \
    camera_info_topic:=/kinect2/sd/camera_info
```

To use an existing map for localization only (no mapping):
```bash
ros2 launch rtabmap_launch rtabmap.launch.py \
    localization:=true \
    rgb_topic:=/kinect2/sd/image_color \
    depth_topic:=/kinect2/sd/image_depth \
    camera_info_topic:=/kinect2/sd/camera_info
```

To consume SLAM output in your own nodes:

```python
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry
from sensor_msgs.msg import PointCloud2
from rtabmap_msgs.msg import Info


class SLAMIntegrationNode(Node):
    def __init__(self):
        super().__init__('slam_integration')
        # Subscribe to odometry
        self.odom_sub = self.create_subscription(
            Odometry, '/rtabmap/odom',
            self.odom_callback, 10)
        # Subscribe to point cloud map
        self.cloud_sub = self.create_subscription(
            PointCloud2, '/rtabmap/cloud_map',
            self.cloud_callback, 10)
        # Subscribe to SLAM info
        self.info_sub = self.create_subscription(
            Info, '/rtabmap/info',
            self.info_callback, 10)

    def odom_callback(self, msg):
        # Get camera pose; use for navigation, object detection context, etc.
        position = msg.pose.pose.position
        orientation = msg.pose.pose.orientation

    def cloud_callback(self, msg):
        # Process the 3D map: obstacle detection, path planning, etc.
        pass

    def info_callback(self, msg):
        # SLAM statistics; use to monitor SLAM health
        loop_closures = msg.loop_closure_id
```

To attach SLAM poses to object detections (`YOLODetector` and `transform_to_world` are placeholders for your own detector and projection code):

```python
from sensor_msgs.msg import Image


class SLAMObjectDetectionNode(Node):
    def __init__(self):
        super().__init__('slam_object_detection')
        # Subscribe to RGB and odometry
        self.rgb_sub = self.create_subscription(
            Image, '/kinect2/sd/image_color',
            self.rgb_callback, 10)
        self.odom_sub = self.create_subscription(
            Odometry, '/rtabmap/odom',
            self.odom_callback, 10)
        self.current_pose = None
        self.detector = YOLODetector()  # placeholder: your detector

    def odom_callback(self, msg):
        self.current_pose = msg.pose.pose

    def rgb_callback(self, msg):
        # Detect objects
        detections = self.detector.detect(msg)
        # Add 3D position using the current SLAM pose
        for det in detections:
            det.world_position = self.transform_to_world(
                det.bbox, self.current_pose)
```
Check if RTABMap is installed:

```bash
ros2 pkg list | grep rtabmap
```

Install if missing:

```bash
sudo apt install ros-jazzy-rtabmap-ros
```

- Check topics are publishing:

  ```bash
  ros2 topic hz /kinect2/sd/image_color
  ros2 topic hz /kinect2/sd/image_depth
  ```

- Check TF tree:

  ```bash
  ros2 run tf2_tools view_frames
  ```

- Monitor SLAM info:

  ```bash
  ros2 topic echo /rtabmap/info
  ```
- Move camera slowly - Fast movements can cause tracking loss
- Ensure good lighting - Poor lighting affects visual odometry
- Add visual features - Blank walls are hard to track
- Check frame rates - Should be >10 FPS for good SLAM
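The frame-rate check corresponds to what `ros2 topic hz` prints: messages per second over the observed window. As a trivial sketch of that calculation:

```python
def mean_rate_hz(stamps):
    """Mean message rate in Hz from a list of receive timestamps (seconds);
    this is the headline number `ros2 topic hz` reports."""
    if len(stamps) < 2:
        return 0.0
    return (len(stamps) - 1) / (stamps[-1] - stamps[0])
```

For example, `mean_rate_hz([0.0, 0.5, 1.0])` returns 2.0; values much below 10 on the image topics point to the likely cause of tracking loss.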
Loop closure requires:
- Revisiting previously mapped areas
- Good visual features
- Sufficient overlap between views
Monitor loop closures:
```bash
ros2 topic echo /rtabmap/info | grep loop_closure_id
```

For better performance, use lower resolution:
- Modify the Kinect publisher to use QHD instead of HD
- Or adjust RTABMap decimation parameters

Create a custom RTABMap config:
```bash
ros2 launch rtabmap_launch rtabmap.launch.py \
    rtabmap_args:="--Rtabmap/DetectionRate 1 --Vis/MaxFeatures 400"
```

Common parameters:
- `Rtabmap/DetectionRate` - Map update rate in Hz (default: 1)
- `Vis/MaxFeatures` - Max visual features per frame (default: 1000)
- `RGBD/OptimizeMaxError` - Graph optimization error threshold
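`Vis/MaxFeatures` caps the number of keypoints kept per frame; conceptually, features are ranked by detector response and the weakest are discarded. A toy illustration of that cap (the `(name, response)` tuples here are hypothetical, not RTABMap's actual feature structure):

```python
def keep_strongest(features, max_features):
    """Keep the max_features features with the highest response score --
    conceptually what the Vis/MaxFeatures cap does per frame."""
    return sorted(features, key=lambda f: f[1], reverse=True)[:max_features]
```

For example, `keep_strongest([("a", 0.2), ("b", 0.9), ("c", 0.5)], 2)` keeps `b` and `c`. Lowering the cap trades tracking robustness for CPU time.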
Files:

- `launch_kinect_slam.sh` - Main SLAM launcher
- `kinect2_slam_rviz.rviz` - RViz configuration for SLAM
- `test_slam.sh` - SLAM setup verification
- Test SLAM: `./test_slam.sh`
- Launch the system: `./launch_kinect_slam.sh`
- Move the camera around to build a map
- Monitor in RViz - watch the point cloud and trajectory
- Integrate with your pipeline - use the odometry and map data