Running ORB-SLAM3 in real time with a D435i under ROS on Ubuntu 20.04
Install a ROS environment beforehand; here I used Fishros's one-line installer:
wget http://fishros.com/install -O fishros && bash fishros
Also make sure ORB-SLAM3 itself already builds and runs; for details see:
https://blog.ZEEKLOG.net/2302_80099075/article/details/140306463?spm=1011.2415.3001.5331
1. Installing the Intel RealSense SDK 2.0
Before starting, make sure both the ROS and ORB-SLAM3 environments are set up and that ORB-SLAM3 runs without errors.
Some tutorials install the SDK from the command line. I tried that route, and compiling realsense-ros afterwards kept failing with:
CMake Error at realsense-ros/realsense2_camera/CMakeLists.txt:47 (message): Intel RealSense SDK 2.0 is missing, please install it from https://github.com/IntelRealSense/librealsense/releases
The cause: the command-line-installed Intel RealSense SDK configuration referenced ROS 2 dependencies, while the installed ROS is ROS 1, so the build failed every time.

So here the SDK is installed from source.
Run the following in a terminal to download the source (I put it in the home directory):
git clone https://github.com/IntelRealSense/librealsense.git
In the librealsense directory, install the udev permission rules:
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
Install the dependencies:
sudo apt-get install git libssl-dev libusb-1.0-0-dev libudev-dev pkg-config libgtk-3-dev
sudo apt-get install libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev
Still in the librealsense directory, build and install the Intel RealSense SDK 2.0:
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DFORCE_RSUSB_BACKEND=false
make
sudo make install
Check that the installation succeeded:
realsense-viewer
The camera should be connected to a USB 3.0 port; otherwise you may run into problems at this step.


2. Installing realsense-ros
Create a workspace in the home directory:
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src/
Clone the realsense-ros and ddynamic_reconfigure sources into catkin_ws/src/:
git clone https://github.com/IntelRealSense/realsense-ros.git
git clone https://github.com/pal-robotics/ddynamic_reconfigure.git
In the realsense-ros directory, list the branches:
git branch -a
The current branch is ros2-master, but this system runs ROS 1, so switch to the ros1-legacy branch:
git checkout ros1-legacy
Back in the catkin_ws directory, build and install realsense-ros:
catkin_make -j$(nproc)
catkin_make install
Add the workspace environment to your bash setup:
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
Start the camera:
roslaunch realsense2_camera rs_camera.launch
If the launch complains that the package cannot be found, run source ~/catkin_ws/devel/setup.bash manually.
In another terminal, start rviz and check that each topic displays correctly.
In RViz:
1. Set Fixed Frame to camera_link.
2. Add a display: click Add in the lower-left corner, select Camera, and set Image Topic to /camera/color/image_raw.

3. D435i color camera intrinsic calibration
Install the camera calibration tool:
sudo apt-get install ros-noetic-camera-calibration
Download a checkerboard calibration pattern from:
https://docs.opencv.org/2.4/_downloads/pattern.png
Start the camera in a terminal:
roslaunch realsense2_camera rs_camera.launch
Open a new terminal and run:
rosrun camera_calibration cameracalibrator.py --size 9x6 --square 0.024 image:=/camera/color/image_raw camera:=/camera/color --no-service-check
--size 9x6: the checkerboard size (9x6, counting interior corners only)
--square 0.024: the side length of one checkerboard square, in meters
Adjust both to match your actual board.
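Note that --size counts interior corners, not squares: a board with C columns and R rows of squares has (C-1)x(R-1) interior corners. A minimal sketch (the 10x7 square count below is only an illustrative assumption, not taken from the pattern above):

```python
def interior_corners(squares_cols, squares_rows):
    """Chessboard interior corners, as expected by --size."""
    return squares_cols - 1, squares_rows - 1

# A board with 10x7 squares yields the 9x6 corner grid used above.
print(interior_corners(10, 7))  # -> (9, 6)
```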

When it runs, the calibration window looks like this:

Move the camera along x (left/right), y (up/down), size (forward/back), and skew (tilt) until the x, y, size, and skew progress bars all turn green. Then press CALIBRATE and wait a while for the calibration to complete. Once done, click SAVE to store the images used for calibration together with the results; the save path is printed, so you can open and inspect it. Finally click COMMIT to exit. An error on clicking COMMIT is normal, because the RealSense camera node does not provide that service; it does not affect the use of the results. Just save the calibration parameters and apply them to the ORB-SLAM3 configuration file by hand.
The calibration data is written to /tmp/calibrationdata.tar.gz. Find the results file inside and use its values to edit Asus.yaml in the Examples/ROS/ORB_SLAM3 directory.

Here is the content of my calibration file ost.yaml:
image_width: 1280              # image width (pixels)
image_height: 720              # image height (pixels)
camera_name: narrow_stereo     # camera name (commonly used for stereo rigs)
camera_matrix:                 # camera intrinsic matrix
  rows: 3
  cols: 3
  data: [899.36111,   0.     , 649.57404,   # fx, 0, cx
           0.     , 901.64597, 361.88024,   # 0, fy, cy
           0.     ,   0.     ,   1.     ]   # 0, 0, 1
distortion_model: plumb_bob    # distortion model (Brown-Conrady)
distortion_coefficients:       # distortion coefficients [k1, k2, p1, p2, k3]
  rows: 1
  cols: 5
  data: [0.108620, -0.215185, 0.000762, -0.002215, 0.000000]
rectification_matrix:          # rectification matrix
  rows: 3
  cols: 3
  data: [1., 0., 0., 0., 1., 0., 0., 0., 1.]
projection_matrix:             # projection matrix
  rows: 3
  cols: 4
  data: [898.53308,   0.     , 646.8581 , 0.,
           0.     , 913.9599 , 361.79539, 0.,
           0.     ,   0.     ,   1.     , 0.]
Editing Asus.yaml according to ost.yaml gives:
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
Camera.type: "PinHole"

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 899.36111
Camera.fy: 901.64597
Camera.cx: 649.57404
Camera.cy: 361.88024

Camera.k1: 0.108620
Camera.k2: -0.215185
Camera.p1: 0.000762
Camera.p2: -0.002215
Camera.k3: 0.000000

Camera.width: 1280
Camera.height: 720

# Camera frames per second
Camera.fps: 30.0

# IR projector baseline times fx (approx.)
# bf = fx * baseline; the D435i baseline is about 0.05 m
Camera.bf: 44.9680555   # 899.36111 * 0.05

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
ThDepth: 40.0

# Depthmap values factor (RealSense depth is in millimeters; convert to meters)
DepthMapFactor: 1000.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
Calibration is optional; you can also use the factory calibration parameters directly.
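As a quick sanity check on the Camera.bf and DepthMapFactor entries, here is a minimal sketch. The 0.05 m baseline is the approximate figure quoted in the yaml comment, not a measured value:

```python
fx = 899.36111    # from ost.yaml
baseline = 0.05   # approx. D435i baseline in meters (assumed, per the yaml comment)

# ORB-SLAM3 expects bf = fx * baseline
bf = fx * baseline
print(round(bf, 7))  # -> 44.9680555

# RealSense depth images are in millimeters; ORB-SLAM3 divides the raw
# depth value by DepthMapFactor (1000.0) to obtain meters.
depth_raw_mm = 1500
depth_m = depth_raw_mm / 1000.0
print(depth_m)  # -> 1.5
```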
Start the camera:
roslaunch realsense2_camera rs_camera.launch
With the camera running, run the following in another terminal to view the camera intrinsics:
rostopic echo /camera/color/camera_info 
frame_id: "camera_color_optical_frame"   # coordinate frame
height: 720                              # image height (pixels)
width: 1280                              # image width (pixels)
distortion_model: "plumb_bob"            # distortion model (Brown-Conrady)
D: [0.0, 0.0, 0.0, 0.0, 0.0]             # distortion coefficients [k1, k2, p1, p2, k3]
K: [907.001708984375, 0.0, 659.4490966796875,   # intrinsic matrix, row-major:
    0.0, 906.75634765625, 367.81591796875,      # [fx, 0, cx, 0, fy, cy, 0, 0, 1]
    0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0,                       # rotation (rectification) matrix
    0.0, 1.0, 0.0,
    0.0, 0.0, 1.0]
P: [907.001708984375, 0.0, 659.4490966796875, 0.0,   # projection matrix
    0.0, 906.75634765625, 367.81591796875, 0.0,
    0.0, 0.0, 1.0, 0.0]
Edit Asus.yaml under Examples/ROS/ORB_SLAM3 according to these values in the same way.
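The mapping from the camera_info fields to the ORB-SLAM3 yaml entries can be sketched as follows (plain Python, no ROS required; the K and D values are the ones echoed above):

```python
# K is the row-major 3x3 intrinsic matrix: [fx, 0, cx, 0, fy, cy, 0, 0, 1]
K = [907.001708984375, 0.0, 659.4490966796875,
     0.0, 906.75634765625, 367.81591796875,
     0.0, 0.0, 1.0]
D = [0.0, 0.0, 0.0, 0.0, 0.0]  # plumb_bob: [k1, k2, p1, p2, k3]

params = {
    "Camera.fx": K[0], "Camera.fy": K[4],
    "Camera.cx": K[2], "Camera.cy": K[5],
    "Camera.k1": D[0], "Camera.k2": D[1],
    "Camera.p1": D[2], "Camera.p2": D[3], "Camera.k3": D[4],
}
for name, value in params.items():
    print(f"{name}: {value}")
```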
4. Running ORB-SLAM3 under ROS
Place ORB_SLAM3 in the catkin_ws/src/ directory created earlier.
Copy the ROS folder from Examples_old into the Examples folder.
Run:
gedit ~/.bashrc
A file opens; add the following line at the end (adjust the path to your own):
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/home/ubuntu/catkin_ws/src/ORB_SLAM3/Examples/ROS
Save and exit, then reload the environment:
source ~/.bashrc
Check that the path you just added shows up:
echo ${ROS_PACKAGE_PATH}
Edit ORB_SLAM3/Examples/ROS/ORB_SLAM3/CMakeLists.txt as follows:
1. Change C++11 to C++14.
2. At line 2, add:
project(ORB_SLAM3)
3. At line 34, change find_package(OpenCV 3.0 QUIET) to find_package(OpenCV 4.2 QUIET).
4. At line 50, add the following inside include_directories:
${PROJECT_SOURCE_DIR}/../../../Thirdparty/Sophus
In ORB_SLAM3/Examples/ROS/ORB_SLAM3/src/AR, change line 151 of ros_mono_ar.cc from:
cv::Mat Tcw = mpSLAM->TrackMonocular(cv_ptr->image,cv_ptr->header.stamp.toSec());
to:
cv::Mat Tcw = ORB_SLAM3::Converter::toCvMat(mpSLAM->TrackMonocular(cv_ptr->image,cv_ptr->header.stamp.toSec()).matrix());
In the same directory, change lines 405 and 530 of ViewerAR.cc from:
vPoints.push_back(pMP->GetWorldPos());
cv::Mat Xw = pMP->GetWorldPos();
to:
vPoints.push_back(ORB_SLAM3::Converter::toCvMat(pMP->GetWorldPos()));
cv::Mat Xw = ORB_SLAM3::Converter::toCvMat(pMP->GetWorldPos());
Install rosdep, again with Fishros's one-line installer:
wget http://fishros.com/install -O fishros && bash fishros
After installing, update it:
rosdepc update
Then open a terminal in the ORB-SLAM3 directory and run the commands below. Make sure ./build.sh has finished first, because that step generates files the ROS build needs:
chmod +x build_ros.sh
./build_ros.sh
A successful build looks like this:

Lines 66 and 67 of ORB_SLAM3/Examples/ROS/ORB_SLAM3/src/ros_rgbd.cc need to be modified:
message_filters::Subscriber<sensor_msgs::Image> rgb_sub(nh, "/camera/rgb/image_raw", 100);
message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "camera/depth_registered/image_raw", 100);
Start the camera driver:
roslaunch realsense2_camera rs_camera.launch
and run rostopic list to see the available topics:

Change the subscriptions to the topics shown above:
message_filters::Subscriber<sensor_msgs::Image> rgb_sub(nh, "/camera/color/image_raw", 100);
message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "/camera/depth/image_rect_raw", 100);
After the change, rebuild.
Open a terminal and run:
roslaunch realsense2_camera rs_camera.launch
In another terminal, from /catkin_ws/src/ORB_SLAM3, run the following (if the paths are wrong it will fail to start):
rosrun ORB_SLAM3 RGBD Vocabulary/ORBvoc.txt Examples/ROS/ORB_SLAM3/Asus.yaml
A successful run looks like this:

Because same-named folders conflict inside a ROS package, I recommend deleting the Examples_old folder entirely to avoid a clash with the Examples/ROS/ORB_SLAM3 path.
If you still need to run dataset tests, move Examples_old somewhere else instead, since CMakeLists.txt links against its contents at build time; alternatively, just delete the Examples_old entries from CMakeLists.txt.
5. Enabling the IMU
In catkin_ws/src/realsense-ros/realsense2_camera/launch, find the rs_camera.launch file.
In it, set the enable_gyro and enable_accel parameters to true, i.e. change the defaults of:
<arg name="enable_gyro" default="false"/>
<arg name="enable_accel" default="false"/>
Then start the camera:
roslaunch realsense2_camera rs_camera.launch
In another terminal, list the topics:
rostopic list
The IMU-related topics now appear:

6. Dense mapping
When mapping densely, even a slight camera translation or rotation produced a map full of ghosting, completely unrecognizable.

The cause: the RGB image and the depth image were not aligned.
In catkin_ws/src/realsense-ros/realsense2_camera/launch, find the rs_camera.launch file.
In it, set the align_depth parameter to true:
<arg name="align_depth" default="true"/>
Then start the camera:
roslaunch realsense2_camera rs_camera.launch
In another terminal, list the topics:
rostopic list
The aligned-depth topics now appear.

Then, in Examples/ROS/ORB_SLAM3/src/, open ros_rgbd.cc and replace the subscribed topic /camera/depth/image_rect_raw with the depth-aligned topic /camera/aligned_depth_to_color/image_raw, i.e. change:
message_filters::Subscriber<sensor_msgs::Image> rgb_sub(nh, "/camera/color/image_raw", 100);
message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "/camera/depth/image_rect_raw", 100);
to:
message_filters::Subscriber<sensor_msgs::Image> rgb_sub(nh, "/camera/color/image_raw", 100);
message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "/camera/aligned_depth_to_color/image_raw", 100);
Then rebuild and map again.

The important parameters in Asus.yaml can follow this configuration:
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
Camera.type: "PinHole"

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 907.001708984375
Camera.fy: 906.75634765625
Camera.cx: 659.4490966796875
Camera.cy: 367.81591796875

Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.k3: 0.0

Camera.width: 1280
Camera.height: 720

# Camera frames per second
Camera.fps: 30.0

# IR projector baseline times fx (approx.)
Camera.bf: 45.350085

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
ThDepth: 40.0

# Depthmap values factor
DepthMapFactor: 1000.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500