A Map Aided Visual-inertial Fusion Localization Method for Autonomous Vehicles
Citation: CHENG Jun, ZHANG Li-yan, CHEN Qi-hong. A Map Aided Visual-inertial Fusion Localization Method for Autonomous Vehicles[J]. Transportation Systems Engineering and Information, 2022, 22(2): 117-126.
Authors: CHENG Jun  ZHANG Li-yan  CHEN Qi-hong
Institution: School of Automation, Wuhan University of Technology, Wuhan 430070, China
Abstract: Simultaneous Localization and Mapping (SLAM) based on visual sensors is widely used for localization in the autonomous driving field. Traditional methods use the onboard camera to characterize the vehicle's surroundings while estimating its own position, but localization accuracy and robustness degrade when the vehicle moves fast. To address this problem, this paper proposes a map-aided visual-inertial fusion localization method for autonomous driving. The method extends the ORB-SLAM2 (Oriented FAST and Rotated BRIEF SLAM2) framework with a map-saving function and splits mapping and localization into two independent modules: the vehicle first builds and saves a map of visual features at a low driving speed, and on a second run the onboard computer loads the previously saved map to achieve accurate and stable localization. Because a graph-based optimization algorithm is used to fuse Inertial Measurement Unit (IMU) information during the mapping stage, the errors of the visual map are effectively corrected. The proposed method is validated on scenes from the KITTI dataset and in real-world scenes. Experimental results show that at driving speeds of 4, 8, and 16 m/s the positioning errors of the proposed method are 2.59, 2.61, and 2.73 m, and the frame lost rate (FLR) and path lost rate (PLR) are 3.76% and 1.38%, 3.89% and 1.69%, and 4.27% and 1.84%, respectively. Compared with the original ORB-SLAM2 framework, both the localization accuracy and the robustness of the system are improved.

Keywords: intelligent transportation  map aided localization  visual-inertial fusion  autonomous vehicles
Received: 2021-10-26

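The abstract reports positioning error alongside two robustness metrics, the frame lost rate (FLR) and path lost rate (PLR). A minimal sketch of how such metrics can be computed from a tracking log is shown below; this is illustrative only, not the authors' code, and the function names and exact metric definitions (RMSE over paired poses, lost fraction of frames and of path length) are assumptions:

```python
import math

def frame_lost_rate(tracked_flags):
    """Fraction of camera frames on which visual tracking failed."""
    lost = sum(1 for ok in tracked_flags if not ok)
    return lost / len(tracked_flags)

def path_lost_rate(est_path_len, ref_path_len):
    """Fraction of the reference path length not covered by the estimate."""
    return max(0.0, (ref_path_len - est_path_len) / ref_path_len)

def positioning_rmse(est_xy, ref_xy):
    """Root-mean-square position error (metres) over paired 2-D poses."""
    assert len(est_xy) == len(ref_xy) and est_xy
    sq = [(ex - rx) ** 2 + (ey - ry) ** 2
          for (ex, ey), (rx, ry) in zip(est_xy, ref_xy)]
    return math.sqrt(sum(sq) / len(sq))
```

For example, a run in which 4 of 100 frames lose tracking gives an FLR of 4%, matching the scale of the percentages reported in the abstract.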
This article is indexed in Wanfang Data and other databases.