Fast Generation Methods of Around View Monitoring Images for Automobiles Based on 3D Space Sphere
Cite this article: CAO Li-bo, XIA Jia-hao, LIAO Jia-cai, ZHANG Guan-jun, ZHANG Rui-feng. Fast Generation Methods of Around View Monitoring Images for Automobiles Based on 3D Space Sphere [J]. China Journal of Highway and Transport, 2020, 33(1): 153-162, 171.
Authors: CAO Li-bo  XIA Jia-hao  LIAO Jia-cai  ZHANG Guan-jun  ZHANG Rui-feng
Affiliations: 1. State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Hunan University, Changsha 410082, Hunan, China; 2. Institute at Shenzhen, Hunan University, Shenzhen 518063, Guangdong, China; 3. Graduate School at Shenzhen, Tsinghua University, Shenzhen 518000, Guangdong, China
Funding: Shenzhen Science and Technology Research and Development Fund (JCYJ 20160530192230252); Open Fund of the State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body (31715013); National Natural Science Foundation of China (51205118)
Abstract: To generate low-distortion around view monitoring (AVM) images efficiently and thereby guarantee the real-time performance of driver assistance systems that rely on this image information, an AVM image generation method optimized for computational efficiency is proposed. First, the effective imaging area of the fisheye camera is extracted accurately and automatically by approaching its boundary with multiple scanning lines and fitting a circle, and the parameters are optimized to improve the robustness of the algorithm. From the contour of the effective imaging area, its radius and center are computed, and a Cartesian coordinate system is reconstructed with the center of the circle as the origin and the radius as the unit length. Based on this coordinate system, the fisheye image is projected onto a 3D space sphere through a longitude-latitude mapping, and a virtual camera is established at the center of the sphere. Gradient descent is used to obtain the optimal rotation angle of the view frustum; the viewing direction is then reconstructed and the image is rendered, directly yielding the bird's-eye view in that direction, so that fisheye correction and the inverse perspective transformation are completed in a single transformation, which increases the generation rate of AVM images and reduces image distortion. Finally, the image positions are registered so that each virtual camera images directly at its specified location, and the four images are fused, reducing the information loss caused by stitching seams and ensuring that the generated AVM image preserves road information as far as possible. The results show that, on the same hardware platform, the rate at which AVM images are generated from calibration parameters nearly doubles with this method, and image distortion is visibly reduced. The algorithm increases the AVM generation rate, reduces the use of computing resources, and lowers image distortion, thereby improving the real-time performance and reliability of driver assistance systems based on AVM images.
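A minimal Python/NumPy sketch of the effective-area extraction step described in the abstract above: scanning lines walk inward from the image borders until the intensity rises above a threshold, and a circle is fitted to the collected boundary points by linear least squares. The threshold, the number of scanning lines and the helper names are illustrative assumptions, not the authors' implementation.

import numpy as np

def boundary_points(gray, threshold=10, n_lines=64):
    """Walk horizontal and vertical scanning lines inward from the image
    borders and record the first pixel brighter than `threshold` on each
    side; these points approximate the contour of the fisheye image circle."""
    h, w = gray.shape
    pts = []
    for y in np.linspace(0, h - 1, n_lines, dtype=int):   # horizontal scanning lines
        hits = np.where(gray[y] > threshold)[0]
        if hits.size:
            pts.append((hits[0], y))                       # left entry point
            pts.append((hits[-1], y))                      # right entry point
    for x in np.linspace(0, w - 1, n_lines, dtype=int):   # vertical scanning lines
        hits = np.where(gray[:, x] > threshold)[0]
        if hits.size:
            pts.append((x, hits[0]))                       # top entry point
            pts.append((x, hits[-1]))                      # bottom entry point
    return np.asarray(pts, dtype=float)

def fit_circle(pts):
    """Algebraic least-squares circle fit: solve x^2 + y^2 + D*x + E*y + F = 0."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Example usage on a grayscale fisheye frame `gray` (uint8 array):
# cx, cy, r = fit_circle(boundary_points(gray))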

Keywords: traffic engineering  around view monitoring  algorithm optimization  advanced driver assistance system  3D space spherical surface  imaging direction reconstruction  image fusion
Received: 2018-08-15
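The abstracts only state that gradient descent yields the optimal rotation angle of the view frustum; the objective being minimized is not given. The sketch below therefore assumes a hypothetical objective purely for illustration: unit rays toward known calibration points should project, under the candidate rotation, to prescribed pixel positions in the bird's-eye image, and the mean squared reprojection error is minimized by plain gradient descent with central-difference gradients. The function names, the pinhole focal length f and the step sizes are assumptions, not the paper's formulation.

import numpy as np

def rotation_matrix(rx, ry, rz):
    # Rotations about the x, y and z axes (radians), composed as Rz @ Ry @ Rx.
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def reprojection_error(angles, rays, targets, f=400.0):
    # Hypothetical objective: rotate the unit rays of the calibration points
    # into the virtual camera frame, project them with a pinhole model of
    # focal length f, and measure the mean squared distance to their desired
    # pixel positions in the bird's-eye image.
    cam = rays @ rotation_matrix(*angles).T
    uv = f * cam[:, :2] / cam[:, 2:3]
    return np.mean(np.sum((uv - targets) ** 2, axis=1))

def optimal_rotation(rays, targets, lr=1e-6, eps=1e-5, iters=5000):
    # Plain gradient descent with central-difference numerical gradients; the
    # learning rate is problem dependent and chosen here only for illustration.
    angles = np.zeros(3)
    for _ in range(iters):
        grad = np.zeros(3)
        for i in range(3):
            d = np.zeros(3)
            d[i] = eps
            grad[i] = (reprojection_error(angles + d, rays, targets)
                       - reprojection_error(angles - d, rays, targets)) / (2 * eps)
        angles -= lr * grad
    return angles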

Fast Generation Methods of Around View Monitoring Images for Automobiles Based on 3D Space Sphere
CAO Li-bo, XIA Jia-hao, LIAO Jia-cai, ZHANG Guan-jun, ZHANG Rui-feng. Fast Generation Methods of Around View Monitoring Images for Automobiles Based on 3D Space Sphere [J]. China Journal of Highway and Transport, 2020, 33(1): 153-162, 171.
Authors:CAO Li-bo  XIA Jia-hao  LIAO Jia-cai  ZHANG Guan-jun  ZHANG Rui-feng
Institution:1. State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Hunan University, Changsha 410082, Hunan, China;2. Institute at Shenzhen, Hunan University, Shenzhen 518063, Guangdong, China;3. Graduate School at Shenzhen, Tsinghua University, Shenzhen 518000, Guangdong, China
Abstract: To efficiently generate around view monitoring (AVM) images with low distortion, and thereby guarantee real-time performance in advanced driver assistance systems (ADASs) based on image information, this study proposed an AVM image generation method optimized for computational efficiency. First, a new method based on scanning lines and circle fitting was designed to extract the effective image area of a fisheye camera accurately and automatically, and its parameters were analyzed and optimized to guarantee robustness. The radius and center of the effective area were calculated from its contour, and a Cartesian coordinate system was constructed with the center of the circle as the origin and the radius as the unit length. Then, fisheye camera images were mapped onto a 3D space spherical surface by latitude-longitude projection based on this coordinate system, and a virtual camera was constructed at the center of the sphere. The optimal rotation angle of the view frustum was calculated using gradient descent. Based on this rotation angle, the imaging direction of the virtual camera was rebuilt to obtain an aerial view directly, so that the inverse perspective transformation and fisheye correction were combined into a single step to improve computational efficiency and reduce distortion. Finally, the image from each virtual camera was placed at its specified location, and the four images were fused to reduce the loss of image information caused by stitching seams. The results demonstrate that, on the same hardware platform, the speed of AVM image generation nearly doubles and image distortion is visibly reduced. The proposed algorithm can increase the generation rate of images in an AVM system based on camera calibration parameters and can reduce both computing resource usage and loss of image information. Thus, the algorithm can be used to improve the real-time performance and reliability of ADASs based on information obtained from AVM images.
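As one concrete reading of the sphere-projection and virtual-camera step, the following NumPy sketch precomputes a per-pixel look-up table: every bird's-eye pixel is turned into a ray from the sphere center, rotated by the optimized frustum rotation R, intersected with the unit sphere, and mapped back to fisheye pixel coordinates through a latitude-longitude convention, so that fisheye correction and inverse perspective transformation collapse into one remap. The particular convention (longitude = arctan2(X, Z), latitude = arcsin(Y), scaled by the effective-area radius r around the circle center (cx, cy)) and the pinhole parameters are assumptions for illustration, not the paper's exact formulation.

import numpy as np

def birdseye_lut(out_h, out_w, f, R, cx, cy, r):
    """Precompute, for every pixel of the virtual (bird's-eye) camera, the
    source pixel in the raw fisheye frame.  R is the optimized frustum
    rotation; (cx, cy, r) describe the effective fisheye image circle."""
    j, i = np.meshgrid(np.arange(out_w), np.arange(out_h))       # output pixel grid
    rays = np.stack([(j - out_w / 2) / f,                        # pinhole rays of the
                     (i - out_h / 2) / f,                        #   virtual camera
                     np.ones_like(j, dtype=float)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    rays = rays @ R.T                                            # rotate into the sphere frame
    X, Y, Z = rays[..., 0], rays[..., 1], rays[..., 2]
    lon = np.arctan2(X, Z)                                       # longitude on the unit sphere
    lat = np.arcsin(np.clip(Y, -1.0, 1.0))                       # latitude on the unit sphere
    map_x = cx + r * lon / (np.pi / 2)                           # latitude-longitude mapping back
    map_y = cy + r * lat / (np.pi / 2)                           #   to fisheye pixel coordinates
    return map_x, map_y

def render(fisheye, map_x, map_y):
    """Nearest-neighbour sampling of the fisheye frame through the LUT, so
    correction and inverse perspective happen in this single remap."""
    h, w = fisheye.shape[:2]
    xi = np.clip(np.round(map_x).astype(int), 0, w - 1)
    yi = np.clip(np.round(map_y).astype(int), 0, h - 1)
    return fisheye[yi, xi]

Because the table depends only on the calibration and the chosen rotation, it can be computed once offline and reused for every frame, which is where the speed-up of the one-step transformation comes from.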
Keywords:traffic engineering  around view monitoring  algorithm optimization  advanced driver assistance system  3D space spherical surface  imaging direction reconstruction  image fusion  
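The final fusion step can be read as weighted blending of the four registered bird's-eye views on a common canvas, so that overlap regions are mixed rather than cut along a seam. A minimal sketch under that reading follows; the weight maps (for example, feathered toward each view's seams) are left to the caller and are an assumption, not the paper's scheme.

import numpy as np

def fuse(views, weights):
    """Weighted fusion of four registered bird's-eye views.  `views` is a list
    of H x W x 3 float images already placed on the common AVM canvas, and
    `weights` a matching list of per-pixel weight maps (zero outside each
    view's footprint, tapering toward its seams).  Overlaps are blended
    instead of cut, reducing the information loss described in the abstract."""
    acc = np.zeros_like(views[0], dtype=float)
    wsum = np.zeros(views[0].shape[:2], dtype=float)
    for img, w in zip(views, weights):
        acc += img * w[..., None]          # accumulate weighted colour
        wsum += w
    wsum = np.maximum(wsum, 1e-6)          # avoid division by zero where no view maps
    return acc / wsum[..., None]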