Foundation items: Tsinghua University-Daimler Joint Research Project (20183910018); National Natural Science Foundation of China (52072214)
Received: 2021-01-12

Multiple Object Motion Trajectory Prediction for Vulnerable Road User
LI Ke-qiang,XIONG Hui,LIU Jin-xin.Multiple Object Motion Trajectory Prediction for Vulnerable Road User[J].China Journal of Highway and Transport,2022,35(1):298-315.
Authors:LI Ke-qiang  XIONG Hui  LIU Jin-xin
Institution:School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China
Abstract:To overcome the low prediction accuracy and unstable prediction performance of vulnerable road user (VRU) multi-object trajectory prediction in complex driving environments, this paper proposes a VRU-oriented multi-object trajectory prediction method, "VRU_TP" (vulnerable road user trajectory predictor), based on historical sequence information. First, to address the accuracy problem, the proposed framework fuses multi-dimensional motion-state features with multi-dimensional appearance-semantic features; the motion intention of each target, inferred from its appearance, is also considered, and the resulting multi-cue trajectory prediction factors serve as the encoding input for the historical trajectory of each VRU object. Second, to overcome the prediction instability of existing methods, an optimized gated recurrent unit (GRU), built on a sequence-to-sequence encoder-decoder, is proposed for VRU trajectory prediction, covering both network-structure design and network-optimization strategy. This unit integrates a custom plelu6 activation function that learns a mapping from historical sequence information to future trajectory information, improving the decoder's ability to decode future trajectories. Finally, to verify the effectiveness and practicality of the spatiotemporal multi-cue representation and the optimized gated recurrent network, experiments were conducted on the public MOT16 dataset and the proposed VRU-Track dataset, using evaluation metrics common in image space: the normalized average displacement error (NADE), the average overlap score (AOS), the average success rate (ASR), and the success plot. On the MOT16 validation split, compared with the baseline, NADE decreases by 19.4% and ASR increases by 22.6%.
Furthermore, on the VRU-Track test set, NADE decreases by 23.0%, AOS increases by 17.1%, and ASR increases by 16.5%. These comparative experiments indicate that the proposed VRU_TP method is effective: it reduces the position offset and increases the overlap between predicted and ground-truth values, improving trajectory prediction performance for all classes of VRU targets.
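The GRU-based sequence-to-sequence pipeline the abstract describes — encode per-frame multi-cue features of each VRU track, then decode future positions step by step — can be sketched in plain NumPy. Everything below is an illustrative assumption, not the paper's implementation: the feature layout, hidden size, output head, and in particular the form of plelu6 (assumed here to combine a PReLU-style negative slope with a ReLU6-style cap at 6) are not given in the abstract.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def plelu6(x, alpha=0.1):
    # Hypothesized form of the paper's custom activation: parametric
    # negative slope (PReLU-like) with a ReLU6-like upper cap at 6.
    return np.minimum(np.where(x > 0, x, alpha * x), 6.0)

class GRUCell:
    """Minimal standard GRU cell (not the paper's exact optimized variant)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_dim)
        self.hidden_dim = hidden_dim
        # one weight matrix and bias per gate: update (z), reset (r), candidate (h)
        self.W = {g: rng.uniform(-s, s, (hidden_dim, input_dim + hidden_dim))
                  for g in ("z", "r", "h")}
        self.b = {g: np.zeros(hidden_dim) for g in ("z", "r", "h")}

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.W["z"] @ xh + self.b["z"])        # update gate
        r = sigmoid(self.W["r"] @ xh + self.b["r"])        # reset gate
        xrh = np.concatenate([x, r * h])
        h_cand = np.tanh(self.W["h"] @ xrh + self.b["h"])  # candidate state
        return (1 - z) * h + z * h_cand

class Seq2SeqPredictor:
    """Encoder-decoder rollout over image-space positions."""
    def __init__(self, feat_dim, hidden_dim):
        self.enc = GRUCell(feat_dim, hidden_dim, seed=0)
        self.dec = GRUCell(2, hidden_dim, seed=1)
        rng = np.random.default_rng(2)
        self.V = rng.uniform(-0.1, 0.1, (hidden_dim, hidden_dim))
        self.W_out = rng.uniform(-0.1, 0.1, (2, hidden_dim))

    def predict(self, history, horizon):
        # history: (T, feat_dim) per-frame cue vectors (motion state,
        # appearance semantics, ...); first two columns assumed to be (x, y)
        h = np.zeros(self.enc.hidden_dim)
        for x in history:                      # encode the observed track
            h = self.enc.step(x, h)
        pos = history[-1, :2]
        preds = []
        for _ in range(horizon):               # decode future positions
            h = self.dec.step(pos, h)
            pos = pos + self.W_out @ plelu6(self.V @ h)  # predicted offset
            preds.append(pos)
        return np.array(preds)
```

With random weights the rollout is meaningless numerically; the point is the data flow: one encoder pass over the multi-cue history, then an autoregressive decoder whose own output feeds the next step.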
Keywords:automotive engineering  motion trajectory prediction  spatial-temporal multiple cues  vulnerable road user  self-driving vehicle  recurrent neural network  encoder-decoder  
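The image-space metrics named in the abstract can be sketched as follows. The abstract does not spell out the NADE normalizer or the exact AOS/ASR definitions, so a per-target image-space scale (e.g. a bounding-box diagonal) and IoU-based overlap are assumed here.

```python
import numpy as np

def ade(pred, gt):
    """Average displacement error: mean Euclidean distance per predicted point."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def nade(pred, gt, scale):
    """Normalized ADE; the normalizer `scale` is an assumption (e.g. box size)."""
    return ade(pred, gt) / scale

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def aos(pred_boxes, gt_boxes):
    """Average overlap score over a track, assumed to be mean IoU."""
    return float(np.mean([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]))

def asr(pred_boxes, gt_boxes, thresh=0.5):
    """Average success rate: fraction of frames with overlap above a threshold.
    Sweeping `thresh` from 0 to 1 yields the success plot."""
    return float(np.mean([iou(p, g) > thresh
                          for p, g in zip(pred_boxes, gt_boxes)]))
```

Lower NADE means predictions sit closer to the ground truth in image space; higher AOS/ASR mean predicted boxes overlap the true boxes more, which matches the direction of the improvements reported above.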