In and out vision-based driver-interactive assistance system
Authors: H. C. Choi, S. Y. Kim, S. Y. Oh
Institution: 1. Department of Electrical Engineering, Pohang University of Science and Technology, Gyeongbuk, Korea; 2. Hyundai-Kia Automotive Group Namyang R&D Center, Gyeonggi, Korea
Abstract: The overall driving environment consists of the traffic environment, the vehicle, and the driver states (TVD). Advanced driver assistance systems (ADAS) must consider not only information on each of the TVD states but also their context. Recent research has focused on building more efficient and effective assistance systems by fusing all of the information from the TVD states. Following this trend, this paper focuses on decision-level fusion that estimates the danger level of a warning by using visual information on both the traffic environment and the driver state. The driver state consists of the gazing region and the facial feature points, which are obtained using the active appearance model (AAM). The traffic environment state consists of the time to collision (TTC), the time to lane crossing (TLC), and lane color information, which are obtained from the environment in front of the vehicle, i.e., the positions of lanes and other vehicles. Warnings against lane departure, collision, and driver inattention are generated by fusing this vision-based information from inside and outside the vehicle. The experimental results show that the proposed vision-based interactive driver assistance system suppresses most unnecessary warnings.
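The decision-level fusion described in the abstract can be pictured as a rule over the outside cues (TTC, TLC, lane type) gated by the inside cue (driver gaze and eye state), so that a warning is raised only when the environment signals danger and the driver does not appear to be attending to it. The following Python sketch is an illustrative assumption, not the authors' implementation; the thresholds, gaze-region labels, and function names are hypothetical.

```python
# Illustrative sketch of decision-level fusion of in-vehicle (driver) and
# out-of-vehicle (traffic) visual cues. Thresholds and labels are assumed.
from dataclasses import dataclass
from enum import Enum


class Alert(Enum):
    NONE = 0
    LANE_DEPARTURE = 1
    COLLISION = 2
    INATTENTION = 3


@dataclass
class TrafficState:
    ttc: float          # time to collision with the lead vehicle [s]
    tlc: float          # time to lane crossing [s]
    solid_lane: bool    # lane color/type indicates crossing is not allowed


@dataclass
class DriverState:
    gaze_region: str    # e.g. "front", "left", "right" (assumed labels)
    eyes_closed: bool   # derived from AAM facial feature points


def fuse(traffic: TrafficState, driver: DriverState,
         ttc_thr: float = 2.0, tlc_thr: float = 1.0) -> Alert:
    """Decision-level fusion: raise a warning only when the outside cue
    signals danger AND the inside cue shows the driver is not attending."""
    attentive_front = driver.gaze_region == "front" and not driver.eyes_closed

    # Collision risk: short TTC is dangerous, but the alarm is suppressed
    # when the driver is already looking ahead (reduces useless warnings).
    if traffic.ttc < ttc_thr and not attentive_front:
        return Alert.COLLISION

    # Lane-departure risk: short TLC over a solid lane marking, likewise
    # suppressed when the driver appears attentive (intentional maneuver).
    if traffic.tlc < tlc_thr and traffic.solid_lane and not attentive_front:
        return Alert.LANE_DEPARTURE

    # No external danger, but the driver's eyes are closed: inattention.
    if driver.eyes_closed:
        return Alert.INATTENTION

    return Alert.NONE
```

For example, `fuse(TrafficState(ttc=1.5, tlc=5.0, solid_lane=False), DriverState(gaze_region="front", eyes_closed=False))` returns `Alert.NONE` under these assumed thresholds, reflecting the paper's goal of withholding warnings the driver does not need.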
Keywords:
This article has been indexed by SpringerLink and other databases.