Asynchronous n-step Q-learning adaptive traffic signal control
Authors: Wade Genders, Saiedeh Razavi
Affiliation: Department of Civil Engineering, McMaster University, Hamilton, Ontario, Canada
Abstract: Ensuring transportation systems are efficient is a priority for modern society. Intersection traffic signal control can be modeled as a sequential decision-making problem. To learn how to make the best decisions, we apply reinforcement learning techniques with function approximation to train an adaptive traffic signal controller. We use the asynchronous n-step Q-learning algorithm with a two-hidden-layer artificial neural network as our reinforcement learning agent. A dynamic, stochastic rush hour simulation is developed to test the agent’s performance. Compared against traditional loop-detector actuated and linear Q-learning traffic signal control methods, our reinforcement learning model develops a superior control policy, reducing mean total delay by up to 40% without compromising throughput. However, we find our proposed model slightly increases delay for left-turning vehicles compared to the actuated controller, as a consequence of the reward function, highlighting the need for an appropriate reward function which truly develops the desired policy.
Keywords: Artificial intelligence; intelligent transportation systems; neural networks; reinforcement learning; traffic signal controllers
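
The abstract names the learning algorithm (asynchronous n-step Q-learning) and the function approximator (a feed-forward network with two hidden layers). The sketch below illustrates only the n-step return target and such a Q-network in PyTorch; the layer widths, state dimension, action count, discount factor, and example rewards are illustrative assumptions rather than the paper's hyperparameters, and the asynchronous multi-thread gradient updates are omitted.

```python
# Minimal sketch: two-hidden-layer Q-network and n-step Q-learning targets.
# All sizes and values below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Feed-forward Q-value approximator with two hidden layers."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def n_step_targets(rewards, bootstrap_value, gamma=0.99):
    """Compute R_t = r_t + gamma*r_{t+1} + ... + gamma^n * max_a Q(s_{t+n}, a)
    for each step of an n-step rollout, accumulating backwards from the
    bootstrapped value of the final state."""
    targets = []
    R = bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R
        targets.append(R)
    return list(reversed(targets))


# Example: a 3-step rollout with hypothetical rewards (e.g. negative delay).
q_net = QNetwork(state_dim=8, n_actions=4)
s_next = torch.randn(1, 8)                  # state reached after the rollout
with torch.no_grad():
    bootstrap = q_net(s_next).max().item()  # max_a Q(s_{t+n}, a)
print(n_step_targets([-1.0, -0.5, -0.2], bootstrap))
```

In the full asynchronous scheme, several such actor-learners would collect rollouts in parallel copies of the simulation and apply their gradients to shared network parameters; the target computation above is the per-rollout building block.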