Application of reinforcement learning in urban traffic signal control methods (强化学习在城市交通信号灯控制方法中的应用)
  • English title: A survey of the application of reinforcement learning in urban traffic signal control methods
  • Authors: 刘义; 何均宏
  • English authors: LIU Yi; HE Junhong
  • Keywords: traffic signal control; reinforcement learning; artificial intelligence; pass efficiency
  • Journal: 科技导报
  • Journal (English): Science & Technology Review
  • Journal code: KJDB
  • Affiliations: Traffic Police Bureau of Shenzhen Municipal Public Security Bureau (Shenzhen Traffic Police); Huawei Technologies Co., Ltd.
  • Publication date: 2019-03-28
  • Publisher: 科技导报 (Science & Technology Review)
  • Year: 2019
  • Issue: Vol. 37, Issue 6 (cumulative No. 564)
  • Language: Chinese
  • Pages: 86-92
  • Page count: 7
  • Record ID: KJDB201906013
  • CN: 11-1421/N
Abstract
Adaptive traffic signal control systems such as SCATS (Sydney Coordinated Adaptive Traffic System), SCOOT (Split Cycle Offset Optimisation Technique), and Smooth are used to control the traffic lights at urban road junctions. With the rapid growth of traffic flow in Shenzhen, the Shenzhen traffic police required real-time, distributed, and adaptive regulation on top of their self-developed Smooth signal control system, and jointly developed the AI signal control solution TrafficGo, which explores reinforcement learning based on deep neural networks. By learning various traffic loads online and performing real-time inference, it computes the signal-control time periods, phases, phase sequence, signal cycle, green split, and offset, further optimizing the control of traffic signals. This paper reviews the reinforcement learning model used in traffic signal control; a field evaluation shows that it achieves some improvement.
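As an illustration of the kind of approach the abstract describes, the sketch below shows a minimal deep Q-network agent for a single intersection in Python/PyTorch: the state is a vector of per-approach queue lengths and the action is the choice of the next green phase. The 8-approach/4-phase setting, the network sizes, and all names here are assumptions for demonstration; this is not the TrafficGo implementation, and it does not cover the full timing-plan outputs (cycle, split, offset) mentioned above.

    # Illustrative sketch only: a minimal deep Q-network agent for single-intersection
    # signal control. All sizes and names are assumptions, not the system in the paper.
    import random
    from collections import deque

    import torch
    import torch.nn as nn
    import torch.optim as optim

    N_APPROACHES = 8   # assumed: queue length observed for 8 lane groups
    N_PHASES = 4       # assumed: 4 candidate green phases

    class QNet(nn.Module):
        """Maps an observed traffic-load vector to an estimated return per phase choice."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_APPROACHES, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, N_PHASES),
            )

        def forward(self, x):
            return self.net(x)

    q_net = QNet()
    optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10000)   # experience replay buffer of (s, a, r, s') tuples
    gamma, epsilon = 0.95, 0.1     # discount factor, exploration rate

    def select_phase(queues):
        """Epsilon-greedy choice of the next green phase given current queue lengths."""
        if random.random() < epsilon:
            return random.randrange(N_PHASES)
        with torch.no_grad():
            q_values = q_net(torch.tensor(queues, dtype=torch.float32))
        return int(q_values.argmax())

    def train_step(batch_size=32):
        """One gradient step of Q-learning on a random minibatch from the replay buffer."""
        if len(replay) < batch_size:
            return
        batch = random.sample(replay, batch_size)
        states, actions, rewards, next_states = (
            torch.tensor(x, dtype=torch.float32) for x in zip(*batch)
        )
        q_sa = q_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = rewards + gamma * q_net(next_states).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In use, each signal decision would push a transition (state, action, reward, next state) into replay and call train_step(); a common reward choice (again an assumption, not taken from the paper) is the negative change in total queue length or cumulative waiting time between successive decisions.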
References
[1] Lu Huapu. Big data and its applications in urban intelligent transportation system[J]. Journal of Transportation Systems Engineering and Information Technology, 2015(10): 45-51. (in Chinese)
[2] Yang Wenchen, Zhang Lun, Zhu Feng. Multi-agent reinforcement learning based traffic signal control for integrated urban network: Survey of state of art[J]. Application Research of Computers, 2018, 35(6): 101-114. (in Chinese)
[3] Li L, Lv Y S, Wang F Y. Traffic signal timing via deep reinforcement learning[J]. Acta Automatica Sinica, 2016, 3(3): 247-254.
[4] Hamilton A, Waterson B, Cherrett T, et al. The evolution of urban traffic control: Changing policy and technology[J]. Transportation Planning & Technology, 2013, 36(1): 24-43.
[5] Zhang J, Wang F Y, Wang K, et al. Data-driven intelligent transportation systems: A survey[J]. IEEE Transactions on Intelligent Transportation Systems, 2011, 12(4): 1624-1639.
[6] Wu X, Liu H X. Using high-resolution event-based data for traffic modeling and control: An overview[J]. Transportation Research Part C, 2014, 42(2): 28-43.
[7] Yau K L A, Qadir J, Khoo H L, et al. A survey on reinforcement learning models and algorithms for traffic signal control[J]. ACM Computing Surveys, 2017, 50(3): 1-38.
[8] Azimirad E, Pariz N, Sistani M B N. A novel fuzzy model and control of single intersection at urban traffic network[J]. IEEE Systems Journal, 2010, 4(1): 107-111.
[9] Balaji P G, German X, Srinivasan D. Urban traffic signal control using reinforcement learning agents[J]. IET Intelligent Transport Systems, 2010, 4(3): 177-188.
[10] Sutton R S, Barto A G. Reinforcement learning: An introduction[J]. IEEE Transactions on Neural Networks, 1998, 9(5): 1054.
[11] Watkins C J C H, Dayan P. Q-learning[J]. Machine Learning, 1992, 8(3/4): 279-292.
[12] LeCun Y, Bengio Y, Hinton G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
[13] Mnih V, Kavukcuoglu K, Silver D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518(7540): 529-533.
[14] Genders W, Razavi S. Using a deep reinforcement learning agent for traffic signal control[J]. arXiv preprint, 2016, arXiv:1611.01142.
[15] Tran D, Toulis P, Airoldi E M. Stochastic gradient descent methods for estimation with large data sets[J]. arXiv preprint, 2015, arXiv:1509.06459.
[16] Lillicrap T P, Hunt J J, Pritzel A, et al. Continuous control with deep reinforcement learning[J]. arXiv preprint, 2016, arXiv:1509.02971.
