Coordinated Variable Speed Limit Control for Freeway Based on Multi-Agent Deep Reinforcement Learning
Citation: YU Rongjie, XU Ling, ZHANG Ruici. Coordinated Variable Speed Limit Control for Freeway Based on Multi-Agent Deep Reinforcement Learning[J]. Journal of Tongji University (Natural Science), 2024, 52(7): 1089-1098
Authors: YU Rongjie  XU Ling  ZHANG Ruici
Affiliation: 1. Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, Shanghai 201804, China; 2. Zhejiang Hangshaoyong Expressway Co., Ltd., Hangzhou 310000, China
Funding: Science and Technology Program of the Zhejiang Provincial Department of Transport (2021047)
Abstract: To meet the need for coordinated variable speed limit (VSL) control across multiple freeway segments, and to address the difficulty of efficient training and optimization in a high-dimensional parameter space, a freeway coordinated VSL control method based on the multi-agent deep deterministic policy gradient (MADDPG) algorithm is proposed. Unlike the single-agent deep deterministic policy gradient (DDPG) algorithm used in existing studies, MADDPG abstracts each control unit as an agent with an Actor-Critic reinforcement learning architecture and shares the state and action information of all agents during training, which enables each agent to infer the control strategies of the other agents and thus realizes coordinated multi-segment control. The proposed control method is validated in a typical freeway congestion scenario built with the open-source simulation software SUMO. Experimental results show that the proposed MADDPG algorithm reduces congestion duration by 69.23% and the standard deviation of segment speeds by 47.96%, significantly improving traffic efficiency and safety. Compared with the single-agent DDPG algorithm, MADDPG saves 50% of the training time and increases the cumulative reward by 7.44%, indicating that the multi-agent algorithm improves the optimization efficiency of the coordinated control strategy. Furthermore, to verify the necessity of information sharing among agents, MADDPG is compared with the independent multi-agent DDPG (IDDPG) algorithm: relative to IDDPG, MADDPG improves the reductions in congestion duration and mean speed standard deviation by 11.65% and 19.00%, respectively.

Keywords: traffic engineering  coordinated variable speed limit control  multi-agent deep reinforcement learning  traffic congestion  freeway  traffic efficiency  traffic safety
Received: 2022-10-18

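The centralized-training, decentralized-execution structure the abstract describes can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the agent count, observation dimension, and linear actor/critic are hypothetical placeholders. The key point it shows is that each agent's actor acts on its own segment's observation alone, while each agent's critic scores the joint observations and actions of all agents, which is the shared information that lets an agent account for the others' policies during training.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3   # hypothetical: one agent per controlled freeway segment
OBS_DIM = 4    # hypothetical per-segment observation (e.g. speed, density)
ACT_DIM = 1    # continuous speed-limit adjustment

class Agent:
    """One VSL control unit with the Actor-Critic architecture described above."""
    def __init__(self):
        # Decentralized actor: maps the agent's OWN observation to an action.
        self.actor_w = rng.normal(size=(OBS_DIM, ACT_DIM))
        # Centralized critic: scores the JOINT observations and actions of all
        # agents -- the information sharing that distinguishes MADDPG from
        # independent per-agent DDPG (IDDPG).
        joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.critic_w = rng.normal(size=(joint_dim, 1))

    def act(self, obs):
        # Execution uses local information only; tanh bounds the action.
        return np.tanh(obs @ self.actor_w)

    def q_value(self, all_obs, all_acts):
        # Training uses global information from every agent.
        joint = np.concatenate([*all_obs, *all_acts])
        return (joint @ self.critic_w).item()

agents = [Agent() for _ in range(N_AGENTS)]
observations = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
actions = [ag.act(ob) for ag, ob in zip(agents, observations)]
q = agents[0].q_value(observations, actions)  # one agent's centralized Q estimate
```

In a full MADDPG training loop the critics would be regressed toward temporal-difference targets and the actors updated along the critics' gradients; the sketch only fixes the interface that makes multi-segment coordination possible.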