Cooperative Bus Holding and Stop-skipping: A Deep Reinforcement Learning Framework

Title: Cooperative Bus Holding and Stop-skipping: A Deep Reinforcement Learning Framework
Publication Type: Journal Article
Year of Publication: 2023
Authors: Joseph Rodriguez, Haris N. Koutsopoulos, Shenhao Wang, Jinhua Zhao
Journal: Transportation Research Part C: Emerging Technologies
Volume: 155
Date Published: 10/2023
Keywords: Cooperative Agents, Multi-agent Reinforcement Learning, Real-time Bus Control
Abstract

The bus control problem that combines holding and stop-skipping strategies is formulated as a multi-agent reinforcement learning (MARL) problem. Traditional MARL methods, designed for settings with joint action-taking, are incompatible with the asynchronous nature of at-stop control tasks. On the other hand, a fully decentralized approach leads to environment non-stationarity, since the state transition of an individual agent may be distorted by the actions of other agents. To address this, we propose a state and reward function design that increases the observability of the impact of agents' actions during training. An event-based mesoscopic simulation model is built to train the agents. We evaluate the proposed approach in a case study on a complex route from the Chicago transit network. The proposed method is compared to standard headway-based control and to a policy trained with MARL but without cooperative learning. The results show that the proposed method not only improves the level of service but is also more robust to operational uncertainties such as travel times and operator compliance with the recommended action.
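To make the asynchronous, event-based setting described in the abstract concrete, the sketch below shows a minimal event-driven bus simulation in which each bus agent acts only when it arrives at a stop, choosing between holding and skipping. This is an illustrative assumption-laden sketch, not the paper's implementation: the observation, action set, reward design, and the random placeholder policy are all hypothetical stand-ins for the trained cooperative policy and the mesoscopic simulator described in the article.

# Illustrative sketch only. The observation, action set, and policy below
# are hypothetical placeholders; the paper's actual state/reward design
# and trained MARL policy are not reproduced here.
import heapq
import random
from dataclasses import dataclass, field

# Hypothetical action set: hold for 0/30/60 seconds, or skip the stop.
ACTIONS = [("hold", 0.0), ("hold", 30.0), ("hold", 60.0), ("skip", 0.0)]

@dataclass(order=True)
class Event:
    """A bus arrival at a stop; the event queue is ordered by time."""
    time: float
    bus_id: int = field(compare=False)
    stop: int = field(compare=False)

def observation(arrivals, bus_id, stop, now):
    """Toy observation: headway to the most recent other bus at this stop."""
    prev = [t for (b, s, t) in arrivals if s == stop and b != bus_id]
    forward_headway = now - max(prev) if prev else 600.0
    return (forward_headway,)

def policy(obs):
    """Placeholder: a trained cooperative policy would map obs -> action."""
    return random.choice(ACTIONS)

def simulate(n_buses=3, n_stops=10, link_time=120.0, dwell=20.0, seed=0):
    random.seed(seed)
    # Buses dispatched every 300 s; agents act asynchronously at stop arrivals.
    events = [Event(time=i * 300.0, bus_id=i, stop=0) for i in range(n_buses)]
    heapq.heapify(events)
    arrivals = []  # log of (bus_id, stop, arrival_time)
    while events:
        ev = heapq.heappop(events)
        arrivals.append((ev.bus_id, ev.stop, ev.time))
        kind, hold = policy(observation(arrivals, ev.bus_id, ev.stop, ev.time))
        service = 0.0 if kind == "skip" else dwell + hold
        if ev.stop + 1 < n_stops:
            travel = random.gauss(link_time, 15.0)  # stochastic running time
            heapq.heappush(events, Event(ev.time + service + travel,
                                         ev.bus_id, ev.stop + 1))
    return arrivals

if __name__ == "__main__":
    log = simulate()
    print(f"simulated {len(log)} stop arrivals")

Because actions are triggered by arrival events rather than at synchronized time steps, agents never act jointly, which is the asynchrony the abstract contrasts with traditional joint-action MARL formulations.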

URL: https://mobility.mit.edu/sites/default/files/rodriguez_cooperative_bus.pdf
DOI: 10.1016/j.trc.2023.104308