C.X. Chen, W.Y. Chen, Z.Y. Chen


Pages: 41-54

Abstract
Bus holding is a control strategy commonly used in transit operations to improve service reliability; its implementation requires dynamic decision-making in an interactive, stochastic system environment. This paper presents a distributed cooperative holding control formulation based on a multi-agent reinforcement learning (MARL) framework to optimize the real-time operations of a public transport system. Agent technology is used to model bus operations along a transit corridor. In the MARL framework, each bus agent is modeled as a coordinated reinforcement learner that makes a holding decision when it dwells at a stop, for which the state, actions, reward function, and operational constraints are defined. Without a coordination mechanism, multiple agents may break ties between equally optimal joint actions in different ways, and the resulting joint action may be suboptimal. Coordination Graphs (CGs) are therefore applied to cooperative holding action selection: the holding action of a bus agent is assumed to depend on those of its backward and forward neighbors, and the global payoff function is decomposed edge-wise into a linear combination of local payoff functions. A Variable Elimination (VE) algorithm is then used to compute the best cooperative joint holding action, exploiting the sparse structure of the graph. Simulation results demonstrate the advantages of the MARL framework for distributed holding control.
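The abstract does not give the paper's state, reward, or payoff definitions, but the structure it describes (buses on a corridor forming a chain-shaped coordination graph, a global payoff that sums edge-wise local payoffs, and variable elimination over that sparse graph) can be sketched concretely. The following Python sketch is illustrative only and is not the authors' implementation; the action set `ACTIONS`, the function `eliminate_chain`, the `edge_q` payoff tables, and the toy payoff values are all hypothetical. On a chain, variable elimination reduces to a backward/forward dynamic-programming pass.

```python
# Illustrative sketch (not the paper's code): variable elimination on a
# chain coordination graph. Buses i = 0..n-1 on a corridor interact only
# with their forward/backward neighbors, so the global payoff decomposes as
#   Q(a_0, ..., a_{n-1}) = sum_i Q_i(a_i, a_{i+1}).

ACTIONS = [0, 30, 60]  # hypothetical holding times in seconds

def eliminate_chain(edge_q):
    """edge_q[i][(a_i, a_j)]: local payoff for edge (i, i+1).
    Returns (best global payoff, maximizing joint holding action)."""
    n = len(edge_q) + 1  # number of bus agents
    # Backward pass: msg[i][a] = best payoff of agents i+1..n-1 given a_i = a
    msg = [dict() for _ in range(n)]
    best_next = [dict() for _ in range(n - 1)]
    for a in ACTIONS:
        msg[n - 1][a] = 0.0
    for i in range(n - 2, -1, -1):
        for a in ACTIONS:
            vals = [(edge_q[i][(a, b)] + msg[i + 1][b], b) for b in ACTIONS]
            msg[i][a], best_next[i][a] = max(vals)
    # Forward pass: recover the maximizing joint action agent by agent
    a0 = max(ACTIONS, key=lambda a: msg[0][a])
    joint = [a0]
    for i in range(n - 1):
        joint.append(best_next[i][joint[-1]])
    return msg[0][a0], joint

# Toy usage: three buses, two edges, with made-up local payoffs that
# penalize long holds and large differences between adjacent holds.
edge_q = [
    {(a, b): -abs(a - b) / 60.0 - (a + b) / 120.0
     for a in ACTIONS for b in ACTIONS}
    for _ in range(2)
]
payoff, joint_action = eliminate_chain(edge_q)
print(payoff, joint_action)
```

Because the corridor graph is a chain, each elimination step involves only one neighbor, so the joint action over all buses is found in time linear in the number of agents rather than exponential, which is the sparsity advantage the abstract refers to.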

Keywords: transit service reliability; bus holding; reinforcement learning; multi-agent system; Q-learning; coordination graphs

