Distributed scheduling for automated guided vehicles by reinforcement learning.

Author(s)
Unoki, T. & Suetake, N.
Year
1997
Abstract

This paper proposes an autonomous vehicle scheduling scheme for large public physical distribution terminals intended as next-generation distribution bases. The scheme uses a learning automaton for vehicle scheduling based on the Contract Net Protocol, so that useful emergent behaviour arises from the local decision-making of each agent in the system. The state of the automaton is updated at each instant on the basis of new information, which includes the estimated arrival times of vehicles; each agent estimates these arrival times by means of a Bayesian learning process. The scheme was evaluated in various simulated environments using traffic simulation. The results show the advantage of the scheme when, rather than all agents being given the same criteria from the top down, each agent voluntarily generates its criteria through interactions with the environment and plays an individual role in the system.
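The paper itself is not reproduced here; purely as an illustration of the kind of mechanism the abstract describes, the sketch below combines a linear reward-inaction learning automaton choosing among bidding strategies for Contract Net task announcements with a conjugate normal-normal Bayesian update of an arrival-time estimate. All class names, parameters, and the specific update rules are assumptions for illustration and are not taken from the paper.

```python
import random


class LearningAutomaton:
    """Linear reward-inaction (L_RI) automaton over a small set of bidding
    strategies (illustrative only; the paper's automaton may differ)."""

    def __init__(self, actions, learning_rate=0.1):
        self.actions = list(actions)
        self.lr = learning_rate
        self.p = [1.0 / len(actions)] * len(actions)  # action probabilities

    def choose(self):
        # Sample an action index according to the probability vector.
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def reward(self, i):
        # Reinforce action i; probabilities are left unchanged on penalty (L_RI).
        for j in range(len(self.p)):
            if j == i:
                self.p[j] += self.lr * (1.0 - self.p[j])
            else:
                self.p[j] -= self.lr * self.p[j]


class ArrivalTimeEstimator:
    """Normal-normal conjugate Bayesian update of a vehicle's expected arrival
    time (a hypothetical stand-in for the paper's Bayesian learning process)."""

    def __init__(self, prior_mean, prior_var, obs_var):
        self.mean, self.var, self.obs_var = prior_mean, prior_var, obs_var

    def update(self, observed_arrival):
        k = self.var / (self.var + self.obs_var)  # posterior gain
        self.mean += k * (observed_arrival - self.mean)
        self.var *= (1.0 - k)
        return self.mean


if __name__ == "__main__":
    automaton = LearningAutomaton(["bid_low", "bid_mid", "bid_high"])
    estimator = ArrivalTimeEstimator(prior_mean=120.0, prior_var=900.0, obs_var=100.0)
    for observed in (110.0, 130.0, 125.0):  # simulated arrival times in seconds
        estimate = estimator.update(observed)
        action = automaton.choose()
        # Reward the chosen bid strategy if the estimate was close to the observation.
        if abs(estimate - observed) < 15.0:
            automaton.reward(action)
    print(automaton.p, round(estimator.mean, 1))
```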

Publication

Library number
C 13826 (In: C 13302 CD-ROM) /72 / IRRD 492243
Source

In: Mobility for everybody : proceedings of the fourth world congress on Intelligent Transport Systems ITS, Berlin, 21-24 October 1997, Paper No. 3004, 8 p., 9 ref.

Our collection

This publication is one of our other publications and part of our extensive collection of road safety literature, which also includes the SWOV publications.