Abstract: AlphaZero is a self-play reinforcement learning algorithm that achieves superhuman play in chess, shogi, and Go via policy iteration. To be an effective policy improvement operator, AlphaZero's search requires accurate value estimates for the states appearing in its search tree. AlphaZero trains on self-play matches that begin from the initial state of a game and only samples actions over the first few moves, limiting its exploration of states deeper in the game tree. We introduce Go-Exploit, a novel search control strategy for AlphaZero. Go-Exploit samples the start state of its self-play trajectories from an archive of states of interest. Beginning self-play trajectories from varied starting states enables Go-Exploit to explore the game tree more effectively and to learn a value function that generalizes better. Producing shorter self-play trajectories allows Go-Exploit to train on more independent value targets, improving value training. Finally, the exploration inherent in Go-Exploit reduces its need for exploratory actions, enabling it to train under more exploitative policies. In the games of Connect Four and 9x9 Go, we show that Go-Exploit learns with greater sample efficiency than standard AlphaZero, resulting in stronger performance against reference opponents and in head-to-head play. We also compare Go-Exploit to KataGo, a more sample-efficient reimplementation of AlphaZero, and demonstrate that Go-Exploit has a more effective search control strategy. Furthermore, Go-Exploit's sample efficiency improves when KataGo's other innovations are incorporated.
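The key idea in Go-Exploit's search control is to draw self-play start states from an archive of states of interest rather than always starting at the game's initial position. The sketch below illustrates that idea only; the archive policy, the mixing probability `p_archive`, and the `env`/`agent` interfaces are assumptions made for illustration, not the paper's exact implementation.

```python
import random

class StateArchive:
    """Illustrative archive of 'states of interest' collected during self-play search.

    What is stored, how entries are pruned, and how start states are weighted
    are design choices of Go-Exploit; this class only sketches the mechanism.
    """
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.states = []

    def add(self, state):
        self.states.append(state)
        if len(self.states) > self.capacity:
            # Evict a random entry once the capacity is exceeded (assumed policy).
            self.states.pop(random.randrange(len(self.states)))

    def sample(self):
        return random.choice(self.states)

def generate_self_play_game(env, agent, archive, p_archive=0.8):
    """Run one self-play trajectory, possibly starting from an archived state.

    `env` and `agent` are assumed interfaces: `env` exposes initial_state,
    is_terminal, apply, and outcome; `agent.search` returns a move plus the
    states visited in its search tree. `p_archive` is an assumed knob.
    """
    if archive.states and random.random() < p_archive:
        state = archive.sample()          # start deeper in the game tree
    else:
        state = env.initial_state()       # standard AlphaZero-style start

    trajectory = []
    while not env.is_terminal(state):
        action, search_states = agent.search(state)
        for s in search_states:
            archive.add(s)                # grow the archive from the search tree
        trajectory.append((state, action))
        state = env.apply(state, action)
    return trajectory, env.outcome(state)
```

Starting from archived states tends to produce shorter trajectories, which matches the abstract's point about training on more independent value targets per unit of self-play.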
Abstract: This paper presents a Genetic Programming (GP) approach to solving multi-robot path planning (MRPP) problems in single-lane workspaces, specifically those easily mapped to graph representations. GP's versatility enables this approach to produce programs that optimize for multiple attributes rather than a single attribute such as path length or completeness. When optimizing for the number of time steps needed to solve individual MRPP problems, the GP-constructed programs outperformed complete MRPP algorithms, i.e., Push-Swap-Wait (PSW), by $54.1\%$. The GP-constructed programs also consistently outperformed PSW in solving problems that did not meet PSW's completeness conditions. Furthermore, the GP-constructed programs scaled better than PSW as the number of robots navigating within an MRPP environment increased. This research illustrates the benefits of using Genetic Programming to solve individual MRPP problems, including instances in which the number of robots exceeds the number of leaves in the tree-modeled workspace.
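Because the GP approach optimizes for multiple attributes of a candidate program's behaviour on MRPP instances rather than a single attribute, its fitness function aggregates several measurements per problem. The sketch below shows one plausible multi-attribute evaluation; the attribute set, weights, step limit, and helper names are hypothetical and not taken from the paper.

```python
MAX_STEPS = 500  # assumed per-instance step budget

def apply_moves(graph, positions, moves):
    """Assumed helper: apply robot -> target-node moves along existing edges.

    `graph` is an adjacency dict; collision handling is omitted for brevity.
    """
    new_positions = dict(positions)
    for robot, target in moves.items():
        if target in graph.get(positions[robot], ()):
            new_positions[robot] = target
    return new_positions

def evaluate_program(program, problems, weights=(1.0, 10.0)):
    """Hypothetical multi-attribute fitness for a GP-evolved MRPP controller.

    `program` is assumed to be a callable that, given the workspace graph and
    the robots' current and goal positions, returns the next joint move.
    Lower fitness is better: time steps are penalized, unsolved instances more so.
    """
    w_steps, w_unsolved = weights
    total_steps, unsolved = 0, 0
    for graph, starts, goals in problems:
        positions, steps = dict(starts), 0
        while positions != goals and steps < MAX_STEPS:
            moves = program(graph, positions, goals)   # evolved program picks moves
            positions = apply_moves(graph, positions, moves)
            steps += 1
        total_steps += steps
        unsolved += int(positions != goals)
    return w_steps * total_steps + w_unsolved * unsolved
```

A fitness of this shape lets the evolutionary search trade off solution time against completeness across a set of MRPP instances, which is the kind of multi-attribute optimization the abstract describes.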