Abstract: Control theory provides engineers with a multitude of tools to design controllers that manipulate the closed-loop behavior and stability of dynamical systems. These methods rely heavily on insights about the mathematical model governing the physical system. However, in complex systems, such as autonomous underwater vehicles performing the dual objective of path following and collision avoidance, decision making becomes non-trivial. We propose a solution using state-of-the-art Deep Reinforcement Learning (DRL) techniques to develop autonomous agents capable of achieving this hybrid objective without a priori knowledge about the goal or the environment. Our results demonstrate the viability of DRL for path following and collision avoidance, a step toward human-level decision making in autonomous vehicle systems operating within extreme obstacle configurations.
Abstract: Control theory provides engineers with a multitude of tools to design controllers that manipulate the closed-loop behavior and stability of dynamical systems. These methods rely heavily on insights about the mathematical model governing the physical system. However, if a system is highly complex, it might be infeasible to produce a reliable mathematical model of it, and without a model most of the theoretical tools for developing control laws break down. In these settings, machine learning controllers become attractive: controllers that can learn and adapt to complex systems, developing control laws where the engineer cannot. This article focuses on utilizing machine learning controllers in practical applications, specifically using deep reinforcement learning (DRL) in motion control systems for an autonomous underwater vehicle with six degrees of freedom. Two methods are considered: end-to-end learning, where the vehicle is left entirely alone to explore the solution space in its search for an optimal policy, and PID-assisted learning, where the DRL controller is essentially split into three separate parts, each controlling its own actuator.
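The PID-assisted split described above can be illustrated with a minimal sketch. This is not the article's implementation: the actuator names, gains, and the stand-in policy function are all hypothetical, and a trained neural network would replace the placeholder sub-policy. The sketch only shows the structural idea of three independent controllers, one per actuator, each combining a classical PID correction with a learned action.

```python
class PID:
    """Textbook discrete PID controller (gains and time step are illustrative)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the integral term and approximate the derivative.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def drl_subpolicy(observation):
    # Placeholder for a trained DRL sub-policy (in practice, a neural
    # network); here a bounded proportional response stands in for it.
    return max(-1.0, min(1.0, -0.5 * observation))


# One controller pair per actuator; these actuator names are assumptions,
# not taken from the article.
actuators = ["rudder", "elevator", "thruster"]
pids = {a: PID(kp=1.0, ki=0.1, kd=0.05, dt=0.1) for a in actuators}


def control_step(errors):
    """Each actuator gets its own command: PID correction plus DRL action."""
    return {a: pids[a].update(e) + drl_subpolicy(e) for a, e in errors.items()}


commands = control_step({"rudder": 0.2, "elevator": -0.1, "thruster": 0.05})
```

The point of the split is that each sub-controller sees only the error relevant to its actuator, which shrinks the search space each learned policy must explore compared with a single end-to-end policy commanding all actuators at once.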