Abstract: As autonomous machines such as robots and vehicles start performing tasks involving human users, ensuring a safe interaction between them becomes an important issue. Translating methods from human-robot interaction (HRI) studies to the interaction between humans and other highly complex machines (e.g. semi-autonomous vehicles) could help advance the use of those machines in scenarios requiring human interaction. One such method involves understanding human intentions and decision-making to estimate the human's present and near-future actions whilst interacting with a robot. This idea originates from the psychological concept of Theory of Mind, which has been broadly explored in robotics and, more recently, for autonomous and semi-autonomous vehicles. In this work, we explored how to predict human intentions before an action is performed by combining data from human motion, vehicle state and human inputs (e.g. steering wheel, pedals). A data-driven approach based on Recurrent Neural Network models was used to classify the current driving manoeuvre and to predict the future manoeuvre to be performed. A state-transition model with a fixed set of manoeuvres was used to label the data recorded during the trials for real-time applications. Models were trained and tested on drivers with different seat preferences, driving expertise and arm lengths; precision and recall above 95% for manoeuvre identification and 86% for manoeuvre prediction were achieved, with prediction time-windows of up to 1 second for both known and unknown test subjects. Compared to our previous results, performance improved, and manoeuvre prediction for unknown test subjects was possible without knowledge of the current manoeuvre.
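
To make the classification step described above concrete, the following is a minimal sketch of an RNN (LSTM) manoeuvre classifier operating on fused human-motion, vehicle-state and driver-input time series. The feature dimensions, window length, manoeuvre label set and framework (PyTorch) are illustrative assumptions, not the configuration used in this work.

```python
# Minimal sketch (PyTorch) of an RNN manoeuvre classifier over fused time-series
# features. All dimensions and the manoeuvre set below are illustrative assumptions.
import torch
import torch.nn as nn

# Hypothetical manoeuvre label set (the paper uses a fixed set defined by its
# state-transition model; these names are placeholders).
MANOEUVRES = ["lane_keep", "lane_change_left", "lane_change_right", "turn", "stop"]

class ManoeuvreLSTM(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features), where each frame concatenates e.g. upper-body
        # joint positions, vehicle speed/heading, steering-wheel angle and pedal values.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # logits over manoeuvre classes

# Example: a 1-second window sampled at 30 Hz with 20 fused features per frame.
model = ManoeuvreLSTM(n_features=20, n_classes=len(MANOEUVRES))
window = torch.randn(1, 30, 20)  # one synthetic input window
probs = torch.softmax(model(window), dim=-1)
print(dict(zip(MANOEUVRES, probs[0].tolist())))
```

A prediction variant of the same sketch would be trained with the label of the *next* manoeuvre in the state-transition sequence rather than the current one, so that the network maps the observed window to the manoeuvre expected within the prediction time-window.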