Abstract: Estimating ego-pose from cameras is an important problem in robotics, with applications ranging from mobile robotics to augmented reality. While state-of-the-art (SOTA) models are becoming increasingly accurate, they can still be unwieldy due to high computational costs. In this paper, we propose to solve the problem by using invertible neural networks (INNs) to find the mapping between the latent space of images and poses for a given scene. Our model achieves performance similar to the SOTA while being faster to train and requiring only offline rendering of low-resolution synthetic data. By using normalizing flows, the proposed method also provides uncertainty estimates for its outputs. We also demonstrate the efficiency of this method by deploying the model on a mobile robot.
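The abstract above describes learning an invertible mapping between image latents and poses and using normalizing flows for uncertainty. The snippet below is a minimal sketch of that general idea, not the authors' implementation: a RealNVP-style conditional coupling flow in PyTorch that maps a pose to a latent code given an image embedding, so that sampling the inverse yields pose hypotheses whose spread serves as an uncertainty estimate. All names and dimensions (ConditionalCoupling, PoseFlow, the 64-D embedding, the 6-D pose parameterization) are assumptions for illustration.

```python
# Sketch only: conditional RealNVP-style flow for pose regression with uncertainty.
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """Affine coupling layer whose scale/shift are conditioned on an image embedding."""
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, cond):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                                   # keep scales bounded
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)     # output, log|det J|

    def inverse(self, z, cond):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(torch.cat([z1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        return torch.cat([z1, (z2 - t) * torch.exp(-s)], dim=1)

class PoseFlow(nn.Module):
    """Stack of coupling layers mapping pose <-> latent, conditioned on image features."""
    def __init__(self, pose_dim=6, cond_dim=64, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            ConditionalCoupling(pose_dim, cond_dim) for _ in range(n_layers))
        self.pose_dim = pose_dim

    def forward(self, pose, cond):
        z, log_det = pose, 0.0
        for layer in self.layers:
            z = torch.flip(z, dims=[1])          # permute dimensions between couplings
            z, ld = layer(z, cond)
            log_det = log_det + ld
        return z, log_det

    @torch.no_grad()
    def sample(self, cond, n=32):
        """Draw pose hypotheses for one image; their spread reflects uncertainty."""
        cond = cond.expand(n, -1)
        z = torch.randn(n, self.pose_dim)
        for layer in reversed(self.layers):
            z = layer.inverse(z, cond)
            z = torch.flip(z, dims=[1])
        return z

# Training objective: maximize log p(pose | image) under a standard-normal prior.
flow = PoseFlow()
pose = torch.randn(8, 6)      # dummy batch of (x, y, z, roll, pitch, yaw)
feat = torch.randn(8, 64)     # dummy image embeddings (e.g., from a small CNN)
z, log_det = flow(pose, feat)
nll = (0.5 * (z ** 2).sum(dim=1) - log_det).mean()   # NLL up to an additive constant
print(float(nll))

samples = flow.sample(feat[:1], n=64)                # pose hypotheses for one image
print(samples.mean(0), samples.std(0))               # estimate + per-dimension uncertainty
```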
Abstract: Robots have become ubiquitous tools in various industries and households, highlighting the importance of human-robot interaction (HRI). This has increased the need for easy and accessible communication between humans and robots. Recent research has focused on the intersection of virtual-assistant technology, such as Amazon's Alexa, with robots and its effect on HRI. This paper presents the Virtual Assistant, Human, and Robots in the loop (VAHR) system, which uses bidirectional communication to control multiple robots through Alexa. VAHR's performance was evaluated in a human-subjects experiment comparing VAHR against a traditional keyboard-and-mouse interface on objective and subjective metrics. The results show that VAHR required 41% less Robot Attention Demand and ensured 91% more Fan-out time than the standard method. Additionally, VAHR led to a 62.5% improvement in multi-tasking, highlighting its potential for efficient human-robot interaction in physically and mentally demanding scenarios. However, the subjective metrics revealed that human operators still need to build confidence and trust in this new method of operation.
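The objective metrics named above, Robot Attention Demand (RAD) and Fan-out time, are commonly derived in the HRI literature from how long an operator must attend to a robot versus how long the robot runs unattended. The snippet below is purely illustrative and uses assumed textbook-style definitions (RAD as the interacted fraction of total time, unattended time standing in for fan-out time), not the paper's measurement protocol; the session durations are made up.

```python
# Illustrative only: assumed definitions of RAD and unattended ("fan-out") time,
# not taken from the VAHR paper.

def rad_and_free_time(interaction_s, neglect_s):
    """Return (RAD, fraction of the session the robot ran unattended)."""
    total = interaction_s + neglect_s
    return interaction_s / total, neglect_s / total

# Hypothetical session logs (seconds): keyboard/mouse vs. a voice interface.
sessions = {"keyboard+mouse": (300, 150), "voice": (175, 275)}
for name, (it, nt) in sessions.items():
    rad, free = rad_and_free_time(it, nt)
    print(f"{name:15s} RAD={rad:.2f}  unattended={free:.0%}")
```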
Abstract: Driving under varying road conditions is challenging, especially for autonomous vehicles that must adapt in real time to changes in the environment, e.g., rain or snow. Offline learning-based methods are difficult to apply in these time-varying settings, as the controller would have to be trained on datasets representing every condition it might encounter in the future. While online learning can adapt a model from real-time data, its convergence is often too slow for rapidly varying road conditions. We study this problem in autonomous racing, where driving at the limits of handling under varying road conditions is required to win races. We propose a computationally efficient approach that leverages an ensemble of Gaussian processes (GPs) to generalize and adapt pre-trained GPs to unseen conditions. Each GP is trained on driving data with a different road-surface friction. A time-varying convex combination of these GPs is used within a model predictive control (MPC) framework, where the model weights are adapted online to the current road condition based on real-time data. The predictive variance of the ensemble Gaussian process (EGP) model allows the controller to account for prediction uncertainty and enables safe autonomous driving. Extensive simulations of a full-scale autonomous car demonstrate the effectiveness of the proposed EGP-MPC method, providing good tracking performance under varying road conditions and generalizing to unknown maps.
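The abstract describes a time-varying convex combination of pre-trained GPs whose weights are adapted online from real-time data, with the ensemble variance informing the MPC. Below is a minimal sketch of such an ensemble under assumptions of my own: scikit-learn GPs stand in for the pre-trained friction-specific models, the softmax-over-residuals weight update is an illustrative heuristic rather than the paper's adaptation law, and the MPC itself is omitted.

```python
# Sketch only: ensemble of friction-specific GPs with online convex re-weighting.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def train_gp(X, y):
    """Fit one GP on driving data collected at a single road-surface friction."""
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

class EnsembleGP:
    def __init__(self, gps, temperature=5.0):
        self.gps = gps
        self.w = np.full(len(gps), 1.0 / len(gps))   # start with uniform weights
        self.temperature = temperature

    def update_weights(self, X_recent, y_recent):
        """Re-weight members by how well they explain the latest real-time data."""
        mse = np.array([np.mean((gp.predict(X_recent) - y_recent) ** 2)
                        for gp in self.gps])
        logits = -self.temperature * mse / (mse.min() + 1e-9)
        w = np.exp(logits - logits.max())
        self.w = w / w.sum()                          # convex combination

    def predict(self, X):
        """Ensemble mean and variance (law of total variance over the mixture)."""
        means, stds = zip(*[gp.predict(X, return_std=True) for gp in self.gps])
        means, vars_ = np.stack(means), np.stack(stds) ** 2
        mean = np.einsum('m,mn->n', self.w, means)
        var = np.einsum('m,mn->n', self.w, vars_ + means ** 2) - mean ** 2
        return mean, var

# Toy usage: three GPs trained on synthetic dynamics at different "frictions".
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))
gps = [train_gp(X, mu * X[:, 0] * X[:, 1] + 0.01 * rng.standard_normal(60))
       for mu in (0.3, 0.6, 0.9)]
egp = EnsembleGP(gps)
egp.update_weights(X[:10], 0.6 * X[:10, 0] * X[:10, 1])   # pretend current friction ~ 0.6
mean, var = egp.predict(X[:5])
print(egp.w.round(2), var.round(4))
```

In a full pipeline, the weighted mean would serve as the MPC's prediction model and the variance would tighten constraints or cost terms when the model is uncertain.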