Abstract: Multi-task missions for unmanned aerial vehicles (UAVs) involving inspection and landing tasks are challenging for novice pilots due to the difficulties associated with depth perception and the control interface. We propose a shared autonomy system, alongside supplementary information displays, to assist pilots in successfully completing multi-task missions without any pilot training. Our approach comprises three modules: (1) a perception module that encodes visual information into a latent representation, (2) a policy module that augments the pilot's actions, and (3) an information augmentation module that provides additional information to the pilot. The policy module is trained in simulation with simulated users and transferred to the real world without modification in a user study (n = 29), alongside supplementary information schemes including learnt red/green light feedback cues and an augmented reality display. The pilot's intent is unknown to the policy module and is inferred from the pilot's input and the UAV's state. The assistant increased task success rates for the landing and inspection tasks from 16.67% and 54.29% to 95.59% and 96.22%, respectively. With the assistant, inexperienced pilots achieved performance similar to that of experienced pilots. Red/green light feedback cues reduced the time required for the inspection task by 19.53% and the trajectory length by 17.86%; participants rated this as their preferred condition, citing its intuitive interface and the reassurance it provided. This work demonstrates that simple user models can be used to train shared autonomy systems in simulation that transfer to physical tasks, estimating user intent and providing effective assistance and information to the pilot.
Abstract: Novice pilots find it difficult to operate and land unmanned aerial vehicles (UAVs) due to the complex UAV dynamics, challenges in depth perception, lack of expertise with the control interface, and additional disturbances from the ground effect. We therefore propose a shared autonomy approach to assist pilots in safely landing a UAV under conditions where depth perception is difficult and safe landing zones are limited. Our approach comprises two modules: a perception module that encodes information from two RGB-D cameras into a compressed latent representation, and a policy module, trained with the reinforcement learning algorithm TD3, that discerns the pilot's intent and provides control inputs that augment the user's input to land the UAV safely. The policy module is trained in simulation using a population of simulated users sampled from a parametric model with four parameters, modelling a pilot's tendency to conform to the assistant, proficiency, aggressiveness, and speed. We conduct a user study (n = 28) in which human participants were tasked with landing a physical UAV on one of several platforms under challenging viewing conditions. The assistant, trained with only simulated user data, improved the task success rate from 51.4% to 98.2% despite being unaware of the human participants' goal or the structure of the environment a priori. With the proposed assistant, participants, regardless of prior piloting experience, performed with a proficiency greater than that of the most experienced unassisted participants.
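The four-parameter simulated-user model described in the abstract above can be sketched as follows. The parameter ranges, noise model, and blending rule here are illustrative assumptions for a one-dimensional action, not the paper's exact formulation:

```python
import random
from dataclasses import dataclass


@dataclass
class SimulatedUser:
    conformance: float    # tendency to accept the assistant's corrections (0..1)
    proficiency: float    # accuracy of the user's own action (0..1)
    aggressiveness: float # magnitude scaling of commanded actions (0..1)
    speed: float          # preferred speed fraction (0..1)

    def act(self, ideal_action: float, assistant_action: float) -> float:
        """Produce a noisy pilot action that partially conforms to the assistant."""
        noise = random.gauss(0.0, 1.0 - self.proficiency)  # less skill -> more noise
        own = self.aggressiveness * self.speed * (ideal_action + noise)
        # Linear blend: conformance weights the assistant's suggestion.
        return (1.0 - self.conformance) * own + self.conformance * assistant_action


def sample_user(low: float = 0.2, high: float = 1.0) -> SimulatedUser:
    """Sample one user from assumed uniform parameter ranges."""
    return SimulatedUser(*(random.uniform(low, high) for _ in range(4)))
```

Training against a population drawn by `sample_user` exposes the policy to pilots ranging from compliant and skilled to erratic and stubborn, which is what lets the learned assistant generalize to real participants.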
Abstract: Unmanned aerial vehicles (UAVs) are often used for navigating dangerous terrain; however, they are difficult to pilot. Due to complex input-output mappings, limited perception, complex system dynamics, and the need to maintain a safe operating distance, novice pilots experience difficulties performing safe landings in obstacle-filled environments. Previous work has proposed autonomous landing methods, but these approaches do not adapt to the pilot's control inputs and require the pilot's goal to be known a priori. In this work, we propose a shared autonomy approach that assists novice pilots in performing safe landings on one of several elevated platforms at a proficiency equal to or greater than that of experienced pilots. Our approach consists of two modules: a perceptual module and a policy module. The perceptual module compresses high-dimensional RGB-D images into a latent vector, trained with a cross-modal variational auto-encoder. The policy module provides assistive control inputs and is trained with the reinforcement learning algorithm TD3. We conduct a user study (n = 33) in which participants land a simulated drone on a specified platform out of five candidate platforms, with and without the assistant. Despite the goal platform not being known to the assistant, assisted participants of all skill levels were able to outperform experienced unassisted participants.
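The shared-control loop common to the abstracts above — encode observations to a latent vector, then let a learned policy augment the pilot's command — can be sketched as below. The blend weight `alpha`, the clipping range, and the policy signature are assumptions for illustration, not the trained networks:

```python
def assistive_command(pilot_input, latent, policy, alpha=0.5):
    """Blend the pilot's input with the policy's correction, per action dimension.

    pilot_input: list of floats in [-1, 1] from the pilot's controller
    latent:      encoded observation vector from the perception module
    policy:      callable (latent, pilot_input) -> correction list
    alpha:       assumed blend weight between pilot and assistant
    """
    correction = policy(latent, pilot_input)
    # Convex combination, clipped to the actuator range [-1, 1].
    return [max(-1.0, min(1.0, (1.0 - alpha) * p + alpha * c))
            for p, c in zip(pilot_input, correction)]
```

Because the policy sees both the latent state and the pilot's raw input, it can infer intent from the pilot's commands rather than requiring the goal a priori, which is the key property these systems rely on.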
Abstract: This paper is concerned with the deployment of multiple mobile robots to autonomously cover a region Q. The region to be covered is described by a density function that may not be known a priori. We pose the coverage problem as an optimization problem over a space of functions on Q. In particular, we examine an L2-distance based coverage algorithm and derive adaptive control laws for it. We also propose a modified adaptive control law incorporating consensus for better parameter convergence. We implement the algorithms on real differential-drive robots, with the density function both simulated and realized physically using light sources. We also experimentally compare the L2-distance based method with the locational optimization method.
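For context on the comparison mentioned above, the locational optimization method minimizes the standard coverage cost from the coverage-control literature (the paper's exact notation may differ):

\[
\mathcal{H}(p_1,\dots,p_n) \;=\; \sum_{i=1}^{n} \int_{V_i} \lVert q - p_i \rVert^2 \,\phi(q)\, dq,
\]

where $p_i$ is the position of robot $i$, $V_i$ is its Voronoi cell within $Q$, and $\phi$ is the density function. Gradient descent on $\mathcal{H}$ drives each robot toward the centroid of its Voronoi cell; the L2-distance approach instead measures coverage by a function-space distance between the density and a representation generated by the robots.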