Abstract: Assistive robotic devices can increase the independence of individuals with motor impairments. However, each person is unique in their level of injury, preferences, and skills, all of which moreover can change over time. Further, the amount of assistance required can vary throughout the day due to pain or fatigue, or over longer periods due to rehabilitation, debilitating conditions, or aging. Therefore, to become an effective team member, the assistive machine should be able to learn from and adapt to the human user. To do so, we need to be able to characterize the user's control commands to determine when and how autonomy should change to best assist the user. We perform a 20-person pilot study to establish a set of meaningful performance measures that can be used both to characterize the user's control signals and as cues for the autonomy to modify the level and amount of assistance. Our study includes 8 spinal-cord-injured and 12 uninjured individuals. The results unveil a set of objective, runtime-computable metrics that are correlated with user-perceived task difficulty, and thus could be used by an autonomy system when deciding whether assistance is required. The results further show that metrics that evaluate the user's interaction with the robotic device, the robot's execution, and the perceived task difficulty differ between the spinal-cord-injured and uninjured groups, and are affected by the type of control interface used. These results will be used to develop adaptable, user-centered, and individually customized shared-control algorithms.
Abstract: Teleoperation of physically assistive machines is usually facilitated by interfaces that are low-dimensional and have unique physical mechanisms for their activation. Accidental deviations from intended user input commands due to motor limitations can potentially affect user satisfaction and task performance. In this paper, we present an assistance system that reasons about a human's intended actions during robot teleoperation in order to provide appropriate corrections for unintended behavior. We model the human's physical interaction with a control interface during robot teleoperation using the framework of dynamic Bayesian networks, in which we explicitly distinguish between intended and measured physical actions. By reasoning over the unobserved intentions using model-based inference techniques, our assistive system provides customized corrections to a user's issued commands. We present results from (1) a simulation-based study in which we validate our algorithm and (2) a 10-person human subject study in which we evaluate the performance of the proposed assistance paradigms. Our results suggest that (a) the corrective assistance paradigm significantly reduced objective task effort as measured by task completion time and number of mode switches, and (b) the assistance paradigms helped to reduce cognitive workload and user frustration and to improve overall satisfaction.
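The intent-inference idea described above can be illustrated with a minimal sketch. Here the dynamic Bayesian network is reduced to a plain hidden Markov model whose hidden state is the user's intended command and whose observation is the measured interface command; a forward filter maintains a belief over the intent, and the corrected command is the maximum-a-posteriori intent. The action set and all probability matrices below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical low-dimensional command set (an illustrative assumption).
ACTIONS = ["left", "right", "forward"]

# P(intended_t | intended_{t-1}): users tend to persist in an intent.
# Values are made up for illustration.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

# P(measured | intended): motor limitations cause occasional accidental
# deviations from the intended command. Values are illustrative.
O = np.array([[0.70, 0.20, 0.10],
              [0.20, 0.70, 0.10],
              [0.15, 0.15, 0.70]])

def correct(measured_seq, prior=None):
    """Forward-filter the hidden intent and return the MAP corrected
    action for each measured command in the sequence."""
    belief = np.full(len(ACTIONS), 1 / len(ACTIONS)) if prior is None else prior
    corrected = []
    for z in measured_seq:
        belief = T.T @ belief              # predict: propagate intent dynamics
        belief *= O[:, ACTIONS.index(z)]   # update: weight by observation model
        belief /= belief.sum()             # normalize to a proper distribution
        corrected.append(ACTIONS[int(np.argmax(belief))])
    return corrected
```

With these numbers, a single accidental "right" issued amid a run of "left" commands is corrected back to "left": `correct(["left", "left", "right", "left"])` returns `["left", "left", "left", "left"]`, because the persistent-intent prior outweighs one deviating observation.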