Abstract:Interactive exploration of unknown physical properties of objects such as stiffness, mass, center of mass, friction coefficient, and shape is crucial for autonomous robotic systems operating continuously in unstructured environments. Precise identification of these properties is essential for manipulating objects in a stable and controlled way, and is also required to anticipate the outcomes of (prehensile or non-prehensile) manipulation actions such as pushing, pulling, and lifting. Our study focuses on autonomously inferring the physical properties of a diverse set of homogeneous, heterogeneous, and articulated objects using a robotic system equipped with vision and tactile sensors. We propose a novel predictive perception framework that identifies the properties of such objects by leveraging versatile exploratory actions: non-prehensile pushing and prehensile pulling. As part of the framework, we propose a novel active shape perception method to seamlessly initiate exploration. Our dual differentiable filtering with Graph Neural Networks learns the object-robot interaction and performs consistent inference of indirectly observable, time-invariant object properties. In addition, we formulate an $N$-step information gain approach to actively select the most informative actions for efficient learning and inference. Extensive real-robot experiments with planar objects show that our predictive perception framework outperforms the state-of-the-art baseline; we further demonstrate the framework in three applications: i) object tracking, ii) goal-driven tasks, and iii) detection of changes in the environment.
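The $N$-step information gain selection can be pictured as simulating candidate push or pull actions through the filter and choosing the one that most shrinks the uncertainty over the object properties. Below is a minimal sketch under a Gaussian-belief assumption; `simulate_filter` (a one-step rollout of the predicted posterior covariance) and the greedy loop are illustrative, not the paper's exact formulation.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a Gaussian belief over object properties."""
    return 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * cov))

def select_action(belief_cov, candidate_actions, simulate_filter, n_steps=3):
    """Choose the exploratory action whose simulated n-step filter rollout
    yields the largest expected information gain (entropy reduction)."""
    best_action, best_gain = None, -np.inf
    for action in candidate_actions:
        cov = belief_cov.copy()
        for _ in range(n_steps):
            cov = simulate_filter(cov, action)  # hypothetical rollout of the posterior covariance
        gain = gaussian_entropy(belief_cov) - gaussian_entropy(cov)
        if gain > best_gain:
            best_action, best_gain = action, gain
    return best_action
```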
Abstract:To manipulate objects or dance together, humans and robots exchange energy and haptic information. While the exchange of energy in human-robot interaction has been extensively investigated, the underlying exchange of haptic information is not well understood. Here, we develop a computational model of the mechanical and sensory interactions between agents that can tune their viscoelasticity while accounting for their sensory and motor noise. The resulting stochastic-optimal-information-and-effort (SOIE) controller predicts how the exchange of haptic information and task performance can be improved by adjusting viscoelasticity. The controller was first implemented in a robot-robot tracking experiment, where it outperformed both stiff and compliant control. Importantly, the optimal controller also predicts how connected humans alter their muscle activation to improve haptic communication, adjusting their viscoelasticity differentially according to their own sensing noise and haptic perturbations. A human-robot experiment then illustrated the applicability of this optimal control strategy for robots, yielding improved tracking performance and effective haptic communication as the robot adjusted its viscoelasticity according to its own and the user's noise characteristics. The proposed SOIE controller may thus be used to improve haptic communication and collaboration between humans and robots.
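As a rough illustration, such a controller can be read as minimising a stochastic cost that trades tracking error and control effort against the haptic information exchanged between the agents. The form below, with state $x_t$, target $x_t^*$, control $u_t$, haptic observation $y_t$, and weights $Q$, $R$, $\lambda$, is an illustrative assumption rather than the paper's exact formulation; viscoelasticity enters by shaping both the interaction forces and the information term $I(x_t^{*}; y_t)$.

```latex
J = \mathbb{E}\!\left[ \int_0^T \Big( (x_t - x_t^{*})^{\top} Q\, (x_t - x_t^{*})
    + u_t^{\top} R\, u_t - \lambda\, I(x_t^{*}; y_t) \Big)\, dt \right]
```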
Abstract:This study investigates the role of haptic feedback in a car-following scenario, where information about the motion of the front vehicle is provided through a virtual elastic connection with it. Using a robotic interface in a simulated driving environment, we examined the impact of varying levels of this haptic feedback on the driver's ability to follow the road while avoiding obstacles. The results of an experiment with 15 subjects indicate that haptic feedback from the front vehicle's motion can significantly improve driving control (i.e., reduce motion jerk and deviation from the road) and reduce mental load (evaluated via questionnaire). This suggests that haptic communication, as observed between physically interacting humans, can be leveraged to improve safety and efficiency in automated driving systems, warranting further testing in real driving scenarios.
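The virtual elastic connection can be rendered as a simple spring (optionally with damping) between the front vehicle and the ego vehicle, with the feedback level set by the stiffness. This is a minimal sketch; the function name, spring-damper form, and parameters are assumptions for illustration.

```python
def coupling_force(x_front, x_ego, stiffness, rest_gap, damping=0.0, v_rel=0.0):
    """Force rendered on the driver's interface from a virtual elastic
    connection to the front vehicle; `stiffness` sets the haptic
    feedback level, `rest_gap` the nominal following distance."""
    return stiffness * (x_front - x_ego - rest_gap) + damping * v_rel
```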
Abstract:Autonomously exploring unknown physical properties of novel objects such as stiffness, mass, center of mass, friction coefficient, and shape is crucial for autonomous robotic systems operating continuously in unstructured environments. We introduce a novel visuo-tactile predictive cross-modal perception framework in which initial visual observations (shape) aid in obtaining an initial prior over the object properties (mass). This prior improves the efficiency of object property estimation, which proceeds autonomously via interactive non-prehensile pushing combined with a dual filtering approach. The inferred properties are then used to efficiently enhance the predictive capability of the cross-modal function using a human-inspired `surprise' formulation. We evaluated the proposed framework in a real-robot scenario, demonstrating superior performance.
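A common way to formalise such a 'surprise' signal is the Bayesian surprise, i.e. the KL divergence between the posterior belief (after an interactive push) and the prior predicted by the cross-modal function; a large surprise would then trigger an update of that function. The Gaussian form below is an assumed instantiation, not necessarily the paper's exact formulation.

```python
import numpy as np

def bayesian_surprise(mu_prior, cov_prior, mu_post, cov_post):
    """KL(posterior || prior) between Gaussian beliefs over object
    properties; one plausible 'surprise' measure."""
    k = mu_prior.shape[0]
    inv_prior = np.linalg.inv(cov_prior)
    diff = mu_post - mu_prior
    return 0.5 * (np.trace(inv_prior @ cov_post)
                  + diff @ inv_prior @ diff
                  - k
                  + np.log(np.linalg.det(cov_prior) / np.linalg.det(cov_post)))
```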
Abstract:Space poses significant challenges for human physiology, leading to physiological adaptations in response to an environment vastly different from Earth's. While these adaptations can be beneficial, they may not fully counteract the adverse impact of space-related stressors. A comprehensive understanding of these physiological adaptations is needed to devise effective countermeasures to support human life in space. This review focuses on the impact of the space environment on the musculoskeletal system. It highlights the complex interplay between bone and muscle adaptation, the underlying physiological mechanisms, and their implications for astronaut health. Furthermore, the review delves into currently deployed countermeasures and recent advances, and proposes, as a perspective for future developments, wearable sensing and robotic technologies, such as exoskeletons, as a fitting alternative.
Abstract:The adoption of high-density electrode systems for human-machine interfaces in real-life applications has been impeded by practical and technical challenges, including noise interference, motion artifacts, and the lack of compact electrode interfaces. To overcome some of these challenges, we introduce a wearable and stretchable electromyography (EMG) array, and present its design, fabrication methodology, characterisation, and comprehensive evaluation. Our proposed solution comprises dry electrodes on flexible printed circuit board (PCB) substrates, eliminating the need for time-consuming skin preparation. The proposed fabrication method allows the manufacturing of stretchable sleeves, with consistent and standardised coverage across subjects. We thoroughly tested our developed prototype, evaluating its potential for application in both research and real-world environments. The results of our study showed that the developed stretchable array matches or outperforms traditional EMG grids and holds promise in furthering the real-world translation of high-density EMG for human-machine interfaces.
Abstract:Spinal cord injuries (SCIs) generally result in sensory and mobility impairments, with torso instability being particularly debilitating. Existing torso stabilisers are often rigid and restrictive. This paper presents an early investigation into a non-restrictive 1 degree-of-freedom (DoF) mechanical torso stabiliser inspired by devices such as centrifugal clutches and seat-belt mechanisms. Firstly, the paper presents a motion-capture (MoCap) and OpenSim-based kinematic analysis of the cable-based system to understand requisite device characteristics. The simulated evaluation indicated that the cable-based device requires 55-60 cm of unrestricted travel and should lock at a threshold cable velocity of 80-100 cm/s. Next, the developed 1-DoF device is introduced. The proposed mechanical device is transparent during activities of daily living and transitions to compliant blocking when an incipient fall is detected. Prototype behaviour was then validated using a MoCap-based kinematic analysis to verify non-restrictive movement, reliable transition to blocking, and compliance while blocking.
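The lock/unlock behaviour can be summarised as a small velocity-threshold state machine: transparent while the cable pays out slowly, blocking once the velocity crosses the incipient-fall threshold. The sketch below uses a threshold inside the simulated 80-100 cm/s range; the reset logic is an illustrative assumption.

```python
class TorsoStabiliser:
    """Minimal state machine for the 1-DoF device: transparent during
    activities of daily living, compliant blocking on incipient fall."""

    def __init__(self, lock_threshold=90.0, reset_velocity=5.0):
        self.lock_threshold = lock_threshold  # cm/s, within the simulated 80-100 cm/s range
        self.reset_velocity = reset_velocity  # cm/s, near-rest speed before unlocking (assumed)
        self.locked = False

    def update(self, cable_velocity):
        if not self.locked and cable_velocity > self.lock_threshold:
            self.locked = True    # incipient fall detected: engage compliant blocking
        elif self.locked and abs(cable_velocity) < self.reset_velocity:
            self.locked = False   # user recovered: return to transparent mode
        return self.locked
```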
Abstract:While most tactile robots operate in closed-set conditions, it is challenging for them to operate in open-set conditions where test objects are beyond the robots' knowledge. We propose an open-set recognition framework that uses mechanical properties to recognise known objects and incrementally label novel objects. The main contribution is a clustering algorithm that exploits knowledge of known objects to estimate cluster centres and sizes, unlike typical algorithms that select them randomly. The framework is validated with mechanical properties estimated from real objects during interaction. The results show that the framework recognises objects better than alternative methods, owing to the novelty detector. Importantly, our clustering algorithm yields better clustering performance than other methods. Furthermore, hyperparameter studies show that cluster size strongly affects clustering results and must be tuned properly.
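The key idea, seeding clusters from known objects rather than at random, can be sketched as estimating each known class's centre and size from its samples and flagging as novel anything that falls outside every known cluster. Function names and the distance-based size estimate are assumptions; the paper's estimator may differ.

```python
import numpy as np

def seed_clusters(known_features, known_labels):
    """Estimate cluster centres and sizes from known objects' mechanical
    properties, instead of random initialisation."""
    centres, sizes = {}, {}
    for label in set(known_labels):
        pts = known_features[np.asarray(known_labels) == label]
        centres[label] = pts.mean(axis=0)
        # cluster 'size' as the mean member-to-centre distance
        sizes[label] = np.linalg.norm(pts - centres[label], axis=1).mean()
    return centres, sizes

def is_novel(x, centres, sizes, margin=1.5):
    """Novelty detection: the sample lies outside every known cluster.
    `margin` is the size hyperparameter the study found must be tuned."""
    return all(np.linalg.norm(x - centres[l]) > margin * sizes[l] for l in centres)
```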
Abstract:Perceptual processes are frequently multi-modal. This is the case for haptic perception. Data sets of visual and haptic sensory signals have been compiled in the past, especially for the exploration of textured surfaces. These data sets were intended for natural and artificial perception studies and to provide training data for machine learning research. They were typically acquired with rigid probes or artificial robotic fingers. Here, we collected the visual, auditory, and haptic signals produced when a human finger explored textured surfaces. We assessed the data set via machine learning classification techniques. Interestingly, multi-modal classification performance reached 97%, whereas haptic-only classification was around 80%.
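The reported gap between haptic-only and multi-modal performance can be reproduced in spirit with a simple feature-concatenation baseline; the classifier choice and pipeline below are assumptions, not the study's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def compare_modalities(haptic_X, visual_X, audio_X, y):
    """Haptic-only vs. fused (concatenated) classification accuracy,
    with one feature vector per exploration trial and modality."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    haptic_acc = cross_val_score(clf, haptic_X, y, cv=5).mean()
    fused_X = np.hstack([haptic_X, visual_X, audio_X])
    fused_acc = cross_val_score(clf, fused_X, y, cv=5).mean()
    return haptic_acc, fused_acc
```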
Abstract:For robotic systems to interact with objects in dynamic environments, it is essential to perceive physical properties of the objects such as shape, friction coefficient, mass, center of mass, and inertia. This not only eases the selection of manipulation actions but also ensures that the task is performed as desired. However, estimating the physical properties of objects, especially novel ones, is challenging with either vision or tactile sensing alone. In this work, we propose a novel framework to estimate key object parameters via non-prehensile manipulation using vision and tactile sensing. Our active dual differentiable filtering (ADDF) approach, part of this framework, learns the object-robot interaction during non-prehensile object pushing to infer the object's parameters. The method enables the robotic system to employ vision and tactile information to interactively explore a novel object via non-prehensile pushing. The novel N-step active formulation within the differentiable filtering facilitates efficient learning of the object-robot interaction model and efficient inference by selecting the next best exploratory push actions (where to push, and how to push). We extensively evaluated our framework in simulation and real-robot scenarios, yielding superior performance to the state-of-the-art baseline.
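A differentiable filter of this kind typically interleaves a learned process model (how a push changes the object state) with a Kalman-style measurement update on the visual/tactile observation, so the whole predict-update loop is trainable end to end. The cell below is a generic sketch of that idea, not the authors' ADDF implementation; the two-dimensional action encoding (where, how to push) and the linear observation model are assumptions.

```python
import torch

class DifferentiableFilterCell(torch.nn.Module):
    """One predict/update step: learned dynamics + Kalman-style fusion."""

    def __init__(self, state_dim, obs_dim, action_dim=2, hidden=64):
        super().__init__()
        self.dynamics = torch.nn.Sequential(      # learned object-robot interaction model
            torch.nn.Linear(state_dim + action_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, state_dim))
        self.obs_model = torch.nn.Linear(state_dim, obs_dim)  # assumed linear observation map

    def forward(self, state, cov, action, obs, Q, R):
        # Predict: propagate the state through the learned push dynamics.
        state_pred = state + self.dynamics(torch.cat([state, action], dim=-1))
        cov_pred = cov + Q
        # Update: fuse the visual/tactile observation.
        H = self.obs_model.weight
        S = H @ cov_pred @ H.T + R
        K = cov_pred @ H.T @ torch.linalg.inv(S)
        innovation = obs - self.obs_model(state_pred)
        state_new = state_pred + (K @ innovation.unsqueeze(-1)).squeeze(-1)
        cov_new = (torch.eye(cov.shape[-1]) - K @ H) @ cov_pred
        return state_new, cov_new
```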