Abstract: This study reviews the impact of personalization on human-robot interaction. Firstly, the various strategies used to achieve personalization are briefly described. Secondly, the effects of personalization known to date are discussed. They are presented along with the personalized parameters and features, the technology used, and the use case they relate to. It is observed that various positive effects have been discussed in the literature, while possible negative effects seem to require further investigation.
Abstract: Nowadays, robots are expected to interact more physically, cognitively, and socially with people. They should adapt to unpredictable contexts alongside individuals with varied behaviours. For this reason, personalisation is a valuable attribute for social robots, as it allows them to act according to a specific user's needs and preferences and to achieve robot behaviours that are natural and transparent to humans. If correctly implemented, personalisation could also be the key to the large-scale adoption of social robotics. However, achieving personalisation is arduous, as it requires us to expand the boundaries of robotics by taking advantage of the expertise of various domains. Indeed, personalised robots need to analyse and model user interactions while considering the users' involvement in the adaptive process. It also requires us to address the ethical and socio-cultural aspects of personalised HRI, to achieve inclusive and diverse interaction and to avoid deception and misplaced trust when interacting with users. At the same time, policymakers need to ensure that regulations account for possible short-term and long-term adaptive HRI. This workshop aims to raise an interdisciplinary discussion on personalisation in robotics. It aims at bringing researchers from different fields together to propose guidelines for personalisation while addressing the following questions: how to define it, how to achieve it, and how it should be guided to fit legal and ethical requirements.
Abstract: Self/other distinction and self-recognition are important skills for interacting with the world, as they allow humans to differentiate their own actions from those of others and to be self-aware. However, only a select group of animals, mainly higher-order mammals such as humans, has passed the mirror test, a behavioural experiment proposed to assess self-recognition abilities. In this paper, we describe self-recognition as a process built on top of unconscious body-perception mechanisms. We present an algorithm that enables a robot to perform non-appearance self-recognition in a mirror and to distinguish its simple actions from those of other entities, by answering the following question: am I generating these sensations? The algorithm combines active inference, a theoretical model of perception and action in the brain, with neural network learning. The robot learns the relation between its actions and its body configuration and the effects they produce in the visual field and in its body sensors. The prediction error generated between the models and the real observations during the interaction is used to infer the body configuration through free-energy minimization and to accumulate evidence for recognizing its body. Experimental results on a humanoid robot show the reliability of the algorithm under different initial conditions, such as mirror recognition from any perspective, robot-robot distinction, and human-robot differentiation.
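As a rough illustration of the evidence-accumulation idea described above (not the paper's implementation), the following Python sketch compares the visual effects predicted from the robot's own motor commands with the observed effects and accumulates Gaussian log-evidence for the hypothesis "am I generating these sensations?". The functions `self_evidence` and `is_self`, the noise level `sigma`, and the decision `margin` are hypothetical.

```python
# Minimal sketch of evidence accumulation for self/other distinction:
# compare the visual effect predicted from the robot's own motor commands
# against the observed effect, and accumulate log-evidence for the
# hypothesis "I am generating these sensations".
import numpy as np

def self_evidence(predicted_effects, observed_effects, sigma=0.1):
    """Accumulate Gaussian log-evidence (up to a constant) for self-generation.

    predicted_effects / observed_effects: arrays of shape (T, D), e.g. optical-flow
    summaries over T time steps; sigma is an assumed observation noise level.
    """
    errors = observed_effects - predicted_effects            # sensory prediction errors
    return -0.5 * np.sum(errors ** 2) / sigma ** 2

def is_self(predicted, observed, null_log_evidence, margin=10.0):
    """Hypothetical decision rule: 'it is me' if the self-generation hypothesis
    beats a null (other-agent) model by a fixed margin."""
    return self_evidence(predicted, observed) - null_log_evidence > margin
```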
Abstract: Perceptual hallucinations are present in neurological and psychiatric disorders, as well as in amputees. While hallucinations can be drug-induced, it has been reported that they can even be provoked in healthy subjects. Understanding their manifestation could thus unveil how the brain processes sensory information and might provide evidence for the generative nature of perception. In this work, we investigate the generation of tactile hallucinations on biologically inspired, artificial skin. To model tactile hallucinations, we apply homeostasis, a change in the excitability of neurons during sensory deprivation, in a Deep Boltzmann Machine (DBM). We find that homeostasis prompts hallucinations of previously learned patterns on the artificial skin in the absence of sensory input. Moreover, we show that homeostasis is capable of inducing the formation of meaningful latent representations in a DBM and that it significantly increases the quality of the reconstruction of these latent states. Through this, our work provides a possible explanation for the nature of tactile hallucinations and highlights homeostatic processes as a potential underlying mechanism.
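The homeostatic mechanism can be sketched as follows; this toy example uses a single hidden layer rather than the paper's Deep Boltzmann Machine, and `target_rate` and `eta` are assumed hyperparameters. The idea is that under sensory deprivation, hidden-unit biases rise until activity returns to a target rate, which can let previously learned patterns re-emerge without input.

```python
# Illustrative sketch (simplified to one hidden layer) of a homeostatic rule:
# when a unit's mean activation drops below a target rate, its bias is raised,
# increasing excitability until learned patterns can re-emerge spontaneously.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def homeostatic_step(W, b_hidden, visible, target_rate=0.1, eta=0.01):
    """One Gibbs-style hidden update with a homeostatic bias adjustment.

    W: (n_visible, n_hidden) weights; b_hidden: hidden biases (n_hidden,);
    visible: (batch, n_visible) visible states (all zeros under deprivation).
    """
    h_prob = sigmoid(visible @ W + b_hidden)
    h_state = (np.random.rand(*h_prob.shape) < h_prob).astype(float)
    # Homeostasis: drive mean hidden activity toward the target firing rate.
    b_hidden = b_hidden + eta * (target_rate - h_prob.mean(axis=0))
    return h_state, b_hidden
```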
Abstract: This survey presents the most relevant neural network models of autism spectrum disorder and schizophrenia, from the first connectionist models to recent deep network architectures. We analyzed and compared the most representative symptoms with their neural-model counterparts, detailing the alteration introduced in the network that generates each symptom and identifying their strengths and weaknesses. For completeness, we additionally cross-compared Bayesian and free-energy approaches. Models of schizophrenia mainly focused on hallucinations and delusional thoughts, using neural disconnections or inhibitory imbalance as the predominant alteration. Models of autism focused rather on perceptual difficulties, mainly excessive attention to environmental details, implemented as excessive inhibitory connections or increased sensory precision. We found an excessively narrow view of these psychopathologies, centered on one specific and simplified effect and usually constrained by the technical idiosyncrasies of the network used. Recent theories and evidence on sensorimotor integration and body perception, combined with modern neural network architectures, offer a broader and novel spectrum for approaching these psychopathologies, outlining future research on neural-network computational psychiatry, a powerful asset for understanding the inner processes of the human brain.
Abstract: One of the biggest challenges in robotic systems is interacting under uncertainty. Unlike robots, humans learn, adapt, and perceive their body as a unity when interacting with the world. We hypothesize that the nervous system counteracts sensor and motor uncertainties through unconscious processes that robustly fuse the available information to approximate the state of the body and the world. Uniting perception and action under a common principle has been sought for decades, and active inference is one of the candidate unification theories. In this work, we present a humanoid robot interacting with the world by means of a human-brain-inspired perception and control algorithm based on the free-energy principle. Until now, active inference had only been tested in simulated examples; its application on a real robot shows the advantages of such an algorithm for real-world applications. The humanoid robot iCub was capable of performing robust reaching behaviors with both arms and active head-based object tracking in the visual field, despite visual noise, artificially introduced noise in the joint encoders (up to 40 degrees of deviation), differences between the model and the real robot, and misdetections of the hand.
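A minimal sketch of such a free-energy-based perception-action loop (illustrative only, not the iCub implementation): the belief `mu` over joint angles and the motor command both descend the free-energy gradient, driven by proprioceptive and visual prediction errors. The forward model `g`, its Jacobian `dg_dmu`, the goal prior, and all precisions and gains are assumptions for illustration.

```python
# First-order active-inference step: perception updates the belief mu by
# free-energy gradient descent; action changes the joints so that sensed
# proprioception matches the belief (velocity control is assumed).
import numpy as np

def active_inference_step(mu, a, q_obs, v_obs, g, dg_dmu, mu_goal,
                          dt=0.01, pi_q=1.0, pi_v=1.0, pi_goal=0.5):
    e_q = q_obs - mu                  # proprioceptive prediction error
    e_v = v_obs - g(mu)               # visual prediction error (hand position)
    e_g = mu_goal - mu                # pull toward the desired (goal) belief
    # Perception: belief update along the negative free-energy gradient.
    mu = mu + dt * (pi_q * e_q + pi_v * dg_dmu(mu).T @ e_v + pi_goal * e_g)
    # Action: reduce the proprioceptive error by moving the joints.
    a = a + dt * (-pi_q * e_q)
    return mu, a
```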
Abstract: Artificial self-perception is the machine's ability to perceive its own body, i.e., the mastery of the modal and intermodal contingencies of performing an action with a specific sensor/actuator body configuration. In other words, the spatio-temporal patterns that relate its sensors (e.g., visual, proprioceptive, tactile), its actions, and its body latent variables are responsible for the distinction between its own body and the rest of the world. This paper describes some of the latest approaches for modelling artificial body self-perception, from Bayesian estimation to deep learning. Results show the potential of these model-free, unsupervised or semi-supervised crossmodal/intermodal learning approaches. However, there are still challenges to overcome before we achieve artificial multisensory body perception.
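One simple reading of such intermodal contingencies can be sketched as follows (a toy illustration, not any of the surveyed models): visual regions whose changes correlate strongly with the robot's own motor commands are candidates for "my body", the rest for "world". The function `body_saliency` and its input format are hypothetical.

```python
# Score how strongly each visual region co-varies with the robot's own motor
# activity; high correlation suggests self-generated (own-body) motion.
import numpy as np

def body_saliency(motor_signals, visual_changes):
    """Pearson-style correlation between motor activity and per-region visual change.

    motor_signals: (T,) summary of motor command magnitude over T steps.
    visual_changes: (T, R) magnitude of change in each of R visual regions.
    Returns an (R,) score in [-1, 1]; values near 1 suggest own-body regions.
    """
    m = (motor_signals - motor_signals.mean()) / (motor_signals.std() + 1e-8)
    v = (visual_changes - visual_changes.mean(axis=0)) / (visual_changes.std(axis=0) + 1e-8)
    return (m[:, None] * v).mean(axis=0)
```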
Abstract: We present an active visual search model for finding objects in unknown environments. The proposed algorithm guides the robot towards the sought object using the relevant stimuli provided by the visual sensors. Existing search strategies are either purely reactive or use simplified sensor models that do not exploit all the available visual information. In this paper, we propose a new model that actively extracts visual information via visual attention techniques and, in conjunction with a non-myopic decision-making algorithm, leads the robot to search the more relevant areas of the environment. The attention module couples top-down and bottom-up attention models, enabling the robot to search regions of higher importance first. The proposed algorithm is evaluated on a mobile robot platform in a 3D simulated environment. The results indicate that the use of visual attention significantly improves search, but the degree of improvement depends on the nature of the task and the complexity of the environment. In our experiments, we found that visual attention mechanisms can yield performance enhancements of up to 42% in structured and 38% in highly unstructured, cluttered environments.
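To make the fusion of bottom-up and top-down attention concrete, here is a schematic sketch; the paper's planner is non-myopic, whereas this toy version picks the next region greedily, and the weight `w_td` is an assumed parameter.

```python
# Fuse bottom-up and top-down saliency maps and pick the next region to inspect.
import numpy as np

def fuse_saliency(bottom_up, top_down, w_td=0.7):
    """Weighted combination of equally shaped saliency maps; w_td is an assumed weight."""
    s = (1.0 - w_td) * bottom_up + w_td * top_down
    return s / (s.sum() + 1e-12)                 # normalise into a probability-like map

def next_region(saliency, visited):
    """Return the index of the most salient not-yet-visited cell (greedy choice)."""
    masked = np.where(visited, -np.inf, saliency)
    return np.unravel_index(np.argmax(masked), saliency.shape)
```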
Abstract: The predictive functions that allow humans to infer their body state through sensorimotor integration are critical for safe interaction in complex environments. These functions are adaptive and robust to non-linear actuators and noisy sensory information. This paper introduces a computational perceptual model based on predictive processing that enables any multisensory robot to learn, infer, and update its body configuration when using arbitrary sensors with additive Gaussian noise. The proposed method integrates different sources of information (tactile, visual, and proprioceptive) to drive the robot's belief towards its current body configuration. The motivation is to provide robots with the embodied perception needed for self-calibration and safe physical human-robot interaction. We formulate body learning as obtaining the forward model that encodes the sensor values as a function of the body variables, and we solve it by Gaussian process regression. We model body estimation as minimizing the discrepancy between the robot's belief about its body configuration and the observed posterior. We minimize the variational free energy using the sensory prediction errors (sensed vs. expected). To evaluate the model, we test it on a real multisensory robotic arm. We show how the contributions of different sensor modalities, included as additive errors, improve the refinement of the body estimate, and how the system adapts itself to provide the most plausible solution even when strong visuo-tactile perturbations are injected. We further analyse the reliability of the model when different sensor modalities are disabled. This provides grounded evidence for the correctness of the perceptual model and shows how the robot estimates and adjusts its body configuration by means of sensory information alone.
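The estimation step can be summarised with a short sketch (illustrative, not the paper's code): the belief over the body configuration descends the precision-weighted prediction errors of each available modality through learned forward models (e.g. Gaussian-process regression means). The dictionary-based interface, the Jacobians, and the precision values below are assumptions.

```python
# One free-energy gradient step over all available sensory modalities:
# each modality pulls the belief mu toward values that explain its observation.
import numpy as np

def update_belief(mu, observations, forward_models, jacobians, precisions, dt=0.05):
    """observations[k]: observed value for modality k (proprioceptive, visual, tactile).
    forward_models[k](mu) predicts that observation; jacobians[k](mu) is its derivative
    w.r.t. mu; precisions[k] is the assumed inverse noise variance. Disabled modalities
    are simply omitted from the dictionaries."""
    grad = np.zeros_like(mu)
    for k, y in observations.items():
        e = y - forward_models[k](mu)                     # sensory prediction error
        grad += precisions[k] * jacobians[k](mu).T @ e    # precision-weighted pull on mu
    return mu + dt * grad
```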