Abstract: As automation and mobile robotics reshape work environments, rising expectations for productivity increase the cognitive demands on human operators, creating a potential for stress and cognitive overload. Accurately assessing an operator's mental state is critical for maintaining performance and well-being. We use subjective time perception, which can be altered by stress and cognitive load, as a sensitive, low-latency indicator of well-being and cognitive strain. Distortions in time perception can affect decision-making, reaction times, and overall task effectiveness, making it a valuable metric for adaptive human-swarm interaction systems. We study how human physiological signals can be used to estimate a person's subjective time perception, using a human-swarm interaction scenario as an example in which a human operator guides and controls a swarm of small mobile robots. We obtain eye-tracking data and classify it with respect to subjective time perception, using questionnaire responses as ground truth. Our results show that we can successfully estimate a person's time perception from eye-tracking data. The approach benefits from individual-based pretraining using only 30 seconds of data. In future work, we aim for robots that respond to human operator needs by automatically classifying physiological data in a closed control loop.
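As an illustration of the kind of classification pipeline the abstract describes (not the paper's actual implementation), the following sketch windows eye-tracking signals into simple statistical features and trains a classifier that is adapted with a short, roughly 30-second, individually labeled segment. All variable names, the window length, and the choice of classifier are assumptions.

    # Hypothetical sketch: window eye-tracking signals and classify
    # subjective time perception, with a short individual adaptation step.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def window_features(gaze: np.ndarray, fs: int, win_s: float = 1.0) -> np.ndarray:
        # gaze: shape (n_samples, n_channels), e.g. pupil diameter, gaze x/y.
        win = int(fs * win_s)
        n = gaze.shape[0] // win
        chunks = gaze[: n * win].reshape(n, win, -1)
        # Simple per-window statistics (mean and std per channel) as features.
        return np.concatenate([chunks.mean(axis=1), chunks.std(axis=1)], axis=1)

    def personalize(X_pool, y_pool, X_indiv, y_indiv):
        # Train on pooled data from other participants plus ~30 s of the
        # current operator's data labeled via questionnaire responses.
        clf = GradientBoostingClassifier()
        clf.fit(np.vstack([X_pool, X_indiv]), np.concatenate([y_pool, y_indiv]))
        return clf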
Abstract: In our project WatchPlant, we propose to use a decentralized network of living plants as air-quality sensors by measuring their electrophysiology to infer the environmental state, an approach also called phytosensing. We conducted in-lab experiments exposing ivy (Hedera helix) plants to ozone, an important pollutant to monitor, and measured their electrophysiological response. However, there is no well-established automated way of detecting ozone exposure in plants. We propose a generic, automatic toolchain to select a high-performance subset of features and highly accurate models for plant electrophysiology. Our approach derives plant- and stimulus-generic features from the electrophysiological signal using the tsfresh library. Based on these features, we automatically select and optimize machine learning models using AutoML. We use forward feature selection to increase model performance. We show that our approach successfully classifies plant ozone exposure with accuracies of up to 94.6% on unseen data. We also show that our approach can be applied to other plant species and stimuli. Our toolchain automates the development of monitoring algorithms that use plants as pollutant monitors. Our results constitute a significant advancement for phytosensing devices, contributing to the development of cost-effective, high-density urban air-quality monitoring systems in the future.
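The abstract names tsfresh, AutoML, and forward feature selection as the building blocks of the toolchain. The sketch below shows how such stages could be wired together; a plain scikit-learn classifier stands in for the AutoML model search, and the column names (id, time, voltage) and parameters are assumptions rather than details taken from the paper.

    # Hypothetical sketch of the feature-extraction and selection stages.
    import pandas as pd
    from tsfresh import extract_features
    from tsfresh.utilities.dataframe_functions import impute
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SequentialFeatureSelector

    def build_feature_table(signals: pd.DataFrame) -> pd.DataFrame:
        # signals: long-format DataFrame with columns id (recording), time, voltage.
        # tsfresh derives generic time-series features per recording.
        features = extract_features(signals, column_id="id", column_sort="time")
        return impute(features)  # replace NaN/inf left by undefined features

    def forward_select(X: pd.DataFrame, y, n_features: int = 10):
        # Greedy forward feature selection around a stand-in classifier;
        # the paper optimizes models with AutoML, which is not reproduced here.
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        selector = SequentialFeatureSelector(
            clf, n_features_to_select=n_features, direction="forward", cv=5)
        selector.fit(X, y)
        return X.columns[selector.get_support()]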