Abstract: Neonates in intensive care require continuous monitoring. Current measurement devices are limited for long-term use due to the fragility of newborn skin and the interference of wires with medical care and parental interactions. Camera-based vital sign monitoring has the potential to address these limitations and has attracted considerable interest in recent years due to the absence of physical contact between the recording equipment and the neonates, as well as the introduction of low-cost devices. We present a novel system to capture vital signs while offering clinical insights beyond current technologies using a single RGB-D camera. Heart rate and oxygen saturation were measured using colour and infrared signals with mean absolute errors (MAE) of 7.69 bpm and 3.37%, respectively. Using the depth signals, an MAE of 4.83 breaths per minute was achieved for respiratory rate. Tidal volume measurements were obtained with an MAE of 0.61 mL. Flow-volume loops can also be calculated from camera data, which have applications in respiratory disease diagnosis. Our system demonstrates promising capabilities for neonatal monitoring, augmenting current clinical recording techniques to potentially improve outcomes for neonates.
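The abstract does not detail how respiratory rate is extracted from the depth signals, but a common approach is to locate the dominant frequency of chest motion in the respiratory band. The following is a minimal sketch of that idea on a synthetic signal; the band limits, sampling rate, and function name are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def respiratory_rate_bpm(depth_signal, fs):
    """Estimate respiratory rate (breaths/min) from a chest-depth trace
    by finding the dominant FFT peak in the respiratory band.
    Band limits here are assumed, not taken from the paper."""
    x = depth_signal - np.mean(depth_signal)          # remove DC offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    # Restrict to a plausible neonatal breathing band (~0.5-2 Hz, i.e. 30-120 bpm)
    band = (freqs >= 0.5) & (freqs <= 2.0)
    peak_freq = freqs[band][np.argmax(power[band])]
    return 60.0 * peak_freq

# Synthetic example: 50 breaths/min chest motion sampled at 30 Hz with noise
np.random.seed(0)
fs = 30.0
t = np.arange(0, 60, 1.0 / fs)
depth = 2.0 * np.sin(2 * np.pi * (50 / 60.0) * t) + 0.3 * np.random.randn(len(t))
print(round(respiratory_rate_bpm(depth, fs)))  # → 50
```

A real pipeline would first derive the depth trace by averaging depth pixels over a chest region of interest and would need motion-artifact rejection, which this sketch omits.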
Abstract: Ankle exoskeletons have garnered considerable interest for their potential to enhance mobility and reduce fall risks, particularly among the aging population. The efficacy of these devices relies on accurate real-time prediction of the user's intended movements through sensor-based inputs. This paper presents a novel motion prediction framework that integrates three Inertial Measurement Units (IMUs) and eight surface Electromyography (sEMG) sensors to capture both kinematic and muscular activity data. A comprehensive set of activities, representative of everyday movements in barrier-free environments, was recorded for this purpose. Our findings reveal that Convolutional Neural Networks (CNNs) slightly outperform Long Short-Term Memory (LSTM) networks on a dataset of five motion tasks, achieving classification accuracies of $96.5 \pm 0.8 \%$ and $87.5 \pm 2.9 \%$, respectively. Furthermore, we demonstrate the system's proficiency in transfer learning, enabling accurate motion classification for new subjects using just ten samples per class for fine-tuning. The robustness of the model is demonstrated by its resilience to sensor failures that produce absent signals, maintaining reliable performance in real-world scenarios. These results underscore the potential of deep learning algorithms to enhance the functionality and safety of ankle exoskeletons, ultimately improving their usability in daily life.
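Before a CNN or LSTM can classify motion tasks, the multichannel IMU/sEMG streams are typically segmented into fixed-length overlapping windows. The sketch below illustrates that preprocessing step; the channel count (3 IMUs at 9 channels plus 8 sEMG channels), sampling rate, and window parameters are assumptions for illustration, not values reported in the paper.

```python
import numpy as np

def make_windows(signals, win_len, stride):
    """Slice a multichannel recording of shape (channels, samples) into
    fixed-length windows of shape (n_windows, channels, win_len),
    ready to feed a CNN/LSTM classifier."""
    n_ch, n_samp = signals.shape
    starts = range(0, n_samp - win_len + 1, stride)
    return np.stack([signals[:, s:s + win_len] for s in starts])

# Hypothetical setup: 3 IMUs (9 channels total) + 8 sEMG channels = 17 channels,
# 1 kHz sampling, 5 s recording, 200 ms windows with 50% overlap
rec = np.random.randn(17, 5000)
X = make_windows(rec, win_len=200, stride=100)
print(X.shape)  # → (49, 17, 200)
```

Window length and overlap trade off latency against per-window context; for real-time exoskeleton control, shorter windows keep the prediction delay low at the cost of less signal history per decision.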