Abstract: Detecting periodic patterns of interest in noisy time series plays a critical role in tasks ranging from health monitoring to behavior analysis. Existing learning techniques often rely on labels or on clean versions of the signals to detect periodicity, while self-supervised approaches require well-chosen augmentations, which are already challenging to design for time series and, when too strong, can cause collapse, where all representations map to a single point. In this work, we propose a novel method to detect periodicity in time series without any labels and without tailored positive or negative data generation mechanisms built on specific augmentations. We mitigate collapse by ensuring that the learned representations retain information from the original samples, without imposing random variance constraints on the batch. Experiments on three time series tasks against state-of-the-art learning methods show that the proposed approach consistently outperforms prior works, with performance improvements of more than 45--50\%. Code: https://github.com/eth-siplab/Unsupervised_Periodicity_Detection
Abstract: The success of contrastive learning is well known to depend on data augmentation. While the degree of augmentation is well controlled through established techniques in domains such as vision, data augmentation for time series is less explored and remains challenging due to the complexity of the underlying data generation mechanism, e.g., the intricate dynamics of the cardiovascular system. Moreover, there is no widely recognized, general time-series augmentation method that applies across different tasks. In this paper, we propose a novel data augmentation method for quasi-periodic time-series tasks that aims to connect intra-class samples and thereby find order in the latent space. Our method builds upon the well-known mixup technique by adding a novel component that accounts for the periodic nature of non-stationary time series. By controlling the degree of chaos created by the augmentation, our method leads to improved feature representations and better performance on downstream tasks. We evaluate the proposed method on three time-series tasks: heart rate estimation, human activity recognition, and cardiovascular disease detection. Extensive experiments against state-of-the-art methods show that the proposed approach outperforms prior works on optimal data generation and known data augmentation techniques in all three tasks. Source code: https://github.com/eth-siplab/Finding_Order_in_Chaos
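To make the idea concrete, below is a minimal sketch of what a periodicity-aware mixup could look like for 1-D quasi-periodic signals: the second sample is roughly phase-aligned before the usual convex combination, and a Beta-distributed coefficient controls the mixing strength. The function and parameter names are illustrative assumptions, not the released implementation (see the repository above for the actual code).

```python
# Hypothetical sketch of a phase-aware mixup for quasi-periodic 1-D signals.
# Illustrates the general idea only; not the authors' implementation.
import numpy as np

def phase_aligned_mixup(x1, x2, alpha=0.2, rng=None):
    """Mix two same-length 1-D signals after roughly aligning their phases.

    x1, x2 : np.ndarray of shape (T,)
    alpha  : Beta(alpha, alpha) parameter controlling the mixing strength.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)

    # Estimate the lag that best aligns x2 with x1 via circular
    # cross-correlation computed in the frequency domain.
    spec = np.fft.rfft(x1) * np.conj(np.fft.rfft(x2))
    lag = int(np.argmax(np.fft.irfft(spec, n=len(x1))))
    x2_aligned = np.roll(x2, lag)

    # Convex combination of the aligned signals (standard mixup step).
    return lam * x1 + (1.0 - lam) * x2_aligned, lam

# Usage: mix two noisy sinusoids with slightly different phases.
t = np.linspace(0, 10, 1000)
x_a = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
x_b = np.sin(2 * np.pi * 1.2 * t + 1.0) + 0.1 * np.random.randn(t.size)
x_mix, lam = phase_aligned_mixup(x_a, x_b)
```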
Abstract: We present a novel learning-based method that achieves state-of-the-art performance on several heart rate estimation benchmarks based on photoplethysmography (PPG) signals. We model the evolution of the heart rate as a discrete-time stochastic process represented by a hidden Markov model. A trained neural network produces a distribution over possible heart rate values for a given PPG signal window. Using belief propagation, we incorporate the statistical distribution of heart rate changes to refine these estimates in a temporal context. From this, we obtain a quantized probability distribution over the range of possible heart rate values that captures a meaningful and well-calibrated estimate of the inherent predictive uncertainty. We demonstrate the robustness of our method on eight public datasets with three different cross-validation experiments.
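As an illustration of the temporal refinement step, the sketch below runs forward-backward belief propagation over quantized heart-rate bins, assuming a Gaussian-shaped prior on the change between consecutive windows; the transition width and bin layout are placeholder assumptions, not the paper's fitted statistics.

```python
# Illustrative sketch (not the authors' code): refine per-window heart-rate
# posteriors with forward-backward belief propagation on a discrete HMM whose
# states are quantized heart-rate bins.
import numpy as np

def smooth_hr_posteriors(emissions, hr_bins, sigma_bpm=2.0):
    """emissions : (T, K) per-window probabilities from the neural network.
    hr_bins   : (K,) heart-rate value of each bin in bpm.
    sigma_bpm : assumed std of the heart-rate change between windows.
    """
    T, K = emissions.shape
    # Transition matrix: Gaussian prior on the change between adjacent windows.
    diff = hr_bins[None, :] - hr_bins[:, None]
    trans = np.exp(-0.5 * (diff / sigma_bpm) ** 2)
    trans /= trans.sum(axis=1, keepdims=True)

    # Forward pass (filtering).
    fwd = np.zeros((T, K))
    fwd[0] = emissions[0] / emissions[0].sum()
    for t in range(1, T):
        fwd[t] = emissions[t] * (fwd[t - 1] @ trans)
        fwd[t] /= fwd[t].sum()

    # Backward pass (smoothing).
    bwd = np.ones((T, K))
    for t in range(T - 2, -1, -1):
        bwd[t] = trans @ (emissions[t + 1] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()

    post = fwd * bwd
    post /= post.sum(axis=1, keepdims=True)
    return post  # smoothed distribution over hr_bins for every window

# Usage sketch: expected heart rate per window.
# hr_est = smooth_hr_posteriors(net_probs, bins) @ bins
```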
Abstract: Equipping health monitoring devices with machine intelligence is important for enabling automatic mobile healthcare applications. However, it also brings additional challenges due to the resource scarcity of these devices. This work introduces a dynamic sensor selection methodology based on neural contextual bandits for high-performance, resource-efficient body-area networks, aiming to realize next-generation mobile health monitoring devices. The methodology uses contextual bandits to select the most informative sensor combinations at runtime and to ignore redundant data, thereby decreasing transmission and computing power in a body area network (BAN). The proposed method has been validated on one of the most common health monitoring applications: cardiac activity monitoring. Solutions from our method are compared against those from related works in terms of classification performance and energy, taking communication energy consumption into account. Our final solutions reach $78.8\%$ AU-PRC on the PTB-XL ECG dataset for cardiac abnormality detection while decreasing the overall energy consumption and computational energy by $3.7 \times$ and $4.3 \times$, respectively.
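The sketch below illustrates the general contextual-bandit mechanism with a simplified epsilon-greedy, linear-reward stand-in (the paper itself uses neural contextual bandits); the context, reward, and energy-cost definitions are illustrative assumptions.

```python
# Simplified illustration (not the paper's implementation): an epsilon-greedy
# contextual bandit that picks a sensor subset per window, trading an assumed
# task reward against a per-subset energy cost.
import numpy as np

class SensorSelectionBandit:
    def __init__(self, n_arms, context_dim, epsilon=0.1, lr=0.01, energy_cost=None):
        self.epsilon = epsilon
        self.lr = lr
        # One linear reward model per arm (per candidate sensor combination).
        self.weights = np.zeros((n_arms, context_dim))
        self.energy_cost = np.zeros(n_arms) if energy_cost is None else energy_cost

    def select(self, context, rng):
        if rng.random() < self.epsilon:
            return int(rng.integers(len(self.weights)))   # explore
        scores = self.weights @ context - self.energy_cost
        return int(np.argmax(scores))                     # exploit

    def update(self, arm, context, reward):
        # SGD step on the squared error between predicted and observed reward.
        err = reward - self.weights[arm] @ context
        self.weights[arm] += self.lr * err * context

# Usage sketch: the context could be a cheap summary of the current window
# (e.g., signal-quality features); the reward could be classification
# correctness minus a transmission-energy penalty for the chosen combination.
```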
Abstract: Recent developments in wearable devices and the Internet of Medical Things (IoMT) allow real-time monitoring and recording of electrocardiogram (ECG) signals. However, continuous monitoring of ECG signals is challenging on low-power wearable devices due to energy and memory constraints. In this paper, we therefore present a novel, energy-efficient methodology for continuous heart monitoring on low-power wearable devices. The proposed methodology is composed of three layers: 1) a noise/artifact detection layer that grades the quality of the ECG signals; 2) a normal/abnormal beat classification layer that detects anomalies in the ECG signals; and 3) an abnormal beat classification layer that detects diseases from the ECG signals. Moreover, a distributed multi-output Convolutional Neural Network (CNN) architecture is used to decrease the energy consumption and the latency between the edge and the fog/cloud. Our methodology reaches an accuracy of 99.2% on the well-known MIT-BIH Arrhythmia dataset. Evaluation on real hardware shows that our methodology is suitable for devices with as little as 32 KB of RAM. Moreover, the proposed methodology achieves $7\times$ higher energy efficiency compared to state-of-the-art works.
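The following sketch illustrates how such a three-layer cascade can exit early on noisy or normal segments so that only abnormal beats reach the heavier disease classifier; the model objects and label strings are placeholders, not the trained networks from the paper.

```python
# Minimal cascade sketch: placeholder models stand in for the three layers.
def classify_ecg_segment(segment, quality_model, beat_model, disease_model):
    """Return a label for one ECG segment, exiting as early as possible."""
    # Layer 1: reject noisy segments so the more expensive models never
    # run on unusable data.
    if quality_model.predict(segment) == "noisy":
        return "discard"

    # Layer 2: cheap normal/abnormal screening; normal beats exit here,
    # which keeps most inferences on the low-power device.
    if beat_model.predict(segment) == "normal":
        return "normal"

    # Layer 3: only abnormal beats reach the heavier disease classifier,
    # which may run on the fog/cloud tier.
    return disease_model.predict(segment)
```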
Abstract: This paper proposes accurate Blood Pressure Monitoring (BPM) based on a single-site photoplethysmographic (PPG) sensor and provides an energy-efficient solution for cuffless wearable edge devices. The continuous PPG signal is preprocessed and fed to an Artificial Neural Network (ANN), which outputs systolic BP (SBP), diastolic BP (DBP), and mean arterial BP (MAP) values for each heartbeat. BPM accuracy is improved by removing outliers in the preprocessing step and by using whole-waveform inputs instead of parameter-based features extracted from the PPG signal. The achieved performance is $3.42 \pm 5.42$ mmHg (MAE $\pm$ RMSD) for SBP, $1.92 \pm 3.29$ mmHg for DBP, and $2.21 \pm 3.50$ mmHg for MAP, which is competitive with other studies. To the best of our knowledge, this is the first BPM solution with edge-computing artificial intelligence. Evaluation on real hardware shows that the solution requires 42.2 ms, 18.2 KB of RAM, and 2.1 mJ of energy per reading on average.
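As a rough illustration of the whole-waveform-to-BP mapping, the sketch below defines a small fully connected network that maps one fixed-length, preprocessed PPG beat to SBP, DBP, and MAP; the layer sizes and beat length are assumptions, not the paper's architecture.

```python
# Illustrative sketch (assumed architecture, not the paper's exact network):
# a small fully connected model regressing [SBP, DBP, MAP] from one PPG beat
# that has been resampled to a fixed length.
import torch
import torch.nn as nn

class BeatToBP(nn.Module):
    def __init__(self, beat_len=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(beat_len, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 3),          # outputs: [SBP, DBP, MAP] in mmHg
        )

    def forward(self, beat):           # beat: (batch, beat_len)
        return self.net(beat)

# Usage: one prediction per detected heartbeat.
model = BeatToBP()
fake_beat = torch.randn(1, 128)        # stands in for a normalized PPG beat
sbp, dbp, map_ = model(fake_beat)[0]
```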
Abstract: This paper proposes a novel lightweight method that uses the multitaper power spectrum to estimate arousal levels on wearable devices. We show that the spectral slope (1/f) of the electrophysiological power spectrum reflects scale-free neural activity. To evaluate the proposed feature's performance, we use scalp EEG recorded during anesthesia and sleep with technician-scored hypnogram annotations. The proposed methodology discriminates wakefulness from reduced arousal solely based on the neurophysiological brain state with more than 80% accuracy. Our findings therefore describe a common electrophysiological marker that tracks reduced arousal states and can be applied to different applications (e.g., emotion detection, driver drowsiness). Evaluation on hardware shows that the proposed methodology can be implemented on devices with at least 512 KB of RAM, with an average energy consumption of 55 mJ.
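A minimal sketch of the feature computation is shown below: a multitaper power spectrum (DPSS tapers) followed by a linear fit of log-power versus log-frequency, whose slope serves as the 1/f marker. The taper settings and frequency band are assumed values, not the paper's exact configuration.

```python
# Sketch of the 1/f feature: multitaper PSD estimate plus a log-log slope fit.
import numpy as np
from scipy.signal.windows import dpss

def spectral_slope(eeg, fs, fmin=1.0, fmax=40.0, nw=4, n_tapers=7):
    n = len(eeg)
    tapers = dpss(n, nw, n_tapers)                  # (n_tapers, n) DPSS windows
    # Average the tapered periodograms to get the multitaper PSD estimate.
    psd = np.mean(np.abs(np.fft.rfft(tapers * eeg, axis=1)) ** 2, axis=0)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    # Fit log-power vs. log-frequency inside the band of interest.
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), deg=1)
    return slope                                    # more negative = steeper 1/f

# Usage: the slope is computed per EEG window; a threshold (or small
# classifier) on the slope separates wakefulness from reduced arousal.
# s = spectral_slope(eeg_window, fs=256.0)
```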
Abstract: Edge-cloud hierarchical systems that employ intelligence through Deep Neural Networks (DNNs) face the dilemma of how to distribute workloads between their tiers. Previous solutions distribute workloads at runtime according to the state of the surroundings, such as the wireless conditions. However, such conditions are usually overlooked at design time. This paper addresses this issue for DNN architectural design by presenting a novel methodology, LENS, which applies multi-objective Neural Architecture Search (NAS) to two-tiered systems, where the performance objectives are reformulated to account for the wireless communication parameters. On our experimental search space, we demonstrate that LENS improves upon the traditional solution's Pareto set by 76.47% and 75% with respect to the energy and latency metrics, respectively.
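The sketch below shows one way communication-aware objectives could enter such a search: candidate latency and energy include a transmission term that depends on the wireless link, and the Pareto front is computed over the reshaped objectives. All names and cost formulas are illustrative assumptions, not the LENS implementation.

```python
# Illustrative sketch of wireless-aware multi-objective scoring for NAS
# candidates in a two-tiered (edge/cloud) system.
from dataclasses import dataclass

@dataclass
class Candidate:
    accuracy: float          # task performance of the architecture
    edge_energy_mj: float    # on-device compute energy
    edge_latency_ms: float   # on-device compute latency
    tx_bytes: int            # size of the tensor sent over the wireless link

def total_latency_ms(c: Candidate, bandwidth_mbps: float) -> float:
    # Latency objective reshaped with the wireless link: compute + transmit.
    return c.edge_latency_ms + (c.tx_bytes * 8) / (bandwidth_mbps * 1e3)

def total_energy_mj(c: Candidate, tx_energy_nj_per_bit: float) -> float:
    return c.edge_energy_mj + c.tx_bytes * 8 * tx_energy_nj_per_bit * 1e-6

def pareto_front(cands, bandwidth_mbps, tx_energy_nj_per_bit):
    # Keep candidates that are not dominated in (accuracy, energy, latency).
    objs = [(-c.accuracy,
             total_energy_mj(c, tx_energy_nj_per_bit),
             total_latency_ms(c, bandwidth_mbps)) for c in cands]
    front = []
    for i, oi in enumerate(objs):
        dominated = any(all(a <= b for a, b in zip(oj, oi)) and oj != oi
                        for oj in objs)
        if not dominated:
            front.append(cands[i])
    return front
```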
Abstract: Human Activity Recognition (HAR) is one of the key applications of health monitoring and requires continuous use of wearable devices to track daily activities. State-of-the-art works using wearable devices follow a fog/cloud computing architecture in which the data is classified on mobile phones or remote servers. This kind of approach suffers from energy, latency, and privacy issues. We therefore follow an edge computing architecture in which the wearable-device solution provides adequate performance while being energy- and memory-efficient. This paper proposes an Adaptive CNN for energy-efficient HAR (AHAR) suitable for low-power edge devices. AHAR uses a novel adaptive architecture that selects a portion of the baseline architecture to use during the inference phase. We validate our methodology on classifying locomotion activities from two datasets: Opportunity and w-HAR. Compared to the fog/cloud computing approaches on the Opportunity dataset, our baseline and adaptive architectures show comparable weighted F1 scores of 91.79% and 91.57%, respectively. On the w-HAR dataset, our baseline and adaptive architectures outperform the state-of-the-art works with weighted F1 scores of 97.55% and 97.64%, respectively. Evaluation on real hardware shows that our baseline architecture is significantly more energy-efficient (422.38x less energy) and memory-efficient (14.29x less memory) than the works on the Opportunity dataset. For the w-HAR dataset, our baseline architecture requires 2.04x less energy and 2.18x less memory than the state-of-the-art work. Moreover, experimental results show that our adaptive architecture is 12.32% (Opportunity) and 11.14% (w-HAR) more energy-efficient than our baseline while providing similar (Opportunity) or better (w-HAR) performance with no significant memory overhead.
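One plausible realization of "using only a portion of the baseline network at inference" is an early-exit-style design, sketched below: a cheap head classifies confidently recognized windows after the first block, and only uncertain windows run through the full baseline. This is an illustrative assumption, not the AHAR architecture.

```python
# Early-exit-style sketch (illustrative only, not the AHAR model): skip the
# deeper blocks whenever the cheap early head is already confident.
import torch
import torch.nn as nn

class AdaptiveHARNet(nn.Module):
    def __init__(self, n_channels=6, n_classes=5, threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv1d(n_channels, 16, 5, padding=2),
                                    nn.ReLU(), nn.AdaptiveAvgPool1d(8))
        self.early_head = nn.Linear(16 * 8, n_classes)   # cheap exit
        self.block2 = nn.Sequential(nn.Conv1d(16, 32, 5, padding=2),
                                    nn.ReLU(), nn.AdaptiveAvgPool1d(8))
        self.full_head = nn.Linear(32 * 8, n_classes)    # full baseline path
        self.threshold = threshold

    def forward(self, x):               # x: (batch=1, channels, time)
        h1 = self.block1(x)
        early = self.early_head(h1.flatten(1)).softmax(dim=-1)
        # Confidence check assumes a batch size of 1 (per-window inference).
        if early.max() >= self.threshold:
            return early                # confident: stop early, save energy
        h2 = self.block2(h1)
        return self.full_head(h2.flatten(1)).softmax(dim=-1)

# Usage: per-window inference on a 6-channel sensor segment.
# probs = AdaptiveHARNet()(torch.randn(1, 6, 128))
```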