Abstract: In this work, we propose a time-varying wave-shape extraction algorithm based on a modified version of the adaptive non-harmonic model for non-stationary signals. The model encodes the time-varying wave-shape information in the relative amplitude and phase of the harmonic components of the wave-shape. The algorithm was validated on both real and synthetic signals for the tasks of denoising, decomposition, and adaptive segmentation. For the denoising task, both monocomponent and multicomponent synthetic signals were considered. In both cases, the proposed algorithm accurately recovers the time-varying wave-shape of non-stationary signals, even in the presence of high levels of noise, outperforming existing wave-shape estimation algorithms and denoising methods based on short-time Fourier transform thresholding. The denoising of an electroencephalogram signal was also performed, with similar results. For the decomposition task, the proposed method recovered the constituent waveforms more accurately than existing methods by taking into account the time variations of the harmonic amplitude functions. Finally, the algorithm was used for the adaptive segmentation of synthetic signals and of an electrocardiogram of a patient undergoing ventricular fibrillation.
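For reference, a typical adaptive non-harmonic model with a time-varying wave-shape can be written as in the sketch below; the notation is illustrative and is not necessarily the exact model proposed in the paper.

```latex
% Hedged sketch: one common form of an adaptive non-harmonic model with
% time-varying wave-shape. The symbols are illustrative assumptions.
f(t) \;=\; \sum_{l=1}^{K} B_l(t)\,\cos\!\bigl(2\pi\, l\, \phi(t) + \eta_l(t)\bigr) \;+\; \varepsilon(t)
% B_1(t): instantaneous amplitude of the fundamental; \phi'(t): instantaneous frequency;
% B_l(t)/B_1(t) and \eta_l(t): relative amplitude and phase of the l-th harmonic,
% which together encode the time-varying wave-shape; \varepsilon(t): additive noise.
```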
Abstract: Sleep disorders are widespread in the world population and remain largely underdiagnosed, given the complexity of their diagnostic methods. Therefore, there is growing interest in developing simpler screening methods. A pulse oximeter is an ideal device for sleep disorder screening since it is a portable, low-cost, and accessible technology. This device can provide an estimation of the heart rate (HR), which can be useful for obtaining information about the sleep stage. In this work, we developed a network architecture to classify the sleep stage as awake or asleep using only HR signals from a pulse oximeter. The proposed architecture has two fundamental parts. The first part obtains a representation of the HR using temporal convolutional networks. The obtained representation then feeds the second part, which is based on transformers, a model built solely on attention mechanisms. Transformers are able to model the sequence, learning the transition rules between sleep stages. The performance of the proposed method was evaluated on the Sleep Heart Health Study dataset, composed of 5000 healthy and pathological subjects. The dataset was split into three subsets: 2500 subjects for training, 1250 for validation, and 1250 for testing. The overall accuracy, specificity, sensitivity, and Cohen's Kappa coefficient were 90.0%, 94.9%, 78.1%, and 0.73, respectively.
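As a rough illustration of the described two-part architecture, the following is a minimal PyTorch sketch (not the authors' code): a small TCN embeds the HR sequence, a transformer encoder models the sequence of epochs, and a linear head produces the awake/asleep decision for each epoch. All layer sizes, kernel widths, and dilations are assumptions.

```python
# Hedged sketch of a TCN front-end followed by a transformer encoder for
# awake/asleep classification from HR. Hyperparameters are illustrative.
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """Dilated causal 1-D convolution block with a residual connection."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation            # padding that allows causal trimming
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                             # x: (batch, channels, time)
        out = self.conv(x)[..., :x.size(-1)]          # keep first samples -> causal output
        return self.relu(out + x)                     # residual connection

class SleepStager(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Conv1d(1, d_model, kernel_size=1)     # lift HR to d_model channels
        self.tcn = nn.Sequential(*[TCNBlock(d_model, dilation=2 ** i) for i in range(3)])
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)             # awake/asleep logits per epoch

    def forward(self, hr):                            # hr: (batch, epochs, 1)
        z = self.tcn(self.embed(hr.transpose(1, 2)))  # TCN representation of the HR signal
        z = self.transformer(z.transpose(1, 2))       # attention models stage transitions
        return self.head(z)                           # (batch, epochs, n_classes)

logits = SleepStager()(torch.randn(8, 120, 1))        # e.g. 8 recordings x 120 epochs
```

In a sketch like this, the dilated causal convolutions give the TCN a growing receptive field over the HR signal, while self-attention lets every epoch attend to the rest of the night when learning stage-transition rules.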
Abstract: The regulation of the autonomic nervous system changes with the sleep stages, causing variations in the physiological variables. We exploit these changes to classify the sleep stage as awake or asleep using pulse oximeter signals. We applied a recurrent neural network to heart rate and peripheral oxygen saturation signals to classify the sleep stage every 30 seconds. The network architecture consists of two stacked layers of bidirectional gated recurrent units (GRUs) and a softmax layer to classify the output. In this paper, we used 5000 patients from the Sleep Heart Health Study dataset: 2500 patients were used to train the network, and two subsets of 1250 were used to validate and test the trained models. In the test stage, the best result obtained was 90.13% accuracy, 94.13% sensitivity, 80.26% specificity, 92.05% precision, and 84.68% negative predictive value. Furthermore, Cohen's Kappa coefficient was 0.74, and the average absolute percentage error with respect to the actual sleep time was 8.9%. The performance of the proposed network is comparable to that of state-of-the-art algorithms that use much more informative signals (except those based on EEG).
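The described recurrent architecture could look roughly like the following minimal PyTorch sketch (not the authors' code); the hidden size and the per-epoch feature layout for HR and SpO2 are assumptions.

```python
# Hedged sketch: two stacked bidirectional GRU layers over per-epoch HR and SpO2
# features, with a softmax output for every 30-s epoch. Sizes are illustrative.
import torch
import torch.nn as nn

class GRUSleepStager(nn.Module):
    def __init__(self, n_features=2, hidden=64, n_classes=2):
        super().__init__()
        # num_layers=2 stacks two bidirectional GRU layers
        self.gru = nn.GRU(n_features, hidden, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_classes)   # 2*hidden: forward + backward states

    def forward(self, x):                  # x: (batch, epochs, 2) -> HR and SpO2 per epoch
        h, _ = self.gru(x)                 # (batch, epochs, 2*hidden)
        return torch.softmax(self.out(h), dim=-1)     # awake/asleep probabilities per epoch

probs = GRUSleepStager()(torch.randn(4, 960, 2))      # e.g. 4 nights x 960 epochs (8 h)
```

The bidirectional layers let the per-epoch decision draw on both past and future context of the night, which matches the sequence-level nature of sleep staging described in the abstract.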