Abstract: Deep Learning (DL) methods have been used for electrocardiogram (ECG) processing in a wide variety of tasks, demonstrating good performance compared with traditional signal processing algorithms. These methods offer an efficient framework with a limited need for a priori data pre-processing and feature engineering. While several studies use this approach for ECG signal delineation, a significant gap persists between the expected and the actual outcome. Existing methods rely on a sample-to-sample classifier, whereas the clinically expected outcome consists of a set of onset, offset, and peak locations for the different waves that compose each R-R interval. To align the actual output with the expected one, post-processing algorithms must be incorporated. This counteracts two of the main advantages of DL models, since these algorithms rely on assumptions and slow down the method's inference. In this paper, we present Keypoint Estimation for Electrocardiogram Delineation (KEED), a novel DL model designed for keypoint estimation, which organically offers an output aligned with clinical expectations. By moving away from the conventional sample-to-sample classifier, we achieve two benefits: (i) we eliminate the need for additional post-processing, and (ii) we establish a flexible framework that allows adjusting the threshold value according to the sensitivity-specificity tradeoff imposed by the particular clinical requirements. The proposed method's performance is compared with state-of-the-art (SOTA) signal processing methods. Remarkably, KEED significantly outperforms them despite being optimized on extremely limited annotated data. In addition, KEED decreases the inference time by a factor ranging from 52x to 703x.
Abstract: Despite recent advancements in Self-Supervised Learning (SSL) for time series analysis, a noticeable gap persists between the anticipated achievements and actual performance. While these methods have demonstrated formidable generalization capabilities with minimal labels in various domains, their effectiveness in distinguishing between different classes based on a limited number of annotated records is notably lacking. We attribute this bottleneck to the prevalent use of Contrastive Learning, a training objective shared by previous state-of-the-art (SOTA) methods. By mandating distinctiveness between representations of negative pairs drawn from separate records, this approach compels the model to encode unique record-based patterns but simultaneously neglects changes occurring across the record. To overcome this challenge, we introduce Distilled Embedding for Almost-Periodic Time Series (DEAPS), a non-contrastive method tailored for quasiperiodic time series such as electrocardiogram (ECG) data. By avoiding negative pairs, we not only mitigate the model's blindness to temporal changes but also enable the integration of a "Gradual Loss (Lgra)" function, which guides the model to effectively capture dynamic patterns evolving throughout the record. The outcomes are promising: DEAPS demonstrates a notable improvement of +10% over existing SOTA methods when just a few annotated records are available to fit a Machine Learning (ML) model on the learned representations.
Abstract: By identifying similarities between successive inputs, Self-Supervised Learning (SSL) methods for time series analysis have demonstrated their effectiveness in encoding the inherent static characteristics of temporal data. However, an exclusive emphasis on similarities can yield representations that overlook the dynamic attributes critical for modeling cardiovascular diseases within a confined subject cohort. This paper introduces Distilled Encoding Beyond Similarities (DEBS), an SSL approach that transcends mere similarities by integrating dissimilarities among positive pairs. The framework is applied to electrocardiogram (ECG) signals, leading to a notable improvement of +10% in the detection accuracy of Atrial Fibrillation (AFib) across diverse subjects. DEBS underscores the potential of attaining a more refined representation by encoding the dynamic characteristics of time series data and tapping into dissimilarities during the optimization process. Broadly, the strategy delineated in this study holds promise for opening novel avenues to advance SSL methodologies tailored to temporal data.
Abstract: Extracting information from the electrocardiogram (ECG) signal is an essential step in the design of digital health technologies in cardiology. In recent years, several machine learning (ML) algorithms for the automatic extraction of information from the ECG have been proposed. Supervised learning methods have successfully been used to identify specific aspects of the signal, such as the detection of rhythm disorders (arrhythmias). Self-supervised learning (SSL) methods, on the other hand, can be used to extract all the features contained in the data: the model is optimized without any specific goal and learns from the data itself. By adapting state-of-the-art computer vision methodologies to the signal processing domain, a few SSL approaches for ECG processing have been reported recently. However, such SSL methods require either data augmentation or negative pairs, which limits them to looking for similarities between two ECG inputs: either two versions of the same signal or two signals from the same subject. This leads to models that are very effective at extracting characteristics that are stable within a subject, such as gender or age, but unsuccessful at capturing changes within the ECG recording that can explain dynamic aspects, such as different arrhythmias or different sleep stages. In this work, we introduce the first SSL method that uses neither data augmentation nor negative pairs for understanding ECG signals and still achieves representations of comparable quality. As a result, it is possible to design an SSL method that captures not only similarities between two inputs but also dissimilarities, for a complete understanding of the data. In addition, a model based on transformer blocks is presented, which produces better results than a model based on convolutional layers (XResNet50) with almost the same number of parameters.