Ryerson University, Toronto, Canada
Abstract: Emotion recognition has significant potential in healthcare and affect-sensitive systems such as brain-computer interfaces (BCIs). However, challenges such as the high cost of labeled data and variability in electroencephalogram (EEG) signals across individuals limit the applicability of EEG-based emotion recognition models across domains. These challenges are exacerbated in cross-dataset scenarios due to differences in subject demographics, recording devices, and presented stimuli. To address these issues, we propose a novel approach to improve cross-domain EEG-based emotion classification. Our method, Gradual Proximity-guided Target Data Selection (GPTDS), incrementally selects reliable target-domain samples for training. By evaluating their proximity to source clusters and the model's confidence in predicting them, GPTDS minimizes negative transfer caused by noisy and diverse samples. Additionally, we introduce Prediction Confidence-aware Test-Time Augmentation (PC-TTA), a cost-effective augmentation technique. Unlike traditional TTA methods, which are computationally intensive, PC-TTA activates only when model confidence is low, improving inference performance while drastically reducing computational costs. Experiments on the DEAP and SEED datasets validate the effectiveness of our approach. When trained on DEAP and tested on SEED, our model achieves 67.44% accuracy, a 7.09% improvement over the baseline. Conversely, training on SEED and testing on DEAP yields 59.68% accuracy, a 6.07% improvement. Furthermore, PC-TTA reduces computational time by a factor of 15 compared to traditional TTA methods. Our method excels in detecting both positive and negative emotions, demonstrating its practical utility in healthcare applications. Code available at: https://github.com/RyersonMultimediaLab/EmotionRecognitionUDA
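A minimal sketch of the PC-TTA gating idea described above, assuming a model that returns class probabilities for a single sample; the confidence threshold and augmentation set are illustrative, not the paper's values:

```python
import numpy as np

def pc_tta_predict(model, x, augmentations, conf_threshold=0.9):
    # Plain inference first; `model` is assumed to return a probability
    # vector over classes for one sample.
    probs = np.asarray(model(x))
    if probs.max() >= conf_threshold:
        # Confident prediction: skip augmentation entirely (the cost saving).
        return int(np.argmax(probs))
    # Low confidence: average probabilities over augmented copies of x.
    all_probs = [probs] + [np.asarray(model(aug(x))) for aug in augmentations]
    return int(np.argmax(np.mean(all_probs, axis=0)))
```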
Abstract: Our work tackles the fundamental challenge of image segmentation in computer vision, which is crucial for diverse applications. While supervised methods demonstrate proficiency, their reliance on extensive pixel-level annotations limits scalability. In response to this challenge, we present an enhanced unsupervised Convolutional Neural Network (CNN)-based algorithm called DynaSeg. Unlike traditional approaches that rely on a fixed weight factor to balance feature similarity and spatial continuity, requiring manual adjustments, our novel dynamic weighting scheme automates parameter tuning, adapting flexibly to image details. We also introduce the novel concept of a Silhouette Score Phase, which addresses the challenge of dynamic clustering during iterations. Additionally, our methodology integrates both CNN-based and pre-trained ResNet feature extraction, offering a comprehensive and adaptable approach. We achieve state-of-the-art results on diverse datasets, with notable mIoU improvements of 12.2% and 14.12% over the current benchmarks on COCO-All and COCO-Stuff, respectively. The proposed approach unlocks the potential of unsupervised image segmentation and addresses scalability concerns in real-world scenarios by obviating the need for meticulous parameter tuning.
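As a rough illustration of a silhouette-driven clustering step (our reading of the Silhouette Score Phase; the candidate cluster counts and the use of k-means are assumptions, not the paper's exact procedure):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def select_cluster_count(features, candidate_ks=(2, 4, 8, 16), seed=0):
    # Pick the clustering granularity that maximizes the silhouette score,
    # trading intra-cluster cohesion against inter-cluster separation.
    best_k, best_score = None, -1.0
    for k in candidate_ks:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(features)
        score = silhouette_score(features, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```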
Abstract: Image segmentation is the foundation of several computer vision tasks, where pixel-wise knowledge is a prerequisite for achieving the desired target. Deep learning has shown promising performance in supervised image segmentation. However, supervised segmentation algorithms require a massive amount of data annotated at the pixel level, limiting their applicability and scalability. There is therefore a need to invest in unsupervised learning for segmentation. This work presents an improved version of an unsupervised Convolutional Neural Network (CNN)-based algorithm. The original algorithm uses a constant weight factor to balance the segmentation criteria of feature similarity and spatial continuity, and it requires continuous manual adjustment of this parameter depending on the degree of detail in the image and the dataset. In contrast, we propose a novel dynamic weighting scheme that updates the parameters flexibly and automatically tunes the balancing weight between the two criteria to bring out the details in the images in a genuinely unsupervised manner. We present quantitative and qualitative results on four datasets, which show that the proposed scheme outperforms current unsupervised segmentation approaches without requiring manual adjustment.
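To make the balancing concrete, here is a sketch of the fixed-weight objective that such unsupervised CNN segmentation baselines use (feature similarity plus a weighted spatial-continuity term), together with a hypothetical dynamic rule for the weight; the update rule is an illustration under our assumptions, not the paper's exact scheme:

```python
import torch
import torch.nn.functional as F

def unsupervised_seg_loss(response_map, mu):
    # response_map: (1, C, H, W) per-pixel class responses from the CNN.
    target = response_map.argmax(dim=1)                  # per-pixel pseudo-labels
    similarity = F.cross_entropy(response_map, target)   # feature similarity term
    dh = (response_map[:, :, 1:, :] - response_map[:, :, :-1, :]).abs().mean()
    dw = (response_map[:, :, :, 1:] - response_map[:, :, :, :-1]).abs().mean()
    continuity = dh + dw                                 # spatial continuity term
    return similarity + mu * continuity

def dynamic_mu(n_clusters, target_clusters=8, base_mu=5.0):
    # Hypothetical dynamic rule: strengthen the continuity term while the
    # segmentation is over-fragmented, relax it as clusters merge.
    return base_mu * n_clusters / target_clusters
```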
Abstract: The classification of electrocardiogram (ECG) signals plays a crucial role in the development of automatic cardiovascular diagnostic systems. However, considerable variance in ECG signals between individuals is a significant challenge: changes in data distribution limit the cross-domain use of a model. In this study, we propose a solution for classifying ECG signals in an unlabeled dataset by leveraging knowledge obtained from a labeled source domain. We present a domain-adaptive deep network based on cross-domain feature discrepancy optimization. Our method comprises three stages: pre-training, cluster-centroid computing, and adaptation. In pre-training, we employ a Distributionally Robust Optimization (DRO) technique to deal with the vanishing worst-case training loss. To enhance the richness of the features, we concatenate three temporal features with the deep learning features. The cluster-centroid computing stage computes centroids of distinctly separable clusters for the source domain using true labels and for the target domain using confident predictions; we propose a novel technique to select confident predictions in the target domain. In the adaptation stage, we minimize a compacting loss within the same cluster, a separating loss across different clusters, an inter-domain cluster discrepancy loss, and a running combined loss to produce a domain-robust model. Experiments conducted in both cross-domain and cross-channel paradigms show the efficacy of the proposed method. Our method achieves superior performance compared to other state-of-the-art approaches in detecting ventricular ectopic beats (V), supraventricular ectopic beats (S), and fusion beats (F), with an average improvement of 11.78% in overall accuracy over the non-domain-adaptive baseline on the three test datasets.
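A minimal sketch of the confident-prediction selection and target centroid computation, assuming softmax outputs are available; the fixed threshold is an illustrative stand-in for the paper's selection technique:

```python
import numpy as np

def confident_target_centroids(features, probs, threshold=0.95):
    # Keep target samples whose top softmax probability clears the threshold,
    # then average their features per pseudo-label to form cluster centroids.
    preds = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    centroids = {}
    for c in np.unique(preds[confident]):
        mask = confident & (preds == c)
        centroids[int(c)] = features[mask].mean(axis=0)
    return centroids
```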
Abstract: The presence of domain shift in medical imaging is a common issue that can greatly impact the performance of segmentation models when dealing with unseen image domains. Adversarial deep learning models such as Cycle-GAN have become a common choice for unsupervised domain adaptation of medical images. These models, however, have no ability to enforce the preservation of structures of interest when translating medical scans, which can lead to poor results for unsupervised domain adaptation in the context of segmentation. This work introduces the Structure Preserving Cycle-GAN (SP Cycle-GAN), which promotes medical structure preservation during image translation by enforcing a segmentation loss term in the overall Cycle-GAN training process. We demonstrate the structure-preserving capability of the SP Cycle-GAN both visually and through comparison of Dice score segmentation performance for the unsupervised domain adaptation models. The SP Cycle-GAN outperforms baseline approaches and standard Cycle-GAN domain adaptation for binary blood vessel segmentation in the STARE and DRIVE datasets, and for multi-class Left Ventricle and Myocardium segmentation in the multi-modal MM-WHS dataset. SP Cycle-GAN achieved a state-of-the-art Myocardium segmentation Dice score (DSC) of 0.7435 for the MR-to-CT MM-WHS domain adaptation problem and excelled in nearly all categories for the MM-WHS dataset. SP Cycle-GAN also demonstrated a strong ability to preserve blood vessel structure in the DRIVE-to-STARE domain adaptation problem, achieving a 4% DSC increase over a default Cycle-GAN implementation.
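The core idea reduces to one extra term in the generator objective. A hedged sketch, with illustrative loss weights rather than the paper's values:

```python
import torch.nn.functional as F

def sp_generator_loss(adv_loss, cycle_loss, seg_logits, structure_mask,
                      lambda_cycle=10.0, lambda_seg=1.0):
    # Standard CycleGAN generator terms plus a segmentation loss computed on
    # the translated image, penalizing translations that destroy the
    # structures of interest (e.g. vessels, myocardium).
    # seg_logits: (B, C, H, W) segmenter output on the translated image;
    # structure_mask: (B, H, W) integer ground-truth labels.
    seg_loss = F.cross_entropy(seg_logits, structure_mask)
    return adv_loss + lambda_cycle * cycle_loss + lambda_seg * seg_loss
```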
Abstract: Accurate measurement of left ventricular ejection fraction (LVEF) is important in clinical practice, as it identifies patients who may need life-prolonging treatments. This paper presents a deep learning based framework to automatically estimate LVEF from an entire 4-chamber apical echocardiogram video. The aim of the proposed framework is to provide an interpretable and computationally efficient ejection fraction prediction pipeline. A lightweight Mobile U-Net based network is developed to segment the left ventricle in each frame of an echocardiogram video. An unsupervised LVEF estimation algorithm is implemented based on Simpson's mono-plane method. Experimental results on a large public dataset show that our proposed approach achieves accuracy comparable to the state of the art while being significantly more space and time efficient (with 5 times fewer parameters and 10 times fewer FLOPs).
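For reference, Simpson's mono-plane method of disks, on which the unsupervised estimation step is based, can be written in a few lines (disk count and units here are illustrative):

```python
import numpy as np

def monoplane_volume(diameters_cm, long_axis_cm):
    # Method of disks: slice the left ventricle along its long axis into
    # equal-height disks; each disk is approximated by a cylinder with the
    # diameter measured from the segmentation mask.
    d = np.asarray(diameters_cm, dtype=float)
    h = long_axis_cm / len(d)
    return float(np.sum(np.pi * (d / 2.0) ** 2 * h))

def ejection_fraction(edv_ml, esv_ml):
    # LVEF (%) = (end-diastolic volume - end-systolic volume) / EDV * 100.
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```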
Abstract: Annually, 8,500 neonatal deaths due to respiratory failure are reported in the US. Recently, lung ultrasound (LUS), being radiation-free, portable, and inexpensive, has been gaining wide acceptance as a diagnostic tool for lung conditions. However, a lack of highly trained medical professionals has limited its use, especially in remote areas. To address this, an automated screening system that captures characteristics of LUS patterns can be of significant assistance to clinicians who are not experts in LUS images. In this paper, we propose a feature extraction method designed to quantify the spatially-localized line patterns and texture patterns found in LUS images. Using the dual-tree complex wavelet transform (DTCWT) and four types of common image features, we propose a method to classify LUS images into 6 common neonatal lung conditions: normal lung, pneumothorax (PTX), transient tachypnea of the newborn (TTN), respiratory distress syndrome (RDS), chronic lung disease (CLD), and consolidation (CON), which could be pneumonia or atelectasis. The proposed method extracts global statistical, grey-level co-occurrence matrix (GLCM), grey-level run length matrix (GLRLM), and local binary pattern (LBP) features from the DTCWT decomposition and feeds them to a linear discriminant analysis (LDA) based classifier. Using the 15 best DTCWT features along with 3 clinical features, the proposed approach achieved a per-image classification accuracy of 92.78% on a balanced dataset containing 720 images from 24 patients and 74.39% on a larger unbalanced dataset containing 1550 images from 42 patients. Likewise, the proposed method achieved a maximum per-subject classification accuracy of 81.53% with 43 DTCWT features and 3 clinical features on the balanced dataset, and 64.97% with 13 DTCWT features and 3 clinical features on the unbalanced dataset.
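A compressed sketch of the feature pipeline using the open-source dtcwt and scikit-image packages; the statistics computed here are illustrative, not the paper's exact 15-feature set:

```python
import numpy as np
import dtcwt                                        # pip install dtcwt
from skimage.feature import local_binary_pattern

def lus_features(image, nlevels=3):
    # image: 2D grayscale (uint8) LUS frame.
    # DTCWT decomposition: one complex subband array per level, with 6
    # orientations whose magnitudes capture spatially-localized line patterns.
    pyramid = dtcwt.Transform2d().forward(image.astype(float), nlevels=nlevels)
    stats = []
    for highpass in pyramid.highpasses:
        mag = np.abs(highpass)
        stats += [mag.mean(), mag.std()]            # global statistics per level
    # LBP histogram as a simple texture descriptor.
    lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, density=True)
    return np.concatenate([np.array(stats), hist])

# An LDA classifier (sklearn.discriminant_analysis.LinearDiscriminantAnalysis)
# is then fit on the resulting feature matrix.
```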
Abstract: R-peak detection is crucial in electrocardiogram (ECG) signal processing, as it is the basis of heart rate variability analysis. The Pan-Tompkins algorithm is the most widely used QRS complex detector for the monitoring of many cardiac diseases, including arrhythmia detection. However, the performance of the Pan-Tompkins algorithm in detecting QRS complexes degrades in low-quality and noisy signals. This article introduces Pan-Tompkins++, an improved Pan-Tompkins algorithm. A bandpass filter with a passband of 5--18 Hz, followed by an N-point moving average filter, is applied to remove noise without discarding significant signal components. Pan-Tompkins++ uses three thresholds to distinguish between R-peaks and noise peaks. Rather than using a generalized equation, different rules are applied to adjust the thresholds based on the pattern of the signal, allowing accurate detection of R-peaks under significant changes in signal pattern. The proposed algorithm reduces false positive (FP) and false negative (FN) detections, and hence improves the robustness and performance of the Pan-Tompkins algorithm. Pan-Tompkins++ has been tested on four open-source datasets. The experimental results show noticeable improvement in both R-peak detection and execution time: on average across the four datasets, we achieve 2.8% and 1.8% reductions in FP and FN, respectively, and a 2.2% increase in F-score, with a 33% reduction in execution time. We show specific examples demonstrating that in situations where the Pan-Tompkins algorithm fails to identify R-peaks, the proposed algorithm is effective. The results have also been contrasted with other well-known R-peak detection algorithms.
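The front end described above is straightforward to sketch with SciPy; the filter order and the moving-average length N are assumptions, not the paper's values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(ecg, fs, n_points=5):
    # 5-18 Hz band-pass to isolate QRS energy without discarding significant
    # signal components, then an N-point moving average to smooth residual noise.
    b, a = butter(3, [5.0, 18.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, ecg)
    return np.convolve(filtered, np.ones(n_points) / n_points, mode="same")
```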
Abstract: Physiological signals are the most reliable form of signals for emotion recognition, as they cannot be controlled deliberately by the subject. Existing review papers on emotion recognition based on physiological signals have surveyed only the regular steps involved in the emotion recognition workflow, such as preprocessing, feature extraction, and classification. While these steps are important, they are required for any signal processing application; emotion recognition poses its own set of challenges that must be addressed for a robust system. To bridge this gap in the existing literature, in this paper we review the effect of inter-subject data variance on emotion recognition; important data annotation techniques for emotion recognition and their comparison; data preprocessing techniques for each physiological signal; data splitting techniques for improving the generalization of emotion recognition models; and different multimodal fusion techniques and their comparison. Finally, we discuss key challenges and future directions in this field.
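As one concrete example of a data splitting technique that improves generalization under inter-subject variance, a subject-wise split keeps all trials of a subject in the same fold (synthetic data; array names are illustrative):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))             # 100 trials, 8 features (synthetic)
y = rng.integers(0, 2, size=100)          # binary emotion labels (synthetic)
subjects = np.repeat(np.arange(10), 10)   # 10 trials per subject

# Grouping by subject id prevents trials from the same subject leaking across
# the train/test boundary, which would otherwise inflate reported accuracy.
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=subjects):
    assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
```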
Abstract: Domain adaptation is a technique to address the lack of massive amounts of labeled data in unseen environments. Unsupervised domain adaptation adapts a model to new modalities using solely labeled source data and unlabeled target-domain data. Though many image-space domain adaptation methods have been proposed to capture pixel-level domain shift, such techniques may fail to maintain high-level semantic information for the segmentation task. In the case of biomedical images, fine details such as blood vessels can be lost during the image transformation operations between domains. In this work, we propose a model that adapts between domains using a cycle-consistent loss while maintaining edge details of the original images by enforcing an edge-based loss during the adaptation process. We demonstrate the effectiveness of our algorithm by comparing it to other approaches on two eye fundus vessel segmentation datasets. We achieve a 1.1 to 9.2 point increase in Dice score compared to the state of the art (SOTA) and an increase of about 5.2 points compared to a vanilla CycleGAN implementation.
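A hedged sketch of what an edge-based loss term can look like (Sobel edge maps compared with an L1 penalty; the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def edge_loss(original, translated):
    # original, translated: (B, 1, H, W) grayscale image batches.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=original.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)               # Sobel kernels for x and y gradients

    def edges(img):
        gx = F.conv2d(img, kx, padding=1)
        gy = F.conv2d(img, ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    # Penalize differences between edge maps so fine structures such as
    # vessels survive the domain translation.
    return F.l1_loss(edges(original), edges(translated))
```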