Abstract: Automated emotion recognition using electroencephalogram (EEG) signals has gained substantial attention. Although deep learning approaches exhibit strong performance, they are often vulnerable to various perturbations, such as environmental noise and adversarial attacks. In this paper, we propose an Inception feature generator and two-sided perturbation (INC-TSP) approach to enhance emotion recognition in brain-computer interfaces. INC-TSP integrates the Inception module for EEG data analysis and employs two-sided perturbation (TSP) as a defensive mechanism against input perturbations. TSP introduces worst-case perturbations to the model's weights and inputs, reinforcing the model's resilience to adversarial attacks. The proposed approach addresses the challenge of maintaining accurate emotion recognition in the presence of input uncertainties. We validate INC-TSP in a subject-independent three-class emotion recognition scenario, demonstrating robust performance.
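A minimal sketch of what a two-sided (input- and weight-side) worst-case perturbation training step could look like, assuming a PyTorch classifier; the FGSM-style sign updates and the eps_x/eps_w budgets are illustrative choices rather than the exact INC-TSP procedure:

```python
import torch

def tsp_training_step(model, x, y, loss_fn, optimizer, eps_x=0.01, eps_w=0.005):
    # Input-side perturbation: ascend the loss with respect to the input (FGSM-style).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad_x, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + eps_x * grad_x.sign()).detach()

    # Weight-side perturbation: ascend the loss with respect to the parameters.
    loss = loss_fn(model(x_adv), y)
    grads_w = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads_w):
            p.add_(eps_w * g.sign())

    # Outer minimization: backpropagate through the doubly perturbed problem,
    # restore the weights, then apply the optimizer update.
    optimizer.zero_grad()
    loss_fn(model(x_adv), y).backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads_w):
            p.sub_(eps_w * g.sign())
    optimizer.step()
    return loss.item()
```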
Abstract: Although deep learning-based algorithms have demonstrated excellent performance in automated emotion recognition via electroencephalogram (EEG) signals, variations in brain signal patterns across individuals can diminish a model's effectiveness when it is applied to different subjects. While transfer learning techniques have exhibited promising outcomes, they still encounter challenges related to inadequate feature representations and may overlook the fact that the source subjects themselves can possess distinct characteristics. In this work, we propose a multi-source domain adaptation approach with a transformer-based feature generator (MSDA-TF) designed to leverage information from multiple sources. The proposed feature generator retains convolutional layers to capture shallow spatial, temporal, and spectral EEG data representations, while self-attention mechanisms extract global dependencies within these features. During the adaptation process, we group the source subjects based on correlation values and aim to align the moments of the target subject with each source as well as within the sources. MSDA-TF is validated on the SEED dataset and is shown to yield promising results.
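As a rough illustration of the moment-alignment idea, a loss of the following form could penalize the discrepancy between the first- and second-order feature moments of the target subject and each source group, as well as between the groups themselves; the exact matching criterion used by MSDA-TF may differ:

```python
import torch

def moment_alignment_loss(target_feats, source_feats_list):
    # Each tensor is (batch, feature_dim); source subjects are grouped beforehand.
    def moments(f):
        return f.mean(dim=0), f.var(dim=0)

    mu_t, var_t = moments(target_feats)
    mus, variances = [], []
    loss = torch.zeros(())
    for f_s in source_feats_list:
        mu_s, var_s = moments(f_s)
        mus.append(mu_s)
        variances.append(var_s)
        # Target-to-source alignment.
        loss = loss + (mu_t - mu_s).pow(2).mean() + (var_t - var_s).pow(2).mean()
    # Source-to-source alignment.
    for i in range(len(mus)):
        for j in range(i + 1, len(mus)):
            loss = loss + (mus[i] - mus[j]).pow(2).mean() \
                        + (variances[i] - variances[j]).pow(2).mean()
    return loss
```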
Abstract: Recently, physiological data such as electroencephalography (EEG) signals have attracted significant attention in affective computing. In this context, the main goal is to design an automated model that can assess emotional states. Lately, deep neural networks have shown promising performance in emotion recognition tasks. However, designing a deep architecture that can extract practical information from raw data is still a challenge. Here, we introduce a deep neural network that learns interpretable physiological representations through a hybrid structure of spatio-temporal encoding and recurrent attention network blocks. Furthermore, a preprocessing step is applied to the raw data using graph signal processing tools to perform graph smoothing in the spatial domain. We demonstrate that our proposed architecture surpasses state-of-the-art results for emotion classification on the publicly available DEAP dataset. To explore the generality of the learned model, we also evaluate the performance of our architecture in transfer learning (TL) by transferring the model parameters from a specific source to other target domains. Using DEAP as the source dataset, we demonstrate the effectiveness of our model in performing cross-modality TL and improving emotion classification accuracy on the DREAMER and Emotional English Word (EEWD) datasets, which involve EEG-based emotion classification tasks with different stimuli.
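For the graph-smoothing preprocessing step, a generic GSP choice is Tikhonov smoothing with the graph Laplacian of the electrode graph; the sketch below assumes channels are graph nodes and is not necessarily the exact filter used in the paper:

```python
import numpy as np

def graph_smooth(eeg, adjacency, alpha=0.1):
    # eeg: (channels, samples); adjacency: (channels, channels) electrode graph weights.
    # Solves x_smooth = (I + alpha * L)^{-1} x for every time sample.
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    smoother = np.linalg.inv(np.eye(adjacency.shape[0]) + alpha * laplacian)
    return smoother @ eeg
```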
Abstract: Deep unrolling is an emerging deep learning-based image reconstruction methodology that bridges the gap between model-based and purely deep learning-based image reconstruction methods. Although deep unrolling methods achieve state-of-the-art performance for imaging problems and allow the incorporation of the observation model into the reconstruction process, they do not provide any uncertainty information about the reconstructed image, which severely limits their use in practice, especially for safety-critical imaging applications. In this paper, we propose a learning-based image reconstruction framework that incorporates the observation model into the reconstruction task and that is capable of quantifying epistemic and aleatoric uncertainties, based on deep unrolling and Bayesian neural networks. We demonstrate the uncertainty characterization capability of the proposed framework on magnetic resonance imaging and computed tomography reconstruction problems. We investigate the characteristics of the epistemic and aleatoric uncertainty information provided by the proposed framework to motivate future research on utilizing uncertainty information to develop more accurate, robust, trustworthy, uncertainty-aware, learning-based image reconstruction and analysis methods for imaging problems. We show that the proposed framework can provide uncertainty information while achieving comparable reconstruction performance to state-of-the-art deep unrolling methods.
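A minimal sketch of how the two uncertainty types can be read off such a network, assuming a model whose forward pass is stochastic (e.g., Bayesian or dropout layers) and returns a per-pixel mean and predicted variance; this is the standard epistemic/aleatoric decomposition, shown only to illustrate the framework's outputs:

```python
import torch

def predict_with_uncertainty(model, measurements, n_samples=20):
    model.train()  # keep the stochastic (Bayesian/dropout) layers active
    means, variances = [], []
    with torch.no_grad():
        for _ in range(n_samples):
            mu, var = model(measurements)   # assumed (mean image, predicted variance)
            means.append(mu)
            variances.append(var)
    means, variances = torch.stack(means), torch.stack(variances)
    reconstruction = means.mean(dim=0)
    epistemic = means.var(dim=0)       # spread of the mean across weight samples
    aleatoric = variances.mean(dim=0)  # average predicted measurement noise
    return reconstruction, epistemic, aleatoric
```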
Abstract: Ultrasound elasticity images, which enable the visualization of quantitative maps of tissue stiffness, can be reconstructed by solving an inverse problem. Classical model-based approaches for ultrasound elastography use deterministic finite element methods (FEMs) to incorporate the governing physical laws, resulting in poor performance in noisy conditions. Moreover, these approaches utilize fixed regularizers for various tissue patterns, whereas appropriate data-adaptive priors might be required to capture the complex spatial elasticity distribution. In this regard, we propose a joint model-based and learning-based framework for estimating the elasticity distribution by solving a regularized optimization problem. We present an integrated objective function composed of a statistical physics-based forward model and a data-driven regularizer that leverages deep neural networks to learn the underlying elasticity prior. This constrained optimization problem is solved using the gradient descent (GD) method, with the gradient of the regularizer simply replaced by the residual of the trained denoiser network, yielding an explicit objective function with reduced computation time.
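The update described in the last sentence can be sketched as a plain gradient-descent loop in which the regularizer gradient is the denoiser residual; forward_op, adjoint_op, and denoiser are placeholders for the physics model and the trained network, and the step sizes are illustrative:

```python
import numpy as np

def red_gradient_descent(y, forward_op, adjoint_op, denoiser, x0,
                         lam=0.1, step=1e-2, n_iters=200):
    x = x0.copy()
    for _ in range(n_iters):
        data_grad = adjoint_op(forward_op(x) - y)  # gradient of 0.5 * ||A(x) - y||^2
        reg_grad = x - denoiser(x)                 # residual of the trained denoiser
        x = x - step * (data_grad + lam * reg_grad)
    return x
```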
Abstract: Classical model-based imaging methods for the ultrasound elasticity inverse problem require prior constraints on the underlying elasticity patterns, yet finding an appropriate hand-crafted prior for each tissue type is a challenge. In contrast, standard data-driven methods rely solely on supervised learning from training data pairs, which leads to massive networks that must needlessly relearn the physics and may not be consistent with the governing physical models of the imaging system. Fusing the physical forward model and noise statistics with data-adaptive priors leads to a unified reconstruction framework that guarantees the learned reconstruction agrees with the physical models while coping with limited training data. In this paper, we propose a new methodology for estimating the elasticity image by solving a regularized optimization problem that benefits from physics-based modeling via a data-fidelity term and from adversarially learned priors via a regularization term. In this method, the regularizer is trained with the Wasserstein Generative Adversarial Network (WGAN) objective, learning to distinguish the distributions of clean and noisy images. Leveraging such an adversarial regularizer to parameterize the distribution of latent images and using gradient descent (GD) to solve the corresponding regularized optimization task leads to more stable and convergent reconstructions than pixel-wise supervised learning schemes. Our simulation results verify the effectiveness and robustness of the proposed methodology with limited training datasets.
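A hedged sketch of how such an adversarial regularizer could be trained, using a standard WGAN-GP critic loss that separates clean from corrupted elasticity images; the penalty weight and batch shapes are assumptions, and at reconstruction time the trained critic would then serve as the regularization term inside the GD updates:

```python
import torch

def critic_loss(critic, clean_batch, noisy_batch, gp_weight=10.0):
    # Wasserstein term: trained so the critic scores clean images higher than noisy ones.
    loss_w = critic(noisy_batch).mean() - critic(clean_batch).mean()
    # Gradient penalty on random interpolates (standard WGAN-GP stabilizer).
    eps = torch.rand(clean_batch.size(0), 1, 1, 1, device=clean_batch.device)
    interp = (eps * clean_batch + (1 - eps) * noisy_batch).detach().requires_grad_(True)
    grad, = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)
    penalty = ((grad.flatten(1).norm(dim=1) - 1) ** 2).mean()
    return loss_w + gp_weight * penalty
```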
Abstract: An elasticity image, which visualizes the quantitative map of tissue stiffness, can be reconstructed by solving an inverse problem. Classical methods for magnetic resonance elastography (MRE) solve a regularized optimization problem comprising a deterministic physical model and a prior constraint as the data-fidelity term and regularization term, respectively. Improving the elasticity reconstruction requires an appropriate prior on the underlying elasticity distribution, and such a prior is not unique. This article proposes a fused approach for MRE reconstruction that integrates a statistical representation of the physical laws of harmonic motion with a learning-based prior. For the data-fidelity term, we use a statistical linear-algebraic model of the equilibrium equations, and for the regularizer, data-driven regularization by denoising (RED) is utilized. In the proposed optimization paradigm, the regularizer gradient is simply replaced by the residual of the learned denoiser, leading to time-efficient computation and an explicit convex objective function. Simulation results of elasticity reconstruction verify the effectiveness of the proposed approach.
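For reference, the RED-regularized problem sketched here is commonly written as follows, where A is an assumed linear forward operator built from the equilibrium equations, y the measured data, and D the learned denoiser (the paper's statistically weighted data-fidelity term would replace the plain least-squares term):

```latex
\min_{x}\; \tfrac{1}{2}\,\|A x - y\|_{2}^{2} \;+\; \tfrac{\lambda}{2}\, x^{\top}\bigl(x - D(x)\bigr),
\qquad
\nabla_{x} \;\approx\; A^{\top}(A x - y) \;+\; \lambda\bigl(x - D(x)\bigr),
```
so the only learned component entering the gradient is the denoiser residual x - D(x).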
Abstract: Quantitative characterization of tissue properties, known as elasticity imaging, can be cast as solving an ill-posed inverse problem. Finite element methods (FEMs) in magnetic resonance elastography (MRE) imaging are based on solving a constrained optimization problem consisting of a physical forward model and a regularizer, serving as the data-fidelity term and the prior term, respectively. In the existing formulation of the elasticity forward model, the physical laws that arise from the equilibrium equation of harmonic motion impose a deterministic relationship between the MRE-measured data and the unknown elasticity distribution, which leads to poor and unstable elasticity estimates in the presence of noise. To this end, we propose an efficient statistical methodology for refining the physical forward model by formulating it as a linear-algebraic representation with respect to the unknown elasticity distribution and incorporating an analytical noise model. To solve the resulting total-variation-regularized optimization task, we employ a fixed-point scheme based on proximal gradient methods. Simulation results of elasticity reconstruction under various SNR conditions verify the effectiveness of the proposed approach.
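A generic proximal-gradient (fixed-point) iteration for a noise-weighted, total-variation-regularized linear inverse problem of this kind would read, with A, \Sigma, y, \alpha, and \lambda denoting an assumed linear forward operator, noise covariance, measured data, step size, and regularization weight:

```latex
x^{(k+1)} \;=\; \operatorname{prox}_{\alpha\lambda\,\mathrm{TV}}\!\Bigl( x^{(k)} \;-\; \alpha\, A^{\top}\Sigma^{-1}\bigl(A x^{(k)} - y\bigr) \Bigr),
```
which alternates a gradient step on the statistical data-fidelity term with a TV proximal (denoising) step; the paper's exact scheme may differ in its weighting and stopping rules.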
Abstract: Existing physical model-based imaging methods for ultrasound elasticity reconstruction utilize fixed variational regularizers that may not be appropriate for the application of interest or may not capture complex spatial prior information about the underlying tissues. On the other hand, end-to-end learning-based methods rely solely on the training data, not taking advantage of the governing physical laws of the imaging system. Integrating learning-based priors with physical forward models for ultrasound elasticity imaging, we present a joint reconstruction framework which guarantees that learning-driven reconstructions are consistent with the underlying physics. To solve the elasticity inverse problem as a regularized optimization problem, we propose a plug-and-play (PnP) reconstruction approach in which each iteration of the elasticity image estimation process involves separate updates incorporating data fidelity and learning-based regularization. In this methodology, the data-fidelity term is developed using a statistical linear-algebraic model of the quasi-static equilibrium equation that reveals the relationship of the observed displacement fields to the unobserved elastic modulus. The regularizer comprises a convolutional neural network (CNN) based denoiser that captures the learned prior structure of the underlying tissues. Preliminary simulation results demonstrate the robustness and effectiveness of the proposed approach with limited training datasets and noisy displacement measurements.
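A minimal PnP sketch of the alternating updates described above, with forward_op/adjoint_op standing in for the statistical linear-algebraic displacement model and cnn_denoiser for the trained prior; the step size and iteration count are illustrative:

```python
import numpy as np

def pnp_reconstruction(y, forward_op, adjoint_op, cnn_denoiser, x0,
                       step=1e-2, n_iters=100):
    x = x0.copy()
    for _ in range(n_iters):
        x = x - step * adjoint_op(forward_op(x) - y)  # data-fidelity gradient step
        x = cnn_denoiser(x)                           # learned-prior (denoising) step
    return x
```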
Abstract: The growing success of graph signal processing (GSP) approaches relies heavily on prior identification of a graph over which network data admit certain regularity. However, adaptation to increasingly dynamic environments as well as demands for real-time processing of streaming data pose major challenges to this end. In this context, we develop novel algorithms for online network topology inference given streaming observations assumed to be smooth on the sought graph. Unlike existing batch algorithms, our goal is to track the (possibly) time-varying network topology while keeping the memory and computational costs in check by processing graph signals sequentially in time. To recover the graph in an online fashion, we leverage proximal gradient (PG) methods to solve a judicious smoothness-regularized, time-varying optimization problem. Under mild technical conditions, we establish that the online graph learning algorithm converges to within a neighborhood of (i.e., it tracks) the optimal time-varying batch solution. Computer simulations using both synthetic and real financial market data illustrate the effectiveness of the proposed algorithm in adapting to streaming signals and tracking slowly varying network connectivity.
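In generic terms, one PG step is taken per incoming observation: with X_t collecting the graph signals seen up to time t, L(W) the Laplacian of the candidate graph, f_t the smoothness term, g gathering the structural penalties (nonnegativity, sparsity, degree constraints), and \mu a step size, the online update can be summarized as

```latex
f_t(W) \;=\; \operatorname{tr}\!\bigl( X_t^{\top} L(W)\, X_t \bigr),
\qquad
W_{t+1} \;=\; \operatorname{prox}_{\mu g}\!\bigl( W_t - \mu\, \nabla f_t(W_t) \bigr),
```
where the specific penalties and the weighting of past observations are choices made in the paper and are not reproduced here.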