Abstract: Cardiovascular diseases (CVD) are the leading cause of death worldwide, with coronary artery disease (CAD) comprising the largest subcategory of CVDs. Recently, there has been increased focus on detecting CAD from phonocardiogram (PCG) signals, with high success in clinical environments offering low noise and optimal sensor placement. Multichannel techniques have been found to be more robust to noise; however, achieving robust performance on real-world data remains a challenge. This work applies a novel multichannel energy-based noisy-segment rejection algorithm, using heart and noise-reference microphones, to discard audio segments containing large amounts of nonstationary noise before training a deep learning classifier. The conformer-based classifier takes mel-frequency cepstral coefficients (MFCCs) from multiple channels, further improving the model's noise robustness. The proposed method achieved 78.4% accuracy and 78.2% balanced accuracy on 297 subjects, improvements of 4.1% and 4.3%, respectively, over training without noisy-segment rejection.
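The energy-based rejection idea can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the segment length, the simple energy-ratio rule, and the `ratio_thresh` parameter are all assumptions chosen for clarity. The premise is that the noise-reference microphone picks up ambient noise but little heart sound, so segments where its energy rivals the heart channel's are likely contaminated.

```python
import numpy as np

def reject_noisy_segments(heart, noise_ref, fs, seg_dur=2.0, ratio_thresh=1.0):
    """Discard segments whose noise-reference energy rivals the heart-channel
    energy. Hypothetical threshold rule; not the paper's exact criterion."""
    seg_len = int(seg_dur * fs)
    kept = []
    for start in range(0, len(heart) - seg_len + 1, seg_len):
        h = heart[start:start + seg_len]
        n = noise_ref[start:start + seg_len]
        e_heart = np.sum(h ** 2)   # segment energy, heart microphone
        e_noise = np.sum(n ** 2)   # segment energy, noise-reference microphone
        # Keep only segments where the reference channel is comparatively quiet
        if e_noise < ratio_thresh * e_heart:
            kept.append(h)
    return kept

# Toy example: 8 s recording, a loud nonstationary burst in the second half
fs = 1000
heart = 0.5 * np.ones(8 * fs)
noise_ref = np.concatenate([0.01 * np.ones(4 * fs), np.ones(4 * fs)])
clean = reject_noisy_segments(heart, noise_ref, fs)  # keeps the first two segments
```

In a real pipeline the surviving segments would then be converted to per-channel MFCCs before being fed to the conformer classifier.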




Abstract: Accurately interpreting cardiac auscultation signals plays a crucial role in diagnosing and managing cardiovascular diseases. However, the paucity of labelled data hinders the training of classification models. To overcome this challenge, researchers have turned to generative deep learning techniques combined with signal processing to augment existing data and improve cardiac auscultation classification models. However, prior studies have focused primarily on model performance rather than model robustness. Robustness, in this case, is defined as both in-distribution and out-of-distribution performance, measured by metrics such as the Matthews correlation coefficient. This work shows that more robust abnormal heart sound classifiers can be trained using an augmented dataset. The augmentations comprise traditional audio approaches and synthetic audio conditionally generated with the WaveGrad and DiffWave diffusion models. Training a convolutional neural network-based classification model on this augmented dataset improves both in-distribution and out-of-distribution performance across various datasets. With the performance increase encompassing not only accuracy but also balanced accuracy and the Matthews correlation coefficient, an augmented dataset significantly helps resolve the issues of imbalanced datasets and, in turn, yields a more general and robust classifier.
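The "traditional audio approaches" mentioned above typically include transformations such as additive noise and time shifting. The sketch below shows two such augmentations; the specific parameters (target SNR, shift fraction) are illustrative assumptions, and the diffusion-based generation with WaveGrad/DiffWave is a separate, model-driven step not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x, snr_db):
    """Additive white Gaussian noise at a target SNR, a common traditional augmentation."""
    sig_power = np.mean(x ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)

def time_shift(x, max_frac=0.1):
    """Circularly shift the waveform by up to max_frac of its length."""
    limit = int(max_frac * len(x))
    shift = int(rng.integers(-limit, limit + 1))
    return np.roll(x, shift)

# Toy heart-sound stand-in: a 5 Hz sinusoid, 2000 samples
x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 2000, endpoint=False))
augmented = [add_noise(x, snr_db=20), time_shift(x)]
```

Each augmented waveform keeps the original label, so the minority (abnormal) class can be expanded to counter dataset imbalance.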




Abstract: The leading cause of mortality and morbidity worldwide is cardiovascular disease (CVD), with coronary artery disease (CAD) being its largest sub-category. Unfortunately, myocardial infarction or stroke can manifest as the first symptom of CAD, underscoring the crucial importance of early disease detection. Hence, there is a global need for a cost-effective, non-invasive, reliable, and easy-to-use system to pre-screen for CAD. Previous studies have explored classifying the weak murmurs arising from CAD using phonocardiogram (PCG) signals. However, these studies often involve tedious and inconvenient data collection methods, requiring precise subject preparation and controlled environmental conditions. This study proposes a novel data acquisition system (DAQS) designed for simplicity and convenience, incorporating multi-channel PCG sensors into a wearable vest. The entire signal acquisition process, from fitting the vest to recording signals and removing it, can be completed in under two minutes and requires no specialist training. This demonstrates the potential for mass screening, which is impractical with current state-of-the-art protocols. Seven PCG signals are acquired, six from the chest and, in a novel approach, one from the subject's back. Our classification approach, which uses linear-frequency cepstral coefficients (LFCC) as features and a support vector machine (SVM) to distinguish between normal and CAD-affected heartbeats, outperformed alternative low-computational methods suitable for portable applications. Multiple channels are combined via feature-level fusion, and the optimal combination yields the highest subject-level accuracy and F1-score of 80.44% and 81.00%, respectively, a 7% improvement over the best-performing single channel. The proposed system's performance metrics are clinically significant.
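Feature-level fusion, as used above, amounts to concatenating the per-channel feature vectors into one joint vector before classification. The sketch below illustrates this with hypothetical 13-dimensional LFCC vectors from three of the seven channels; the dimensionality, channel count, and random stand-in features are assumptions, and the downstream SVM is omitted.

```python
import numpy as np

def fuse_channels(channel_features):
    """Feature-level fusion: concatenate per-channel feature vectors
    into a single joint vector per heartbeat (minimal sketch)."""
    return np.concatenate(channel_features)

# Hypothetical 13-dim LFCC vectors from three of the seven PCG channels
n_lfcc = 13
feats = [np.random.default_rng(c).standard_normal(n_lfcc) for c in range(3)]
fused = fuse_channels(feats)  # 39-dim joint feature vector
```

The fused vectors would then be fed to an SVM, and different channel subsets can be searched to find the combination that maximises subject-level accuracy.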