Abstract: People with neuromuscular dysfunctions or amputated limbs require automated prosthetic appliances. In developing such prostheses, the precise detection of brain motor actions is imperative for Grasp-and-Lift (GAL) tasks. Because Electroencephalography (EEG) is low-cost and non-invasive, it is widely preferred for detecting motor actions in the control of prosthetic tools. This article automates the detection of hand movement activity, namely GAL events, from 32-channel EEG signals. The proposed pipeline combines preprocessing and end-to-end detection steps, eliminating the need for hand-crafted feature engineering. Preprocessing consists of raw-signal denoising, using either Discrete Wavelet Transform (DWT)-based, highpass, or bandpass filtering, followed by data standardization. The detection step employs either a Convolutional Neural Network (CNN)- or Long Short-Term Memory (LSTM)-based model. All investigations use the publicly available WAY-EEG-GAL dataset, which contains six different GAL events. The best experiment shows that the proposed framework achieves an average area under the ROC curve of 0.944 when employing DWT-based denoising, data standardization, and the CNN-based detection model. This result demonstrates the excellent performance of the proposed method in detecting GAL events from EEG signals, making it applicable to prosthetic appliances, brain-computer interfaces, robotic arms, etc.
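To make the described pipeline concrete, the following is a minimal Python sketch of DWT-based denoising, per-channel standardization, and a 1-D CNN detector for the six GAL events. The 'db4' wavelet, the 1000-sample window, and the layer sizes are illustrative assumptions not stated in the abstract; the authors' exact configuration may differ.

```python
# Minimal sketch of the described pipeline (illustrative, not the authors' exact code).
# Assumptions: 'db4' wavelet, 1000-sample EEG windows, and illustrative layer sizes.
import numpy as np
import pywt
import torch
import torch.nn as nn

def dwt_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold DWT denoising of a 1-D signal (applied per EEG channel)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))     # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def standardize(x):
    """Zero-mean, unit-variance scaling per channel."""
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + 1e-8)

class GALConvNet(nn.Module):
    """1-D CNN mapping a 32-channel EEG window to six GAL event probabilities."""
    def __init__(self, n_channels=32, n_events=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_events)

    def forward(self, x):                  # x: (batch, channels, time)
        return torch.sigmoid(self.classifier(self.features(x).squeeze(-1)))

# Example usage: a batch of 8 windows, 32 channels, 1000 samples each.
eeg = np.random.randn(8, 32, 1000)
denoised = np.stack([[dwt_denoise(ch) for ch in trial] for trial in eeg])
x = torch.tensor(standardize(denoised), dtype=torch.float32)
probs = GALConvNet()(x)                    # (8, 6) per-event probabilities
```

Sigmoid outputs are used here because the six GAL events can overlap in time, making the task multi-label rather than single-class; this framing is an assumption consistent with the WAY-EEG-GAL setup.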
Abstract: The novel Coronavirus Disease 2019 (COVID-19) is a global pandemic that has spread rapidly around the world. A robust and automatic early recognition of COVID-19, via auxiliary computer-aided diagnostic tools, is essential for disease cure and control. Chest radiography images, such as Computed Tomography (CT) and X-ray, together with deep Convolutional Neural Networks (CNNs), can be valuable resources for designing such tools. However, designing such an automated tool is challenging because massive manually annotated datasets, the core requirement of supervised learning systems, are not yet publicly available. In this article, we propose a robust CNN-based network, called CVR-Net (Coronavirus Recognition Network), for the automatic recognition of the coronavirus from CT or X-ray images. The proposed end-to-end CVR-Net is a multi-scale, multi-encoder ensemble model, in which we aggregate the outputs from two different encoders and their different scales to obtain the final prediction probability. We train and test the proposed CVR-Net on three different datasets, where the images were collected from different open-source repositories. We compare our proposed CVR-Net with state-of-the-art methods trained and tested on the same datasets. We split the three datasets into five different tasks, where each task has a different number of classes, to evaluate the multi-tasking CVR-Net. Our model achieves an overall F1-score & accuracy of 0.997 & 0.998, 0.963 & 0.964, 0.816 & 0.820, 0.961 & 0.961, and 0.780 & 0.780, respectively, for task-1 to task-5. As CVR-Net provides promising results on small datasets, it can serve as a useful computer-aided diagnostic tool for coronavirus diagnosis to assist clinical practitioners and radiologists. Our source codes and model are publicly available at https://github.com/kamruleee51/CVR-Net.
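The following is an illustrative tf.keras sketch of a multi-scale, multi-encoder ensemble in the spirit of CVR-Net. The ResNet50 and DenseNet121 backbones, the way intermediate scales are selected, and the simple probability averaging are assumptions made for illustration only; the authors' exact architecture and training setup are available in the linked repository.

```python
# Illustrative sketch of a multi-scale, multi-encoder ensemble (not the authors' exact code).
# Assumptions: ResNet50 and DenseNet121 backbones, 224x224 inputs, probability averaging.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50, DenseNet121

def build_multiscale_multiencoder_net(input_shape=(224, 224, 3), n_classes=3):
    inputs = layers.Input(shape=input_shape)

    heads = []
    for Backbone in (ResNet50, DenseNet121):
        enc = Backbone(include_top=False, weights="imagenet", input_tensor=inputs)
        # Take feature maps at two different scales from each encoder.
        mid = enc.layers[len(enc.layers) // 2].output   # intermediate-scale features
        deep = enc.output                               # deepest-scale features
        for feat in (mid, deep):
            x = layers.GlobalAveragePooling2D()(feat)
            x = layers.Dense(256, activation="relu")(x)
            heads.append(layers.Dense(n_classes, activation="softmax")(x))

    # Aggregate the per-scale, per-encoder predictions into one probability vector.
    outputs = layers.Average()(heads)
    return Model(inputs, outputs, name="multiscale_multiencoder_net")

# Example usage: a 3-class task (e.g., COVID-19 vs. pneumonia vs. normal).
model = build_multiscale_multiencoder_net(n_classes=3)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

Averaging the softmax heads is one simple way to fuse encoder- and scale-specific predictions; learned fusion layers are an equally plausible design choice.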