Abstract: In the realm of ubiquitous computing, Human Activity Recognition (HAR) is vital for the automation and intelligent identification of human actions through data from diverse sensors. However, traditional machine learning approaches, which aggregate data on a central server for centralized processing, are memory-intensive and raise privacy concerns. Federated Learning (FL) has emerged as a solution that trains a global model collaboratively across multiple devices by exchanging local model parameters instead of local data. However, in realistic settings, sensor data on devices is non-independently and identically distributed (Non-IID): the activity data recorded by most devices is sparse, and the sensor data distribution may differ from client to client. As a result, typical FL frameworks in heterogeneous environments suffer from slow convergence and poor performance due to the deviation of local objectives from the global objective. Most FL methods applied to HAR are either designed for overly ideal scenarios that ignore the Non-IID problem or raise privacy and scalability concerns. This work addresses these challenges by proposing CDFL, an efficient federated learning framework for image-based HAR. CDFL selects a representative set of privacy-preserved images using contrastive learning and deep clustering, reduces communication overhead by selecting effective clients for global model updates, and improves global model quality by training on privacy-preserved data. Comprehensive experiments on three public datasets, namely Stanford40, PPMI, and VOC2012, demonstrate the superiority of CDFL in terms of performance, convergence rate, and bandwidth usage compared to state-of-the-art approaches.
Abstract: EEG signals attract particular attention in emotion recognition owing to their high temporal resolution and the information they carry about brain activity. Different regions of the brain work together to process information, and brain activity changes over time. Therefore, investigating the connections between different brain areas and their temporal patterns plays an important role in neuroscience. In this study, we investigate emotion classification performance using functional connectivity features in different frequency bands and compare it with the performance obtained using the differential entropy (DE) feature, which has previously been used for this task. Moreover, we investigate the effect of using different time periods on classification performance. Our results on the publicly available SEED dataset show that, as time passes, emotions become more stable and classification accuracy increases. Among the time periods considered, we achieve the highest classification accuracy using the period from 140 s to the end of the signal, where accuracy improves by 4 to 6% compared to using the entire signal. A mean accuracy of about 88% is obtained using any of the Pearson correlation coefficient, coherence, and phase locking value features with an SVM. Therefore, functional connectivity features yield better classification accuracy than DE features (mean accuracy of 84.89%) in the proposed framework. Finally, in a relatively fair comparison, we show that using the best time interval and an SVM, we achieve higher accuracy than Recurrent Neural Networks, which require large amounts of data and have a high computational cost.
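Of the connectivity features named above, the phase locking value (PLV) has a compact standard definition: the magnitude of the average complex phase difference between two signals, with instantaneous phases taken from the analytic (Hilbert-transformed) signal. The sketch below is a minimal, generic illustration of that definition, not the paper's implementation; the function name and test signals are our own.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase locking value between two equal-length 1-D signals.

    PLV = |mean(exp(i * (phi_x - phi_y)))|, where the instantaneous
    phases phi come from the analytic signal (Hilbert transform).
    Ranges from 0 (no phase coupling) to 1 (perfect phase locking).
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Two sinusoids at the same frequency with a constant phase offset
# are perfectly phase-locked, so their PLV is close to 1.
t = np.linspace(0, 1, 1000, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.5)
print(phase_locking_value(a, b))  # close to 1.0
```

In a connectivity-based pipeline such as the one described, PLV would typically be computed for every pair of EEG channels within each frequency band, and the resulting values used as the feature vector fed to the classifier (here, an SVM).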