Abstract: Despite continued efforts to improve classification accuracy, it has been reported that offline accuracy is a poor indicator of the usability of pattern recognition-based myoelectric control. One potential source of this disparity is the existence of transitions between contraction classes that occur during regular use and are reported to be problematic for pattern recognition systems. Nevertheless, these transitions are often ignored or undefined during both the training and testing processes. In this work, we propose a set of metrics for analyzing the transitions that occur during voluntary changes between contraction classes during continuous control. These metrics quantify the common types of errors that occur during transitions, complementing existing metrics that apply only to the steady-state portions of the data. We then use these metrics to analyze the transition characteristics of six commonly used classifiers on a novel dataset that includes continuous transitions between all combinations of seven different contraction classes. Results show that a linear discriminant classifier consistently outperforms the other conventional classifiers during both transitions and steady-state conditions, despite almost identical offline performance. Results also show that, although offline training metrics correlate with steady-state performance, they do not correlate with transition performance. These insights suggest that the proposed set of metrics could shift the perspective on how pattern recognition systems are evaluated and provide a more representative picture of a classifier's performance, potentially narrowing the gap between offline performance and online usability.
Abstract: In this study, we investigate the application of self-supervised learning via pre-trained Long Short-Term Memory (LSTM) networks for training surface electromyography pattern recognition (sEMG-PR) models on dynamic data with transitions. While labeling such data poses challenges due to the absence of ground-truth labels during transitions between classes, self-supervised pre-training offers a way to circumvent this issue. We compare the performance of LSTMs trained with either a fully supervised or a self-supervised loss against a conventional non-temporal linear discriminant analysis (LDA) model on two data types: segmented ramp data (lacking transition information) and continuous dynamic data that includes class transitions. Statistical analysis reveals that the temporal models outperform the non-temporal model when trained with continuous dynamic data. Additionally, the proposed VICReg pre-trained temporal model trained with continuous dynamic data significantly outperforms all other models. Interestingly, when trained on ramp data alone, the LSTM performs worse than the LDA, suggesting potential overfitting due to the absence of sufficient dynamics. This highlights the interplay between data type and model choice. Overall, this work underscores the importance of representative dynamics in training data and the potential of self-supervised approaches to enhance sEMG-PR models.
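The VICReg objective mentioned above combines three terms computed on two embedded views of the same data: an invariance (similarity) term, a variance hinge term, and a covariance decorrelation term. The following is a minimal NumPy sketch of that generic objective for illustration only; the term weights and default values are common choices from the VICReg literature, not the implementation used in this study.

```python
import numpy as np

def vicreg_loss(za, zb, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """Generic VICReg loss over two batches of embeddings, each (N, D).

    Weights sim_w/var_w/cov_w and eps are common defaults, assumed
    here for illustration; they are not taken from the study above.
    """
    n, d = za.shape
    # Invariance: mean squared distance between the two views' embeddings.
    inv = np.mean((za - zb) ** 2)
    # Variance: hinge loss keeping each embedding dimension's std above 1,
    # which discourages the collapse of all inputs to a single point.
    std_a = np.sqrt(za.var(axis=0) + eps)
    std_b = np.sqrt(zb.var(axis=0) + eps)
    var = (np.mean(np.maximum(0.0, 1.0 - std_a))
           + np.mean(np.maximum(0.0, 1.0 - std_b)))
    # Covariance: penalize squared off-diagonal covariances so that
    # embedding dimensions carry decorrelated information.
    za_c = za - za.mean(axis=0)
    zb_c = zb - zb.mean(axis=0)
    cov_a = (za_c.T @ za_c) / (n - 1)
    cov_b = (zb_c.T @ zb_c) / (n - 1)
    off_diag = lambda m: m[~np.eye(d, dtype=bool)]
    cov = (np.sum(off_diag(cov_a) ** 2) / d
           + np.sum(off_diag(cov_b) ** 2) / d)
    return sim_w * inv + var_w * var + cov_w * cov
```

In a pre-training setup, the two views would typically be embeddings of two augmented or temporally adjacent sEMG windows produced by the temporal encoder; only the loss arithmetic is shown here.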