Abstract: This paper presents a control interface that translates the residual body motions of individuals living with severe disabilities into control commands for body-machine interaction. A custom wireless, wearable multi-sensor network is used to collect motion data from multiple points on the body in real time. The proposed solution successfully leverages electromyography (EMG) gesture recognition techniques for the recognition of inertial measurement unit (IMU)-based commands, without the need for cumbersome and noisy surface electrodes. Motion pattern recognition is performed with a computationally inexpensive classifier (Linear Discriminant Analysis) so that the solution can be deployed on lightweight embedded platforms. Five participants (three able-bodied and two living with upper-body disabilities) presenting different motion limitations (e.g., spasms, reduced range of motion) were recruited. They were asked to perform up to nine different motion classes, including head, shoulder, finger, and foot motions, according to their residual functional capacities. The measured prediction performance shows an average accuracy of 99.96% for able-bodied individuals and 91.66% for participants with upper-body disabilities. The recorded dataset has also been made available online to the research community. A proof of concept for the real-time use of the system is given through an assembly task replicating activities of daily living, using the JACO arm from Kinova Robotics.
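To make the classification step concrete, the following is a minimal sketch of LDA-based motion recognition on windowed IMU data. The sensor count, window length, and feature set are illustrative assumptions rather than the paper's exact configuration, and synthetic data stands in for the recorded dataset.

```python
# Sketch: LDA classification of IMU motion windows (assumed configuration).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

N_SENSORS = 4        # assumed number of wearable IMU nodes
N_AXES = 6           # accelerometer (x, y, z) + gyroscope (x, y, z)
WINDOW = 50          # assumed samples per sliding window
N_CLASSES = 9        # up to nine motion classes, per the abstract

def extract_features(window):
    """Per-channel time-domain features: mean, std, min-max range."""
    # window: array of shape (WINDOW, N_SENSORS * N_AXES)
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        window.max(axis=0) - window.min(axis=0),
    ])

# Synthetic stand-in for recorded motion data: 200 windows per class.
rng = np.random.default_rng(0)
X, y = [], []
for label in range(N_CLASSES):
    for _ in range(200):
        w = rng.normal(loc=label, scale=1.0, size=(WINDOW, N_SENSORS * N_AXES))
        X.append(extract_features(w))
        y.append(label)
X, y = np.array(X), np.array(y)

# LDA inference reduces to a single linear projection per window, which is
# what makes it attractive for lightweight embedded platforms.
clf = LinearDiscriminantAnalysis()
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```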
Abstract: In recent years, deep learning algorithms have become increasingly prominent for their unparalleled ability to automatically learn discriminant features from large amounts of data. Within the field of electromyography-based gesture recognition, however, deep learning algorithms are seldom employed, as generating tens of thousands of examples would require an unreasonable amount of time from a single person in a single session. This work's hypothesis is that general, informative features can be learned from the large amount of data generated by aggregating the signals of multiple users, thus reducing the recording burden imposed on a single person while enhancing gesture recognition. Accordingly, this paper proposes applying transfer learning to the aggregated data of multiple users, leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets without the need for in-depth feature engineering. To this end, two datasets were recorded with the Myo Armband (Thalmic Labs), a low-cost, low-sampling-rate (200 Hz), 8-channel, consumer-grade, dry-electrode sEMG armband. These two datasets comprise 19 and 17 able-bodied participants, respectively. A third dataset, also recorded with the Myo Armband, was taken from the NinaPro database and comprises 10 able-bodied participants. The proposed transfer learning scheme is shown to outperform the current state of the art in gesture recognition, achieving an average accuracy of 98.31% for 7 hand/wrist gestures over 17 able-bodied participants and 65.57% for 18 hand/wrist gestures over 10 able-bodied participants. Finally, a use-case study with eight able-bodied participants suggests that real-time feedback reduces the degradation in accuracy normally experienced over time.
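To illustrate the transfer learning scheme, the following is a minimal sketch: a small convolutional network is pre-trained on sEMG windows aggregated across users, then its feature extractor is frozen and a fresh classification head is fine-tuned on the target user's data. The architecture, the 52-sample window (about 260 ms at 200 Hz), and the synthetic tensors are illustrative assumptions, not the paper's exact model.

```python
# Sketch: transfer learning from aggregated multi-user sEMG to one user.
import torch
import torch.nn as nn

N_CHANNELS, WINDOW, N_GESTURES = 8, 52, 7  # Myo: 8 electrodes; rest assumed

class EmgNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        # Convolutions over time, with the 8 electrodes as input channels.
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=5), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x))

def train(model, x, y, epochs=5, lr=1e-3):
    # Only parameters with requires_grad=True are updated.
    opt = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) Pre-training on synthetic stand-ins for aggregated multi-user data.
x_agg = torch.randn(1024, N_CHANNELS, WINDOW)
y_agg = torch.randint(0, N_GESTURES, (1024,))
model = EmgNet(N_GESTURES)
train(model, x_agg, y_agg)

# 2) Transfer: freeze the learned feature extractor, re-initialize the
#    head, and fine-tune on the target user's small recording session.
for p in model.features.parameters():
    p.requires_grad = False
model.head = nn.Linear(64, N_GESTURES)
x_user = torch.randn(64, N_CHANNELS, WINDOW)
y_user = torch.randint(0, N_GESTURES, (64,))
train(model, x_user, y_user)
```

Freezing the shared feature extractor is one common way to realize the abstract's idea that inter-user structure learned from aggregated data reduces the examples a single user must record; the paper's actual scheme may adapt more of the network.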