Surface EMG-Based Intersession/Intersubject Gesture Recognition by Leveraging Lightweight All-ConvNet and Transfer Learning
IEEE Transactions on Instrumentation and Measurement (IF 5.6) Pub Date: 2024-03-25, DOI: 10.1109/tim.2024.3381288
Md. Rabiul Islam, Daniel Massicotte, Philippe Massicotte, Wei-Ping Zhu
Gesture recognition using low-resolution instantaneous high-density surface electromyography (HD-sEMG) images opens up new avenues for the development of more fluid and natural muscle–computer interfaces (MCIs). However, data variability across intersession and intersubject scenarios presents a great challenge. Existing approaches employ very large and complex deep ConvNets or two-stage recurrent neural network (2SRNN)-based domain adaptation methods to approximate the distribution shift caused by these intersession and intersubject data variabilities. Consequently, these methods require learning millions of training parameters and large pretraining and target-domain datasets in both the pretraining and adaptation stages. As a result, the networks are resource-hungry and computationally too expensive for deployment in real-time applications. To overcome this problem, we propose a lightweight All-ConvNet + transfer learning (TL) model that leverages a lightweight All-ConvNet and TL to enhance intersession and intersubject gesture recognition performance. The All-ConvNet + TL model consists solely of convolutional layers, a simple yet efficient framework for learning invariant and discriminative representations that address the distribution shifts caused by intersession and intersubject data variability. Experiments on four datasets demonstrate that our proposed methods outperform the most complex existing approaches by a large margin, achieve state-of-the-art results in intersession and intersubject scenarios, and perform on par with or competitively against existing methods on intrasession gesture recognition. These performance gaps widen further when only a tiny amount (e.g., a single trial) of target-domain data is available for adaptation. These experimental results provide evidence that the current state-of-the-art models may be overparameterized for sEMG-based intersession and intersubject gesture recognition tasks.
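The abstract does not specify the exact layer configuration, so the following PyTorch sketch is only a hypothetical minimal instance of the two ideas it describes: a classifier built solely from convolutional layers (strided convolutions replace pooling, a 1x1 convolution plus global average pooling replaces the dense head), and transfer learning done by freezing the pretrained feature layers and fine-tuning only the final classifier convolution on target-domain (new-session or new-subject) data. All layer sizes and the 8x16 electrode-grid input shape are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AllConvNet(nn.Module):
    """A purely convolutional classifier: no fully connected layers.

    Input: low-resolution instantaneous HD-sEMG "images", here assumed
    to be single-channel 8x16 electrode grids (hypothetical shape).
    """
    def __init__(self, n_classes: int = 8, in_ch: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            # Strided convolution downsamples in place of a pooling layer.
            nn.Conv2d(32, 32, 3, padding=1, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution replaces the usual dense classification head.
        self.classifier = nn.Conv2d(64, n_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.classifier(self.features(x))
        # Global average pooling over the spatial grid yields class scores.
        return x.mean(dim=(2, 3))

def adapt(model: AllConvNet):
    """Transfer-learning step: freeze the pretrained feature extractor and
    return only the parameters of the 1x1 head for target-domain fine-tuning."""
    for p in model.features.parameters():
        p.requires_grad = False
    return [p for p in model.classifier.parameters() if p.requires_grad]

model = AllConvNet()
# A dummy batch of 4 single-channel 8x16 sEMG frames.
logits = model(torch.zeros(4, 1, 8, 16))
```

After pretraining on the source domain, `adapt(model)` would be passed to an optimizer so that only the small classifier head is updated on the few target-domain trials, which is what keeps the adaptation stage lightweight.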
