An optimized EEGNet processor for low-power and real-time EEG classification in wearable brain–computer interfaces
Microelectronics Journal (IF 2.2) | Pub Date: 2024-02-19 | DOI: 10.1016/j.mejo.2024.106134
Jiacheng Cao, Wei Xiong, Jie Lu, Peilin Chen, Jian Wang, Jinmei Lai, Miaoqing Huang

Brain–computer interfaces (BCIs) based on electroencephalogram (EEG) signals have recently gained significant attention. EEGNet is a lightweight convolutional neural network designed for EEG-based BCIs. Previous EEGNet processors are implemented with high-precision fixed-point arithmetic, resulting in high power consumption and resource utilization. To address these drawbacks, this paper proposes a low-precision EEGNet (LPEEGNet) processor designed around a data-driven processing flow to achieve power-efficient, real-time EEG classification. On the algorithm side, we reconstruct EEGNet into LPEEGNet through low-precision quantization. On the hardware side, we first propose a heterogeneous streaming architecture based on layer fusion. Furthermore, we employ a global feature-map memory to improve memory access efficiency. Finally, we exploit both inter-layer parallelism and intra-layer pipelining to enhance inference efficiency. Compared with the state-of-the-art EEGNet processor, experimental results on FPGA show that our implementation reduces power consumption, LUT utilization, and inference latency by 52.6%, 36.2%, and 25.8%, respectively. With an accuracy of 93.06% on the event-related potential dataset, our processor is well suited to wearable BCIs.
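The abstract does not describe the quantization scheme in detail; as a rough, hedged illustration only, the sketch below shows symmetric uniform quantization of a weight tensor to a configurable bit width, the kind of low-precision mapping that could replace high-precision fixed-point weights. The function names, the 8-bit setting, and the depthwise-kernel shape are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int = 8):
    """Symmetric uniform quantization of a weight tensor to `bits` bits.

    Returns integer codes and the scale needed to dequantize.
    (Illustrative sketch only; LPEEGNet's actual scheme may differ.)
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for signed 8-bit
    scale = float(np.max(np.abs(weights))) / qmax
    codes = np.clip(np.round(weights / scale), -qmax, qmax)
    dtype = np.int8 if bits <= 8 else np.int32
    return codes.astype(dtype), scale

def dequantize(codes: np.ndarray, scale: float) -> np.ndarray:
    """Map integer codes back to real values for accuracy checks."""
    return codes.astype(np.float32) * scale

# Example: quantize a random depthwise-convolution kernel shaped like an
# EEGNet-style (channels, 1, kernel_length) layer and report the error.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(16, 1, 64)).astype(np.float32)
q, s = quantize_symmetric(w, bits=8)
print(f"max quantization error: {np.max(np.abs(dequantize(q, s) - w)):.6f}")
```

The trade-off this illustrates is the one the paper exploits: narrower integer codes shrink multiplier width and on-chip storage on the FPGA, at the cost of a bounded per-weight rounding error that the quantized network must tolerate.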
