CBP-QSNN: Spiking Neural Networks Quantized Using Constrained Backpropagation
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (IF 4.6) Pub Date: 2023-10-31, DOI: 10.1109/jetcas.2023.3328911
Donghyung Yoo, Doo Seok Jeong

Spiking Neural Networks (SNNs) support sparse, event-based data processing at high power efficiency when implemented on event-based neuromorphic processors. However, the limited on-chip memory capacity of neuromorphic processors strictly limits the depth and width of the SNNs that can be implemented. A direct solution is to use quantized SNNs (QSNNs) in place of SNNs with FP32 weights. To this end, we propose a method to quantize the weights using constrained backpropagation (CBP) with the Lagrangian function (the conventional loss function plus well-defined weight-constraint functions) as the objective function. This work utilizes CBP as a post-training algorithm for deep SNNs pre-trained using various state-of-the-art methods, including direct training (TSSL-BP, STBP, and surrogate gradient) and DNN-to-SNN conversion (SNN-Calibration), validating CBP as a general framework for QSNNs. CBP-QSNNs achieve high accuracy: the worst-case accuracy degradation on CIFAR-10, DVS128 Gesture, and CIFAR10-DVS is below 1%. Notably, CBP-QSNNs built from SNN-Calibration-pretrained SNNs on CIFAR-100 show an unexpectedly large accuracy increase of 3.72% while using only a small weight memory (3.5% of the FP32 case).
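The constrained-backpropagation idea can be illustrated with a short sketch. The PyTorch-style code below is a minimal, hypothetical illustration rather than the authors' implementation: it assumes a binary quantization target {-a, +a} per layer, a simple squared-distance constraint that vanishes when the weights are fully quantized, and a gradient descent-ascent update (descent on the weights, ascent on the Lagrange multipliers). The names `weight_constraint`, `lagrangian`, and `cbp_step`, as well as the exact constraint form, are assumptions for illustration; the paper's weight-constraint functions and update rule may differ.

```python
import torch

def weight_constraint(w: torch.Tensor, a: float = 1.0) -> torch.Tensor:
    # Hypothetical constraint: squared distance of each weight from the
    # nearest binary level in {-a, +a}. It equals zero exactly when every
    # weight is quantized, so driving it to zero enforces binarization.
    return ((w.abs() - a) ** 2).mean()

def lagrangian(task_loss: torch.Tensor,
               weights: list[torch.Tensor],
               lambdas: list[torch.Tensor]) -> torch.Tensor:
    # Objective = conventional loss + multiplier-weighted constraint terms,
    # with one Lagrange multiplier per layer.
    return task_loss + sum(lam * weight_constraint(w)
                           for lam, w in zip(lambdas, weights))

def cbp_step(task_loss, weights, lambdas, lr_w=1e-3, lr_lam=1e-2):
    # One constrained-backprop step (sketch): gradient descent on the
    # weights, gradient ascent on the multipliers, so layers that still
    # violate the quantization constraint are penalized more over time.
    L = lagrangian(task_loss, weights, lambdas)
    grads = torch.autograd.grad(L, weights)
    with torch.no_grad():
        for w, g in zip(weights, grads):
            w -= lr_w * g                          # descent on weights
        for lam, w in zip(lambdas, weights):
            lam += lr_lam * weight_constraint(w)   # ascent on multipliers
    return L
```

Under this kind of objective, the weights settle near ±a at convergence and can then be stored as 1-bit values plus a per-layer scale, which is the source of the weight-memory savings quoted above.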
