Hardware-aware training of models with synaptic delays for digital event-driven neuromorphic processors
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2024-04-16, DOI: arxiv-2404.10597
Alberto Patino-Saucedo, Roy Meijer, Amirreza Yousefzadeh, Manil-Dev Gomony, Federico Corradi, Paul Detteter, Laura Garrido-Regife, Bernabe Linares-Barranco, Manolis Sifalakis

Configurable synaptic delays are a basic feature of many neuromorphic neural network hardware accelerators. Yet they have rarely been exploited in model implementations, despite their promising impact on performance and efficiency for tasks with complex temporal dynamics, because it has been unclear how to optimize them. In this work, we propose a framework to train and deploy, on digital neuromorphic hardware, high-performing spiking neural network models (SNNs) in which the per-synapse delays are co-optimized alongside the synaptic weights. Leveraging spike-based backpropagation through time, the training accounts for platform constraints, such as synaptic weight precision and the total number of parameters per core, as a function of network size. In addition, a delay pruning technique is used to reduce the memory footprint at low cost in performance. We evaluate the trained models on two digital neuromorphic hardware platforms: Intel Loihi and Imec Seneca. Loihi supports synaptic delays via a so-called ring-buffer hardware structure; Seneca provides no native hardware support for synaptic delays. A second contribution of this paper is therefore a novel area- and memory-efficient hardware structure for accelerating synaptic delays, which we have integrated into Seneca. The evaluated benchmark comprises several models for the SHD (Spiking Heidelberg Digits) classification task, on which we demonstrate minimal accuracy degradation in the transition from software to hardware. To our knowledge, this is the first work to show how to train and deploy hardware-aware models parameterized with synaptic delays on multicore neuromorphic hardware accelerators.
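To make the ring-buffer mechanism and delay pruning mentioned above concrete, below is a minimal, hypothetical Python/NumPy sketch, not the authors' implementation. Each postsynaptic neuron owns a circular buffer of future-input slots; a spike on a synapse with integer delay d is accumulated into the slot that becomes "current" d timesteps later, and pruning simply zeroes the smallest-magnitude delayed synapses. All class and method names here are illustrative assumptions.

```python
import numpy as np

class RingBufferDelays:
    """Illustrative sketch of ring-buffer synaptic delays (not the paper's code).

    buf[s, j] holds input scheduled to reach postsynaptic neuron j when
    slot s becomes the current timestep; slots are reused modulo max_delay.
    """

    def __init__(self, n_pre, n_post, max_delay, seed=0):
        rng = np.random.default_rng(seed)
        self.D = max_delay
        self.w = rng.normal(0.0, 0.5, size=(n_pre, n_post))      # synaptic weights
        self.d = rng.integers(0, max_delay, size=(n_pre, n_post))  # per-synapse delays
        self.buf = np.zeros((max_delay, n_post))                  # ring buffer
        self.t = 0                                                # current slot index

    def push(self, pre_spikes):
        """Route each weighted spike into the buffer slot (t + delay) mod D."""
        for i in np.flatnonzero(pre_spikes):
            slots = (self.t + self.d[i]) % self.D
            np.add.at(self.buf, (slots, np.arange(self.w.shape[1])), self.w[i])

    def pop(self):
        """Read and clear the current slot, then advance one timestep."""
        out = self.buf[self.t].copy()
        self.buf[self.t] = 0.0
        self.t = (self.t + 1) % self.D
        return out

    def prune(self, keep_fraction=0.5):
        """Delay pruning sketch: zero the smallest-magnitude weights."""
        thresh = np.quantile(np.abs(self.w), 1.0 - keep_fraction)
        self.w[np.abs(self.w) < thresh] = 0.0
```

In hardware-aware training, the delay matrix `d` would itself be a learned (quantized) parameter; here it is fixed at random purely to show how delayed inputs are scheduled and retired each timestep with O(max_delay x n_post) memory per core.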

Updated: 2024-04-17