Multiply-and-Fire: An Event-Driven Sparse Neural Network Accelerator
ACM Transactions on Architecture and Code Optimization (IF 1.6). Pub Date: 2023-12-14. DOI: 10.1145/3630255
Miao Yu¹, Tingting Xiang¹, Venkata Pavan Kumar Miriyala¹, Trevor E. Carlson¹

Deep neural network inference has become a vital workload for many systems, from edge-based computing to data centers. To reduce the performance and power costs of deep neural networks (DNNs) running on these systems, pruning is commonly used to retain most of the model's accuracy while significantly reducing the workload. Unfortunately, accelerators designed for unstructured pruning typically employ expensive methods either to determine non-zero activation-weight pairings or to reorder computation. These methods require additional storage and memory accesses compared to the more regular data access patterns seen in structurally pruned models. However, even existing accelerators that target the more regular access patterns of structured pruning suffer from inefficient designs that either ignore activation sparsity or handle it at high cost, leading to low performance.

To address these inefficiencies, we leverage structured pruning and propose the multiply-and-fire (MnF) technique, which aims to solve these problems in three ways: (a) a novel event-driven dataflow that naturally exploits activation sparsity without complex, high-overhead logic; (b) an optimized, activation-centric dataflow that maximizes the reuse of activation data in computation and ensures data are fetched only once from off-chip global memory and on-chip local memory; and (c) an energy-efficient, high-performance sparsity-aware DNN accelerator built on the proposed event-driven dataflow. Our results show that the MnF accelerator achieves significant improvements across a number of modern benchmarks and points to a new direction for highly efficient AI inference on both CNN and MLP workloads. Overall, this work achieves a geometric mean of 11.2× higher energy efficiency and a 1.41× speedup over a state-of-the-art sparsity-aware accelerator.
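The event-driven idea in point (a) can be sketched in a few lines of Python. This is an illustrative, hypothetical model of the dataflow only (the paper describes a hardware accelerator, not this code): each non-zero activation is treated as an "event" that is fetched once, multiplied against its entire weight row, and accumulated into the output neurons. Zero activations generate no events, so activation sparsity is skipped naturally, with no pairing or reordering logic.

```python
import numpy as np

def multiply_and_fire(activations, weights):
    """Event-driven matrix-vector product (illustrative sketch only).

    Non-zero activations act as events: each is read once and its
    product with a whole weight row is accumulated into the outputs,
    so activation data is maximally reused and zeros cost nothing.
    """
    out = np.zeros(weights.shape[1])
    for i, a in enumerate(activations):
        if a == 0.0:
            continue  # no event: all work for this input is skipped
        out += a * weights[i]  # activation-centric: 'a' fetched once
    return out

# Sanity check against a dense matrix-vector product
x = np.array([0.0, 2.0, 0.0, 1.0])      # 50% activation sparsity
W = np.arange(8, dtype=float).reshape(4, 2)
assert np.allclose(multiply_and_fire(x, W), x @ W)
```

In this toy example only two of the four inputs trigger any computation; in hardware, the same event-driven ordering lets the accelerator's multiply-accumulate units stay idle for zero activations without any explicit sparse-index matching.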

Updated: 2023-12-14