DyPipe: A Holistic Approach to Accelerating Dynamic Neural Networks with Dynamic Pipelining
Journal of Computer Science and Technology (IF 1.9), Pub Date: 2023-07-31, DOI: 10.1007/s11390-021-1161-y
Yi-Min Zhuang, Xing Hu, Xiao-Bing Chen, Tian Zhi

Dynamic neural network (NN) techniques are increasingly important because they enable deep learning models with more complex network architectures. However, existing studies on deep neural network (DNN) accelerators predominantly optimize static computational graphs with static scheduling methods, and thus focus on static neural networks. We analyze the execution process of dynamic neural networks and observe that their dynamic features make efficient scheduling and pipelining difficult on existing DNN accelerators. We propose DyPipe, a holistic approach to optimizing dynamic neural network inference in enhanced DNN accelerators. DyPipe achieves significant performance improvements for dynamic neural networks while introducing negligible overhead for static neural networks. Our evaluation demonstrates that DyPipe achieves a 1.7x speedup on dynamic neural networks and maintains more than 96% of the performance of static neural networks.
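To make the scheduling challenge concrete, the sketch below is our own minimal PyTorch-style illustration (not code from the paper, and not DyPipe's mechanism) of a dynamic neural network with data-dependent control flow. The class name EarlyExitNet, its threshold parameter, and the early-exit structure are illustrative assumptions; the point is that the number of layers actually executed is only known at run time, which is what defeats a statically scheduled accelerator pipeline.

```python
# Minimal sketch (illustrative, not from the paper): a dynamic neural network
# whose executed depth depends on the input, so the layer sequence cannot be
# fixed at compile time the way a static DNN pipeline can.
import torch
import torch.nn as nn


class EarlyExitNet(nn.Module):
    """Toy early-exit network: inference stops at the first intermediate
    classifier whose confidence exceeds a threshold (data-dependent control flow)."""

    def __init__(self, dim=64, num_blocks=4, num_classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_blocks)]
        )
        self.exits = nn.ModuleList(
            [nn.Linear(dim, num_classes) for _ in range(num_blocks)]
        )
        self.threshold = threshold

    def forward(self, x):
        logits = None
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            logits = exit_head(x)
            # Data-dependent branch: how many blocks run is resolved per input
            # at run time, which is what breaks statically scheduled pipelines.
            if torch.softmax(logits, dim=-1).max() > self.threshold:
                break
        return logits


if __name__ == "__main__":
    model = EarlyExitNet()
    sample = torch.randn(1, 64)   # single input sample
    print(model(sample).shape)    # torch.Size([1, 10]); executed depth varies per input
```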




Updated: 2023-07-31