Deep neural networks accelerators with focus on tensor processors
Microprocessors and Microsystems (IF 2.6) Pub Date: 2023-12-27, DOI: 10.1016/j.micpro.2023.105005
Hamidreza Bolhasani, Mohammad Marandinejad

The massive volume of data and the difficulty of processing it are among the central challenges of the digital age, and advances in artificial intelligence and machine learning can help address this problem. Deep neural networks are an effective way to improve efficiency in both of these areas. Several architectures have so far been introduced for data processing with deep neural networks, and they differ from one another in accuracy, efficiency, and computing power. This article systematically reviews these architectures, their features, and their functions. Following a systematic review methodology, 24 articles (conference and journal papers related to this topic) published between 2014 and 2022 were evaluated. The significant aspects of the selected articles are compared, and the upcoming challenges and topics for future research are presented. The results show that the main design goals behind a new tensor processor are increasing speed and accuracy, reducing data-processing time, reducing on-chip storage requirements, reducing DRAM accesses, lowering energy consumption, and achieving high efficiency.
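The abstract names reducing DRAM accesses as one of the main design goals for a tensor processor. A minimal back-of-the-envelope traffic model (our own illustration, not taken from the paper) shows why: buffering output tiles on chip lets each operand fetched from DRAM be reused many times, cutting traffic for a matrix multiplication roughly in proportion to the tile size.

```python
def dram_traffic_words(M, N, K, tile=None):
    """Analytical DRAM traffic (in words) for the matmul C = A @ B.

    tile=None models a processor with no on-chip reuse: every
    multiply-accumulate reads both operands from DRAM. With a tile
    size T, a T x K strip of A and a K x T strip of B are buffered
    on chip and reused across a whole T x T output tile.
    Illustrative model only; assumes T divides M and N evenly.
    """
    writes = M * N  # each element of C is written back once either way
    if tile is None:
        reads = 2 * M * N * K  # two operand reads per multiply-accumulate
    else:
        num_tiles = (M // tile) * (N // tile)
        reads = num_tiles * (tile * K + K * tile)  # one A strip + one B strip per tile
    return reads + writes

naive = dram_traffic_words(1024, 1024, 1024)
tiled = dram_traffic_words(1024, 1024, 1024, tile=128)
print(naive // tiled)  # → 120: traffic drops by roughly the tile size
```

Real accelerators layer further reuse on top of this (register-level and systolic reuse, weight stationarity), but even this simple model captures why on-chip buffering is a recurring theme in the surveyed designs.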




Updated: 2023-12-27