A scalable implementation of the recursive least-squares algorithm for training spiking neural networks
Frontiers in Neuroinformatics (IF 3.5) Pub Date: 2023-06-27, DOI: 10.3389/fninf.2023.1099510
Benjamin J. Arthur, Christopher M. Kim, Susu Chen, Stephan Preibisch, Ran Darshan

Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a popular way to study computations performed by the nervous system. As the size and complexity of neural recordings increase, there is a need for efficient algorithms that can train models in a short period of time using minimal resources. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation can train networks of one million neurons, with 100 million plastic synapses and a billion static synapses, about 1,000 times faster than an unoptimized reference CPU implementation. We demonstrate the code's utility by training a network, in less than an hour, to reproduce the activity of >66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables a more interactive in silico study of the dynamics and connectivity underlying multi-area computations. It also opens the possibility of training models while in vivo experiments are being conducted, closing the loop between modeling and experiments.
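The core of the recursive least-squares (RLS) update referenced in the abstract maintains a running inverse correlation matrix of presynaptic activity and adjusts each neuron's plastic weights in proportion to the instantaneous error. As a rough orientation only, below is a minimal NumPy sketch of one such update for a single neuron; the names (rls_step, P, w, alpha) and the synthetic data are illustrative assumptions, not the paper's actual code or API.

```python
import numpy as np

# Minimal sketch of a recursive least-squares (RLS) update for the plastic
# weights onto one neuron, in the style of FORCE training. Everything here
# (names, sizes, constants) is an illustrative assumption, not the paper's code.

rng = np.random.default_rng(0)
n_pre = 100          # number of plastic presynaptic inputs to this neuron
alpha = 1.0          # regularization constant; P starts as I / alpha

P = np.eye(n_pre) / alpha   # running inverse correlation matrix of inputs
w = np.zeros(n_pre)         # plastic weights onto this neuron

def rls_step(P, w, r, target):
    """One RLS update: nudge w so that w @ r tracks `target`.

    P is downdated with a rank-1 Sherman-Morrison step, keeping the
    per-update cost at O(n_pre^2) rather than refactorizing a matrix.
    """
    k = P @ r                       # gain vector
    c = 1.0 / (1.0 + r @ k)        # scalar normalization
    P -= c * np.outer(k, k)        # rank-1 downdate of inverse correlation
    e = w @ r - target             # instantaneous readout error
    w -= c * e * k                 # error-proportional weight change
    return P, w

# Drive the update with synthetic stand-ins for filtered presynaptic spikes.
errors = []
for _ in range(500):
    r = rng.random(n_pre)
    errors.append(abs(w @ r - 0.5))   # error before this step's update
    P, w = rls_step(P, w, r, target=0.5)

print(f"mean |error|, first 50 steps: {np.mean(errors[:50]):.3f}")
print(f"mean |error|, last 50 steps:  {np.mean(errors[-50:]):.3f}")
```

Because each neuron's update touches only its own P and w, steps like this can presumably be run independently across neurons, which is the kind of structure that makes a GPU implementation of the algorithm attractive.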
