Replica symmetry breaking in supervised and unsupervised Hebbian networks
Journal of Physics A: Mathematical and Theoretical (IF 2.1). Pub Date: 2024-04-09. DOI: 10.1088/1751-8121/ad38b4
Linda Albanese, Andrea Alessandrelli, Alessia Annibale, Adriano Barra

Hebbian neural networks with multi-node interactions, often called Dense Associative Memories, have recently attracted considerable interest in the statistical mechanics community, as they have been shown to outperform their pairwise counterparts in a number of features, including resilience against adversarial attacks, pattern retrieval with extremely weak signals and supra-linear storage capacities. However, their analysis has so far been carried out within a replica-symmetric theory. In this manuscript, we relax the assumption of replica symmetry and analyse these systems at one step of replica-symmetry breaking, focusing on two different prescriptions for the interactions that we will refer to as supervised and unsupervised learning. We derive the phase diagram of the model using two different approaches, namely Parisi’s hierarchical ansatz for the relationship between different replicas within the replica approach, and the so-called telescope ansatz within Guerra’s interpolation method: our results show that replica-symmetry breaking does not alter the threshold for learning and slightly increases the maximal storage capacity. Further, we also derive analytically the instability line of the replica-symmetric theory, using a generalization of the De Almeida and Thouless approach.
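To make the object under study concrete, the energy of a Dense Associative Memory with p-body Hebbian interactions, and zero-temperature retrieval from a noisy cue, can be sketched as below. This is a generic illustration, not the paper's replica analysis; the sizes `N`, `K`, the interaction order `order`, and the asynchronous update rule are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, order = 100, 5, 3          # neurons, stored patterns, interaction order p (illustrative)
xi = rng.choice([-1, 1], size=(K, N))  # random binary patterns

def energy(sigma):
    # Dense Hebbian (p-body) energy: E = -sum_mu (xi_mu . sigma / N)**p
    return -np.sum((xi @ sigma / N) ** order)

def retrieve(sigma, sweeps=10):
    # Zero-temperature asynchronous dynamics: flip each spin to whichever
    # state gives the lower energy, sweeping over neurons in random order.
    sigma = sigma.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            sigma[i] = 1
            e_up = energy(sigma)
            sigma[i] = -1
            e_dn = energy(sigma)
            sigma[i] = 1 if e_up <= e_dn else -1
    return sigma

# Corrupt pattern 0 with ~10% spin flips, then let the dynamics retrieve it.
cue = xi[0] * rng.choice([1, -1], size=N, p=[0.9, 0.1])
rec = retrieve(cue)
overlap = rec @ xi[0] / N  # Mattis magnetization; close to 1 means successful recall
```

At this low load (5 patterns, 100 neurons, p = 3) retrieval is easy; the paper's question is how large K/N^(p-1) can grow before retrieval breaks down, and how one step of replica-symmetry breaking shifts that threshold.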

Updated: 2024-04-09