Supervised training of neural-network quantum states for the next-nearest neighbor Ising model
Computer Physics Communications (IF 6.3), Pub Date: 2024-03-19, DOI: 10.1016/j.cpc.2024.109169
Zheyu Wu , Remmy Zen , Heitor P. Casagrande , Dario Poletti , Stéphane Bressan

Different neural network architectures can be trained, with or without supervision, to represent quantum states. We explore and compare different strategies for the supervised training of feed-forward neural-network quantum states. We empirically and comparatively evaluate the performance of feed-forward neural-network quantum states in different phases of matter, for variants of the architecture, for different hyper-parameters, and for two different loss functions. We consider the next-nearest-neighbor Ising model for the diversity of its phases and focus on its paramagnetic, ferromagnetic, and pair-antiferromagnetic phases. We observe that the overlap loss function allows better training of the model across all phases, provided the neural network is suitably rescaled.
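The abstract refers to an overlap-based loss function for the supervised training of feed-forward neural-network quantum states. The snippet below is a minimal, hypothetical sketch of how such an objective can be set up for a small spin chain; the names (FeedForwardNQS, overlap_loss), the single-hidden-layer architecture, and the random toy target state are illustrative assumptions and do not reproduce the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: overlap-based loss for supervised training of a
# feed-forward neural-network quantum state on a small spin chain.
# All names and architectural choices here are illustrative assumptions.

rng = np.random.default_rng(0)

L = 6                                     # number of spins (small, so the full basis can be enumerated)
basis = np.array([[1 if (i >> k) & 1 else -1 for k in range(L)]
                  for i in range(2 ** L)], dtype=float)   # all 2^L spin configurations

class FeedForwardNQS:
    """Single-hidden-layer feed-forward network mapping a spin
    configuration to an unnormalized real wave-function amplitude."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.b = np.zeros(n_hidden)
        self.v = rng.normal(scale=0.1, size=n_hidden)

    def amplitudes(self, s):
        h = np.tanh(s @ self.W.T + self.b)   # hidden-layer activations
        return h @ self.v                    # one amplitude per configuration

def overlap_loss(psi_model, psi_target):
    """1 - |<target|model>|^2 / (<model|model> <target|target>)."""
    num = np.abs(psi_target @ psi_model) ** 2
    den = (psi_model @ psi_model) * (psi_target @ psi_target)
    return 1.0 - num / den

# Toy target state; in a real study the target amplitudes would come from
# an exact solution of the Hamiltonian of interest.
psi_target = rng.normal(size=2 ** L)
psi_target /= np.linalg.norm(psi_target)

model = FeedForwardNQS(n_in=L, n_hidden=16)
print("initial overlap loss:", overlap_loss(model.amplitudes(basis), psi_target))
```

In a setting like the one described in the abstract, the target amplitudes would be obtained for the next-nearest-neighbor Ising model in each phase of interest, and the network parameters would be optimized by gradient descent on a loss of this kind.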
