Boosting Adversarial Training via Fisher-Rao Norm-based Regularization
arXiv - CS - Machine Learning | Pub Date: 2024-03-26 | DOI: arxiv-2403.17520
Xiangyu Yin, Wenjie Ruan

Adversarial training is extensively utilized to improve the adversarial robustness of deep neural networks. Yet, mitigating the degradation of standard generalization performance in adversarially trained models remains an open problem. This paper attempts to resolve this issue through the lens of model complexity. First, we leverage the Fisher-Rao norm, a geometrically invariant metric for model complexity, to establish non-trivial bounds on the Cross-Entropy-Loss-based Rademacher complexity of a ReLU-activated Multi-Layer Perceptron. Then we generalize a complexity-related variable that is sensitive to changes in model width and to the trade-off factors in adversarial training. Moreover, extensive empirical evidence confirms that this variable correlates strongly with the generalization gap of the Cross-Entropy loss between adversarially trained and standard-trained models, especially during the initial and final phases of training. Building upon this observation, we propose a novel regularization framework, called Logit-Oriented Adversarial Training (LOAT), which can mitigate the trade-off between robustness and accuracy while imposing only a negligible increase in computational overhead. Our extensive experiments demonstrate that the proposed regularization strategy can boost the performance of the prevalent adversarial training algorithms, including PGD-AT, TRADES, TRADES (LSE), MART, and DM-AT, across various network architectures. Our code will be available at https://github.com/TrustAI/LOAT.
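As a concrete illustration of the complexity measure the abstract builds on, the sketch below estimates the Fisher-Rao norm of a bias-free ReLU MLP using the homogeneity identity ⟨θ, ∇_θℓ⟩ = L·⟨∂ℓ/∂f, f_θ(x)⟩ for depth-L ReLU networks. It is a minimal, illustrative example in PyTorch under that assumption; the class and function names (SimpleMLP, fisher_rao_norm) are hypothetical and this is not the LOAT implementation from the repository above.

```python
# Illustrative sketch only: estimating the Fisher-Rao norm of a bias-free
# ReLU MLP via the homogeneity identity <theta, grad_theta loss> = L * <dl/df, f(x)>.
# Names (SimpleMLP, fisher_rao_norm) are hypothetical, not taken from the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleMLP(nn.Module):
    """Bias-free ReLU MLP; the depth L counts the number of weight matrices."""

    def __init__(self, dims=(784, 512, 512, 10)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(d_in, d_out, bias=False)
            for d_in, d_out in zip(dims[:-1], dims[1:])
        )

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i < len(self.layers) - 1:
                x = F.relu(x)
        return x


@torch.no_grad()
def fisher_rao_norm(model, inputs, targets):
    """Estimate ||theta||_FR = L * sqrt(E[ <dl/df, f(x)>^2 ]).

    For a bias-free ReLU network the output is L-homogeneous in the
    parameters, so <theta, grad_theta loss> = L * <dl/df, f(x)> and the
    norm can be estimated without backpropagating through the weights.
    """
    L = len(model.layers)
    logits = model(inputs)                                   # f_theta(x)
    # d(cross-entropy)/d(logits) = softmax(f) - one_hot(y)
    dl_df = F.softmax(logits, dim=1) - F.one_hot(targets, logits.size(1)).float()
    inner = (dl_df * logits).sum(dim=1)                      # <dl/df, f(x)> per example
    return (L * inner.pow(2).mean().sqrt()).item()


if __name__ == "__main__":
    model = SimpleMLP()
    x = torch.randn(128, 784)
    y = torch.randint(0, 10, (128,))
    print(f"Fisher-Rao norm estimate: {fisher_rao_norm(model, x, y):.4f}")
```

In this sketch the expectation is taken over a mini-batch; the paper's bounds and the LOAT regularizer are built on this norm but are not reproduced here.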

Updated: 2024-03-27