Towards robust neural networks via a global and monotonically decreasing robustness training strategy
Frontiers of Information Technology & Electronic Engineering (IF 3) Pub Date: 2023-11-07, DOI: 10.1631/fitee.2300059
Zhen Liang, Taoran Wu, Wanwei Liu, Bai Xue, Wenjing Yang, Ji Wang, Zhengbin Pang

The robustness of deep neural networks (DNNs) has raised great concern in the academic and industrial communities, especially in safety-critical domains. Instead of verifying whether a robustness property holds for a given neural network, this paper focuses on training neural networks that are robust with respect to given perturbations. State-of-the-art training methods, interval bound propagation (IBP) and CROWN-IBP, perform well for small perturbations, but their performance declines significantly for large perturbations, a phenomenon termed "drawdown risk" in this paper. Specifically, drawdown risk refers to the failure of IBP-family training methods to deliver, for larger perturbations, neural networks as robust as those they produce for smaller perturbations. To alleviate this drawdown risk, we propose a global and monotonically decreasing robustness training strategy that takes multiple perturbations into account during each training epoch (global robustness training) and combines the corresponding robustness losses with monotonically decreasing weights (monotonically decreasing robustness training). Experiments demonstrate that the proposed strategy maintains performance on small perturbations while alleviating the drawdown risk on large perturbations to a great extent. It is also noteworthy that our training method achieves higher model accuracy than the original training methods, indicating that the proposed strategy balances robustness and accuracy more evenly.
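The loss-combination step described above can be sketched in a few lines. This is only an illustrative reading of the abstract, not the authors' implementation: the perturbation list, the geometric decay factor, and the function names (`decreasing_weights`, `combined_robust_loss`) are all assumptions made here for demonstration.

```python
def decreasing_weights(n, decay=0.5):
    """Hypothetical monotonically decreasing weights, normalized to sum to 1.

    Assumes a geometric decay; the paper only requires that the weights
    decrease monotonically across the perturbation levels.
    """
    raw = [decay ** i for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

def combined_robust_loss(robust_losses):
    """Combine per-perturbation robustness losses for one training epoch.

    `robust_losses` is assumed to be ordered from the smallest to the
    largest perturbation radius, so the smallest perturbation receives
    the largest weight.
    """
    weights = decreasing_weights(len(robust_losses))
    return sum(w * loss for w, loss in zip(weights, robust_losses))
```

In a training loop, each entry of `robust_losses` would come from a certified bound computation (e.g., an IBP pass at one perturbation radius), and the combined scalar would be added to the standard classification loss before backpropagation.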




Updated: 2023-11-07