SAM: Query-efficient Adversarial Attacks against Graph Neural Networks
ACM Transactions on Privacy and Security (IF 2.3) | Pub Date: 2023-11-13 | DOI: 10.1145/3611307
Chenhan Zhang, Shiyao Zhang, James J.Q. Yu, Shui Yu

Recent studies indicate that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. In particular, adversarially perturbing the graph structure, e.g., flipping edges, can cause a marked drop in GNN accuracy. In practice, efficiency and stealthiness are two key metrics for evaluating an attack method. However, most prevailing graph-structure-based attack methods are query intensive, which limits their practical use. Furthermore, while the stealthiness of perturbations has been discussed in previous studies, most of that work focuses on attack scenarios targeting a single node. To fill this research gap, we present a global attack method against GNNs, Saturation adversarial Attack with Meta-gradient (SAM), in this article. We first propose an enhanced meta-learning-based optimization method to obtain useful gradient information concerning graph structural perturbations. Then, leveraging the notion of saturation attack, we devise an effective algorithm that determines the perturbations based on the derived meta-gradients. Meanwhile, to ensure stealthiness, we introduce a similarity constraint that suppresses the number of perturbed edges. Thorough experiments demonstrate that our method can effectively degrade the accuracy of GNNs with a small number of queries. While achieving a higher misclassification rate, the perturbations produced by our method also remain unnoticeable.
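To make the pipeline the abstract outlines more concrete, the sketch below illustrates the general recipe rather than the authors' released code: meta-gradients with respect to the adjacency matrix are obtained by differentiating through a short, unrolled training loop of a simple GCN-style surrogate; a saturation-style step then flips many high-scoring edges in one batch so the victim model only needs to be queried rarely; and a similarity constraint stops the attack once the perturbed structure drifts too far from the clean graph. Function and parameter names such as `surrogate_meta_gradient`, `flip_budget`, and `similarity_floor` are illustrative assumptions, not from the paper.

```python
import torch
import torch.nn.functional as F


def normalize_adj(adj):
    """Symmetric GCN normalization D^{-1/2} (A + I) D^{-1/2} on a dense float adjacency."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)


def surrogate_meta_gradient(adj, feats, labels, train_mask,
                            hidden=16, inner_steps=10, lr=0.1):
    """Unroll a few training steps of a two-layer GCN-style surrogate while keeping the
    computation graph, then return the gradient of the attack loss w.r.t. the adjacency."""
    adj = adj.clone().requires_grad_(True)
    w1 = torch.empty(feats.size(1), hidden, requires_grad=True)
    w2 = torch.empty(hidden, int(labels.max()) + 1, requires_grad=True)
    torch.nn.init.xavier_uniform_(w1)
    torch.nn.init.xavier_uniform_(w2)
    params = [w1, w2]
    for _ in range(inner_steps):
        a_norm = normalize_adj(adj)
        logits = a_norm @ (a_norm @ feats @ params[0]) @ params[1]
        loss = F.cross_entropy(logits[train_mask], labels[train_mask])
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - lr * g for p, g in zip(params, grads)]  # unrolled inner update
    a_norm = normalize_adj(adj)
    logits = a_norm @ (a_norm @ feats @ params[0]) @ params[1]
    # Attack objective: raise the loss of the surrogate after (re)training.
    atk_loss = F.cross_entropy(logits[train_mask], labels[train_mask])
    return torch.autograd.grad(atk_loss, adj)[0]


def saturation_attack(adj, feats, labels, train_mask,
                      flip_budget=50, similarity_floor=0.95):
    """Flip many edges in one batch (saturation-style, few queries), highest
    meta-gradient score first, stopping once a similarity constraint is violated."""
    meta_grad = surrogate_meta_gradient(adj, feats, labels, train_mask)
    # Inserting a missing edge with positive gradient, or deleting an existing edge
    # with negative gradient, both increase the loss; score every candidate flip.
    score = torch.triu(meta_grad * (1 - 2 * adj), diagonal=1)  # undirected, no self-loops
    order = torch.argsort(score.flatten(), descending=True)
    perturbed, n = adj.clone(), adj.size(0)
    for idx in order[:flip_budget]:
        if score.flatten()[idx] <= 0:          # no more loss-increasing flips
            break
        i, j = divmod(int(idx), n)
        perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]
        inter = (perturbed * adj).sum()
        union = ((perturbed + adj) > 0).float().sum().clamp(min=1.0)
        if inter / union < similarity_floor:   # similarity constraint on the edge set
            perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]  # undo and stop
            break
    return perturbed
```

With dense float tensors for `adj` and `feats`, calling `saturation_attack(adj, feats, labels, train_mask)` yields a perturbed adjacency that can then be evaluated against the victim GNN. The paper's actual SAM algorithm differs in its enhanced meta-learning optimization and in how the similarity constraint is formulated; this sketch only mirrors the high-level structure described in the abstract.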



Updated: 2023-11-13