Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks
ACM Transactions on Privacy and Security (IF 2.3), Pub Date: 2023-06-26, DOI: 10.1145/3592800
Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis

Collaborative machine learning settings such as federated learning can be susceptible to adversarial interference and attacks. One such class is model inversion attacks, in which the adversary reverse-engineers the model to disclose its training data. Previous implementations of this attack typically rely only on the shared data representations and ignore adversarial priors, or require specific layers to be present in the target model, which reduces the potential attack surface. In this work, we propose a novel context-agnostic model inversion framework that builds on the foundations of gradient-based inversion attacks but additionally exploits the features and the style of the data controlled by an in-the-network adversary. Our technique outperforms existing gradient-based approaches both qualitatively and quantitatively across all training settings, proving particularly effective on collaborative medical imaging tasks. Finally, we demonstrate that our method achieves significant success on two downstream tasks: sensitive feature inference and facial recognition spoofing.
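The core idea, matching the gradients shared in federated training while regularising the reconstruction with priors the adversary already holds, can be illustrated with a minimal PyTorch sketch. Everything below (the toy network, the feature-statistics prior, the 0.1 prior weight, and the assumption that the label has been recovered) is an illustrative assumption in the style of deep-leakage-from-gradients attacks, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy target model standing in for the collaboratively trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64),
                      nn.ReLU(), nn.Linear(64, 10))

# Victim's private batch (unknown to the attacker) and the gradients it
# produces; in federated learning these gradients are shared with the
# server and, hence, visible to an in-the-network adversary.
x_true = torch.rand(1, 1, 32, 32)
y_true = torch.tensor([3])  # assume the label was recovered, as is standard
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())

# Hypothetical adversarial prior: mean feature activations of surrogate
# data the adversary already controls (random data in this sketch).
with torch.no_grad():
    prior_feats = model[:3](torch.rand(16, 1, 32, 32)).mean(0)

# Attacker's reconstruction, optimised so that (a) its gradients match
# the shared ones and (b) its features stay close to the prior.
x_hat = torch.rand(1, 1, 32, 32, requires_grad=True)
opt = torch.optim.Adam([x_hat], lr=0.1)
for step in range(300):
    opt.zero_grad()
    grads = torch.autograd.grad(
        F.cross_entropy(model(x_hat), y_true),
        model.parameters(), create_graph=True)
    grad_match = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    prior_loss = F.mse_loss(model[:3](x_hat).mean(0), prior_feats)
    (grad_match + 0.1 * prior_loss).backward()
    opt.step()

print("reconstruction error:", F.mse_loss(x_hat.detach(), x_true).item())
```

In the paper's setting the prior is richer, drawing on the features and the style of adversary-controlled data, but the shape of the optimisation is the same: a gradient-matching term plus a prior term that steers the reconstruction.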




Updated: 2023-06-26