Secure Aggregation is Not Private Against Membership Inference Attacks
arXiv - CS - Machine Learning. Pub Date: 2024-03-26, DOI: arxiv-2403.17775
Khac-Hoang Ngo, Johan Östman, Giuseppe Durisi, Alexandre Graell i Amat

Secure aggregation (SecAgg) is a commonly-used privacy-enhancing mechanism in federated learning, affording the server access only to the aggregate of model updates while safeguarding the confidentiality of individual updates. Despite widespread claims regarding SecAgg's privacy-preserving capabilities, a formal analysis of its privacy is lacking, making such presumptions unjustified. In this paper, we delve into the privacy implications of SecAgg by treating it as a local differential privacy (LDP) mechanism for each local update. We design a simple attack wherein an adversarial server seeks to discern which update vector a client submitted, out of two possible ones, in a single training round of federated learning under SecAgg. By conducting privacy auditing, we assess the success probability of this attack and quantify the LDP guarantees provided by SecAgg. Our numerical results unveil that, contrary to prevailing claims, SecAgg offers weak privacy against membership inference attacks even in a single training round. Indeed, it is difficult to hide a local update by adding other independent local updates when the updates are of high dimension. Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection, in federated learning.
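The distinguishing attack the abstract describes can be illustrated with a small numerical sketch. This is not the paper's actual auditing procedure; it is a simplified model in which the adversarial server knows the two candidate update vectors, the other clients' updates are modeled as i.i.d. Gaussian "masking" noise, and the server applies a likelihood-ratio (nearest-candidate) test to the observed aggregate. All function names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def attack_success_rate(d, n_clients=10, sigma=1.0, trials=2000):
    """Estimate how often the server guesses which of two candidate
    d-dimensional updates the target client submitted under SecAgg."""
    # Two candidate updates, assumed known to the adversarial server.
    v0 = np.zeros(d)
    v1 = np.ones(d)  # fixed per-coordinate separation from v0
    hits = 0
    for _ in range(trials):
        true_bit = int(rng.integers(2))
        target = v1 if true_bit else v0
        # The other clients' independent updates act as the masking
        # noise that SecAgg relies on (modeled here as Gaussian).
        others = rng.normal(0.0, sigma, size=(n_clients - 1, d)).sum(axis=0)
        agg = target + others
        # Likelihood-ratio test for Gaussian noise: pick the candidate
        # closer to the aggregate in Euclidean distance.
        guess = int(np.sum((agg - v1) ** 2) < np.sum((agg - v0) ** 2))
        hits += guess == true_bit
    return hits / trials
```

In this toy model the separation between the two hypotheses grows like the square root of the dimension d, while the per-coordinate masking noise stays fixed, so the attack's success rate approaches 1 as d grows. This mirrors the abstract's observation that adding independent local updates cannot hide a high-dimensional update.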

Updated: 2024-03-27