BapFL: You Can Backdoor Personalized Federated Learning
ACM Transactions on Knowledge Discovery from Data (IF 3.6). Pub Date: 2024-02-23. DOI: 10.1145/3649316
Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao

In federated learning (FL), malicious clients can manipulate the predictions of the trained model through backdoor attacks, posing a significant threat to the security of FL systems. Existing research primarily focuses on backdoor attacks and defenses within the generic federated learning scenario, where all clients collaborate to train a single global model. A recent study by Qin et al. [24] marks the first exploration of backdoor attacks in the personalized federated learning (pFL) scenario, where each client constructs a personalized model based on its local data. Notably, that study demonstrates that pFL methods with parameter decoupling can significantly enhance robustness against backdoor attacks. In this paper, however, we show that pFL methods with parameter decoupling are still vulnerable to backdoor attacks. Their apparent resistance stems from the heterogeneity between the classifiers of malicious clients and those of benign clients. We identify two direct causes of this classifier heterogeneity: (1) data heterogeneity inherently exists among clients, and (2) poisoning by malicious clients further exacerbates it. To address these issues, we propose a two-pronged attack method, BapFL, which comprises two simple yet effective strategies: (1) poisoning only the feature encoder while keeping the classifier fixed, and (2) diversifying the classifier through noise injection to simulate the classifiers of benign clients. Extensive experiments on three benchmark datasets under varying conditions demonstrate the effectiveness of the proposed attack. Additionally, we evaluate six widely used defense methods and find that BapFL still poses a significant threat even in the presence of the best of them, Multi-Krum. We hope to inspire further research on attack and defense strategies in pFL scenarios. The code is available at: https://github.com/BapFL/code.
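For concreteness, below is a minimal PyTorch-style sketch of the two strategies as described in the abstract. The encoder/classifier split, the apply_trigger function, the target label, and all hyperparameters are illustrative assumptions, not the authors' actual implementation (see the linked repository for that).

```python
import copy
import torch
import torch.nn as nn

def malicious_local_update(model, loader, apply_trigger, target_label,
                           lr=0.01, noise_std=0.05, epochs=1):
    # Assumes model.forward(x) == model.classifier(model.encoder(x)).
    encoder, classifier = model.encoder, model.classifier

    # Strategy 1: poison only the feature encoder while keeping the
    # classifier fixed, so it does not drift away from benign clients'.
    for p in classifier.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(encoder.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in loader:
            x_bd = apply_trigger(x.clone())           # stamp the backdoor trigger
            y_bd = torch.full_like(y, target_label)   # relabel to the attacker's target
            # Train on clean and triggered samples jointly.
            loss = criterion(model(x), y) + criterion(model(x_bd), y_bd)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Strategy 2: diversify the uploaded classifier with Gaussian noise
    # (noise_std is an illustrative value) to mimic the heterogeneity of
    # benign clients' classifiers.
    update = copy.deepcopy(model)
    with torch.no_grad():
        for p in update.classifier.parameters():
            p.add_(torch.randn_like(p) * noise_std)
    return update.state_dict()
```

The key design point, per the abstract, is that both strategies target the same cause of pFL's apparent robustness: a malicious classifier that looks too different from benign ones is what defenses detect, so the attack confines poisoning to the encoder and deliberately randomizes the classifier before upload.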




Updated: 2024-02-24