Manipulating Visually Aware Federated Recommender Systems and Its Countermeasures
ACM Transactions on Information Systems (IF 5.6), Pub Date: 2023-12-30, DOI: 10.1145/3630005
Wei Yuan, Shilong Yuan, Chaoqun Yang, Quoc Viet Hung Nguyen, Hongzhi Yin

Federated recommender systems (FedRecs) have been widely explored recently due to their capability to safeguard user data privacy. These systems enable a central server to collaboratively learn recommendation models by sharing public parameters with clients, providing privacy-preserving solutions. However, this collaborative approach also creates a vulnerability that allows adversaries to manipulate FedRecs. Existing works on FedRec security already reveal that items can easily be promoted by malicious users via model poisoning attacks, but all of them mainly focus on FedRecs with only collaborative information (i.e., user–item interactions). We contend that these attacks are effective primarily due to the data sparsity of collaborative signals. In light of this, we propose a method to address data sparsity and model poisoning threats by incorporating product visual information. Intriguingly, our empirical findings demonstrate that the inclusion of visual information renders all existing model poisoning attacks ineffective.
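The collaborative-learning setup described above can be sketched as a single FedRec training round. This is a minimal illustrative toy, not the paper's code: all names, the dot-product scoring model, and the squared-error loss are assumptions. The key property it shows is that user embeddings and raw interactions stay on each client, while only item-embedding gradients (the "public parameters") reach the server.

```python
import numpy as np

# Hypothetical sketch of one federated recommendation round.
# Private data (user embeddings, interactions) never leaves a client;
# only item-embedding gradients are sent to the server for averaging.

rng = np.random.default_rng(0)
n_items, dim, lr = 5, 4, 0.1

item_emb = rng.normal(size=(n_items, dim))   # shared (public) parameters
init = item_emb.copy()                       # snapshot for comparison

def client_update(user_emb, interactions, item_emb):
    """Local step: item-embedding gradient of a dot-product model."""
    grad = np.zeros_like(item_emb)
    for item, label in interactions:
        pred = user_emb @ item_emb[item]     # predicted preference
        err = pred - label                   # squared-error residual
        grad[item] += err * user_emb         # d(loss)/d(item_emb[item])
    return grad

# Two clients; each holds its own user embedding and interactions.
clients = [
    (rng.normal(size=dim), [(0, 1.0), (2, 0.0)]),
    (rng.normal(size=dim), [(1, 1.0), (2, 1.0)]),
]

grads = [client_update(u, inter, item_emb) for u, inter in clients]
item_emb -= lr * np.mean(grads, axis=0)      # server-side aggregation
```

Because gradients are aggregated verbatim, a malicious client can inject arbitrary item-embedding updates into the same channel, which is exactly the model poisoning surface the abstract refers to.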

Nevertheless, the integration of visual information also introduces a new avenue for adversaries to manipulate federated recommender systems, as this information typically originates from external sources. To assess such threats, we propose a novel form of poisoning attack tailored for visually aware FedRecs, namely image poisoning attacks, where adversaries can gradually modify the uploaded image with human-unaware perturbations to manipulate item ranks during the FedRecs’ training process. Moreover, we provide empirical evidence showcasing a heightened threat when image poisoning attacks are combined with model poisoning attacks, resulting in easier manipulation of federated recommender systems. To ensure the safe utilization of visual information, we employ a diffusion model in visually aware FedRecs to purify each uploaded image and detect the adversarial images. Extensive experiments conducted with two FedRecs on two datasets demonstrate the effectiveness and generalization of our proposed attacks and defenses.
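The image poisoning idea can be illustrated with a toy gradient-sign perturbation (an assumed setup, not the paper's implementation: the linear visual scorer, variable names, and budget `eps` are all hypothetical). The attacker repeatedly nudges the item image in the direction that raises its visual score, while projecting back into a small eps-ball around the original so the change stays hard for humans to notice.

```python
import numpy as np

# Illustrative sketch of an image poisoning attack against a toy
# linear visual scorer: small sign-based steps raise the item's
# score while the perturbation stays within an eps budget.

rng = np.random.default_rng(1)
image = rng.uniform(size=(8, 8))     # toy grayscale item image in [0, 1]
w = rng.normal(size=(8, 8))          # toy visual scoring weights
eps, steps, alpha = 0.03, 5, 0.01    # budget, iterations, step size

def score(img):
    return float(np.sum(w * img))    # higher score -> higher item rank

poisoned = image.copy()
for _ in range(steps):
    grad = w                                            # d(score)/d(image)
    poisoned = poisoned + alpha * np.sign(grad)         # gradient-sign step
    poisoned = np.clip(poisoned, image - eps, image + eps)  # stay subtle
    poisoned = np.clip(poisoned, 0.0, 1.0)              # valid pixel range
```

In a real visually aware FedRec the scorer would be a deep image encoder and the gradient would come from backpropagation, but the mechanism is the same: many tiny, bounded edits accumulated across training rounds shift the item's rank. This is also why purification-style defenses (such as the diffusion-based one described above) target the perturbation itself rather than the model updates.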




Updated: 2023-12-30