Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression Against Heterogeneous Attacks Toward AI Software Deployment
IEEE Transactions on Software Engineering (IF 7.4), Pub Date: 2024-01-01, DOI: 10.1109/tse.2023.3348515
Jie Zhu, Leye Wang, Xiao Han, Anmin Liu, Tao Xie
The size of deep learning models in artificial intelligence (AI) software is increasing rapidly, hindering their large-scale deployment on resource-restricted devices (e.g., smartphones). To mitigate this issue, AI software compression plays a crucial role: it aims to shrink model size while maintaining high performance. However, the intrinsic defects of a big model may be inherited by the compressed one. Such defects can be easily exploited by adversaries, since a compressed model is usually deployed on a large number of devices without adequate protection. In this article, we address the safe model compression problem from the perspective of safety-performance co-optimization. Specifically, inspired by the test-driven development (TDD) paradigm in software engineering, we propose a test-driven sparse training framework called SafeCompress. By simulating the attack mechanism as a safety test, SafeCompress can automatically compress a big model into a small one following the dynamic sparse training paradigm. Then, considering two representative and heterogeneous attack mechanisms, i.e., black-box and white-box membership inference attacks, we develop two concrete instances called BMIA-SafeCompress and WMIA-SafeCompress. Further, we implement another instance, MMIA-SafeCompress, by extending SafeCompress to defend against adversaries who conduct black-box and white-box membership inference attacks simultaneously. We conduct extensive experiments on five datasets covering both computer vision and natural language processing tasks. The results show the effectiveness and generalizability of our framework. We also discuss how to adapt SafeCompress to attacks other than membership inference, demonstrating the flexibility of SafeCompress.
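The core idea described in the abstract, running a simulated attack as a safety test inside the compression loop so that model size, task performance, and leakage are checked together, can be sketched roughly as below. This is a minimal illustration under simplifying assumptions, not the authors' SafeCompress implementation: it uses one-shot magnitude pruning in place of dynamic sparse training, and a toy confidence-gap probe in place of a full membership inference attack. Helper names such as `prune_smallest_weights` and `membership_inference_advantage` are hypothetical and introduced only for this example.

```python
# Illustrative sketch of a "compress, then safety-test" loop (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def prune_smallest_weights(model: nn.Module, sparsity: float) -> None:
    """Magnitude pruning: zero out the smallest-magnitude weights in each Linear layer.
    (A stand-in for the dynamic sparse training used in the paper.)"""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear):
                k = int(module.weight.numel() * sparsity)
                if k == 0:
                    continue
                threshold = module.weight.abs().flatten().kthvalue(k).values
                module.weight.mul_((module.weight.abs() > threshold).float())


def membership_inference_advantage(model, member_x, nonmember_x) -> float:
    """Toy black-box safety test: compare prediction confidence on training (member)
    vs. held-out (non-member) inputs. A larger gap suggests more membership leakage."""
    model.eval()
    with torch.no_grad():
        conf_member = F.softmax(model(member_x), dim=1).max(dim=1).values.mean()
        conf_nonmember = F.softmax(model(nonmember_x), dim=1).max(dim=1).values.mean()
    return (conf_member - conf_nonmember).item()


def train_one_round(model, x, y, epochs=5, lr=1e-2) -> float:
    """Full-batch training for a few epochs; returns the last task loss."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Synthetic data standing in for a real task.
    x_train, y_train = torch.randn(256, 20), torch.randint(0, 2, (256,))
    x_holdout = torch.randn(256, 20)

    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

    for round_idx in range(5):
        task_loss = train_one_round(model, x_train, y_train)          # performance objective
        prune_smallest_weights(model, sparsity=0.5)                   # keep the model small
        leakage = membership_inference_advantage(model, x_train, x_holdout)  # safety test
        print(f"round {round_idx}: loss={task_loss:.3f}, MIA advantage={leakage:.3f}")
        # A bi-objective scheme would now adjust the next round (e.g., which weights to
        # regrow, or how strongly to regularize) based on both the loss and the leakage signal.
```

In the actual framework the safety test is a full black-box and/or white-box membership inference attack and the sparsity pattern evolves across rounds; the loop above only conveys how the two objectives are measured and fed back jointly.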
