EPIDL: Towards efficient and privacy‐preserving inference in deep learning
Concurrency and Computation: Practice and Experience (IF 2) Pub Date: 2024-04-04, DOI: 10.1002/cpe.8110
Chenfei Nie 1 , Zhipeng Zhou 1 , Mianxiong Dong 2 , Kaoru Ota 2 , Qiang Li 1

Summary: Deep learning has shown great potential in real-world applications. However, users (clients) who want to use a deep learning application must send their data to the deep learning service provider (server), which exposes the client's data to the server and raises serious privacy concerns. To address this issue, we propose EPIDL, a protocol for efficient and secure inference on neural networks. The protocol lets the client and server complete inference tasks via secure multi-party computation (MPC) while keeping the client's private data secret from the server. The contributions of EPIDL can be summarized as follows. First, we optimize the convolution and matrix-multiplication operations to reduce total communication. Second, we propose a new method, based on oblivious transfer and garbled circuits, for truncation after secure multiplication; it never fails and can be executed together with the ReLU activation function. Finally, we replace complex activation functions with MPC-friendly approximations. We implement EPIDL in C++ and accelerate local matrix computation with CUDA. We evaluate its efficiency on privacy-preserving deep learning inference tasks: for example, a secure inference on the MNIST dataset with the LeNet model takes about 0.14 s. Compared with state-of-the-art work, EPIDL is 1.8–98× faster over LAN and WAN, respectively. The experimental results show that EPIDL is efficient and privacy-preserving.
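The truncation point in the abstract is subtle. In fixed-point MPC over a ring, every multiplication doubles the number of fractional bits, so the parties must truncate the shared product; but shifting each share locally can produce a wildly wrong result. The sketch below is illustrative only and is not EPIDL's OT/garbled-circuit protocol: it shows two-party additive sharing over Z_{2^64} with a hypothetical 13-bit fixed-point encoding, and why naive per-share shifting fails, which is what motivates a dedicated secure truncation step.

```python
import random

random.seed(0)       # reproducible shares for the demonstration
MOD = 1 << 64        # secret-sharing ring Z_{2^64}
FRAC = 13            # fixed-point fractional bits (illustrative choice)

def encode(x):
    """Map a real number to a fixed-point ring element."""
    return int(round(x * (1 << FRAC))) % MOD

def decode(v):
    """Inverse of encode; the top half of the ring represents negatives."""
    if v >= MOD // 2:
        v -= MOD
    return v / (1 << FRAC)

def share(v):
    """Additively split v into two shares: v = s0 + s1 (mod 2^64)."""
    s0 = random.randrange(MOD)
    return s0, (v - s0) % MOD

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

def trunc_signed(v):
    """Arithmetic right shift of a ring element: what truncation must compute."""
    if v >= MOD // 2:
        v -= MOD                 # interpret as signed
    return (v >> FRAC) % MOD     # Python's >> floors, i.e. arithmetic shift

# A fixed-point product carries 2*FRAC fractional bits and must be truncated.
x, y = 1.5, -2.25
prod = (encode(x) * encode(y)) % MOD
s0, s1 = share(prod)

# Naive "local" truncation: each party logically shifts its own share.
# The sign bit and the carry between the shares are destroyed.
naive = decode(((s0 >> FRAC) + (s1 >> FRAC)) % MOD)

# Correct truncation of the underlying value -- but doing it this way
# reconstructs the secret in the clear, which is exactly what a secure
# truncation protocol (e.g. one built from OT and garbled circuits) avoids.
exact = decode(trunc_signed(reconstruct(s0, s1)))
```

Here `exact` recovers 1.5 × (−2.25) = −3.375, while `naive` is off by many orders of magnitude, illustrating why truncation needs its own protocol rather than a local shift.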
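The third contribution, replacing a complex activation with an MPC-friendly approximation, typically means a low-degree polynomial, since additions and multiplications are cheap on secret shares while exponentials are not. The following is a hypothetical sketch of that general idea, not the paper's actual approximation: a least-squares fit of a degree-3 odd polynomial to the sigmoid over a clipped input range, solved via the 2×2 normal equations in plain Python.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Sample sigmoid on a bounded range; inputs are assumed clipped to it.
xs = [i / 100.0 for i in range(-400, 401)]
g = [sigmoid(x) - 0.5 for x in xs]       # odd part around (0, 0.5)

# Least-squares fit g(x) ~ a*x + b*x^3 via the normal equations.
s2 = sum(x ** 2 for x in xs)
s4 = sum(x ** 4 for x in xs)
s6 = sum(x ** 6 for x in xs)
t1 = sum(x * gi for x, gi in zip(xs, g))
t3 = sum(x ** 3 * gi for x, gi in zip(xs, g))
det = s2 * s6 - s4 * s4
a = (t1 * s6 - t3 * s4) / det
b = (s2 * t3 - s4 * t1) / det

def sigmoid_poly(x):
    """Degree-3 surrogate: only additions and multiplications, so it can be
    evaluated directly on secret shares inside an MPC protocol."""
    return 0.5 + a * x + b * x ** 3

max_err = max(abs(sigmoid_poly(x) - sigmoid(x)) for x in xs)
```

On [−4, 4] this cubic tracks the sigmoid to within a few hundredths, which is often acceptable for inference; the trade-off between polynomial degree (more secure multiplications) and approximation error is the usual design knob for MPC-friendly activations.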
