Distributed consensus problem with caching on federated learning framework
International Journal of Distributed Sensor Networks (IF 2.3), Pub Date: 2022-04-22, DOI: 10.1177/15501329221092932
Xin Yan, Yiming Qin, Xiaodong Hu, Xiaoling Xiao

The federated learning framework facilitates broader application of deep learning algorithms on existing network architectures, where model parameters are aggregated in a centralized manner. However, some federated learning participants are often inaccessible, for example, when they suffer a power shortage or enter a dormant state. This forces us to explore the possibility of performing parameter aggregation in an ad hoc manner based on consensus computing. Moreover, since a caching mechanism is indispensable to any federated learning mobile node, it is necessary to investigate its connection with consensus computing. In this article, we first propose a novel federated learning paradigm that supports an ad hoc operation mode for federated learning participants. Second, a discrete-time dynamic equation and its control law are formulated to satisfy the demands of the federated learning framework, with a quantized caching scheme designed to mask the uncertainties arising from both asynchronous updates and measurement noise. Then, the consensus conditions and the convergence of the consensus protocol are derived analytically, and a quantized caching strategy that optimizes the convergence speed is provided. Our major contribution is to establish the basic theory of the distributed consensus problem for the federated learning framework; the theoretical results are validated by numerical simulations.
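
The abstract does not give the paper's exact dynamic equation, but the protocol it describes belongs to the family of discrete-time consensus updates driven by quantized, cached neighbor states. The following minimal sketch illustrates that family, assuming a standard average-consensus rule x_i(k+1) = x_i(k) + eps * sum_j a_ij (q(x_j(k)) - q(x_i(k))) with a uniform quantizer q; the graph, gain eps, and quantization step are illustrative choices, not the authors' formulation.

# Minimal sketch of discrete-time average consensus with quantized cached
# neighbor states. All modeling choices below (ring graph, uniform quantizer,
# gain eps) are illustrative assumptions, not the paper's exact scheme.
import numpy as np

def quantize(x, step=0.05):
    """Uniform quantizer standing in for the quantized caching scheme."""
    return step * np.round(x / step)

# Undirected ring of 4 federated learning participants (adjacency matrix).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)                    # node degrees
eps = 0.2                              # consensus gain, chosen below 1/max(deg)
x = np.array([1.0, 3.0, -2.0, 4.0])    # local model parameters (one scalar per node)
cache = quantize(x)                    # each node caches quantized states

for k in range(60):
    # x_i(k+1) = x_i(k) + eps * sum_j a_ij * (q(x_j(k)) - q(x_i(k)))
    x = x + eps * (A @ cache - deg * cache)
    cache = quantize(x)                # refresh the quantized cache

print(x)  # entries approach the initial average 1.5, up to quantization error

Running the loop drives all four states toward the average of the initial values; the residual disagreement is bounded by the quantization step, which is the kind of trade-off between caching granularity and convergence speed that the paper analyzes.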




Updated: 2022-04-22