AFL-HCS: asynchronous federated learning based on heterogeneous edge client selection

  • Published in: Cluster Computing

Abstract

Federated learning (FL) is a powerful machine learning paradigm widely applied in edge computing to train models on large, distributed datasets. However, data imbalance, edge dynamics, and resource constraints in edge computing make it difficult to sustain FL efficiency. To address these challenges and improve training effectiveness across heterogeneous devices in unpredictable communication networks, we introduce an asynchronous federated learning framework called AFL-HCS. In AFL-HCS, client updates are aggregated in each epoch in the order in which they arrive at the parameter server. The framework also incorporates a cloud cache that stores client-submitted training progress for use in subsequent rounds of global model updates. This mechanism makes full use of clients' local progress and accelerates improvement of the global model. Experimental results demonstrate that AFL-HCS has significant advantages over the original federated learning protocol: it shortens the duration of federated rounds, accelerates convergence of the global model, and improves the accuracy of the global model, even in unstable edge environments.
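The abstract describes two mechanisms: aggregation of client updates in the order they arrive at the parameter server, and a cloud cache that preserves client progress so it can still contribute to later global rounds. The sketch below illustrates how such a server-side round could be organized; the names (CloudCache, server_round, staleness_weight) and the staleness-based weighting are illustrative assumptions, not the authors' AFL-HCS implementation.

```python
# Minimal sketch (illustrative assumptions, not the AFL-HCS code): the server
# merges the first k client updates to arrive in a round, folds in updates
# cached from earlier rounds, and caches stragglers for the next round.
import queue
import numpy as np


class CloudCache:
    """Holds client updates that missed the current round, for use in later rounds."""
    def __init__(self):
        self._pending = []  # list of (model_vector, round_submitted)

    def add(self, update, round_submitted):
        self._pending.append((update, round_submitted))

    def drain(self):
        pending, self._pending = self._pending, []
        return pending


def staleness_weight(current_round, update_round, alpha=0.5):
    # Assumed decay: the older an update, the smaller its influence on the global model.
    return alpha / (1.0 + current_round - update_round)


def mix(global_model, local_model, weight):
    # Weighted averaging of parameter vectors.
    return (1.0 - weight) * global_model + weight * local_model


def server_round(global_model, update_queue, cache, t, k=5, timeout=10.0):
    """One asynchronous round: merge the first k arrivals, then cached progress."""
    merged = 0
    while merged < k:
        try:
            local_model, submit_round = update_queue.get(timeout=timeout)
        except queue.Empty:
            break  # round ends even if fewer than k updates arrived
        global_model = mix(global_model, local_model, staleness_weight(t, submit_round))
        merged += 1
    # Fold in progress cached from earlier rounds, then clear the cache.
    for cached_model, r in cache.drain():
        global_model = mix(global_model, cached_model, staleness_weight(t, r))
    # Updates still queued missed this round: cache them for the next one.
    while not update_queue.empty():
        late_model, r = update_queue.get_nowait()
        cache.add(late_model, r)
    return global_model


if __name__ == "__main__":
    # Toy demo: three simulated client updates for a 4-parameter model.
    q = queue.Queue()
    for i in range(3):
        q.put((np.full(4, float(i + 1)), 0))  # (local model, round it was computed in)
    model = server_round(np.zeros(4), q, CloudCache(), t=0, k=2, timeout=0.1)
    print(model)
```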


Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.


Funding

This work was supported by Guangdong Province Key Discipline Scientific Research Capability Improvement Project (No. 2022ZDJS093), Innovation Project of Universities in Guangdong Province (No. 2023KTSCX113), Scientific Research Fund of Hunan Provincial Education Department (No. 22B0497), National Natural Science Foundation of China (No. 61602169), and Natural Science Foundation of Hunan Province (No. 2021JJ30278).

Author information

Contributions

Conceptualization: YX, BT; Investigation: BC; Methodology: BT; Supervision: QY; Validation: LZ; Visualization: LZ; Writing—original draft: YX; Writing—review & editing: BT, MT.

Corresponding author

Correspondence to Bing Tang.

Ethics declarations

Competing interests

The authors declare no competing interests with respect to this manuscript.

Ethical approval

No ethical approval was required for this research.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Tang, B., Xiao, Y., Zhang, L. et al. AFL-HCS: asynchronous federated learning based on heterogeneous edge client selection. Cluster Comput (2024). https://doi.org/10.1007/s10586-024-04314-9

  • DOI: https://doi.org/10.1007/s10586-024-04314-9
