Paper (Open access)

An improved DDPG algorithm based on evolution-guided transfer in reinforcement learning


Published under licence by IOP Publishing Ltd
Citation: Xueqian Bai and Haonian Wang 2024 J. Phys.: Conf. Ser. 2711 012016. DOI: 10.1088/1742-6596/2711/1/012016


Abstract

Deep Reinforcement Learning (DRL) algorithms help agents act automatically in sophisticated control tasks. However, when Deep Neural Networks (DNNs) are applied, DRL is challenged by sparse rewards and long exploration times. Evolutionary Algorithms (EAs), a family of black-box optimization techniques, apply well to single-agent real-world problems and are not troubled by temporal credit assignment; however, both approaches demand large amounts of sampled data. To facilitate research on DRL for a pursuit-evasion game, this paper contributes a novel policy optimization algorithm named Evolutionary Algorithm Transfer - Deep Deterministic Policy Gradient (EAT-DDPG). The proposed EAT-DDPG incorporates parameter transfer, initializing the DNN of DDPG with parameters derived by an EA. Meanwhile, before the EA phase ends, the diverse experiences it produces are stored in the replay buffer of DDPG. EAT-DDPG is an improved version of DDPG that aims to maximize the reward of the DDPG-trained agent within finite episodes. The experimental environment is a pursuit-evasion scenario in which the evader moves with a fixed policy, and the results show that the agent explores policies more efficiently with the proposed EAT-DDPG during the learning process.
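The two transfer mechanisms the abstract describes (initializing DDPG's actor with EA-evolved parameters, and seeding DDPG's replay buffer with experiences collected during the EA phase) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the environment, the linear policy, the `(mu, lambda)`-style evolution loop, and all names (`rollout`, `OPT`, `replay_buffer`) are assumptions introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the pursuit task: reward is higher when the scalar
# action is close to a hidden optimal value OPT. Purely illustrative.
OPT = 0.7

def rollout(theta, n_steps=5):
    """Evaluate a linear policy theta; return total reward and transitions."""
    transitions, total = [], 0.0
    for _ in range(n_steps):
        s = rng.normal(size=3)              # random state observation
        a = float(np.tanh(s @ theta))       # deterministic policy output
        r = -abs(a - OPT)                   # closer to OPT -> higher reward
        transitions.append((s, a, r))
        total += r
    return total, transitions

# --- Phase 1: evolutionary search, while logging every transition ---
pop = [rng.normal(size=3) for _ in range(10)]
replay_buffer = []                          # later shared with DDPG
for gen in range(20):
    scored = []
    for theta in pop:
        fitness, trans = rollout(theta)
        replay_buffer.extend(trans)         # store EA experiences for DDPG
        scored.append((fitness, theta))
    scored.sort(key=lambda x: -x[0])
    elites = [t for _, t in scored[:3]]     # keep the best individuals
    # Mutate elites with Gaussian noise to form the next population
    pop = [e + 0.1 * rng.normal(size=3) for e in elites for _ in range(4)][:10]

# --- Phase 2: transfer to DDPG ---
best_theta = max(pop, key=lambda t: rollout(t)[0])
ddpg_actor_params = best_theta.copy()       # initialize the actor from the EA
# DDPG training would now start from ddpg_actor_params and sample
# minibatches from the pre-filled replay_buffer instead of an empty one.
```

The key point of the transfer is that DDPG begins learning from a non-random policy and a buffer already populated with diverse EA-generated experiences, rather than exploring from scratch.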


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
