A survey on model-based reinforcement learning

  • Review
  • Published in: Science China Information Sciences

Abstract

Reinforcement learning (RL) solves sequential decision-making problems by interacting with the environment in a trial-and-error manner. Although RL excels at complex video games that permit numerous trial-and-error attempts, errors are always undesirable in real-world applications. To improve sample efficiency and thus reduce errors, model-based reinforcement learning (MBRL) is regarded as a promising direction: it constructs environment models in which trial-and-error can take place without incurring real costs. In this survey, we investigate MBRL with a particular focus on recent advances in deep RL. For a non-tabular environment, there is always a generalization error between the learned model and the actual environment. It is therefore crucial to analyze the discrepancy between policy training in the environment model and that in the actual environment, which in turn guides algorithm design for better model learning, model utilization, and policy training. In addition, we review recent model-based techniques in other forms of RL, such as offline RL, goal-conditioned RL, multi-agent RL, and meta-RL. Furthermore, we discuss the applicability and advantages of MBRL for real-world tasks. Finally, we conclude with a discussion of promising future directions for MBRL. We believe that MBRL has great unrealized potential in real-world applications, and we hope this survey will encourage further research on MBRL.
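
To make the loop described above concrete, the following is a minimal, illustrative Dyna-style MBRL sketch in Python. It is not the algorithm of any specific paper surveyed here; the `env`, `model`, and `policy` interfaces (`reset`/`step`, `fit`/`predict`, `act`/`update`) are assumptions made purely for illustration.

```python
# A minimal Dyna-style MBRL sketch (illustrative only; interfaces are assumed).
import numpy as np

def dyna_style_mbrl(env, model, policy, num_iterations=100,
                    real_steps_per_iter=1000, imagined_rollouts=400, horizon=5):
    """Alternate between (1) collecting real data, (2) fitting the dynamics
    model, and (3) training the policy on short imagined rollouts."""
    real_buffer, imagined_buffer = [], []
    for _ in range(num_iterations):
        # 1. Interact with the real environment under the current policy.
        state = env.reset()
        for _ in range(real_steps_per_iter):
            action = policy.act(state)
            next_state, reward, done, _ = env.step(action)
            real_buffer.append((state, action, reward, next_state, done))
            state = env.reset() if done else next_state

        # 2. Fit the dynamics model to all real transitions (supervised learning).
        model.fit(real_buffer)

        # 3. Generate short rollouts inside the learned model, starting from
        #    states sampled out of the real buffer; a small horizon limits the
        #    compounding of model error.
        for _ in range(imagined_rollouts):
            state = real_buffer[np.random.randint(len(real_buffer))][0]
            for _ in range(horizon):
                action = policy.act(state)
                next_state, reward = model.predict(state, action)
                imagined_buffer.append((state, action, reward, next_state, False))
                state = next_state

        # 4. Update the policy with any model-free learner (e.g., SAC or PPO)
        #    on the mixture of real and imagined transitions.
        policy.update(real_buffer + imagined_buffer)
    return policy
```

Keeping the imagined-rollout horizon short is a common way to limit the effect of model generalization error when training the policy on model-generated data.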

Acknowledgements

This work was supported by the National Key Research and Development Program of China (Grant No. 2020AAA0107200) and the National Natural Science Foundation of China (Grant Nos. 61876077, 62076161).

Author information

Corresponding authors

Correspondence to Weinan Zhang or Yang Yu.


About this article

Cite this article

Luo, FM., Xu, T., Lai, H. et al. A survey on model-based reinforcement learning. Sci. China Inf. Sci. 67, 121101 (2024). https://doi.org/10.1007/s11432-022-3696-5
