DynAMO: Multi-agent reinforcement learning for dynamic anticipatory mesh optimization with applications to hyperbolic conservation laws
Journal of Computational Physics (IF 4.1), Pub Date: 2024-03-16, DOI: 10.1016/j.jcp.2024.112924
T. Dzanic , K. Mittal , D. Kim , J. Yang , S. Petrides , B. Keith , R. Anderson

We introduce DynAMO, a reinforcement learning paradigm for Dynamic Anticipatory Mesh Optimization. Adaptive mesh refinement is an effective tool for optimizing computational cost and solution accuracy in numerical methods for partial differential equations. However, traditional adaptive mesh refinement approaches for time-dependent problems typically rely only on instantaneous error indicators to guide adaptivity. As a result, standard strategies often require frequent remeshing to maintain accuracy. In the DynAMO approach, multi-agent reinforcement learning is used to discover new local refinement policies that can anticipate and respond to future solution states by producing meshes that deliver more accurate solutions for longer time intervals. By applying DynAMO to discontinuous Galerkin methods for the linear advection and compressible Euler equations in two dimensions, we demonstrate that this new mesh refinement paradigm can outperform conventional threshold-based strategies while also generalizing to different mesh sizes, remeshing and simulation times, and initial conditions.
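The abstract contrasts DynAMO with conventional threshold-based refinement strategies driven by instantaneous error indicators. As a point of reference, a minimal sketch of such a baseline marking rule is shown below; the function name, tolerances, and the +1/0/−1 action encoding are illustrative assumptions, not taken from the paper. In the DynAMO paradigm, this hand-tuned rule would be replaced by a learned policy, with each element acting as an agent mapping local solution observations to a refine/keep/coarsen action.

```python
import numpy as np

def threshold_policy(error, refine_tol=1e-2, coarsen_tol=1e-4):
    """Instantaneous threshold-based marking (conventional AMR baseline):
    refine elements whose error indicator exceeds refine_tol, coarsen
    those below coarsen_tol, and keep the rest.
    Returns +1 (refine), 0 (keep), or -1 (coarsen) per element."""
    actions = np.zeros(error.shape, dtype=int)
    actions[error > refine_tol] = 1
    actions[error < coarsen_tol] = -1
    return actions

# Example: per-element error indicators at one remeshing step
eta = np.array([5e-2, 3e-3, 1e-5, 2e-2])
print(threshold_policy(eta).tolist())  # [1, 0, -1, 1]
```

Because this rule looks only at the current error field, accuracy between remeshing steps degrades as features advect out of refined regions, which is exactly the limitation an anticipatory learned policy targets.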
