Masked Autoencoders are PDE Learners
arXiv - CS - Machine Learning. Pub Date: 2024-03-26, DOI: arxiv-2403.17728
Anthony Zhou, Amir Barati Farimani

Neural solvers for partial differential equations (PDEs) have great potential, yet their practicality is currently limited by their generalizability. PDEs evolve over broad scales and exhibit diverse behaviors; predicting these phenomena will require learning representations across a wide variety of inputs, which may encompass different coefficients, geometries, or equations. As a step towards generalizable PDE modeling, we adapt masked pretraining for PDEs. Through self-supervised learning across PDEs, masked autoencoders can learn useful latent representations for downstream tasks. In particular, masked pretraining can improve coefficient regression and timestepping performance of neural solvers on unseen equations. We hope that masked pretraining can emerge as a unifying method across large, unlabeled, and heterogeneous datasets to learn latent physics at scale.
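To make the idea of masked pretraining on PDE data concrete, the sketch below shows one simplified, hypothetical variant applied to discretized 1D solution snapshots: random patches of the field are hidden, a small transformer encodes the corrupted input, and the reconstruction loss is taken only over the masked patches. Masked patches here are replaced by a learned mask token before encoding (BERT-style) rather than dropped as in the original MAE; all class names, shapes, and hyperparameters are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: masked-autoencoder pretraining on 1D PDE snapshots (hypothetical setup).
import torch
import torch.nn as nn

class MaskedPDEAutoencoder(nn.Module):
    def __init__(self, patch_size=16, dim=128, depth=4, heads=4, max_patches=64):
        super().__init__()
        self.patch_size = patch_size
        self.embed = nn.Linear(patch_size, dim)                 # patchify raw field values
        self.pos = nn.Parameter(torch.zeros(1, max_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))  # learned placeholder for hidden patches
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)
        dec_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, 2)
        self.head = nn.Linear(dim, patch_size)                  # project back to field values

    def forward(self, u, mask_ratio=0.75):
        # u: (batch, length) discretized PDE solution at one timestep
        B, L = u.shape
        patches = u.view(B, L // self.patch_size, self.patch_size)
        tokens = self.embed(patches)                            # (B, N, dim)
        N = tokens.shape[1]
        tokens = tokens + self.pos[:, :N]

        # hide a random fraction of patches by swapping in the mask token
        mask = torch.rand(B, N, device=u.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, N, -1), tokens)

        latent = self.encoder(tokens)                           # reusable representation for downstream tasks
        recon = self.head(self.decoder(latent))                 # (B, N, patch_size)

        # reconstruction loss only on the masked patches
        loss = ((recon - patches) ** 2)[mask].mean()
        return loss, latent

# Toy usage: random fields stand in for PDE snapshots during pretraining.
model = MaskedPDEAutoencoder()
u = torch.randn(8, 256)          # batch of 8 snapshots on a 256-point grid
loss, latent = model(u)
loss.backward()
```

After pretraining in this self-supervised fashion, the encoder's latent representation could be reused (e.g., with a small regression head or a timestepping decoder) for the downstream coefficient-regression and timestepping tasks the abstract mentions.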

Updated: 2024-03-27