Neural Operators for Physical Simulations

The field of neural operators for physical simulations is moving toward more general and transferable models. Recent research focuses on architectures that can transfer knowledge across different partial differential equation (PDE) problems, including extrapolation to unseen parameters and incorporation of new variables. This has produced more capable designs, such as transformer-based models and architectures built on specific mathematical transforms. There is also growing interest in solver-independent frameworks for training neural operators, which reduce the dependence on expensive numerical simulations or experimental data. Together, these developments stand to improve the accuracy and generalizability of neural operator-based simulations, making them more practical for industrial computer-aided engineering applications. Noteworthy papers include:

  • Towards Universal Neural Operators through Multiphysics Pretraining, which demonstrates the effectiveness of advanced neural operator architectures in transferring knowledge across PDE problems.
  • Method of Manufactured Learning for Solver-free Training of Neural Operators, which introduces a solver-independent framework for training neural operators using analytically constructed datasets.
  • GLOBE: Accurate and Generalizable PDE Surrogates using Domain-Inspired Architectures and Equivariances, which presents a new neural surrogate for homogeneous PDEs that achieves substantial accuracy improvements over existing models.
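The solver-free idea behind manufactured-solution training can be illustrated with a minimal sketch: instead of running a numerical solver, pick analytic candidate solutions and differentiate them in closed form to obtain the exactly matching forcing terms, yielding (input, output) pairs for supervision. The example below uses a 1D Poisson problem and sine-mode solutions purely for illustration; the specific problem, basis, and function names are assumptions, not the construction used in the cited paper.

```python
import numpy as np

def manufactured_pairs(n_samples, n_grid, rng):
    """Build (forcing, solution) training pairs for the 1D Poisson
    problem -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, without
    ever calling a numerical solver.

    Each sample picks an analytic solution u(x) = sum_k a_k sin(k*pi*x);
    the matching forcing f(x) = sum_k a_k (k*pi)^2 sin(k*pi*x) follows
    by differentiating u twice in closed form.
    """
    x = np.linspace(0.0, 1.0, n_grid)
    ks = np.arange(1, 6)                                 # low-frequency sine modes
    coeffs = rng.normal(size=(n_samples, ks.size))       # random mode amplitudes
    basis = np.sin(np.pi * ks[None, :] * x[:, None])     # (n_grid, n_modes)
    u = coeffs @ basis.T                                 # exact solutions on the grid
    f = (coeffs * (np.pi * ks) ** 2) @ basis.T           # exact forcings, no solver
    return f, u, x

rng = np.random.default_rng(0)
f, u, x = manufactured_pairs(n_samples=256, n_grid=64, rng=rng)
# (f, u) can now supervise a neural operator mapping f -> u directly.
```

Because both sides of the pair are analytic, the labels carry no discretization error from a solver; the trade-off is that the training distribution is only as rich as the family of manufactured solutions chosen.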

Sources

Towards Universal Neural Operators through Multiphysics Pretraining

Sumudu Neural Operator for ODEs and PDEs

Method of Manufactured Learning for Solver-free Training of Neural Operators

GLOBE: Accurate and Generalizable PDE Surrogates using Domain-Inspired Architectures and Equivariances
