Advances in Explainable Neural Networks and Numerical Methods for PDEs

Numerical methods for partial differential equations (PDEs) and explainable neural networks are advancing rapidly, with new techniques aimed at improving accuracy, efficiency, and interpretability. One notable direction integrates deep learning with symbolic regression to discover closed-form expressions from complex datasets. Another develops reliable and efficient a posteriori error estimators for numerical methods, including finite element and discontinuous Galerkin discretizations. Further work applies neural networks to forward and inverse problems for elliptic PDEs and builds new frameworks connecting symbolic regression with Hamiltonian mechanics. Illustrative sketches of these ideas follow the list below.

Noteworthy papers include:

Ex-HiDeNN: a separable, scalable neural architecture for discovering closed-form expressions from limited observations.
Fredholm Neural Networks: extends the framework to forward and inverse problems for linear and semi-linear elliptic PDEs.
SymFlux: performs symbolic regression to identify Hamiltonian functions from their corresponding vector fields.
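As a rough illustration of the closed-form discovery task, here is a minimal library-based sparse-regression sketch in Python. It is not Ex-HiDeNN's separable neural architecture; the candidate dictionary, threshold, and target expression are invented for this example.

    import numpy as np

    # Minimal library-based sketch of closed-form discovery (illustrative only;
    # Ex-HiDeNN uses a separable neural architecture, not this approach).
    rng = np.random.default_rng(0)
    x = rng.uniform(-2.0, 2.0, size=200)
    y = 1.5 * np.sin(x) + 0.5 * x**2          # hidden ground-truth expression

    # Candidate terms; this dictionary is an invented example, not from the paper.
    library = {"1": np.ones_like(x), "x": x, "x^2": x**2,
               "sin(x)": np.sin(x), "exp(x)": np.exp(x)}
    A = np.column_stack(list(library.values()))
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    coef[np.abs(coef) < 1e-6] = 0.0           # hard-threshold negligible terms
    expr = " + ".join(f"{c:.3f}*{name}"
                      for name, c in zip(library, coef) if c != 0.0)
    print("recovered:", expr)                  # expect ~ 0.500*x^2 + 1.500*sin(x)

With noiseless data and a library that contains the true terms, plain least squares plus a hard threshold already recovers the expression; the papers above target the much harder setting of limited, noisy observations and unknown functional forms.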
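For context on the terms "reliable and efficient": for the Poisson problem -Δu = f with a conforming finite element solution u_h, the standard residual-based estimator on each element K is

    \eta_K^2 = h_K^2 \, \| f + \Delta u_h \|_{L^2(K)}^2
             + \frac{1}{2} \sum_{e \subset \partial K \setminus \partial\Omega}
               h_e \, \| [\, \nabla u_h \cdot n_e \,] \|_{L^2(e)}^2 ,

where the bracket denotes the jump of the normal flux across an interior edge e. Reliability means the true energy-norm error is bounded above by a constant times (Σ_K η_K²)^{1/2}; efficiency means the reverse bound holds locally, up to data-oscillation terms. This is textbook material for the second-order Poisson problem, not the generalized Hessian-based estimator of the biharmonic IPDG paper cited below, which plays the analogous role for a fourth-order problem.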
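To make the SymFlux task concrete: a Hamiltonian H(q, p) generates its vector field through Hamilton's equations,

    \dot{q} = \frac{\partial H}{\partial p}, \qquad
    \dot{p} = -\frac{\partial H}{\partial q},

and the regression runs this map in reverse, recovering a symbolic H from samples of the field. For instance, samples consistent with q̇ = p, ṗ = -q should lead back to the harmonic-oscillator Hamiltonian H = (q² + p²)/2, up to an additive constant.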
Sources
A nonsmooth extension of the Brezzi-Rappaz-Raviart approximation theorem via metric regularity techniques and applications to nonlinear PDEs
A generalized Hessian-based error estimator for an IPDG formulation of the biharmonic problem in two dimensions