Advances in Physics-Informed Neural Networks for Scientific Computing

Scientific computing is shifting markedly toward physics-informed neural networks (PINNs) for solving partial differential equations (PDEs) and related problems. Recent work centers on architectures and training methods that build physical laws and constraints directly into the network, improving both accuracy and generalization.

One notable direction uses geometric and physical constraints to strengthen neural PDE surrogates, yielding more accurate predictions and better generalization to new initial conditions and longer rollout durations. A second direction develops new training methods, such as projection-based frameworks and flow matching techniques, that offer more efficient and effective ways to train networks for scientific computing.

Noteworthy papers include 'Geometric and Physical Constraints Synergistically Enhance Neural PDE Surrogates', which introduces input and output layers that respect physical laws and symmetries on staggered grids, and 'Flow Matching Meets PDEs: A Unified Framework for Physics-Constrained Generation', which embeds physical constraints explicitly into the flow matching objective.

Overall, the field is moving toward more powerful and flexible neural architectures that capture the underlying physics of complex systems, supporting better predictions and decision-making across scientific and engineering applications.
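To make the core PINN idea concrete, here is a minimal, hedged sketch of a physics-informed loss: a data-fit term plus a PDE-residual term. The example uses the 1D heat equation u_t = u_xx with finite-difference residuals on a grid (real PINNs typically use automatic differentiation); the function names and the test problem are illustrative, not from the surveyed papers.

```python
import numpy as np

# Hypothetical minimal illustration of a physics-informed loss for the
# 1D heat equation u_t = u_xx. u is sampled on a (time, space) grid.
def heat_residual(u, dx, dt):
    # Centered finite-difference residual u_t - u_xx on interior points.
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2.0 * dt)
    u_xx = (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    return u_t - u_xx

def physics_informed_loss(u, u_data, dx, dt, lam=1.0):
    # Data-fit term plus weighted PDE-residual term (the "physics" term).
    data_loss = np.mean((u - u_data) ** 2)
    phys_loss = np.mean(heat_residual(u, dx, dt) ** 2)
    return data_loss + lam * phys_loss

x = np.linspace(0.0, np.pi, 64)
t = np.linspace(0.0, 1.0, 64)
dx, dt = x[1] - x[0], t[1] - t[0]
T, X = np.meshgrid(t, x, indexing="ij")

u_exact = np.exp(-T) * np.sin(X)          # exactly satisfies u_t = u_xx
u_bad = u_exact + 0.1 * np.sin(5.0 * X)   # perturbation violating the PDE

r_exact = np.mean(heat_residual(u_exact, dx, dt) ** 2)  # small (FD error only)
r_bad = np.mean(heat_residual(u_bad, dx, dt) ** 2)      # much larger
```

The residual term penalizes candidate solutions that violate the PDE even where no training data is available, which is the mechanism behind the improved generalization discussed above.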
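As a sketch of how an output layer can enforce a physical law exactly on a staggered grid, the following toy example emits a discrete streamfunction and takes its discrete curl, so the resulting 2D velocity field is divergence-free to machine precision by construction. This is one standard construction, assumed here for illustration; it is not claimed to be the exact layer used in the cited paper, and all names and shapes are hypothetical.

```python
import numpy as np

# Hedged sketch: hard divergence-free constraint via a discrete curl on a
# staggered (MAC) grid. psi lives at cell corners; u, v live on cell faces.
def curl_output_layer(psi, dx, dy):
    """psi: (nx+1, ny+1) corner values -> staggered velocity (u, v)."""
    u = (psi[:, 1:] - psi[:, :-1]) / dy     # u on vertical faces, (nx+1, ny)
    v = -(psi[1:, :] - psi[:-1, :]) / dx    # v on horizontal faces, (nx, ny+1)
    return u, v

def discrete_divergence(u, v, dx, dy):
    # Per-cell divergence on the staggered grid, shape (nx, ny).
    return (u[1:, :] - u[:-1, :]) / dx + (v[:, 1:] - v[:, :-1]) / dy

rng = np.random.default_rng(0)
psi = rng.standard_normal((17, 17))         # stand-in for a raw network output
u, v = curl_output_layer(psi, 0.1, 0.1)
div = discrete_divergence(u, v, 0.1, 0.1)   # zero up to floating-point error
```

Because the constraint holds identically for any psi, the network never has to learn mass conservation from data; it is built into the architecture.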
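The idea of embedding a physical constraint into a flow matching objective can be sketched as a standard conditional flow matching loss (linear path x_t = (1-t)x0 + t*x1, target velocity x1 - x0) augmented with a penalty on a constraint residual of the generated sample. The constraint used below, that samples sum to zero, is a toy stand-in for a conservation law; this is an assumed illustration of the general pattern, not the objective from the cited paper.

```python
import numpy as np

# Illustrative physics-constrained flow matching loss. v_pred is the model's
# predicted velocity at x_t; x1_pred is a generated sample to be penalized
# for violating a toy conservation law C(x) = sum(x) = 0.
def physics_constrained_fm_loss(v_pred, x0, x1, x1_pred, lam=1.0):
    fm_loss = np.mean((v_pred - (x1 - x0)) ** 2)   # flow matching term
    phys_residual = x1_pred.sum(axis=-1)           # constraint residual C(x)
    phys_loss = np.mean(phys_residual ** 2)        # penalty on violations
    return fm_loss + lam * phys_loss

rng = np.random.default_rng(1)
x0 = rng.standard_normal((8, 4))                   # noise samples
x1 = rng.standard_normal((8, 4))                   # data samples
x1_ok = x1 - x1.mean(axis=-1, keepdims=True)       # projected onto sum(x)=0

# A "perfect" model: exact target velocity, constraint-satisfying samples.
loss_ok = physics_constrained_fm_loss(x1 - x0, x0, x1, x1_ok)
# Same velocity, but samples that violate the constraint.
loss_bad = physics_constrained_fm_loss(x1 - x0, x0, x1, x1)
```

The penalty term steers generation toward the constraint manifold while the flow matching term preserves the generative objective, which is the trade-off such unified frameworks balance.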
Sources
Universal Differential Equations for Scientific Machine Learning of Node-Wise Battery Dynamics in Smart Grids
Geometric flow regularization in latent spaces for smooth dynamics with the efficient variations of curvature