Advances in Physics-Informed Neural Networks and Numerical Methods

The field of numerical methods and physics-informed neural networks is advancing rapidly, with a focus on improving the accuracy and efficiency of solving partial differential equations (PDEs) and other complex problems. Recent work has produced new frameworks and algorithms that handle complex dynamics, non-linearizable systems, and high-dimensional problems, and the integration of machine learning techniques with traditional numerical methods has shown considerable promise for improving performance and scalability.

One key line of research is physics-informed neural networks (PINNs), which embed the governing equations in the training loss so that a network learns the underlying dynamics of a system and makes accurate predictions; such networks have proven effective on problems ranging from simple ODEs to complex PDEs. A complementary line of research develops numerical methods that solve PDEs and other complex problems efficiently, including approaches based on Gaussian processes, extreme learning machines, and related randomized techniques in which most parameters are fixed at random and only a linear output layer is fit.

Noteworthy papers in this area include SO-PIFRNN, a self-optimization physics-informed Fourier-features randomized neural network framework that significantly improves the accuracy of numerical PDE solutions through hyperparameter optimization, and DyMixOp, a neural operator framework that draws on insights from complex dynamical systems to address the challenge of casting nonlinear dynamical systems into a form suitable for neural networks. These advances have the potential to impact fields from engineering and physics to biology and finance, and are expected to continue driving innovation in the coming years.
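To make the PINN idea concrete, the sketch below trains a small network to satisfy a 1D Poisson problem by penalizing the PDE residual at collocation points together with the boundary conditions. The problem, network size, optimizer, and training settings are illustrative assumptions for this sketch, not details taken from the cited papers.

```python
# Minimal PINN sketch for u''(x) = -pi^2 sin(pi x) on [0, 1] with
# u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x).
# All sizes and settings below are illustrative assumptions.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small fully connected network u_theta(x) approximating the solution.
net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def pde_residual(x):
    """Residual u'' + pi^2 sin(pi x), via automatic differentiation."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + (math.pi ** 2) * torch.sin(math.pi * x)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
x_interior = torch.rand(256, 1)            # collocation points in (0, 1)
x_boundary = torch.tensor([[0.0], [1.0]])  # Dirichlet boundary points

for step in range(5000):
    optimizer.zero_grad()
    loss_pde = pde_residual(x_interior).pow(2).mean()  # enforce the PDE
    loss_bc = net(x_boundary).pow(2).mean()            # enforce u(0) = u(1) = 0
    loss = loss_pde + loss_bc
    loss.backward()
    optimizer.step()

# Compare the learned solution with the exact one at a few test points.
x_test = torch.linspace(0, 1, 5).unsqueeze(1)
print(net(x_test).detach().squeeze())
print(torch.sin(math.pi * x_test).squeeze())
```

Because the residual is obtained by automatic differentiation, the same recipe extends from this toy ODE-like problem to higher-dimensional PDEs without hand-coded derivatives, which is what makes the approach attractive across the problem classes discussed above.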
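The Fourier-features randomized networks and extreme learning machines mentioned above trade iterative training for a single linear solve: hidden frequencies and phases are sampled at random and kept fixed, and only the output weights are fit by least squares on the PDE residual and boundary conditions. The sketch below applies that idea to the same toy problem; the feature count and frequency scale are illustrative assumptions and merely stand in for the hyperparameters that a framework such as SO-PIFRNN optimizes automatically.

```python
# ELM-style Fourier-features sketch for u''(x) = -pi^2 sin(pi x),
# u(0) = u(1) = 0. Hidden parameters are random and fixed; only the
# linear output weights are fit. Settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_features, freq_scale = 100, 10.0
w = rng.uniform(-freq_scale, freq_scale, n_features)  # random frequencies
b = rng.uniform(0.0, 2.0 * np.pi, n_features)         # random phases

def features(x):
    """Fourier features phi_j(x) = sin(w_j x + b_j)."""
    return np.sin(np.outer(x, w) + b)

def features_xx(x):
    """Second derivatives of the features, available in closed form."""
    return -(w ** 2) * np.sin(np.outer(x, w) + b)

# Collocation system: PDE residual rows plus Dirichlet boundary rows.
x_in = np.linspace(0.0, 1.0, 200)
x_bc = np.array([0.0, 1.0])
A = np.vstack([features_xx(x_in), features(x_bc)])
rhs = np.concatenate([-(np.pi ** 2) * np.sin(np.pi * x_in), np.zeros(2)])

# Fit only the output weights; the hidden layer stays random.
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

x_test = np.linspace(0.0, 1.0, 5)
print(features(x_test) @ coef)   # network prediction
print(np.sin(np.pi * x_test))    # exact solution
```

Since the unknowns enter linearly, accuracy hinges on choices such as the number of features and the frequency distribution, which is why hyperparameter optimization plays a central role in this family of methods.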
Sources
SO-PIFRNN: Self-optimization physics-informed Fourier-features randomized neural network for solving partial differential equations
DyMixOp: Guiding Neural Operator Design for PDEs from a Complex Dynamics Perspective with Local-Global-Mixing
Learning to Learn the Macroscopic Fundamental Diagram using Physics-Informed and meta Machine Learning techniques
Recursive Gaussian Process Regression with Integrated Monotonicity Assumptions for Control Applications
Generative Neural Operators of Log-Complexity Can Simultaneously Solve Infinitely Many Convex Programs