The field of solving partial differential equations (PDEs) with neural networks is advancing rapidly, and recent work has focused on improving the efficiency and accuracy of these methods. One notable direction is the use of adaptive basis functions that are learned during training to better represent the PDE solution; this approach has been shown to reduce the number of trainable parameters while maintaining high approximation accuracy. Another line of research develops multimodal foundation models that learn universal error-correction schemes for simulating dynamical systems and demonstrate better accuracy-efficiency tradeoffs than classical solvers. There is also growing interest in neural operators for solving PDEs, which can be designed to mimic the behavior of traditional solution operators and have achieved state-of-the-art performance on a range of benchmark problems.

Noteworthy papers in this area include FMint-SDE, which introduces a multimodal foundation model for large-scale simulation of differential equations and achieves a superior accuracy-efficiency tradeoff compared to classical solvers, and NOWS, which presents a hybrid strategy that harnesses learned solution operators to accelerate classical iterative solvers, reducing computational time by up to 90% while preserving stability and convergence guarantees.
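To make the adaptive-basis idea concrete, the sketch below is a minimal illustration, not the method of any particular paper: the solution of a 1D Poisson problem is represented as a small linear combination of Gaussian basis functions whose centers, widths, and coefficients are all trained against a collocation-based residual loss. The basis count, parameterization, and optimizer settings are assumptions chosen for brevity.

```python
import torch

# Learnable Gaussian basis: u(x) ~ sum_i c_i * exp(-(x - mu_i)^2 / (2 * s_i^2)).
# Centers, widths, and coefficients are all trained, so the basis adapts to the solution.
# (Illustrative sketch only; sizes and hyperparameters are arbitrary.)
n_basis = 10
mu = torch.nn.Parameter(torch.linspace(0.0, 1.0, n_basis).unsqueeze(0))  # (1, n_basis) centers
log_s = torch.nn.Parameter(torch.full((1, n_basis), -2.0))               # log-widths
c = torch.nn.Parameter(torch.zeros(1, n_basis))                          # coefficients

def u(x):
    # x: (N, 1) -> (N, 1) via the adaptive basis expansion.
    phi = torch.exp(-(x - mu) ** 2 / (2.0 * torch.exp(log_s) ** 2))      # (N, n_basis)
    return phi @ c.t()

# Collocation points and a physics-informed loss for -u''(x) = pi^2 sin(pi x) on (0, 1)
# with u(0) = u(1) = 0 (exact solution: sin(pi x)).
x = torch.linspace(0.0, 1.0, 128).unsqueeze(1).requires_grad_(True)
f = (torch.pi ** 2 * torch.sin(torch.pi * x)).detach()
xb = torch.tensor([[0.0], [1.0]])

opt = torch.optim.Adam([mu, log_s, c], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    ux = u(x)
    du = torch.autograd.grad(ux, x, torch.ones_like(ux), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    loss = ((-d2u - f) ** 2).mean() + (u(xb) ** 2).mean()                # PDE residual + boundary penalty
    loss.backward()
    opt.step()
```

Because only a handful of basis parameters are optimized, the trainable-parameter count stays far below that of a generic fully connected network fitted to the same problem.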
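The NOWS entry describes the general pattern of pairing a learned solution operator with a classical iterative solver. The sketch below illustrates only that warm-start idea and is not the NOWS algorithm: a hypothetical `surrogate_guess` stands in for a trained neural operator, and its prediction seeds SciPy's conjugate-gradient solver, so the classical convergence guarantees are retained while the iteration count typically drops.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# 1D Poisson system A u = b from a standard 3-point finite-difference stencil.
n = 200
h = 1.0 / (n + 1)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
x_grid = np.linspace(h, 1.0 - h, n)
b = np.pi**2 * np.sin(np.pi * x_grid)

def surrogate_guess(rhs):
    # Placeholder for a trained neural operator mapping the right-hand side to an
    # approximate solution (faked here with the analytic solution plus noise).
    return np.sin(np.pi * x_grid) + 0.01 * np.random.default_rng(0).normal(size=rhs.shape)

def solve(x0, label):
    iters = 0
    def count(_):
        nonlocal iters
        iters += 1
    x, info = cg(A, b, x0=x0, callback=count)  # classical CG keeps its convergence guarantee
    print(f"{label}: converged={info == 0}, iterations={iters}")
    return x

solve(np.zeros(n), "cold start")                   # CG from a zero initial guess
solve(surrogate_guess(b), "operator warm start")   # CG initialized by the learned operator
```

The design point is that the learned operator only supplies the initial iterate; the iterative solver still drives the residual to tolerance, which is how stability and convergence guarantees are preserved.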