Computational modeling and reinforcement learning are moving toward more advanced and efficient methods for simulating complex phenomena and making predictions. Researchers are exploring new architectures and techniques, such as differentiable spatial computers and flow-matching algorithms, to improve the accuracy and scalability of their models. These innovations have the potential to transform computational workflows in physics and engineering and to enable more effective decision-making in complex systems. Notable papers in this area include:
- Towards Reasoning for PDE Foundation Models, which introduces a test-time computing strategy for partial differential equations that achieves more accurate predictions with fewer training samples and smaller models.
- Neural Field Turing Machine, which presents a unified computational substrate for bridging discrete algorithms and continuous field dynamics within a single differentiable framework.
- Text-Trained LLMs Can Zero-Shot Extrapolate PDE Dynamics, which demonstrates the ability of large language models to accurately extrapolate spatiotemporal dynamics from discretized partial differential equation solutions without fine-tuning or natural language prompting.
- floq, which improves reinforcement-learning performance by parameterizing the Q-function as a velocity field and training it with techniques from flow-matching (a generic sketch of this idea appears after the list).
- Rollout-LaSDI, which enhances the long-term accuracy of latent-space dynamics models by introducing a flexible finite-difference scheme and a rollout loss that trains reduced-order models to make accurate predictions over arbitrary time horizons (see the rollout-loss sketch below).
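As a rough illustration of the flow-matching idea mentioned for floq, the sketch below regresses a learned velocity field onto straight-line paths from noise to a scalar regression target (for example, a Bellman backup) and reads a Q-value out by integrating that field. This is a minimal, assumption-laden sketch, not the floq authors' implementation; the names `VelocityField`, `flow_matching_loss`, and `q_value`, as well as the Euler integrator, are illustrative choices.

```python
# Minimal, hypothetical sketch of flow-matching applied to scalar regression
# targets such as Bellman backups; NOT the floq authors' code.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """v_theta(q_t, t, s, a): velocity of the scalar q along the flow."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q_t, t, state, action):
        return self.net(torch.cat([q_t, t, state, action], dim=-1))

def flow_matching_loss(v_theta, state, action, q_target):
    """Regress the velocity field onto straight-line paths from noise to q_target."""
    q0 = torch.randn_like(q_target)        # source sample (noise)
    t = torch.rand_like(q_target)          # interpolation time in [0, 1]
    q_t = (1 - t) * q0 + t * q_target      # point on the linear path
    v_star = q_target - q0                 # conditional target velocity
    v_hat = v_theta(q_t, t, state, action)
    return ((v_hat - v_star) ** 2).mean()

def q_value(v_theta, state, action, steps=8):
    """Read out Q(s, a) by integrating the velocity field from noise (Euler steps)."""
    q = torch.randn(state.shape[0], 1, device=state.device)
    dt = 1.0 / steps
    for k in range(steps):
        t = torch.full_like(q, k * dt)
        q = q + dt * v_theta(q, t, state, action)
    return q
```

A training loop would presumably alternate between fitting the velocity field to regression targets and reading Q-values out by integration; how floq actually constructs its targets and integrates the field is not specified by the summary above.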
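Similarly, a rollout loss of the kind described for Rollout-LaSDI can be sketched as unrolling a latent dynamics model for several steps and penalizing the decoded trajectory against ground truth, so the reduced-order model is trained on the horizons it will actually be used for rather than only on one-step-ahead errors. Everything below (`encoder`, `decoder`, `latent_step`, the mean-squared penalty) is an assumed, generic formulation, not the paper's finite-difference scheme or training setup.

```python
# Minimal, hypothetical rollout-loss sketch for a latent-space reduced-order
# model; NOT the Rollout-LaSDI implementation.
import torch

def rollout_loss(encoder, decoder, latent_step, u_traj, horizon):
    """u_traj: (T, B, n) tensor of full-order snapshots; latent_step advances z one time step."""
    T = u_traj.shape[0]
    step_losses = []
    for t0 in range(T - horizon):
        z = encoder(u_traj[t0])              # encode the initial snapshot
        for h in range(1, horizon + 1):
            z = latent_step(z)               # unroll the latent dynamics forward
            u_hat = decoder(z)               # decode back to the full-order state
            step_losses.append(((u_hat - u_traj[t0 + h]) ** 2).mean())
    return torch.stack(step_losses).mean()   # average error over all rollout windows
```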