The field of natural language understanding is moving toward more robust and generalizable models, with a growing focus on reinforcement learning and transferable reasoning capabilities. Recent work shows that reframing traditional tasks such as Natural Language Inference and Text-to-SQL as pathways for teaching large language models to reason over and manipulate data can yield significant gains in both performance and interpretability. Notable papers include "Pushing the boundary on Natural Language Inference," which demonstrates a scalable framework for building robust NLI systems without sacrificing inference quality, and "Sparks of Tabular Reasoning via Text2SQL Reinforcement Learning," which proposes a two-stage framework that leverages SQL supervision to develop transferable table reasoning capabilities.
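To make the Text2SQL-as-reasoning idea concrete: RL setups of this kind typically score a model's generated SQL by executing it and comparing the result to the gold query's result. The paper's exact reward design isn't reproduced here; the sketch below is a minimal, hypothetical execution-match reward (the function name and the `sales` table are illustrative assumptions, not taken from the paper).

```python
import sqlite3
from collections import Counter

def execution_reward(conn: sqlite3.Connection, pred_sql: str, gold_sql: str) -> float:
    """Binary execution-match reward (hypothetical sketch): 1.0 if the
    predicted query returns the same multiset of rows as the gold query,
    else 0.0. Malformed or failing SQL scores 0.0 so one bad rollout
    never crashes the training loop."""
    gold_rows = conn.execute(gold_sql).fetchall()
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
    except sqlite3.Error:
        return 0.0
    # Compare as multisets so row order does not affect the reward.
    return 1.0 if Counter(map(tuple, pred_rows)) == Counter(map(tuple, gold_rows)) else 0.0

# Tiny in-memory demo table (illustrative schema only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100), ("west", 250), ("east", 50)])

gold = "SELECT region, SUM(amount) FROM sales GROUP BY region"
pred = "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
print(execution_reward(conn, pred, gold))  # 1.0: same rows, ordering ignored
```

A sparse binary signal like this is easy to verify automatically, which is what makes execution feedback attractive for scaling RL on tabular tasks; in a two-stage setup it would typically follow an initial supervised stage on SQL annotations.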