The field of large language models (LLMs) is moving toward the challenges of causal analysis and tabular reasoning. Recent research highlights the need for robust evaluation protocols and for hybrid methods that combine LLM-derived knowledge with data-driven statistics to realize the full potential of LLMs in causal discovery. There is also growing interest in neuro-symbolic agents and multi-agent frameworks that can reason accurately over complex, large spreadsheets. Novel benchmarks and evaluation protocols remain a key area of focus, with an emphasis on preventing dataset leakage and ensuring that LLMs generalize to new, unseen data. Noteworthy papers in this area include TabR1, which achieves state-of-the-art results in tabular prediction with limited supervision; SheetBrain, a neuro-symbolic agent that significantly improves accuracy on both existing benchmarks and more challenging scenarios; and Mixture-of-Minds, a multi-agent framework that delivers substantial gains in table understanding by combining structured workflows with reinforcement learning.
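To make the hybrid idea concrete, here is a minimal sketch of combining an LLM-derived edge prior with a data-driven statistic for causal edge scoring. It is not the method of any specific paper: the function name, the use of absolute Pearson correlation as the data-driven signal, and the linear mixing weight `alpha` are all illustrative assumptions.

```python
import numpy as np

def hybrid_edge_scores(data, columns, llm_prior, alpha=0.5):
    """Score candidate edges by mixing an LLM-derived prior with a
    data-driven statistic (absolute Pearson correlation).

    data: (n_samples, n_vars) array; columns: variable names;
    llm_prior: dict mapping (a, b) name pairs to plausibility in [0, 1].
    All names and the mixing scheme are illustrative, not from any paper.
    """
    corr = np.corrcoef(data, rowvar=False)
    scores = {}
    for i, a in enumerate(columns):
        for j, b in enumerate(columns):
            if i < j:
                # Look up the prior in either orientation; default to 0.
                prior = llm_prior.get((a, b), llm_prior.get((b, a), 0.0))
                stat = abs(corr[i, j])
                scores[(a, b)] = alpha * prior + (1 - alpha) * stat
    return scores

# Synthetic data: y depends on x, z is independent of both.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2 * x + rng.normal(size=500)
z = rng.normal(size=500)
data = np.column_stack([x, y, z])

# Stand-in for an LLM's judgment of edge plausibility (hypothetical values).
prior = {("x", "y"): 0.9, ("x", "z"): 0.1, ("y", "z"): 0.1}
scores = hybrid_edge_scores(data, ["x", "y", "z"], prior)
```

In a full pipeline, scores like these would typically gate or initialize a constraint- or score-based causal discovery search rather than define the graph directly; the point of the sketch is only that the LLM prior and the statistics enter as separate, independently auditable terms.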