The field of table understanding and reasoning is developing rapidly, with a focus on enhancing the ability of language models to comprehend and manipulate tabular data. Researchers are exploring approaches to improve the robustness and accuracy of table-related tasks, including comprehensive benchmarks and the application of self-supervised and reinforcement learning techniques. In particular, program-based table reasoning and weakness-guided data synthesis frameworks have shown promising results, while benchmarks that simulate real-world data artifacts have highlighted the need for more robust, data-aware models. Overall, the field is moving toward more sophisticated and effective methods for table understanding and reasoning, with applications across a range of real-world scenarios.

Noteworthy papers include:

- MMTU, which introduces a large-scale benchmark for evaluating table understanding and reasoning capabilities.
- Table-r1, which proposes a two-stage, program-based table reasoning method that outperforms existing methods based on small language models.
- RADAR, which presents a benchmark for evaluating data-aware reasoning on imperfect tabular data.
- TableDreamer, which introduces a progressive, weakness-guided data synthesis framework for table instruction tuning.
- Enhancing Reasoning Capabilities of Small Language Models with Blueprints and Prompt Template Search, which proposes a framework for improving small-language-model reasoning using LLM-generated blueprints and prompt template search.
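To make the idea of program-based table reasoning concrete, here is a minimal illustrative sketch (not the actual Table-r1 pipeline): rather than asking a language model to answer a table question directly in text, the model emits a small program, and the numeric or lookup work is delegated to a program executor. The table, question, and generated program below are invented for illustration.

```python
# Minimal sketch of program-based table reasoning.
# A real system would obtain `generated_program` from a language model;
# here it is hard-coded so the example is self-contained.

table = [
    {"city": "Oslo", "population": 700000},
    {"city": "Bergen", "population": 280000},
    {"city": "Trondheim", "population": 210000},
]

question = "Which city has the largest population?"

# Hypothetical model output: a Python expression over the table.
generated_program = "max(table, key=lambda row: row['population'])['city']"

def run_program(program: str, table):
    """Execute the model-generated program with the table in scope,
    so arithmetic and aggregation are done by code, not by the model."""
    return eval(program, {"table": table, "max": max})

print(run_program(generated_program, table))  # Oslo
```

The appeal of this split, which the surveyed work builds on, is that execution is exact: once the program is correct, aggregation and comparison errors common in free-text answers disappear.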