The field of table reasoning and multimodal understanding is moving toward more advanced and nuanced approaches, with a focus on developing models that can extract insights from complex tables and integrate information from multiple sources. Researchers are exploring new techniques for prompting and reasoning, including adaptive prompting frameworks and multimodal benchmarks, and these efforts are yielding measurable gains in model performance on complex data. Notable papers in this area include:

- SEAR, an adaptive prompting framework that achieves superior performance across all table types.
- MTabVQA, a novel benchmark for multi-tabular visual question answering that reveals significant performance limitations in state-of-the-art models.
- SciVer, a benchmark for evaluating foundation models on multimodal scientific claim verification that highlights critical limitations in current open-source models.
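To make the idea of adaptive prompting concrete, the sketch below shows one plausible shape for it: a table is first classified by its structure, and a prompt template is then chosen to match. This is an illustrative assumption only; the `classify_table` and `build_prompt` helpers and the template wording are hypothetical and do not describe SEAR's actual mechanism.

```python
# Hypothetical sketch of adaptive prompting: pick a prompt template based on
# the table's structure before querying a model. All names here are
# illustrative, not taken from the SEAR paper.

def classify_table(table: list[list[str]]) -> str:
    """Crude structural check: 'relational' if every row has the same
    number of cells, 'irregular' if rows are ragged, 'empty' otherwise."""
    if not table or not table[0]:
        return "empty"
    widths = {len(row) for row in table}
    return "relational" if len(widths) == 1 else "irregular"

# One template per table type; an adaptive framework would select among
# these (and possibly refine them) instead of using a single fixed prompt.
TEMPLATES = {
    "relational": "Columns: {header}. Answer using the rows below:\n{body}",
    "irregular": "The table below has ragged rows; reason cell by cell:\n{body}",
    "empty": "No table content was provided.",
}

def build_prompt(table: list[list[str]]) -> str:
    """Assemble the final prompt from the structure-matched template."""
    kind = classify_table(table)
    header = ", ".join(table[0]) if table else ""
    body = "\n".join(" | ".join(row) for row in table)
    return TEMPLATES[kind].format(header=header, body=body)

if __name__ == "__main__":
    t = [["city", "pop"], ["Oslo", "0.7M"], ["Bergen", "0.3M"]]
    print(build_prompt(t))
```

The design point this illustrates is the one the summary attributes to adaptive frameworks: rather than a single prompt for all inputs, the prompt is conditioned on properties of the table itself, which is what lets such methods perform well across table types.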