The field of table understanding and generation is advancing rapidly, with a focus on more accurate and efficient methods for extracting and representing tabular data. Recent research explores large language models (LLMs) and neurosymbolic approaches to improve table extraction, generation, and reasoning, and these methods show strong results on tasks such as table retrieval, question answering, and data annotation. Notably, LLMs have enabled zero-shot and few-shot learning frameworks that adapt to new tasks and domains with minimal training data. Overall, the field is moving toward more robust and generalizable methods for table understanding and generation, with potential applications in areas including finance, healthcare, and scientific research.

Noteworthy papers include:

- Fine-Tuning Vision-Language Models for Markdown Conversion of Financial Tables: fine-tunes a vision-language model to convert financial tables into Markdown format, achieving high accuracy and outperforming larger models.
- TEN: Table Explicitization, Neurosymbolically: a neurosymbolic approach for extracting tabular data from semistructured input text that significantly outperforms purely neural baselines and achieves high exact-match accuracy.
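To make the few-shot idea concrete, the sketch below shows one common pattern: serialize a table to Markdown, prepend a worked example, and ask an LLM the question. This is a minimal illustration, not any specific paper's method; `call_llm`, `to_markdown`, and the prompt wording are hypothetical stand-ins for whatever LLM backend and serialization a real system uses.

```python
def call_llm(prompt: str) -> str:
    # Placeholder (hypothetical): wire in any chat/completion API here.
    raise NotImplementedError("plug in an LLM backend")

def to_markdown(headers: list[str], rows: list[list[str]]) -> str:
    """Serialize a table as a Markdown pipe table, a format LLMs
    tend to parse reliably."""
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    lines += ["| " + " | ".join(r) + " |" for r in rows]
    return "\n".join(lines)

# One worked example makes this a few-shot prompt; omit it for zero-shot.
FEW_SHOT = """\
Table:
| city | population |
| --- | --- |
| Oslo | 709037 |
| Bergen | 291940 |
Q: Which city has the larger population?
A: Oslo
"""

def table_qa(headers: list[str], rows: list[list[str]], question: str) -> str:
    prompt = (
        "Answer questions using only the given table.\n\n"
        + FEW_SHOT + "\n"
        + "Table:\n" + to_markdown(headers, rows) + "\n"
        + f"Q: {question}\nA:"
    )
    return call_llm(prompt)
```

The same scaffold extends to table retrieval or annotation by changing the instruction and the few-shot examples, which is what lets these frameworks adapt to new tasks without retraining.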
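The neurosymbolic pattern behind approaches like TEN can be caricatured as a neural proposal step checked by a symbolic one. The sketch below is only a generic illustration under that assumption, not the paper's actual algorithm: an LLM proposes column names for a block of semistructured text, a deterministic parser materializes rows, and a simple consistency check (matching column counts) rejects bad proposals. `call_llm` and all function names here are hypothetical.

```python
import re

def call_llm(prompt: str) -> str:
    # Placeholder (hypothetical): wire in any LLM backend here.
    raise NotImplementedError("plug in an LLM backend")

def propose_schema(text: str) -> list[str]:
    """Neural step: ask an LLM to guess the column names."""
    reply = call_llm(
        "List the column names, comma-separated, for this data:\n" + text
    )
    return [c.strip() for c in reply.split(",") if c.strip()]

def extract_rows(text: str, n_cols: int) -> list[list[str]]:
    """Symbolic step: deterministically split each line on tabs or
    runs of 2+ spaces, keeping only lines that yield n_cols fields."""
    rows = []
    for line in text.splitlines():
        fields = [f for f in re.split(r"\t|\s{2,}", line.strip()) if f]
        if len(fields) == n_cols:
            rows.append(fields)
    return rows

def explicitize(text: str) -> tuple[list[str], list[list[str]]]:
    """Combine the two steps; reject neural proposals the symbolic
    parser cannot satisfy."""
    headers = propose_schema(text)
    rows = extract_rows(text, len(headers))
    if not rows:
        raise ValueError("schema proposal inconsistent with input")
    return headers, rows
```

The division of labor is the point: the neural component handles ambiguity (what the columns mean), while the symbolic component enforces structural constraints, which is one plausible reason such hybrids outperform purely neural baselines on exact-match metrics.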