The field of tabular data modeling is moving toward more accurate and efficient methods for handling missing data and complex feature interactions. Recent work has focused on pre-trained transformers and graph-based deep learning to improve both predictive performance and interpretability. A key challenge in this area is developing methods that adapt to diverse datasets and tasks without extensive hyperparameter tuning or fine-tuning. Noteworthy papers in this area include TabImpute, which proposes a pre-trained transformer for accurate and fast zero-shot imputation; Relational Transformer, which presents a novel architecture for zero-shot foundation models on relational data; and Relational Database Distillation, which distills large-scale relational databases into compact heterogeneous graphs while retaining their predictive power.
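To make the zero-shot imputation workflow concrete, here is a minimal sketch of the interface such methods expose: a table with missing cells goes in, a completed table comes out, with no per-dataset training or tuning. The `ZeroShotImputer` name and the column-median fill are hypothetical stand-ins, not TabImpute's actual API; a real pre-trained transformer would replace the stand-in with a single forward pass over the table.

```python
# Hypothetical sketch of a zero-shot imputation interface; the class name
# and fill logic are illustrative stand-ins, not TabImpute's real API.
import numpy as np


class ZeroShotImputer:
    """Fills missing entries in one call, with no per-dataset fitting."""

    def impute(self, X: np.ndarray) -> np.ndarray:
        X = X.astype(float).copy()
        for j in range(X.shape[1]):
            col = X[:, j]
            mask = np.isnan(col)
            if mask.any():
                # Stand-in for the model's predictions of the missing cells;
                # a pre-trained transformer would produce these in one pass.
                col[mask] = np.nanmedian(col)
        return X


# Usage: an incomplete table is completed in a single call.
X = np.array([[1.0, np.nan],
              [2.0, 4.0],
              [np.nan, 6.0]])
X_full = ZeroShotImputer().impute(X)
print(X_full)
```

The design point this interface captures is the one the digest highlights: because the heavy lifting happens at pre-training time, deployment on a new dataset requires no fitting, tuning, or fine-tuning step.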