The field of artificial intelligence and machine learning is shifting toward graph-based models, which are increasingly used to store and process complex relational data. These models have shown promise in applications such as computer vision, natural language processing, and emotion recognition, and they are enabling more efficient algorithms for semantic reasoning, cognitive modeling, and relational learning. Notably, incorporating graph structure into foundation models is improving both performance and interpretability across a range of tasks. Overall, work is converging on more capable graph-based models that expose the structure of relational data and yield clearer insight into the relationships it encodes.

Noteworthy papers include:
- Views, which proposes a hardware-friendly graph database model for storing semantic information.
- Why Relational Graphs Will Save the Next Generation of Vision Foundation Models, which argues that next-generation vision foundation models should incorporate explicit relational interfaces.
- GLaRE, which proposes a graph-based landmark region embedding network for emotion recognition.
- Turning Tabular Foundation Models into Graph Foundation Models, which proposes a simple graph foundation model that employs a tabular foundation model as a backbone (see the sketch after this list).
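As a rough illustration of the last idea, the sketch below shows one plausible way to recast a node-classification task so a tabular model can act as the backbone: each node's feature row is concatenated with mean-aggregated neighbor features and its degree, and the resulting table is fed to an off-the-shelf tabular classifier. The mean aggregation, the degree column, and the use of `HistGradientBoostingClassifier` as a stand-in backbone are illustrative assumptions, not the method of the cited paper.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier  # stand-in for a tabular foundation model


def graph_to_table(X, edges):
    """Augment each node's feature row with mean-aggregated neighbor features
    and its degree, so a tabular model can consume graph structure as columns."""
    n, d = X.shape
    neigh_sum = np.zeros((n, d))
    deg = np.zeros(n)
    for u, v in edges:                      # undirected edges as (u, v) pairs
        neigh_sum[u] += X[v]; deg[u] += 1
        neigh_sum[v] += X[u]; deg[v] += 1
    neigh_mean = neigh_sum / np.clip(deg, 1, None)[:, None]
    return np.hstack([X, neigh_mean, deg[:, None]])


# Toy usage: 6 nodes with 4 features each, a small chain graph, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
y = np.array([0, 0, 0, 1, 1, 1])

T = graph_to_table(X, edges)
backbone = HistGradientBoostingClassifier().fit(T[:4], y[:4])  # fit on the tabularized nodes
print(backbone.predict(T[4:]))                                 # predict the held-out nodes
```

In this framing, any pretrained tabular model could in principle be dropped in as the backbone, since the graph structure has been flattened into ordinary feature columns before prediction.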