The field of large language models (LLMs) is advancing rapidly, with growing attention to task planning and physical reasoning. Recent research has explored integrating LLMs with formal knowledge representations, such as ontologies, to strengthen their handling of symbolic knowledge. In parallel, there is increasing interest in benchmarks that evaluate the physical reasoning capabilities of LLMs, including their ability to combine domain knowledge, symbolic reasoning, and an understanding of real-world constraints.

Noteworthy papers in this area include Code-Driven Planning in Grid Worlds with Large Language Models, which proposes an iterative programmatic planning framework for solving grid-based tasks, and OntoURL, which introduces a comprehensive benchmark for evaluating LLMs' proficiency in handling ontologies. Also notable are APEX, which equips LLMs with physics-driven foresight for real-time task planning, and PhyX, which assesses models' capacity for physics-grounded reasoning in visual scenarios.
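To make the idea of iterative programmatic planning concrete, the sketch below shows a generic propose-execute-refine loop in a toy grid world. All names here (`generate_plan`, `execute`, the feedback variable) are illustrative stand-ins, not the API of any cited paper; in a real system the plan generator would be an LLM call that writes or revises policy code, with execution traces fed back into the prompt.

```python
from collections import deque

# Hypothetical sketch: iterative code-driven planning in a grid world.
# The LLM's role (proposing a program/plan) is mocked here by a BFS
# planner so the example is self-contained and runnable.

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def execute(plan, start, goal, walls, size):
    """Run a list of moves on the grid; return (reached_goal, trace)."""
    pos, trace = start, [start]
    for move in plan:
        dr, dc = MOVES[move]
        nxt = (pos[0] + dr, pos[1] + dc)
        if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in walls:
            return False, trace  # invalid step: feedback for the next round
        pos = nxt
        trace.append(pos)
    return pos == goal, trace

def generate_plan(start, goal, walls, size, feedback=None):
    """Stand-in for the LLM proposer: breadth-first search over moves."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        pos, plan = queue.popleft()
        if pos == goal:
            return plan
        for move, (dr, dc) in MOVES.items():
            nxt = (pos[0] + dr, pos[1] + dc)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, plan + [move]))
    return []

# Iterative loop: propose a plan, execute it, refine on failure.
start, goal, walls, size = (0, 0), (2, 2), {(1, 1)}, 3
feedback = None
for _ in range(3):  # bounded refinement rounds
    plan = generate_plan(start, goal, walls, size, feedback)
    ok, trace = execute(plan, start, goal, walls, size)
    if ok:
        break
    feedback = trace  # in a real system, fed back into the LLM prompt
```

The key design point this illustrates is the closed loop: plans are executable programs, execution yields a trace, and failed traces become feedback for the next proposal, rather than relying on a single free-form LLM answer.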