The field of artificial intelligence is witnessing a significant shift toward embodied intelligence, where agents are expected to interact with and reason about the physical world. Recent work has focused on creating benchmarks and evaluation frameworks for embodied agents, with an emphasis on multi-modal reasoning and physical interaction. Researchers are exploring varied environments, such as retail stores, cooking scenarios, and cleaning tasks, to test the capabilities of embodied agents. New benchmarks such as OmniPlay, DeepPHY, and OmniEAR have highlighted the difficulties current models face in reasoning about physical interactions, tool usage, and multi-agent coordination. Noteworthy papers in this area include PhysicsEval, which introduces an evaluation benchmark for physics problems, and Sari Sandbox, which presents a high-fidelity, photorealistic 3D retail store simulation for benchmarking embodied agents. CookBench and ShoppingBench provide benchmarks for long-horizon planning and intent-grounded shopping tasks, respectively. Together, these developments are advancing embodied intelligence and multi-modal reasoning by providing more realistic and challenging evaluation frameworks.