Research on agentic AI systems is advancing rapidly, with a focus on more autonomous, scalable, and intelligent decision-making. Recent work highlights the importance of understanding bias in human mobility data, which can substantially distort downstream analyses and prediction tasks. In parallel, provenance models and runtime governance frameworks are emerging as key tools for making agentic AI systems transparent, traceable, and reliable. Noteworthy papers in this area include PROV-AGENT, which introduces a unified provenance model for tracking AI agent interactions, and MI9, which presents a runtime governance framework for the safety and alignment of agentic AI systems. Work on cognition-centered frameworks for proactive, self-evolving LLM agents, such as Galaxy, is also advancing intelligent personal assistants. Together, these developments are expected to shape applications ranging from human mobility and transportation to healthcare and education.
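
To make the idea of agent-level provenance concrete, the sketch below shows one way an agentic system could record each agent action together with the artifacts it consumed and produced, so a final output can be traced back through the chain of actions that led to it. This is a minimal illustration only; the class names, fields, and API here are assumptions for the example and do not reflect PROV-AGENT's actual schema or implementation.

```python
# Illustrative sketch of agent-action provenance tracking.
# All names and fields are assumptions for this example, not PROV-AGENT's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List
import uuid


@dataclass
class ProvenanceRecord:
    """One agent action, linked to the artifacts it used and generated."""
    agent_id: str
    action: str           # e.g. "llm_generation", "tool_call"
    used: List[str]        # IDs of input artifacts (prompts, files, tool outputs)
    generated: List[str]   # IDs of output artifacts
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class ProvenanceStore:
    """Append-only log of records, queryable by artifact for lineage tracing."""

    def __init__(self) -> None:
        self.records: List[ProvenanceRecord] = []

    def log(self, record: ProvenanceRecord) -> None:
        self.records.append(record)

    def lineage(self, artifact_id: str) -> List[ProvenanceRecord]:
        """Return every action that generated the artifact, plus (recursively)
        the actions behind that action's inputs."""
        trace: List[ProvenanceRecord] = []
        frontier, seen = [artifact_id], set()
        while frontier:
            current = frontier.pop()
            if current in seen:
                continue
            seen.add(current)
            for rec in self.records:
                if current in rec.generated:
                    trace.append(rec)
                    frontier.extend(rec.used)
        return trace


# Example: trace how a final answer was produced across three agents.
store = ProvenanceStore()
store.log(ProvenanceRecord("planner-agent", "llm_generation",
                           used=["user_query"], generated=["plan_v1"]))
store.log(ProvenanceRecord("tool-agent", "tool_call",
                           used=["plan_v1"], generated=["search_results"]))
store.log(ProvenanceRecord("writer-agent", "llm_generation",
                           used=["plan_v1", "search_results"],
                           generated=["final_answer"]))
for rec in store.lineage("final_answer"):
    print(rec.agent_id, rec.action, rec.used, "->", rec.generated)
```

Even in this simplified form, the append-only log is what enables the transparency and traceability goals mentioned above: any output can be audited by walking its lineage back to the original inputs.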