The field of mobile automation and GUI agents is advancing rapidly, driven by new approaches to operational knowledge injection, task planning, and bug reproduction. Researchers are exploring methods to improve the performance and efficiency of mobile automation, including video-guided learning and world-model-driven code execution, alongside a growing focus on stable planning modules and autonomous exploration mechanisms that raise the accuracy and effectiveness of GUI agents. Together, these advances promise more efficient and user-friendly ways of automating and interacting with mobile devices.

Noteworthy papers in this area include:
- Mobile-Agent-V, which introduces a video-guided approach for effortless and efficient operational knowledge injection in mobile automation, achieving a 36% performance improvement.
- SPlanner, which proposes a plug-and-play planning module that generates execution plans for vision-language models, demonstrating strong performance on dynamic benchmarks.
- BugRepro, which integrates domain-specific knowledge to improve the accuracy and efficiency of bug reproduction, significantly outperforming state-of-the-art methods.
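To make the idea of a plug-and-play planning module concrete, the sketch below shows one possible shape for such a component: a planner that decomposes a task into ordered sub-steps, which an agent loop would then ground to concrete device actions. All class and method names here are hypothetical illustrations, not the actual API of SPlanner or any other system mentioned above, and the fixed lookup table stands in for a real vision-language model query.

```python
from dataclasses import dataclass


@dataclass
class ScreenState:
    """Simplified snapshot of the current GUI (hypothetical)."""
    visible_elements: list[str]


class StubPlanner:
    """Hypothetical plug-and-play planner: turns a task into ordered sub-steps.

    A real planning module would query a vision-language model with the task
    and screen state; here a fixed lookup table keeps the sketch runnable.
    """

    PLANS = {
        "send a message": [
            "open messaging app",
            "tap compose",
            "type text",
            "tap send",
        ],
    }

    def plan(self, task: str, state: ScreenState) -> list[str]:
        # Fall back to an exploration step for unknown tasks.
        return self.PLANS.get(task, [f"explore UI toward: {task}"])


def run_agent(task: str, planner: StubPlanner) -> list[str]:
    """Agent loop: the planner emits steps; an executor would ground each
    step to a concrete tap/type action on the device."""
    state = ScreenState(visible_elements=["home screen"])
    executed = []
    for step in planner.plan(task, state):
        executed.append(step)  # a real executor would act on the device here
    return executed


print(run_agent("send a message", StubPlanner()))
```

Because the planner is isolated behind a single `plan` call, it could be swapped out or upgraded independently of the executor, which is the sense in which such modules are described as plug-and-play.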