The field of autonomous GUI interaction is advancing rapidly, with a focus on developing agents that can interact efficiently with complex graphical user interfaces. Recent research has explored experience-driven learning frameworks, scalable frameworks for automated desktop UI exploration, and relational reinforcement learning to improve agent performance. Notably, hybrid action mechanisms and foundation models now enable agents to seamlessly integrate low-level GUI primitives with high-level programmatic tool calls, yielding significant gains in exploration efficiency and strategic depth. Researchers have also proposed novel methods for resolving instruction ambiguities and for strengthening GUI grounding.

Some noteworthy papers in this area include: Experience-Driven Exploration for Efficient API-Free AI Agents, which proposes a framework that structures an agent's raw pixel-level interactions into a persistent State-Action Knowledge Graph, and UI-Ins, which introduces the Instruction-as-Reasoning paradigm, treating instructions from multiple perspectives as reasoning traces to enhance GUI grounding. These advancements have the potential to transform desktop automation and enable the development of more robust and secure embodied agents.
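To make the State-Action Knowledge Graph idea concrete, the sketch below shows one plausible minimal form of such a structure: UI states as nodes, recorded actions as edges, with a breadth-first search that reuses past experience to find a known action sequence between two states. This is an illustrative assumption about the general technique, not the actual implementation from the Experience-Driven Exploration paper; the class and method names are hypothetical.

```python
from collections import defaultdict, deque
from typing import Optional

class StateActionGraph:
    """Hypothetical minimal state-action knowledge graph: nodes are
    observed UI states (e.g., screen fingerprints), directed edges are
    the actions that moved the agent between them."""

    def __init__(self) -> None:
        # state -> {action -> resulting state}
        self.edges: dict[str, dict[str, str]] = defaultdict(dict)

    def record(self, state: str, action: str, next_state: str) -> None:
        """Persist one observed transition into the graph."""
        self.edges[state][action] = next_state

    def shortest_path(self, start: str, goal: str) -> Optional[list[str]]:
        """BFS over recorded transitions; returns an action sequence
        reaching the goal state, or None if none is known yet."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions
            for action, nxt in self.edges[state].items():
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None
```

A usage pattern consistent with the paper's framing: the agent records pixel-level interactions as transitions during exploration, then later queries the graph instead of re-exploring, which is where the claimed efficiency gain comes from.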
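The hybrid action mechanism described above can be pictured as a dispatcher that routes each action either to a GUI backend (clicks, keystrokes) or to a programmatic tool backend (API calls). The sketch below is a generic illustration of that integration under assumed names (`Action`, `dispatch`, and the backend callables are all hypothetical), not the mechanism from any specific paper.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    """A single agent action: either a low-level GUI primitive
    or a high-level programmatic tool call."""
    kind: str                 # "gui" or "tool"
    name: str                 # e.g., "click" or "list_files"
    payload: dict = field(default_factory=dict)

def dispatch(action: Action,
             gui_backend: Callable[[str, dict], str],
             tool_backend: Callable[[str, dict], str]) -> str:
    """Route a hybrid action to the appropriate executor."""
    if action.kind == "gui":
        return gui_backend(action.name, action.payload)
    if action.kind == "tool":
        return tool_backend(action.name, action.payload)
    raise ValueError(f"unknown action kind: {action.kind}")
```

The design point is that both action kinds share one interface, so a planner can freely interleave pixel-level interaction with tool calls within a single trajectory.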