The field of GUI agents and automation is evolving rapidly, with a focus on building more intelligent and adaptive systems. Recent research applies large language models (LLMs) and reinforcement learning to help agents understand and interact with complex digital environments. Notable advances include frameworks for automated GUI testing, such as WebRLED and LELANTE, which have shown promising gains in test efficiency and accuracy. Researchers have also investigated multimodal agents and iterative trajectory exploration to strengthen the generalization and robustness of GUI agents. Overall, the field is moving toward more scalable and adaptable solutions for automating GUI interactions, with potential applications across many domains.

Noteworthy papers include:

- Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents, which makes the case for evaluating GUI agents against human-centered, trust-oriented criteria.
- Exploring a Large Language Model for Transforming Taxonomic Data into OWL, which demonstrates the potential of LLMs for automating taxonomy-related tasks.
- Deep Reinforcement Learning for Automated Web GUI Testing, which proposes an effective deep-reinforcement-learning approach to automated GUI testing.
- AndroidGen, a framework for enhancing the capabilities of LLM-based agents under data scarcity.
- PhenoAssistant, a conversational multi-agent AI system for automated plant phenotyping.
- LLM-Powered GUI Agents in Phone Automation, a survey of progress and prospects for LLM-driven phone GUI agents.