The field of large language models (LLMs) is moving toward scalable and autonomous systems, with a focus on distributed architectures and event-driven frameworks. Recent work has made persistent, embedded autonomy practical, allowing LLM-based agents to operate efficiently in resource-constrained environments. In parallel, there is a growing trend toward integrating reinforcement learning (RL) with LLMs to extend their capabilities while keeping behavior safe and goal-aligned. Noteworthy papers in this area include DistFlow, which introduces a fully distributed RL framework for scalable and efficient LLM post-training; Amico, which presents a modular, event-driven framework for building autonomous agents optimized for embedded systems; and AgentFly, which provides a scalable and extensible Agent-RL framework for empowering LM agents with a variety of RL algorithms.
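To make the event-driven agent style concrete, the sketch below shows a minimal agent that dispatches queued events to registered handlers rather than polling. This is a generic Python illustration under assumed names (`Event`, `Agent`, `on`, `emit`, `run_once`); it is not Amico's actual API, which targets embedded systems.

```python
# Minimal sketch of an event-driven agent loop (generic illustration;
# the Event/Agent/handler names are hypothetical, not Amico's API).
from dataclasses import dataclass, field
from queue import Empty, Queue
from typing import Callable, Dict, List


@dataclass
class Event:
    kind: str                 # e.g. "sensor_reading", "user_message"
    payload: dict = field(default_factory=dict)


class Agent:
    """Dispatches incoming events to registered handlers instead of polling."""

    def __init__(self) -> None:
        self.handlers: Dict[str, List[Callable[[Event], None]]] = {}
        self.inbox: "Queue[Event]" = Queue()

    def on(self, kind: str, handler: Callable[[Event], None]) -> None:
        self.handlers.setdefault(kind, []).append(handler)

    def emit(self, event: Event) -> None:
        self.inbox.put(event)

    def run_once(self) -> None:
        # Drain the queue; on an embedded target this loop would typically be
        # driven by interrupts or a lightweight scheduler rather than busy-waiting.
        while True:
            try:
                event = self.inbox.get_nowait()
            except Empty:
                break
            for handler in self.handlers.get(event.kind, []):
                handler(event)


if __name__ == "__main__":
    agent = Agent()
    agent.on("sensor_reading", lambda e: print("react to", e.payload))
    agent.emit(Event("sensor_reading", {"temp_c": 41.5}))
    agent.run_once()  # prints: react to {'temp_c': 41.5}
```

Registering handlers per event kind keeps the core loop small and reactive, which is the property that makes event-driven designs attractive for resource-constrained deployments.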