The field of AI safety is evolving rapidly, with growing focus on guardrails that prevent harm and support responsible AI deployment. Recent research emphasizes addressing risks at the planning stage rather than relying solely on post-execution measures: certain risks, once carried out, have consequences that post-hoc measures cannot undo, so intervening early is essential to preventing harm. Notable papers in this area include:

- Building a Foundational Guardrail for General Agentic Systems via Synthetic Data, which introduces a controllable engine for synthesizing benign trajectories and a foundational guardrail for pre-execution safety.
- From Refusal to Recovery: A Control-Theoretic Approach to Generative AI Guardrails, which proposes predictive guardrails that proactively correct risky outputs into safe ones rather than simply refusing.
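To make the pre-execution idea concrete, here is a minimal sketch in Python of the two guardrail styles in combination: a screen that vets an agent's plan before any step runs, and a corrective fallback that replaces a risky step with a safe one instead of refusing outright. All names here (PlannedStep, guard_plan, is_risky, safe_fallback) are hypothetical illustrations, not APIs or methods from either paper.

```python
# Illustrative sketch only -- names and logic are assumptions,
# not the actual methods of the papers cited above.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class PlannedStep:
    tool: str
    args: dict

def guard_plan(
    plan: list[PlannedStep],
    is_risky: Callable[[PlannedStep], bool],
    safe_fallback: Callable[[PlannedStep], Optional[PlannedStep]],
) -> list[PlannedStep]:
    """Vet each step of an agent's plan *before* execution.

    Pre-execution screening means a risky step never runs. Recovery means
    the guardrail first tries to substitute a safe alternative for a risky
    step, and drops the step only if no safe correction exists.
    """
    vetted: list[PlannedStep] = []
    for step in plan:
        if not is_risky(step):
            vetted.append(step)
            continue
        corrected = safe_fallback(step)  # attempt recovery, not refusal
        if corrected is not None and not is_risky(corrected):
            vetted.append(corrected)
        # else: block just this step (or escalate, depending on policy)
    return vetted

# Toy example: flag destructive shell commands, recover by logging instead.
def toy_risk(step: PlannedStep) -> bool:
    return step.tool == "shell" and "rm " in step.args.get("cmd", "")

def toy_fallback(step: PlannedStep) -> Optional[PlannedStep]:
    return PlannedStep(tool="log", args={"msg": f"blocked: {step.args.get('cmd', '')}"})

plan = [
    PlannedStep("search", {"query": "disk cleanup"}),
    PlannedStep("shell", {"cmd": "rm -rf /tmp/cache"}),
]
print(guard_plan(plan, toy_risk, toy_fallback))
# The risky shell step is replaced by a safe log step; the rest run unchanged.
```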