The field of AI governance and cybersecurity is evolving rapidly, with growing attention to technical mechanisms for attributing generative AI content and ensuring accountability. Watermarking, a leading technical proposal, is being reassessed to close the gap between regulatory expectations and current technical limitations.

Meanwhile, innovations in open-source RISC-V cores are improving both energy efficiency and performance, challenging the common assumption that the two must trade off against each other. Research on information leakage in real-time systems is also yielding new insights: statistical analysis of observable timing behavior can infer execution patterns and pinpoint critical invocations.

As AI-enabled cyber capabilities advance, strategies such as differential access are being explored to tilt the cybersecurity balance toward defense. Work on flexible hardware-enabled guarantees is likewise underway, examining technical options for AI governance mechanisms such as verifiable claims about compute usage and physical tamper protection.

Noteworthy papers in this area include:

- Watermarking Without Standards Is Not AI Governance, which proposes a three-layer framework to realign watermarking practice with governance goals.
- Ramping Up Open-Source RISC-V Cores, which presents a modified out-of-order C910 core and an enhanced CVA6 core, achieving significant performance improvements.
- Asterinas, which proposes a novel OS architecture called framekernel to achieve intra-kernel privilege separation while keeping the Trusted Computing Base minimal and sound.
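Watermark-based attribution of the kind discussed above ultimately rests on a statistical detection test. As a minimal illustrative sketch (not the scheme from any paper cited here), the following assumes a green-list watermark: a hash of the previous token seeds an RNG that selects a "green" subset of the vocabulary, and detection computes a z-score on how often generated tokens land in that subset.

```python
import hashlib
import math
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Seed an RNG from the previous token and select a 'green' vocabulary subset."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def z_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """z-score of the green-token count against the unwatermarked null hypothesis."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab, fraction)
    )
    n = len(tokens) - 1
    expected = fraction * n
    variance = n * fraction * (1 - fraction)
    return (hits - expected) / math.sqrt(variance)
```

A generator that biases sampling toward the green list produces text whose z-score grows with length, while ordinary text scores near zero; the governance gap arises because thresholds, keys, and robustness requirements for such tests are left unstandardized.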
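The timing-leakage line of work can be illustrated with a simple statistical test. As a sketch under assumed names (`flag_critical` is hypothetical, not an API from the research described above), an observer who has profiled a baseline distribution of execution times can flag invocations whose measured times deviate by more than k standard deviations, inferring which invocations took a different, possibly critical, code path:

```python
import statistics

def flag_critical(observed: list[float], baseline: list[float], k: float = 3.0) -> list[int]:
    """Return indices of invocations whose measured execution time exceeds
    the baseline mean by more than k standard deviations."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return [i for i, t in enumerate(observed) if t > mu + k * sigma]

# Baseline profile of ordinary invocations (times in ms).
baseline = [1.00, 1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00]
# Observed trace: invocations 1 and 3 ran a longer code path.
observed = [1.01, 1.50, 0.99, 1.45, 1.02]
print(flag_critical(observed, baseline))  # → [1, 3]
```

The same outlier logic that lets a defender find anomalies lets an attacker infer scheduling structure, which is why this channel matters for real-time systems.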
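One software-level building block behind verifiable claims about compute usage is an append-only, tamper-evident log. As an illustrative sketch (the record fields and function names here are assumptions, not a real attestation API), each entry's digest chains over the previous entry's digest, so any retroactive edit invalidates every later entry:

```python
import hashlib
import json

GENESIS = "0" * 64  # digest chained into the first entry

def append_entry(log: list[dict], record: dict) -> None:
    """Append a record, chaining its digest over the previous entry's digest."""
    prev = log[-1]["digest"] if log else GENESIS
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "digest": digest})

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any edited or reordered entry breaks verification."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

log: list[dict] = []
append_entry(log, {"job": "train-run-1", "gpu_hours": 128})
append_entry(log, {"job": "train-run-2", "gpu_hours": 64})
assert verify(log)
log[0]["record"]["gpu_hours"] = 8  # retroactive tampering
assert not verify(log)
```

Hardware-enabled guarantees aim to anchor such logs in tamper-protected hardware rather than mutable software, which is what makes the resulting claims verifiable by third parties.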