The field of AI security and forensics is evolving rapidly, with a growing focus on proactive threat simulation, adversarial testing, and autonomous decision-making. Researchers are developing new approaches to identify and mitigate risks in AI systems, including asset-centric threat modeling, offensive security frameworks, and in-kernel forensics engines. These advances aim to address the distinct security challenges posed by integrated AI agents and to improve the resilience of AI-driven technologies. Noteworthy papers in this area include UEberForensIcs and LASE, which introduce novel methodologies for robust forensics and threat analysis, and LibVulnWatch, which uncovers hidden vulnerabilities in open-source AI libraries to support more informed library selection and supply chain risk assessment.