Advancements in AI Governance and Trustworthiness

The field of AI research is moving toward a more holistic understanding of AI systems and their interactions with humans. There is growing recognition of the need to integrate physical and social dynamics in world models, and to prioritize trustworthiness and transparency in AI development. Researchers are exploring new approaches to AI governance, including the use of analogies from other fields, such as nuclear weapons, to inform policy development. The importance of human resilience and adaptability in the face of AI-driven change is also being emphasized. Notable papers in this area include 'World Models Should Prioritize the Unification of Physical and Social Dynamics', which argues for a more integrated approach to world modeling, and 'Understanding AI Trustworthiness: A Scoping Review of AIES & FAccT Articles', which highlights the need for a broader view of what makes AI systems trustworthy. Additionally, 'Agentic AI: A Comprehensive Survey of Architectures, Applications, and Future Directions' analyzes agentic AI systems and their potential applications.

Sources
A quality of mercy is not trained: the imagined vs. the practiced in healthcare process-specialized AI development
Embracing Trustworthy Brain-Agent Collaboration as Paradigm Extension for Intelligent Assistive Technologies
Teaching Probabilistic Machine Learning in the Liberal Arts: Empowering Socially and Mathematically Informed AI Discourse
Tackling the Algorithmic Control Crisis -- the Technical, Legal, and Ethical Challenges of Research into Algorithmic Agents