Embodied AI and AI Governance

The field of embodied AI (EAI) is advancing rapidly, bringing both significant opportunities and emerging risks. Recent innovations in large language and multimodal models, together with increasingly capable and responsive hardware, have expanded the capabilities and operational domains of embodied AI systems. These advances, however, also carry risks, including physical harm from malicious use, mass surveillance, and economic and societal disruption. Addressing these risks will require urgently extending and adapting existing policy frameworks to account for the unique characteristics of embodied AI.

A key direction in the field is the development of frameworks for evaluating the ethics and trustworthiness of AI systems. This work identifies core evaluation dimensions, such as fairness, transparency, and accountability, and develops detailed indicators and assessment methodologies for each.

Another important area of research is accountability for AI systems, particularly in high-stakes domains such as healthcare. This work analyzes existing regulatory frameworks and develops new ones designed to ensure joint accountability in decision-making.

Noteworthy papers in this area include "Embodied AI: Emerging Risks and Opportunities for Policy Action," which provides a foundational taxonomy of key physical, informational, economic, and social EAI risks and offers concrete policy recommendations; "A Study on the Framework for Evaluating the Ethics and Trustworthiness of Generative AI," which proposes a comprehensive framework for evaluating the ethics and trustworthiness of generative AI systems; and "ANNIE: Be Careful of Your Robots," which presents the first systematic study of adversarial safety attacks on embodied AI systems and highlights the urgent need for security-driven defenses in the physical AI era.

Sources

Embodied AI: Emerging Risks and Opportunities for Policy Action

A Study on the Framework for Evaluating the Ethics and Trustworthiness of Generative AI

Can AI be Auditable?

AGI as Second Being: The Structural-Generative Ontology of Intelligence

Can Media Act as a Soft Regulator of Safe AI Development? A Game Theoretical Analysis

Accountability Framework for Healthcare AI Systems: Towards Joint Accountability in Decision Making

ANNIE: Be Careful of Your Robots

The human biological advantage over AI

Governing AI R&D: A Legal Framework for Constraining Dangerous AI

Compliance with Regulation (EU) 2024/1689 in Robotics and Autonomous Systems: A Systematic Literature Review

Operationalising AI Regulatory Sandboxes under the EU AI Act: The Triple Challenge of Capacity, Coordination and Attractiveness to Providers

AI Governance in Higher Education: A course design exploring regulatory, ethical and practical considerations
