The field of embodied AI (EAI) is advancing rapidly, and significant risks and opportunities are emerging alongside it. Recent innovations in large language and multimodal models, together with increasingly capable and responsive hardware, have expanded the capabilities and operational domains of embodied AI systems. These advances also present risks, however, including physical harm from malicious use, mass surveillance, and economic and societal disruption. Addressing them will require urgently extending and adapting existing policy frameworks to account for the distinctive hazards of embodied AI.
A key direction in the field is the development of frameworks for evaluating the ethics and trustworthiness of AI systems. This involves identifying key dimensions for evaluation, such as fairness, transparency, and accountability, and then defining detailed indicators and assessment methodologies for each dimension.
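To make the idea of indicator-based assessment concrete, here is a minimal Python sketch of how dimension-level indicators might be aggregated into a single trustworthiness score. The dimension names follow the text above; the indicator names, weights, and averaging rule are illustrative assumptions, not the methodology of any cited framework.

```python
from dataclasses import dataclass

# Hypothetical indicator scores, each normalized to [0, 1]. The dimensions
# (fairness, transparency, accountability) come from the text; the specific
# indicators and weights are assumptions for illustration only.

@dataclass
class DimensionAssessment:
    name: str
    indicators: dict[str, float]  # indicator name -> score in [0, 1]

    def score(self) -> float:
        """Average the indicator scores for this dimension."""
        return sum(self.indicators.values()) / len(self.indicators)

def trustworthiness_score(dimensions: list[DimensionAssessment],
                          weights: dict[str, float]) -> float:
    """Weighted aggregate across dimensions; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[d.name] * d.score() for d in dimensions)

assessment = [
    DimensionAssessment("fairness", {"demographic_parity_gap": 0.8,
                                     "equalized_odds_gap": 0.7}),
    DimensionAssessment("transparency", {"model_card_completeness": 0.9,
                                         "decision_explainability": 0.6}),
    DimensionAssessment("accountability", {"audit_trail_coverage": 0.75,
                                           "redress_process": 0.5}),
]
print(trustworthiness_score(assessment, {"fairness": 0.4,
                                         "transparency": 0.3,
                                         "accountability": 0.3}))
```

The design choice worth noting is the separation of per-dimension scoring from cross-dimension weighting: it lets evaluators refine indicators within one dimension without disturbing the overall aggregation scheme.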
Another important area of research is accountability frameworks for AI systems, particularly in high-stakes domains such as healthcare. Work here includes analyzing existing regulatory frameworks and designing new mechanisms that ensure joint accountability in decision-making.
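As one way to picture what joint accountability might require in practice, the sketch below records both the AI system's recommendation and the clinician's final decision so that responsibility for each step stays traceable. All field names and the override rule are hypothetical assumptions, not a schema from any cited framework.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical audit record for joint human-AI accountability in a
# clinical decision; structure is illustrative only.

@dataclass(frozen=True)
class ClinicalDecisionRecord:
    case_id: str
    model_id: str            # version of the AI system that advised
    ai_recommendation: str
    ai_confidence: float     # model-reported confidence in [0, 1]
    clinician_id: str
    final_decision: str
    override: bool           # True if the clinician rejected the AI advice
    rationale: str           # justification, required on override
    timestamp: datetime

def log_decision(record: ClinicalDecisionRecord,
                 audit_log: list[ClinicalDecisionRecord]) -> None:
    """Append an immutable record so both parties' roles stay traceable."""
    if record.override and not record.rationale:
        raise ValueError("overrides must carry a documented rationale")
    audit_log.append(record)

audit_log: list[ClinicalDecisionRecord] = []
log_decision(ClinicalDecisionRecord(
    case_id="case-0042", model_id="triage-model-v1",
    ai_recommendation="admit", ai_confidence=0.82,
    clinician_id="dr-lee", final_decision="discharge",
    override=True, rationale="imaging contradicts model input data",
    timestamp=datetime.now(timezone.utc),
), audit_log)
```

Making the record immutable and requiring a rationale on overrides reflects the intuition that joint accountability depends on neither party being able to silently revise or omit their part of the decision trail.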
Noteworthy papers in this area include: "Embodied AI: Emerging Risks and Opportunities for Policy Action," which provides a foundational taxonomy of key physical, informational, economic, and social EAI risks and offers concrete policy recommendations; "A Study on the Framework for Evaluating the Ethics and Trustworthiness of Generative AI," which proposes a comprehensive framework for evaluating the ethics and trustworthiness of generative AI systems; and "ANNIE: Be Careful of Your Robots," which presents the first systematic study of adversarial safety attacks on embodied AI systems and highlights the urgent need for security-driven defenses in the physical AI era.