Advances in Human-AI Collaboration and Ethics

The field of human-AI collaboration is evolving rapidly, with a growing focus on establishing ethical, social, and safety standards for the development and operation of AI systems. Recent work stresses that AI systems must adapt continuously to the needs of users and their environment, and that this evolution must stay synchronized with changes in both to prevent ethical and safety problems. Noteworthy papers include a methodological framework for assessing the human rights impact of AI systems under the EU AI Act, work that delineates how machine learning decision systems affect human autonomy and aims to foster awareness of those effects, an architecture and framework for the unified and incremental development of AI ethics, and analyses of how international standards support risk management, data quality, bias mitigation, and governance. Together, these contributions advance the field and promote closer alignment with human rights principles and regulatory compliance.
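None of the cited papers prescribes a specific metric, but the kind of bias check that standards-driven risk assessments call for can be illustrated concretely. The sketch below is a minimal, hypothetical example (the group labels, data, and threshold are assumptions, not taken from the sources): it computes a demographic-parity gap, i.e. the spread in positive-decision rates across groups, which an assessment process might flag when it exceeds a chosen tolerance.

```python
# Illustrative sketch only: a demographic-parity check of the kind a
# standards-based bias-mitigation assessment might include.
# Group labels, sample data, and the 0.2 threshold are hypothetical.
from collections import defaultdict


def demographic_parity_gap(decisions, groups):
    """Return (gap, per-group rates) for binary decisions across groups.

    decisions: iterable of 0/1 outcomes produced by the AI system.
    groups:    iterable of group labels, aligned with decisions.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates


if __name__ == "__main__":
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(decisions, groups)
    print(f"per-group positive rates: {rates}")
    # An assessment might flag the system if the gap exceeds a chosen threshold.
    print(f"demographic parity gap: {gap:.2f} (flag if > 0.2)")
```

Such a check is only one input to the broader impact-assessment and governance processes the papers describe; the choice of metric and threshold remains a policy decision.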

Sources

Enhancing Human-Robot Collaboration through Existing Guidelines: A Case Study Approach

Aportes para el cumplimiento del Reglamento (UE) 2024/1689 en robótica y sistemas autónomos (Contributions toward compliance with Regulation (EU) 2024/1689 in robotics and autonomous systems)

HH4AI: A Methodological Framework for AI Human Rights Impact Assessment under the EU AI Act

Safeguarding Autonomy: a Focus on Machine Learning Decision Systems

e-person Architecture and Framework for Human-AI Co-adventure Relationship
