Responsible AI and Human-Centric Design

The field of computing research is shifting toward a more human-centric approach, with a focus on responsible AI and design ethics. Researchers are exploring new methods for anticipating and mitigating the risks of emerging technologies such as artificial intelligence. Scenario-building methods are increasingly popular, allowing researchers to map future trajectories of technology development and sociotechnical adoption. Another key area is the development of reparative actions in AI, with a focus on accountability and systemic change. The integration of augmented reality and human-computer interaction is also being explored, particularly in public safety applications. Researchers are further emphasizing the importance of stakeholder participation in responsible AI development and the need to address disconnects between current practice and published guidance. Noteworthy papers include "What Comes After Harm?", which develops a taxonomy of AI harm reparation based on a thematic analysis of real-world incidents, and "Stakeholder Participation for Responsible AI Development", which clarifies the extent to which established stakeholder-involvement practices can contribute to responsible AI efforts.

Sources

Scenarios in Computing Research: A Systematic Review of the Use of Scenario Methods for Exploring the Future of Computing Technologies in Society

What Comes After Harm? Mapping Reparative Actions in AI through Justice Frameworks

The Turn to Practice in Design Ethics: Characteristics and Future Research Directions for HCI Research

Exploring the Convergence of HCI and Evolving Technologies in Information Systems

Augmented Reality User Interfaces for First Responders: A Scoping Literature Review

Stakeholder Participation for Responsible AI Development: Disconnects Between Guidance and Current Practice

Speculative Design in Spiraling Time: Methods and Indigenous HCI
