Responsible AI and Human-Centric Design

The field of computing research is shifting toward a more human-centric approach, with a focus on responsible AI and design ethics. Researchers are exploring new methods for anticipating and mitigating the risks of emerging technologies such as artificial intelligence. Scenario-building methods are becoming increasingly popular, allowing researchers to map future trajectories of technology development and sociotechnical adoption. Another key area is the development of reparative actions in AI, with an emphasis on accountability and systemic change. The integration of Augmented Reality and Human-Computer Interaction is also being explored, particularly for public safety applications. Researchers further stress the importance of stakeholder participation in responsible AI development and the need to address disconnects between current practice and existing guidance.

Noteworthy papers include "What Comes After Harm", which develops a taxonomy of AI harm reparation based on a thematic analysis of real-world incidents, and "Stakeholder Participation for Responsible AI Development", which clarifies the extent to which established stakeholder involvement practices can contribute to responsible AI efforts.
Sources
Scenarios in Computing Research: A Systematic Review of the Use of Scenario Methods for Exploring the Future of Computing Technologies in Society
The Turn to Practice in Design Ethics: Characteristics and Future Research Directions for HCI Research