Shaping the Future of AI: Governance, Trust, and Localness

The field of artificial intelligence is undergoing significant transformation, driven by technological advances, shifting societal norms, and evolving governance structures. A notable trend is the increasing focus on the interplay between AI, international law, and the tech-industrial complex, which highlights the need for a serious conversation about distribution, global equity, and democratic oversight. Relatedly, the potential for cooperation between the U.S. and China on AI risks and governance is being investigated, with identified areas of convergence that could facilitate bilateral dialogue.

Another area of innovation is the systematic study of the Uncanny Valley Effect and its impact on trust in human-agent interaction, which has yielded a deeper understanding of the complexities of trust measurement and novel frameworks for classifying trust approaches. Research is also exploring the notion of localness in people and machines, recognizing its importance for maintaining trust and credibility in location-based services and community-driven platforms.

The concept of openness in AI is being reexamined as well, with efforts to broaden the discussion beyond the open-source software paradigm toward a more holistic view of openness that encompasses actions, system properties, and ethical objectives. On the fairness front, there is growing interest in laypeople's attitudes toward fair, affirmative, and discriminatory decision-making algorithms, with implications for developing more equitable and just AI systems, and preliminary frameworks for intersectionality in machine learning pipelines are being developed to support more equitable technological outcomes. Finally, the value of disagreement in AI design, evaluation, and alignment is under study, with a proposed normative framework to guide practical reasoning about disagreement and its benefits.
Noteworthy papers include 'Silicon Sovereigns', which explores the shift in power from governments to tech firms; 'Would You Rely on an Eerie Agent?', which systematically reviews the impact of the Uncanny Valley Effect on trust; and 'The Value of Disagreement in AI Design, Evaluation, and Alignment', which introduces the notion of perspectival homogenization as a coupled ethical-epistemic risk.

Sources

Silicon Sovereigns: Artificial Intelligence, International Law, and the Tech-Industrial Complex

Would You Rely on an Eerie Agent? A Systematic Review of the Impact of the Uncanny Valley Effect on Trust in Human-Agent Interaction

Opening the Scope of Openness in AI

A Turing Test for "Localness": Conceptualizing, Defining, and Recognizing Localness in People and Machines

Laypeople's Attitudes Towards Fair, Affirmative, and Discriminatory Decision-Making Algorithms

Promising Topics for U.S.-China Dialogues on AI Risks and Governance

The Value of Disagreement in AI Design, Evaluation, and Alignment

A Preliminary Framework for Intersectionality in ML Pipelines

Which Demographic Features Are Relevant for Individual Fairness Evaluation of U.S. Recidivism Risk Assessment Tools?
