Advancements in AI Safety and Responsibility

The field of artificial intelligence is placing greater emphasis on safety and responsibility, driven by the increasing capabilities and potential risks of foundation models. Researchers are exploring new approaches to AI safety, including open-source tooling and participatory mechanisms for mitigating potential harms. Key focus areas are the creation of robust, transparent content filters and the establishment of clear guidelines for deploying AI systems across contexts.
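
As a rough illustration of what a transparent content filter can look like in practice, the minimal sketch below pairs every allow/block decision with the rule that produced it, so decisions can be audited and contested. The rule identifiers, patterns, and rationales are hypothetical assumptions for illustration only and are not drawn from the papers listed here.

```python
# Minimal sketch of a transparent, rule-based content filter.
# Each decision records which rule fired and why, making the filter auditable.
# Rule IDs, patterns, and rationales below are hypothetical examples.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    allowed: bool
    rule_id: Optional[str]  # which rule triggered the block, if any
    rationale: str          # human-readable explanation for auditing

# Hypothetical rule set: (rule_id, compiled pattern, rationale shown to auditors)
RULES = [
    ("R1-pii-email", re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
     "contains an email address (personal data)"),
    ("R2-credential", re.compile(r"(?i)\b(api[_-]?key|password)\s*[:=]\s*\S+"),
     "contains what looks like a credential"),
]

def filter_text(text: str) -> Decision:
    """Return an auditable allow/block decision for a piece of text."""
    for rule_id, pattern, rationale in RULES:
        if pattern.search(text):
            return Decision(allowed=False, rule_id=rule_id, rationale=rationale)
    return Decision(allowed=True, rule_id=None, rationale="no rule matched")

if __name__ == "__main__":
    print(filter_text("contact me at alice@example.com"))  # blocked by R1-pii-email
    print(filter_text("the weather is nice today"))        # allowed
```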

Noteworthy papers in this area include:

  • A Different Approach to AI Safety, which reports on the outcomes of the Columbia Convening on Openness in Artificial Intelligence and AI Safety and proposes a roadmap of future research directions.
  • The Societal Impact of Foundation Models, which provides a comprehensive analysis of the coevolution of technology and society in the age of AI and offers insights into the development of evidence-based AI policy.

Sources

  • A Different Approach to AI Safety: Proceedings from the Columbia Convening on Openness in Artificial Intelligence and AI Safety
  • Report on NSF Workshop on Science of Safe AI
  • What's Privacy Good for? Measuring Privacy as a Shield from Harms due to Personal Data Use
  • The Societal Impact of Foundation Models: Advancing Evidence-based AI Policy
  • AI Risk-Management Standards Profile for General-Purpose AI (GPAI) and Foundation Models
  • Intellectual Property Rights and Entrepreneurship in the NFT Ecosystem: Legal Frameworks, Business Models, and Innovation Opportunities
  • Can AI be Consentful?
  • Rational Censorship Attack: Breaking Blockchain with a Blackboard
  • Recourse, Repair, Reparation, & Prevention: A Stakeholder Analysis of AI Supply Chains
