Advancements in AI Safety and Literacy

The field of artificial intelligence is placing greater emphasis on safety and literacy, particularly with regard to protecting young users and building more transparent, trustworthy AI systems. Recent research highlights the importance of evaluating and improving AI models' ability to detect and mitigate harmful interactions, as well as the need for more effective content moderation on social media platforms. There is also a growing focus on fostering AI literacy in children, with studies exploring their mental models of AI reasoning and developing tools that help them critically engage with generative AI. Noteworthy papers in this area include Selective Code Generation for Functional Guarantees, which proposes a method for controlling hallucination in AI-generated code by generating selectively, as sketched below; Children's Mental Models of AI Reasoning, which identifies three models of AI reasoning held by children and draws out implications for AI literacy education; and AI Puzzlers, an interactive system that helps children identify and analyze reasoning errors in generative AI.
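
The selective-generation idea lends itself to a simple illustration. The sketch below is a minimal interpretation of the general technique, not the paper's actual method: a hypothetical selective_generate wrapper returns a candidate program only when a confidence score clears a threshold, and explicitly abstains otherwise. All names, the threshold value, and the toy "does it compile" scoring heuristic are assumptions made for illustration.

```python
import random
from typing import Callable, Optional

def selective_generate(
    generate: Callable[[], str],    # hypothetical code-generating model
    score: Callable[[str], float],  # hypothetical confidence estimator in [0, 1]
    threshold: float = 0.9,
    max_attempts: int = 5,
) -> Optional[str]:
    """Return generated code only when its confidence score clears the
    threshold; otherwise abstain (return None) rather than emit a
    possibly hallucinated, non-functional program."""
    for _ in range(max_attempts):
        candidate = generate()
        if score(candidate) >= threshold:
            return candidate
    return None  # abstain: no candidate met the confidence bar

if __name__ == "__main__":
    # Toy stand-ins: a "model" that sometimes emits broken code, and a
    # "scorer" that merely checks whether the snippet compiles.
    snippets = [
        "def add(a, b):\n    return a + b",    # valid
        "def add(a, b) return a + b",          # syntax error
    ]
    gen = lambda: random.choice(snippets)

    def compiles(src: str) -> float:
        try:
            compile(src, "<candidate>", "exec")
            return 1.0
        except SyntaxError:
            return 0.0

    result = selective_generate(gen, compiles)
    print(result if result is not None else "abstained")
```

The key design point is that abstention is an explicit output: a downstream system can fall back to a human reviewer or a safer default instead of shipping code the model was unsure about.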

Sources

Understanding Gen Alpha Digital Language: Evaluation of LLM Safety Systems for Content Moderation

Protecting Young Users on Social Media: Evaluating the Effectiveness of Content Moderation and Legal Safeguards on Video Sharing Platforms

Selective Code Generation for Functional Guarantees

Children's Mental Models of AI Reasoning: Implications for AI Literacy Education

"AI just keeps guessing": Using ARC Puzzles to Help Children Identify Reasoning Errors in Generative AI

"If anybody finds out you are in BIG TROUBLE": Understanding Children's Hopes, Fears, and Evaluations of Generative AI

AutoMCQ -- Automatically Generate Code Comprehension Questions using GenAI
