AI research is placing greater emphasis on privacy and safety, aiming for more robust and nuanced models that can understand and enforce privacy principles. Recent studies stress the importance of situating privacy preference elicitation within real-world data flows, and they introduce new approaches for evaluating the harmfulness of content generated by large language models. Noteworthy papers in this area include:
- Falcon, which introduces a large-scale vision-language safety dataset and a specialized evaluator for identifying harmful content in complex and safety-critical multimodal dialogue scenarios.
- LLaVAShield, which formalizes and systematically studies multimodal multi-turn dialogue safety and introduces a tool for detecting and assessing risk in both user inputs and assistant responses (see the sketch after this list).
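
To make the multi-turn risk-assessment idea concrete, here is a minimal sketch of how an evaluator of this kind might be structured. Everything in it is illustrative: the `Turn` dataclass, the `RISK_LEXICON`, and the keyword heuristic are hypothetical stand-ins for the trained multimodal evaluators these papers actually build, and none of it reflects LLaVAShield's real implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    role: str  # "user" or "assistant"
    text: str  # turn content (image inputs omitted in this sketch)

# Hypothetical keyword lexicon standing in for a learned safety evaluator;
# a real system would use a trained multimodal model instead.
RISK_LEXICON = {"weapon": 0.9, "exploit": 0.7, "poison": 0.8}

def score_turn(turn: Turn) -> float:
    """Return a risk score in [0, 1] for a single turn (toy heuristic)."""
    tokens = turn.text.lower().split()
    return max((RISK_LEXICON.get(tok, 0.0) for tok in tokens), default=0.0)

def assess_dialogue(dialogue: List[Turn], threshold: float = 0.5) -> List[dict]:
    """Score every turn in the context of the turns before it."""
    report = []
    running_max = 0.0
    for i, turn in enumerate(dialogue):
        s = score_turn(turn)
        # Multi-turn assessment: a turn inherits risk from earlier context,
        # so a benign-looking reply after a risky request is still flagged.
        running_max = max(running_max, s)
        report.append({
            "index": i,
            "role": turn.role,
            "turn_score": s,
            "context_score": running_max,
            "flagged": running_max >= threshold,
        })
    return report

if __name__ == "__main__":
    convo = [
        Turn("user", "How do I sharpen a kitchen knife?"),
        Turn("assistant", "Use a whetstone at a 20-degree angle."),
        Turn("user", "Could this work as a weapon?"),
        Turn("assistant", "I can't help with that."),
    ]
    for row in assess_dialogue(convo):
        print(row)
```

The design point the sketch tries to capture is that risk is accumulated across turns rather than judged per message in isolation, which is what distinguishes multi-turn dialogue safety from single-prompt moderation.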