The field of generative artificial intelligence is moving towards the development of more transparent, accountable, and safe models. Researchers are working on creating frameworks for evaluating and comparing the characteristics of open and closed generative AI models, with a focus on dimensions such as openness, public governance, and security. There is also a growing emphasis on identifying and mitigating potential risks and challenges associated with the deployment of generative AI systems, including social biases, harmful content generation, and sycophancy. Furthermore, efforts are being made to advance the development of large language models for low-resource languages, such as Greek, and to conduct comprehensive ethical evaluations of open-source generative large language models. Notable papers in this area include:
- A study that proposes a framework for developing an Open, Public, and Safe Gen AI framework, highlighting the importance of multi-stakeholder governance and regulatory frameworks.
- Phare, a multilingual diagnostic framework for evaluating LLM behavior across critical dimensions, revealing patterns of systematic vulnerabilities.
- An introduction of a richer theory of social sycophancy in LLMs, characterizing sycophancy as the excessive preservation of a user's face, and presenting a framework for evaluating social sycophancy.
- Llama-Krikri-8B, a cutting-edge Large Language Model tailored for the Greek language, offering advanced capabilities and efficient computational performance.
- OpenEthics, a comprehensive ethical evaluation of open-source generative large language models, analyzing model behavior in both English and Turkish, and providing a guide for safer model development.