The field of AI research is shifting towards a stronger emphasis on ethics and trustworthiness. Papers in this area explore ways to build ethical considerations into the development process, such as using multi-agent systems to generate draft ethics requirements and combining artefact-based approaches with perspective-based methods to specify trustworthy-AI requirements. Another key direction is safety verification, including probabilistic modelling and verification as well as situation coverage analysis for eliciting robustness-related safety requirements. Researchers are also building frameworks for intelligent requirements development, such as knowledge-driven multi-agent systems, to support collaboration among stakeholders and improve the quality of software requirements specifications. Notable papers in this area include:
- Multi-Agent LLMs as Ethics Advocates in AI-Based Systems, which proposes a framework for generating draft ethics requirements using an ethics advocate agent in a multi-agent LLM setting (a minimal sketch of this pattern appears after this list).
- Probabilistic Safety Verification for an Autonomous Ground Vehicle, which presents a novel approach to safety verification based on systematic situation extraction, probabilistic modelling, and verification (a small worked reachability check in this style also follows below).
- iReDev: A Knowledge-Driven Multi-Agent Framework for Intelligent Requirements Development, which introduces six knowledge-driven agents that together support the entire requirements development process.
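
To make the ethics-advocate pattern concrete, the sketch below pairs a requirements-drafting agent with an ethics advocate agent that reviews the draft and appends ethics requirements. This is a minimal illustration, not the paper's implementation: the `Agent` class, the `call_llm` stub, and both prompts are assumptions standing in for whatever LLM backend and prompt design the authors actually use.

```python
# Minimal sketch of an ethics advocate agent in a multi-agent LLM setting.
# `call_llm` is a placeholder; swap in a real chat-completion client.

from dataclasses import dataclass


def call_llm(system: str, user: str) -> str:
    """Stand-in for an LLM chat-completion call so the sketch runs end to end."""
    return f"[{system[:30]}...] response to: {user[:40]}..."


@dataclass
class Agent:
    name: str
    system_prompt: str

    def respond(self, message: str) -> str:
        return call_llm(system=self.system_prompt, user=message)


analyst = Agent(
    name="requirements_analyst",
    system_prompt="Draft software requirements from the stakeholder description.",
)
ethics_advocate = Agent(
    name="ethics_advocate",
    system_prompt=(
        "Review the draft requirements and propose ethics requirements "
        "covering fairness, privacy, transparency, and accountability."
    ),
)


def draft_with_ethics_review(stakeholder_description: str) -> str:
    """Produce a requirements draft, then append the advocate's ethics requirements."""
    draft = analyst.respond(stakeholder_description)
    ethics_requirements = ethics_advocate.respond(draft)
    return f"{draft}\n\nEthics requirements (advocate agent):\n{ethics_requirements}"


if __name__ == "__main__":
    print(draft_with_ethics_review("A loan-approval assistant for a retail bank."))
```

In practice the stubbed `call_llm` would be replaced by a real LLM client, and the advocate's output would presumably be reviewed by human stakeholders before being folded into the requirements specification.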
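
The probabilistic verification step can be illustrated with a toy discrete-time Markov chain over abstract driving situations, bounding the probability of reaching an unsafe state within a fixed horizon. The states, transition probabilities, and threshold below are illustrative assumptions, not figures from the paper, which additionally relies on systematic situation extraction to build its models.

```python
# Toy probabilistic verification: bound the probability that a ground vehicle
# reaches the absorbing "collision" state within a fixed horizon. All numbers
# are illustrative assumptions, not values from the paper.

import numpy as np

states = ["nominal", "degraded_sensing", "emergency_stop", "collision"]
# transition[i, j] = P(next state = j | current state = i); each row sums to 1.
transition = np.array([
    [0.97, 0.02, 0.01, 0.00],  # nominal
    [0.60, 0.30, 0.09, 0.01],  # degraded_sensing
    [0.00, 0.00, 1.00, 0.00],  # emergency_stop (absorbing, safe)
    [0.00, 0.00, 0.00, 1.00],  # collision (absorbing, unsafe)
])


def reach_probability(target: str, horizon: int) -> float:
    """P(reaching `target` within `horizon` steps from 'nominal').

    Because `target` is absorbing, the probability mass sitting there after
    `horizon` steps equals the reachability probability."""
    dist = np.zeros(len(states))
    dist[states.index("nominal")] = 1.0
    for _ in range(horizon):
        dist = dist @ transition
    return float(dist[states.index(target)])


p_collision = reach_probability("collision", horizon=100)
print(f"P(collision within 100 steps) = {p_collision:.4f}")
# Illustrative safety requirement: collision probability below 5% over the horizon.
assert p_collision < 0.05, "probabilistic safety requirement violated"
```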