The field of artificial intelligence and digital engineering is evolving rapidly, with growing recognition of the need to address sociotechnical challenges and ensure responsible AI development. Recent research has emphasized the social and cultural context in which AI systems are designed and deployed, particularly in non-Western settings, and a key area of focus is the development of AI systems that are culturally grounded, equitable, and responsive to the needs of diverse populations. Researchers are also calling for more nuanced, context-sensitive approaches to AI risk management, including metrics and models that can capture and mitigate the complex risks these systems pose. Noteworthy papers in this area include work on a decolonial mindset for indigenising computing education and a taxonomy of expert perspectives on the risks and likely consequences of artificial intelligence. A paper on designing culturally aligned AI systems for social good in non-Western contexts also stands out, highlighting the extensive collaboration between AI developers and domain experts required to make AI systems safe and effective in high-stakes domains.
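To make the idea of a contextual risk taxonomy with associated metrics more concrete, the sketch below shows one possible way to encode taxonomy entries and rank them by a severity-weighted score. This is a minimal illustration, not a method from any of the papers discussed above; the names (RiskEntry, priority_score) and the example risks are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    """Coarse severity bands for an identified AI risk."""
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class RiskEntry:
    """One entry in a hypothetical risk taxonomy: a named risk, the
    deployment context it applies to, and expert-elicited ratings."""
    name: str
    context: str                      # e.g. "public-health messaging, non-Western deployment"
    severity: Severity
    likelihood: float                 # expert-elicited probability in [0, 1]
    mitigations: list[str] = field(default_factory=list)

    def priority_score(self) -> float:
        """Simple severity-weighted likelihood; a real framework would
        use a richer, context-dependent aggregation."""
        return self.severity.value * self.likelihood


# Example usage: rank a small taxonomy by priority score.
taxonomy = [
    RiskEntry("culturally misaligned recommendations",
              "public-health messaging", Severity.HIGH, 0.4,
              ["domain-expert review", "local pilot studies"]),
    RiskEntry("training-data representation gaps",
              "non-Western language support", Severity.MODERATE, 0.7,
              ["targeted data collection"]),
]
for entry in sorted(taxonomy, key=RiskEntry.priority_score, reverse=True):
    print(f"{entry.name}: {entry.priority_score():.2f}")
```

The point of the sketch is only that a taxonomy becomes actionable once each entry carries its deployment context and an explicit, if crude, prioritisation metric; the aggregation rule itself would need to be designed with the domain experts the surveyed work calls for.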