The field of artificial intelligence is moving toward greater transparency and accountability, with a focus on developing trustworthy, ethically aligned systems. Recent research highlights the importance of transparency in AI decision-making and the need for standardized metrics and explainable AI techniques to support accountability. The development of frameworks and guidelines for AI governance is another key area of focus, with an emphasis on ensuring that AI systems are designed and deployed responsibly and ethically. Noteworthy papers in this area include 'Towards Transparent Ethical AI: A Roadmap for Trustworthy Robotic Systems', which proposes a framework for implementing transparency in AI systems; 'A Moral Agency Framework for Legitimate Integration of AI in Bureaucracies', which presents a three-point framework for ensuring that AI systems are used in ways consistent with human values and principles; and 'Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance', which offers a comprehensive governance framework integrating technical and societal dimensions to promote transparency, accountability, and trust in AI systems.