The field of artificial intelligence is moving towards greater transparency and accountability, with a particular focus on explainable AI and autonomous systems. Researchers are developing new frameworks and metrics for evaluating the safety and reliability of AI systems, such as the Modified-Emergency Index for autonomous driving and marginal risk assessment frameworks. There is also growing emphasis on regulation and governance, with the EU AI Act cited as a model for risk-based, responsibility-driven regulation. In parallel, researchers are exploring new approaches to explanation and transparency, including verifiable reasoning agents and causal faithfulness analysis. Overall, the field is shifting towards a more nuanced understanding of the relationships between AI systems, human users, and societal context.

Noteworthy papers include:

- MARIA, which proposes a marginal risk assessment framework that avoids dependence on ground truth or absolute risk (a comparative sketch follows this list).
- Modified-Emergency Index, which refines the estimation of the time available for evasive maneuvers in lateral conflicts (see the second sketch below).
- Position Paper, which challenges the entrenched belief that regulation and innovation are opposites and examines the EU AI Act as a model for risk-based regulation.
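The core idea behind marginal risk assessment is comparative: rather than scoring a candidate system against labeled ground truth, it is judged only by how its behavior diverges from a reference system on the same scenarios. The following is a minimal sketch of that comparative pattern, not MARIA's actual method; the function name, the boolean per-scenario failure flags, and the net regression-rate score are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class MarginalRiskReport:
    regressions: int      # scenarios the candidate handles worse than the reference
    improvements: int     # scenarios the candidate handles better
    marginal_risk: float  # net regression rate; positive means added risk

def marginal_risk(reference_flags: Sequence[bool],
                  candidate_flags: Sequence[bool]) -> MarginalRiskReport:
    """Compare per-scenario failure flags (True = unsafe behavior observed).

    No ground-truth labels are needed: each system is judged only
    relative to the other on a shared scenario set.
    """
    if len(reference_flags) != len(candidate_flags):
        raise ValueError("both systems must be run on the same scenarios")
    regressions = sum(1 for ref, cand in zip(reference_flags, candidate_flags)
                      if cand and not ref)   # candidate fails where reference did not
    improvements = sum(1 for ref, cand in zip(reference_flags, candidate_flags)
                       if ref and not cand)  # candidate fixes a reference failure
    n = len(reference_flags)
    return MarginalRiskReport(regressions, improvements,
                              (regressions - improvements) / n)

# Example: one regression and two improvements across five scenarios.
report = marginal_risk([False, True, True, False, False],
                       [True, False, False, False, False])
print(report)  # marginal_risk = -0.2, i.e. a net safety gain vs. the reference
```

The appeal of this framing is that it sidesteps the hardest part of absolute risk estimation, deciding what the "true" safe behavior was, and replaces it with a pairwise comparison that only requires running both systems on identical inputs.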
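The Modified-Emergency Index's actual formulation is not reproduced here. As a heavily simplified illustration of the quantity it refines, the time available for an evasive maneuver in a lateral conflict can be sketched as the remaining lateral gap divided by the lateral closing speed; the function name and this single-ratio kinematic model are assumptions for illustration only.

```python
def available_maneuver_time(lateral_gap_m: float,
                            lateral_closing_speed_mps: float) -> float:
    """Crude estimate of the time (s) left to begin an evasive maneuver
    in a lateral conflict: remaining lateral gap over closing speed.

    A positive closing speed means the road users are drifting toward
    each other; if they are separating, no maneuver is needed.
    """
    if lateral_closing_speed_mps <= 0.0:
        return float("inf")  # paths are not converging laterally
    return lateral_gap_m / lateral_closing_speed_mps

# Example: a 1.5 m lateral gap closing at 0.6 m/s leaves 2.5 s to react.
print(available_maneuver_time(1.5, 0.6))  # 2.5
```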