Advances in AI Ethics and Autonomous Systems

The field of AI ethics and autonomous systems is evolving rapidly, with a growing focus on systems that incorporate human values and ethical preferences. Recent research highlights the importance of the social and contextual factors that shape human decision-making, and of building systems that can adapt to those factors. One key line of work develops frameworks for ethical decision-making in autonomous vehicles that balance competing priorities such as safety, efficiency, and rule compliance. Another develops systems that negotiate and adapt to differing ethical preferences, exemplified by the RobEthiChor approach to context-aware, ethics-based negotiation.

Notable papers in this area include a human reasons-based supervision framework for automated vehicles, which detects when AV behaviour misaligns with the reasons a human would act on and prompts replanning, and a reasons-based trajectory evaluation framework that operationalises the tracking condition of Meaningful Human Control (MHC) and shows promise for assessing how well automated vehicle decisions align with human reasons.
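The general idea of reasons-based supervision can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not the method from the cited papers: it assumes each "human reason" (safety, efficiency, rule compliance) can be reduced to a weighted numeric score per candidate trajectory, picks the best-scoring trajectory, and flags a replan when even the best candidate falls below an acceptability threshold.

```python
from dataclasses import dataclass

@dataclass
class Reason:
    """A human reason the vehicle's behaviour should track (names are illustrative)."""
    name: str
    weight: float  # relative importance, assumed to sum to 1.0


def trajectory_score(features: dict[str, float], reasons: list[Reason]) -> float:
    """Weighted sum of how well a candidate trajectory serves each reason (0..1 per feature)."""
    return sum(r.weight * features.get(r.name, 0.0) for r in reasons)


def supervise(candidates: list[dict[str, float]],
              reasons: list[Reason],
              threshold: float = 0.5) -> tuple[dict[str, float], bool]:
    """Select the best candidate; request replanning if it still misaligns with the reasons."""
    best = max(candidates, key=lambda t: trajectory_score(t, reasons))
    needs_replan = trajectory_score(best, reasons) < threshold
    return best, needs_replan


# Illustrative usage with made-up feature values:
reasons = [Reason("safety", 0.6), Reason("efficiency", 0.3), Reason("rule_compliance", 0.1)]
swerve = {"safety": 0.9, "efficiency": 0.4, "rule_compliance": 1.0}   # score 0.76
speed_up = {"safety": 0.3, "efficiency": 0.9, "rule_compliance": 1.0}  # score 0.55
best, needs_replan = supervise([swerve, speed_up], reasons)
```

In this toy setup the supervisor prefers the safer trajectory and does not trigger replanning; a real framework would derive reasons and their weights from context rather than fixed constants.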

Sources

Does AI and Human Advice Mitigate Punishment for Selfish Behavior? An Experiment on AI ethics From a Psychological Perspective

What's Really Different with AI? -- A Behavior-based Perspective on System Safety for Automated Driving Systems

Cross-Border Legal Adaptation of Autonomous Vehicle Design based on Logic and Non-monotonic Reasoning

RobEthiChor: Automated Context-aware Ethics-based Negotiation for Autonomous Robots

A blessing or a burden? Exploring worker perspectives of using a social robot in a church

A Framework for Ethical Decision-Making in Automated Vehicles through Human Reasons-based Supervision

Assessing the Alignment of Automated Vehicle Decisions with Human Reasons
