The field of AI in medicine is evolving rapidly, with growing attention to both the potential benefits and the risks of AI applications in healthcare. Researchers are exploring AI as a research tool, diagnostic aid, and organizational asset, while acknowledging ethical concerns such as privacy, bias, and transparency. Developing trustworthy AI systems is becoming increasingly urgent, particularly in light of forthcoming legal requirements such as the EU AI Act. A key challenge is navigating the complex relationships among technical, evidence-based, and ethical practices. Noteworthy papers in this area include:
- A survey of hopes and fears about AI in medicine, which highlights the need for nuanced evaluation of AI's potential impact on healthcare systems and practices.
- A systematic qualitative review of ethical aspects of social robots in elderly care, which identifies a range of ethical hazards, opportunities, and unsettled questions that require careful consideration.
- A conceptualization of 'healthy distrust' in AI systems, which argues that justified skepticism is an important component of meaningful trust in AI usage practices.