The field of AI in healthcare is moving toward a more responsible and ethical approach, focused on addressing the needs of underserved populations and ensuring equitable access to healthcare services. Recent research has highlighted the potential of AI to improve healthcare outcomes in rural and low-resource settings, and has identified key barriers to implementation, including data quality concerns, infrastructural limitations, and ethical considerations. Notably, there is growing recognition of the importance of user-centered approaches to AI ethics, and of the need for regulatory frameworks that prioritize transparency, accountability, and fairness.
Some noteworthy papers in this area include the following. The User-first Approach to AI Ethics provides empirical evidence that users prioritize AI ethics principles unevenly and offers guidance for operationalizing ethics tailored to culture and context. Developing a Responsible AI Framework for Healthcare in Low Resource Countries presents a draft framework tailored to resource-constrained environments and highlights the need for localized governance structures and ethical oversight. Design and Validation of a Responsible Artificial Intelligence-based System for the Referral of Diabetic Retinopathy Patients demonstrates a robust, ethically aligned solution for diabetic retinopathy (DR) screening in clinical settings, with significant improvements in accuracy and fairness metrics.