Trustworthy AI in Healthcare

The field of artificial intelligence in healthcare is moving toward trustworthy AI systems that prioritize human agency, oversight, and transparency. Recent research focuses on operationalizing trustworthy AI in clinical settings, addressing ethical concerns, regulatory barriers, and limited user trust. One key direction is the integration of explainability and contestability principles, which let users and data subjects understand and challenge AI decisions. Another is the use of blockchain technology to ensure data integrity, security, and patient consent. Participatory AI approaches are also being explored, enabling citizens to help design differentially private AI systems for public-sector applications. Notable papers in this area include:

  • A design framework for operationalizing trustworthy AI in healthcare, which proposes a set of requirements, trade-offs, and challenges for aligning medical AI systems with trustworthy AI principles.
  • A novel blockchain-based data structure, MedBlockTree, which addresses the scalability bottleneck of blockchain-based electronic medical record (EMR) systems, achieving processing speeds significantly faster than conventional chain designs.
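The core property that makes blockchain attractive for EMR integrity is tamper evidence: each block's hash covers the previous block's hash, so editing any stored record invalidates every later link. The papers above do not publish MedBlockTree's exact structure, so the sketch below is a generic hash-chain illustration of that property, not the MedBlockTree algorithm; the function names and record fields are hypothetical.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Link records into a tamper-evident chain, starting from a genesis hash."""
    chain, prev = [], "0" * 64
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Recompute every hash; any edit to an earlier record breaks the links."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True
```

Mutating any record (e.g. a clinical note in an early block) makes `verify_chain` return `False`, which is the integrity guarantee the surveyed systems build on; scalability work such as MedBlockTree concerns how these blocks are organized and shared, which this sketch does not model.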

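The differential privacy mentioned above is most easily seen on a counting query: the true count is perturbed with Laplace noise scaled to the query's sensitivity over epsilon. This is a minimal textbook sketch of the Laplace mechanism, not code from any of the cited systems; the function names are illustrative.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) by inverse-CDF transform of a uniform draw."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a counting query (sensitivity 1) under epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means larger noise and stronger privacy; participatory approaches like those surveyed here aim to let citizens weigh in on that accuracy/privacy trade-off rather than leaving epsilon to system designers alone.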
Sources

"Two Means to an End Goal": Connecting Explainability and Contestability in the Regulation of Public Sector AI

A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption

Efficient patient-centric EMR sharing block tree

Building Trust in Healthcare with Privacy Techniques: Blockchain in the Cloud

Participatory AI, Public Sector AI, Differential Privacy, Conversational Interfaces, Explainable AI, Citizen Engagement in AI

On the Encapsulation of Medical Imaging AI Algorithms
