Transparent AI and Provenance Tracking in Healthcare

AI in healthcare is moving toward greater transparency and accountability, with growing emphasis on standardized frameworks for tracing and verifying AI models. This shift is driven by the need for approaches that remain scalable, comparable, and machine-interpretable across projects and platforms. Provenance tracking is emerging as a key component of this effort: it lets researchers and engineers analyze resource usage patterns, identify inefficiencies, and ensure reproducibility and accountability in AI development workflows. Noteworthy papers in this area include:

  • The introduction of the AI Model Passport, a digital identity and verification tool for AI models that captures essential metadata to uniquely identify, verify, trace, and monitor AI models across their lifecycle (an illustrative metadata sketch follows this list).
  • The development of yProv4ML, a framework that captures provenance information generated during machine learning processes in PROV-JSON format, with minimal code modifications (a generic PROV-JSON example also follows this list).
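
To make the "digital identity" idea concrete, here is a minimal sketch of the kind of metadata record a model passport might carry. The field names, values, and the fingerprint helper are illustrative assumptions, not the actual AI Model Passport schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass
class ModelPassport:
    """Illustrative passport fields; the real AI Model Passport schema may differ."""
    model_id: str                  # unique identifier for the model
    version: str                   # model version or release tag
    training_dataset_sha256: str   # fingerprint of the training data snapshot
    framework: str                 # training framework and version
    created_by: str                # responsible team or organization
    intended_use: str              # clinical or research context the model is cleared for


def fingerprint(path: str) -> str:
    """Hash an artifact (e.g. a dataset archive) so it can be verified later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


passport = ModelPassport(
    model_id="sepsis-risk-classifier",          # hypothetical model name
    version="1.2.0",
    training_dataset_sha256="<dataset hash>",   # would come from fingerprint(...)
    framework="pytorch-2.3",
    created_by="example-hospital-ml-team",
    intended_use="research only",
)
print(json.dumps(asdict(passport), indent=2))
```

Serializing the record to JSON is what makes it machine-interpretable and comparable across projects, which is the property the paragraph above emphasizes.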
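
For the provenance side, the sketch below shows what a small PROV-JSON document for a single training run can look like: an activity that used a dataset entity, generated a model entity, and was associated with a researcher agent. This follows the general W3C PROV-JSON layout and is not the actual output of yProv4ML; all identifiers are made up for illustration.

```python
import json

# Minimal PROV-JSON-style document for one training run (illustrative only).
prov_doc = {
    "prefix": {"ex": "http://example.org/"},
    "entity": {
        "ex:training-dataset-v3": {"prov:label": "training dataset snapshot"},
        "ex:model-checkpoint-42": {"prov:label": "trained model checkpoint"},
    },
    "activity": {
        "ex:training-run-42": {
            "prov:startTime": "2025-01-10T09:00:00",
            "prov:endTime": "2025-01-10T13:30:00",
        }
    },
    "agent": {
        "ex:researcher-1": {"prov:type": "prov:Person"},
    },
    # The training run consumed the dataset...
    "used": {
        "_:u1": {"prov:activity": "ex:training-run-42",
                 "prov:entity": "ex:training-dataset-v3"}
    },
    # ...produced the checkpoint...
    "wasGeneratedBy": {
        "_:g1": {"prov:entity": "ex:model-checkpoint-42",
                 "prov:activity": "ex:training-run-42"}
    },
    # ...and was carried out by the researcher.
    "wasAssociatedWith": {
        "_:a1": {"prov:activity": "ex:training-run-42",
                 "prov:agent": "ex:researcher-1"}
    },
}

print(json.dumps(prov_doc, indent=2))
```

Because entities, activities, and agents are linked explicitly, a record like this can answer questions such as "which dataset produced this checkpoint?" without rerunning the pipeline, which is what makes provenance useful for reproducibility and accountability.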

Sources

AI Model Passport: Data and System Traceability Framework for Transparent AI in Health

A Large-Scale Evolvable Dataset for Model Context Protocol Ecosystem and Security Analysis

Provenance Tracking in Large-Scale Machine Learning Systems

yProv4ML: Effortless Provenance Tracking for Machine Learning Systems

The hunt for research data: Development of an open-source workflow for tracking institutionally-affiliated research data publications

Evaluating Structured Output Robustness of Small Language Models for Open Attribute-Value Extraction from Clinical Notes
