The field of AI in healthcare is moving toward greater transparency and accountability, with a focus on standardized frameworks for tracing and verifying AI models. This shift is driven by the need for approaches that ensure scalability, comparability, and machine interpretability across projects and platforms. Provenance tracking is emerging as a key component of this effort: it lets researchers and engineers analyze resource usage patterns, identify inefficiencies, and ensure reproducibility and accountability in AI development workflows. Noteworthy papers in this area include:
- The introduction of the AI Model Passport, a digital identity and verification tool for AI models that captures essential metadata to uniquely identify, verify, trace, and monitor AI models across their lifecycle.
- The development of yProv4ML, a framework that captures provenance information generated during machine learning processes in the PROV-JSON format, with minimal code modifications.
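To make the PROV-JSON idea concrete, here is a minimal sketch of a provenance record for a single training run, serialized in the W3C PROV-JSON layout (`entity`, `activity`, `used`, `wasGeneratedBy`). The function name, namespace prefix, and attribute keys beyond the standard `prov:` terms are illustrative assumptions, not the actual yProv4ML API:

```python
import json
import hashlib
from datetime import datetime, timezone

def provenance_record(model_name, dataset_path, params):
    """Build a minimal PROV-JSON document for one training run.

    Illustrative sketch only: the `ex:` namespace and the attribute
    names under it are hypothetical, not the yProv4ML schema.
    """
    now = datetime.now(timezone.utc).isoformat()
    # Short run identifier derived from the model name and timestamp.
    run_id = hashlib.sha256(f"{model_name}{now}".encode()).hexdigest()[:12]
    return {
        "prefix": {"ex": "http://example.org/"},
        # Entities: the input dataset and the produced model.
        "entity": {
            "ex:dataset": {"prov:label": dataset_path},
            f"ex:model-{run_id}": {
                "prov:label": model_name,
                "ex:hyperparameters": json.dumps(params),
            },
        },
        # Activity: the training run itself.
        "activity": {
            f"ex:training-{run_id}": {"prov:startTime": now},
        },
        # Relations: training used the dataset and generated the model.
        "used": {
            "_:u1": {
                "prov:activity": f"ex:training-{run_id}",
                "prov:entity": "ex:dataset",
            },
        },
        "wasGeneratedBy": {
            "_:g1": {
                "prov:entity": f"ex:model-{run_id}",
                "prov:activity": f"ex:training-{run_id}",
            },
        },
    }

doc = provenance_record("resnet18", "data/train.csv", {"lr": 0.001, "epochs": 10})
print(json.dumps(doc, indent=2))
```

A record in this shape can be validated or visualized with standard PROV tooling, which is what makes a common serialization format valuable for comparing provenance across projects.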