Advances in Radiology Report Generation and Interpretability

Radiology report generation is moving toward more explainable and transparent models. Recent work incorporates chain-of-thought reasoning and reinforcement learning to improve the accuracy and reliability of generated reports, and models that mimic the radiologist's workflow while producing verifiable evidence are gaining traction. Noteworthy papers include BoxMed-RL, a unified training framework for generating spatially verifiable and explainable radiology reports, and CBM-RAG, an automated report generation framework that combines Concept Bottleneck Models with a multi-agent Retrieval-Augmented Generation system; sketches of both ideas follow below. Also notable are ChestX-Reasoner, which mines process supervision directly from clinical reports to strengthen reasoning, and IP-CRR, an interpretable-by-design framework for classifying chest radiology reports.
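The "spatially verifiable" angle is easiest to see as a reward signal: during reinforcement-learning fine-tuning, a generated finding earns credit only if its reported location overlaps a reference annotation. The sketch below is a minimal illustration of that idea under stated assumptions; the `Finding` structure, the label-matching scheme, and the mean-IoU aggregation are hypothetical choices for illustration, not BoxMed-RL's published reward.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    label: str                               # e.g. "opacity" (hypothetical vocabulary)
    box: tuple[float, float, float, float]   # (x1, y1, x2, y2) in pixel coordinates

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def spatial_reward(predicted, reference):
    """Mean best-match IoU: each reference finding is credited by the
    closest predicted box with the same label; misses score zero."""
    if not reference:
        return 1.0 if not predicted else 0.0
    scores = []
    for ref in reference:
        candidates = [iou(p.box, ref.box) for p in predicted if p.label == ref.label]
        scores.append(max(candidates, default=0.0))
    return sum(scores) / len(scores)

# One partially localized finding, one missed finding -> reward ~0.31
pred = [Finding("opacity", (100, 120, 180, 200))]
ref = [Finding("opacity", (110, 130, 190, 210)),
       Finding("effusion", (20, 300, 90, 380))]
print(f"reward = {spatial_reward(pred, ref):.3f}")
```

CBM-RAG's interpretability, in turn, comes from the concept-bottleneck layer: image features must pass through a small set of human-readable concept scores before any downstream prediction is made, so every output can be traced back to named findings. Below is a minimal, generic concept bottleneck head in PyTorch; the concept list, dimensions, and loss weighting are illustrative assumptions, not CBM-RAG's actual architecture.

```python
import torch
import torch.nn as nn

CONCEPTS = ["cardiomegaly", "pleural effusion", "lung opacity", "pneumothorax"]  # illustrative

class ConceptBottleneckHead(nn.Module):
    """Image features -> interpretable concept scores -> task labels."""
    def __init__(self, feat_dim: int = 512, num_labels: int = 2):
        super().__init__()
        self.to_concepts = nn.Linear(feat_dim, len(CONCEPTS))  # the bottleneck
        self.to_labels = nn.Linear(len(CONCEPTS), num_labels)  # sees concepts only

    def forward(self, feats: torch.Tensor):
        concept_probs = torch.sigmoid(self.to_concepts(feats))
        label_logits = self.to_labels(concept_probs)
        return concept_probs, label_logits

# Joint training: supervise the bottleneck with concept annotations and
# the head with task labels; the 0.5 weight is an arbitrary balance term.
head = ConceptBottleneckHead()
feats = torch.randn(8, 512)  # stand-in for an image encoder's output
concept_targets = torch.randint(0, 2, (8, len(CONCEPTS))).float()
label_targets = torch.randint(0, 2, (8,))
concept_probs, label_logits = head(feats)
loss = (nn.functional.binary_cross_entropy(concept_probs, concept_targets)
        + 0.5 * nn.functional.cross_entropy(label_logits, label_targets))
loss.backward()
```

Because the label head consumes only the concept scores, intervening on a single concept (e.g. zeroing a false-positive finding) changes the prediction in a directly inspectable way, which is the core appeal of bottleneck designs for clinical use.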

Sources

Reason Like a Radiologist: Chain-of-Thought and Reinforcement Learning for Verifiable Report Generation

CBM-RAG: Demonstrating Enhanced Interpretability in Radiology Report Generation with Multi-Agent RAG and Concept Bottleneck Models

ChestX-Reasoner: Advancing Radiology Foundation Models with Reasoning through Step-by-Step Verification

IP-CRR: Information Pursuit for Interpretable Classification of Chest Radiology Reports

ReXGradient-160K: A Large-Scale Publicly Available Dataset of Chest Radiographs with Free-text Reports

Machine Learning Meets Transparency in Osteoporosis Risk Assessment: A Comparative Study of ML and Explainability Analysis

T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT
