Advancements in Medical AI: Multimodal Integration and Explainability

The field of medical AI is advancing rapidly, with a growing focus on multimodal integration and explainability. Recent work introduces frameworks and models that combine multiple data sources, such as images, text, and sensor readings, to improve diagnostic accuracy and clinical decision-making. Large language models (LLMs) have become increasingly prevalent, with applications spanning medical question answering, disease diagnosis, and patient risk prediction. There is also rising emphasis on transparent and explainable AI systems, with techniques such as attention mechanisms and feature attribution being explored to provide insight into model decision-making. Together, these advances have the potential to improve patient outcomes significantly and transform the field of medical AI. Noteworthy papers include MedAtlas, which introduces a benchmark framework for evaluating LLMs on realistic multi-round, multi-task medical reasoning, and HeteroRAG, which presents a heterogeneous retrieval-augmented generation framework for medical vision-language tasks.
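To make the explainability point concrete, the following is a minimal sketch of occlusion-based feature attribution, one of the technique families mentioned above. The model here is a hypothetical toy linear risk scorer with made-up weights, standing in for a real clinical model; none of the names or values come from the papers listed below.

```python
def risk_score(features):
    # Hypothetical linear risk model; weights are illustrative only.
    weights = {"age": 0.03, "bp": 0.02, "hr": 0.01}
    return sum(weights[name] * value for name, value in features.items())

def occlusion_attribution(model, features, baseline=0.0):
    """Attribute a model's output to each input feature by replacing
    the feature with a baseline value and measuring the score drop."""
    full_score = model(features)
    attributions = {}
    for name in features:
        occluded = dict(features, **{name: baseline})
        attributions[name] = full_score - model(occluded)
    return attributions

# Example: which features drive this patient's predicted risk?
patient = {"age": 70, "bp": 140, "hr": 80}
print(occlusion_attribution(risk_score, patient))
```

The same occlude-and-compare idea extends to images (masking patches) and clinical text (dropping tokens), which is one way multimodal medical systems surface the evidence behind a prediction.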

Sources

MedAtlas: Evaluating LLMs for Multi-Round, Multi-Task Medical Reasoning Across Diverse Imaging Modalities and Clinical Text

Cross-Granularity Hypergraph Retrieval-Augmented Generation for Multi-hop Question Answering

Ontology-Guided Query Expansion for Biomedical Document Retrieval using Large Language Models

QuarkMed Medical Foundation Model Technical Report

Q-FSRU: Quantum-Augmented Frequency-Spectral Fusion for Medical Visual Question Answering

MedKGent: A Large Language Model Agent Framework for Constructing Temporally Evolving Medical Knowledge Graph

Extracting Post-Acute Sequelae of SARS-CoV-2 Infection Symptoms from Clinical Notes via Hybrid Natural Language Processing

Standardization of Neuromuscular Reflex Analysis -- Role of Fine-Tuned Vision-Language Model Consortium and OpenAI gpt-oss Reasoning LLM Enabled Decision Support System

HeteroRAG: A Heterogeneous Retrieval-Augmented Generation Framework for Medical Vision Language Tasks

Breaking Reward Collapse: Adaptive Reinforcement for Open-ended Medical Reasoning with Enhanced Semantic Discrimination

A Language-Signal-Vision Multimodal Framework for Multitask Cardiac Analysis

CardAIc-Agents: A Multimodal Framework with Hierarchical Adaptation for Cardiac Care Support

ProMed: Shapley Information Gain Guided Reinforcement Learning for Proactive Medical LLMs

Toward Better EHR Reasoning in LLMs: Reinforcement Learning with Expert Attention Guidance

A Comparative Study of Decoding Strategies in Medical Text Generation

Interactive Query Answering on Knowledge Graphs with Soft Entity Constraints

EEG-MedRAG: Enhancing EEG-based Clinical Decision-Making via Hierarchical Hypergraph Retrieval-Augmented Generation

Expertise-aware Multi-LLM Recruitment and Collaboration for Medical Decision-Making

A Multi-Agent Approach to Neurological Clinical Reasoning

ShizhenGPT: Towards Multimodal LLMs for Traditional Chinese Medicine

Evaluating Retrieval-Augmented Generation vs. Long-Context Input for Clinical Reasoning over EHRs

MedReseacher-R1: Expert-Level Medical Deep Researcher via A Knowledge-Informed Trajectory Synthesis Framework

Cohort-Aware Agents for Individualized Lung Cancer Risk Prediction Using a Retrieval-Augmented Model Selection Framework

SurgWound-Bench: A Benchmark for Surgical Wound Diagnosis

DiagECG: An LLM-Driven Framework for Diagnostic Reasoning via Discretized ECG Tokenization

End-to-End Agentic RAG System Training for Traceable Diagnostic Reasoning
