Advancements in Clinical Decision Support with Large Language Models

The field of clinical decision support is seeing significant advances through the integration of large language models (LLMs). Recent work focuses on improving the diagnostic accuracy and clinical reasoning ability of LLMs, particularly in complex clinical scenarios. Researchers are exploring approaches that align model attention with structured clinical reasoning, yielding more interpretable and reliable AI diagnostic systems. LLMs are also being investigated for automated pre-consultation questionnaire generation, where they show strong performance in information coverage and diagnostic relevance. In addition, systematic reviews are highlighting the importance of developing LLMs explicitly designed for medical reasoning, with a focus on both training-time strategies and test-time mechanisms.

Noteworthy papers in this area include:

- Integrating clinical reasoning into large language model-based diagnosis through etiology-aware attention steering: improves diagnostic accuracy by 15.65% and boosts the average Reasoning Focus Score by 31.6% over baselines (a conceptual sketch of attention steering follows this list).
- From EMR Data to Clinical Insight: An LLM-Driven Framework for Automated Pre-Consultation Questionnaire Generation: overcomes the limitations of direct generation methods by building explicit clinical knowledge, and demonstrates superior performance in information coverage and diagnostic relevance.
- Medical Reasoning in the Era of LLMs: A Systematic Review of Enhancement Techniques and Applications: provides the first systematic review of this emerging field and proposes a taxonomy of reasoning enhancement techniques.
- Large Language Model's Multi-Capability Alignment in Biomedical Domain: achieves state-of-the-art results in domain expertise, reasoning, instruction following, and integration, with theoretical safety guarantees and gains in real-world deployment.
- Are Large Language Models Dynamic Treatment Planners?: evaluates open-source LLMs as dynamic insulin dosing agents and finds that carefully designed zero-shot prompts enable smaller LLMs to match or exceed the clinical performance of extensively trained SRAs (see the prompting sketch below).
- Iterative Learning of Computable Phenotypes for Treatment Resistant Hypertension using Large Language Models: investigates whether LLMs can generate accurate and concise computable phenotypes and finds that LLMs, coupled with iterative learning, can produce interpretable and reasonably accurate programs (see the iterative-refinement sketch below).
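The attention-steering idea can be illustrated with a minimal sketch: tokens flagged as etiology-relevant receive an additive bias in the attention logits before the softmax, shifting attention mass toward clinically salient evidence. The bias value, the way tokens are flagged, and the function name below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def steer_attention(attn_logits, etiology_mask, bias=2.0):
    """Additively bias attention logits toward etiology-relevant tokens.

    attn_logits   : (num_queries, num_keys) raw attention scores
    etiology_mask : (num_keys,) boolean array, True for tokens flagged as
                    etiology-relevant (e.g., matched against a clinical lexicon)
    bias          : illustrative scalar added to the logits of flagged tokens
    """
    steered = attn_logits + bias * etiology_mask.astype(attn_logits.dtype)
    # Softmax over keys so each query's attention still sums to 1.
    steered = steered - steered.max(axis=-1, keepdims=True)
    weights = np.exp(steered)
    return weights / weights.sum(axis=-1, keepdims=True)

# Toy example: one query over four tokens, where token 2 is etiology-relevant.
logits = np.array([[0.5, 0.1, 0.4, 0.2]])
mask = np.array([False, False, True, False])
print(steer_attention(logits, mask))  # attention mass shifts toward token 2
```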
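For the dynamic treatment planning result, a minimal sketch of prompting an LLM zero-shot as an insulin dosing agent inside a simulated loop is shown below. The prompt wording, glucose targets, dose limits, and the `query_llm` placeholder are assumptions for illustration, not the study's actual protocol.

```python
def build_dosing_prompt(glucose_mg_dl, last_dose_units, trend):
    """Compose a zero-shot prompt asking an LLM for the next insulin dose.

    The clinical framing here is illustrative; a real system would encode the
    simulator's full state and validated safety constraints.
    """
    return (
        "You are assisting with in-silico insulin dosing for a simulated "
        "type-1 diabetes patient. Target glucose range: 70-180 mg/dL.\n"
        f"Current glucose: {glucose_mg_dl} mg/dL (trend: {trend}).\n"
        f"Previous dose: {last_dose_units} units.\n"
        "Reply with a single number: the next dose in units (0 if no insulin "
        "should be given). Do not add any other text."
    )

def parse_dose(reply, max_units=10.0):
    """Extract a dose from the model reply and clamp it to a safe range."""
    try:
        dose = float(reply.strip().split()[0])
    except (ValueError, IndexError):
        dose = 0.0  # fall back to no insulin if the reply is unparseable
    return min(max(dose, 0.0), max_units)

# Usage with a hypothetical LLM client (query_llm is a placeholder, not a real API):
#   prompt = build_dosing_prompt(glucose_mg_dl=210, last_dose_units=1.5, trend="rising")
#   dose = parse_dose(query_llm(prompt))
```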
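The iterative learning of computable phenotypes can likewise be sketched as a loop: ask an LLM to emit a small rule-based program, score it against labeled chart-review cases, and feed the misclassifications back into the next prompt. The scoring metric, prompt text, and `query_llm` placeholder below are assumptions, not the paper's exact pipeline.

```python
def score_phenotype(program_source, labeled_records):
    """Execute a candidate phenotype program against labeled records.

    program_source is expected to define `phenotype(record) -> bool`.
    Returns (accuracy, misclassified_records). Running LLM-generated code
    should be sandboxed in practice; exec() here is for illustration only.
    """
    namespace = {}
    exec(program_source, namespace)  # illustration only; sandbox in practice
    phenotype = namespace["phenotype"]
    errors = [rec for rec, label in labeled_records if phenotype(rec) != label]
    accuracy = 1 - len(errors) / len(labeled_records)
    return accuracy, errors

def refine_phenotype(query_llm, labeled_records, rounds=3):
    """Iteratively ask the LLM for a better phenotype using error feedback."""
    prompt = ("Write a Python function phenotype(record) -> bool that flags "
              "treatment-resistant hypertension from a structured EMR record.")
    best_source, best_acc = None, -1.0
    for _ in range(rounds):
        source = query_llm(prompt)
        accuracy, errors = score_phenotype(source, labeled_records)
        if accuracy > best_acc:
            best_source, best_acc = source, accuracy
        prompt += (f"\nYour last attempt scored {accuracy:.2f} accuracy and "
                   f"misclassified {len(errors)} records; revise the rules.")
    return best_source, best_acc
```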

Sources

Integrating clinical reasoning into large language model-based diagnosis through etiology-aware attention steering

From EMR Data to Clinical Insight: An LLM-Driven Framework for Automated Pre-Consultation Questionnaire Generation

Medical Reasoning in the Era of LLMs: A Systematic Review of Enhancement Techniques and Applications

Reinforcement Learning for Target Zone Blood Glucose Control

Large Language Model's Multi-Capability Alignment in Biomedical Domain

Are Large Language Models Dynamic Treatment Planners? An In Silico Study from a Prior Knowledge Injection Angle

Iterative Learning of Computable Phenotypes for Treatment Resistant Hypertension using Large Language Models
