Biomedical research is shifting rapidly toward the adoption of large language models (LLMs) for applications including disease prediction, causal inference, and question answering. Recent studies have demonstrated the potential of LLMs to predict cardiac diseases and to identify genetic patterns associated with cardiac conditions. LLMs have also been used to automate confounder discovery and subgroup analysis in causal inference, improving the robustness of treatment-effect estimation. Novel benchmark datasets such as HealthBranches now enable evaluation of LLMs' multi-step inference capabilities. Noteworthy papers include LLM-BI, a conceptual pipeline for automating Bayesian workflows, and Semantic Bridge, a universal framework for controllably generating sophisticated multi-hop reasoning questions. Finally, the Knowledge-Reasoning Dissociation study highlights fundamental limitations of LLMs in clinical natural language inference, revealing a dissociation between their knowledge and their reasoning capabilities.
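To make the Bayesian-workflow idea concrete: the steps a pipeline like LLM-BI would automate (prior specification, likelihood choice, posterior update) can be sketched with a minimal conjugate Beta-Binomial update for a clinical event rate. This is an illustrative sketch only, not LLM-BI's actual implementation; all names and numbers here are hypothetical.

```python
# Hypothetical sketch of one step in a Bayesian workflow:
# a conjugate Beta-Binomial update for an event rate
# (e.g., prevalence of a cardiac condition in a cohort).

def beta_binomial_update(alpha_prior, beta_prior, successes, trials):
    """Return posterior Beta(alpha, beta) parameters after observing
    `successes` events in `trials` patients."""
    return alpha_prior + successes, beta_prior + (trials - successes)

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Weakly informative prior Beta(1, 1); observe 12 events in 100 patients.
a, b = beta_binomial_update(1.0, 1.0, successes=12, trials=100)
print(posterior_mean(a, b))  # posterior mean of the event rate
```

An automated pipeline would additionally choose the prior and likelihood from a problem description and run diagnostics, but the core posterior-update arithmetic is as above.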