Ophthalmic AI and medical reasoning are advancing rapidly, driven by new models and datasets that tackle long-standing challenges. One key research direction is the creation of large-scale synthetic datasets, which sidestep patient-privacy constraints and the high cost of expert clinical annotation. Another is the development of multimodal reasoning models that integrate heterogeneous clinical information with medical imaging, enabling more accurate and comprehensive diagnosis. These models are designed to emulate realistic clinical thinking patterns and to provide transparent, interpretable insights, a prerequisite for dependable AI-assisted diagnosis.

Noteworthy papers in this regard include:

- Automated Multi-label Classification of Eleven Retinal Diseases: established a foundational performance benchmark on a large synthetic dataset and demonstrated strong generalization to real-world clinical datasets.
- Bridging the Gap in Ophthalmic AI: introduced a novel ophthalmic multimodal dataset and a model that achieves state-of-the-art performance on both basic and complex reasoning tasks.
- MedGR^2: presented a generative reward learning framework that breaks the data barrier for medical reasoning and achieves state-of-the-art cross-modality and cross-task generalization.
- PathMR: proposed a cell-level multimodal visual reasoning framework for pathological image analysis that delivers the transparent, interpretable insights needed for dependable AI-assisted pathology.
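As context for the multi-label setting these benchmarks use: unlike single-label classification, a single fundus image may carry several disease labels at once, so a model emits one independent probability per disease (typically via a per-label sigmoid) rather than a single softmax choice. A minimal sketch, with illustrative label names and threshold that are not taken from the papers:

```python
import math

# Hypothetical disease labels for illustration; the actual
# eleven-disease list from the benchmark is not reproduced here.
DISEASES = ["diabetic_retinopathy", "glaucoma", "amd", "cataract"]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict_multilabel(logits, threshold=0.5):
    """Map raw per-disease logits to independent present/absent calls.

    Each label is thresholded on its own, so any subset of diseases
    can be predicted simultaneously for one image.
    """
    return {
        disease: sigmoid(logit) >= threshold
        for disease, logit in zip(DISEASES, logits)
    }

# Positive logits map above the 0.5 threshold, negative ones below.
preds = predict_multilabel([2.0, -1.5, 0.3, -3.0])
```

Here `preds` flags diabetic retinopathy and AMD as present while leaving glaucoma and cataract absent, illustrating how several findings can co-occur in one prediction.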