Advances in Dense Representation Learning and Medical Imaging

The fields of computer vision and medical imaging are advancing rapidly, with a focus on developing methods for dense representation learning and improving medical image analysis. Recent research has explored cross-domain feature transfer, weakly-supervised learning, and self-supervised learning to enhance the accuracy and robustness of medical image analysis. There is also growing interest in methods that capture spatially relevant semantics in medical 3D imaging and establish dense correspondences across image pairs. Noteworthy papers in this area include TRELLIS-Enhanced Surface Features for Comprehensive Intracranial Aneurysm Analysis, which proposes a cross-domain feature-transfer approach to augment neural networks for aneurysm analysis, and PathoHR: Hierarchical Reasoning for Vision-Language Models in Pathology, which introduces a benchmark and training scheme for evaluating and improving vision-language models' hierarchical semantic understanding and compositional reasoning in the pathology domain.
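
To make the idea of dense correspondence concrete, the sketch below matches every patch feature in one image to its nearest neighbour in another via cosine similarity. It is a minimal illustration, not the method of any cited paper; the feature dimensions, patch grid, and the use of a PyTorch-style feature map are assumptions for the example.

```python
# Minimal sketch (illustrative only): dense correspondences between two images
# by nearest-neighbour matching of L2-normalized per-patch features.
import torch
import torch.nn.functional as F

def dense_correspondences(feat_a: torch.Tensor, feat_b: torch.Tensor):
    """Match each patch feature of image A to its most similar patch in image B.

    feat_a, feat_b: (C, H, W) dense feature maps, e.g. from a self-supervised
    backbone (assumed shape). Returns (H, W) row and column indices into B.
    """
    C, H, W = feat_a.shape
    a = F.normalize(feat_a.reshape(C, -1), dim=0)   # (C, H*W) unit-norm features of A
    b = F.normalize(feat_b.reshape(C, -1), dim=0)   # (C, H*W) unit-norm features of B
    sim = a.t() @ b                                 # (H*W, H*W) cosine similarities
    best = sim.argmax(dim=1)                        # nearest neighbour in B for each patch of A
    rows = torch.div(best, W, rounding_mode="floor")
    cols = best % W
    return rows.reshape(H, W), cols.reshape(H, W)

if __name__ == "__main__":
    # Toy usage with random tensors standing in for real backbone features.
    fa, fb = torch.randn(64, 16, 16), torch.randn(64, 16, 16)
    r, c = dense_correspondences(fa, fb)
    print(r.shape, c.shape)  # torch.Size([16, 16]) torch.Size([16, 16])
```

In practice, coarse-to-fine or multi-hypothesis matching (as in the papers listed below) refines such nearest-neighbour estimates rather than relying on a single argmax.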

Sources

TRELLIS-Enhanced Surface Features for Comprehensive Intracranial Aneurysm Analysis

Weakly-Supervised Learning of Dense Functional Correspondences

Patch-level Kernel Alignment for Self-Supervised Dense Representation Learning

Spatial-Aware Self-Supervision for Medical 3D Imaging with Multi-Granularity Observable Tasks

PathoHR: Hierarchical Reasoning for Vision-Language Models in Pathology

Back To The Drawing Board: Rethinking Scene-Level Sketch-Based Image Retrieval

Intraoperative 2D/3D Registration via Spherical Similarity Learning and Inference-Time Differentiable Levenberg-Marquardt Optimization

Handling Multiple Hypotheses in Coarse-to-Fine Dense Image Matching
