Geometric Advances in Representation Learning

The field of representation learning is shifting toward geometric approaches that capture complex relationships and structure in data. Researchers are exploring Euclidean, hyperbolic, and spherical spaces to better represent and align modalities such as text, images, and graphs, driven by the need for greater model interpretability, transferability, and robustness. Notable papers in this area include:

Steering Embedding Models with Geometric Rotation, which represents semantic transformations as consistent rotational operations in embedding space (see the Procrustes sketch after this list).

Topological Alignment of Shared Vision-Language Embedding Space, which proposes a topology-aware framework that aligns embedding spaces under topology-preserving constraints.

Combining Euclidean and Hyperbolic Representations for Node-level Anomaly Detection, which jointly leverages Euclidean and hyperbolic graph neural networks to capture complementary aspects of node representations (a fusion sketch follows below).

GraphShaper: Geometry-aware Alignment for Improving Transfer Learning in Text-Attributed Graphs, which employs expert networks tailored to different geometric spaces to adaptively integrate geometric properties.

Can Representation Gaps Be the Key to Enhancing Robustness in Graph-Text Alignment?, which proposes a gap-aware alignment framework that preserves representation gaps as geometric necessities for maintaining modality-specific knowledge.

H4G: Unlocking Faithful Inference for Zero-Shot Graph Learning in Hyperbolic Space, which systematically reduces embedding radii to restore access to fine-grained patterns (see the radius-shrinking sketch below).

OS-HGAdapter: Open Semantic Hypergraph Adapter for Large Language Models Assisted Entropy-Enhanced Image-Text Alignment, which uses open semantic knowledge to close the entropy gap between modalities and approach human-level alignment ability.

When Embedding Models Meet: Procrustes Bounds and Applications, which studies when two sets of embeddings can be aligned by an orthogonal transformation and gives a tight bound on the alignment error.
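The rotation-steering and Procrustes-bounds papers both revolve around the classical orthogonal Procrustes problem: find the orthogonal transformation that best maps one set of embeddings onto another. Below is a minimal NumPy sketch using the standard SVD solution, assuming paired embeddings; the papers' actual pipelines differ in scope and detail.

```python
import numpy as np

def procrustes_align(X: np.ndarray, Y: np.ndarray) -> tuple[np.ndarray, float]:
    """Find the orthogonal matrix R minimizing ||X @ R - Y||_F.

    X, Y: (n, d) arrays of paired embeddings from two models or languages.
    Returns the aligning transformation R and the residual alignment error.
    """
    # Classical solution: SVD of the cross-covariance matrix X^T Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt  # orthogonal by construction
    error = np.linalg.norm(X @ R - Y, ord="fro")
    return R, error

# Toy usage: recover a known rotation applied to random embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # a ground-truth orthogonal map
Y = X @ Q
R, err = procrustes_align(X, Y)
print(f"residual error: {err:.2e}")  # ~0: the transformation is recovered
```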
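For the combined Euclidean-and-hyperbolic representation, one simple way to obtain both views of a node is to lift a Euclidean embedding into the Poincaré ball and concatenate the two. The exponential map at the origin and plain concatenation here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def expmap0(x: np.ndarray, c: float = 1.0) -> np.ndarray:
    """Exponential map at the origin of the Poincare ball with curvature -c."""
    norm = np.maximum(np.linalg.norm(x, axis=-1, keepdims=True), 1e-12)
    return np.tanh(np.sqrt(c) * norm) * x / (np.sqrt(c) * norm)

def fuse_views(z_euc: np.ndarray) -> np.ndarray:
    """Concatenate a Euclidean node embedding with its hyperbolic image.

    The Euclidean view preserves homophilous/grid-like structure, while the
    hyperbolic view emphasizes hierarchy; a downstream anomaly scorer can
    weigh whichever view best separates a node from its neighborhood.
    """
    z_hyp = expmap0(z_euc)
    return np.concatenate([z_euc, z_hyp], axis=-1)

z = np.random.default_rng(1).normal(size=(5, 8))  # 5 nodes, 8-dim embeddings
print(fuse_views(z).shape)  # (5, 16)
```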
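H4G's radius reduction can be pictured in the Poincaré ball: points near the boundary have exploding pairwise distances, which washes out fine-grained neighborhood structure, so pulling them toward the origin along geodesics restores local contrast. A hedged sketch follows, where the uniform shrink factor alpha and the choice of manifold are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def poincare_dist(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance in the unit Poincare ball."""
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u**2)) * (1 - np.sum(v**2))
    return np.arccosh(1 + 2 * sq / denom)

def shrink_radii(Z: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Rescale hyperbolic embeddings toward the origin along geodesics.

    The hyperbolic norm of x is 2 * artanh(||x||); scaling it by alpha < 1
    and mapping back moves each point along its geodesic ray to the origin.
    """
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    new_norms = np.tanh(alpha * np.arctanh(np.clip(norms, 0, 1 - 1e-7)))
    return Z / np.maximum(norms, 1e-12) * new_norms

Z = np.array([[0.95, 0.0], [0.0, 0.95]])      # two near-boundary embeddings
print(poincare_dist(Z[0], Z[1]))               # large distance (~6.6)
Z_small = shrink_radii(Z, alpha=0.5)
print(poincare_dist(Z_small[0], Z_small[1]))   # reduced (~3.0), finer-scale geometry
```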

Sources

Steering Embedding Models with Geometric Rotation: Mapping Semantic Relationships Across Languages and Models

Topological Alignment of Shared Vision-Language Embedding Space

Combining Euclidean and Hyperbolic Representations for Node-level Anomaly Detection

GraphShaper: Geometry-aware Alignment for Improving Transfer Learning in Text-Attributed Graphs

Can Representation Gaps Be the Key to Enhancing Robustness in Graph-Text Alignment?

H4G: Unlocking Faithful Inference for Zero-Shot Graph Learning in Hyperbolic Space

OS-HGAdapter: Open Semantic Hypergraph Adapter for Large Language Models Assisted Entropy-Enhanced Image-Text Alignment

When Embedding Models Meet: Procrustes Bounds and Applications
