Advances in Disentangled Representation Learning and Biometric Analysis

The field of disentangled representation learning and biometric analysis is seeing significant developments, with a focus on improving the robustness and scalability of models. Researchers are exploring new approaches to disentangling attributes and objects and to detecting visual attribute reliance in vision models. There is also growing interest in multi-modal object re-identification and biometric analysis, with an emphasis on developing flexible, effective frameworks for any-to-any image-level re-identification. Noteworthy papers in this area include Disentangled Representation Learning via Modular Compositional Bias, which proposes a compositional bias for disentangled representation learning, and GSAlign, which introduces a geometric and semantic alignment network for aerial-ground person re-identification. Other notable works include MDReID, which presents a modality-decoupled learning framework for any-to-any multi-modal object re-identification, and ConceptScope, which provides a scalable, automated framework for analyzing visual datasets by discovering and quantifying human-interpretable concepts.

Sources

Disentangled Representation Learning via Modular Compositional Bias

Automated Detection of Visual Attribute Reliance with a Self-Reflective Agent

GSAlign: Geometric and Semantic Alignment Network for Aerial-Ground Person Re-Identification

Bi-Encoder Contrastive Learning for Fingerprint and Iris Biometrics

MDReID: Modality-Decoupled Learning for Any-to-Any Multi-Modal Object Re-Identification

ConceptScope: Characterizing Dataset Bias via Disentangled Visual Concepts

Exploring Complementarity and Explainability in CNNs for Periocular Verification Across Acquisition Distances

Leveraging Large-Scale Face Datasets for Deep Periocular Recognition via Ocular Cropping
