The field of multimodal representation learning and retrieval is advancing rapidly, with a focus on more efficient and effective methods for aligning and retrieving data across modalities. Recent work emphasizes representation alignment, using techniques such as linear transformations and attention-based mechanisms to bridge the gap between modalities, and there is a growing trend toward unified, scalable models that handle multiple modalities and tasks simultaneously. Notable papers in this area include mini-vec2vec, which presents a simple and efficient alternative to the original vec2vec method for aligning text embedding spaces without parallel data; Omni-Embed-Nemotron, which introduces a unified multimodal retrieval embedding model covering text, image, audio, and video; Guided Query Refinement, which proposes a test-time optimization method that refines a primary retriever's query embedding using guidance from a complementary retriever's scores; and Efficient Discriminative Joint Encoders, which precomputes vision tokens offline and compresses them with a lightweight attention-based adapter to enable high-throughput inference. Illustrative sketches of these three ideas follow.
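
mini-vec2vec and related alignment work rely on learning simple linear maps between embedding spaces. The sketch below is not the paper's algorithm; it only illustrates the underlying linear-alignment primitive (orthogonal Procrustes) and assumes a set of pseudo-paired embeddings `X` and `Y` has already been obtained by some matching step.

```python
import numpy as np

def procrustes_align(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Return an orthogonal matrix W mapping rows of X onto rows of Y.

    X, Y: (n, d) arrays of (pseudo-)paired embeddings from two spaces.
    Solves min_W ||X @ W - Y||_F subject to W^T W = I (orthogonal Procrustes).
    """
    # The SVD of the cross-covariance gives the closed-form rotation.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy usage: two 5-dim spaces related by an unknown rotation plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
R_true, _ = np.linalg.qr(rng.normal(size=(5, 5)))      # hidden rotation
Y = X @ R_true + 0.01 * rng.normal(size=(1000, 5))     # noisy "other" space
W = procrustes_align(X, Y)
print(np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))   # small residual
```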
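
Guided Query Refinement is described as test-time optimization of the primary retriever's query embedding under guidance from a complementary retriever's scores. The objective below is an assumption made purely for illustration: it pulls the primary score distribution toward the complementary one (temperature-scaled softmax plus KL) while anchoring the refined query to the original with an L2 term.

```python
import torch
import torch.nn.functional as F

def refine_query(q, doc_embs, comp_scores, steps=20, lr=0.05, tau=0.1, lam=1.0):
    """Test-time query refinement (illustrative formulation, not the paper's).

    q:           (d,) primary retriever's query embedding
    doc_embs:    (n, d) primary embeddings of the candidate documents
    comp_scores: (n,) scores for the same candidates from a complementary retriever
    """
    q0 = q.detach()
    q = q.clone().requires_grad_(True)
    target = F.softmax(comp_scores / tau, dim=0)          # guidance distribution
    opt = torch.optim.Adam([q], lr=lr)
    for _ in range(steps):
        scores = doc_embs @ q                              # primary similarities
        log_p = F.log_softmax(scores / tau, dim=0)
        # KL toward the complementary distribution + L2 anchor to the original query.
        loss = F.kl_div(log_p, target, reduction="sum") + lam * (q - q0).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return q.detach()

# Toy usage with random candidates.
d, n = 64, 100
q_refined = refine_query(torch.randn(d),
                         F.normalize(torch.randn(n, d), dim=-1),
                         torch.randn(n))
```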
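
For Efficient Discriminative Joint Encoders, the summary mentions offline-precomputed vision tokens compressed by a lightweight attention-based adapter. One generic way to realize such an adapter (not necessarily the paper's design) is cross-attention pooling with a small set of learned queries, as sketched here.

```python
import torch
import torch.nn as nn

class TokenCompressor(nn.Module):
    """Compress precomputed vision tokens into a fixed small set of tokens
    via cross-attention with learned queries (Perceiver-style pooling)."""

    def __init__(self, dim: int, num_queries: int = 16, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vision_tokens: torch.Tensor) -> torch.Tensor:
        # vision_tokens: (batch, n_tokens, dim), e.g. loaded from an offline cache.
        b = vision_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        compressed, _ = self.attn(q, vision_tokens, vision_tokens)
        return self.norm(compressed)                       # (batch, num_queries, dim)

# Toy usage: 196 patch tokens compressed to 16 before joint encoding.
tokens = torch.randn(4, 196, 768)
print(TokenCompressor(dim=768)(tokens).shape)              # torch.Size([4, 16, 768])
```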