The field of multimodal representation learning is moving toward more efficient and effective models that can handle diverse types of data. Recent research has focused on architectures and training strategies that improve the performance of multimodal large language models (MLLMs) across a range of tasks. One notable direction is the use of vision-centric activation and coordination techniques to optimize MLLM representations, which has yielded significant gains in visual comprehension. Another is the development of tuning-free methods that absorb math reasoning abilities from large language models (LLMs), with the potential to strengthen the math reasoning performance of MLLMs. There is also growing interest in programmatic representation learning with language models, which aims to learn interpretable representations that can be readily inspected and understood.

Noteworthy papers in this area include:

- COCO-Tree, which augments VLM outputs with carefully designed neurosymbolic concept trees learned from LLMs to improve the VLM's linguistic reasoning.
- CompoDistill, which proposes a knowledge distillation (KD) framework that explicitly aligns the student's visual attention with the teacher's to strengthen the student's visual perception abilities (a simplified attention-alignment sketch follows this list).
- IP-Merging, which proposes a tuning-free approach that enhances the math reasoning ability of MLLMs directly from Math LLMs without compromising their other capabilities (a generic parameter-merging sketch also appears below).
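
To make the attention-alignment idea concrete, here is a minimal, hypothetical sketch of a distillation loss that penalizes the divergence between a student's and a teacher's attention maps over visual tokens. This is not CompoDistill's actual implementation; the tensor shapes, the masking scheme, and the choice of KL divergence are illustrative assumptions.

```python
# Hypothetical sketch: aligning a student's visual attention with a teacher's.
# Assumes both models expose attention weights of shape
# (batch, heads, query_len, key_len) over the same visual token positions.
import torch

def attention_alignment_loss(student_attn: torch.Tensor,
                             teacher_attn: torch.Tensor,
                             visual_token_mask: torch.Tensor,
                             eps: float = 1e-8) -> torch.Tensor:
    """KL divergence between teacher and student attention over visual tokens."""
    # Restrict attention to visual token positions (eps keeps logs well-defined),
    # then renormalize so each query's attention over visual tokens sums to 1.
    mask = visual_token_mask[None, None, None, :]   # broadcast over batch/heads/queries
    s = (student_attn * mask).clamp_min(eps)
    t = (teacher_attn * mask).clamp_min(eps)
    s = s / s.sum(dim=-1, keepdim=True)
    t = t / t.sum(dim=-1, keepdim=True)
    # KL(teacher || student), averaged over batch, heads, and queries.
    return (t * (t.log() - s.log())).sum(dim=-1).mean()

# Usage: combine with the usual task loss during distillation, e.g.
# total_loss = task_loss + lambda_attn * attention_alignment_loss(s_attn, t_attn, vis_mask)
```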
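
Similarly, the idea of transferring math reasoning from a Math LLM into an MLLM without any tuning can be illustrated with a generic task-vector-style parameter merge. This is only a simplified sketch under the assumption that the MLLM's language backbone and the Math LLM share an architecture and a common base model; IP-Merging's actual parameter selection and merging procedure may differ.

```python
# Hypothetical sketch: tuning-free grafting of math ability via parameter deltas.
# Assumes the MLLM's language backbone, a base LLM, and a Math LLM share the
# same architecture, so their state dicts have identical keys and shapes.
import torch

@torch.no_grad()
def merge_math_delta(mllm_backbone_state: dict,
                     base_llm_state: dict,
                     math_llm_state: dict,
                     alpha: float = 0.5) -> dict:
    """Add a scaled (math - base) task vector to the MLLM's language backbone."""
    merged = {}
    for name, mllm_param in mllm_backbone_state.items():
        # Task vector: what math specialization changed relative to the base model.
        delta = math_llm_state[name] - base_llm_state[name]
        merged[name] = mllm_param + alpha * delta
    return merged

# Usage (accessor names are assumptions, not a specific library's API):
# backbone = mllm.language_model
# backbone.load_state_dict(
#     merge_math_delta(backbone.state_dict(), base_llm.state_dict(), math_llm.state_dict())
# )
```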