Multimodal learning research is increasingly focused on the problem of missing modalities, which can severely degrade classification performance. Recent work tackles the problem from several directions: reformulating missing-modality classification as a multi-task learning problem (a minimal sketch follows the list below), studying how explicit cross-modal alignment affects model performance, and calibrating representations when alignments are incomplete. These advances aim to improve the robustness and generalization of multimodal models under incomplete inputs. Noteworthy papers include:

- Rethinking Efficient Mixture-of-Experts for Remote Sensing Modality-Missing Classification, which proposes a Missing-aware Mixture-of-LoRAs framework for parameter-efficient adaptation (sketched below).
- Calibrated Multimodal Representation Learning with Missing Modalities, which leverages priors and the inherent connections among modalities to impute representations for the missing ones (sketched below).
- Adaptive Redundancy Regulation for Balanced Multimodal Information Refinement, which constructs a redundancy phase monitor that triggers intervention and estimates the dominant modality's contribution from cross-modal semantics (sketched below).
- Representation Space Constrained Learning with Modality Decoupling for Multimodal Object Detection, which proposes a method to alleviate fusion degradation and reports state-of-the-art performance.
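To make the multi-task reformulation concrete, here is a minimal PyTorch sketch. It assumes each missing-modality pattern (e.g., both present, image only, text only) is treated as a separate task with a shared encoder and a pattern-specific head; the pattern names, dimensions, and module structure are illustrative assumptions, not the method of any paper above.

```python
import torch
import torch.nn as nn

class MissingPatternMultiTask(nn.Module):
    """Shared encoder with one head per missing-modality pattern
    (hypothetical sketch of the multi-task reformulation)."""

    def __init__(self, d_model=256, n_classes=10,
                 patterns=("both", "img_only", "txt_only")):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU())
        # One classification head per missing-modality pattern ("task")
        self.heads = nn.ModuleDict({p: nn.Linear(d_model, n_classes)
                                    for p in patterns})

    def forward(self, x, pattern):
        # pattern: which modalities this batch actually has, e.g. "img_only"
        return self.heads[pattern](self.encoder(x))

# Usage: route each batch through the head matching its missing pattern.
model = MissingPatternMultiTask()
logits = model(torch.randn(8, 256), pattern="img_only")
```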
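The Missing-aware Mixture-of-LoRAs idea can be sketched as a frozen base layer augmented with several low-rank (LoRA) experts, whose mixture weights come from a router conditioned on the binary modality-presence mask. The class name, router design, and expert count below are an assumed, simplified reading of the abstract, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class MissingAwareMoLoRA(nn.Module):
    """Hypothetical sketch: frozen base projection plus LoRA experts,
    gated by which modalities are present."""

    def __init__(self, d_in, d_out, n_experts=4, rank=8, n_modalities=2):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # backbone stays frozen
        self.base.bias.requires_grad_(False)
        # Low-rank expert pairs: update = B_e @ A_e
        self.A = nn.Parameter(torch.randn(n_experts, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, d_out, rank))
        # Router conditioned on the modality-presence mask
        self.router = nn.Linear(n_modalities, n_experts)

    def forward(self, x, present_mask):
        # present_mask: (batch, n_modalities), 1 = modality available
        gates = torch.softmax(self.router(present_mask.float()), dim=-1)
        out = self.base(x)
        for e in range(self.A.shape[0]):
            delta = x @ self.A[e].T @ self.B[e].T  # low-rank update
            out = out + gates[:, e:e + 1] * delta
        return out

# Usage: the routing adapts to the missing pattern per sample.
layer = MissingAwareMoLoRA(d_in=512, d_out=512)
x = torch.randn(8, 512)
mask = torch.tensor([[1, 0]] * 8)  # e.g. one modality present, one missing
y = layer(x, mask)
```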
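A minimal sketch of calibration-style imputation: a learned cross-modal mapping predicts the missing modality's features from an available one and blends the prediction with a dataset-level prior via a learned weight. The module name, the single global prior vector, and the sigmoid blending are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class CrossModalImputer(nn.Module):
    """Hypothetical sketch: estimate a missing modality's features from
    an available one, blended with a learned global prior."""

    def __init__(self, d_src, d_tgt):
        super().__init__()
        self.map = nn.Sequential(
            nn.Linear(d_src, d_tgt), nn.ReLU(), nn.Linear(d_tgt, d_tgt)
        )
        self.prior = nn.Parameter(torch.zeros(d_tgt))  # dataset-level prior
        self.alpha = nn.Parameter(torch.tensor(0.5))   # calibration weight

    def forward(self, z_src):
        w = torch.sigmoid(self.alpha)
        return w * self.map(z_src) + (1 - w) * self.prior
```

On modality-complete samples such an imputer could be trained with an MSE loss against the true features; at test time it fills in whichever modality is absent.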
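The redundancy phase monitor can be approximated, for illustration, by a cross-modal similarity statistic that triggers a rebalancing intervention when paired embeddings become too similar. The similarity proxy, threshold, and reweighting rule below are assumptions; the paper's actual monitor and contribution estimate are not specified in this summary.

```python
import torch
import torch.nn.functional as F

def redundancy_monitor(z_a, z_b, threshold=0.9):
    """Hypothetical proxy: mean cosine similarity between paired modality
    embeddings; high similarity signals a redundancy phase."""
    sim = F.cosine_similarity(z_a, z_b, dim=-1).mean()
    return sim.item() > threshold, sim

def rebalanced_fusion(z_a, z_b, contrib_a):
    """Down-weight the dominant modality when redundancy is detected.
    contrib_a in [0, 1] estimates modality a's share of the semantics."""
    redundant, _ = redundancy_monitor(z_a, z_b)
    w_a = (1.0 - contrib_a) if redundant else 0.5  # suppress dominant stream
    return w_a * z_a + (1.0 - w_a) * z_b
```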