The field of brain-computer interfaces (BCIs) and multimodal learning is advancing rapidly, with a focus on developing more accurate and robust decoding frameworks. Recent research highlights the value of integrating multiple modalities, such as EEG and EMG signals, to improve decoding performance. There is also growing interest in uncertainty-resilient multimodal learning approaches that mitigate the effects of noisy data and low-quality labels. These advances could accelerate practical BCI applications, including speech decoding and affective computing. Noteworthy papers in this area include:

- CAT-Net, which proposes a cross-subject multimodal BCI decoding framework that fuses EEG and EMG signals to classify Mandarin tones (see the fusion sketch below).
- Shrinking the Teacher, which introduces an adaptive teaching paradigm for asymmetric EEG-vision alignment, achieving 60.2% top-1 accuracy on zero-shot brain-to-image retrieval.
- MindCross, which proposes a cross-subject framework that adapts quickly to new subjects with limited data for video reconstruction from brain signals.
- Uncertainty-Resilient Multimodal Learning, which explores consistency-guided cross-modal transfer to enhance semantic robustness and improve data efficiency (see the consistency-loss sketch below).
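
To make the EEG-EMG fusion idea concrete, the sketch below shows a generic feature-level fusion classifier in PyTorch: each modality gets its own encoder, the embeddings are concatenated, and a linear head predicts the tone class. The encoder choices, layer sizes, and four-tone output are illustrative assumptions, not CAT-Net's published architecture.

```python
import torch
import torch.nn as nn

class EEGEMGFusionClassifier(nn.Module):
    """Feature-level fusion of EEG and EMG for tone classification.

    All channel counts, layer sizes, and the four-class output are
    illustrative assumptions, not CAT-Net's published design.
    """

    def __init__(self, eeg_channels=64, emg_channels=8,
                 hidden=128, num_tones=4):
        super().__init__()
        # Separate 1-D convolutional encoders, one per modality.
        self.eeg_encoder = nn.Sequential(
            nn.Conv1d(eeg_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.emg_encoder = nn.Sequential(
            nn.Conv1d(emg_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Concatenate the two modality embeddings, then classify.
        self.classifier = nn.Linear(2 * hidden, num_tones)

    def forward(self, eeg, emg):
        # eeg: (batch, eeg_channels, time); emg: (batch, emg_channels, time)
        z_eeg = self.eeg_encoder(eeg).squeeze(-1)  # (batch, hidden)
        z_emg = self.emg_encoder(emg).squeeze(-1)  # (batch, hidden)
        return self.classifier(torch.cat([z_eeg, z_emg], dim=-1))

model = EEGEMGFusionClassifier()
logits = model(torch.randn(2, 64, 256), torch.randn(2, 8, 256))
print(logits.shape)  # torch.Size([2, 4]) -- one logit per tone class
```

Feature-level (intermediate) fusion like this is one common design; early fusion (stacking raw channels before encoding) and late fusion (averaging per-modality logits) are the usual alternatives.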
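
For the consistency-guided transfer idea, one common formulation penalizes disagreement between modality-specific predictions so that a noisy modality cannot pull the model far from its cleaner counterpart. The symmetric-KL term below is a minimal sketch of such a regularizer under that assumption, not the cited paper's actual objective; `lambda_c` is an illustrative hyperparameter.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_a, logits_b):
    """Symmetric KL divergence between two modality-specific predictions.

    One plausible form of a cross-modal consistency term; the cited
    paper may use a different formulation.
    """
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    kl_ab = F.kl_div(log_p_a, log_p_b.exp(), reduction="batchmean")
    kl_ba = F.kl_div(log_p_b, log_p_a.exp(), reduction="batchmean")
    return 0.5 * (kl_ab + kl_ba)

def total_loss(logits_a, logits_b, labels, lambda_c=0.1):
    # Supervised loss on each modality branch plus agreement between them.
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    return ce + lambda_c * consistency_loss(logits_a, logits_b)
```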