The field of brain-computer interfaces (BCIs) is moving toward more sophisticated, human-like cognitive capabilities. Recent work has focused on enabling artificial systems to interpret complex cognitive states, such as abstract concepts and nuanced perceptual experiences, by integrating brain-inspired mechanisms, multimodal learning, and advanced neural network architectures. Notably, supervising large models with recorded brain signals has shown promise on tasks such as few-shot learning and out-of-distribution recognition, while more efficient and adaptable multimodal fusion strategies have produced more robust and generalizable models.

Noteworthy papers include 'Human-like Cognitive Generalization for Large Models via Brain-in-the-loop Supervision', which demonstrates that brain-in-the-loop supervised learning can enhance the cognitive generalization of large models, and 'Incorporating brain-inspired mechanisms for multimodal learning in artificial intelligence', which proposes an inverse-effectiveness-driven multimodal fusion strategy that improves both model performance and computational efficiency. Illustrative sketches of both ideas follow.
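To make the brain-in-the-loop idea concrete, here is a minimal sketch of one common realization: training a classifier with an auxiliary loss that aligns its internal features with brain-response embeddings recorded for the same stimuli. Everything here (the architecture, the cosine alignment term, the `alpha` weighting, and all names) is an illustrative assumption, not the cited paper's exact formulation; it assumes the brain signal has already been reduced to a fixed-size embedding per stimulus (e.g. from fMRI or EEG).

```python
# Hypothetical sketch: a classifier supervised jointly by task labels and
# by recorded brain-response embeddings (representational alignment).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BrainSupervisedClassifier(nn.Module):
    """Classifier with an auxiliary head projected into brain-embedding space."""
    def __init__(self, in_dim: int, hidden_dim: int, brain_dim: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden_dim, n_classes)
        # Projects model features into the space of the brain embeddings.
        self.brain_head = nn.Linear(hidden_dim, brain_dim)

    def forward(self, x):
        h = self.encoder(x)
        return self.classifier(h), self.brain_head(h)

def brain_in_the_loop_loss(logits, labels, brain_pred, brain_target, alpha=0.5):
    """Task loss plus an alignment term toward recorded neural responses."""
    task_loss = F.cross_entropy(logits, labels)
    # Cosine alignment: pull model features toward the brain embedding
    # recorded for the same stimulus (1 - cosine similarity as a loss).
    align_loss = 1.0 - F.cosine_similarity(brain_pred, brain_target, dim=-1).mean()
    return task_loss + alpha * align_loss

# Toy usage with random data standing in for stimuli and neural recordings.
model = BrainSupervisedClassifier(in_dim=128, hidden_dim=64, brain_dim=32, n_classes=10)
x = torch.randn(8, 128)         # stimulus features
y = torch.randint(0, 10, (8,))  # class labels
b = torch.randn(8, 32)          # brain-response embeddings
logits, brain_pred = model(x)
loss = brain_in_the_loop_loss(logits, y, brain_pred, b)
loss.backward()
```

The auxiliary term acts as a regularizer: the classifier is free to fit the labels, but its representation is nudged toward the geometry of the neural responses, which is one plausible route to the reported gains in few-shot and out-of-distribution settings.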
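The inverse-effectiveness principle from multisensory neuroscience holds that cross-modal integration yields the largest gains when the individual unimodal signals are weak. The sketch below encodes that principle as a confidence-dependent fusion gain; estimating per-branch "effectiveness" from prediction entropy and the specific gain formula are my assumptions for illustration, not the mechanism proposed in the cited paper.

```python
# Hypothetical sketch: fusion whose cross-modal gain grows as the unimodal
# branches become less confident (inverse effectiveness).
import math
import torch
import torch.nn.functional as F

def inverse_effectiveness_fusion(logits_a: torch.Tensor,
                                 logits_b: torch.Tensor) -> torch.Tensor:
    """Fuse two unimodal logit tensors of shape (batch, n_classes)."""
    def strength(logits):
        # Normalized confidence in [0, 1]: 1 = fully confident branch,
        # 0 = uniform (maximally uncertain) prediction.
        p = F.softmax(logits, dim=-1)
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=-1)
        return 1.0 - entropy / math.log(logits.shape[-1])

    s_a, s_b = strength(logits_a), strength(logits_b)
    # Inverse effectiveness: the multisensory gain is super-additive when
    # both unimodal signals are weak, and shrinks toward plain averaging
    # when either branch is already confident on its own.
    gain = 1.0 + (1.0 - s_a) * (1.0 - s_b)
    return gain.unsqueeze(-1) * 0.5 * (logits_a + logits_b)

# Toy usage: a confident "visual" branch and a weak "audio" branch.
visual = torch.tensor([[4.0, 0.1, 0.1]])
audio = torch.tensor([[0.3, 0.2, 0.1]])
print(inverse_effectiveness_fusion(visual, audio))
```

Gating the fusion this way is also one route to the efficiency claim: when a single modality already suffices, the cross-modal contribution stays near a plain average, so heavier fusion machinery could in principle be skipped for those samples.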