Continual Learning and Brain-Inspired Models in Affective Computing and Computer Vision

Research in affective computing and computer vision is moving toward more robust and generalizable models, with a focus on continual learning and brain-inspired architectures. Recent work shows that models can learn effectively from continuous data streams, adapt to new tasks, and generalize across subjects. Biologically plausible models, such as those inspired by the neocortex, have yielded significant improvements in tasks like emotion recognition and visual decoding. Notably, incorporating top-down modulations and contrastive learning lets models balance stability and plasticity, achieving state-of-the-art performance in class-incremental and transfer-learning settings. Furthermore, interpretable and generalizable models, such as those built on Mixture-of-Experts architectures, have improved our understanding of neural signals in higher visual cortex and enabled more accurate visual reconstruction from fMRI data. Noteworthy papers include:

  • PhiNet v2, which achieves competitive performance in computer vision tasks while processing temporal visual input without strong augmentation.
  • MoRE-Brain, which introduces a novel Mixture-of-Experts architecture for high-fidelity and interpretable visual reconstruction from fMRI data.
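To make the routed Mixture-of-Experts idea concrete, the following is a minimal illustrative sketch of soft expert routing: a learned gate assigns each input a distribution over experts, and the layer output is the gate-weighted sum of expert outputs. This is a generic toy example, not the MoRE-Brain implementation; the class and parameter names are hypothetical, and real systems add sparse top-k routing, load balancing, and trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    """Toy routed mixture-of-experts layer (illustrative only).

    A gating matrix scores each expert per input; the output is the
    softmax-weighted combination of the experts' linear projections.
    """

    def __init__(self, dim_in, dim_out, n_experts):
        # randomly initialized weights stand in for trained parameters
        self.gate = rng.normal(size=(dim_in, n_experts)) * 0.1
        self.experts = [rng.normal(size=(dim_in, dim_out)) * 0.1
                        for _ in range(n_experts)]

    def __call__(self, x):
        # x: (batch, dim_in) -> routing weights: (batch, n_experts)
        weights = softmax(x @ self.gate)
        # expert outputs stacked as (n_experts, batch, dim_out)
        outs = np.stack([x @ w for w in self.experts])
        # per-sample weighted sum over experts -> (batch, dim_out)
        y = np.einsum("bn,nbd->bd", weights, outs)
        return y, weights

moe = MoELayer(dim_in=8, dim_out=4, n_experts=3)
x = rng.normal(size=(5, 8))
y, w = moe(x)
```

Because the routing weights are an explicit per-sample distribution over experts, they can be inspected directly, which is the basis for the interpretability such architectures claim.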

Sources

Robust Emotion Recognition via Bi-Level Self-Supervised Continual Learning

PhiNet v2: A Mask-Free Brain-Inspired Vision Foundation Model from Video

Contrastive Consolidation of Top-Down Modulations Achieves Sparsely Supervised Continual Learning

Exploring The Visual Feature Space for Multimodal Neural Decoding

Meta-Learning an In-Context Transformer Model of Human Higher Visual Cortex

MoRE-Brain: Routed Mixture of Experts for Interpretable and Generalizable Cross-Subject fMRI Visual Decoding
