Advances in Cognitive Computing and AI

Research in cognitive computing and AI is advancing rapidly toward more sophisticated, human-like intelligence. Recent work has examined how closely popular CNN architectures align with human brain processing, and the findings suggest that this alignment does not extend much beyond simple visual processing. In parallel, multimodal large language models (MLLMs) are enabling more accurate emotion recognition and reasoning, with applications in areas such as virtual reality and human-computer interaction. Noteworthy papers include 'Bridging the behavior-neural gap: A multimodal AI reveals the brain's geometry of emotion more accurately than human self-reports', which shows that MLLMs can develop rich, neurally aligned affective representations, and 'The Dragon Hatchling: The Missing Link between the Transformer and Models of the Brain', which introduces a large language model architecture built on a scale-free, biologically inspired network. These advances have significant implications for building more intelligent, human-like AI systems and underscore the importance of interdisciplinary research in cognitive computing and AI.
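The alignment claims above rest on comparing a model's internal representations of stimuli with neural responses to the same stimuli. The papers' exact analyses are not reproduced here; the sketch below illustrates one standard approach, representational similarity analysis (RSA), using hypothetical data shapes and function names.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def rdm(features):
    """Condensed representational dissimilarity matrix (RDM):
    pairwise correlation distance between rows (one row per stimulus)."""
    return pdist(features, metric="correlation")


def rsa_alignment(model_features, brain_responses):
    """Spearman correlation between the model RDM and the brain RDM.
    Higher values mean stimuli the model represents as similar also
    evoke similar neural response patterns."""
    rho, _ = spearmanr(rdm(model_features), rdm(brain_responses))
    return rho


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical shapes: 50 emotional stimuli, 512-d model embeddings,
    # and 200 voxels/channels of neural responses to the same stimuli.
    model_features = rng.standard_normal((50, 512))
    brain_responses = rng.standard_normal((50, 200))
    print(f"model-brain RSA score: {rsa_alignment(model_features, brain_responses):.3f}")
```

In practice the synthetic arrays would be replaced with real model embeddings (e.g., CNN or MLLM features for each stimulus) and measured brain data (e.g., fMRI response patterns), and the resulting score compared against a noise ceiling or a behavioral baseline such as human self-reports.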
Sources
Customizing Visual Emotion Evaluation for MLLMs: An Open-vocabulary, Multifaceted, and Scalable Approach
Bridging the behavior-neural gap: A multimodal AI reveals the brain's geometry of emotion more accurately than human self-reports