Advances in Multisensory Integration and Cognitive Support

Research in multisensory integration and cognitive support is converging on technologies that can enhance human performance and engagement. Recent studies apply machine learning to estimate cognitive effort and infer cognitive load from neurophysiological signals such as functional near-infrared spectroscopy (fNIRS); a minimal sketch of such a pipeline appears after the list below. Mixed reality is being investigated both as an assistive technology for people with cognitive impairments such as ADHD and as a way to improve training outcomes in safety-critical fields like aviation. Multisensory integration itself is being examined in virtual reality and haptics, with a focus on how different sensory modalities interact and influence one another. Noteworthy papers include:

  • Understood, a mixed reality system that provides real-time communication support for adults with ADHD in real-world settings, which demonstrated high usability and effectiveness.
  • MeloKids, a multisensory VR system for enhancing speech and motor coordination in children with hearing loss, which provided evidence to guide the design of personalized, interactive rehabilitation systems.
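
As a concrete illustration of the first trend, the sketch below shows a minimal machine-learning pipeline for estimating cognitive load from windowed fNIRS-style signals. The data, feature choices (per-channel mean, slope, and variance of oxygenated-hemoglobin traces), and classifier are illustrative assumptions, not the method of any paper listed under Sources.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder data standing in for windowed fNIRS recordings:
# trials x channels x samples of oxygenated-hemoglobin (HbO) traces,
# labeled 0 (low cognitive load) or 1 (high cognitive load).
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 120, 8, 100
labels = rng.integers(0, 2, n_trials)
signals = rng.normal(size=(n_trials, n_channels, n_samples))
signals += 0.4 * labels[:, None, None]  # inject a small load-dependent shift

def window_features(x):
    """Per-channel summary features often used with fNIRS: mean, slope, variance."""
    t = np.arange(x.shape[-1])
    mean = x.mean(axis=-1)
    # Fit a line to each channel's trace; the first coefficient is the slope.
    slope = np.polyfit(t, x.reshape(-1, x.shape[-1]).T, 1)[0].reshape(x.shape[:-1])
    var = x.var(axis=-1)
    return np.concatenate([mean, slope, var], axis=-1)

X = window_features(signals)  # shape: (n_trials, 3 * n_channels)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Summary features with a linear classifier are a common, interpretable baseline in this literature; heavier sequence models over raw traces are an alternative when more data is available.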

Sources

Estimating Cognitive Effort from Functional Near-Infrared Spectroscopy (fNIRS) Signals using Machine Learning

Strategies to Manage Human Factors in Mixed Reality Pilot Training: A Survey

Understood: Real-Time Communication Support for Adults with ADHD Using Mixed Reality

Multisensory Integration and Sensory Substitution Across Vision, Audition, and Haptics: Answering the What, Which, and When in Study Protocols

MeloKids: Multisensory VR System to Enhance Speech and Motor Coordination in Children with Hearing Loss
