Human-computer interaction and sensing research is evolving rapidly, with growing emphasis on systems that improve human performance, safety, and well-being. Recent work has shifted markedly toward multimodal approaches that combine visual, physiological, and vehicular signals to build more robust, context-aware systems.
Notable advances include automated pain assessment systems that achieve state-of-the-art performance when estimating pain levels from diverse input modalities. Researchers have also made significant progress in understanding inattentional blindness with augmented reality head-up displays, underscoring the need for safety-centric evaluation frameworks.
Other areas of innovation include active inference models of covert and overt visual attention, which have been shown to allocate attentional resources effectively in complex environments, and causal tree-based methods for modeling the personalized difficulty of rehabilitation exercises, enabling support tailored to individual needs.
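To make the attention-allocation idea concrete, the following is a minimal, self-contained sketch in the spirit of active inference, not a reproduction of any cited paper's model. It assumes a toy setup (the names `N_LOCATIONS`, `HIT`, and `FALSE_ALARM` are illustrative constants): the agent holds a belief over where a hidden target is and repeatedly attends to the location whose expected observation most reduces that uncertainty, which is the epistemic (information-gain) term of expected free energy when preferences are flat.

```python
# Minimal sketch (assumed toy model): epistemic attention allocation.
# The agent attends wherever the expected observation maximally reduces
# uncertainty about a hidden target location.
import numpy as np

rng = np.random.default_rng(0)

N_LOCATIONS = 8        # candidate attention targets (illustrative)
HIT = 0.9              # P(detect | target at attended location)
FALSE_ALARM = 0.1      # P(detect | target elsewhere)
true_target = rng.integers(N_LOCATIONS)

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def likelihood(attended):
    """P(detect | target location) for each possible target, given the attended location."""
    lik = np.full(N_LOCATIONS, FALSE_ALARM)
    lik[attended] = HIT
    return lik

def expected_info_gain(belief, attended):
    """Epistemic value: expected reduction in entropy over the target location."""
    lik = likelihood(attended)                        # P(detect | s)
    p_detect = (lik * belief).sum()                   # marginal P(detect)
    post_detect = lik * belief / p_detect             # posterior if detection occurs
    post_miss = (1 - lik) * belief / (1 - p_detect)   # posterior if it does not
    expected_post_entropy = (p_detect * entropy(post_detect)
                             + (1 - p_detect) * entropy(post_miss))
    return entropy(belief) - expected_post_entropy

belief = np.full(N_LOCATIONS, 1.0 / N_LOCATIONS)      # uniform prior over target location

for step in range(5):
    # Covert attention policy: pick the location with the highest epistemic value.
    gains = [expected_info_gain(belief, i) for i in range(N_LOCATIONS)]
    attend = int(np.argmax(gains))

    # Sample an observation from the generative process and update the belief.
    p_detect_true = HIT if attend == true_target else FALSE_ALARM
    detected = rng.random() < p_detect_true
    lik = likelihood(attend)
    belief = (lik if detected else 1 - lik) * belief
    belief /= belief.sum()

    print(f"step {step}: attend={attend}, detected={detected}, "
          f"MAP target={int(np.argmax(belief))}, entropy={entropy(belief):.3f}")
```

A full active inference treatment would add the pragmatic (preference-related) term of expected free energy and a model of overt saccades; the sketch omits these to keep the focus on uncertainty-driven allocation of attention.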
Some particularly noteworthy papers in this area include:
- PainFormer, a vision foundation model that provides high-quality embeddings for automatic pain assessment, demonstrating state-of-the-art performance across various modalities.
- A study on inattentional blindness with augmented reality head-up displays, which highlights the importance of designing safety-centric evaluation frameworks for AR interfaces.