Advancements in Human-Computer Interaction and Adaptive Systems

The field of human-computer interaction is moving toward more immersive and dynamic experiences that integrate physical and virtual spaces. Researchers are applying machine learning, and reinforcement learning in particular, to build adaptive systems that learn from user interactions and improve over time, including frameworks for gesture-based control, adaptive user interfaces, and cross-reality lifestyles. A complementary line of work uses techniques such as deep reinforcement learning to optimize the placement of content and systems in mixed-reality and urban environments. Noteworthy papers in this area include:

  • Integrating Human Feedback into a Reinforcement Learning-Based Framework for Adaptive User Interfaces, which personalizes interface adaptation by folding explicit human feedback into the learning loop (a rough sketch of the idea appears after this list).
  • Deep Reinforcement Learning for Urban Air Quality Management, which uses a deep reinforcement learning framework to optimize where air purification booths are placed across a metropolitan environment (a toy placement sketch follows below).
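
To make the first direction concrete, here is a minimal sketch of how human feedback can shape the reward in a reinforcement learning loop that adapts a UI layout. Everything in it is an illustrative assumption, not the cited paper's method: the two-state layout space, the feedback and task-reward stubs, and all hyperparameters.

```python
import random

random.seed(0)

STATES = ["compact_ui", "expanded_ui"]
ACTIONS = ["keep_layout", "switch_layout"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
FEEDBACK_WEIGHT = 0.5  # assumed weighting of human feedback in the reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def human_feedback(state: str, action: str) -> float:
    """Stub: a real system would collect explicit ratings or corrections."""
    # Simulated preference: this hypothetical user likes the expanded layout.
    return 1.0 if (state, action) == ("compact_ui", "switch_layout") else 0.0

def task_reward(state: str, action: str) -> float:
    """Stub: e.g., improvement in task completion time after adapting."""
    return random.uniform(0.0, 0.2)

def step(state: str) -> str:
    # Epsilon-greedy action selection over the current Q estimates.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    # Shaped reward: environment signal plus weighted human feedback.
    reward = task_reward(state, action) + FEEDBACK_WEIGHT * human_feedback(state, action)
    if action == "switch_layout":
        next_state = "expanded_ui" if state == "compact_ui" else "compact_ui"
    else:
        next_state = state
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    return next_state

state = "compact_ui"
for _ in range(2000):
    state = step(state)
print(Q)
```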

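For the second direction, the following toy sketch shows a multi-objective placement problem on a synthetic grid: the reward trades pollution mitigated against booth cost. The pollution map, coverage model, cost weighting, and the bandit-style value learner (a lightweight stand-in for a deep RL agent) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 5           # 5x5 city grid (illustrative)
K = 3              # number of booths to place
COST = 0.3         # per-booth cost term (assumed units)
COVER_RADIUS = 1   # booth covers its cell and 4-neighbours

pollution = rng.random((GRID, GRID))  # synthetic pollution map

def coverage(cells):
    """Total pollution mitigated by a set of booth cells (overlap counted once)."""
    covered = np.zeros_like(pollution, dtype=bool)
    for (r, c) in cells:
        for dr in range(-COVER_RADIUS, COVER_RADIUS + 1):
            for dc in range(-COVER_RADIUS, COVER_RADIUS + 1):
                if abs(dr) + abs(dc) <= COVER_RADIUS:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < GRID and 0 <= cc < GRID:
                        covered[rr, cc] = True
    return float(pollution[covered].sum())

def objective(cells):
    # Multi-objective reward: pollution mitigated minus placement cost.
    return coverage(cells) - COST * len(cells)

# Epsilon-greedy value estimates per cell (a stand-in for a deep Q-network).
values = np.zeros(GRID * GRID)
counts = np.zeros(GRID * GRID)
EPS = 0.2

for _ in range(2000):
    placed = []
    for _ in range(K):
        if rng.random() < EPS:
            a = int(rng.integers(GRID * GRID))
        else:
            a = int(np.argmax(values))
        cell = (a // GRID, a % GRID)
        if cell in placed:
            continue
        # Marginal reward for adding this booth to the current placement.
        r = objective(placed + [cell]) - objective(placed)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean update
        placed.append(cell)

best = np.argsort(values)[-K:]
print("learned booth cells:", [(int(a) // GRID, int(a) % GRID) for a in best])
```
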
Sources

A Real-Time Gesture-Based Control Framework

Cam-2-Cam: Exploring the Design Space of Dual-Camera Interactions for Smartphone-based Augmented Reality

Integrating Human Feedback into a Reinforcement Learning-Based Framework for Adaptive User Interfaces

Cross-Reality Lifestyle: Integrating Physical and Virtual Lives through Multi-Platform Metaverse

Adaptive 3D UI Placement in Mixed Reality Using Deep Reinforcement Learning

Investigating Adaptive Tuning of Assistive Exoskeletons Using Offline Reinforcement Learning: Challenges and Insights

Deep Reinforcement Learning for Urban Air Quality Management: Multi-Objective Optimization of Pollution Mitigation Booth Placement in Metropolitan Environments