The field of human-computer interaction is moving toward more immersive, dynamic experiences that integrate physical and virtual spaces. Researchers are applying machine learning and reinforcement learning to build adaptive systems that learn from user interactions and improve over time, including frameworks for gesture-based control, adaptive user interfaces, and cross-reality lifestyles. A complementary line of work optimizes the placement of content and systems in mixed-reality and urban environments using techniques such as deep reinforcement learning. Noteworthy papers in this area include:
- Integrating Human Feedback into a Reinforcement Learning-Based Framework for Adaptive User Interfaces, which personalizes interface adaptations by folding explicit human feedback into the learning loop (a minimal sketch of this idea follows the list).
- Deep Reinforcement Learning for Urban Air Quality Management, which uses a novel deep reinforcement learning framework to optimize the placement of air purification booths in metropolitan environments (see the placement sketch after the list).
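To ground the first paper's idea, the sketch below shows one common way to fold explicit human feedback into a reinforcement learning loop for interface adaptation: blending an implicit task signal with occasional thumbs-up/down ratings in the reward. The contexts, layouts, simulated user model, and the `FEEDBACK_WEIGHT` blending constant are all illustrative assumptions, not details taken from the paper.

```python
import random

# Hypothetical setup: the agent picks one of three UI layouts per usage
# context and learns from a blended reward. None of these names or the
# simulated user model come from the paper; they are stand-ins.
CONTEXTS = ["reading", "editing", "browsing"]
LAYOUTS = ["compact", "spacious", "minimal"]

# Simulated user preference: which layout truly suits each context.
TRUE_BEST = {"reading": "minimal", "editing": "spacious", "browsing": "compact"}

ALPHA, EPSILON, FEEDBACK_WEIGHT = 0.1, 0.1, 0.5
Q = {(c, l): 0.0 for c in CONTEXTS for l in LAYOUTS}

def implicit_reward(context, layout):
    """Simulated implicit task signal: suitable layouts score higher."""
    base = 1.0 if layout == TRUE_BEST[context] else 0.2
    return base + random.gauss(0, 0.1)

def human_feedback(context, layout):
    """Simulated explicit feedback: +1 thumbs up, -1 thumbs down."""
    return 1.0 if layout == TRUE_BEST[context] else -1.0

for step in range(5000):
    context = random.choice(CONTEXTS)
    # Epsilon-greedy selection over candidate layouts.
    if random.random() < EPSILON:
        layout = random.choice(LAYOUTS)
    else:
        layout = max(LAYOUTS, key=lambda l: Q[(context, l)])
    # Blend the implicit task reward with occasional explicit feedback.
    reward = implicit_reward(context, layout)
    if random.random() < 0.2:  # the user rates only some adaptations
        reward += FEEDBACK_WEIGHT * human_feedback(context, layout)
    # Contextual-bandit Q update (no state transitions, for simplicity).
    Q[(context, layout)] += ALPHA * (reward - Q[(context, layout)])

for c in CONTEXTS:
    print(c, "->", max(LAYOUTS, key=lambda l: Q[(c, l)]))
```

Treating each adaptation decision as a contextual bandit keeps the example short; a full framework would also model state transitions between interface configurations.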
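The placement problem in the second paper can be framed as learning a policy that selects booth locations to maximize covered pollution. The paper describes a deep reinforcement learning framework; as a simplified stand-in, this sketch trains a linear softmax policy with REINFORCE on a hypothetical grid city, where the pollution map, coverage radius, and booth count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 8                  # hypothetical city modeled as an 8x8 grid
N = GRID * GRID           # candidate booth sites, one per cell
K = 5                     # number of purification booths to place
RADIUS = 1                # a booth cleans cells within RADIUS of its site

# Hypothetical pollution map: higher values mean worse air quality.
pollution = rng.random((GRID, GRID))

def coverage_reward(placements):
    """Total pollution weight covered by the chosen booth cells."""
    covered = np.zeros((GRID, GRID), dtype=bool)
    for idx in placements:
        r, c = divmod(int(idx), GRID)
        covered[max(r - RADIUS, 0):r + RADIUS + 1,
                max(c - RADIUS, 0):c + RADIUS + 1] = True
    return float(pollution[covered].sum())

theta = np.zeros(N)       # logits of a softmax policy over cells
lr, baseline = 0.05, 0.0  # learning rate and running reward baseline

for episode in range(3000):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    # Sample K sites independently; duplicates just waste a booth,
    # which the coverage reward naturally penalizes.
    placements = rng.choice(N, size=K, replace=True, p=probs)
    reward = coverage_reward(placements)
    baseline += 0.01 * (reward - baseline)     # variance reduction
    # Score-function (REINFORCE) gradient for K independent draws:
    # sum over samples of (one_hot(a) - probs).
    grad = np.bincount(placements, minlength=N) - K * probs
    theta += lr * (reward - baseline) * grad

best = np.argsort(theta)[-K:]                  # greedy placement after training
print("learned booth cells:", [divmod(int(i), GRID) for i in best])
```

Swapping the linear logits for a neural network conditioned on the city state would recover the "deep" part of the approach; the reward-shaping and policy-gradient structure stay the same.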