The field of artificial intelligence and music is moving towards a more experiential and collaborative approach, with a focus on improvisation, open-endedness, and human-machine interaction. Recent work highlights the potential of AI systems to learn from and interact with humans creatively and in real time, particularly in music generation and performance.
Notable advances include vision-based gesture recognition for real-time music composition and interactive systems that support human-AI musical co-creativity. Researchers have also made significant progress in enabling robots to perform complex tasks such as piano playing, with an emphasis on scalability and learning without human demonstrations.
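To make the gesture-to-music idea concrete, here is a minimal sketch of how hand gestures captured from a webcam could drive note events in real time. It assumes the MediaPipe Hands, OpenCV, and mido packages, and the mapping choices (hand height to scale degree, a fixed C-major pitch set) are illustrative placeholders, not the method of any of the papers summarized here.

```python
# Minimal sketch: webcam hand landmarks mapped to MIDI notes in real time.
# Assumes `mediapipe`, `opencv-python`, and `mido` (with a MIDI backend) are
# installed; the hand-height -> pitch mapping is an illustrative assumption.
import cv2
import mediapipe as mp
import mido

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # pitches to quantize to

midi_out = mido.open_output()                # default MIDI output port
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

last_note = None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        wrist = result.multi_hand_landmarks[0].landmark[0]
        # Higher hand position (smaller normalized y) -> higher scale degree.
        idx = min(int((1.0 - wrist.y) * len(C_MAJOR)), len(C_MAJOR) - 1)
        note = C_MAJOR[idx]
        if note != last_note:
            if last_note is not None:
                midi_out.send(mido.Message('note_off', note=last_note))
            midi_out.send(mido.Message('note_on', note=note, velocity=90))
            last_note = note
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
```

A real dynamic-gesture system would classify motion trajectories rather than a single landmark position, but the same detect-map-emit loop applies.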
Several papers stand out for their innovative approaches and contributions to the field:

- The paper on improvisation and open-endedness offers insights into designing future experiential AI agents that can improvise alone or alongside humans.
- The paper on rhythm in the air introduces a novel application of vision-based dynamic gesture recognition for composing music in real time through gestures.
- The paper on the ghost in the keys presents an interactive system that stages a real-time musical duet between a human pianist and a generative model.
- The paper on human-machine ritual proposes an alternative approach to human-machine collaboration built on wearable IMU sensor data and responsive multimedia control (see the sketch after this list).
- The paper on dexterous robotic piano playing at scale presents the first agent capable of performing nearly one thousand music pieces via scalable, human-demonstration-free learning.
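As a purely hypothetical illustration of the IMU-driven multimedia control idea, the sketch below smooths an accelerometer stream and forwards it as an OSC control value to a responsive audio/visual patch. The read_accelerometer() source and the /ritual/energy address are placeholders invented here, not details from the paper; only the python-osc client API is assumed.

```python
# Illustrative sketch only: smooth a wearable IMU accelerometer stream and
# forward it as an OSC control value (e.g. to Max/MSP or TouchDesigner).
# read_accelerometer() and the /ritual/energy address are hypothetical.
import math
import time
from pythonosc.udp_client import SimpleUDPClient

def read_accelerometer():
    """Placeholder for a real wearable IMU driver; returns (ax, ay, az) in g."""
    return (0.0, 0.0, 1.0)

client = SimpleUDPClient("127.0.0.1", 9000)  # assumed OSC receiver host/port
smoothed = 0.0
alpha = 0.2                                  # exponential smoothing factor

while True:
    ax, ay, az = read_accelerometer()
    # Motion energy above the 1 g gravity baseline.
    energy = abs(math.sqrt(ax * ax + ay * ay + az * az) - 1.0)
    smoothed = alpha * energy + (1 - alpha) * smoothed
    client.send_message("/ritual/energy", smoothed)  # drive visuals or sound
    time.sleep(0.01)                                 # ~100 Hz control rate
```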