Advances in Music Information Retrieval and Audio Applications

The field of Music Information Retrieval (MIR) and audio applications is evolving rapidly, with a focus on efficient methods for analyzing, generating, and interacting with music and audio data. Recent work has shifted toward artificial intelligence (AI) and machine learning (ML) techniques, particularly neural networks and deep learning, to improve the accuracy and robustness of music and audio analysis. Notably, interest is growing in AI agents and multi-agent systems for music analysis, education, and generation. Researchers are also exploring preference learning and human-centered approaches to align music generation with human preferences and values, while large-scale datasets and benchmarks such as LargeSHS are facilitating progress on music adaptation and generation tasks. Overall, the field is moving toward more sophisticated, human-centered approaches to music and audio analysis, generation, and interaction.

Noteworthy papers include: Lightweight Hopfield Neural Networks for Bioacoustic Detection, which presents a fast and accurate method for detecting animal calls; Data-Efficient Self-Supervised Algorithms for Fine-Grained Birdsong Analysis, which introduces a robust and efficient approach to birdsong annotation; and Aligning Generative Music AI with Human Preferences, which advocates the systematic application of preference-alignment techniques to music generation.
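To make the Hopfield-network idea concrete, the sketch below shows the classical mechanism such detectors build on: binary pattern templates are stored via Hebbian learning, and a noisy input is iteratively driven back to the nearest stored template. This is an illustrative toy, not code from the cited paper; the call templates and parameters are invented for the example.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products of the stored patterns,
    normalized by dimension, with the self-connections zeroed out."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=10):
    """Synchronous sign updates until the state stops changing."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1  # break ties toward +1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Two toy +/-1 "call templates" (hypothetical 8-dim feature patterns).
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
W = train_hopfield(np.stack([p1, p2]))

# Corrupt one bit of p1 and recover the stored template.
noisy = p1.astype(float)
noisy[0] *= -1
restored = recall(W, noisy)
```

Here `restored` converges back to `p1`, which is why such networks can act as lightweight detectors: a match is declared when an input settles onto a stored template.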

Sources

Lightweight Hopfield Neural Networks for Bioacoustic Detection and Call Monitoring of Captive Primates

Data-Efficient Self-Supervised Algorithms for Fine-Grained Birdsong Analysis

Preference-Based Learning in Audio Applications: A Systematic Analysis

Artificial Intelligence Agents in Music Analysis: An Integrative Perspective Based on Two Use Cases

MuCPT: Music-related Natural Language Model Continued Pretraining

Count The Notes: Histogram-Based Supervision for Automatic Music Transcription

A Controllable Perceptual Feature Generative Model for Melody Harmonization via Conditional Variational Autoencoder

Aligning Generative Music AI with Human Preferences: Methods and Challenges

LargeSHS: A large-scale dataset of music adaptation

Difficulty-Controlled Simplification of Piano Scores with Synthetic Data for Inclusive Music Education

Music Recommendation with Large Language Models: Challenges, Opportunities, and Evaluation
