The field of Music Information Retrieval (MIR) continues to advance with new approaches to audio analysis and generation. Recent work has focused on improving the accuracy and efficiency of music transcription, beat tracking, and chord recognition, with deep learning and neural network methods playing a central role. There is also growing interest in procedural data generation and sonification as tools for improving music understanding and perception. In particular, harmonic-aware fine-tuning and pre-trained music foundation models have shown promising results for beat tracking. Overall, the field is moving toward more accurate and efficient music analysis and generation, with applications in music production, recommendation, and education.

Noteworthy papers include: HingeNet, which proposes a harmonic-aware fine-tuning approach for beat tracking; BeatFM, which introduces a pre-trained music foundation model that improves beat tracking performance; AutoMashup, a system for automatic mashup creation based on source separation, music analysis, and compatibility estimation; and Perceiving Slope and Acceleration, which introduces a sampling method for pitch-based sonification that enhances the perception of slope and acceleration in univariate functions.
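For context, the kind of conventional beat-tracking baseline that these fine-tuning and foundation-model approaches build on can be run in a few lines with an off-the-shelf library. The sketch below uses librosa's standard onset-strength plus dynamic-programming tracker; it is a generic illustration, not the HingeNet or BeatFM method, and the audio path is a placeholder.

```python
# Minimal beat-tracking baseline using librosa's built-in tracker.
# This is a generic illustration, not the HingeNet/BeatFM approach.
import librosa

# Load audio (path is a placeholder); librosa resamples to 22.05 kHz by default.
y, sr = librosa.load("song.wav")

# Estimate global tempo and beat positions (frame indices) from an onset
# strength envelope via dynamic programming.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

# Convert beat frame indices to timestamps in seconds.
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print("Estimated tempo (BPM):", tempo)
print("First beats (s):", beat_times[:8])
```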
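Similarly, the core idea behind pitch-based sonification of a univariate function is to map sampled function values to instantaneous frequency and synthesize a continuous tone, so that slope is heard as the rate of pitch change and acceleration as how that rate itself changes. The sketch below uses plain uniform sampling and an arbitrary frequency range as an assumption; it is not the sampling method proposed in Perceiving Slope and Acceleration.

```python
# Sketch of pitch-based sonification: map f(x) to pitch and synthesize a tone.
# Uniform sampling and a linear pitch mapping are shown for illustration only;
# the paper's proposed sampling method is not reproduced here.
import numpy as np
from scipy.io import wavfile

sr = 44100                  # audio sample rate (Hz)
duration = 3.0              # seconds of audio covering the whole function
f_lo, f_hi = 220.0, 880.0   # arbitrary pitch range (Hz) for the mapping

# Univariate function to sonify, sampled uniformly over [0, 1].
x = np.linspace(0.0, 1.0, int(sr * duration))
values = x ** 2             # example: accelerating growth -> a rising sweep that speeds up

# Normalize values to [0, 1] and map linearly onto the frequency range.
v = (values - values.min()) / (values.max() - values.min())
freq = f_lo + v * (f_hi - f_lo)

# Integrate instantaneous frequency to get a continuous phase (avoids clicks).
phase = 2.0 * np.pi * np.cumsum(freq) / sr
audio = 0.2 * np.sin(phase)

wavfile.write("sonification.wav", sr, audio.astype(np.float32))
```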