The field of music technology is moving toward deeper integration of artificial intelligence and machine learning to enhance creative processes and improve music production. Researchers are exploring ways to preserve the human element in music creation while leveraging technology to support and augment human creativity. This includes AI-based tools that can predict equalizer parameters, generate music variations, and modulate audio effects in real time based on emotional cues. There is also a growing focus on the ethical implications of AI music research and the need for more effective engagement with these issues.

Notable papers in this area include:

- From Sound to Setting: AI-Based Equalizer Parameter Prediction for Piano Tone Replication, which presents a system for predicting EQ parameters directly from audio features (this prediction setting is sketched below).
- The Shape of Surprise: Structured Uncertainty and Co-Creativity in AI Music Tools, which examines how designers incorporate randomness and uncertainty into creative practice.
- Supporting Creative Ownership through Deep Learning-Based Music Variation, which investigates the importance of personal ownership in musical AI design.
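To make the EQ-prediction setting concrete, it can be framed as a regression from summary audio features to per-band gains. The sketch below is a minimal illustration of that framing under assumptions of my own, not the system described in the paper: the seven-band layout, the log-mel/centroid features, and the use of scikit-learn's MLPRegressor are all illustrative choices.

```python
# Minimal sketch (assumed setup, not the paper's method): regress parametric-EQ
# band gains from spectral features of an audio clip.
import numpy as np
import librosa
from sklearn.neural_network import MLPRegressor

EQ_BANDS_HZ = [100, 250, 500, 1000, 2000, 4000, 8000]  # assumed 7-band layout


def spectral_features(y: np.ndarray, sr: int) -> np.ndarray:
    """Summarize a clip as mean log-mel energies plus the spectral centroid."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel).mean(axis=1)               # (64,)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    return np.concatenate([log_mel, [centroid]])                  # (65,)


# Toy training data: synthetic tones stand in for (recording, target-EQ) pairs.
rng = np.random.default_rng(0)
sr = 22050
X, Y = [], []
for _ in range(32):
    freq = rng.uniform(110, 880)
    clip = np.sin(2 * np.pi * freq * np.arange(sr) / sr).astype(np.float32)
    X.append(spectral_features(clip, sr))
    Y.append(rng.uniform(-6.0, 6.0, size=len(EQ_BANDS_HZ)))       # gains in dB

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(np.array(X), np.array(Y))

# Predict EQ gains for a new clip and print them per band.
test = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)
gains = model.predict(spectral_features(test, sr).reshape(1, -1))[0]
for hz, g in zip(EQ_BANDS_HZ, gains):
    print(f"{hz:>5d} Hz: {g:+.1f} dB")
```

In an actual tone-replication workflow the targets would come from reference recordings with known EQ settings rather than random gains, and the feature extractor and model would be chosen to suit that data.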