The field of speech recognition and generation is witnessing significant developments, with a focus on improving model interpretability, robustness, and controllability. Researchers are exploring innovative techniques, such as adapting interpretability methods from large language models to automatic speech recognition systems, to gain insights into linguistic representations and model behaviors. Moreover, novel frameworks are being proposed to enhance voice timbre attribute detection, controllable speech and singing voice generation, and speech restoration. Noteworthy papers in this area include QvTAD, a pairwise comparison framework for voice timbre attribute detection that achieves substantial improvements across multiple timbre descriptors; Vevo2, a unified framework for controllable speech and singing voice generation that enables flexible control over text, prosody, and style; and Multi-Metric Preference Alignment for Generative Speech Restoration, which applies preference-based post-training to speech restoration tasks and reports consistent, significant performance gains.
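To make the pairwise-comparison idea behind timbre attribute detection concrete, the sketch below shows a minimal comparator that, given embeddings of two utterances, predicts which one exhibits a target timbre descriptor (e.g., brightness) more strongly. This is an illustrative toy under assumed inputs (fixed-size utterance embeddings, binary preference labels), not the QvTAD architecture; all class and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

class PairwiseTimbreComparator(nn.Module):
    """Toy pairwise comparator for a single timbre descriptor.

    Scores each utterance embedding and compares the two scores
    (Bradley-Terry style). Illustrative only; not the QvTAD method.
    """

    def __init__(self, embed_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
        # Probability that utterance A expresses the descriptor more than B.
        score_a = self.scorer(emb_a)
        score_b = self.scorer(emb_b)
        return torch.sigmoid(score_a - score_b)


if __name__ == "__main__":
    model = PairwiseTimbreComparator()
    emb_a = torch.randn(4, 256)  # placeholder utterance/timbre embeddings
    emb_b = torch.randn(4, 256)
    prob_a_stronger = model(emb_a, emb_b)              # shape: (4, 1)
    labels = torch.randint(0, 2, (4, 1)).float()       # 1 if A is stronger
    loss = nn.functional.binary_cross_entropy(prob_a_stronger, labels)
    print(prob_a_stronger.squeeze(-1), loss.item())
```

Framing the task as a relative comparison rather than absolute attribute regression sidesteps the difficulty of calibrating subjective timbre ratings across listeners, which is the general motivation for pairwise formulations of this kind.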