The field of deep learning is placing growing emphasis on uncertainty quantification and adaptation, with a focus on developing models that can reliably predict and adapt to changing conditions. Recent research has highlighted the importance of uncertainty-aware deep learning, particularly in applications such as wildfire danger forecasting and speech emotion recognition. Techniques such as transfer learning, meta-learning, and conformal prediction are being explored to improve the robustness and reliability of deep learning models, and the development of lightweight, efficient methods for uncertainty quantification and adaptation is enabling deployment in critical applications. Noteworthy papers include:

- An Efficient Transfer Learning Method Based on Adapter with Local Attributes for Speech Emotion Recognition, which proposes a novel adapter-based approach for efficient transfer learning in speech emotion recognition.
- Guided Uncertainty Learning Using a Post-Hoc Evidential Meta-Model, which introduces a lightweight meta-model approach for guiding uncertainty estimation in deep learning models.
- Uncertainty-Aware Deep Learning for Wildfire Danger Forecasting, which presents an uncertainty-aware deep learning framework for wildfire danger forecasting.
- Towards a Certificate of Trust: Task-Aware OOD Detection for Scientific AI, which proposes a new OOD detection method based on estimating joint likelihoods with a score-based diffusion model.
- EMO-TTA: Improving Test-Time Adaptation of Audio-Language Models for Speech Emotion Recognition, which proposes a lightweight, training-free adaptation framework for speech emotion recognition.
- Annotation-Efficient Active Test-Time Adaptation with Conformal Prediction, which incorporates principled, coverage-guaranteed uncertainty into active test-time adaptation.
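
To make the coverage-guaranteed uncertainty mentioned in the last item concrete, the following is a minimal sketch of split conformal prediction for classification. It is not taken from any of the papers above; the names (calib_probs, calib_labels, test_probs, alpha) and the softmax-based nonconformity score are illustrative assumptions.

```python
# Minimal split conformal prediction sketch (illustrative, not from the cited papers).
# Assumes a classifier whose softmax outputs are available on a held-out
# calibration split and on test inputs.
import numpy as np

def calibrate_threshold(calib_probs, calib_labels, alpha=0.1):
    # Nonconformity score: 1 minus the probability assigned to the true label.
    scores = 1.0 - calib_probs[np.arange(len(calib_labels)), calib_labels]
    # Finite-sample-corrected quantile yields (1 - alpha) marginal coverage.
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_sets(test_probs, threshold):
    # Include every class whose nonconformity score is below the threshold.
    return [np.where(1.0 - p <= threshold)[0] for p in test_probs]

# Usage (illustrative): with alpha = 0.1, roughly 90% of the returned sets
# are guaranteed to contain the true label, regardless of the model.
# threshold = calibrate_threshold(calib_probs, calib_labels, alpha=0.1)
# sets = prediction_sets(test_probs, threshold)
```

The size of each prediction set then serves as a distribution-free uncertainty signal, which is the kind of quantity that active test-time adaptation methods can use to decide which samples to query for annotation.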