Research in software reliability and system safety is converging on a deeper understanding of the interplay among reliability metrics, safety, and security. Current work explores new methods for predicting reliability, integrates safety and security considerations into development, and aims to build more robust software systems. One key focus is the development of frameworks and guidelines that help practitioners improve reliability prediction models and address safety and security challenges. Another important direction is the relationship between system-level safety objectives and model performance requirements, particularly for machine learning components and deep neural networks. There is also growing recognition of software testing as a dynamic, intellectually engaging discipline, and of the need to balance task complexity to sustain practitioner motivation.

Noteworthy papers include "Relating System Safety and Machine Learnt Model Performance", which proposes a method for deriving safety-related performance requirements for machine learning components, and "Towards a Periodic Table of Computer System Design Principles", which aims to establish a shared vocabulary for system design principles across domains.