Research on neural networks is converging on two goals: robustness and energy efficiency. On the robustness side, techniques such as contractivity-promoting regularization and hybrid projection decomposition aim to reduce the sensitivity of networks to input noise, hardware perturbations, and adversarial attacks, making them more reliable in real-world applications. In parallel, there is growing interest in energy-efficient neural networks built on compute-in-memory (CIM) architectures and non-volatile memory-based accelerators, which can substantially reduce energy consumption and make neural networks more practical for edge AI devices.

Noteworthy papers in this area include:

- Robust Convolution Neural ODEs via Contractivity-promoting regularization, which proposes a regularization technique that promotes contractive dynamics to improve robustness (a minimal sketch of the underlying idea follows this list).
- HPD: Hybrid Projection Decomposition for Robust State Space Models on Analog CIM Hardware, which presents a hybrid projection decomposition strategy to reduce the susceptibility of state space models to hardware perturbations.
- Extending Straight-Through Estimation for Robust Neural Networks on Analog CIM Hardware, which introduces an extended straight-through estimator framework that enables noise-aware training with more accurate noise modeling in analog CIM systems (see the second sketch below for the basic pattern).
- Special Session: Sustainable Deployment of Deep Neural Networks on Non-Volatile Compute-in-Memory Accelerators, which presents a negative optimization training mechanism for robust DNN deployment on NVCIM.
- A Time- and Energy-Efficient CNN with Dense Connections on Memristor-Based Chips, which proposes a scheme for building an RRAM-friendly yet efficient CNN.
- Soft Error Probability Estimation of Nano-scale Combinational Circuits, which introduces a framework for soft error probability (SEP) analysis that holistically integrates process variation (PV) and aging effects.
- An ECC-based Fault Tolerance Approach for DNNs, which protects DNN weights with Error Correcting Codes so that the network functions correctly in the presence of bit-flip faults (the third sketch below illustrates the basic ECC mechanism).
- Harnessing the Full Potential of RRAMs through Scalable and Distributed In-Memory Computing with Integrated Error Correction, which introduces a full-stack, distributed framework for energy-efficient in-memory computing.
- Computing-In-Memory Dataflow for Minimal Buffer Traffic, which introduces a CIM dataflow that reduces buffer traffic by maximizing data reuse and improving memory utilization during depthwise convolution.
- Mini-Batch Robustness Verification of Deep Neural Networks, which proposes a new approach to local robustness verification (the final sketch below shows the generic bound-propagation primitive such verifiers build on).
- Row-Column Hybrid Grouping for Fault-Resilient Multi-Bit Weight Representation on IMC Arrays, which proposes a multi-bit weight representation technique and a compiler pipeline that reformulates the fault-aware weight decomposition problem as an Integer Linear Programming task.
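
To make the contractivity idea concrete, here is a minimal PyTorch sketch, not the paper's actual regularizer: it penalizes the positive part of the logarithmic 2-norm mu_2(W) = lambda_max((W + W^T)/2) of a neural ODE's weight matrix, a standard sufficient condition for contraction of the linear part of the dynamics. The `ODEFunc` class, `contractivity_penalty` function, and the tanh dynamics are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Illustrative right-hand side f(x) = tanh(W x + b) of a neural ODE."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.lin(x))

def contractivity_penalty(func: ODEFunc, margin: float = 0.0) -> torch.Tensor:
    # Logarithmic 2-norm of W: mu_2(W) = lambda_max((W + W^T) / 2).
    # mu_2(W) < 0 makes the linear dynamics dx/dt = W x contractive, so
    # penalizing its positive part promotes (but does not certify) contraction
    # of the full nonlinear flow. This surrogate is an assumption here, not
    # the paper's exact regularizer.
    W = func.lin.weight
    sym = 0.5 * (W + W.T)
    lam_max = torch.linalg.eigvalsh(sym)[-1]  # eigenvalues in ascending order
    return torch.relu(lam_max + margin)

# Usage: loss = task_loss + rho * contractivity_penalty(func, margin=0.1)
```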
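
The straight-through pattern behind noise-aware training on analog CIM hardware can be sketched in a few lines. The device model below (uniform quantization to discrete conductance levels plus multiplicative Gaussian programming noise) is a placeholder assumption, not the paper's extended noise model; the key idea is that the non-differentiable forward path is bypassed with an identity gradient.

```python
import torch

def analog_weight(w: torch.Tensor, levels: int = 16, sigma: float = 0.05) -> torch.Tensor:
    """Apply an assumed analog-device model to weights with a straight-through gradient."""
    # Forward model (non-differentiable): quantize to the device's discrete
    # conductance levels, then inject multiplicative programming noise.
    w_max = w.detach().abs().max().clamp(min=1e-8)
    half = levels // 2
    q = torch.round(w / w_max * half) / half * w_max
    noisy = q * (1.0 + sigma * torch.randn_like(q))
    # Straight-through estimator: the forward pass sees the noisy device
    # weights, but gradients flow to w as if the mapping were the identity.
    return w + (noisy - w).detach()

# Usage inside a layer's forward pass:
#   y = torch.nn.functional.linear(x, analog_weight(self.weight), self.bias)
```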
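
To illustrate the kind of protection an ECC-based approach provides, here is a self-contained Hamming(7,4) encoder/decoder that corrects any single bit-flip in a 4-bit weight slice. The (7,4) code and the nibble-level granularity are illustrative choices; the paper's code construction and its integration into the DNN pipeline may differ.

```python
def hamming74_encode(nibble: int) -> int:
    """Encode a 4-bit value (0..15) into a 7-bit Hamming(7,4) codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]      # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                        # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                        # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                        # parity over positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]    # codeword positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(code: int) -> int:
    """Correct up to one bit-flip and recover the 4-bit value."""
    bits = [(code >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 + 2 * s2 + 4 * s3                # 1-based position of the flipped bit
    if syndrome:
        bits[syndrome - 1] ^= 1
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

# Single-fault check: every 1-bit flip of every codeword decodes correctly.
for v in range(16):
    c = hamming74_encode(v)
    assert all(hamming74_decode(c ^ (1 << i)) == v for i in range(7))
```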
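
Finally, local robustness verifiers of the kind the mini-batch paper targets typically build on bound propagation. The sketch below shows generic interval bound propagation (IBP) through one affine layer; this is a textbook primitive, assumed here for illustration, not the paper's mini-batch algorithm.

```python
import torch

def ibp_affine(lower: torch.Tensor, upper: torch.Tensor,
               W: torch.Tensor, b: torch.Tensor):
    """Propagate an axis-aligned input box [lower, upper] through y = x W^T + b."""
    center = 0.5 * (upper + lower)
    radius = 0.5 * (upper - lower)
    new_center = center @ W.T + b
    new_radius = radius @ W.abs().T   # |W| maps the box radius soundly
    return new_center - new_radius, new_center + new_radius

# Usage: to check robustness within an L-infinity ball of radius eps around x,
# start from (x - eps, x + eps), chain ibp_affine with monotone activation
# bounds (e.g., ReLU maps (l, u) to (relu(l), relu(u))), and verify that the
# true class's lower logit bound exceeds every other class's upper bound.
```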