The field of neural networks is moving towards a deeper understanding of expressivity and approximation capabilities. Recent research has focused on ReLU neural networks, input convex neural networks (ICNNs), and their relationships with polyhedral geometry and triangulations. These advances have sharpened our picture of what such models can and cannot represent, including sharp lower bounds on depth complexity and depth separations between ReLU networks and ICNNs. In parallel, new approaches have been proposed for constructing fractal interpolation functions using neural network operators, with smoothness preservation and a convergence analysis. Further results establish the existence of efficient universal ReLU neural networks and investigate novel architectures for deep morphological neural networks (DMNNs), showing their potential as universal approximators.

Noteworthy papers include:
- On the Depth of Monotone ReLU Neural Networks and ICNNs, which proves sharp lower bounds on the ICNN depth complexity of the maximum function.
- Neural Network Operator-Based Fractal Approximation, which presents a new approach for constructing alpha-fractal interpolation functions using neural network operators.
- Training Deep Morphological Neural Networks as Universal Approximators, which proposes novel architectures for DMNNs and demonstrates their potential as universal approximators.
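To make the ICNN and maximum-function discussion concrete, here is a minimal NumPy sketch of the standard ICNN recursion (output convex in the input when the hidden-to-hidden weights are non-negative and the activation is convex and non-decreasing), together with the well-known single-ReLU identity for the two-argument maximum. The function names, shapes, and weight layout are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def relu(x):
    # Convex, non-decreasing activation used throughout.
    return np.maximum(x, 0.0)

def icnn_forward(x, Wx, Wz, b):
    """Forward pass of a simple fully input convex neural network (ICNN).

    The output is convex in x provided every matrix in Wz is elementwise
    non-negative; the input skip weights Wx may have arbitrary sign.
    Wx has one matrix per layer, Wz one matrix per layer after the first.
    """
    z = relu(Wx[0] @ x + b[0])
    for k in range(1, len(Wx)):
        z = relu(Wz[k - 1] @ z + Wx[k] @ x + b[k])
    return z

# Illustrative usage with random (hypothetical) weights.
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
Wx = [rng.standard_normal((4, 3)), rng.standard_normal((1, 3))]
Wz = [np.clip(rng.standard_normal((1, 4)), 0.0, None)]  # non-negative => convexity
b = [rng.standard_normal(4), rng.standard_normal(1)]
print(icnn_forward(x, Wx, Wz, b))

# The two-argument maximum is exactly representable with a single ReLU:
# max(a, c) = c + relu(a - c). The cited depth lower bounds concern how
# deep a monotone ReLU network or ICNN must be to compute the maximum
# of many inputs, not this two-input case.
a, c = 1.7, -0.3
assert np.isclose(max(a, c), c + relu(a - c))
```

This is only a sketch under the stated assumptions; practical ICNN implementations typically enforce the non-negativity constraint during training (e.g., by clipping or reparameterizing the hidden-to-hidden weights).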