The field of machine learning and neural networks is moving toward a more principled understanding of the mechanisms that govern model behavior, and researchers continue to develop methods that improve performance and efficiency, particularly in high-dimensional data settings. One key line of work focuses on distinguishing genuine structure from chance correlations in data, where several studies have made measurable progress. Another concerns data curation and feature learning, both of which are central to improving the accuracy and robustness of neural networks.

Several papers make notable contributions to our understanding of the storage capacity of perceptron models and of how data curation affects neural scaling. One study demonstrates that optimal variable selection can surpass the bounds established by Cover-Gardner theory; a brief reminder of that classical capacity result is sketched below. Another shows that static pruning induces a bounded operator on the data and therefore cannot alter asymptotic neural scaling, a point illustrated by the worked example at the end of this section. The paper on mitigating the curse of dimensionality further proposes a heuristic route for predicting the data and width scales at which different patterns of feature learning emerge. Together, these advances stand to improve the performance and efficiency of machine learning models and are likely to shape work in the field in the coming years.
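For readers unfamiliar with the Cover-Gardner result, the classical storage-capacity bound can be stated compactly. The following is a standard textbook sketch, not taken from the surveyed paper, and the notation ($N$, $P$, $\alpha_c$) is introduced here for illustration. Cover's counting theorem gives the number of dichotomies of $P$ points in general position in $\mathbb{R}^N$ that a perceptron can realize:

\[
C(P, N) = 2 \sum_{k=0}^{N-1} \binom{P-1}{k},
\]

so the fraction of realizable dichotomies is $C(P, N)/2^{P}$. In the limit $N \to \infty$ with $\alpha = P/N$ fixed, this fraction tends to $1$ for $\alpha < 2$ and to $0$ for $\alpha > 2$, giving the capacity $\alpha_c = 2$; Gardner's replica calculation recovers the same value for random, unbiased patterns at zero margin. Surpassing this bound therefore means storing more than $2N$ random patterns with $N$ weights.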
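The claim that static pruning cannot alter asymptotic neural scaling can be illustrated with a short back-of-the-envelope argument. The power-law form below is the commonly assumed shape of a neural scaling law and is used purely for illustration; it is not quoted from the paper. Suppose the test loss scales with dataset size $D$ as

\[
L(D) \approx L_{\infty} + c\, D^{-\alpha}.
\]

A static pruning rule that keeps a fixed fraction $\kappa \in (0, 1]$ of the data replaces $D$ with $\kappa D$, so that

\[
L(\kappa D) \approx L_{\infty} + c\, \kappa^{-\alpha} D^{-\alpha}.
\]

The exponent $\alpha$, which governs the asymptotic scaling, is unchanged; only the prefactor is rescaled by $\kappa^{-\alpha}$. In this picture, changing the exponent itself would require a selection rule whose effect grows with $D$, rather than a fixed (bounded) transformation of the data.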