The field of artificial intelligence is seeing rapid progress at the intersection of neural reasoning and symbolic structure, with a focus on building models that are both more efficient and more capable. Researchers are exploring novel architectures and training techniques to improve neural networks' performance on complex tasks such as multi-step reasoning and problem solving. One key direction is the integration of symbolic and connectionist approaches, which lets models learn and represent abstract concepts and the relations between them. Another is the development of more interpretable and transparent models, allowing a deeper understanding of their decision-making processes. Noteworthy papers in this area include the Hierarchical Reasoning Model, which reports strong performance on complex reasoning tasks despite a comparatively small parameter count, and Why Neural Network Can Discover Symbolic Structures with Gradient-based Training, which provides a theoretical framework for how discrete symbolic structures can emerge from the continuous dynamics of gradient-based neural network training.
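The idea that discrete structure can emerge from continuous training can be seen in a deliberately small sketch (this is only an illustrative toy, not the construction from either paper): a single sigmoid unit trained by gradient descent on the Boolean AND function saturates until thresholding its continuous output reproduces the exact discrete truth table.

```python
import numpy as np

# Illustrative toy: continuous parameters of one sigmoid unit, trained by
# gradient descent, come to implement a discrete logic gate (Boolean AND).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])  # AND truth table

w = rng.normal(size=2)
b = 0.0
lr = 1.0
for _ in range(2000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid output in (0, 1)
    grad_z = (p - y) / len(y)      # cross-entropy gradient w.r.t. z
    w -= lr * (X.T @ grad_z)
    b -= lr * grad_z.sum()

# Thresholding the continuous output recovers the discrete rule exactly.
discrete = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(discrete.tolist())  # [0, 0, 0, 1]
```

The point of the toy is the qualitative phenomenon: nothing in the loss or the update rule mentions discreteness, yet the trained continuous function is, for all practical purposes, a symbolic AND gate.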