Neurosymbolic AI is advancing rapidly, with current work focused on integrating learning and reasoning so that systems combine the scalability of large-scale learning with robust, verifiable reasoning. Recent developments include novel frameworks and methods for improving the scalability and expressiveness of neurosymbolic models, such as parametrized grounding methods and the integration of logical rules with large language models. There is also growing interest in model-grounded symbolic AI systems, in which natural language serves as the symbolic layer and grounding is achieved through the model's internal representation space.

Noteworthy papers in this area include ChainEdit, which propagates the ripple effects of LLM knowledge edits through logical rule-guided chains, and KELPS, a neurosymbolic framework for verified multi-language autoformalization via semantic-syntactic alignment. Disentangling Neural Disjunctive Normal Form Models presents a disentanglement method that improves the performance of neural DNF-based models, while Comprehension Without Competence offers a structural diagnosis of the limitations of large language models in symbolic computation and reasoning.
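To make the rule-guided "ripple effect" idea concrete, here is a minimal sketch of the underlying mechanism, not ChainEdit's actual implementation: after an edited fact is inserted, Horn-style rules are forward-chained to compute the closure of facts that the edit logically entails and that must therefore be updated consistently. The rule set and the counterfactual edit below are hypothetical illustrations.

```python
def propagate(edited_facts, rules):
    """Forward-chain Horn rules from an edit to its entailed facts.

    rules: list of (premises, conclusion) pairs, where premises is a
    frozenset of fact triples and conclusion is a single fact triple.
    Returns the set of all facts entailed by the edit (its closure).
    """
    facts = set(edited_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # fire the rule if all premises hold and the conclusion is new
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rule chain: a capital is located in its country,
# and a city located in France is on the European continent.
rules = [
    (frozenset({("capital_of", "Lyon", "France")}),
     ("located_in", "Lyon", "France")),
    (frozenset({("located_in", "Lyon", "France")}),
     ("continent_of", "Lyon", "Europe")),
]

# A counterfactual knowledge edit, as used in editing benchmarks.
edit = {("capital_of", "Lyon", "France")}
closure = propagate(edit, rules)
```

Here `closure` contains the edit plus the two derived facts, which a knowledge editor would then need to write back into the model alongside the original edit.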
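The neural DNF models mentioned above relax Boolean disjunctive normal form into differentiable operations so that rule-like structure can be learned by gradient descent. The following is a simplified sketch of that idea using a product t-norm for conjunction and a probabilistic sum for disjunction; the `SoftDNF` class and its fixed masks are illustrative assumptions, not the paper's architecture, which learns the conjunct structure.

```python
import numpy as np

def soft_and(probs):
    # Differentiable conjunction: product t-norm over atom truth values.
    return np.prod(probs, axis=-1)

def soft_or(probs):
    # Differentiable disjunction: probabilistic sum (noisy-OR).
    return 1.0 - np.prod(1.0 - probs, axis=-1)

class SoftDNF:
    """Toy DNF layer: each conjunct selects a subset of input atoms via
    a fixed binary mask; the output is the soft disjunction of the
    soft conjunctions. A learned variant would parametrize the masks."""

    def __init__(self, masks):
        self.masks = np.asarray(masks, dtype=bool)  # (n_conjuncts, n_atoms)

    def __call__(self, x):
        x = np.asarray(x, dtype=float)  # atom truth values in [0, 1]
        conjuncts = np.array([soft_and(x[m]) for m in self.masks])
        return soft_or(conjuncts)

# Formula (a AND b) OR c, with atoms ordered [a, b, c].
dnf = SoftDNF([[1, 1, 0], [0, 0, 1]])
print(dnf([1.0, 1.0, 0.0]))  # → 1.0 (first conjunct satisfied)
print(dnf([1.0, 0.0, 0.0]))  # → 0.0 (no conjunct satisfied)
```

On crisp 0/1 inputs this reduces exactly to Boolean DNF evaluation, while on soft truth values it stays differentiable, which is what makes the DNF structure trainable end to end.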