The field of automata learning and neuro-symbolic reasoning is moving towards more efficient and scalable methods for learning and reasoning about complex systems. Recent developments have focused on improving the performance of automata learning algorithms, including passive automata learning and active automata learning with stochastic delays. There is also growing interest in neuro-symbolic architectures that learn to solve discrete reasoning and optimization problems from natural inputs; such architectures have shown promising results in learning the constraints and objectives of NP-hard reasoning problems. Regular constraint propagation has likewise been explored for solving string constraints, with its effectiveness demonstrated in both theoretical and experimental evaluations. Together, these advances point to potential applications in a wide range of areas, including natural language processing, computer vision, and decision-making under uncertainty.

Noteworthy papers include one on passive automata learning of visibly deterministic context-free grammars, which presents a novel algorithm for learning such grammars from positive and negative samples, and one on efficient neuro-symbolic learning of constraints and objectives, which introduces a differentiable neuro-symbolic architecture and a dedicated loss function for learning how to solve NP-hard reasoning problems from natural inputs.
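To make the flavour of learning from positive and negative samples concrete, the following is a minimal Python sketch of classical RPNI-style state merging for plain DFAs: build a prefix tree acceptor from the labelled sample, then greedily merge states as long as no accepting state collides with a rejecting one. This is only an illustration of the general state-merging technique, not the algorithm from the paper on visibly deterministic context-free grammars, and the function names (`build_pta`, `try_merge`, `learn_dfa`) are our own.

```python
def build_pta(positives, negatives):
    """Prefix tree acceptor: one state per distinct prefix in the sample."""
    trans = {0: {}}            # state -> {symbol: successor state}
    label = {0: None}          # True = must accept, False = must reject, None = free
    def add(word, accept):
        s = 0
        for sym in word:
            if sym not in trans[s]:
                nxt = len(trans)
                trans[s][sym] = nxt
                trans[nxt], label[nxt] = {}, None
            s = trans[s][sym]
        label[s] = accept
    for w in positives:
        add(w, True)
    for w in negatives:
        add(w, False)
    return trans, label


def try_merge(trans, label, rep, p, q):
    """Merge the class of q into the class of p, cascading further merges so
    the quotient stays deterministic; return (rep, label) or None on conflict."""
    rep, label = dict(rep), dict(label)
    def find(x):
        while rep[x] != x:
            x = rep[x]
        return x
    out = {}                                   # outgoing edges of each current class
    for s in trans:
        d = out.setdefault(find(s), {})
        for sym, t in trans[s].items():
            d[sym] = find(t)
    stack = [(p, q)]
    while stack:
        x, y = stack.pop()
        x, y = find(x), find(y)
        if x == y:
            continue
        if label[x] is not None and label[y] is not None and label[x] != label[y]:
            return None                        # an accepting class meets a rejecting one
        if label[x] is None:
            label[x] = label[y]
        rep[y] = x
        for sym, t in out[y].items():
            if sym in out[x]:
                stack.append((out[x][sym], t)) # same symbol, two targets: merge them too
            else:
                out[x][sym] = t
    return rep, label


def learn_dfa(positives, negatives):
    """RPNI-style passive learning: greedily merge PTA states whenever the
    merged automaton stays consistent with the labelled sample."""
    trans, label = build_pta(positives, negatives)
    rep = {s: s for s in trans}
    def find(x):
        while rep[x] != x:
            x = rep[x]
        return x
    for q in sorted(trans):                    # PTA creation order ~ prefix order
        if q == 0 or find(q) != q:
            continue                           # skip the root and merged-away states
        for p in range(q):                     # prefer merging into earlier states
            if find(p) != p:
                continue
            merged = try_merge(trans, label, rep, p, q)
            if merged is not None:
                rep, label = merged
                break
    dfa, accepting = {}, set()                 # extract the quotient automaton
    for s in trans:
        r = find(s)
        dfa.setdefault(r, {}).update({sym: find(t) for sym, t in trans[s].items()})
        if label[r] is True:
            accepting.add(r)
    return dfa, find(0), accepting


# Toy usage: the result is consistent with the sample and generalizes beyond it.
dfa, start, accepting = learn_dfa(positives=["a", "ab"], negatives=["b"])
```

On this toy sample the learner collapses the prefix tree into a three-state machine that still rejects "b" but also accepts strings such as "abab" that never appeared in the sample.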
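Similarly, the automata-theoretic operation underlying regular constraints on strings can be sketched in a few lines: two membership constraints on the same string variable, each given as a DFA, are conjoined with a product construction, and reachability of an accepting product state decides whether the conjunction is satisfiable. This is a hedged illustration of that basic building block, not the propagation procedure from the cited work; the DFA encoding and function names are assumptions of the sketch.

```python
from collections import deque

# A DFA is (transitions, start, accepting), where transitions maps
# (state, symbol) -> state; missing edges mean rejection.

def product_dfa(d1, d2, alphabet):
    """Product construction: the intersection of two regular constraints."""
    (t1, s1, f1), (t2, s2, f2) = d1, d2
    trans, start = {}, (s1, s2)
    queue, seen = deque([start]), {start}
    while queue:
        p, q = queue.popleft()
        for a in alphabet:
            if (p, a) in t1 and (q, a) in t2:
                nxt = (t1[p, a], t2[q, a])
                trans[(p, q), a] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    accepting = {s for s in seen if s[0] in f1 and s[1] in f2}
    return trans, start, accepting

def satisfiable(d1, d2, alphabet):
    """A conjunction of two regular membership constraints is satisfiable
    iff the product automaton can reach an accepting state."""
    trans, start, accepting = product_dfa(d1, d2, alphabet)
    queue, seen = deque([start]), {start}
    while queue:
        s = queue.popleft()
        if s in accepting:
            return True
        for a in alphabet:
            nxt = trans.get((s, a))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Example: x must match (ab)* and x must contain at least one 'a'.
ab_star = ({(0, 'a'): 1, (1, 'b'): 0}, 0, {0})
has_a = ({(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1}, 0, {1})
print(satisfiable(ab_star, has_a, ['a', 'b']))   # True, e.g. x = "ab"
```

Propagation-style string solvers typically maintain such an automaton per string variable and repeatedly intersect and prune it as constraints are processed; the sketch above shows only the single conjunction-and-emptiness step.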