The field of neural networks and logic is shifting toward integrating logical operations directly into neural architectures, yielding models that are both more expressive and more interpretable. This trend is evident in recurrent deep differentiable logic gate networks, which combine Boolean operations with recurrent architectures for sequence-to-sequence learning. Graph neural networks are likewise being re-examined through the lens of logical languages, sharpening our understanding of their expressive power. Differentiable inductive logic programming techniques are also gaining traction, enabling the discovery of approximate rule-based solutions to complex problems, and neural logic networks are being developed as interpretable classifiers from which logical rules and mechanisms can be extracted. Noteworthy papers in this area include:
- Recurrent Deep Differentiable Logic Gate Networks, which achieves competitive performance on sequence-to-sequence learning tasks.
- Aggregate-Combine-Readout GNNs Are More Expressive Than Logic C2, which resolves a long-standing open problem in the field of graph neural networks.
- GLIDR: Graph-Like Inductive Logic Programming with Differentiable Reasoning, which introduces a differentiable rule-learning method supporting logic rules with a more expressive syntax.
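The common ingredient across these works is a continuous relaxation of Boolean operations, so that gate choices become trainable by gradient descent. The sketch below illustrates the general idea only and is not taken from any of the papers above; the function names, the four-operation candidate set, and the softmax-mixture gate are all illustrative assumptions.

```python
import numpy as np

# Probabilistic relaxations of Boolean gates: for inputs in [0, 1] these
# match the Boolean truth tables exactly at {0, 1} yet are differentiable,
# so gradients can flow through a network built from them.
def soft_and(a, b): return a * b
def soft_or(a, b):  return a + b - a * b
def soft_not(a):    return 1.0 - a
def soft_xor(a, b): return a + b - 2.0 * a * b

def soft_gate(a, b, logits):
    """An illustrative learnable gate (not a specific paper's design):
    a softmax over trainable logits mixes candidate soft operations.
    Training typically sharpens the distribution until the gate commits
    to a single discrete Boolean operation."""
    candidates = np.array([soft_and(a, b), soft_or(a, b),
                           soft_xor(a, b), soft_not(a)])
    w = np.exp(logits - logits.max())   # numerically stable softmax
    w = w / w.sum()
    return float(w @ candidates)

# At crisp {0, 1} inputs the relaxations reproduce Boolean semantics:
assert soft_and(1.0, 1.0) == 1.0 and soft_and(1.0, 0.0) == 0.0
assert soft_xor(1.0, 1.0) == 0.0 and soft_or(0.0, 1.0) == 1.0

# Logits strongly favoring AND make the gate behave like soft_and:
out = soft_gate(1.0, 0.0, np.array([10.0, -10.0, -10.0, -10.0]))
```

After training, such gates can be discretized by taking the argmax over the logits, which is what makes the learned circuit readable as an explicit Boolean program.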