Advancements in Neurosymbolic AI and Logical Reasoning

The field of neurosymbolic AI is advancing rapidly, with a focus on integrating learning and reasoning so as to combine the strengths of large-scale learning with robust, verifiable reasoning. Recent work proposes novel frameworks and methods for improving the scalability and expressiveness of neurosymbolic models, such as parametrized grounding methods and the integration of logical rules with large language models. There is also growing interest in model-grounded symbolic AI systems, in which natural language serves as the symbolic layer and grounding is achieved through the model's internal representation space.

Noteworthy papers in this area include ChainEdit, which propagates ripple effects in LLM knowledge editing through logical rule-guided chains, and KELPS, a neurosymbolic framework for verified multi-language autoformalization via semantic-syntactic alignment. In addition, Disentangling Neural Disjunctive Normal Form Models presents a disentanglement method that improves the performance of neural DNF-based models, while Comprehension Without Competence offers a structural diagnosis of the limits of large language models in symbolic computation and reasoning.
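To make the ripple-effect idea concrete, here is a minimal toy sketch of rule-guided propagation after a knowledge edit: base facts are re-chased through Horn-style rules to a fixpoint, so derived facts stay consistent with the edit. The fact encoding, rule format, and function names are illustrative assumptions, not ChainEdit's actual API.

```python
def chase(facts, rules):
    """Forward-chain rules over a set of fact tuples until a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for fact in rule(facts):
                if fact not in facts:
                    facts.add(fact)
                    changed = True
    return facts

def grandparent_rule(facts):
    # parent(x, y) and parent(y, z) -> grandparent(x, z)
    parents = [(a, b) for (rel, a, b) in facts if rel == "parent"]
    return {("grandparent", a, c)
            for (a, b) in parents for (b2, c) in parents if b == b2}

def edit(facts, old_fact, new_fact, rules):
    """Replace a base fact, drop stale derived facts, and re-derive.

    Keeping only base ("parent") facts before re-chasing is what propagates
    the ripple effect of the edit to everything the rules derive from it.
    """
    base = {f for f in facts if f[0] == "parent"}
    base.discard(old_fact)
    base.add(new_fact)
    return chase(base, rules)
```

For example, editing `parent(bob, cara)` to `parent(bob, dan)` automatically retracts the derived fact `grandparent(ann, cara)` and derives `grandparent(ann, dan)` in its place.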
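Neural DNF models can likewise be illustrated with a small sketch. One common semi-symbolic formulation uses tanh units over bipolar inputs in {-1, +1}, with a bias that pushes each unit toward conjunctive or disjunctive behavior; the hand-set weights below are an assumption for illustration, not the trained weights any paper reports.

```python
import numpy as np

def semi_symbolic(x, w, delta):
    """Semi-symbolic neuron over bipolar inputs x in {-1, +1}.

    delta = +1 biases the unit toward a conjunction, delta = -1 toward a
    disjunction; with large weight magnitudes the tanh saturates to +/-1
    and the unit behaves like the corresponding logical gate.
    """
    beta = delta * (np.max(np.abs(w)) - np.sum(np.abs(w)))
    return np.tanh(x @ w + beta)

def neural_dnf(x, conj_weights, disj_weights):
    """Two-layer neural DNF: a disjunction over conjunctive units."""
    conj_out = np.array([semi_symbolic(x, w, +1.0) for w in conj_weights])
    return semi_symbolic(conj_out, disj_weights, -1.0)

# Encode (a AND b) OR c over x = [a, b, c]; a zero weight drops a literal.
CONJ = [np.array([6.0, 6.0, 0.0]),   # a AND b
        np.array([0.0, 0.0, 6.0])]   # c
DISJ = np.array([6.0, 6.0])
```

After training a model of this shape, the saturated weights can be thresholded back into an explicit DNF formula, which is what makes the learned behavior inspectable as rules.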

Sources

Grounding Methods for Neural-Symbolic AI

ChainEdit: Propagating Ripple Effects in LLM Knowledge Editing through Logical Rule-Guided Chains

KELPS: A Framework for Verified Multi-Language Autoformalization via Semantic-Syntactic Alignment

Justification Logic for Intuitionistic Modal Logic (Extended Technical Report)

Sound and Complete Neuro-symbolic Reasoning with LLM-Grounded Interpretations

Learning and Reasoning with Model-Grounded Symbolic Artificial Intelligence Systems

Disentangling Neural Disjunctive Normal Form Models

Comprehension Without Competence: Architectural Limits of LLMs in Symbolic Computation and Reasoning

A Decision Procedure for Probabilistic Kleene Algebra with Angelic Nondeterminism

Defining neurosymbolic AI

Cancellative Convex Semilattices

FMC: Formalization of Natural Language Mathematical Competition Problems

Neurosymbolic Reasoning Shortcuts under the Independence Assumption

Monotone weak distributive laws over the lifted powerset monad in categories of algebras
