Advances in Neurosymbolic Reasoning and Multimodal Systems

The field of artificial intelligence is seeing significant developments in neurosymbolic reasoning and multimodal systems. Researchers are working to improve the interpretability and scalability of these systems so they can reason more effectively and make more accurate predictions. One notable direction is the integration of semantic and symbolic refinement techniques to improve graph quality and reduce inference noise. Another is the investigation of the hidden internal language of multimodal systems and the development of frameworks for studying how these systems understand the world. Noteworthy papers include Spectral Neuro-Symbolic Reasoning II, which extends the Spectral NSR framework with modular, semantically grounded preprocessing steps; Concept-RuleNet, a multi-agent system that reinstates visual grounding in neurosymbolic reasoning while retaining transparent reasoning; and M-CALLM, a framework that leverages multi-level contextual information to predict group coordination patterns in collaborative mixed reality environments.

Sources

Spectral Neuro-Symbolic Reasoning II: Semantic Node Merging, Entailment Filtering, and Knowledge Graph Alignment

Saying the Unsaid: Revealing the Hidden Language of Multimodal Systems Through Telephone Games

From Fact to Judgment: Investigating the Impact of Task Framing on LLM Conviction in Dialogue Systems

Can You Tell the Difference? Contrastive Explanations for ABox Entailments

Concept-RuleNet: Grounded Multi-Agent Neurosymbolic Reasoning in Vision Language Models

M-CALLM: Multi-level Context Aware LLM Framework for Group Interaction Prediction

A Crowdsourced Study of ChatBot Influence in Value-Driven Decision Making Scenarios

Can MLLMs Read the Room? A Multimodal Benchmark for Assessing Deception in Multi-Party Social Interactions

Formal Abductive Latent Explanations for Prototype-Based Networks
