Causal Inference and Reasoning in AI Models

The field of artificial intelligence is moving toward a deeper understanding of causal relationships and reasoning. Recent work has focused on causal identification, over-memorization when finetuning large language models, and hallucinations in multimodal models. Researchers are exploring new frameworks and techniques, including causal analyses and reinforcement learning, to improve the robustness and generalization of AI models. Notable papers in this area include "Unveiling Over-Memorization in Finetuning LLMs for Reasoning Tasks", which investigates the conditions that lead to over-memorization and offers recommendations for finetuning; "Hacking Hallucinations of MLLMs with Causal Sufficiency and Necessity", which proposes a reinforcement learning framework to mitigate hallucinations in multimodal models; and "Causal Reflection with Language Models", which introduces a framework for explicit causal modeling and reasoning in language models.
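As background for the causal-identification theme above: identification asks whether an interventional quantity can be rewritten purely in terms of observational distributions. A standard illustration (not taken from any of the listed papers) is the backdoor adjustment formula, where a set of covariates $Z$ blocks all confounding paths between treatment $X$ and outcome $Y$:

```latex
% Backdoor adjustment: if Z satisfies the backdoor criterion
% relative to (X, Y), the interventional distribution is identified as
P(Y \mid \mathrm{do}(X = x)) \;=\; \sum_{z} P(Y \mid X = x, Z = z)\, P(Z = z)
```

When such an expression exists, the causal effect can be estimated from observational data alone; tools for deriving such expressions automatically are what identification frameworks provide.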

Sources

Causal identification with $Y_0$

Unveiling Over-Memorization in Finetuning LLMs for Reasoning Tasks

Hacking Hallucinations of MLLMs with Causal Sufficiency and Necessity

Causal Reflection with Language Models
