Causality Detection and Explanation in Large Language Models

The field of natural language processing is moving toward large language models (LLMs) that can detect and explain causal relationships. Recent work has focused on improving LLM performance on causality detection and extraction through retrieval-augmented generation and hierarchical-causal modification frameworks. These approaches show significant improvements over static prompting schemes, enabling LLMs to capture more complex causal relationships and generate more accurate explanations. In parallel, integrating structural causal models into LLMs has produced causal-aware LLMs that can learn, adapt, and act in complex environments. Notable papers in this area include HiCaM, which proposes a hierarchical-causal modification framework for long-form text modification and achieves significant improvements over strong LLM baselines, and causal-aware LLMs, which integrate structural causal models into the decision-making process to enable more efficient policy-making through reinforcement learning agents.
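To make the retrieval-augmented approach concrete, the sketch below shows a toy pipeline: a keyword-overlap retriever selects supporting passages, which are then assembled into a prompt asking an LLM to extract cause-effect pairs. The function names, the retriever, and the prompt wording are illustrative assumptions, not the method from the cited paper.

```python
# Minimal sketch of retrieval-augmented causality extraction.
# Helper names and prompt format are hypothetical, for illustration only.

def retrieve(query, corpus, k=2):
    """Rank corpus passages by word overlap with the query (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(query_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_causal_prompt(sentence, passages):
    """Assemble a prompt asking an LLM to list cause-effect pairs."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Context passages:\n" + context + "\n\n"
        f"Sentence: {sentence}\n"
        "List each cause -> effect pair stated in the sentence."
    )

corpus = [
    "Smoking damages lung tissue over time.",
    "Regular exercise improves cardiovascular health.",
    "Lung damage can lead to chronic respiratory disease.",
]
sentence = "Smoking causes lung damage, which leads to respiratory disease."
prompt = build_causal_prompt(sentence, retrieve(sentence, corpus))
print(prompt)
```

In a real system the prompt would be sent to an LLM; a dense retriever (e.g. embedding similarity) would typically replace the word-overlap heuristic.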

Sources

Retrieval Augmented Generation based Large Language Models for Causality Mining

HiCaM: A Hierarchical-Causal Modification Framework for Long-Form Text Modification

Causal-aware Large Language Models: Enhancing Decision-Making Through Learning, Adapting and Acting

Causal Explanations Over Time: Articulated Reasoning for Interactive Environments
