Advances in Large Language Models and Graph Reasoning

Research in natural language processing is increasingly focused on the reliability of large language models (LLMs) and on graph reasoning techniques. Recent work emphasizes confidence estimation, mitigation of model collapse under recursive training, and explainable methods for temporal knowledge graph forecasting. Noteworthy papers include ForTIFAI, which proposes a confidence-aware loss function to mitigate model collapse; GrACE, which introduces a generative approach to confidence elicitation in LLMs; and Zero-shot Graph Reasoning via Retrieval Augmented Framework with LLMs, which demonstrates a training-free, retrieval-augmented method for graph reasoning tasks. Together, these approaches aim to make LLM-based systems more reliable and effective.
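None of the papers' specific methods are reproduced here, but as background on confidence estimation: a common length-normalised baseline scores a generated answer by the geometric mean of its token probabilities, and elicitation approaches are typically compared against baselines of this kind. A minimal sketch (the log-probability values are purely illustrative):

```python
import math

def sequence_confidence(token_logprobs):
    """Aggregate per-token log-probabilities into one confidence
    score in (0, 1] via the geometric mean of token probabilities,
    i.e. exp of the mean log-probability (length-normalised)."""
    if not token_logprobs:
        raise ValueError("empty sequence")
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Hypothetical per-token log-probs for a short generated answer.
logprobs = [-0.05, -0.30, -0.10, -0.70]
print(round(sequence_confidence(logprobs), 3))  # → 0.75
```

Length normalisation matters because a raw product of token probabilities penalises longer answers regardless of their quality.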

Sources

ForTIFAI: Fending Off Recursive Training Induced Failure for AI Models

Constructing a Question-Answering Simulator through the Distillation of LLMs

Hierarchical Bracketing Encodings Work for Dependency Graphs

GrACE: A Generative Approach to Better Confidence Elicitation in Large Language Models

CountTRuCoLa: Rule Confidence Learning for Temporal Knowledge Graph Forecasting

Compartmentalised Agentic Reasoning for Clinical NLI

Selective Risk Certification for LLM Outputs via Information-Lift Statistics: PAC-Bayes, Robustness, and Skeleton Design

Zero-shot Graph Reasoning via Retrieval Augmented Framework with LLMs

The LLM Already Knows: Estimating LLM-Perceived Question Difficulty via Hidden Representations

All Roads Lead to Rome: Graph-Based Confidence Estimation for Large Language Model Reasoning

SINAI at eRisk@CLEF 2023: Approaching Early Detection of Gambling with Natural Language Processing

SINAI at eRisk@CLEF 2022: Approaching Early Detection of Gambling and Eating Disorders with Natural Language Processing
