Advances in Uncertainty Quantification and Explainability for Large Language Models

The field of large language models (LLMs) is advancing rapidly, with a growing focus on uncertainty quantification and explainability. Recent research has explored various methods for evaluating and improving the reliability of LLMs, including the integration of uncertainty quantification methods into argumentative LLMs and the development of novel frameworks for uncertainty quantification in generative video models. There is also increasing interest in using counterfactual explanations to provide insights into model behavior and decision-making processes. Noteworthy papers in this area include 'Uncertainty as Feature Gaps: Epistemic Uncertainty Quantification of LLMs in Contextual Question-Answering', which proposes a theoretically grounded approach to quantifying epistemic uncertainty, and 'How Confident are Video Models? Empowering Video Models to Express their Uncertainty', which presents a framework for uncertainty quantification of generative video models. Overall, the field is moving toward more robust and reliable LLMs that produce accurate and transparent results.
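
To make the idea of sample-based uncertainty quantification concrete, the sketch below shows one common, generic recipe: sample several answers to the same prompt and treat disagreement among them as a proxy for uncertainty. This is an illustrative assumption on our part, not the method of any paper listed below; the `answer_entropy` function and the `sampled_answers` data are hypothetical.

```python
"""
Minimal sketch of sample-based uncertainty quantification for an LLM answer,
assuming you can draw several independent samples for the same prompt.
Higher normalized entropy over distinct answers is read as higher uncertainty.
"""
import math
from collections import Counter


def answer_entropy(answers: list[str]) -> float:
    """Normalized Shannon entropy over distinct (case/whitespace-normalized) answers."""
    normalized = [a.strip().lower() for a in answers]
    counts = Counter(normalized)
    n = len(normalized)
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    # Divide by log(n) so the score lies in [0, 1]; 0 means all samples agree.
    return entropy / math.log(n) if n > 1 else 0.0


# Hypothetical samples drawn for one contextual question-answering prompt.
sampled_answers = ["Paris", "Paris", "paris", "Lyon", "Paris"]
print(f"Uncertainty score: {answer_entropy(sampled_answers):.2f}")
```

In practice, published methods refine this basic recipe, for example by clustering semantically equivalent answers rather than relying on string matching, or by probing internal representations instead of sampling outputs.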

Sources

Evaluating Uncertainty Quantification Methods in Argumentative Large Language Models

How Confident are Video Models? Empowering Video Models to Express their Uncertainty

Uncertainty as Feature Gaps: Epistemic Uncertainty Quantification of LLMs in Contextual Question-Answering

Enhancing XAI Narratives through Multi-Narrative Refinement and Knowledge Distillation

From Facts to Foils: Designing and Evaluating Counterfactual Explanations for Smart Environments

Decision Potential Surface: A Theoretical and Practical Approximation of LLM's Decision Boundary

The Argument is the Explanation: Structured Argumentation for Trust in Agents

Annotate Rhetorical Relations with INCEpTION: A Comparison with Automatic Approaches

Can Linear Probes Measure LLM Uncertainty?

Does Using Counterfactual Help LLMs Explain Textual Importance in Classification?

Enhancing Fake News Video Detection via LLM-Driven Creative Process Simulation

LLM Based Bayesian Optimization for Prompt Search

On the Role of Unobserved Sequences on Sample-based Uncertainty Quantification for LLMs

Synthesising Counterfactual Explanations via Label-Conditional Gaussian Mixture Variational Autoencoders

Latent Uncertainty Representations for Video-based Driver Action and Intention Recognition

Uncertainty Quantification In Surface Landmines and UXO Classification Using MC Dropout

Reproducibility Study of "XRec: Large Language Models for Explainable Recommendation"

Utilizing Large Language Models for Machine Learning Explainability
