Explainability and User Modeling in Recommender Systems

Introduction

Recommender systems research is moving toward greater explainability and richer user modeling. Recent work leverages large language models (LLMs) to generate natural language summaries of users' interaction histories, enabling recommendations that are both more transparent and more accurate.

General Direction

The field is shifting towards more nuanced and dynamic user profiling methods, recognizing that user interests and preferences evolve over time. Self-supervised learning methods, such as Barlow Twins, are being adapted to user sequence modeling to reduce the reliance on extensive negative sampling and to enable effective representation learning even with limited labeled data.
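To make the Barlow Twins adaptation concrete, the following is a minimal sketch of the loss as it might apply to two augmented views of the same batch of user sequences (e.g., produced by item masking or cropping). The augmentation strategy, encoder, and the `lambda_offdiag` value are assumptions for illustration, not the paper's exact setup.

```python
import torch


def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                      lambda_offdiag: float = 5e-3) -> torch.Tensor:
    """Barlow Twins objective over two views of the same user sequences.

    z_a, z_b: (batch_size, dim) embeddings from an encoder applied to two
    augmented views of each user's interaction sequence.
    """
    n, _ = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) + 1e-6)
    z_b = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) + 1e-6)
    # Cross-correlation matrix between the two views (dim x dim).
    c = (z_a.T @ z_b) / n
    # Invariance term: push the diagonal toward 1 so the views agree.
    on_diag = (torch.diagonal(c) - 1.0).pow(2).sum()
    # Redundancy-reduction term: push off-diagonal entries toward 0,
    # decorrelating embedding dimensions with no negative sampling at all.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag
```

Because the loss only compares two views of the same users, it sidesteps the large negative-sample batches that contrastive objectives typically require.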

Noteworthy Papers

  • One paper proposes a framework that uses LLMs to summarize users' interaction histories in natural language, distinguishing recent behaviors from more persistent tendencies. The resulting textual profiles both improve recommendation accuracy and serve as interpretable explanations (a prompting sketch follows this list).
  • Another paper adapts Barlow Twins to user sequence modeling, reporting an 8%-20% accuracy improvement across three downstream tasks and showing that the objective extracts valuable sequence-level information for user modeling.
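The sketch below illustrates the kind of prompting the first bullet describes: the interaction history is split into long-term and recent windows, and an LLM is asked for a profile that separates persistent tastes from recent shifts. The prompt wording, the window split, and the `llm` callable are illustrative assumptions, not the paper's exact method.

```python
from typing import Callable, List, Tuple

# (timestamp, item title) pairs, ordered from oldest to newest.
Interaction = Tuple[str, str]


def build_profile_prompt(history: List[Interaction], recent_k: int = 10) -> str:
    """Format a user's history into a profile-summarization prompt."""
    long_term, recent = history[:-recent_k], history[-recent_k:]

    def _fmt(items: List[Interaction]) -> str:
        return "\n".join(f"- {ts}: {title}" for ts, title in items)

    return (
        "Summarize this user's preferences as a short textual profile.\n"
        "Distinguish persistent, long-term tastes from recent shifts, and\n"
        "phrase the summary so it can be shown to the user as an explanation.\n\n"
        f"Long-term history:\n{_fmt(long_term)}\n\n"
        f"Recent history:\n{_fmt(recent)}\n"
    )


def summarize_user(history: List[Interaction],
                   llm: Callable[[str], str]) -> str:
    """`llm` is any text-in, text-out function, e.g., a chat-API wrapper."""
    return llm(build_profile_prompt(history))
```

Keeping the LLM behind a plain callable keeps the sketch library-agnostic; the same profile text can then be fed to a downstream ranker or shown directly to the user as an explanation.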

Sources

Towards Explainable Temporal User Profiling with LLMs

Enhancing User Sequence Modeling through Barlow Twins-based Self-Supervised Learning

Multi-agents based User Values Mining for Recommendation

Tell Me the Good Stuff: User Preferences in Movie Recommendation Explanations

With Friends Like These, Who Needs Explanations? Evaluating User Understanding of Group Recommendations

The Pitfalls of Growing Group Complexity: LLMs and Social Choice-Based Aggregation for Group Recommendations
