Explainable Recommendation Systems with Large Language Models

The field of recommender systems is increasingly incorporating large language models (LLMs) to improve both explainability and recommendation quality. Recent work leverages LLMs to capture temporal dynamics in user preferences, to build richer user representations, and to provide intrinsically interpretable recommendations. Another notable direction is generative recommendation, which reframes recommendation as a generation task rather than discriminative scoring over a fixed candidate set. These models show promise in bringing world knowledge, natural-language understanding, and reasoning capabilities into recommendation pipelines.

Noteworthy papers include HADSF, which introduces a two-stage approach for aspect-aware semantic control, and A Survey on Generative Recommendation, which offers a comprehensive examination of generative models for recommendation. Pairwise and Attribute-Aware Decision Tree-Based Preference Elicitation and LookSync also make significant contributions, to cold-start recommendation and visual product search respectively.
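The contrast between discriminative scoring and generative recommendation can be sketched with toy code. Nothing below comes from the surveyed papers; the functions (`discriminative_recommend`, `generative_recommend`, `toy_llm_generate`) and the scoring rule are hypothetical stand-ins, with a stub in place of a real LLM:

```python
def discriminative_recommend(user, candidates, score_fn, k=2):
    """Classic paradigm: score every candidate item, return the top-k."""
    ranked = sorted(candidates, key=lambda item: score_fn(user, item), reverse=True)
    return ranked[:k]

def generative_recommend(user, history, generate_fn, k=2):
    """Generative paradigm: the model produces item identifiers directly,
    conditioned on the user's history, rather than scoring a fixed list."""
    prompt = f"User {user} recently interacted with: {', '.join(history)}. Recommend:"
    return generate_fn(prompt, k)

# Toy stand-ins for illustration only.
def dot_score(user, item):
    # Pretend embedding similarity: count of shared characters.
    return len(set(user) & set(item))

def toy_llm_generate(prompt, k):
    # A real system would decode item IDs or titles from an LLM here.
    catalog = ["sci-fi novel", "fantasy novel", "cookbook"]
    return catalog[:k]

print(discriminative_recommend("alice", ["apple", "laptop", "cello"], dot_score))
print(generative_recommend("alice", ["sci-fi novel"], toy_llm_generate))
```

The key structural difference is that the discriminative path needs an explicit candidate list to rank, while the generative path emits recommendations open-endedly, which is what lets LLM-based recommenders draw on world knowledge beyond the scored catalog.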

Sources

HADSF: Aspect Aware Semantic Control for Explainable Recommendation

A Survey on Generative Recommendation: Data, Model, and Tasks

Pairwise and Attribute-Aware Decision Tree-Based Preference Elicitation for Cold-Start Recommendation

LookSync: Large-Scale Visual Product Search System for AI-Generated Fashion Looks

Effectiveness of LLMs in Temporal User Profiling for Recommendation

Listwise Preference Diffusion Optimization for User Behavior Trajectories Prediction

Solving cold start in news recommendations: a RippleNet-based system for large scale media outlet

E-CARE: An Efficient LLM-based Commonsense-Augmented Framework for E-Commerce

LLM-as-a-Judge: Toward World Models for Slate Recommendation Systems
