Large Language Models in Recommender Systems

The field of recommender systems is undergoing a significant shift with the integration of large language models (LLMs). Recent research leverages LLMs to address long-standing challenges such as interaction sparsity and cold-start problems, and to improve recommendation accuracy. LLMs have enabled novel frameworks that densify knowledge graphs, model user roles and social behaviors, and generate synthetic data for intent recognition. They have also been used to improve click-through rate (CTR) prediction, enable thinking-based recommendation, and support deep exploration of the item space. Together, these advances point toward more accurate, personalized, and interpretable recommendations.

Noteworthy papers in this area include:

  • LLM-based Intent Knowledge Graph Recommender, which proposes a novel framework for constructing and densifying knowledge graphs to address sparsity issues.
  • TagCF, which introduces a user role identification task and a behavioral logic modeling task to explicitly model user roles and learn logical relations between item topics and user social roles.
  • LLaCTR, a lightweight LLM-enhanced method for CTR prediction that employs a field-level enhancement paradigm to improve efficiency and effectiveness.
  • ThinkRec, a thinking-based framework that shifts LLM4Rec from fast, intuitive System 1 responses toward deliberate System 2 reasoning, enabling more rational and interpretable recommendations.
  • DeepRec, a novel LLM-based recommender system that enables autonomous multi-turn interactions between LLMs and traditional recommendation models (TRMs) for deep exploration of the item space (a hedged sketch of this loop appears after the list).
  • LARES, a latent reasoning framework for sequential recommendation that enhances model representation capabilities through depth-recurrent latent reasoning (see the second sketch after this list).
  • R^2ec, a unified large recommender model with intrinsic reasoning capabilities, facilitating interleaved reasoning and recommendation in the autoregressive process.
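
To make the DeepRec-style interaction pattern concrete, here is a minimal sketch of an autonomous multi-turn loop between an LLM and a traditional recommendation model (TRM). The function names (llm_refine_query, trm_retrieve, deep_explore) and the toy token-overlap retriever are illustrative assumptions, not the paper's actual API; a real system would replace the stubs with calls to an LLM and a trained retrieval model.

```python
# Hedged sketch: an autonomous multi-turn loop in which an LLM refines retrieval
# queries and a traditional recommendation model (TRM) returns candidate items.
# All names here are hypothetical stand-ins, not the DeepRec authors' API.
from typing import List


def llm_refine_query(user_profile: str, seen_items: List[str]) -> str:
    """Stand-in for an LLM call that turns the user profile plus items seen so far
    into a sharper retrieval query. A real system would prompt an actual LLM here."""
    return f"{user_profile} | avoid: {', '.join(seen_items) or 'none'}"


def trm_retrieve(query: str, catalog: List[str], k: int = 2) -> List[str]:
    """Stand-in for a TRM retriever: rank catalog items against the query.
    Naive token overlap is used purely so the sketch runs end to end."""
    q_tokens = set(query.lower().split())
    scored = sorted(catalog, key=lambda it: -len(q_tokens & set(it.lower().split())))
    return scored[:k]


def deep_explore(user_profile: str, catalog: List[str], turns: int = 3) -> List[str]:
    """Multi-turn exploration: each turn the LLM re-queries, the TRM retrieves,
    and newly surfaced items feed the next turn's query."""
    seen: List[str] = []
    for _ in range(turns):
        query = llm_refine_query(user_profile, seen)
        for item in trm_retrieve(query, [c for c in catalog if c not in seen]):
            seen.append(item)
    return seen


if __name__ == "__main__":
    catalog = ["sci fi space opera novel", "cozy mystery novel",
               "space documentary", "fantasy epic novel", "hard sci fi anthology"]
    print(deep_explore("reader who likes sci fi novel recommendations", catalog))
```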

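The LARES entry describes depth-recurrent latent reasoning, i.e., refining a latent sequence representation by repeatedly applying shared parameters before prediction. Below is a minimal PyTorch sketch of that idea; the class name, layer choices, and dimensions are assumptions for illustration and do not reproduce the published architecture.

```python
# Hedged sketch of depth-recurrent latent reasoning for sequential recommendation:
# a single shared transformer layer is re-applied for several "reasoning" steps
# over the sequence representation before scoring next-item logits.
import torch
import torch.nn as nn


class DepthRecurrentSeqRec(nn.Module):
    def __init__(self, num_items: int, d_model: int = 64, reasoning_steps: int = 4):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, d_model)
        # One shared layer reused across depth steps (parameter-efficient recurrence).
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.reasoning_steps = reasoning_steps
        self.out = nn.Linear(d_model, num_items)

    def forward(self, item_seq: torch.Tensor) -> torch.Tensor:
        # item_seq: (batch, seq_len) of item ids
        h = self.item_emb(item_seq)
        # Depth recurrence: refine the latent state by re-applying the same layer.
        for _ in range(self.reasoning_steps):
            h = self.shared_layer(h)
        # Score next-item logits from the final position's latent state.
        return self.out(h[:, -1, :])


if __name__ == "__main__":
    model = DepthRecurrentSeqRec(num_items=1000)
    logits = model(torch.randint(0, 1000, (8, 20)))
    print(logits.shape)  # torch.Size([8, 1000])
```
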
Sources

Explain What You Mean: Intent Augmented Knowledge Graph Recommender Built With LLM

Who You Are Matters: Bridging Topics and Social Roles via LLM-Enhanced Logical Recommendation

From Intent Discovery to Recognition with Topic Modeling and Synthetic Data

Field Matters: A lightweight LLM-enhanced Method for CTR Prediction

ThinkRec: Thinking-based recommendation via LLM

DeepRec: Towards a Deep Dive Into the Item Space with Large Language Model Based Recommendation

LARES: Latent Reasoning for Sequential Recommendation

R^2ec: Towards Large Recommender Models with Reasoning
