Advances in Recommender Systems Optimization

The field of recommender systems is moving toward more efficient and effective optimization techniques. Recent work has focused on directly optimizing Top-K ranking metrics such as NDCG@K and on improving the accuracy of sequential recommendation. Researchers are exploring novel loss functions, modular architectural improvements, and ensemble sorting methods to address the challenges these metrics pose. Notably, unified monotonic transformations and attention-free token mixers have shown promising results in achieving fine-grained personalization and in accelerating training and inference.
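For context, NDCG@K rewards placing relevant items near the top of a length-K ranking, discounting each item's gain logarithmically by its rank. The NumPy sketch below computes the metric itself, which is what these losses approximate; the binary relevance labels in the example are an illustrative assumption, not drawn from any of the papers.

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """NDCG@K for a single ranked list.

    relevances: relevance of each item in model-ranked order
    (e.g. 1.0 for a held-out positive, 0.0 otherwise).
    """
    relevances = np.asarray(relevances, dtype=float)
    top_k = relevances[:k]
    # DCG@K: gain discounted by log2 of the 1-based rank plus one.
    dcg = np.sum(top_k / np.log2(np.arange(2, top_k.size + 2)))
    # IDCG@K: DCG of the ideal, relevance-sorted ordering.
    ideal = np.sort(relevances)[::-1][:k]
    idcg = np.sum(ideal / np.log2(np.arange(2, ideal.size + 2)))
    return dcg / idcg if idcg > 0 else 0.0

# Example: the single relevant item sits at rank 3 of the top-5 list.
print(ndcg_at_k([0, 0, 1, 0, 0], k=5))  # 1/log2(4) = 0.5
```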

Some noteworthy papers in this area include:

Breaking the Top-K Barrier proposes a recommendation loss tailored to NDCG@K optimization, reporting an average improvement of 6.03% across four real-world datasets.

eSASRec introduces a strong sequential model that combines SASRec's training objective, LiGR Transformer layers, and a Sampled Softmax Loss, reporting a 23% improvement over state-of-the-art models (a hedged sketch of a sampled-softmax loss appears below).

UMRE proposes a unified monotonic ranking-ensemble framework that replaces handcrafted score transformations with unconstrained monotonic neural networks, reporting strong accuracy and generalization across datasets (see the monotonic-transform sketch below).

FuXi-β introduces a framework applicable to Transformer-like recommendation models, reporting significant acceleration and NDCG@10 improvements on large-scale industrial datasets.
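As a rough illustration of the sampled-softmax objective mentioned for eSASRec, the sketch below computes a cross-entropy over one observed positive item and uniformly sampled negatives. The function name, tensor layout, and uniform sampling are assumptions for illustration, not the paper's implementation; production variants usually add a log-Q correction for the negative-sampling distribution, which is omitted here.

```python
import torch
import torch.nn.functional as F

def sampled_softmax_loss(user_emb, pos_item_emb, item_table, num_negatives=128):
    """Cross-entropy over the observed positive plus sampled negatives.

    user_emb:     (B, D) sequence representations from the encoder
    pos_item_emb: (B, D) embeddings of the observed next items
    item_table:   (N, D) full item embedding table (hypothetical layout)
    """
    batch_size = user_emb.size(0)
    neg_ids = torch.randint(0, item_table.size(0),
                            (batch_size, num_negatives), device=user_emb.device)
    neg_emb = item_table[neg_ids]                                # (B, K, D)
    pos_logit = (user_emb * pos_item_emb).sum(-1, keepdim=True)  # (B, 1)
    neg_logits = torch.einsum('bd,bkd->bk', user_emb, neg_emb)   # (B, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1)           # (B, 1 + K)
    # The positive item always occupies column 0 of the logits.
    labels = torch.zeros(batch_size, dtype=torch.long, device=user_emb.device)
    return F.cross_entropy(logits, labels)
```

For the monotonic transformations that UMRE's description mentions, one standard construction (an assumption here, not necessarily UMRE's) keeps every linear weight non-negative via softplus, so the learned scalar map is guaranteed non-decreasing. Summing per-ranker monotone transforms then fuses several base rankers while preserving each one's internal ordering.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMLP(nn.Module):
    """Scalar transform guaranteed to be non-decreasing in its input.

    Non-negative weights (enforced with softplus) composed with the
    monotone tanh activation keep the whole map monotone end to end.
    """
    def __init__(self, hidden=16):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(1, hidden))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(hidden, 1))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, score):  # score: (B, 1) raw score from one base ranker
        h = torch.tanh(score @ F.softplus(self.w1) + self.b1)
        return h @ F.softplus(self.w2) + self.b2

# Fuse three base rankers by summing their monotonically re-mapped scores.
transforms = nn.ModuleList(MonotonicMLP() for _ in range(3))
scores = torch.randn(8, 3)  # 8 candidates, each scored by 3 base rankers
fused = sum(t(scores[:, i:i + 1]) for i, t in enumerate(transforms))  # (8, 1)
```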

Sources

Breaking the Top-K Barrier: Advancing Top-K Ranking Metrics Optimization in Recommender Systems

eSASRec: Enhancing Transformer-based Recommendations in a Modular Fashion

UMRE: A Unified Monotonic Transformation for Ranking Ensemble in Recommender Systems

FuXi-β: Towards a Lightweight and Fast Large-Scale Generative Recommendation Model
