The field of recommender systems is increasingly incorporating large language models (LLMs) to improve both explainability and effectiveness. Recent research leverages LLMs to capture temporal dynamics in user preferences, generate richer user representations, and provide intrinsic interpretability. Another notable direction is generative recommendation, which reframes recommendation as a generation task rather than as discriminative scoring over a candidate set; such models show promise in bringing world knowledge, natural language understanding, and reasoning capabilities into recommender systems. Noteworthy papers include HADSF, which introduces a two-stage approach for aspect-aware semantic control, and A Survey on Generative Recommendation, which provides a comprehensive examination of generative models for recommendation. In addition, Pairwise and Attribute-Aware Decision Tree-Based Preference Elicitation and LookSync make significant contributions to cold-start recommendation and visual product search, respectively.
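
To make the generative-versus-discriminative distinction concrete, the sketch below contrasts the two paradigms in plain Python. It is a minimal illustration, not the method of any paper cited above: the toy catalogue, the overlap-based scoring function, and the hard-coded token sequence standing in for LLM decoding are all hypothetical. Real generative recommenders typically decode (semantic) item identifiers with a constrained LLM rather than ranking a precomputed candidate list.

```python
# Minimal sketch (toy data, hypothetical names; not from any cited paper).
# Discriminative scoring: rank a fixed candidate set by a relevance score.
# Generative recommendation: decode an item identifier token by token.

from typing import List

CATALOGUE = {"item_42": "running shoes", "item_7": "trail jacket", "item_19": "yoga mat"}

def discriminative_recommend(user_history: List[str], k: int = 2) -> List[str]:
    """Score every candidate against the user history and return the top-k."""
    def score(item_id: str) -> float:
        # Toy relevance: word overlap between the history and the item description.
        desc = set(CATALOGUE[item_id].split())
        hist = {w for h in user_history for w in h.split()}
        return len(desc & hist)
    return sorted(CATALOGUE, key=score, reverse=True)[:k]

def generative_recommend(user_history: List[str]) -> str:
    """Emit an item identifier as a token sequence, as an LLM decoder would."""
    prompt = "User bought: " + "; ".join(user_history) + ". Next item id:"
    # Stand-in for LLM decoding conditioned on `prompt`; a real system would
    # constrain beam search so only valid catalogue identifiers can be produced.
    tokens = ["item", "_", "42"]
    return "".join(tokens)

if __name__ == "__main__":
    history = ["running shorts", "sports socks"]
    print(discriminative_recommend(history))  # e.g. ['item_42', 'item_7']
    print(generative_recommend(history))      # 'item_42'
```

The design difference is that the discriminative path needs an explicit candidate set and a score per item, while the generative path produces an item directly from the model's output vocabulary, which is what lets it draw on world knowledge and language understanding at inference time.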