The field of AI assistants is moving toward proactive, personalized models that learn and adapt to individual user preferences. Recent research addresses cold-start and bias problems in large language models (LLMs) by incorporating collective knowledge and local-global memory frameworks. There is also growing interest in frameworks that track how user preferences evolve over time and that provide transparent, explainable personalization. Another key direction is test-time personalization, which enables real-time adaptation to user preferences without requiring extensive pre-existing user data; a minimal sketch of this idea follows below. In addition, reinforcement learning from human interaction and personalized reasoning are being explored to improve model alignment and performance in human-facing applications. Noteworthy papers include ProPerSim, which introduces a simulation framework for developing proactive and personalized AI assistants; PET, which proposes a framework for tracking user preference evolution using LLM-generated explainable distributions; T-POP, which introduces a novel algorithm for test-time personalization with online preference feedback; and PREFDISCO, which establishes personalized reasoning as a measurable research frontier and reveals fundamental limitations in current LLMs' interactive capabilities.
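To make the test-time personalization idea concrete, here is a minimal sketch of adapting to a user from online pairwise preference feedback alone, with no pre-existing user data. The style dimensions, class names, and the Bradley-Terry-style logistic update are illustrative assumptions for exposition; this is not the actual T-POP algorithm.

```python
import numpy as np

# Hypothetical axes along which responses might vary for a given user.
STYLE_DIMS = ["concise", "formal", "detailed", "playful"]

class OnlinePreferenceModel:
    """Illustrative test-time preference learner (assumed design, not T-POP)."""

    def __init__(self, n_dims: int, lr: float = 0.5):
        self.w = np.zeros(n_dims)  # user preference weights, learned during the session
        self.lr = lr

    def score(self, features: np.ndarray) -> float:
        # Higher score = predicted to better match this user's preferences.
        return float(self.w @ features)

    def update(self, chosen: np.ndarray, rejected: np.ndarray) -> None:
        # Bradley-Terry style logistic update from a single pairwise choice:
        # nudge the weights toward the style features of the chosen response.
        diff = chosen - rejected
        p_chosen = 1.0 / (1.0 + np.exp(-self.w @ diff))
        self.w += self.lr * (1.0 - p_chosen) * diff

# Simulated interaction loop: the model starts from scratch and adapts
# only from feedback gathered at test time.
rng = np.random.default_rng(0)
true_pref = np.array([1.0, -0.5, 0.8, -1.0])  # hidden "ground-truth" user taste (simulation only)
model = OnlinePreferenceModel(len(STYLE_DIMS))

for turn in range(20):
    # Two candidate responses, represented here only by their style features.
    cand_a, cand_b = rng.normal(size=(2, len(STYLE_DIMS)))
    # Simulated user picks whichever candidate is closer to their hidden preferences.
    user_picks_a = (true_pref @ cand_a) > (true_pref @ cand_b)
    chosen, rejected = (cand_a, cand_b) if user_picks_a else (cand_b, cand_a)
    model.update(chosen, rejected)

print("learned preference weights:", np.round(model.w, 2))
```

After a handful of comparisons, the learned weights approximate the simulated user's taste, which captures the core appeal of test-time personalization: adaptation happens within the interaction itself rather than from a pre-collected user profile.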