The field of large language models (LLMs) is moving toward more personalized, context-aware interaction. Researchers are exploring ways to let LLMs capture user-specific concepts, reason over the relations among those concepts and objects, and tailor responses accordingly. A key challenge is integrating personalized knowledge with relational reasoning in a single model. Recent work addresses this through approaches such as graph-based personalization and proactive conversation assistants, and several papers introduce new benchmarks and datasets for evaluating personalized LLMs. Overall, the field is progressing toward more sophisticated, human-like language understanding and generation. Noteworthy papers include ReGraP-LLaVA, which proposes a new dataset and model for relational reasoning in personalized LLMs and achieves state-of-the-art performance, and LlamaPIE, a proactive in-ear conversation assistant that delivers discreet, concise guidance through hearable devices and demonstrates strong user preference over baseline models.
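To make the graph-based personalization idea concrete, the sketch below shows one minimal way user-specific knowledge could be stored as relation triples and expanded into a relational context for an LLM prompt. This is an illustrative toy only, assuming a simple (subject, relation, object) schema; the entity names, the `relational_context` helper, and the prompt format are hypothetical and do not reflect the actual ReGraP-LLaVA implementation.

```python
# Illustrative sketch: a toy personalized knowledge graph in the spirit of
# graph-based personalization methods. Schema and names are hypothetical.

from collections import defaultdict

# User-specific concepts stored as (subject, relation, object) triples.
TRIPLES = [
    ("Max", "is_a", "dog"),
    ("Max", "owned_by", "Alice"),
    ("Alice", "sister_of", "Bob"),
    ("Max", "plays_with", "Bella"),
    ("Bella", "owned_by", "Bob"),
]

def build_graph(triples):
    """Index triples by subject for one-hop relation lookups."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return graph

def relational_context(graph, entity, hops=2):
    """Collect facts reachable from `entity` within `hops` hops, so the
    prompt carries the relations needed for multi-step reasoning."""
    facts, frontier, seen = [], [entity], {entity}
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for rel, obj in graph.get(node, []):
                facts.append(f"{node} {rel.replace('_', ' ')} {obj}.")
                if obj not in seen:
                    seen.add(obj)
                    next_frontier.append(obj)
        frontier = next_frontier
    return " ".join(facts)

graph = build_graph(TRIPLES)
question = "Whose dog does Max play with?"
prompt = f"Facts: {relational_context(graph, 'Max')}\nQuestion: {question}"
print(prompt)  # The assembled prompt would then be passed to an LLM.
```

The design point this illustrates is that relational questions (here, reasoning from Max to Bella to Bob) require facts beyond one hop, which is why the sketch expands the neighborhood before prompting rather than injecting a single user profile string.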