The field of machine learning is placing greater emphasis on privacy preservation, with several recent papers exploring methods for protecting sensitive data. One key direction is differentially private fine-tuning of large language models, which limits how much any individual training example can influence the model and thereby helps prevent exposure of sensitive training data. Another focus is robust out-of-distribution (OOD) detection, which flags inputs that fall outside the training distribution and can help identify and mitigate potential privacy risks. Researchers are also combining graph foundation models with large language models to enable zero-shot graph OOD detection and synthetic OOD exposure for graphs. Noteworthy papers include NoEsis, which proposes a framework for differentially private knowledge transfer in modular LLM adaptation; ReCIT, which presents a novel privacy attack that reconstructs full private data from gradients in parameter-efficient fine-tuning of LLMs; and GLIP-OOD, which performs zero-shot graph OOD detection using a graph foundation model together with LLMs.
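
To make the first direction concrete: the standard mechanism behind differentially private training is DP-SGD, which clips each example's gradient to a fixed norm and adds Gaussian noise before every update. The sketch below is a minimal, generic illustration of that mechanism on a toy model; the model, data, and hyperparameters (clip_norm, noise_multiplier, lr) are placeholders, and it does not reproduce the specific method of NoEsis or any other paper mentioned here.

```python
# Minimal DP-SGD sketch (illustrative only): per-example gradient clipping
# plus Gaussian noise, applied to a tiny linear model on synthetic data.
# All hyperparameters are placeholders, not values from the papers above.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)        # stand-in for a fine-tuned adapter or head
loss_fn = nn.CrossEntropyLoss()

clip_norm = 1.0                 # bound on each example's gradient norm
noise_multiplier = 1.0          # noise scale relative to clip_norm
lr = 0.1

X = torch.randn(32, 10)         # synthetic "private" batch
y = torch.randint(0, 2, (32,))

# Accumulate clipped per-example gradients.
summed = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(X, y):
    model.zero_grad()
    loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
    for s, g in zip(summed, grads):
        s += g * scale

# Add Gaussian noise calibrated to the clipping bound, then take one step.
with torch.no_grad():
    for p, s in zip(model.parameters(), summed):
        noisy = s + torch.normal(0.0, noise_multiplier * clip_norm, size=s.shape)
        p -= lr * noisy / len(X)
```

In practice, libraries such as Opacus vectorize the per-example gradient computation and track the resulting privacy budget, but the looped version above keeps the clipping and noising steps explicit.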