The field of large language models (LLMs) is evolving rapidly, with a growing focus on privacy and robustness. Recent research has explored differential privacy as a way to protect sensitive user data, with innovations in prompt perturbation mechanisms and hybrid utility functions. There is also an increasing awareness of the security risks associated with local LLM inference, including hardware cache side-channels and adversarial attacks. To counter these threats, researchers are developing techniques such as implicit Euler methods and exponentiated gradient descent to harden models against adversarial inputs. Notably, CAPE introduces a context-aware prompt perturbation mechanism, while IM-BERT enhances the robustness of BERT through the implicit Euler method. On the offensive side, PIG proposes a framework for privacy jailbreak attacks on LLMs, underscoring the need for stronger safeguards around sensitive information.
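
To make the prompt-perturbation idea more concrete, the minimal sketch below uses a standard exponential-mechanism token replacement: each prompt token is swapped for a vocabulary neighbor with probability that decays with embedding distance, scaled by a privacy budget epsilon. This is a generic illustration of differentially private text perturbation, not CAPE's actual mechanism or utility function; the `perturb_token` helper, the random embedding table, and the epsilon value are placeholder assumptions.

```python
import numpy as np

def perturb_token(token_id, embeddings, epsilon, rng):
    """Replace a token with a nearby one via the exponential mechanism.

    A candidate's utility is the negative Euclidean distance between its
    embedding and the original token's embedding, so closer candidates are
    exponentially more likely; a smaller epsilon means noisier replacements.
    """
    distances = np.linalg.norm(embeddings - embeddings[token_id], axis=1)
    sensitivity = distances.max() - distances.min() + 1e-12  # crude bound on the utility range
    scores = epsilon * (-distances) / (2.0 * sensitivity)
    probs = np.exp(scores - scores.max())  # stabilize before normalizing
    probs /= probs.sum()
    return int(rng.choice(len(embeddings), p=probs))

# Toy demo with a random embedding table standing in for a real vocabulary.
rng = np.random.default_rng(0)
vocab_size, dim = 1000, 64
embeddings = rng.normal(size=(vocab_size, dim))
prompt = [12, 404, 7, 991]
private_prompt = [perturb_token(t, embeddings, epsilon=2.0, rng=rng) for t in prompt]
print(private_prompt)
```

The implicit Euler idea behind robustness work like IM-BERT can likewise be sketched in a few lines: a residual update of the form x + f(x) corresponds to an explicit Euler step, while the implicit variant solves x_next = x + f(x_next), here by simple fixed-point iteration. The tanh sub-layer, weight scale, and iteration count below are arbitrary stand-ins rather than IM-BERT's architecture; the demo only compares how a small input perturbation propagates through each update rule.

```python
import numpy as np

def f(x, W):
    """Stand-in for a network sub-layer (here just a tanh linear map)."""
    return np.tanh(W @ x)

def explicit_euler_step(x, W):
    # Standard residual connection: x_next = x + f(x).
    return x + f(x, W)

def implicit_euler_step(x, W, iters=10):
    # Solve x_next = x + f(x_next) by fixed-point iteration.
    x_next = x.copy()
    for _ in range(iters):
        x_next = x + f(x_next, W)
    return x_next

rng = np.random.default_rng(1)
W = 0.1 * rng.normal(size=(16, 16))  # small weights keep the iteration a contraction
x = rng.normal(size=16)
delta = 1e-3 * rng.normal(size=16)   # small adversarial-style perturbation

gap_explicit = np.linalg.norm(explicit_euler_step(x + delta, W) - explicit_euler_step(x, W))
gap_implicit = np.linalg.norm(implicit_euler_step(x + delta, W) - implicit_euler_step(x, W))
print(gap_explicit, gap_implicit)
```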