The field of large language model (LLM) watermarking is moving toward more robust and secure methods for tracing and authenticating LLM-generated text. Recent work addresses several persistent challenges, including stable training of watermarking policies, prevention of reward hacking, and resistance to removal attacks. There is also growing interest in the vulnerabilities of LLM service providers, including novel model stealing attacks, alongside new statistical frameworks for optimal detection of language watermarks and black-box removal attacks such as cross-lingual summarization. Finally, the importance of public verifiability in LLM watermarking schemes is gaining recognition, with proposed solutions that let third parties verify watermark detection without compromising the secrecy of the detection process.

Noteworthy papers in this area include:

- A Reinforcement Learning Framework for Robust and Secure LLM Watermarking: proposes an end-to-end RL framework that targets both robustness and security of the embedded watermark.
- PVMark: Enabling Public Verifiability for LLM Watermarking Schemes: introduces a zero-knowledge proof (ZKP) plugin that makes watermark detection publicly verifiable without revealing the detection secret.
- Cross-Lingual Summarization as a Black-Box Watermark Removal Attack: demonstrates that cross-lingual summarization effectively removes watermarks from LLM outputs.
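To make the statistical-detection theme above concrete, the following is a minimal sketch of the kind of hypothesis test such frameworks study, assuming a generic green-list watermark: the detector recomputes each token's keyed green list from its predecessor, counts green hits, and converts the count into a one-sided z-score. The key, hash construction, parameters, and threshold are illustrative assumptions, not the method of any paper listed above.

```python
import hashlib
import math

GREEN_FRACTION = 0.25      # gamma: fraction of vocabulary marked "green" per step (assumed)
SECRET_KEY = b"demo-key"   # detection key; real schemes keep this secret (assumed)

def is_green(prev_token: int, token: int) -> bool:
    """Recompute keyed green-list membership for `token` given its predecessor."""
    digest = hashlib.sha256(
        SECRET_KEY + prev_token.to_bytes(4, "big") + token.to_bytes(4, "big")
    ).digest()
    # Map the hash to [0, 1); the token counts as "green" if it falls below gamma.
    return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_FRACTION

def detection_z_score(token_ids: list[int]) -> float:
    """One-sided z-statistic: how far the green-token count exceeds its null expectation."""
    pairs = list(zip(token_ids, token_ids[1:]))
    if not pairs:
        return 0.0
    green_hits = sum(is_green(prev, tok) for prev, tok in pairs)
    n = len(pairs)
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (green_hits - expected) / std

# Usage: flag text as watermarked if the z-score clears a threshold (e.g. 4, roughly p < 3e-5).
if __name__ == "__main__":
    sample_ids = [101, 2023, 318, 257, 1332, 286, 262, 31456, 13]  # token ids from a tokenizer (assumed)
    z = detection_z_score(sample_ids)
    print(f"z = {z:.2f}, watermarked = {z > 4.0}")
```

Under the null hypothesis of unwatermarked text, the green-hit count is Binomial(n, gamma), so the z-score yields a calibrated false-positive rate; work on optimal detection studies how to sharpen exactly this kind of test.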