The field of watermarking for AI-generated content is moving rapidly toward more robust and stealthy methods for protecting intellectual property and preventing misuse. Recent research has focused on multi-bit watermarking schemes that embed provenance data, such as user IDs and timestamps, into AI-generated text, images, and audio. These methods aim to preserve the distribution of the generated content while enabling fast traceability and ownership verification. Researchers are also exploring regenerative diffusion models and key-controllable frameworks to strengthen the security and effectiveness of watermarking techniques.

At the same time, several studies have highlighted the limitations and risks of existing watermarking methods, including optimization-free universal watermark forgery and data poisoning attacks. Overall, the field is progressing toward practical, deployment-ready watermarking solutions that can effectively mitigate the risks associated with AI-generated content.

Noteworthy papers include StealthInk, which presents a stealthy multi-bit watermarking scheme for large language models, and WGLE, which proposes a black-box watermarking paradigm for graph neural networks that embeds multi-bit ownership information without using backdoors. Optimization-Free Universal Watermark Forgery with Regenerative Diffusion Models uncovers the risk of optimization-free, universal watermark forgery that works independently of the target image's origin or the watermarking model used. A Crack in the Bark: Leveraging Public Knowledge to Remove Tree-Ring Watermarks presents a novel attack against Tree-Ring, a watermarking technique for diffusion models known for its high imperceptibility and robustness against removal attacks.
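To make the multi-bit provenance idea concrete, the sketch below shows one common construction: a secret key and the token position seed a PRNG that splits the vocabulary into two pseudo-random halves, and generation is steered toward the half encoding the current message bit, so a detector holding the key can read the bits back. This is a minimal toy, not the actual StealthInk algorithm; the names `VOCAB_SIZE`, `partition`, `embed`, and `extract` are invented for illustration, and real schemes bias logits rather than hard-restricting sampling, precisely to preserve the output distribution.

```python
import hashlib
import random

VOCAB_SIZE = 1000  # toy vocabulary of integer token IDs

def partition(key: str, pos: int) -> tuple:
    """Split the vocabulary into two keyed pseudo-random halves for one position."""
    digest = hashlib.sha256(f"{key}:{pos}".encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    toks = list(range(VOCAB_SIZE))
    rng.shuffle(toks)
    half = VOCAB_SIZE // 2
    return set(toks[:half]), set(toks[half:])

def embed(message_bits: list, n_tokens: int, key: str, rng: random.Random) -> list:
    """Toy 'generation': at each position, sample only from the vocabulary
    half that encodes the current message bit (the message repeats cyclically)."""
    tokens = []
    for pos in range(n_tokens):
        halves = partition(key, pos)
        allowed = halves[message_bits[pos % len(message_bits)]]
        tokens.append(rng.choice(sorted(allowed)))
    return tokens

def extract(tokens: list, n_bits: int, key: str) -> list:
    """Recover the message: each token's half-membership reveals one bit;
    a majority vote over repetitions adds robustness to local edits."""
    votes = [[0, 0] for _ in range(n_bits)]
    for pos, tok in enumerate(tokens):
        half0, _ = partition(key, pos)
        bit = 0 if tok in half0 else 1
        votes[pos % n_bits][bit] += 1
    return [int(v1 > v0) for v0, v1 in votes]
```

Because the embedded message (e.g., a hash of a user ID and timestamp) repeats across positions, the detector can still vote correctly even if some tokens are edited, which is the mechanism behind the fast traceability the surveyed schemes target.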