The field of natural language processing is moving toward more sophisticated methods for detecting and understanding AI-generated text. Recent research has focused on improving the accuracy and robustness of detection methods and on exploring new approaches such as DNA-inspired paradigms and rhythm-aware phrase insertion. The use of large language models has also been investigated, with a focus on verifying their outputs and detecting biases. Researchers are also developing more interpretable and explainable detection methods, such as those built on surprisal-based features. Noteworthy papers in this area include DNA-DetectLLM, which proposes a zero-shot method for distinguishing AI-generated from human-written text that achieves state-of-the-art detection performance and strong robustness against various adversarial attacks, and Diversity Boosts AI-Generated Text Detection, which proposes a detection framework that captures how unpredictability fluctuates across a text using surprisal-based features, outperforming existing zero-shot detectors by up to 33.2% and performing competitively with fine-tuned baselines.
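To make the idea of surprisal-based features concrete, the sketch below computes per-token surprisal with a small causal language model and summarizes how it fluctuates across a text. This is a minimal illustration only, assuming GPT-2 as the scoring model; the function name, the chosen statistics, and the model are assumptions for demonstration and do not reproduce the feature sets or decision rules of DNA-DetectLLM or Diversity Boosts AI-Generated Text Detection.

```python
# Minimal sketch: surprisal-based features for AI-text detection.
# Assumes GPT-2 as the scoring model; the cited papers' actual scoring
# models, features, and classifiers may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def surprisal_features(text: str, model_name: str = "gpt2") -> dict:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    enc = tokenizer(text, return_tensors="pt", truncation=True)
    input_ids = enc["input_ids"]

    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

    # Per-token surprisal: -log p(token_t | tokens_<t), in nats.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    token_surprisal = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)[0]

    # Summary statistics of how unpredictability fluctuates across the text;
    # the standard deviation is a crude proxy for "diversity" of surprisal.
    return {
        "mean_surprisal": token_surprisal.mean().item(),
        "std_surprisal": token_surprisal.std().item(),
        "max_surprisal": token_surprisal.max().item(),
    }

if __name__ == "__main__":
    print(surprisal_features("The quick brown fox jumps over the lazy dog."))
```

In a detection pipeline, such features would typically feed a thresholding rule or a lightweight classifier; highly uniform, low-surprisal text tends to be flagged as machine-generated, while human writing usually shows larger fluctuations.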