The field of AI-generated image detection is evolving rapidly, with a focus on developing more robust and effective methods for identifying synthetic and manipulated images. Recent research highlights the distinct challenges posed by different families of generative models, such as GANs and diffusion models. Notably, Vision Transformers (ViTs) have shown significant promise in detecting AI-generated satellite imagery, outperforming traditional Convolutional Neural Networks (CNNs) in both accuracy and robustness. Novel objectives and training strategies, such as attention entropy minimization and semantic-antagonistic fine-tuning, have further improved detector performance across applications. Overall, the field is moving toward a more nuanced understanding of the strengths and limitations of different detection approaches, with an emphasis on generalizability and reliability.

Noteworthy papers include:

- Deepfake Geography, which demonstrates the effectiveness of ViTs for detecting AI-generated satellite images.
- AttenDence, which proposes a novel entropy-based objective for test-time adaptation.
- DiffSeg30k, which introduces a benchmark dataset for fine-grained detection of diffusion-edited images.
- When Semantics Regulate, which presents a semantic-antagonistic fine-tuning paradigm for improving cross-domain generalization.
- Beyond Binary Classification, which proposes a semi-supervised approach to generalized AI-generated image detection.
- Shortcut Invariance, which learns a robust detector by rendering the classifier functionally invariant to shortcut signals.
- DinoLizer, which introduces a DINOv2-based model for localizing manipulated regions in generative inpainting.
- CAHS-Attack, which proposes a CLIP-Aware Heuristic Search method for constructing stronger adversarial attacks.
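To make the attention-entropy idea concrete: a test-time adaptation objective of this kind penalizes diffuse attention maps by minimizing the Shannon entropy of each attention distribution. The sketch below is illustrative only, assuming generic attention logits of shape (heads, queries, keys); it does not reproduce AttenDence's exact formulation, and the function names and toy data are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_entropy(attn_logits):
    """Mean Shannon entropy of the attention distributions.

    attn_logits: array of shape (heads, queries, keys) holding raw
    attention scores; entropy is taken over the key axis.
    """
    p = softmax(attn_logits, axis=-1)
    ent = -(p * np.log(p + 1e-12)).sum(axis=-1)  # entropy per head/query
    return ent.mean()

# Toy example: scaling logits sharpens attention and lowers entropy,
# which is the quantity a test-time adaptation step would minimize.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 8, 8))
diffuse = attention_entropy(logits)
sharp = attention_entropy(logits * 5.0)
print(diffuse, sharp)  # sharper attention -> lower entropy
```

In a full pipeline, this scalar would be backpropagated through the ViT's attention layers at inference time to adapt the detector to each test image without labels.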