Research on language models is placing growing emphasis on hallucination detection, with a focus on methods that accurately identify and mitigate the generation of unsubstantiated content. This shift is driven by the recognition that hallucinations are pervasive and consequential, and that current evaluation methods are often insufficient to catch them. Researchers are exploring new approaches, including entropy-based analysis of model uncertainty, curriculum learning, and traceability methods, to improve both the detection and the understanding of hallucinations; a minimal sketch of the entropy-based idea follows below. These advances have the potential to significantly improve the reliability and trustworthiness of language models. Notable papers in this area include Teaching with Lies, which presents a curriculum-based approach to hallucination detection that achieves significant improvements over state-of-the-art models, and VeriTrail, which introduces a closed-domain hallucination detection method with traceability capabilities.
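
To make the entropy-based direction concrete, here is a minimal sketch of token-level uncertainty scoring: tokens generated under a high-entropy predictive distribution are flagged as candidate hallucinations. This is a generic illustration of the technique, not the method of any paper named above; the model name (`gpt2`) and the entropy threshold are placeholder assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder choices for illustration only -- not settings from the
# papers discussed above. The threshold should be tuned on held-out data.
MODEL_NAME = "gpt2"
ENTROPY_THRESHOLD = 4.0  # in nats

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def token_entropies(text: str) -> list[tuple[str, float]]:
    """Score each token of `text` by the model's predictive entropy.

    High entropy means the model was uncertain about the token it
    produced -- a common proxy signal for hallucinated content.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    # The distribution at position i predicts the token at position
    # i + 1, so entropies align with tokens ids[0, 1:].
    probs = torch.softmax(logits[0, :-1], dim=-1)
    ent = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
    tokens = tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist())
    return list(zip(tokens, ent.tolist()))

def flag_uncertain_tokens(text: str) -> list[str]:
    """Return tokens whose predictive entropy exceeds the threshold."""
    return [tok for tok, h in token_entropies(text) if h > ENTROPY_THRESHOLD]

if __name__ == "__main__":
    claim = "The Eiffel Tower was completed in 1889 by Gustave Eiffel."
    print(flag_uncertain_tokens(claim))
```

In practice, per-token entropy is a noisy signal on its own; published methods typically aggregate it over spans or claims, or combine it with sampling-based consistency checks, before deciding that a statement is unsupported.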