The field of large language models (LLMs) is shifting towards a greater emphasis on hallucination detection and mitigation. Researchers are developing more robust metrics to understand and quantify hallucinations, along with strategies to reduce how often they occur. Studies have shown that LLMs are prone to generating hallucinations, which can have serious consequences in applications such as clinical summarization and code generation. Notable papers have proposed innovative approaches to this challenge, including mode-seeking decoding methods and natural language inference (NLI) models for hallucination detection. Overall, the field is moving towards a more nuanced understanding of hallucinations and more effective methods for mitigating their impact. Noteworthy papers include Evaluating Evaluation Metrics, which highlights the need for more robust metrics, and Triggering Hallucinations in LLMs, which proposes a prompt-based framework for systematically triggering and quantifying hallucinations.
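
To make the NLI-based detection idea concrete, the sketch below scores whether a generated claim is entailed by its source text using an off-the-shelf NLI model. This is a minimal illustration, not the method of any specific paper cited above: the model checkpoint (`microsoft/deberta-large-mnli`), the sentence-level scoring, and the example texts are all assumptions made for demonstration.

```python
# Minimal sketch of NLI-based hallucination detection (illustrative only).
# Assumes the Hugging Face "microsoft/deberta-large-mnli" checkpoint; the
# surveyed papers may use different models, granularities, and scoring rules.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "microsoft/deberta-large-mnli"  # assumed NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_score(source: str, claim: str) -> float:
    """Return the probability that `claim` is entailed by `source`.

    A low score flags the claim as potentially hallucinated, i.e. not
    supported by the source text.
    """
    inputs = tokenizer(source, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # Label order for this checkpoint: 0=CONTRADICTION, 1=NEUTRAL, 2=ENTAILMENT
    return probs[2].item()

if __name__ == "__main__":
    source = "The patient was prescribed 10 mg of lisinopril daily for hypertension."
    faithful = "The patient takes lisinopril for high blood pressure."
    hallucinated = "The patient was prescribed insulin for diabetes."
    print(f"faithful claim:     {entailment_score(source, faithful):.3f}")
    print(f"hallucinated claim: {entailment_score(source, hallucinated):.3f}")
```

In practice, a detector built this way would decompose a generated summary into individual claims and apply a threshold to each entailment score (for example, flagging claims that fall below 0.5); the appropriate threshold and claim-splitting strategy depend on the application.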