The field of Large Language Models (LLMs) is moving toward more reliable and trustworthy models by addressing hallucinations: the generation of confident but factually incorrect content. Recent research has introduced new methods for hallucination detection and mitigation, including metamorphic testing frameworks, attention-probing techniques, and self-improving, faithfulness-aware contrastive tuning. These approaches aim to improve the accuracy and reliability of LLMs, particularly in high-stakes domains such as law and enterprise applications. Noteworthy papers include MetaRAG, which proposes a metamorphic testing framework for hallucination detection in Retrieval-Augmented Generation (RAG) systems, and SI-FACT, which presents a self-improving framework for mitigating knowledge conflict in LLMs. Overall, the field is advancing toward more robust and reliable LLMs that can be deployed in real-world applications with confidence.
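To make the metamorphic-testing idea concrete, the sketch below illustrates the general principle: semantically equivalent rewrites of a query should yield consistent answers from a RAG system, and divergence across variants flags a likely hallucination. This is a minimal illustration of the concept, not MetaRAG's actual implementation; the `rag_answer` callable, the paraphrase list, and the lexical similarity threshold are all hypothetical placeholders.

```python
# Minimal sketch of metamorphic testing for RAG hallucination detection.
# All function names and thresholds here are illustrative assumptions,
# not the MetaRAG paper's method.
from difflib import SequenceMatcher
from typing import Callable, List


def answers_consistent(a: str, b: str, threshold: float = 0.7) -> bool:
    """Crude lexical consistency check; a real system would likely use an
    entailment or factual-consistency model instead."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def metamorphic_hallucination_check(
    rag_answer: Callable[[str], str],
    query: str,
    paraphrases: List[str],
) -> bool:
    """Metamorphic relation: equivalent queries should produce consistent
    answers. Returns True if any variant's answer diverges from the baseline,
    signalling a possible hallucination."""
    baseline = rag_answer(query)
    for variant in paraphrases:
        if not answers_consistent(baseline, rag_answer(variant)):
            return True  # inconsistent answers -> flag for review
    return False


if __name__ == "__main__":
    # Toy stand-in for a retrieval-augmented QA pipeline.
    def toy_rag_answer(q: str) -> str:
        return "The Eiffel Tower is located in Paris, France."

    flagged = metamorphic_hallucination_check(
        toy_rag_answer,
        "Where is the Eiffel Tower?",
        [
            "In which city is the Eiffel Tower located?",
            "What city is home to the Eiffel Tower?",
        ],
    )
    print("hallucination suspected:", flagged)
```

The appeal of this style of check is that it needs no ground-truth labels: it only requires the ability to re-query the system with perturbed inputs and compare the outputs for consistency.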