Natural language processing research is increasingly focused on making large language models (LLMs) more reliable and trustworthy. A central challenge is hallucination: false or unsupported statements generated by the model. Recent work addresses this along two complementary lines. For detection, several papers quantify uncertainty from the model's own signals, for example by computing token-level entropy over a generated response and integrating it into a conformal prediction pipeline. For mitigation, others intervene in the decoding process itself, using causal intervention, head-adaptive gating, and value calibration to suppress unsupported content. These approaches have shown promising results in improving the accuracy and reliability of LLMs. Noteworthy papers include TECP, which introduces a novel framework for uncertainty quantification in LLMs, and HAVE, which presents a parameter-free decoding framework for hallucination mitigation.
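
To make the token-entropy idea concrete, the sketch below computes per-token predictive entropy from output logits and calibrates a rejection threshold on a held-out set in the spirit of split conformal prediction. This is a minimal illustration of the general technique, not the TECP pipeline itself; the function names, the synthetic logits, and the mean-entropy aggregation are assumptions made for the example.

```python
import numpy as np

def token_entropies(logits: np.ndarray) -> np.ndarray:
    """Per-token predictive entropy (nats) from logits of shape (seq_len, vocab)."""
    # Numerically stable softmax over the vocabulary dimension.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def response_score(logits: np.ndarray) -> float:
    """Aggregate token entropies into a single uncertainty score (mean is an assumption)."""
    return float(token_entropies(logits).mean())

def conformal_threshold(calib_scores: np.ndarray, alpha: float = 0.1) -> float:
    """Split-conformal style cutoff: the (1 - alpha) empirical quantile of
    calibration scores, with the standard finite-sample correction."""
    n = len(calib_scores)
    q = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(calib_scores, q))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = 1000  # small vocabulary to keep the demo lightweight

    # Synthetic stand-ins for logits of calibration responses judged faithful.
    calib_logits = [rng.normal(size=(20, vocab)) * 3.0 for _ in range(200)]
    calib_scores = np.array([response_score(l) for l in calib_logits])
    threshold = conformal_threshold(calib_scores, alpha=0.1)

    # A new response is flagged as a potential hallucination if its
    # entropy score exceeds the calibrated threshold.
    new_logits = rng.normal(size=(25, vocab)) * 1.5  # flatter logits -> higher entropy
    flagged = response_score(new_logits) > threshold
    print(f"threshold={threshold:.3f}, flagged={flagged}")
```

In a real pipeline the calibration scores would come from responses verified as faithful, so that exceeding the threshold corresponds to anomalously high uncertainty at a chosen error level alpha.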