Research on large language models is converging on two persistent failure modes: hallucinations and repetitive patterns. Both degrade output quality, and repetitive patterns in particular make AI-generated text easy to recognize. A key direction is training-free frameworks that mitigate hallucinations or suppress repetitive patterns at inference time without hurting task performance. A complementary direction is understanding the mechanisms behind these failures, such as how accumulated context drives hallucinations in longer responses.

Noteworthy papers include Antislop, a comprehensive framework for identifying and eliminating repetitive patterns, and SHIELD, a training-free framework for mitigating hallucinations in large vision-language models (LVLMs). PruneHal stands out for a simple yet effective adaptive KV cache pruning method that sharpens the model's focus on critical visual information. Why LVLMs Are More Prone to Hallucinations in Longer Responses offers new insight into the role of context in hallucinations and proposes an induce-detect-suppress framework to counter it. Hedged sketches of the phrase-suppression, KV-pruning, and induce-detect-suppress ideas follow below.
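Antislop's actual pipeline is not reproduced here; the following is a minimal sketch of the general decode-time idea behind training-free repetitive-pattern suppression: keep a blacklist of overused phrases and mask any next token that would complete one. The phrase list, token ids, and penalty value are illustrative assumptions, not Antislop's configuration.

```python
# Minimal sketch of decode-time suppression of repetitive phrases,
# in the spirit of (but not identical to) the Antislop framework.
import numpy as np

# Hypothetical blacklist of overused phrases, pre-tokenized to token-id tuples.
BANNED_SEQUENCES = [
    (4053, 88, 1171),   # e.g. "a testament to" (illustrative ids)
    (2132, 907),        # e.g. "delve into"      (illustrative ids)
]

def suppress_banned_continuations(logits: np.ndarray,
                                  history: list[int],
                                  penalty: float = -1e9) -> np.ndarray:
    """Mask any next token that would complete a banned phrase.

    logits:  unnormalized scores over the vocabulary for the next token.
    history: token ids generated so far.
    """
    out = logits.copy()
    for seq in BANNED_SEQUENCES:
        prefix, last = seq[:-1], seq[-1]
        # If the generated tail matches the banned prefix, forbid the final token.
        if not prefix or (len(history) >= len(prefix)
                          and tuple(history[-len(prefix):]) == prefix):
            out[last] = penalty
    return out

# Usage inside a greedy decode loop (model is an assumed callable):
# logits = model(history)                                   # next-token scores
# logits = suppress_banned_continuations(logits, history)
# history.append(int(np.argmax(logits)))
```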
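PruneHal's method is described above only at a high level; the sketch below shows the general shape of adaptive KV cache pruning under the assumption that cached visual tokens are scored by the attention they receive and the least-attended entries are dropped. The scoring rule and keep ratio are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of adaptive KV cache pruning: retain only the visual
# tokens that receive the most attention, so later decoding steps stay
# focused on salient image regions.
import numpy as np

def prune_visual_kv(keys: np.ndarray,
                    values: np.ndarray,
                    attn_weights: np.ndarray,
                    keep_ratio: float = 0.5):
    """Drop the least-attended visual KV entries.

    keys, values:  (num_visual_tokens, head_dim) cached projections.
    attn_weights:  (num_heads, num_visual_tokens) attention that the
                   current query paid to each cached visual token.
    """
    # Aggregate attention over heads to score each cached token's importance.
    importance = attn_weights.mean(axis=0)        # (num_visual_tokens,)
    k = max(1, int(keep_ratio * len(importance)))
    # Keep the top-k tokens, restoring their original positional order.
    keep = np.sort(np.argsort(importance)[-k:])
    return keys[keep], values[keep]
```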
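The induce-detect-suppress framework is likewise sketched only in spirit: one way to induce context-driven guesses is to score the next token without the image, detect tokens whose probability is inflated by text context alone, and suppress them. This sketch realizes the suppress step as contrastive decoding, which may differ from the paper's actual mechanism; `score_with_image`, `score_without_image`, and `alpha` are assumed for illustration.

```python
# Hedged sketch of an induce-detect-suppress style decoding step for LVLMs,
# implemented here as contrastive decoding between an image-grounded pass
# and a deliberately induced, image-free pass over the same model.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def induce_detect_suppress_step(score_with_image,
                                score_without_image,
                                history: list[int],
                                alpha: float = 1.0) -> int:
    """Pick the next token while down-weighting context-only guesses.

    score_with_image / score_without_image: assumed callables mapping the
    token history to next-token logits with and without visual input.
    """
    full = softmax(score_with_image(history))        # grounded distribution
    induced = softmax(score_without_image(history))  # induced, context-only guesses
    # Detect tokens the image-free pass inflates, then suppress them in
    # log space before selecting the next token.
    adjusted = np.log(full + 1e-12) - alpha * np.log(induced + 1e-12)
    return int(np.argmax(adjusted))
```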