The field of Large Language Models (LLMs) is moving toward greater reliability and trustworthiness, with a focus on citation attribution, confidence calibration, and hallucination mitigation. Researchers are developing methods to ensure that LLMs produce accurate, verifiable outputs, particularly in high-stakes domains such as healthcare and finance. One key direction is the design of attribution paradigms that cite human-verifiable sources, which involves a trade-off between citation coverage and citation correctness. Another is confidence calibration, where techniques such as Distractor-Normalized Coherence (DINCO) estimate and correct for an LLM's suggestibility bias. In addition, decoding strategies such as Attribution-Guided Decoding (AGD) steer generation toward desirable behaviors, and confidence-aware routing systems proactively assess model uncertainty and redirect queries based on estimated reliability. Simplified sketches of the calibration and routing ideas follow the paper list below.

Notable papers in this area include:

- Generation-Time vs. Post-hoc Citation, which introduces two paradigms for citation attribution and provides evidence-based recommendations.
- Calibrating Verbalized Confidence with Self-Generated Distractors, which proposes DINCO to improve confidence calibration.
- Attribution-Guided Decoding, which introduces AGD to enhance instruction following and factual accuracy.
- Confidence-Aware Routing for Large Language Model Reliability Enhancement, which proposes a multi-signal approach to pre-generation hallucination mitigation.
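To make the distractor-normalization idea concrete, here is a minimal Python sketch. The helpers `ask_confidence` and `propose_distractors` are hypothetical stand-ins for prompts sent to the model, and the estimator is a simplified reading of the normalization idea rather than the DINCO paper's exact formulation.

```python
from typing import Callable, List

# Hypothetical helpers (not real library calls):
#   ask_confidence(question, answer) -> the model's stated probability that `answer` is correct
#   propose_distractors(question, answer, k) -> k plausible alternative answers generated by the model
AskConfidence = Callable[[str, str], float]
ProposeDistractors = Callable[[str, str, int], List[str]]


def distractor_normalized_confidence(
    question: str,
    answer: str,
    ask_confidence: AskConfidence,
    propose_distractors: ProposeDistractors,
    k: int = 4,
) -> float:
    """Normalize verbalized confidence against confidence granted to self-generated distractors.

    A suggestible model reports high confidence for whatever answer it is shown,
    including distractors; dividing by the total confidence mass over the answer
    plus its distractors discounts that bias. Simplified illustration only.
    """
    raw = ask_confidence(question, answer)
    distractors = propose_distractors(question, answer, k)
    distractor_mass = sum(ask_confidence(question, d) for d in distractors)

    total = raw + distractor_mass
    if total == 0.0:
        return 0.0  # degenerate case: the model commits to nothing
    return raw / total
```

The calibrated score shrinks whenever the model is also willing to endorse its own distractors, which is the suggestibility effect the normalization is meant to correct.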
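The confidence-aware routing direction can likewise be summarized as: combine several reliability signals before committing to a generation, and hand low-scoring queries to a fallback path (a stronger model or human review). The sketch below assumes hypothetical signal callables, illustrative weights, and an illustrative threshold; it shows the pre-generation, multi-signal pattern, not the specific method proposed in the routing paper.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Each signal maps a query to an estimated reliability score in [0, 1].
# The concrete signals (verbalized self-assessment, retrieval coverage, etc.)
# are placeholders, not the paper's actual inputs.
ReliabilitySignal = Callable[[str], float]


@dataclass
class RoutingDecision:
    answer_directly: bool   # True: let the base model answer the query
    combined_score: float   # weighted reliability estimate behind the decision
    target: str             # "base_model" or "fallback"


def route_query(
    query: str,
    signals: Sequence[ReliabilitySignal],
    weights: Sequence[float],
    threshold: float = 0.7,  # illustrative; tuned per deployment in practice
) -> RoutingDecision:
    """Combine pre-generation reliability signals and route the query accordingly."""
    if len(signals) != len(weights):
        raise ValueError("need one weight per signal")

    scores = [signal(query) for signal in signals]
    combined = sum(w * s for w, s in zip(weights, scores)) / sum(weights)

    if combined >= threshold:
        return RoutingDecision(answer_directly=True, combined_score=combined, target="base_model")
    return RoutingDecision(answer_directly=False, combined_score=combined, target="fallback")


if __name__ == "__main__":
    # Toy signals standing in for real uncertainty estimators.
    length_signal = lambda q: 1.0 if len(q.split()) < 30 else 0.4
    risk_signal = lambda q: 0.3 if "dosage" in q.lower() else 0.9  # high-stakes terms lower reliability

    decision = route_query(
        "What is the recommended dosage of drug X for children?",
        signals=[length_signal, risk_signal],
        weights=[0.4, 0.6],
    )
    print(decision)  # combined score 0.58 < 0.7, so the query is routed to "fallback"
```

The design choice worth noting is that the decision is made before the base model produces a user-facing answer, which is what distinguishes this routing style from post-hoc hallucination detection.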