The field of Natural Language Processing (NLP) is seeing significant developments in uncertainty estimation and management. Researchers are exploring approaches to handle ambiguity, polysemy, and uncertainty in text, leveraging large language models (LLMs) and fuzzy reasoning frameworks. A key direction is the integration of LLM semantic priors with continuous fuzzy membership degrees, enabling explicit interaction between probability-based reasoning and fuzzy membership reasoning. This makes it possible to transform ambiguous inputs into clear, interpretable decisions and to capture conflicting or uncertain signals that purely probability-based methods miss (a minimal sketch of this idea appears at the end of this section).

Another important line of work is the development of standard quality criterion names and definitions for evaluating NLP systems, which is essential for establishing comparability across evaluations and drawing reliable conclusions about system quality.

Notable papers:
- The Fuzzy Reasoning Chain framework integrates LLM semantic priors with continuous fuzzy membership degrees.
- The QCET taxonomy provides a standard set of quality criterion names and definitions for NLP evaluations.
- Spectral Uncertainty is a novel approach to quantifying and decomposing uncertainties in LLMs.
- Linguistic confidence is proposed as a scalable and efficient approach to uncertainty estimation.

Overall, these advances have the potential to improve the robustness and reliability of NLP systems and are expected to have a major impact on the field in the coming years.
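Since the fuzzy-reasoning direction is described only at a high level here, the following Python sketch illustrates the general probability-to-fuzzy-membership idea: model probabilities are mapped to continuous membership degrees over linguistic labels, conjoined with a t-norm, and defuzzified into a crisp label. The membership functions, label set, and the min t-norm are illustrative assumptions, not the actual design of the Fuzzy Reasoning Chain framework.

```python
# Minimal sketch of explicit interaction between probability-based and
# fuzzy membership reasoning. The membership functions, labels, and the
# min t-norm below are illustrative assumptions, not the actual
# Fuzzy Reasoning Chain design.

def fuzzify(p: float) -> dict[str, float]:
    """Map a model probability to continuous membership degrees
    over three illustrative linguistic labels."""
    return {
        "unlikely": max(0.0, 1.0 - 2.0 * p),    # high when p is near 0
        "uncertain": 1.0 - abs(2.0 * p - 1.0),  # peaks at p = 0.5
        "likely": max(0.0, 2.0 * p - 1.0),      # high when p is near 1
    }

def combine(a: dict[str, float], b: dict[str, float]) -> dict[str, float]:
    """Conjoin two fuzzy judgments with the min t-norm, which keeps
    conflicting signals visible instead of averaging them away."""
    return {label: min(a[label], b[label]) for label in a}

def defuzzify(memberships: dict[str, float]) -> str:
    """Collapse membership degrees into a crisp, interpretable label."""
    return max(memberships, key=memberships.get)

# Two hypothetical LLM probabilities for the same claim, elicited with
# different prompts; the model is confident once and lukewarm once.
m1, m2 = fuzzify(0.9), fuzzify(0.55)
print(defuzzify(combine(m1, m2)))  # -> "uncertain"
```

Because the min t-norm takes the weaker of the two judgments for each label, disagreement between the two elicitations surfaces as a crisp "uncertain" decision rather than being hidden inside a single pooled probability, which is the kind of conflicting signal the prose above refers to.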