The field of natural language processing is seeing rapid progress on large language models (LLMs) and their application in multilingual settings. Recent studies highlight the importance of accounting for missingness and omission in LLM behavior, as well as the need for more inclusive and diverse training data. The use of LLMs for low-resource languages, and the evaluation of their performance in those languages, is also drawing growing attention. Noteworthy papers include:

- A paper on omission-aware graph inference for misinformation detection, which presents a novel framework for detecting omission-based deception.
- The BERnaT paper, which demonstrates the importance of capturing linguistic diversity when building inclusive language models.
- The CALAMITA initiative, which provides a comprehensive benchmark for evaluating LLMs in Italian and highlights the need for fine-grained, task-representative metrics.