The field of natural language processing is moving toward more nuanced, context-aware models of discourse and argumentation. Recent research has focused on models that better capture the complexities of human language, including the implicit and explicit cues used to convey meaning. One key area of development is the use of large language models to improve performance on tasks such as argument mining, entailment detection, and readability assessment. These models show significant promise in capturing long-range dependencies and contextual relationships, but they still face challenges in transparency and interpretability.

Noteworthy papers in this area include:

- JUDGEBERT, which introduces a novel evaluation metric for legal meaning preservation in French legal text simplification, demonstrating superior correlation with human judgment.
- LongReasonArena, which presents a benchmark for assessing the long reasoning capabilities of large language models, highlighting significant challenges for current models.
- ArgCMV, which introduces a new argument key point extraction dataset comprising around 12K arguments from real online human debates, setting the stage for the next generation of LLM-driven summarization research.