The field of language assessment and crisis response is moving towards greater automation and broader use of large language models (LLMs) to make communication more efficient and effective. Researchers are exploring new methods for evaluating the consistency and quality of generated responses, as well as novel approaches to measuring scalar constructs in social science. There is also a growing emphasis on validity and fairness in high-stakes language assessments, particularly for culturally and linguistically diverse groups. In addition, work on explainability and subtrait scoring in automated writing evaluation is improving transparency and giving educators and students more detailed feedback.

Notable papers include:

- A Dynamic Fusion Model for Consistent Crisis Response, which proposes a novel metric for evaluating style consistency and introduces a fusion-based generation approach.
- Measuring Scalar Constructs in Social Science with LLMs, which evaluates four approaches to scalar construct measurement and yields actionable findings for applied researchers.
- Toward Subtrait-Level Model Explainability in Automated Writing Evaluation, which prototypes explainability and subtrait scoring with generative language models and reports modest correlation between human and automated subtrait scores.
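
The subtrait-level result above is reported as a correlation between human and automated scores. As a minimal sketch of how such agreement is commonly quantified (not the paper's actual procedure; the score values and 1-4 scale below are hypothetical), one can compute a rank correlation alongside quadratic weighted kappa, a standard agreement statistic in automated scoring:

```python
# Hypothetical human vs. automated subtrait scores on a 1-4 scale.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

human_scores     = [3, 2, 4, 3, 1, 2, 3, 4]
automated_scores = [3, 3, 4, 2, 1, 2, 2, 4]

# Rank correlation: how well the automated scorer preserves the human ordering.
rho, p_value = spearmanr(human_scores, automated_scores)

# Quadratic weighted kappa: penalizes large disagreements more than adjacent ones.
qwk = cohen_kappa_score(human_scores, automated_scores, weights="quadratic")

print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f}), QWK = {qwk:.2f}")
```

A "modest" correlation in this setting would correspond to values well below the agreement typically expected for holistic scores, which is consistent with subtrait scoring being a harder, finer-grained task.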