The field of text and code readability assessment is moving towards more nuanced and human-aligned approaches. Recent studies have highlighted the limitations of traditional metrics and the importance of considering context, information content, and topic in evaluating readability. The use of large language models and machine learning techniques is becoming increasingly prominent in this area, with applications in automatic essay scoring, code readability assessment, and invoice information extraction. Noteworthy papers in this area include: Readability Reconsidered, which found that model-based metrics outperform traditional metrics in capturing human perceptions of readability. Human-Aligned Code Readability Assessment with Large Language Models, which introduced a large-scale benchmark for evaluating LLM-based code readability assessment and found that developer-guided prompting improves alignment with human judgments. ImpossibleBench, which introduced a benchmark framework for measuring LLMs' propensity to exploit test cases and provides a versatile tool for studying model behaviors and developing monitoring tools.