The field of large language models (LLMs) is advancing rapidly, with a strong focus on developing more robust and challenging benchmarks to evaluate their reasoning capabilities. Recent research has highlighted the limitations of current benchmarks, which state-of-the-art models can quickly saturate. To address this, new benchmarks are being proposed that scale in difficulty and provide more comprehensive evaluations of LLMs' abilities, testing logical reasoning, faithfulness to input data, and the ability to handle complex tasks such as sorting and pattern matching. Notable papers in this area include:

- SortBench, which introduces a benchmark for evaluating LLMs' ability to sort lists and highlights the challenges this task still poses for current models (a scoring sketch follows this list).
- Nondeterministic Polynomial-time Problem Challenge, which proposes an ever-scaling reasoning benchmark for LLMs that generates instances of NP-complete problems at varying levels of complexity (see the second sketch below).
- Socrates or Smartypants, which introduces a benchmark for evaluating LLMs' logical reasoning capabilities using logic programming-based test oracles.
- SLURG, which explores the feasibility of generating synthetic online fallacious discourse using large language models.
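To make the evaluation style concrete, the following is a minimal sketch of how a sorting benchmark in the spirit of SortBench might generate instances and score model outputs. The function names and the two-part check (correct ordering versus faithfulness to the input list) are illustrative assumptions, not SortBench's actual harness.

```python
import random

def make_sort_instance(n: int, seed: int = 0) -> list[int]:
    """Generate a random list of integers for the model to sort (hypothetical helper)."""
    rng = random.Random(seed)
    return [rng.randint(0, 10_000) for _ in range(n)]

def score_sort_output(source: list[int], model_output: list[int]) -> dict[str, bool]:
    """Check two separate failure modes:
    - 'sorted':   the output is in non-decreasing order;
    - 'faithful': the output is a permutation of the input, i.e. the model
                  did not drop, duplicate, or invent elements.
    """
    is_sorted = all(a <= b for a, b in zip(model_output, model_output[1:]))
    is_faithful = sorted(model_output) == sorted(source)
    return {"sorted": is_sorted, "faithful": is_faithful}

if __name__ == "__main__":
    instance = make_sort_instance(10, seed=42)
    # A real harness would parse the model's answer from its response text;
    # here a near-miss that drops one element stands in for it.
    fake_model_answer = sorted(instance)[:-1]
    print(score_sort_output(instance, fake_model_answer))
    # -> {'sorted': True, 'faithful': False}
```

Separating ordering from faithfulness matters because a model can produce a perfectly ordered list while silently dropping or inventing elements, which is exactly the kind of failure a faithfulness-oriented benchmark is meant to surface.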
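The ever-scaling idea can be sketched in the same spirit: NP-complete problems such as 3-SAT allow arbitrarily hard instances to be generated cheaply, while a claimed solution remains checkable in polynomial time. The snippet below is an illustrative sketch under that assumption (random 3-SAT near the classic phase-transition clause ratio), not the construction used in the Nondeterministic Polynomial-time Problem Challenge paper.

```python
import random

def generate_3sat(num_vars: int, clause_ratio: float = 4.26, seed: int = 0) -> list[list[int]]:
    """Generate a random 3-SAT instance as a list of clauses.

    Difficulty scales with num_vars; a clause-to-variable ratio near 4.26
    sits close to the satisfiability phase transition, where random
    instances tend to be hardest.
    """
    rng = random.Random(seed)
    clauses = []
    for _ in range(int(clause_ratio * num_vars)):
        chosen = rng.sample(range(1, num_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses

def check_assignment(clauses: list[list[int]], assignment: dict[int, bool]) -> bool:
    """Verify a claimed satisfying assignment in polynomial time."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

if __name__ == "__main__":
    instance = generate_3sat(num_vars=20, seed=1)
    # A model would be prompted with the clauses and asked for an assignment;
    # the all-True assignment is used here purely as a stand-in.
    claimed = {v: True for v in range(1, 21)}
    print(len(instance), "clauses; satisfied:", check_assignment(instance, claimed))
```

Because instance size, and hence difficulty, is just a parameter, such a generator can keep producing fresh, harder instances as models improve, which is the appeal of NP-complete problems for an ever-scaling benchmark.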