The field of automatic differentiation (AD) and high-performance computing is advancing rapidly, with recent work focused on the efficiency and scalability of computational models. New high-performance AD systems enable fast gradient and Hessian computation on GPUs by exploiting the inherent sparsity and locality of mesh-based energy functions, which minimizes memory traffic and avoids global synchronization. In parallel, novel algorithms and programming models optimize the trade-off between storing intermediate values and recomputing them, extracting maximum performance within given memory constraints.

Noteworthy papers in this area include: Locality-Aware Automatic Differentiation on the GPU for Mesh-Based Computations, which presents a high-performance system for differentiating functions defined on triangle meshes; DaCe AD: Unifying High-Performance Automatic Differentiation for Machine Learning and Scientific Computing, which provides a general and efficient AD engine that requires no code modifications; and A Geometric Multigrid-Accelerated Compact Gas-Kinetic Scheme for Fast Convergence in High-Speed Flows on GPUs, which proposes a GPU-optimized, geometric multigrid-accelerated, high-order compact gas-kinetic scheme that converges one to two orders of magnitude faster than previous explicit solvers.
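To make the sparsity-and-locality point concrete, here is a minimal JAX sketch, illustrative only and not the system from the paper above: the total energy of a triangle mesh is a sum of per-triangle terms, each touching only three vertices, so the global Hessian is block-sparse and can be assembled from small per-element Hessians instead of being formed densely. The mesh data and the `triangle_energy` and `total_energy` functions are hypothetical.

```python
import jax
import jax.numpy as jnp

# Toy mesh: 4 vertices in 2D, 2 triangles (indices into the vertex array).
verts = jnp.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tris = jnp.array([[0, 1, 2], [1, 3, 2]])

def triangle_energy(v):
    # Toy per-triangle energy: penalize deviation of each edge length
    # from a rest length of 1. Any energy with local support would
    # exhibit the same sparsity structure.
    edges = jnp.stack([v[1] - v[0], v[2] - v[1], v[0] - v[2]])
    lengths = jnp.linalg.norm(edges, axis=1)
    return jnp.sum((lengths - 1.0) ** 2)

def total_energy(verts):
    # Sum of per-triangle energies; each term reads only 3 vertices.
    return jnp.sum(jax.vmap(lambda t: triangle_energy(verts[t]))(tris))

grad = jax.grad(total_energy)(verts)  # per-vertex forces, shape (4, 2)

# Instead of the dense 8x8 global Hessian, compute only the small
# per-triangle blocks; a caller would scatter these into a sparse matrix.
local_hessians = jax.vmap(lambda t: jax.hessian(triangle_energy)(verts[t]))(tris)
print(grad.shape, local_hessians.shape)  # (4, 2) (2, 3, 2, 3, 2)
```

A locality-aware GPU system would additionally schedule these per-element computations to minimize memory traffic and avoid global synchronization, but the block-sparse structure it exploits is the one shown here.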
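The store-versus-recompute trade-off mentioned above is what gradient checkpointing controls in reverse-mode AD. Below is a minimal sketch using jax.checkpoint; the `layer` and `deep_net` functions are illustrative assumptions, not code from DaCe AD.

```python
import jax
import jax.numpy as jnp

def layer(x, w):
    # One nonlinear layer; its activations would normally be stored
    # for the backward pass.
    return jnp.tanh(x @ w)

def deep_net(x, ws):
    # Wrapping each layer in jax.checkpoint tells reverse-mode AD to
    # discard its intermediates and recompute them during the backward
    # pass: less memory, more FLOPs. Removing the wrapper flips the
    # trade-off back toward storing everything.
    for w in ws:
        x = jax.checkpoint(layer)(x, w)
    return jnp.sum(x)

key = jax.random.PRNGKey(0)
ws = [jax.random.normal(k, (64, 64)) for k in jax.random.split(key, 8)]
x = jnp.ones((64,))
grads = jax.grad(deep_net, argnums=1)(x, ws)
print(len(grads), grads[0].shape)  # 8 (64, 64)
```

Uniform manual annotation like this is the blunt form of the technique; the algorithms mentioned above aim instead to choose which values to store and which to recompute automatically, so as to maximize performance within a given memory budget.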
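Geometric multigrid acceleration itself is a classical technique. As a generic illustration of why it converges so much faster than single-level explicit iteration, here is a textbook V-cycle for the 1D Poisson equation, not the paper's compact gas-kinetic scheme:

```python
import jax.numpy as jnp

def smooth(u, f, h, iters=3):
    # Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet
    # boundaries; damps high-frequency error components.
    for _ in range(iters):
        u_new = 0.5 * (jnp.roll(u, 1) + jnp.roll(u, -1) + h * h * f)
        u = u.at[1:-1].set((2.0 / 3.0) * u_new[1:-1] + (1.0 / 3.0) * u[1:-1])
    return u

def residual(u, f, h):
    r = f + (jnp.roll(u, 1) - 2.0 * u + jnp.roll(u, -1)) / (h * h)
    return r.at[0].set(0.0).at[-1].set(0.0)

def v_cycle(u, f, h):
    # Smooth, restrict the residual to a coarser grid, solve there
    # (recursively), interpolate the correction back, smooth again.
    if u.shape[0] <= 3:
        return smooth(u, f, h, iters=50)  # coarsest level: just iterate
    u = smooth(u, f, h)
    r = residual(u, f, h)
    r_coarse = r[::2]                     # injection restriction
    e_coarse = v_cycle(jnp.zeros_like(r_coarse), r_coarse, 2.0 * h)
    e = jnp.interp(jnp.arange(u.shape[0]) * h,
                   jnp.arange(r_coarse.shape[0]) * 2.0 * h, e_coarse)
    return smooth(u + e, f, h)

n, h = 65, 1.0 / 64
xs = jnp.arange(n) * h
f = jnp.pi ** 2 * jnp.sin(jnp.pi * xs)    # exact solution: sin(pi x)
u = jnp.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print(float(jnp.max(jnp.abs(u - jnp.sin(jnp.pi * xs)))))  # near discretization error
```

Each V-cycle reduces the error by a factor independent of the grid size, whereas single-level explicit iteration slows down as the grid is refined; that grid-independent convergence is the generic source of the order-of-magnitude speedups reported for multigrid-accelerated solvers.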