The field of kernel methods and machine learning is seeing significant developments, with a focus on improving the efficiency and effectiveness of kernel-based testing. Rather than relying on manually specified kernels, researchers are exploring approaches that learn the hypothesis and the kernel simultaneously, which has led to methods such as anchor-based maximum discrepancy and aggregated statistics that explicitly incorporate kernel diversity. A second line of work analyzes the spectral properties of kernels, challenging the common heuristic that richer spectra yield better generalization. In addition, studies of kernel regression learning curves and of the rank of matrices arising from singular kernel functions are providing new insights into the performance and complexity of kernel-based algorithms.

Notable papers in this area include:

- Anchor-based Maximum Discrepancy for Relative Similarity Testing, which proposes a relative similarity test that learns a proper hypothesis and kernel simultaneously.
- DUAL: Learning Diverse Kernels for Aggregated Two-sample and Independence Testing, which introduces an aggregated statistic that explicitly incorporates kernel diversity.
- Spectral Analysis of Molecular Kernels: When Richer Features Do Not Guarantee Better Generalization, which challenges the heuristic that richer spectra yield better generalization.
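To make the idea of aggregating a test statistic over a diverse kernel family concrete, here is a minimal sketch of a kernel two-sample (MMD-style) statistic computed with Gaussian kernels at several bandwidths and aggregated by taking the maximum. This is an illustrative stand-in, not the DUAL method or any specific paper's algorithm: the bandwidth grid and the max-aggregation rule are assumptions for the example, and a real test would calibrate thresholds with a multiple-testing correction or permutation procedure.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    # Pairwise Gaussian kernel values between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_biased(X, Y, bandwidth):
    # Biased estimate of squared MMD for one kernel: the squared distance
    # between empirical mean embeddings, so it is always nonnegative.
    kxx = gaussian_kernel(X, X, bandwidth).mean()
    kyy = gaussian_kernel(Y, Y, bandwidth).mean()
    kxy = gaussian_kernel(X, Y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

def aggregated_mmd2(X, Y, bandwidths=(0.5, 1.0, 2.0)):
    # Naive aggregation over a diverse kernel family: take the largest
    # statistic across bandwidths (hypothetical aggregation rule chosen
    # for illustration only).
    return max(mmd2_biased(X, Y, b) for b in bandwidths)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y_same = rng.normal(0.0, 1.0, size=(200, 2))   # same distribution as X
Y_diff = rng.normal(2.0, 1.0, size=(200, 2))   # clear mean shift

# A clear distribution shift should produce a much larger statistic.
print(aggregated_mmd2(X, Y_same) < aggregated_mmd2(X, Y_diff))  # True
```

Aggregating over several bandwidths avoids committing to a single kernel whose scale may be mismatched to the data, which is the practical motivation behind diversity-aware aggregated tests.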