The field of differentially private machine learning is moving toward more accurate and efficient algorithms for private data analysis. Recent research has focused on improving the utility guarantees of differentially private mechanisms, particularly for low-rank approximation and covariance estimation. New perturbation bounds and characterizations yield sharper estimates of the error incurred by private data analysis, with direct implications for applications such as private PCA and covariance estimation. Notable contributions include tight zCDP characterizations of fundamental mechanisms (for example, one paper derives tight zCDP characterizations for the Laplace mechanism and the discrete Laplace mechanism) and a simple, efficient algorithm for private rank-r approximation based on matrix coherence. Another paper presents a differentially private algorithm for Max-Cut and other constraint satisfaction problems under low-coherence assumptions.
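To make the discussion concrete, here is a minimal sketch of the classic (continuous) Laplace mechanism referenced above: a query result is released with Laplace noise whose scale is calibrated to the query's sensitivity divided by the privacy parameter epsilon. The function name and signature are illustrative, not taken from any of the papers summarized here.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value with Laplace(0, sensitivity/epsilon) noise,
    the standard epsilon-differentially-private additive mechanism."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse-CDF: u uniform on (-0.5, 0.5)
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

Smaller epsilon (stronger privacy) means larger noise scale; the zCDP characterizations mentioned above give tight bounds on how this mechanism composes under zero-concentrated differential privacy, rather than changing the mechanism itself.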