The field of differential privacy and information-theoretic security is developing more rigorous methods for quantifying and mitigating privacy risks in data analysis and communication systems. Researchers are drawing new connections between information theory and differential privacy, for example by characterizing differential privacy guarantees in terms of channel properties. There is also growing interest in privacy-preserving algorithms and mechanisms for specific applications, including machine learning and distributed estimation. Noteworthy papers in this area include:

- Quantifying Information Disclosure During Gradient Descent Using Gradient Uniqueness, which presents a principled disclosure metric for privacy auditing in machine learning.
- N-output Mechanism: Estimating Statistical Information from Numerical Data under Local Differential Privacy, which proposes a generalized framework for constructing optimal mechanisms for arbitrary output sizes.
- Local Information-Theoretic Security via Euclidean Geometry, which introduces a methodology for investigating local properties of secure communication over discrete memoryless wiretap channels.
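To make the local differential privacy setting concrete, the sketch below shows a standard baseline: the continuous Laplace mechanism applied to bounded numerical data, with unbiased mean estimation from the perturbed reports. This is a textbook mechanism, not the N-output mechanism from the paper above (which the summary describes only at a high level); the value range, epsilon, and helper names are illustrative assumptions.

```python
import numpy as np

def laplace_ldp_perturb(x, epsilon, lo=-1.0, hi=1.0, rng=None):
    """Perturb one bounded numeric value under epsilon-LDP.

    Classic Laplace mechanism (a baseline, not the paper's N-output
    mechanism): after clipping to [lo, hi], the sensitivity of the
    identity query is (hi - lo), so Laplace noise with scale
    (hi - lo) / epsilon yields epsilon-local differential privacy.
    """
    rng = rng or np.random.default_rng()
    x = float(np.clip(x, lo, hi))
    scale = (hi - lo) / epsilon
    return x + rng.laplace(0.0, scale)

# Each user perturbs locally; the aggregator averages the reports.
# The noise has zero mean, so the sample mean is an unbiased estimate.
rng = np.random.default_rng(0)
true_values = rng.uniform(-1.0, 1.0, size=100_000)
reports = [laplace_ldp_perturb(v, epsilon=1.0, rng=rng) for v in true_values]
print(f"true mean:    {true_values.mean():+.4f}")
print(f"LDP estimate: {np.mean(reports):+.4f}")
```

The estimate's variance grows as the range widens or epsilon shrinks, which is precisely the accuracy-privacy trade-off that generalized frameworks such as the N-output mechanism aim to optimize.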