Machine learning research is increasingly focused on security and privacy. Researchers are formalizing new threat models and developing methods to defend against data leakage and poisoning attacks. A key area of focus is federated learning, in which multiple clients collaboratively train a global model without sharing raw data. This approach remains vulnerable to attacks that compromise client privacy or corrupt the shared model, which has motivated new protocols and aggregation algorithms that combine Byzantine robustness with privacy preservation. Another important direction is verifiable machine learning, in which any party can efficiently check that a computation was performed correctly. Minimal sketches of these building blocks follow the paper list below.

Noteworthy papers in this area include:

- ImprovDML, which achieves high model accuracy while ensuring privacy preservation and resilience to Byzantine attacks.
- PDLRecover, which efficiently recovers a poisoned global model while preserving privacy.
- Computational Attestations of Polynomial Integrity Towards Verifiable Machine-Learning, which proves the correct training of a differentially private linear regression in minutes.
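To make the federated setting concrete, here is a minimal FedAvg-style sketch in Python. The linear-model client update, learning rate, and size-weighted averaging are illustrative assumptions, not details drawn from any of the papers above.

```python
import numpy as np

def client_update(global_weights, data, labels, lr=0.1, epochs=1):
    """One client's local step: a linear model trained with plain
    gradient descent on squared loss (illustrative only)."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(global_weights, client_datasets, rounds=10):
    """Server loop: broadcast the model, collect local updates, and
    average them weighted by each client's dataset size. Raw data
    never leaves a client; only model parameters are exchanged."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data, labels in client_datasets:
            updates.append(client_update(global_weights, data, labels))
            sizes.append(len(labels))
        global_weights = np.average(updates, axis=0, weights=sizes)
    return global_weights
```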
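Byzantine robustness is often obtained by replacing the plain average with a robust aggregator. The sketch below uses a coordinate-wise trimmed mean, a standard robust estimator; this particular choice is an assumption for illustration and is not claimed to be the mechanism used in ImprovDML.

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean over client updates: for each
    coordinate, drop the trim_k smallest and trim_k largest values,
    then average the rest. This bounds the influence of up to
    trim_k arbitrarily corrupted (Byzantine) clients."""
    stacked = np.sort(np.stack(updates), axis=0)
    kept = stacked[trim_k : len(updates) - trim_k]
    return kept.mean(axis=0)
```

In the server loop above, this would replace the `np.average` call, with `trim_k` set to an upper bound on the number of malicious clients (which requires `len(updates) > 2 * trim_k`).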
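For the differentially private linear regression mentioned in the last paper, one simple training recipe is sufficient-statistics perturbation: add calibrated Gaussian noise to X^T X and X^T y, then solve the noisy normal equations. The clipping bound and single shared noise scale below are simplifying assumptions; the paper's contribution is proving such a training run correct, not this particular mechanism.

```python
import numpy as np

def dp_linear_regression(X, y, epsilon, delta, bound=1.0, seed=None):
    """Toy sufficient-statistics perturbation for approximately
    (epsilon, delta)-DP linear regression. Assumes each row of X and
    each label in y has been clipped to L2 norm <= bound, so X^T X
    and X^T y have bounded sensitivity; a careful analysis would
    split the privacy budget across the two releases."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Gaussian-mechanism noise scale (illustrative calibration).
    sigma = bound**2 * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=(d, d))
    noisy_xtx = X.T @ X + (noise + noise.T) / 2.0  # symmetric noise
    noisy_xty = X.T @ y + rng.normal(0.0, sigma, size=d)
    # Ridge term keeps the noisy covariance positive definite.
    return np.linalg.solve(noisy_xtx + np.eye(d), noisy_xty)
```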