Differential privacy and federated learning are advancing rapidly, driven by methods that protect sensitive information while still enabling collaborative model training. Recent work applies differential privacy to new domains, including network assortativity and trajectory data preparation, and studies federated learning as a way to improve model performance without centralizing raw data. Several approaches integrate the two, notably local differential privacy and correlated noise injection, and these advances could substantially shape the development of trustworthy, privacy-preserving machine learning models.

Noteworthy papers:
- Fine-grained Manipulation Attacks to Local Differential Privacy Protocols for Data Streams: develops novel attacks that compromise local differential privacy protocols.
- Towards Trustworthy Federated Learning with Untrusted Participants: proposes a robust federated learning algorithm with strong privacy-utility trade-offs.
- FedRE: achieves robust and effective federated learning under personalized privacy preferences.
- FedTDP: a unified framework for trajectory data preparation via federated learning that preserves data privacy.
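
To make the local differential privacy idea mentioned above concrete, the sketch below implements classic randomized response for a binary attribute, together with the server-side debiasing step. This is a minimal illustration of the general mechanism, not code from any of the cited papers; all function names and parameters are illustrative.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.

    Each client runs this locally, so the server only ever sees noisy
    reports; the mechanism satisfies epsilon-local differential privacy
    for a single binary value.
    """
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def debiased_mean(reports, epsilon: float) -> float:
    """Unbiased server-side estimate of the true proportion of 1s.

    Since E[observed] = p * true + (1 - p) * (1 - true), the server
    inverts that affine map to recover an estimate of the true mean.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Illustrative run: 1000 clients, true proportion of 1s is 0.7.
random.seed(0)
true_bits = [1] * 700 + [0] * 300
eps = 1.0
reports = [randomized_response(b, eps) for b in true_bits]
estimate = debiased_mean(reports, eps)
```

Smaller epsilon means more flipping (stronger privacy) and a noisier estimate; the debiasing step keeps the estimate unbiased, but its variance grows as epsilon shrinks.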