Machine learning research is increasingly focused on building privacy-preserving and adaptable models. Two directions stand out: machine unlearning, which lets a model efficiently remove the influence of deleted data, and continual learning, which addresses catastrophic forgetting when a model is trained sequentially on multiple tasks. Recent work pursues more effective methods along both lines, including approximate unlearning algorithms, feature learning, and synthetic boundary experience blending. Noteworthy papers in this area include 'The Right to be Forgotten in Pruning: Unveil Machine Unlearning on Sparse Models', which proposes an un-pruning algorithm to eliminate the impact of deleted data on model pruning, and 'Continual Learning with Synthetic Boundary Experience Blending', which introduces a training framework that integrates knowledge from both stored key samples and synthetic, boundary-adjacent data.
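To make the replay idea concrete, the following is a minimal sketch of blending stored key samples with synthetic, boundary-adjacent data during rehearsal. The function names, the interpolation-with-jitter heuristic for synthesizing boundary points, and the fixed blending ratio are all illustrative assumptions for exposition; they are not the actual algorithm from 'Continual Learning with Synthetic Boundary Experience Blending'.

```python
import random

def make_boundary_samples(exemplars_a, exemplars_b, n, alpha=0.5, jitter=0.05):
    """Synthesize boundary-adjacent points by interpolating between stored
    exemplars of two different classes, with small random jitter.
    (Illustrative heuristic only; real methods may use the model's
    decision boundary directly.)"""
    samples = []
    for _ in range(n):
        xa = random.choice(exemplars_a)
        xb = random.choice(exemplars_b)
        samples.append([
            (1 - alpha) * a + alpha * b + random.uniform(-jitter, jitter)
            for a, b in zip(xa, xb)
        ])
    return samples

def blended_replay_batch(stored, synthetic, batch_size, mix=0.5):
    """Draw one rehearsal batch that mixes stored key samples with
    synthetic boundary-adjacent samples in proportion `mix`."""
    n_syn = int(batch_size * mix)
    n_store = batch_size - n_syn
    return random.sample(stored, n_store) + random.sample(synthetic, n_syn)

if __name__ == "__main__":
    random.seed(0)
    class_a = [[0.0, 0.0], [0.1, 0.0]]   # stored key samples, class A
    class_b = [[1.0, 1.0], [0.9, 1.0]]   # stored key samples, class B
    synthetic = make_boundary_samples(class_a, class_b, n=4)
    batch = blended_replay_batch(class_a + class_b, synthetic, batch_size=4)
    print(len(batch))
```

In a continual-learning loop, batches drawn this way would be interleaved with batches from the current task, so the model rehearses both remembered exemplars and points near the old decision boundary.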