The field of large language models is moving toward safer and more efficient fine-tuning methods. Recent research has highlighted the importance of preserving safety alignment in fine-tuned models, as well as the need for effective unlearning methods to remove unwanted knowledge. Approaches such as selective layer-wise model merging and look-ahead tuning have shown promise in maintaining model safety without sacrificing task performance. There is also growing awareness of the pitfalls of overtraining, which can degrade downstream fine-tuning performance. Noteworthy papers in this area include SafeMERGE, a post-fine-tuning framework that preserves safety alignment by selectively merging fine-tuned and safety-aligned model layers; LoTUS, a machine unlearning method that smooths prediction probabilities to mitigate over-confidence; and LookAhead Tuning, a low-resource yet effective method for maintaining model safety during fine-tuning.
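To make the selective layer-wise merging idea concrete, here is a minimal sketch, assuming two same-architecture checkpoints (the fine-tuned model and the original safety-aligned model) and a caller-supplied set of layers judged to have lost alignment. The `degraded_layers` set and the simple linear interpolation rule are illustrative assumptions, not SafeMERGE's exact procedure.

```python
import torch
import torch.nn as nn

def merge_layerwise(finetuned: nn.Module, aligned: nn.Module,
                    degraded_layers: set, alpha: float = 0.5) -> dict:
    """Selectively merge two same-architecture checkpoints.

    Layers flagged as safety-degraded are interpolated back toward the
    safety-aligned originals; every other layer keeps its fine-tuned weights.
    """
    ft_state = finetuned.state_dict()
    al_state = aligned.state_dict()
    merged = {}
    for name, ft_param in ft_state.items():
        layer_id = name.rsplit(".", 1)[0]  # "layers.3.weight" -> "layers.3"
        if layer_id in degraded_layers:
            merged[name] = alpha * ft_param + (1 - alpha) * al_state[name]
        else:
            merged[name] = ft_param.clone()
    return merged

# Toy usage: pretend the first linear layer ("0") lost its alignment.
torch.manual_seed(0)
def make_model():
    return nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))

finetuned, aligned = make_model(), make_model()
finetuned.load_state_dict(merge_layerwise(finetuned, aligned, {"0"}))
```

In a real pipeline the degraded set would come from some criterion evaluated on safety data rather than being hard-coded; it is fixed here purely for illustration.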
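The prediction-smoothing intuition behind LoTUS-style unlearning can be sketched similarly. In this illustrative version (the uniform-blending rule and KL objective are assumptions, not the paper's exact formulation), softmax outputs on forget-set examples are pulled toward the uniform distribution and used as soft targets, so gradient steps progressively flatten the model's over-confident predictions on data to be forgotten.

```python
import torch
import torch.nn.functional as F

def smoothed_targets(logits: torch.Tensor, beta: float = 0.7) -> torch.Tensor:
    """Blend predicted probabilities toward uniform (beta=1 keeps them as-is)."""
    probs = F.softmax(logits, dim=-1)
    uniform = torch.full_like(probs, 1.0 / probs.size(-1))
    return beta * probs + (1.0 - beta) * uniform

def unlearning_loss(model, forget_batch: torch.Tensor) -> torch.Tensor:
    """KL divergence pulling predictions toward their own smoothed version."""
    logits = model(forget_batch)
    with torch.no_grad():  # soft targets are treated as constants
        targets = smoothed_targets(logits)
    return F.kl_div(F.log_softmax(logits, dim=-1), targets,
                    reduction="batchmean")

# Toy usage: one gradient step slightly flattens forget-set predictions.
torch.manual_seed(0)
model = torch.nn.Linear(8, 4)
loss = unlearning_loss(model, torch.randn(16, 8))
loss.backward()
```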