The field of large language models (LLMs) is seeing rapid progress in alignment and efficiency. Researchers are developing methods that improve how LLMs align with human preferences, values, and intentions. One notable direction is principled data selection, which enables more efficient and effective alignment under limited data and compute budgets. Another is the development of lightweight, scalable frameworks for post-training and fine-tuning, which improve reasoning capabilities and support personalized preference alignment. Together, these advances move LLMs toward generating more accurate, helpful, and honest content. Noteworthy papers include LECTOR, which achieves a 90.2% success rate in test-oriented learning scenarios, and InfiAlign, which generalizes strongly across diverse reasoning tasks while using only about 12% of the training data. P-Aligner and FaST likewise show promise for pre-aligning instructions and for personalized preference alignment, respectively.
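To make the data-selection idea concrete, the sketch below shows one generic pattern for subselecting alignment data: score each candidate instruction-response pair with a quality function and keep only the top fraction for fine-tuning. This is a minimal illustration under stated assumptions, not the actual method of InfiAlign or any paper above; the names (`Example`, `score_example`, `select_top_fraction`) and the toy length-based scorer are hypothetical, and a real pipeline would score with a reward or quality model instead.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    """A candidate instruction-response pair from the alignment data pool."""
    instruction: str
    response: str


def select_top_fraction(
    pool: List[Example],
    score_fn: Callable[[Example], float],
    fraction: float = 0.12,
) -> List[Example]:
    """Rank the pool by score and keep the highest-scoring fraction
    (e.g. ~12%, echoing the data budget reported for InfiAlign)."""
    ranked = sorted(pool, key=score_fn, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]


def score_example(ex: Example) -> float:
    """Placeholder scorer: prefers longer responses. A real system
    would substitute a learned reward or quality model here."""
    return float(len(ex.response.split()))


if __name__ == "__main__":
    pool = [
        Example("Explain overfitting.",
                "Overfitting occurs when a model memorizes training noise."),
        Example("Say hi.", "Hi."),
        Example("Summarize RLHF.",
                "RLHF fine-tunes a model against a learned reward signal."),
    ]
    # Keep the top half of this toy pool for fine-tuning.
    for ex in select_top_fraction(pool, score_example, fraction=0.5):
        print(ex.instruction)
```

The design choice this pattern captures is that selection quality, not pool size, drives alignment efficiency: a small, well-scored subset can substitute for the full dataset, which is the premise behind the limited-resource results cited above.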