The field of large language models (LLMs) is moving toward more controllable and reliable generation. Recent work focuses on aligning outputs with target attributes, controlling attribute intensity precisely, and intervening adaptively rather than uniformly. Together, these advances enable finer-grained control over LLM outputs, making them better suited to real-world applications. Noteworthy papers include PIXEL, which proposes a position-wise activation steering framework for reliable behavior control, and Precise Attribute Intensity Control, which targets graded intensity levels rather than the mere presence or absence of an attribute. In-Distribution Steering and Mechanistic Error Reduction with Abstention also show promising results, the former balancing control against coherence and the latter mitigating errors through selective interventions. Language steering in latent space is a further line of research with potential for mitigating unintended code-switching in multilingual LLMs.
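To make the underlying activation-steering idea concrete, the sketch below adds a scaled steering vector to a model's hidden states at selected token positions, combining position-wise intervention with an intensity knob. This is a minimal illustration under assumed shapes and names; the function `steer_hidden_states` and its arguments are hypothetical and not taken from any of the cited papers.

```python
import numpy as np

def steer_hidden_states(hidden, steering_vec, positions, alpha=1.0):
    """Add a scaled steering vector to hidden states at chosen positions.

    hidden:       (seq_len, d_model) array of residual-stream activations
    steering_vec: (d_model,) direction associated with a target attribute
    positions:    token indices to intervene on (position-wise control)
    alpha:        intensity knob; larger alpha pushes the representation
                  further along the attribute direction
    """
    steered = hidden.copy()
    steered[positions] += alpha * steering_vec
    return steered

# Toy example: 4 tokens, 3-dim hidden states, steer only the last two tokens.
hidden = np.zeros((4, 3))
vec = np.array([1.0, 0.0, -1.0])
out = steer_hidden_states(hidden, vec, positions=[2, 3], alpha=2.0)
```

In a real model this edit would be applied inside the forward pass (for example, via a hook on a transformer layer), and varying `alpha` is one simple way to trade off attribute strength against output coherence.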