Research on large language models (LLMs) is advancing rapidly, with particular attention to alignment and optimization techniques. Recent work centers on making LLMs better reflect human values and preferences, with approaches such as survey-to-behavior alignment and reward-guided decoding showing promise. Interest in multimodal LLMs is also growing, with techniques such as input-dependent steering and multi-objective alignment via value-guided inference-time search under active exploration. Noteworthy papers in this area include 'Survey-to-Behavior: Downstream Alignment of Human Values in LLMs via Survey Questions', which shows that fine-tuning LLMs on value-survey questions measurably changes their downstream behavior, and 'Controlling Multimodal LLMs via Reward-guided Decoding', which introduces reward-guided decoding for multimodal LLMs to improve their visual grounding. Together, these advances stand to improve both the performance and the safety of LLMs across a wide range of applications.
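
To make the reward-guided decoding idea concrete, the sketch below shows one common way such methods are structured: at each decoding step, the top-k next-token candidates from the base model are re-scored by blending the model's own log-probabilities with a reward signal before sampling. This is a minimal illustrative sketch, not the algorithm from the cited paper; the reward values here are random stand-ins for what would, in practice, come from a learned reward model (e.g. a visual-grounding scorer), and the function names and the guidance weight alpha are assumptions introduced for illustration.

    # Minimal sketch of reward-guided decoding (illustrative, not the paper's method).
    # The reward tensor stands in for a hypothetical reward model's scores.
    import torch
    import torch.nn.functional as F

    def reward_guided_step(logits, candidate_rewards, alpha=1.0, top_k=8):
        """Re-weight the top-k next-token distribution with a reward score.

        logits:            (vocab_size,) next-token logits from the base LLM
        candidate_rewards: (top_k,) reward scores for the top-k candidate tokens
        alpha:             guidance strength (alpha = 0 recovers plain sampling)
        """
        log_probs = F.log_softmax(logits, dim=-1)
        topk_log_probs, topk_ids = log_probs.topk(top_k)
        # Combine the model's own likelihood with the reward signal.
        guided_scores = topk_log_probs + alpha * candidate_rewards
        guided_probs = F.softmax(guided_scores, dim=-1)
        # Sample the next token from the reward-adjusted distribution.
        choice = torch.multinomial(guided_probs, num_samples=1)
        return topk_ids[choice].item()

    # Toy usage with random tensors standing in for real model outputs.
    vocab_size = 32000
    logits = torch.randn(vocab_size)
    rewards = torch.randn(8)  # e.g. hypothetical visual-grounding rewards per candidate
    next_token_id = reward_guided_step(logits, rewards, alpha=2.0)

The key design choice in schemes of this kind is the guidance weight: a larger alpha pushes generation toward reward-preferred tokens at the cost of fluency, while alpha = 0 falls back to ordinary sampling from the base model.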