The field of large language models (LLMs) is moving towards more efficient and effective alignment with human expectations. Researchers are developing new methodologies for data collection, training, and evaluation to improve the reliability and safety of LLMs. One notable direction is the incorporation of mechanism-design frameworks for truthful and trust-minimized data sharing, with guarantees of dominant-strategy incentive compatibility and individual rationality. Another is symbolic reward decomposition, which preserves the structure of each constitutional principle within the reward mechanism, making the alignment process easier to interpret and control. There is also growing interest in generative embodied reward models that provide fine-grained behavioral distinctions and enable test-time scaling.

Noteworthy papers in this area include QA-LIGN, which introduces an automatic symbolic reward decomposition approach for aligning LLMs with explicit principles, and EQA-RM, a generative multimodal reward model architected specifically for Embodied Question Answering tasks.
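The two mechanism-design guarantees mentioned above have standard game-theoretic definitions; the sketch below states them in generic notation (the symbols are not drawn from any of the surveyed papers). Here u_i(r_i, r_{-i}; v_i) denotes participant i's utility under the mechanism when the reports are (r_i, r_{-i}) and i's true type (e.g., the value of the data they contribute) is v_i.

```latex
% Dominant-strategy incentive compatibility (DSIC):
% truthful reporting is optimal for every participant,
% regardless of what the other participants report.
\forall i,\ \forall v_i, v_i', v_{-i}:\quad
  u_i(v_i, v_{-i}; v_i) \;\ge\; u_i(v_i', v_{-i}; v_i)

% Individual rationality (IR):
% truthful participation never leaves a participant worse off
% than opting out (outside option normalized to zero utility).
\forall i,\ \forall v_i, v_{-i}:\quad
  u_i(v_i, v_{-i}; v_i) \;\ge\; 0
```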
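To make the idea of preserving each constitutional principle inside the reward concrete, here is a minimal Python sketch of a per-principle reward decomposition. It is illustrative only and not QA-LIGN's actual implementation: the principle names, scoring functions, and weights are hypothetical placeholders.

```python
"""Minimal sketch of a per-principle ("symbolic") reward decomposition.

Illustrative only; not QA-LIGN's actual implementation. The principles,
scorers, and weights below are hypothetical.
"""
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Principle:
    name: str                           # e.g. "harmlessness"
    score: Callable[[str, str], float]  # (prompt, response) -> score in [0, 1]
    weight: float = 1.0


def decomposed_reward(prompt: str, response: str,
                      principles: List[Principle]) -> Tuple[float, Dict[str, float]]:
    """Score each principle separately, then aggregate into a scalar reward.

    Returning the per-principle breakdown alongside the scalar is what keeps
    the reward interpretable and controllable: individual principles can be
    inspected, reweighted, or ablated without retraining a monolithic model.
    """
    breakdown = {p.name: p.score(prompt, response) for p in principles}
    total = sum(p.weight * breakdown[p.name] for p in principles)
    return total, breakdown


# Hypothetical usage with placeholder scorers.
principles = [
    Principle("harmlessness", lambda q, a: 0.0 if "unsafe" in a else 1.0, weight=2.0),
    Principle("helpfulness",  lambda q, a: min(len(a) / 200.0, 1.0)),
]
reward, per_principle = decomposed_reward("example prompt", "example response", principles)
```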
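Test-time scaling for a generative reward model is commonly realized by sampling several independent critiques and aggregating their scores, so that extra inference compute buys a lower-variance reward estimate. The sketch below illustrates that general idea with a hypothetical judge function; it is an assumption for illustration, not EQA-RM's interface.

```python
"""Sketch of test-time scaling for a generative reward model.

Illustrative only: `noisy_judge` is a hypothetical stand-in for a generative
reward model that emits a textual critique plus a scalar score.
"""
import random
import statistics
from typing import Callable, Tuple

# (episode, answer) -> (critique text, scalar score)
Judge = Callable[[str, str], Tuple[str, float]]


def scaled_reward(episode: str, answer: str, judge: Judge, n_samples: int = 8) -> float:
    """Sample several independent critiques and average their scores."""
    scores = [judge(episode, answer)[1] for _ in range(n_samples)]
    return statistics.mean(scores)


def noisy_judge(episode: str, answer: str) -> Tuple[str, float]:
    # Placeholder standing in for sampled model critiques.
    return "placeholder critique", random.uniform(0.0, 1.0)


print(scaled_reward("episode trace", "candidate answer", noisy_judge, n_samples=16))
```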