The field of artificial intelligence is moving toward greater autonomy and richer human interaction, with large language models (LLMs) playing a central role in this development. Recent research has focused on improving the reliability and safety of LLMs across applications ranging from autonomous systems to mental health support. A key challenge is addressing the limitations and risks of LLMs, such as hallucinations and context misalignment, which can lead to flawed or unsafe decisions. To mitigate these risks, researchers are exploring new approaches, such as cognition envelopes and structured prompting, that constrain AI-generated decisions and yield more accurate and reliable outcomes (a minimal sketch of such a constraint check appears after the list below).

Noteworthy papers in this area include:

- Cognition Envelopes for Bounded AI Reasoning in Autonomous UAS Operations, which introduces the concept of cognition envelopes to establish reasoning boundaries for AI-generated decisions.
- Independent Clinical Evaluation of General-Purpose LLM Responses to Signals of Suicide Risk, which assesses how well LLM responses align with clinical guidelines for ethical communication and highlights the need for more effective methodologies to study human-AI interaction.
- AERMANI-VLM, which presents a framework for adapting pretrained vision-language models to aerial manipulation tasks while ensuring safe and reliable execution.
- Can Conversational AI Counsel for Change, which develops an open-source model for supporting dietary intentions in ambivalent individuals, demonstrating the potential of theory-driven LLMs in digital counseling.
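
To make the envelope idea concrete, here is a minimal sketch in Python of a cognition-envelope-style guard for a hypothetical UAS waypoint task. The Waypoint type, the geofence and altitude bounds, and the propose_waypoint_llm stub are illustrative assumptions, not the paper's implementation: the point is only that a deterministic check bounds an LLM-proposed decision and substitutes a safe default when the proposal leaves the envelope.

```python
# Hypothetical illustration of a cognition-envelope-style guard. All names,
# bounds, and the stubbed model call below are assumptions for this sketch,
# not the API or method of the cited paper.
from dataclasses import dataclass


@dataclass
class Waypoint:
    lat: float
    lon: float
    alt_m: float


# Deterministic reasoning boundary: the envelope accepts or rejects an
# AI-proposed decision before it ever reaches the flight controller.
GEOFENCE = {"lat": (40.0, 40.1), "lon": (-75.1, -75.0)}
MAX_ALT_M = 120.0


def within_envelope(wp: Waypoint) -> bool:
    """Check the proposed waypoint against the geofence and altitude ceiling."""
    lat_ok = GEOFENCE["lat"][0] <= wp.lat <= GEOFENCE["lat"][1]
    lon_ok = GEOFENCE["lon"][0] <= wp.lon <= GEOFENCE["lon"][1]
    return lat_ok and lon_ok and 0.0 < wp.alt_m <= MAX_ALT_M


def propose_waypoint_llm(prompt: str) -> Waypoint:
    """Stand-in for a structured-prompt LLM call returning a parsed action."""
    return Waypoint(lat=40.05, lon=-75.05, alt_m=150.0)  # exceeds MAX_ALT_M


def safe_fallback() -> Waypoint:
    """Conservative default used when the proposal violates the envelope."""
    return Waypoint(lat=40.05, lon=-75.05, alt_m=50.0)


proposal = propose_waypoint_llm("Survey the north field; stay within limits.")
action = proposal if within_envelope(proposal) else safe_fallback()
print(f"executing: {action}")
```

The design point is that the boundary check is ordinary, auditable code rather than another model call, so a hallucinated or context-misaligned proposal is caught by a fixed rule instead of being trusted outright.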