The field of large language models (LLMs) is moving towards a greater emphasis on evaluating trustworthiness and safety in scientific applications. Recent studies have highlighted the importance of transparency and standardized evaluation frameworks in assessing the potential risks and benefits of LLMs. The development of comprehensive frameworks for evaluating LLM trustworthiness, such as SciTrust 2.0, is a significant step forward in this direction. Additionally, the creation of benchmarks like EU-Agent-Bench, which measures illegal behavior of LLM agents under EU law, demonstrates the need for more robust alignment techniques to mitigate catastrophic misuse risks. Noteworthy papers in this area include the Quantifying CBRN Risk in Frontier Models paper, which exposes critical safety vulnerabilities in current LLMs, and the SciTrust 2.0 paper, which introduces a comprehensive framework for evaluating LLM trustworthiness in scientific applications.