The field of Artificial Intelligence (AI) is expanding rapidly, with growing attention to sustainability and energy efficiency. Recent work has highlighted the need to quantify the climate risk associated with AI systems, particularly those built on large language models (LLMs). Researchers are developing methods to estimate the carbon footprint of LLMs, including frameworks such as G-TRACE and CO2-Meter. These tools measure the energy consumption and carbon emissions of LLM training and inference, providing concrete data to guide sustainable AI deployment.

Other studies have examined LLM performance on edge devices, demonstrating the potential for localized, privacy-preserving inference. Small LLMs paired with local accelerators have been shown to achieve competitive accuracy while consuming less energy than server-scale alternatives.

Noteworthy papers in this area include the introduction of the AI Sustainability Pyramid, a governance model for sustainable AI deployment, and the BRACE framework for benchmarking LLMs on both energy efficiency and functional correctness. In addition, the proposal of intelligence per watt (IPW) as a metric for assessing local inference capability relative to power draw offers a practical yardstick for comparing deployment options.
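To make the accounting concrete, the sketch below shows a standard back-of-envelope estimate of inference energy and emissions, plus one plausible reading of an efficiency-per-watt metric. It is not the G-TRACE, CO2-Meter, or BRACE methodology; the function names, the PUE overhead, and the grid carbon-intensity constant are illustrative assumptions.

```python
# Hedged sketch: back-of-envelope energy/carbon accounting for LLM inference.
# All constants below are assumptions for illustration, not values from the
# frameworks named in the text.

def inference_energy_kwh(avg_power_watts: float, duration_s: float,
                         pue: float = 1.2) -> float:
    """Energy drawn by a workload: average device power times wall-clock
    time, scaled by an assumed data-center PUE overhead of 1.2."""
    return avg_power_watts * duration_s / 3_600_000 * pue

def carbon_kg(energy_kwh: float,
              grid_intensity_kg_per_kwh: float = 0.4) -> float:
    """CO2-equivalent emissions: energy times grid carbon intensity
    (0.4 kg CO2e/kWh is a rough global-average assumption)."""
    return energy_kwh * grid_intensity_kg_per_kwh

def efficiency_per_watt(task_accuracy: float,
                        avg_power_watts: float) -> float:
    """One plausible reading of an 'intelligence per watt' style metric:
    benchmark accuracy divided by average power draw."""
    return task_accuracy / avg_power_watts

# Example: a 30 W edge accelerator serving requests for one hour
# at 62% benchmark accuracy (hypothetical numbers).
e = inference_energy_kwh(avg_power_watts=30, duration_s=3600)
print(f"energy: {e:.3f} kWh")                       # → energy: 0.036 kWh
print(f"carbon: {carbon_kg(e) * 1000:.1f} g CO2e")  # → carbon: 14.4 g CO2e
print(f"per-watt score: {efficiency_per_watt(0.62, 30):.4f}")
```

Under these assumptions, a low-power edge accelerator's footprint is dominated by its average draw and duty cycle, which is why the text's small-LLM-on-local-hardware results translate directly into energy savings.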