Research on large language models (LLMs) is advancing along two fronts: optimization techniques and deployment. On the optimization side, methods such as multi-objective directional prompting and local prompt optimization aim to improve LLM accuracy and reliability on tasks including reasoning, function calling, and math solving. On the deployment side, there is growing interest in running LLMs on edge devices, with an emphasis on sustainability and reduced carbon emissions. Noteworthy papers include MODP, a framework for multi-objective directional prompting; SPC, a novel approach to evaluating the step-by-step reliability of LLM reasoning; Local Prompt Optimization, which integrates with automatic prompt engineering methods to improve performance; and CarbonCall, which reduces carbon emissions and power consumption in edge AI systems.
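To make the multi-objective idea concrete, the sketch below scores candidate prompts against several weighted objectives and keeps the best-scoring one. This is a minimal illustration of the general technique, assuming toy objectives and a fixed weighting; the metric names, weights, and helper functions are not the MODP framework's actual interface.

```python
# Minimal sketch of multi-objective prompt selection (illustrative;
# the objectives, weights, and helpers are assumptions, not MODP's API).

from typing import Callable

# Hypothetical per-objective metric: maps (prompt, eval_set) -> score in [0, 1].
Objective = Callable[[str, list[dict]], float]

def weighted_score(prompt: str,
                   eval_set: list[dict],
                   objectives: dict[str, Objective],
                   weights: dict[str, float]) -> float:
    """Combine several objective scores into one scalar via fixed weights."""
    return sum(weights[name] * fn(prompt, eval_set)
               for name, fn in objectives.items())

def select_prompt(candidates: list[str],
                  eval_set: list[dict],
                  objectives: dict[str, Objective],
                  weights: dict[str, float]) -> str:
    """Return the candidate prompt with the highest weighted score."""
    return max(candidates,
               key=lambda p: weighted_score(p, eval_set, objectives, weights))

# Toy objectives: task accuracy (placeholder check) vs. prompt brevity.
def accuracy(prompt: str, eval_set: list[dict]) -> float:
    # Placeholder: in practice, run the LLM on eval_set and grade its answers.
    return sum(ex["label"] in prompt for ex in eval_set) / max(len(eval_set), 1)

def brevity(prompt: str, eval_set: list[dict]) -> float:
    return 1.0 / (1.0 + len(prompt.split()))

best = select_prompt(
    candidates=["Answer concisely: ...", "Think step by step, then answer: ..."],
    eval_set=[{"label": "step"}],
    objectives={"accuracy": accuracy, "brevity": brevity},
    weights={"accuracy": 0.8, "brevity": 0.2},
)
print(best)
```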
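Carbon-aware serving on edge devices can likewise be sketched as a routing rule that sends non-urgent requests to a smaller model when grid carbon intensity is high. The threshold, model names, and energy figures below are assumptions made for illustration, not CarbonCall's actual policy.

```python
# Minimal sketch of carbon-aware model routing on an edge device
# (illustrative; the threshold, model names, and energy estimates
# are assumptions, not taken from the CarbonCall paper).

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    est_energy_wh: float  # assumed rough energy cost per request

SMALL = Model("llm-3b-int4", est_energy_wh=0.5)
LARGE = Model("llm-13b-fp16", est_energy_wh=3.0)

CARBON_THRESHOLD_G_PER_KWH = 300.0  # assumed cutoff for "dirty" grid power

def route(carbon_intensity_g_per_kwh: float, latency_critical: bool) -> Model:
    """Use the large model only when grid power is clean or latency demands it."""
    if latency_critical or carbon_intensity_g_per_kwh < CARBON_THRESHOLD_G_PER_KWH:
        return LARGE
    return SMALL

def request_emissions_g(model: Model, carbon_intensity_g_per_kwh: float) -> float:
    """Estimated grams of CO2 emitted by one request at the current intensity."""
    return model.est_energy_wh / 1000.0 * carbon_intensity_g_per_kwh

# At a dirty-grid moment (450 g/kWh), a non-critical request is routed to the
# small model, cutting estimated per-request emissions roughly 6x.
model = route(carbon_intensity_g_per_kwh=450.0, latency_critical=False)
print(model.name, round(request_emissions_g(model, 450.0), 3), "g CO2")
```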