Large Language Model Optimization and Applications

Research on large language models (LLMs) is advancing along two fronts: optimization techniques and deployment. On the optimization side, methods such as multi-objective directional prompting and local prompt optimization aim to improve LLM accuracy and reliability on tasks including reasoning, function calling, and math problem solving. On the deployment side, there is growing interest in running LLMs on edge devices, with an emphasis on sustainability and reduced carbon emissions. Noteworthy papers include MODP, which introduces a framework for multi-objective directional prompting; SPC, which evaluates the step-by-step reliability of LLM reasoning via an adversarial self-play critic; Local Prompt Optimization, which integrates with automatic prompt engineering methods to improve performance; and CarbonCall, which reduces carbon emissions and power consumption in edge AI systems.
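To make the multi-objective idea concrete, below is a minimal sketch of scoring candidate prompts against several weighted objectives, in the spirit of MODP. All names here (`score_prompt`, the objective functions, the weights) are illustrative assumptions, not the paper's actual API or method.

```python
"""Hedged sketch: multi-objective prompt scoring.

Assumption: a prompt is judged by sampling several model responses and
aggregating per-objective scores with fixed weights. The objectives
below are hypothetical proxies, not MODP's actual metrics.
"""
from typing import Callable

# Each objective maps a model response to a score in [0, 1].
Objective = Callable[[str], float]

def accuracy_proxy(response: str) -> float:
    # Hypothetical stand-in: reward responses that commit to a final answer.
    return 1.0 if "Answer:" in response else 0.0

def brevity(response: str) -> float:
    # Shorter responses score higher; ~200 words as a soft budget.
    return max(0.0, 1.0 - len(response.split()) / 200)

def score_prompt(responses: list[str],
                 objectives: list[Objective],
                 weights: list[float]) -> float:
    """Weighted sum of per-objective mean scores over sampled responses."""
    total = 0.0
    for obj, w in zip(objectives, weights):
        total += w * sum(obj(r) for r in responses) / len(responses)
    return total

# Usage: compare candidate prompts by aggregate score and keep the best.
# The responses here are canned stand-ins for actual model samples.
candidates = {
    "direct_prompt": ["Answer: 42", "Answer: 41"],
    "verbose_prompt": ["It could be 42, though one might also argue for 41."],
}
best = max(candidates, key=lambda p: score_prompt(
    candidates[p], [accuracy_proxy, brevity], weights=[0.7, 0.3]))
print(best)  # "direct_prompt" under these illustrative weights
```

The weighted-sum aggregation is one simple design choice; a directional variant could instead steer edits toward whichever objective is currently furthest below target.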

Sources

MODP: Multi Objective Directional Prompting

SPC: Evolving Self-Play Critic via Adversarial Games for LLM Reasoning

Small Models, Big Tasks: An Exploratory Empirical Study on Small Language Models for Function Calling

CarbonCall: Sustainability-Aware Function Calling for Large Language Models on Edge Devices

Local Prompt Optimization

An Empirical Study on Prompt Compression for Large Language Models

DeepCritic: Deliberate Critique with Large Language Models
