Advances in Multi-Objective Optimization and Large Language Models

The field of multi-objective optimization is moving toward tighter integration of large language models (LLMs) into the algorithm-design loop. Recent research shows that LLMs can generate high-quality heuristics for complex problems and, in some settings, outperform traditional evolutionary algorithms. Pareto-based methods are also gaining traction, since they explore multiple objectives simultaneously and identify sets of non-dominated solutions rather than a single scalarized optimum (two illustrative sketches follow the paper list below). Noteworthy papers in this area include:

Pareto-NRPA, which introduces a novel Monte-Carlo search algorithm for multi-objective optimization problems.

GEPA, which uses natural language reflection to learn high-level rules from trial and error and outperforms reinforcement learning methods.

MeLA, which presents a metacognitive LLM-driven architecture for automatic heuristic design and demonstrates the potential of using cognitive science as a blueprint for AI architecture.

MPaGE, which leverages LLMs and a Pareto Front Grid technique to promote diversity and mitigate redundancy within the population.

MOCHA, which enables Pareto-stationarity exploration in multi-objective reinforcement learning through a weighted-Chebyshev actor-critic approach with a finite-time sample complexity guarantee.

AutoModSAT, which uses LLMs to automatically discover heuristics in a complex SAT solver and achieves superior performance over state-of-the-art solvers.
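
As a first sketch of the Pareto-based view described above, here is a minimal Python example of identifying non-dominated solutions, assuming minimization of all objectives; it is illustrative only and not drawn from any of the listed papers. A candidate survives only if no other candidate is at least as good on every objective and strictly better on at least one.

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if `a` Pareto-dominates `b` under minimization:
    no worse on every objective, strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Return the non-dominated subset of `points` via an O(n^2) scan."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Minimize (cost, latency): (3, 4) is dominated by (2, 3); the rest are
# mutually non-dominated and form the Pareto front.
print(pareto_front([(1, 5), (2, 3), (3, 4), (4, 1)]))
# -> [(1, 5), (2, 3), (4, 1)]
```

The quadratic scan is adequate for small candidate pools; larger populations typically call for a fast non-dominated sorting routine as used in NSGA-II-style algorithms.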
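The weighted-Chebyshev scalarization named in the MOCHA paper title can also be sketched. What follows is the standard textbook scalarization with illustrative variable names, not MOCHA's actor-critic algorithm: a candidate's vector of returns is scored by its worst weighted deviation from a utopia (ideal) point, and sweeping the weight vector traces out different Pareto-optimal trade-offs, including on non-convex fronts where linear scalarization fails.

```python
from typing import Sequence

def weighted_chebyshev(returns: Sequence[float], weights: Sequence[float],
                       utopia: Sequence[float]) -> float:
    """Standard weighted-Chebyshev scalarization (illustrative names):
    score = max_i  weights[i] * |utopia[i] - returns[i]|.
    Minimizing this score over candidates, for varying weights, can reach
    Pareto-optimal points even when the front is non-convex."""
    return max(w * abs(u - r) for w, u, r in zip(weights, utopia, returns))

# Hypothetical two-objective example with utopia point (1.0, 1.0):
# different weight vectors emphasize different objectives.
for w in [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]:
    print(w, weighted_chebyshev((0.6, 0.9), w, (1.0, 1.0)))
```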

Sources

Pareto-NRPA: A Novel Monte-Carlo Search Algorithm for Multi-Objective Optimization

GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning

MeLA: A Metacognitive LLM-Driven Architecture for Automatic Heuristic Design

Pareto-Grid-Guided Large Language Models for Fast and High-Quality Heuristics Design in Multi-Objective Combinatorial Optimization

Enabling Pareto-Stationarity Exploration in Multi-Objective Reinforcement Learning: A Multi-Objective Weighted-Chebyshev Actor-Critic Approach

Automatically discovering heuristics in a complex SAT solver with large language models
