The field of multi-objective optimization is moving toward more efficient and effective algorithms, with a growing focus on incorporating large language models (LLMs) to improve performance. Recent research has shown that LLMs can generate high-quality heuristics for complex problems and, in certain cases, outperform traditional evolutionary algorithms. Pareto-based methods are also becoming increasingly popular, as they allow multiple objectives to be explored simultaneously and non-dominated solutions to be identified; a minimal dominance check is sketched after the paper list below. Noteworthy papers in this area include:

- Pareto-NRPA, which introduces a novel Monte-Carlo search algorithm for multi-objective optimization problems.
- GEPA, which uses natural language reflection to learn high-level rules from trial and error, outperforming reinforcement learning methods.
- MeLA, which presents a metacognitive LLM-driven architecture for automatic heuristic design and demonstrates the potential of cognitive science as a blueprint for AI architecture.
- MPaGE, which leverages LLMs and a Pareto Front Grid technique to promote diversity and mitigate redundancy within the population.
- MOCHA, which enables Pareto-stationarity exploration in multi-objective reinforcement learning with finite-time sample-complexity guarantees.
- AutoModSAT, which optimizes complex SAT solvers via LLMs and achieves superior performance over state-of-the-art solvers.
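The notion of non-dominated solutions underpins all of the Pareto-based methods above. As a minimal illustration (not drawn from any of the cited papers), the following Python sketch filters a population of objective vectors down to its Pareto front, assuming all objectives are to be minimized:

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if vector `a` Pareto-dominates `b` under minimization:
    a is no worse than b in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only the non-dominated (Pareto-optimal) members of a population."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: a bi-objective minimization problem (e.g., cost vs. latency).
population = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(population))  # -> [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

This brute-force filter is quadratic in the population size; practical algorithms such as NSGA-II use faster non-dominated sorting, but the pairwise dominance test above remains the core primitive.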