Large Language Models in Quantitative Research and Optimization

The field of quantitative research and optimization is undergoing a significant shift with the integration of large language models (LLMs). Recent work focuses on using LLMs to automate tasks such as alpha mining, digital twin planning, and optimization problem-solving, and on fine-tuning these models for specialized domains, including quantitative finance and telecommunications, where they show potential to improve efficiency and accuracy on complex tasks. Noteworthy papers in this area include Chain-of-Alpha, which proposes a novel LLM-based framework for automated alpha mining, and X-evolve, which introduces a paradigm-shifting method for evolving solution spaces powered by LLMs. In addition, MiGrATe demonstrates the effectiveness of LLMs for test-time adaptation, while NEFMind applies parameter-efficient fine-tuning to telecom API automation. Together, these advances highlight how innovative applications of LLMs are driving progress in quantitative research and optimization.

Sources

Chain-of-Alpha: Unleashing the Power of Large Language Models for Alpha Mining in Quantitative Trading

LSDTs: LLM-Augmented Semantic Digital Twins for Adaptive Knowledge-Intensive Infrastructure Planning

Technical Report: Full-Stack Fine-Tuning for the Q Programming Language

Enhancing Decision Space Diversity in Multi-Objective Evolutionary Optimization for the Diet Problem

X-evolve: Solution Space Evolution Powered by Large Language Models

Playing Atari Space Invaders with Sparse Cosine Optimized Policy Evolution

MiGrATe: Mixed-Policy GRPO for Adaptation at Test-Time

NEFMind: Parameter-Efficient Fine-Tuning of Open-Source LLMs for Telecom APIs Automation

A Survey of Optimization Modeling Meets LLMs: Progress and Future Directions
