Efficient and Scalable Financial NLP

The field of financial NLP is moving toward more efficient and scalable models that can be deployed in real-world applications. Researchers are exploring strategies that reduce computational overhead while improving task-specific performance, such as selectively fine-tuning layers of pre-trained language models and integrating knowledge graphs with large language models; a sketch of the selective fine-tuning idea appears after this paragraph. These approaches have shown strong results across financial NLP tasks, including sentiment analysis, risk assessment, and personalized recommendation. Noteworthy papers in this area include LAET, which proposes a layer-wise adaptive ensemble tuning framework for pre-trained language models; RAG-FLARKO, which introduces a retrieval-augmented extension that embeds structured knowledge graphs in LLM prompts; and FinTRec, which presents a transformer-based framework for unified contextual ads targeting and personalization in financial applications, demonstrating the potential of transformer architectures in this domain.
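
To make the selective fine-tuning idea concrete, the following is a minimal sketch of layer-wise selective tuning for financial sentiment classification: freeze most of a pre-trained encoder and update only the top transformer blocks plus the classification head. The model name, number of unfrozen layers, labels, and example headlines are illustrative assumptions, not the exact recipe of LAET or any other paper listed below.

```python
# Sketch: selective layer-wise fine-tuning of a pre-trained encoder
# for financial sentiment classification (illustrative, not LAET's exact method).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any BERT-style encoder works here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Freeze all pre-trained weights first.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the top-k encoder layers and the classification head, so the
# optimizer updates (and their memory footprint) cover a small fraction of the model.
TOP_K = 2  # assumption: tune only the last two transformer blocks
for layer in model.bert.encoder.layer[-TOP_K:]:
    for param in layer.parameters():
        param.requires_grad = True
for param in model.classifier.parameters():
    param.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=2e-5)

# One illustrative training step on a toy batch of financial headlines.
batch = tokenizer(
    ["Q3 revenue beat guidance", "The issuer defaulted on its notes"],
    padding=True,
    return_tensors="pt",
)
labels = torch.tensor([2, 0])  # hypothetical label ids, e.g. positive / negative
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```

Because only the unfrozen parameters carry gradients and optimizer state, this kind of partial tuning cuts training cost relative to full fine-tuning; which layers to unfreeze (and how to combine them) is the design question that layer-wise adaptive approaches such as LAET address.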

Sources

LAET: A Layer-wise Adaptive Ensemble Tuning Framework for Pretrained Language Models

Parallel and Multi-Stage Knowledge Graph Retrieval for Behaviorally Aligned Financial Asset Recommendations

Open Banking Foundational Model: Learning Language Representations from Few Financial Transactions

Enhancing Conversational Recommender Systems with Tree-Structured Knowledge and Pretrained Language Models

Aspect-Level Obfuscated Sentiment in Thai Financial Disclosures and Its Impact on Abnormal Returns

MoMoE: A Mixture of Expert Agent Model for Financial Sentiment Analysis

PRISM: Prompt-Refined In-Context System Modelling for Financial Retrieval

FinTRec: Transformer Based Unified Contextual Ads Targeting and Personalization for Financial Applications
