The field of financial NLP is moving toward more efficient and scalable models that can be deployed in real-world applications. Researchers are exploring strategies that reduce computational overhead while improving task-specific performance, such as selectively fine-tuning layers of pre-trained language models and grounding large language models in knowledge graphs. These approaches have shown strong results across financial NLP tasks including sentiment analysis, risk assessment, and personalized recommendation. Noteworthy papers in this area include LAET, which proposes a layer-wise adaptive ensemble tuning framework for pre-trained language models; RAG-FLARKO, which introduces a retrieval-augmented extension that embeds structured knowledge graphs in LLM prompts; and FinTRec, which presents a transformer-based framework for unified contextual ads targeting and personalization in financial applications, demonstrating the potential of transformer-based architectures in this domain.
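The selective fine-tuning idea behind approaches like LAET can be sketched abstractly: freeze most of a pre-trained model's layers and update only a chosen subset, so fine-tuning touches a fraction of the parameters. The sketch below is a minimal, library-free illustration under assumed conventions (a model represented as an ordered list of layers whose parameters carry a PyTorch-style `requires_grad` flag); it freezes all but the top layers and does not reproduce LAET's actual layer-selection or ensembling criteria.

```python
# Hypothetical sketch of layer-wise selective fine-tuning.
# Assumption: the model is an ordered list of layers, each holding
# parameters with a PyTorch-style `requires_grad` flag.

class Param:
    def __init__(self):
        self.requires_grad = True  # trainable by default

class Layer:
    def __init__(self, name):
        self.name = name
        self.params = [Param(), Param()]  # e.g. weight and bias

def freeze_all_but_top(layers, top_k):
    """Freeze every layer except the top `top_k`, a common strategy
    for adapting only the task-sensitive upper layers of an encoder."""
    cutoff = len(layers) - top_k
    for i, layer in enumerate(layers):
        for p in layer.params:
            p.requires_grad = i >= cutoff
    return layers

# Freeze a hypothetical 12-layer encoder except its top 2 layers.
model = [Layer(f"encoder.{i}") for i in range(12)]
freeze_all_but_top(model, top_k=2)
trainable = [l.name for l in model if l.params[0].requires_grad]
```

With an actual PyTorch model the same pattern amounts to setting `param.requires_grad = False` on the frozen layers before building the optimizer, so that only the selected layers' parameters receive gradient updates.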