The field of natural language processing is moving towards more explainable and transparent models, particularly in job title matching and query understanding. Researchers are exploring semantic textual relatedness, knowledge graphs, and large language models to improve both the accuracy and the interpretability of these systems. Notably, integrating knowledge graphs with text embeddings has shown promising results for model performance and explainability. In addition, unified query understanding frameworks powered by large language models have the potential to improve relevance while reducing system complexity.
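To make the embedding-plus-knowledge-graph idea concrete, here is a minimal sketch of one way such a hybrid matcher could work. Everything below is illustrative: the toy embedding vectors, the `KG_NEIGHBORS` graph, and the blending weight `alpha` are all assumptions, not details from the papers; a real system would use a sentence-embedding model and a curated domain knowledge graph.

```python
import math

# Toy dense embeddings for two job titles (in practice these would come
# from a sentence-embedding model; the values here are illustrative only).
EMBEDDINGS = {
    "software engineer": [0.9, 0.1, 0.3],
    "backend developer": [0.8, 0.2, 0.4],
}

# Hypothetical domain knowledge graph: each title maps to a set of
# related skill/occupation nodes.
KG_NEIGHBORS = {
    "software engineer": {"programming", "system design", "it"},
    "backend developer": {"programming", "databases", "it"},
}

def cosine(u, v):
    # Standard cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def kg_relatedness(a, b):
    # Jaccard overlap of KG neighborhoods: a simple, inspectable signal.
    na, nb = KG_NEIGHBORS[a], KG_NEIGHBORS[b]
    return len(na & nb) / len(na | nb)

def hybrid_score(a, b, alpha=0.5):
    # Blend embedding similarity with the KG signal; the shared KG nodes
    # double as a human-readable explanation for why the titles match.
    return alpha * cosine(EMBEDDINGS[a], EMBEDDINGS[b]) + (1 - alpha) * kg_relatedness(a, b)

score = hybrid_score("software engineer", "backend developer")
explanation = KG_NEIGHBORS["software engineer"] & KG_NEIGHBORS["backend developer"]
print(round(score, 3), sorted(explanation))
```

The design point is that the KG term contributes explainability for free: the overlapping nodes returned in `explanation` tell a reviewer *why* two titles were aligned, which a raw embedding similarity cannot.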
Some noteworthy papers in this area include:
- "Towards Explainable Job Title Matching" introduces a self-supervised hybrid architecture that combines dense sentence embeddings with domain-specific knowledge graphs to improve semantic alignment and explainability.
- "Powering Job Search at Scale" presents a unified query understanding framework powered by a large language model that improves relevance quality and reduces system complexity.
- "Structured Information Matters" demonstrates the effectiveness of patient-level knowledge graphs for improving automated ICD coding and its explainability.
- "ScaleDoc" introduces a system that scales LLM-based predicates over large document collections, achieving significant efficiency gains.
- "Explicit vs. Implicit Biographies" examines the impact of textual implicitness on LLM performance in information extraction and presents a fine-tuning approach to improve model interpretability and reliability.