The field of anomaly detection and pattern sampling is moving toward more efficient and scalable approaches. Researchers are exploring methods that handle heterogeneous data formats and complex systems such as microservices, as well as large-scale structured and unstructured domains. A key direction is the development of unified frameworks that can process multiple data modalities and adapt to new scenarios without extensive retraining. Another focus is the improvement of pattern sampling techniques, including the use of interestingness measures to guide sampling and of functional connectivity to detect anomalies and localize their root causes. Notably, Large Language Models (LLMs) are increasingly being applied to anomaly detection tasks, enabling models that are more accurate and robust across data formats.
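To make interestingness-driven pattern sampling concrete, the sketch below draws itemsets with probability proportional to their support (frequency) using the classic two-step procedure: pick a transaction weighted by its number of subsets, then return a uniform random subset of it. This is a minimal toy illustration, not the method of any paper summarized here; the function and variable names are hypothetical.

```python
import random

def sample_pattern_by_support(transactions, rng=random.Random(0)):
    """Draw one itemset with probability proportional to its support.

    Two-step procedure: (1) pick a transaction with probability
    proportional to 2^|t| (its number of subsets), then (2) return a
    uniformly random subset of that transaction.
    """
    weights = [2 ** len(t) for t in transactions]
    (t,) = rng.choices(transactions, weights=weights, k=1)
    # Include each item of the chosen transaction independently with probability 1/2.
    return frozenset(item for item in t if rng.random() < 0.5)

# Toy usage: each inner set is one transaction over string-valued items.
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"b", "c", "d"},
    {"a", "c"},
]
samples = [sample_pattern_by_support(transactions) for _ in range(10_000)]
# Frequent itemsets such as {"a"} or {"b"} should appear far more often than rare ones.
```

Sampling under a different interestingness measure typically only changes the transaction weights and the subset-drawing step, which is what makes this family of procedures attractive for large datasets.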
Notable papers in this area include FC-ADL, ICAD-LLM, and LLM-SrcLog. FC-ADL proposes an efficient approach for detecting and localizing anomalous changes in microservice metrics based on functional connectivity. ICAD-LLM introduces a paradigm for anomaly detection via in-context learning with LLMs, handling heterogeneous data formats within a unified framework. LLM-SrcLog presents a proactive, unified framework for log template parsing that leverages LLMs to extract templates directly from source code and supplements them with data-driven parsing for logs without available code.
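To illustrate the functional-connectivity idea in general terms, the sketch below scores sliding windows of per-service metrics by how much their correlation (connectivity) matrix drifts between consecutive windows, and ranks services by their contribution to that drift as candidate root causes. It assumes a simple time-by-services metric matrix and hypothetical function names; it is an illustrative sketch, not the FC-ADL algorithm itself.

```python
import numpy as np

def connectivity(window: np.ndarray) -> np.ndarray:
    """Pearson correlation matrix between metric series (columns) in a window."""
    return np.corrcoef(window, rowvar=False)

def score_windows(metrics: np.ndarray, win: int = 60):
    """Slide a window over the metric matrix (time x services) and score each
    window by how far its connectivity drifts from the previous window.

    Returns per-window anomaly scores and per-service contributions,
    which can be used to rank candidate root-cause services.
    """
    scores, contributions = [], []
    prev = connectivity(metrics[:win])
    for start in range(win, metrics.shape[0] - win + 1, win):
        cur = connectivity(metrics[start:start + win])
        diff = np.abs(cur - prev)
        scores.append(diff.mean())               # global connectivity change
        contributions.append(diff.mean(axis=1))  # per-service change
        prev = cur
    return np.array(scores), np.array(contributions)

# Toy usage: 600 time steps, 5 services; service 2 decouples at t = 300.
rng = np.random.default_rng(0)
base = rng.normal(size=(600, 1))
metrics = base + 0.1 * rng.normal(size=(600, 5))  # strongly correlated services
metrics[300:, 2] = rng.normal(size=300)           # correlation break in service 2
scores, contrib = score_windows(metrics, win=60)
suspect = int(np.argmax(contrib[np.argmax(scores)]))  # likely service index 2
```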