Natural language processing and microbiome analysis are both evolving rapidly, with a shared focus on building more accurate and informative representation models. Recent research has highlighted the role of syntactic bootstrapping in verb learning, showing that large language models can acquire verb meanings by exploiting the syntactic environments in which those verbs occur. In parallel, new approaches to microbiome sample embedding have improved accuracy on downstream tasks such as phenotype prediction and environmental classification; in particular, integrating sequence-level abundance into Transformer-based sample embeddings has achieved state-of-the-art results. Studies of word meanings in transformer language models have further shown that these models employ a lexical-store-like mechanism that encodes a wide range of semantic information.

Noteworthy papers include: "The Abundance-Aware Set Transformer for Microbiome Sample Embedding", which proposes constructing fixed-size sample-level embeddings by weighting sequence embeddings according to their relative abundance; and "The Structural Sources of Verb Meaning Revisited: Large Language Models Display Syntactic Bootstrapping", which examines the role of syntactic bootstrapping in verb learning and demonstrates its importance in large language models.
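The abundance-weighted pooling idea can be sketched in a few lines. This is a minimal illustration, not the paper's actual architecture (the Set Transformer applies attention-based set pooling on top of such weighting); the embeddings and counts here are hypothetical toy values, and the function name is my own.

```python
import numpy as np

def abundance_weighted_embedding(seq_embeddings: np.ndarray,
                                 abundances: np.ndarray) -> np.ndarray:
    """Pool per-sequence embeddings into one fixed-size sample-level
    embedding, weighting each sequence by its relative abundance."""
    weights = abundances / abundances.sum()  # raw counts -> relative abundance
    return weights @ seq_embeddings          # (n,) @ (n, d) -> (d,)

# Toy sample: 3 sequences with 4-dimensional embeddings.
emb = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
counts = np.array([2.0, 1.0, 1.0])  # raw read counts per sequence
sample_vec = abundance_weighted_embedding(emb, counts)
print(sample_vec)
```

With uniform counts this reduces to plain mean pooling, which makes the contribution of abundance weighting easy to isolate in downstream evaluations.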