The field of natural language processing is seeing rapid progress in named entity recognition (NER) and sequence labeling. Researchers are exploring generative frameworks, rule-encoded loss functions, and compressed representations to improve both the accuracy and the efficiency of these tasks. One notable direction is the development of models that handle discontinuous entities, complex entity structures, and low-resource settings. These advances stand to benefit downstream NLP applications such as text analysis, information extraction, and dialogue systems. Noteworthy papers include GapDNER, which proposes a gap-aware grid tagging model for discontinuous NER; GenCNER, which introduces a generative framework for continual NER; R2T, which applies rule-encoded loss functions to low-resource sequence tagging; and Efficient Seq2seq Coreference Resolution Using Entity Representations, which compresses entity representations to improve efficiency in incremental settings.
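To make the rule-encoded-loss idea concrete, here is a toy sketch (an illustrative assumption, not R2T's actual formulation): a standard cross-entropy tagging loss is augmented with a differentiable penalty encoding one BIO well-formedness rule, namely that an I- tag should only follow a B- or I- tag of the same entity type. The tag set, tokens, and weighting factor `lam` below are all hypothetical.

```python
import math

# Hypothetical BIO tag set for a two-type tagging task.
TAGS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]

def cross_entropy(probs, gold):
    """Mean negative log-likelihood of the gold tag at each position.

    probs: list of per-token distributions over TAGS; gold: list of tag strings.
    """
    return -sum(math.log(p[TAGS.index(g)]) for p, g in zip(probs, gold)) / len(gold)

def rule_penalty(probs):
    """Expected probability mass placed on invalid BIO transitions.

    For each adjacent token pair, penalize mass on I-X at position t+1
    multiplied by mass on any tag at position t that is not B-X or I-X.
    """
    penalty = 0.0
    for prev, cur in zip(probs, probs[1:]):
        for i, tag in enumerate(TAGS):
            if not tag.startswith("I-"):
                continue
            ent = tag[2:]
            bad_prev = sum(p for j, p in enumerate(prev)
                           if TAGS[j] not in (f"B-{ent}", f"I-{ent}"))
            penalty += bad_prev * cur[i]
    return penalty / max(len(probs) - 1, 1)

def rule_encoded_loss(probs, gold, lam=0.5):
    """Supervised loss plus a weighted rule-violation term (lam is a made-up weight)."""
    return cross_entropy(probs, gold) + lam * rule_penalty(probs)

# A well-formed prediction (B-PER then I-PER) incurs little rule penalty,
# while an ill-formed one (O then I-PER) incurs a large penalty.
good = [[0.02, 0.90, 0.02, 0.04, 0.02], [0.02, 0.04, 0.90, 0.02, 0.02]]
bad  = [[0.90, 0.02, 0.02, 0.04, 0.02], [0.02, 0.04, 0.90, 0.02, 0.02]]
```

Because the penalty is computed on expected transition mass rather than hard decoded tags, it stays differentiable and can supply a training signal even for unlabeled tokens, which is what makes rule-based terms attractive in low-resource settings.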