The field of natural language processing is moving towards more responsible and robust language models. Recent research focuses on mitigating the limitations of traditional tokenization and on addressing risks posed by large language models, such as the acquisition of sensitive information and the perpetuation of harmful content. Approaches such as token-free models and unlearning techniques are being explored to improve the safety and reliability of language models. In particular, researchers are investigating methods to remove sensitive information from trained models, to evaluate how effective that unlearning actually is, and to build more robust evaluation frameworks. Overall, the field is shifting towards responsible AI development, with language models designed with safety and ethics in mind. Noteworthy papers include Token-free Models for Sarcasm Detection, which demonstrates the potential of token-free models for robust NLP in noisy and informal domains, and Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation, which introduces a multimodal unlearning benchmark and evaluates methods for deleting specific multimodal knowledge from LLMs.
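To make the unlearning idea concrete, the sketch below illustrates one common recipe: maximizing the loss on a "forget" set while preserving behavior on a "retain" set. This is a minimal, hypothetical example, not the method of the cited papers; the toy model, synthetic data, and the `forget_weight` trade-off parameter are all assumptions, and PyTorch is assumed to be available.

```python
# Illustrative gradient-ascent unlearning on a toy classifier (not the papers' method).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a language model: a small classifier over random features.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# Hypothetical data: examples the model should forget vs. behavior to retain.
forget_x, forget_y = torch.randn(8, 16), torch.randint(0, 4, (8,))
retain_x, retain_y = torch.randn(64, 16), torch.randint(0, 4, (64,))

forget_weight = 1.0  # assumed trade-off between forgetting and retention

for step in range(100):
    optimizer.zero_grad()
    # Ascend on the forget-set loss (negated term) while descending on the retain-set loss.
    loss = (-forget_weight * loss_fn(model(forget_x), forget_y)
            + loss_fn(model(retain_x), retain_y))
    loss.backward()
    optimizer.step()

# After unlearning, the forget-set loss should have increased while the
# retain-set loss stays comparatively low.
with torch.no_grad():
    print("forget loss:", loss_fn(model(forget_x), forget_y).item())
    print("retain loss:", loss_fn(model(retain_x), retain_y).item())
```

Evaluating whether such a procedure truly removes the targeted knowledge, rather than merely suppressing it against attacks that try to recover it, is exactly the kind of question the benchmark work above addresses.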