Research on retrieval-augmented generation (RAG) is increasingly focused on the robustness and security of these systems. Recent work has exposed vulnerabilities to attacks such as bias injection into retrieval databases and symbolic perturbations of retrieved text. To counter these threats, researchers are developing defenses including post-retrieval filtering and ensemble privacy defense frameworks, which promise to improve the reliability and trustworthiness of RAG pipelines. Noteworthy papers in this area include Bias Injection Attacks on RAG Databases and Sanitization Defenses, which introduces an attack that covertly shifts the ideological framing of answers generated by large language models, and EmoRAG: Evaluating RAG Robustness to Symbolic Perturbations, which shows that RAG systems are susceptible to subtle symbolic perturbations such as emoticon tokens.
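
To make the idea of post-retrieval filtering concrete, the sketch below shows one way such a defense might slot into a RAG pipeline: retrieved passages are scrubbed of emoticon-like symbolic tokens before being handed to the generator, and passages that are mostly noise are dropped. This is a minimal illustration only; the regex, function names, and thresholds are assumptions for exposition and are not taken from either paper.

```python
import re

# Illustrative pattern for emoticon-style symbolic tokens (ASCII emoticons
# plus common emoji ranges). The actual perturbations studied in EmoRAG may
# differ; this regex is an assumption for the sketch.
_SYMBOLIC_TOKEN = re.compile(
    r"[:;=8][\-^']?[)(DPpOo3\]\[]"        # ASCII emoticons such as :) ;) =D
    r"|[\u2600-\u27BF\U0001F300-\U0001FAFF]"  # symbol and emoji code points
)

def sanitize_passage(text: str) -> str:
    """Strip symbolic tokens from a passage and collapse leftover whitespace."""
    cleaned = _SYMBOLIC_TOKEN.sub(" ", text)
    return re.sub(r"\s+", " ", cleaned).strip()

def post_retrieval_filter(passages: list[str], max_passages: int = 5,
                          min_length: int = 20) -> list[str]:
    """Hypothetical post-retrieval filtering step for a RAG pipeline.

    Each retrieved passage is sanitized, and passages that become near-empty
    after cleaning (i.e. were dominated by injected symbolic noise) are
    discarded rather than forwarded to the language model.
    """
    cleaned = (sanitize_passage(p) for p in passages[:max_passages])
    return [p for p in cleaned if len(p) >= min_length]

if __name__ == "__main__":
    retrieved = [
        "The policy reduces emissions by 40% ;) :) according to the report.",
        ":) :) :) :)",  # passage consisting almost entirely of injected tokens
    ]
    # Only the first passage survives, with the emoticon tokens removed.
    print(post_retrieval_filter(retrieved))
```

In practice, a filter like this would sit between the retriever and the generator; more sophisticated variants could score passages for anomalous token distributions instead of relying on a fixed pattern.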