The field of natural language processing is advancing rapidly with the development of large language models (LLMs) and in-context learning (ICL) techniques. Researchers are exploring methods to improve the efficiency and effectiveness of LLMs, particularly strategies for selecting few-shot demonstrations. A notable trend is the integration of gradient-based approaches with traditional machine learning methods to enhance LLM performance. There is also growing interest in applying ICL to multi-modal and multi-user scenarios, such as dialogue state tracking and vision-language models. While current LLMs have shown impressive capabilities, further research is needed to address the challenges of multi-user interactions and to develop more robust models.

Noteworthy papers in this area include FEEDER, which proposes a novel pre-selection framework for demonstration selection; Joint-GCG, which introduces a unified gradient-based poisoning attack framework for retrieval-augmented generation systems; and GradEscape, which presents a gradient-based evader designed to attack AI-generated-text detectors. In addition, CASE and ClusterUCB propose efficient sample selection strategies for in-context learning and for fine-tuning LLMs, respectively.
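To make the demonstration-selection idea concrete, the sketch below shows a minimal, generic baseline: candidate demonstrations are ranked by TF-IDF cosine similarity to the test query and the top-k are placed into the prompt. This is only an illustrative assumption about how selection might be done; it is not the method of FEEDER, CASE, or any other paper mentioned above, and the function names (`select_demonstrations`, `build_prompt`) are hypothetical.

```python
# Minimal sketch of similarity-based demonstration selection for ICL.
# Generic illustration only: rank pool items by TF-IDF cosine similarity
# to the query, then format the top-k as few-shot examples in a prompt.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def select_demonstrations(query: str, pool: list[tuple[str, str]], k: int = 2):
    """Return the k (input, label) pairs from `pool` most similar to `query`."""
    texts = [x for x, _ in pool] + [query]
    tfidf = TfidfVectorizer().fit_transform(texts)
    # Last row is the query; compare it against all candidate demonstrations.
    sims = cosine_similarity(tfidf[len(pool)], tfidf[: len(pool)]).ravel()
    top_k = sims.argsort()[::-1][:k]
    return [pool[i] for i in top_k]


def build_prompt(query: str, demos: list[tuple[str, str]]) -> str:
    """Format the selected demonstrations followed by the new query."""
    shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in demos)
    return f"{shots}\nInput: {query}\nLabel:"


if __name__ == "__main__":
    pool = [
        ("The movie was a delight from start to finish.", "positive"),
        ("The plot dragged and the acting was wooden.", "negative"),
        ("Service at the restaurant was painfully slow.", "negative"),
        ("A charming little cafe with friendly staff.", "positive"),
    ]
    query = "The film's pacing was slow and the dialogue felt stiff."
    demos = select_demonstrations(query, pool, k=2)
    print(build_prompt(query, demos))
```

In practice, the methods surveyed above go beyond such lexical similarity, for example by pre-selecting a core demonstration set or using gradient signals, but the interface (score candidates, pick the top-k, assemble the prompt) is the same.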