The field of machine learning is placing growing emphasis on privacy and security, with particular attention to membership inference attacks. These attacks aim to determine whether a specific sample was part of a model's training set, and recent research has made significant progress both in developing new attacks and in improving existing ones. One key direction is the design of more efficient and effective attacks, for example those built on Bayesian decision-theoretic frameworks and adaptive parameterization. Another important trend is the extension of membership inference attacks to new domains, such as multimodal large language models and sequence models. Researchers are also exploring new perspectives on evaluating model privacy, including the use of adversarial-example-based features. Overall, the field is advancing rapidly, with new methods and techniques being proposed to address the challenges that membership inference attacks pose.

Noteworthy papers in this area include Practical Bayes-Optimal Membership Inference Attacks, which introduces a Bayes-optimal membership inference rule for node-level attacks against graph neural networks; Vid-SME, which proposes the first membership inference method tailored to video data used in video understanding large language models; Privacy Leaks by Adversaries, which uses the process of generating adversarial samples to infer membership; and Membership Inference Attacks on Sequence Models, which adapts a state-of-the-art membership inference attack to explicitly model within-sequence correlations.
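The core signal most membership inference attacks exploit is that training samples tend to incur lower loss than unseen samples, and Bayes-style attacks turn this into a likelihood-ratio decision. The sketch below illustrates that idea on synthetic per-sample losses; the Gaussian loss distributions, their parameters, and the 50% membership prior are illustrative assumptions (a real attacker would estimate them, e.g. from shadow models), not taken from any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample losses: training-set members typically
# incur lower loss than non-members (the signal most MIAs exploit).
member_losses = rng.normal(loc=0.2, scale=0.1, size=1000)
nonmember_losses = rng.normal(loc=0.6, scale=0.2, size=1000)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def bayes_membership_score(loss, prior_in=0.5):
    # Bayes decision rule: posterior probability of membership given the
    # observed loss, comparing its likelihood under the assumed "member"
    # and "non-member" loss distributions. The distribution parameters
    # are assumed known to the attacker here for illustration.
    p_in = gaussian_pdf(loss, 0.2, 0.1) * prior_in
    p_out = gaussian_pdf(loss, 0.6, 0.2) * (1.0 - prior_in)
    return p_in / (p_in + p_out)

scores_members = bayes_membership_score(member_losses)
scores_nonmembers = bayes_membership_score(nonmember_losses)

# Predict "member" when the posterior exceeds 1/2 and measure
# balanced attack accuracy on the synthetic data.
accuracy = 0.5 * ((scores_members > 0.5).mean()
                  + (scores_nonmembers <= 0.5).mean())
print(f"attack accuracy: {accuracy:.2f}")
```

On this synthetic setup the attack does far better than random guessing; the node-level, video, and sequence attacks surveyed above refine what statistic is thresholded and how the two distributions are modeled, not this underlying decision-theoretic template.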