The field of natural language processing is moving toward identifying and mitigating social biases in AI models. Recent research has focused on reducing gender bias in language models, with a particular emphasis on encoder-based transformers. These biases carry significant societal consequences, including the perpetuation of harmful stereotypes and the distortion of downstream decision-making. To address them, researchers are proposing solutions such as contrastive learning frameworks, decoupled loss functions, and backpack architectures, which aim to eliminate positive-negative coupling in the training objective, reduce discriminatory output, and preserve general capability. Noteworthy papers in this area include TriCon-Fair, which introduces a contrastive learning framework to mitigate social bias in pre-trained language models, and Erasing 'Ugly' from the Internet, which sheds light on the pervasive demographic biases around beauty standards in generative AI models.
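
To make the positive-negative coupling point concrete, the sketch below contrasts a standard InfoNCE-style objective with a decoupled variant that drops the positive pair from the denominator, so the pull toward the positive no longer competes with the push from the negatives inside the same normalizer. This is a minimal illustration of the general decoupling idea, not TriCon-Fair's exact loss; the function names, the debiasing pairing scheme (counterfactually gender-swapped sentences as positives, stereotype-bearing sentences as negatives), and the temperature value are all assumptions for the example.

```python
import torch
import torch.nn.functional as F

def infonce_loss(anchor, positive, negatives, tau=0.1):
    """Standard InfoNCE: the positive similarity also appears in the
    denominator, coupling its gradient to the negatives."""
    s_pos = F.cosine_similarity(anchor, positive, dim=-1) / tau                # (B,)
    s_neg = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1) / tau  # (B, K)
    logits = torch.cat([s_pos.unsqueeze(1), s_neg], dim=1)                     # (B, 1+K)
    return (-s_pos + torch.logsumexp(logits, dim=1)).mean()

def decoupled_contrastive_loss(anchor, positive, negatives, tau=0.1):
    """Decoupled variant: the positive term is removed from the
    denominator, eliminating positive-negative coupling."""
    s_pos = F.cosine_similarity(anchor, positive, dim=-1) / tau
    s_neg = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1) / tau
    return (-s_pos + torch.logsumexp(s_neg, dim=1)).mean()

# Illustrative debiasing setup (hypothetical shapes): the anchor is a
# sentence embedding, the positive a counterfactually gender-swapped
# version, and the negatives stereotype-bearing sentences.
B, K, D = 8, 16, 768
anchor, positive = torch.randn(B, D), torch.randn(B, D)
negatives = torch.randn(B, K, D)
print(infonce_loss(anchor, positive, negatives).item())
print(decoupled_contrastive_loss(anchor, positive, negatives).item())
```

The practical difference is in the gradients: with InfoNCE, a strong positive similarity shrinks the softmax weight on the negatives and dampens their repulsive gradient, whereas the decoupled form lets the attraction to the positive and the repulsion from biased negatives act independently.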