The field of large language models (LLMs) is increasingly focused on biases and unfairness in AI-assisted decision making. Recent work has highlighted self-preference bias, where an LLM favors its own generated content over human-written or other-model-generated content. In hiring pipelines this has direct labor-market consequences: candidates whose materials were produced with the same LLM that serves as the evaluator are more likely to be shortlisted. To mitigate these problems, researchers are exploring methods such as unlearning, debiasing, and steering vectors, which aim respectively to remove unwanted knowledge, suppress conceptual shortcuts, and reduce self-preference in LLM evaluators, with the broader goal of more reliable and fair language understanding systems (simple sketches of a self-preference metric and of activation steering follow the paper list below).

Noteworthy papers in this area include:

- AI Self-preferencing in Algorithmic Hiring: empirically evaluates self-preference bias in hiring contexts and proposes simple interventions to reduce it.
- Unlearning That Lasts: introduces an unlearning method that achieves better forget-utility trade-offs and shows strong resilience to relearning.
- Breaking the Mirror: investigates steering vectors for mitigating self-preference bias in LLM evaluators and reports substantial reductions in unjustified self-preference.
- CURE: proposes a lightweight framework that disentangles and suppresses conceptual shortcuts in pre-trained language models, improving robustness and fairness.
- Standard vs. Modular Sampling: critically examines common practices in LLM unlearning and proposes best practices for effective unlearning.
- Augmented Fine-Tuned LLMs for Enhanced Recruitment Automation: presents a recruitment-automation approach built on fine-tuned LLMs and reports notable gains on performance metrics.
- MSLEF: introduces a multi-segment ensemble framework that uses LLM fine-tuning to enhance resume parsing in recruitment automation, achieving state-of-the-art results.
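One simple way to operationalize self-preference bias, offered here as an illustrative metric rather than the exact measure used in the papers above, is to compare how often an LLM judge prefers its own answer in pairwise comparisons with how often an external reference (e.g., a human rater) prefers that same answer. The gap between the two rates is the unjustified self-preference. The data structure and toy numbers below are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class PairwiseJudgment:
    """One pairwise comparison where the judge evaluates its own answer against another model's."""
    judge_prefers_own: bool      # did the LLM judge pick its own output?
    reference_prefers_own: bool  # did a human / external reference pick that same output?


def self_preference_bias(judgments: list[PairwiseJudgment]) -> float:
    """Judge's own-win rate minus the reference's own-win rate.

    A positive value means the judge favors its own generations beyond what
    answer quality (as proxied by the reference) would justify.
    """
    n = len(judgments)
    judge_rate = sum(j.judge_prefers_own for j in judgments) / n
    ref_rate = sum(j.reference_prefers_own for j in judgments) / n
    return judge_rate - ref_rate


# Toy usage: the judge picks its own answer 3/4 times, the reference only 1/4.
data = [
    PairwiseJudgment(True, True),
    PairwiseJudgment(True, False),
    PairwiseJudgment(True, False),
    PairwiseJudgment(False, False),
]
print(self_preference_bias(data))  # 0.75 - 0.25 = 0.5
```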
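For the steering-vector direction explored in Breaking the Mirror, the general activation-steering recipe (sketched below under stated assumptions, not the paper's exact method) is: build a difference-of-means direction from contrastive prompts, then subtract a scaled copy of that direction from a chosen layer's hidden states at inference time. The model name (gpt2), layer index, strength ALPHA, and the two prompt lists are illustrative placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # any decoder-only HF model works similarly (assumption)
LAYER_IDX = 6        # layer to steer (assumption; chosen per model)
ALPHA = 4.0          # steering strength (assumption; tuned empirically)

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def mean_hidden(texts, layer):
    """Mean hidden-state vector at `layer` over the last token of each text."""
    vecs = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        vecs.append(out.hidden_states[layer][0, -1])
    return torch.stack(vecs).mean(dim=0)


# Toy contrastive prompt sets: evaluations where authorship is revealed to the
# judge vs. evaluations where it is hidden.
biased_prompts = [
    "Rate this answer, which you wrote yourself: ...",
    "Score the following response that you generated: ...",
]
neutral_prompts = [
    "Rate this answer (author unknown): ...",
    "Score the following response: ...",
]

# Difference-of-means direction associated with self-preference.
steer = mean_hidden(biased_prompts, LAYER_IDX) - mean_hidden(neutral_prompts, LAYER_IDX)


def hook(module, inputs, output):
    # Subtract the steering direction from this layer's hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden - ALPHA * steer
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden


# gpt2 exposes its blocks at model.transformer.h; other architectures differ.
handle = model.transformer.h[LAYER_IDX].register_forward_hook(hook)
# ... run the evaluator as usual, then call handle.remove() to restore the model.
```

The same pattern generalizes: the quality of the steering direction depends entirely on how well the contrastive prompt sets isolate the self-preference behavior rather than surface differences in wording.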