The field of natural language processing is moving towards a greater emphasis on fairness and transparency in large language models (LLMs). Recent research has highlighted the need to address demographic biases in LLMs, which can perpetuate harmful stereotypes and undermine social equity. Studies have shown that LLMs can infer demographic attributes from question phrasing, even in the absence of explicit demographic cues, and that these inferences can be biased and unfair. To mitigate these risks, researchers are developing new methods for auditing and mitigating demographic biases in LLMs, including prompt-based guardrails and disability-inclusive benchmarking. Notable papers in this area include DAIQ, which introduces a framework for auditing demographic attribute inference from questions, and Who's Asking?, which investigates bias through the lens of disability-framed queries in LLMs. These papers demonstrate the importance of considering the social implications of LLMs and the need for more nuanced and inclusive approaches to natural language processing.