The field of machine learning is placing greater emphasis on fairness and transparency, with a focus on developing methods that mitigate bias and ensure equitable treatment of individuals. Recent research has highlighted the impact of data distribution shifts on model performance and fairness, and has proposed new approaches for training models that are robust to such shifts. There is also growing interest in methods for auditing and testing machine learning models for fairness, including concolic testing and regularisation techniques; a sketch of the regularisation idea follows below. Noteworthy papers in this area include Who Pays for Fairness, which introduces a novel fairness framework based on social burden, and Machine Learning with Multitype Protected Attributes, which proposes a distance covariance regularisation framework for achieving intersectional fairness.
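As a rough illustration of the regularisation idea, the Python sketch below computes the standard sample distance covariance statistic (Székely et al.) between model predictions and a jointly encoded protected-attribute vector, and adds it to a training loss as a penalty. This is a minimal sketch under stated assumptions, not the exact formulation of the cited paper: the function name, the penalty weight `lam`, and the usage at the end are illustrative.

```python
import numpy as np

def distance_covariance(x: np.ndarray, y: np.ndarray) -> float:
    """Squared sample distance covariance (Szekely et al.) between two samples.

    x, y: arrays of shape (n, d_x) and (n, d_y). A larger value indicates
    stronger statistical dependence between the two samples.
    """
    # Pairwise Euclidean distance matrices, shape (n, n).
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # Double-centre each matrix: subtract row and column means, add grand mean.
    A = a - a.mean(axis=0) - a.mean(axis=1, keepdims=True) + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1, keepdims=True) + b.mean()
    # Squared sample distance covariance: (1/n^2) * sum_ij A_ij * B_ij.
    return float((A * B).mean())

# Hypothetical usage: penalise dependence between model outputs and a
# multitype protected-attribute vector (e.g. age, sex, ethnicity encoded
# jointly), so intersectional groups are covered by a single penalty term.
rng = np.random.default_rng(0)
preds = rng.normal(size=(128, 1))       # stand-in for model outputs
protected = rng.normal(size=(128, 3))   # stand-in for encoded protected attributes
lam = 0.1                               # regularisation strength (assumed)
task_loss = 0.0                         # stand-in for the usual training objective
total_loss = task_loss + lam * distance_covariance(preds, protected)
```

One reason this statistic suits multitype and intersectional settings is that distance covariance vanishes exactly when the two samples are independent (given finite first moments), so a single penalty can target dependence on several protected attributes of mixed type at once rather than one constraint per group.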