Advances in Fairness and Explainability in Machine Learning

Machine learning research is placing growing emphasis on fairness and explainability. Recent work has focused on methods and frameworks that accommodate heterogeneous tasks, incomplete supervision, and non-linear relationships between features. Fairness-aware multitask learning frameworks such as FairMT and FAIR-MTL have shown promising results, achieving substantial fairness gains while maintaining strong task utility. The integration of probabilistic neuro-symbolic reasoning, Bayesian inference, and game-theoretic allocation has likewise enabled more equitable and interpretable predictions in historical data analysis and clinical prediction models. Noteworthy papers include FairMT, which introduces an Asymmetric Heterogeneous Fairness Constraint Aggregation mechanism, and Beyond Additivity: Sparse Isotonic Shapley Regression, which proposes a unified nonlinear explanation framework for feature attribution. Together, these developments underscore that accuracy, fairness, and explainability should be assessed jointly in model evaluation rather than in isolation.
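To make the fairness-aware multitask framing concrete, the sketch below illustrates the general idea on a shared-encoder, per-task-head model. FairMT's actual Asymmetric Heterogeneous Fairness Constraint Aggregation mechanism is not reproduced here; the demographic-parity gap penalty, the softmax reweighting that upweights the worst-violating tasks, and all names (MultiTaskHead, fairness_aware_loss, lam, temp) are hypothetical stand-ins, assuming binary tasks and a single binary sensitive attribute with both groups present in each batch.

```python
import torch
import torch.nn as nn

# Hypothetical sketch, NOT the FairMT algorithm: a shared encoder with one
# head per task, trained with per-task fairness gaps aggregated asymmetrically
# so that tasks with larger violations receive larger weight.

class MultiTaskHead(nn.Module):
    def __init__(self, in_dim: int, n_tasks: int, hidden: int = 32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        z = self.shared(x)
        return [torch.sigmoid(h(z)).squeeze(-1) for h in self.heads]

def demographic_parity_gap(pred, group):
    # |E[pred | group=1] - E[pred | group=0]| as a differentiable fairness
    # proxy; assumes both groups occur in the batch.
    return (pred[group == 1].mean() - pred[group == 0].mean()).abs()

def fairness_aware_loss(preds, targets, group, lam=1.0, temp=5.0):
    bce = nn.BCELoss()
    task_losses = torch.stack([bce(p, y) for p, y in zip(preds, targets)])
    gaps = torch.stack([demographic_parity_gap(p, group) for p in preds])
    # Asymmetric aggregation (an assumption, not the paper's rule): a softmax
    # over detached gaps concentrates the penalty on the worst tasks without
    # routing gradients through the weights themselves.
    weights = torch.softmax(temp * gaps.detach(), dim=0)
    return task_losses.mean() + lam * (weights * gaps).sum()

# Usage on synthetic data: 3 binary tasks, 10 features, 1 binary group label.
x = torch.randn(256, 10)
group = torch.randint(0, 2, (256,))
targets = [torch.randint(0, 2, (256,)).float() for _ in range(3)]
model = MultiTaskHead(10, 3)
loss = fairness_aware_loss(model(x), targets, group, lam=0.5)
loss.backward()
```

Detaching the gaps when computing the weights is a deliberate design choice in this sketch: the model is penalized in proportion to each task's violation, but the reweighting itself does not create a second gradient path.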
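On the explanation side, the baseline quantity that Shapley-regression frameworks extend is the exact Shapley value of each feature under a set-valued payoff, phi_i = sum over S subset of N\{i} of |S|!(n-|S|-1)!/n! * (v(S u {i}) - v(S)). The sparse and isotonic components of Beyond Additivity are not reproduced here; the sketch below, with a hypothetical toy value function v, only computes exact Shapley attributions by subset enumeration, which is feasible for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(v, n):
    """Exact Shapley values for a value function v over features 0..n-1.

    v maps a frozenset of feature indices to a real payoff (e.g., model
    performance using only those features). Exponential in n: small n only.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                S = frozenset(S)
                # Shapley kernel weight |S|!(n-|S|-1)!/n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v(S | {i}) - v(S))
    return phi

# Toy payoff with an interaction between features 0 and 1, so attributions
# are not simply additive in the singleton payoffs.
def v(S):
    score = 0.0
    if 0 in S: score += 1.0
    if 1 in S: score += 0.5
    if {0, 1} <= S: score += 0.25  # synergy term
    return score

print(shapley_values(v, 3))  # feature 2 contributes nothing -> phi[2] == 0.0
```

The attributions sum to v(full set) (the efficiency axiom), and the synergy term is split evenly between features 0 and 1, which is exactly the non-additive behavior that nonlinear Shapley-regression frameworks aim to capture and regularize.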
Sources
Pushing the Boundaries of Interpretability: Incremental Enhancements to the Explainable Boosting Machine
Developing Fairness-Aware Task Decomposition to Improve Equity in Post-Spinal Fusion Complication Prediction