Advances in Fairness and Explainability in Machine Learning

The field of machine learning is placing growing emphasis on fairness and explainability. Recent research focuses on methods and frameworks that accommodate heterogeneous tasks, incomplete supervision, and non-linear relationships between features.

Fairness-aware multi-task learning frameworks such as FairMT and FAIR-MTL report substantial fairness gains while maintaining strong task utility; FairMT in particular introduces an Asymmetric Heterogeneous Fairness Constraint Aggregation mechanism for combining per-task fairness constraints (a general sketch of this style of training appears below). In parallel, integrating probabilistic neuro-symbolic reasoning, Bayesian inference, and game-theoretic allocation has enabled more equitable and interpretable predictions on sparse historical data and in clinical prediction models. On the explainability side, Beyond Additivity: Sparse Isotonic Shapley Regression proposes a unified nonlinear explanation framework for feature attribution, moving past the additivity assumption of standard Shapley-based explanations (see the second sketch below).

Overall, these developments highlight the importance of assessing accuracy, fairness, and explainability jointly in model evaluation, rather than in isolation.
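To make the multi-task fairness idea concrete, here is a minimal sketch of fairness-penalized multi-task training. It is an assumption-laden illustration, not FairMT's published mechanism: the shared-encoder architecture, the demographic-parity gap used as the fairness penalty, and the hypothetical `task_weights` and `fair_weights` parameters are all choices made for this example.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Hypothetical shared encoder with one binary-classification head per task."""
    def __init__(self, in_dim, n_tasks, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x):
        z = self.encoder(x)
        return [head(z).squeeze(-1) for head in self.heads]

def demographic_parity_gap(logits, group):
    # Absolute difference in mean predicted probability between two groups.
    p = torch.sigmoid(logits)
    return (p[group == 1].mean() - p[group == 0].mean()).abs()

def fair_multitask_loss(outputs, labels, group, task_weights, fair_weights):
    # Each task contributes its utility loss plus its own fairness penalty,
    # scaled by task-specific weights (illustrative aggregation only).
    bce = nn.functional.binary_cross_entropy_with_logits
    total = 0.0
    for out, y, tw, fw in zip(outputs, labels, task_weights, fair_weights):
        total = total + tw * bce(out, y) + fw * demographic_parity_gap(out, group)
    return total

# Toy usage: 2 binary tasks, a binary protected attribute, one gradient step.
torch.manual_seed(0)
x = torch.randn(64, 10)
labels = [torch.randint(0, 2, (64,)).float() for _ in range(2)]
group = torch.randint(0, 2, (64,))
model = MultiTaskNet(10, n_tasks=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = fair_multitask_loss(model(x), labels, group, [1.0, 1.0], [0.5, 0.2])
loss.backward()
opt.step()
```

Giving each task its own fairness weight is one simple way to express asymmetric treatment of heterogeneous fairness constraints; tasks with larger observed disparities can be penalized more heavily.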
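For the explainability thread, the two ingredients named in the Sparse Isotonic Shapley Regression title, monotone shape functions and sparsity, can be sketched with scikit-learn's IsotonicRegression. Everything below, from the single backfitting pass to the variance-based sparsity threshold, is an illustrative assumption rather than the paper's estimator.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
# Toy target: nonlinear in feature 0, linear in feature 1, rest irrelevant.
y = np.tanh(2 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

# One backfitting pass: fit a monotone (isotonic) shape function per feature
# on the current residual; increasing="auto" lets each fit pick its direction.
shape_fns, residual = [], y - y.mean()
for j in range(d):
    iso = IsotonicRegression(increasing="auto", out_of_bounds="clip")
    iso.fit(X[:, j], residual)
    shape_fns.append(iso)
    residual = residual - iso.predict(X[:, j])

# Sparsity step: drop features whose fitted shape function is nearly flat.
contrib = np.array([np.var(f.predict(X[:, j])) for j, f in enumerate(shape_fns)])
active = contrib > 0.01 * contrib.max()
print("active features:", np.flatnonzero(active))  # features 0 and 1 should remain
```

Isotonic fits yield monotone, nonparametric shape functions without assuming linearity, and the threshold discards features whose fitted contributions are essentially constant.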

Sources

FairMT: Fairness for Heterogeneous Multi-Task Learning

Pushing the Boundaries of Interpretability: Incremental Enhancements to the Explainable Boosting Machine

Developing Fairness-Aware Task Decomposition to Improve Equity in Post-Spinal Fusion Complication Prediction

Probabilistic Neuro-Symbolic Reasoning for Sparse Historical Data: A Framework Integrating Bayesian Inference, Causal Models, and Game-Theoretic Allocation

The Effect of Enforcing Fairness on Reshaping Explanations in Machine Learning Models

Water Quality Estimation Through Machine Learning Multivariate Analysis

Beyond Additivity: Sparse Isotonic Shapley Regression toward Nonlinear Explainability

Non-Linear Determinants of Pedestrian Injury Severity: Evidence from Administrative Data in Great Britain
