Advances in Fairness and Explainability in Machine Learning

The field of machine learning is placing increasing emphasis on fairness and explainability. Researchers are developing methods for selecting models that balance predictive performance with fairness, and for exposing the decision-making processes behind those models. One key direction uses feature importance and clustering techniques to structure the feature-importance space of a model portfolio, allowing users to explore clusters of models with similar predictive behaviors and fairness characteristics. Another important area is the development of fair and interpretable models for disease prediction, such as Metabolic Dysfunction-Associated Steatotic Liver Disease (MASLD).

Noteworthy papers in this area include "Visual Model Selection using Feature Importance Clusters in Fairness-Performance Similarity Optimized Space", which proposes an interactive framework for navigating and interpreting the trade-offs across a portfolio of models, and "Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Methods", which develops a fair and rigorous model for predicting the disease and demonstrates the role of interpretability in balancing predictive performance and fairness.
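To make the clustering idea above concrete, the sketch below (not taken from either paper) trains a small portfolio of scikit-learn classifiers, extracts model-agnostic feature-importance vectors via permutation importance, clusters those vectors, and reports each model's accuracy alongside a simple demographic-parity gap. The synthetic data, the placeholder sensitive attribute, and the choice of metrics are all illustrative assumptions, not the methodology of the cited work.

```python
# Minimal sketch: cluster a model portfolio by feature-importance profiles
# and report accuracy and a demographic-parity gap per model.
# Data, models, and the sensitive attribute are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
sensitive = (X[:, 0] > 0).astype(int)  # placeholder sensitive attribute
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=100, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}

importance_vectors, rows = [], []
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = model.score(X_te, y_te)
    # Demographic-parity gap: difference in positive prediction rates between groups.
    pred = model.predict(X_te)
    dp_gap = abs(pred[s_te == 1].mean() - pred[s_te == 0].mean())
    # Permutation importance gives a model-agnostic feature-importance vector.
    imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
    importance_vectors.append(imp.importances_mean)
    rows.append((name, acc, dp_gap))

# Cluster models by the similarity of their feature-importance profiles.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.vstack(importance_vectors))
for (name, acc, dp_gap), cluster in zip(rows, labels):
    print(f"{name}: accuracy={acc:.3f}, DP gap={dp_gap:.3f}, cluster={cluster}")
```

Models landing in the same cluster rely on features in similar ways, so a user can inspect one representative per cluster and then pick among cluster members using the accuracy and fairness-gap columns.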

Sources

Visual Model Selection using Feature Importance Clusters in Fairness-Performance Similarity Optimized Space

Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Methods

Fair Indivisible Payoffs through Shapley Value

Strategic inputs: feature selection from game-theoretic perspective

Towards Piece-by-Piece Explanations for Chess Positions with SHAP

Risks and Opportunities in Human-Machine Teaming in Operationalizing Machine Learning Target Variables

Exploring Human-AI Conceptual Alignment through the Prism of Chess
