Fairness and Causality in Probabilistic Classifiers

The field of probabilistic classifiers is placing greater emphasis on fairness and causality, with researchers developing new methods to verify individual fairness, intersectionality, and counterfactual fairness. These innovations aim to address the limitations of existing approaches, which often focus on population-level effects and neglect the heterogeneity of complex systems. Another notable trend is the development of visual analytics frameworks, such as those that support simulating and explaining interventions at the individual level. Furthermore, researchers are working on formal models to represent and implement counterfactual beliefs, which is essential for understanding causality and fairness. Noteworthy papers include:

  • A Proof System with Causal Labels, which extends the typed natural deduction calculus to model the verification of individual fairness and intersectionality, as well as counterfactual fairness;
  • XplainAct, a visual analytics framework that supports simulating, explaining, and reasoning about interventions at the individual level within subpopulations; and
  • Canonical Representations of Markovian Structural Causal Models, which introduces an alternative approach to structural causal models for representing counterfactuals compatible with a given causal graphical model.
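To make the counterfactual reasoning these papers build on concrete, here is a minimal sketch (not taken from any of the papers) of the standard three-step counterfactual procedure — abduction, action, prediction — on a toy Markovian SCM with hypothetical mechanisms X := U_X and Y := 2·X + U_Y:

```python
# Toy Markovian SCM (illustrative assumption, not from the cited papers):
#   X := U_X
#   Y := 2*X + U_Y

def abduct(x_obs, y_obs):
    """Abduction: recover the exogenous noise consistent with the observation."""
    u_x = x_obs              # X := U_X       =>  U_X = x_obs
    u_y = y_obs - 2 * x_obs  # Y := 2X + U_Y  =>  U_Y = y_obs - 2*x_obs
    return u_x, u_y

def counterfactual_y(x_obs, y_obs, x_cf):
    """Answer: 'what would Y have been, had X been x_cf?'"""
    _, u_y = abduct(x_obs, y_obs)
    # Action: replace X's mechanism with the constant x_cf (the do-operator).
    # Prediction: re-evaluate Y's mechanism under the abducted noise.
    return 2 * x_cf + u_y

# Having observed (X=1, Y=5), abduction gives U_Y = 3;
# had X instead been 2, Y would have been 2*2 + 3 = 7.
print(counterfactual_y(1, 5, 2))  # → 7
```

A counterfactual fairness check follows the same pattern: abduct the noise from an individual's record, intervene on the protected attribute, and compare the classifier's prediction before and after the intervention.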

Sources

A Proof System with Causal Labels (Part I): checking Individual Fairness and Intersectionality

A Proof System with Causal Labels (Part II): checking Counterfactual Fairness

XplainAct: Visualization for Personalized Intervention Insights

Canonical Representations of Markovian Structural Causal Models: A Framework for Counterfactual Reasoning
