Mitigating Unfairness in Machine Learning: A Taxonomy and an Evaluation Pipeline

Chiara Criscuolo, Tommaso Dolci, Mattia Salnitri

Symposium on Advanced Database Systems, 2024, pp. 217-226.

Abstract

Big data poses challenges to maintaining ethical standards for reliable outcomes in machine learning. Data that inaccurately represent populations may result in biased algorithmic models, whose application leads to unfair decisions in sensitive fields such as medicine and industry. To address this issue, many fairness mitigation techniques have been introduced, but the proliferation of overlapping methods complicates decision-making for data scientists. This paper proposes a taxonomy to organize these techniques and a pipeline for their evaluation, supporting practitioners in selecting the most suitable ones. The taxonomy classifies and describes techniques qualitatively, while the pipeline offers a quantitative framework for evaluation and comparison. The proposed approach supports data scientists in addressing biased models and data effectively.
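The abstract mentions a quantitative framework for evaluating fairness mitigation techniques. As a minimal illustration of what such a quantitative check can look like (this is our sketch, not the paper's actual pipeline), the widely used statistical parity difference metric compares positive-prediction rates across groups defined by a protected attribute:

```python
# Illustrative sketch only: statistical parity difference, a common
# quantitative fairness metric. A value near 0 indicates similar
# positive-prediction rates across groups.

def positive_rate(preds):
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)

def statistical_parity_difference(preds, groups, protected="A"):
    """Difference in positive rates between the protected group and the rest."""
    prot = [p for p, g in zip(preds, groups) if g == protected]
    rest = [p for p, g in zip(preds, groups) if g != protected]
    return positive_rate(prot) - positive_rate(rest)

# Toy example (hypothetical data): group "A" receives positive
# predictions far more often than group "B".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A mitigation technique would aim to drive this difference toward zero while preserving predictive performance, which is the kind of trade-off the paper's pipeline is designed to measure.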

BibTeX

  @inproceedings{criscuolo2024mitigating,
    title={Mitigating Unfairness in Machine Learning: A Taxonomy and an Evaluation Pipeline},
    author={Criscuolo, Chiara and Dolci, Tommaso and Salnitri, Mattia},
    booktitle={Symposium on Advanced Database Systems},
    publisher={CEUR-WS.org},
    pages={217--226},
    year={2024}
  }