Exploring Fairness Interpretability with FairnessFriend: A Chatbot Solution

Chiara Criscuolo, Tommaso Dolci

IEEE International Conference on Data Engineering Workshops, 2024, pp. 246-253.

Abstract

In the contemporary world, artificial intelligence and machine learning algorithms have become important drivers of decision-making, leveraging real-world data to make predictions. While these models clearly improve efficiency, the lack of transparency in their predictions raises concerns about the fairness of machine learning models, as highlighted by recent instances of algorithmic unfairness, from automated decisions on criminal recidivism to disease prediction. Growing user awareness of algorithmic fairness is not matched by systems that guide data analysts and practitioners in understanding the implications of model outputs. To tackle the challenge of fairness interpretability, we propose FairnessFriend, a chatbot solution that combines data science with a human-computer interaction perspective. Given a dataset and a trained machine learning model with established fairness metrics, our system helps users understand these metrics and their significance in the context of the training data. FairnessFriend explains various statistical fairness metrics and presents the resulting metric values with detailed explanations, offering specific insights into their implications.
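As a rough illustration (not taken from the paper), one of the statistical fairness metrics a system like FairnessFriend might explain is the statistical parity difference: the gap in positive-prediction rates between a privileged and a protected group. The function and group labels below are hypothetical:

```python
# Sketch of statistical parity difference, one common statistical
# fairness metric: P(y_hat = 1 | privileged) - P(y_hat = 1 | protected).
# Function name and example data are illustrative, not from the paper.

def statistical_parity_difference(predictions, groups, protected, privileged):
    """Difference in positive-prediction rates between two groups."""
    priv = [p for p, g in zip(predictions, groups) if g == privileged]
    prot = [p for p, g in zip(predictions, groups) if g == protected]
    return sum(priv) / len(priv) - sum(prot) / len(prot)

# Toy example: binary predictions for individuals in groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
spd = statistical_parity_difference(preds, groups, protected="B", privileged="A")
print(spd)  # 0.5: group A is favored at rate 0.75 versus 0.25 for group B
```

A value of 0 indicates parity between the groups; the larger the absolute value, the stronger the disparity, which is the kind of interpretation the chatbot aims to surface for users.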

DOI: 10.1109/ICDEW61823.2024.00037

BibTeX

  @inproceedings{criscuolo2024exploring,
    title={Exploring Fairness Interpretability with FairnessFriend: A Chatbot Solution},
    author={Criscuolo, Chiara and Dolci, Tommaso},
    booktitle={IEEE International Conference on Data Engineering Workshops},
    organization={IEEE},
    pages={246--253},
    year={2024},
    doi={10.1109/ICDEW61823.2024.00037}
  }