E ISSN: 2583-049X

International Journal of Advanced Multidisciplinary Research and Studies

Volume 5, Issue 6, 2025

Analyzing Fairness of Classification Machine Learning Model with Structured Dataset



Author(s): Ahmed Rashed, Abdelkrim Kallich, Mohamed Eltayeb

Abstract:

Machine learning (ML) algorithms have become integral to decision-making in various domains, including healthcare, finance, education, and law enforcement. However, concerns about fairness and bias in these systems pose significant ethical and social challenges.

To evaluate and mitigate biases, three prominent fairness libraries were employed: Fairlearn (Microsoft), AIF360 (IBM), and the What-If Tool (Google). These libraries provide robust frameworks for fairness analysis, offering tools to compute fairness metrics, visualize results, and implement bias mitigation strategies.
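As context for the group fairness metrics these libraries report, the sketch below computes one common metric, the demographic parity difference (the absolute gap in positive-prediction rates between two groups), in plain Python. This is an illustrative re-implementation only; the study itself relies on the libraries' own metric implementations.

```python
def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred:    iterable of 0/1 predictions
    sensitive: iterable of group labels (assumed binary here)
    """
    groups = sorted(set(sensitive))
    rates = []
    for g in groups:
        preds = [p for p, s in zip(y_pred, sensitive) if s == g]
        rates.append(sum(preds) / len(preds))  # positive-prediction rate per group
    return abs(rates[0] - rates[1])

# Example: group "a" receives positive predictions at rate 2/3,
# group "b" at rate 1/3, so the difference is about 0.333.
y_pred = [1, 1, 0, 1, 0, 0]
sensitive = ["a", "a", "a", "b", "b", "b"]
print(round(demographic_parity_difference(y_pred, sensitive), 3))  # 0.333
```

A value near 0 indicates the classifier assigns positive outcomes at similar rates across groups; mitigation algorithms at any lifecycle stage aim to shrink such gaps without degrading accuracy.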

The study aims to evaluate and mitigate biases in a structured dataset using classification models. Its main contribution is a comparative study of the performance of the mitigation algorithms in two fairness libraries: first, each algorithm is applied individually at one of the three stages of the machine learning lifecycle (pre-processing, in-processing, or post-processing); then, algorithms are applied sequentially across different stages. The findings demonstrate that some sequential combinations improve mitigation performance, reducing bias while maintaining model performance.

A publicly available dataset from Kaggle was selected for analysis, offering a realistic scenario for evaluating fairness in machine learning workflows.


Keywords: Machine Learning Fairness, Bias Analysis

Pages: 445-453
