International Journal of Advanced Multidisciplinary Research and Studies
Volume 3, Issue 6, 2023
Fairness in Credit Risk Modeling: Evaluating Bias and Discrimination in AI-Based Credit Decision Systems
Author(s): Godwin David Akhamere
DOI: https://doi.org/10.62225/2583049X.2023.3.6.4716
Abstract:
Artificial intelligence (AI) in credit risk modeling has brought a new level of automation, efficiency, and predictive capability to financial decision-making. Yet this same advance has revived long-standing concerns about bias and discrimination, particularly against legally protected groups such as racial minorities, women, and low-income borrowers. Because machine learning models are trained on historical datasets that reflect societal disparities, they risk reinforcing, and even amplifying, unfair treatment. This article offers a critical examination of algorithmic bias in AI-driven credit risk models, analyzing the direct and indirect effects of protected attributes such as race, gender, and socioeconomic status. It also reviews current fairness metrics, ethical frameworks, and debiasing techniques, and proposes a cross-disciplinary methodology for building credit decision systems that are demonstrably fair. The article highlights the role of regulatory oversight, stakeholder involvement, and transparency in aligning AI with social justice in the financial system.
Keywords: Credit Decision, Artificial Intelligence (AI), United States
Pages: 2061-2070

