
Explainable Machine Learning for Credit Risk Assessment: A Comparative Study of Interpretable Approaches in Consumer Lending | IJET – Volume 9 Issue 2 | IJET-V9I2P45

International Journal of Engineering and Techniques (IJET)
Open Access • Peer Reviewed • High Citation & Impact Factor • ISSN: 2395-1303
Volume 9, Issue 2 | Published: April 2023
Author: Viswatej Seela
DOI: https://doi.org/{{doi}}
Abstract
Credit risk assessment in consumer lending has moved beyond traditional scorecards, but lenders still need models that can be explained in adverse action notices and reviewed in fair lending examinations. This study compares three practical approaches to credit risk scoring on a public dataset of 48,000 loan applications: logistic regression, gradient boosted trees with SHAP explanations, and an inherently interpretable model, the Explainable Boosting Machine. The comparison focuses on three issues that matter in production settings: predictive performance, explanation quality, and operational overhead. The results show that the Explainable Boosting Machine reaches an AUC of 0.782, close to the best-performing gradient boosted model at 0.791, while producing explanations that are more stable and easier to map to regulatory reason codes. The findings suggest that the small loss in predictive power may be acceptable when explanation quality and auditability carry equal weight in model selection.
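The kind of head-to-head AUC comparison the abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the paper's actual pipeline: synthetic data stands in for the 48,000-application dataset, scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the Explainable Boosting Machine (available in the InterpretML package) is omitted to keep the example dependency-free.

```python
# Sketch: compare an interpretable baseline (logistic regression) against a
# gradient boosted model by held-out AUC, mirroring the study's comparison.
# All data and model choices here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced data: defaults are the minority class, as in lending.
X, y = make_classification(n_samples=4000, n_features=20, n_informative=8,
                           weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # AUC is computed from predicted default probabilities, not hard labels.
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {aucs[name]:.3f}")
```

In the study's framing, a gap of this kind between the two AUCs is then weighed against explanation stability and auditability rather than treated as decisive on its own.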
Keywords
credit risk, explainable AI, interpretable machine learning, fair lending, consumer lending, SHAP
Conclusion
This comparison shows that inherently interpretable models deserve serious consideration in regulated lending. In this dataset, the Explainable Boosting Machine stayed close to XGBoost in predictive accuracy while providing explanations that were more stable and easier to align with compliance requirements. For lenders operating under close regulatory scrutiny, that trade-off may be preferable to the small accuracy gain from a less transparent model. Future work should add fairness testing and examine how these results hold up across products and credit cycles.
References
Baesens, B., Van Gestel, T., Viaene, S., Stepanova, M., Suykens, J., and Vanthienen, J. (2003). Benchmarking state-of-the-art classification algorithms for credit scoring. Journal of the Operational Research Society, 54(6), 627–635.
Consumer Financial Protection Bureau. (2022). CFPB circular on adverse action notification requirements and the proper use of the ECOA sample forms. CFPB Circular 2022-03.
Chen, T. and Guestrin, C. (2016). XGBoost: A scalable tree boosting system. Proceedings of KDD, 785–794.
Chen, Z., Provost, F., and Ghani, R. (2023). Challenges in using SHAP for regulatory-compliant credit model explanations. Journal of Financial Data Science, 5(2), 34–51.
Gunnarsson, B. R., Vanden Broucke, S., and Baesens, B. (2021). Deep learning for credit scoring: Do or don’t? European Journal of Operational Research, 295(1), 292–305.
Hall, P., Gill, N., and Schmidt, N. (2022). Responsible Machine Learning. O'Reilly Media.
Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. Proceedings of NeurIPS, 30, 4765–4774.
Nori, H., Jenkins, S., Koch, P., and Caruana, R. (2019). InterpretML: A unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of KDD, 1135–1144.
Cite this article
APA
Viswatej Seela (April 2023). Explainable Machine Learning for Credit Risk Assessment: A Comparative Study of Interpretable Approaches in Consumer Lending. International Journal of Engineering and Techniques (IJET), 9(2). https://doi.org/{{doi}}
IEEE
Viswatej Seela, “Explainable Machine Learning for Credit Risk Assessment: A Comparative Study of Interpretable Approaches in Consumer Lending,” International Journal of Engineering and Techniques (IJET), vol. 9, no. 2, April 2023, doi: {{doi}}.
