Tata Consultancy Services, USA.
World Journal of Advanced Research and Reviews, 2025, 26(01), 2561-2574
Article DOI: 10.30574/wjarr.2025.26.1.1339
Received on 26 February 2025; revised on 16 April 2025; accepted on 18 April 2025
This article explores the architectural approaches for building explainable artificial intelligence (XAI) systems specifically designed for payment compliance testing in regulated financial environments. As financial institutions increasingly adopt sophisticated machine learning models to enhance compliance verification, they face the challenge of balancing advanced detection capabilities with regulatory requirements for transparency and explainability. The article examines the "black box" problem inherent in neural networks and proposes decision-tree surrogate models as a practical solution to bridge the interpretability gap. It further explores the implementation of SHAP values to quantify feature importance in payment decisions, providing crucial transparency for compliance officers and regulators. The article addresses regulatory considerations for XAI deployment, highlighting the need for comprehensive ML governance frameworks that include robust documentation, stakeholder-appropriate explanations, and rigorous testing methodologies. Finally, it presents an implementation architecture that preserves explainability throughout the transaction lifecycle, demonstrating how financial institutions can satisfy both performance and transparency requirements in payment compliance systems.
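The abstract names two concrete techniques: decision-tree surrogate models and SHAP-based feature attributions. The sketch below is illustrative only (not taken from the article); it assumes scikit-learn and the shap package, and the feature names, thresholds, and synthetic data are hypothetical stand-ins for payment-compliance features.

```python
# Illustrative sketch: a decision-tree surrogate approximating a "black-box"
# compliance classifier, plus SHAP values for per-transaction attributions.
# Feature names and synthetic data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
import shap

rng = np.random.default_rng(42)
n = 5_000
X = pd.DataFrame({
    "amount_usd": rng.lognormal(mean=7, sigma=1.2, size=n),
    "cross_border": rng.integers(0, 2, size=n),
    "counterparty_risk_score": rng.uniform(0, 1, size=n),
    "txn_velocity_24h": rng.poisson(3, size=n),
})
# Synthetic "non-compliant" label driven by a few of the features
y = ((X["amount_usd"] > 10_000) & (X["cross_border"] == 1)
     | (X["counterparty_risk_score"] > 0.9)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) "Black-box" compliance model
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# 2) Decision-tree surrogate: trained on the black box's predictions rather
#    than the true labels, so its rules approximate the black box's behavior
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(X.columns)))

# 3) SHAP values: per-feature contributions to one transaction's score
explainer = shap.TreeExplainer(black_box)
shap_values = explainer.shap_values(X_test)
print("Feature attributions for the first test transaction:")
for name, contrib in zip(X.columns, shap_values[0]):
    print(f"  {name:28s} {contrib:+.3f}")
```

The surrogate's fidelity score and printed rules give compliance officers human-readable decision logic, while the SHAP attributions quantify how much each feature pushed an individual transaction toward or away from being flagged.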
Explainable AI; Payment Compliance; Surrogate Models; SHAP Values; Regulatory Governance; Financial Transparency
Aparna Thakur. Architecting explainable AI systems for payment compliance testing. World Journal of Advanced Research and Reviews, 2025, 26(01), 2561-2574. Article DOI: https://doi.org/10.30574/wjarr.2025.26.1.1339.
Copyright © 2025 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0