1 Department of Computer Science, University of Illinois at Springfield, USA.
2 Department of Cybersecurity, American National University, Kentucky Campus, USA.
World Journal of Advanced Research and Reviews, 2025, 27(01), 331-351
Article DOI: 10.30574/wjarr.2025.27.1.2541
Received on 18 May 2025; revised on 30 June 2025; accepted on 03 July 2025
The escalating complexity and frequency of malware attacks pose a significant challenge to conventional cybersecurity frameworks, particularly in scenarios demanding high data privacy and cross-organizational threat intelligence sharing. Traditional centralized machine learning models for malware detection often rely on aggregating data in a central server, increasing the risk of data breaches and limiting deployment in privacy-sensitive environments such as healthcare, finance, and critical infrastructure. To address these limitations, this study explores an integrated approach that combines Federated Learning (FL) with Explainable Artificial Intelligence (XAI) to enhance malware detection while preserving user privacy and system confidentiality. Federated learning enables the collaborative training of robust malware classifiers across multiple decentralized nodes without sharing raw data, thus maintaining local data sovereignty and complying with data protection regulations. The proposed framework incorporates deep learning architectures, such as convolutional neural networks (CNNs), trained in a federated environment using feature vectors extracted from malicious binaries and behavior logs. To ensure transparency and trust in model predictions, explainable AI techniques, specifically SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), are integrated, providing actionable insights into the model's decision-making process. This study also presents a comprehensive evaluation using a benchmark malware dataset distributed across simulated client environments, measuring detection accuracy, communication overhead, privacy leakage, and interpretability performance. Results demonstrate that the FL-XAI approach achieves detection rates comparable to centralized models while ensuring data confidentiality and interpretability.
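The federated training loop described above can be illustrated with a minimal FedAvg sketch. This is not the paper's implementation: a logistic-regression classifier stands in for the CNN, and randomly generated feature vectors stand in for the malware dataset. The key property it demonstrates is that only model weights, never raw client data, reach the aggregation server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training pass; logistic regression stands in
    for the CNN malware classifier described in the paper."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid activation
        grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """One FedAvg round: each client trains locally on its own data,
    and the server averages the returned weight vectors, weighted by
    each client's dataset size. Raw samples never leave the client."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Simulated decentralized nodes with a shared underlying decision rule.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 2))
    y = (X @ true_w > 0).astype(float)  # labels: benign (0) / malicious (1)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = fed_avg(w, clients)
```

After the communication rounds, the aggregated model recovers the direction of the true decision boundary even though the server never observed any client's samples.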
The research contributes to the evolving field of privacy-preserving threat intelligence by offering a scalable and explainable framework suitable for real-time cybersecurity applications.
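The interpretability layer can likewise be sketched in miniature. The snippet below computes exact SHAP values for a linear model, where the closed form phi_i = w_i * (x_i - E[X_i]) holds; the paper's framework applies SHAP and LIME to a CNN, for which sampling-based approximations are needed, so this is only an assumed illustrative stand-in. It verifies SHAP's additivity property: per-feature contributions plus the average prediction over background data reconstruct the model output.

```python
import numpy as np

def linear_shap(w, X_background, x):
    """Exact SHAP values for a linear model f(x) = w @ x:
    phi_i = w_i * (x_i - E[X_i]), so contributions sum to
    f(x) minus the mean prediction over the background set."""
    return w * (x - X_background.mean(axis=0))

rng = np.random.default_rng(1)
w = np.array([1.5, -0.5, 0.0])           # toy classifier weights
X = rng.normal(size=(100, 3))            # background feature vectors
x = np.array([2.0, 1.0, 3.0])            # one sample flagged by the model
phi = linear_shap(w, X, x)               # per-feature contributions
base_value = X.mean(axis=0) @ w          # expected model output
```

In the malware setting, each phi value attributes part of a "malicious" score to a specific feature (e.g. a particular API call frequency), which is the kind of actionable insight the abstract attributes to the SHAP/LIME layer.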
Federated Learning; Explainable AI; Malware Detection; Privacy Preservation; Threat Intelligence; Model Interpretability
Kigbu Shallom and Chukwujekwu Damian Ikemefuna. Enhancing malware detection using federated learning and explainable AI for privacy-preserving threat intelligence. World Journal of Advanced Research and Reviews, 2025, 27(01), 331-351. Article DOI: https://doi.org/10.30574/wjarr.2025.27.1.2541.
Copyright © 2025 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.