Senior SAP Security and Governance Specialist.
World Journal of Advanced Research and Reviews, 2026, 29(01), 285-294
Article DOI: 10.30574/wjarr.2026.29.1.0007
Received on 27 November 2025; revised on 04 January 2026; accepted on 07 January 2026
The rapid integration of Artificial Intelligence (AI) systems across critical sectors such as healthcare, finance, autonomous transportation, and national security has fundamentally altered the global cybersecurity threat landscape. Unlike traditional software systems, AI introduces novel vulnerabilities rooted in data-driven learning, model opacity, and high-dimensional decision boundaries. This paper presents a comprehensive analysis of the evolving threat landscape in AI systems, focusing on adversarial machine learning attacks, data poisoning, privacy inference, model extraction, supply-chain vulnerabilities, and emerging risks in generative AI and large language models (LLMs). A structured taxonomy of AI-specific threats is proposed, mapping attack vectors to lifecycle stages and adversary capabilities. The study further evaluates real-world attack scenarios, sector-specific impacts, and systemic risks arising from interconnected AI ecosystems. The paper concludes by outlining detection strategies, governance considerations, and future research directions necessary to ensure secure, trustworthy, and resilient AI deployments.
AI Security; Adversarial Machine Learning; Data Poisoning; Model Extraction; Privacy Attacks; Large Language Models; Threat Modeling; Cybersecurity
Vishnu Kiran Bollu. Threat Landscape in Artificial Intelligence Systems: Taxonomy, Attack Vectors and Security Implications. World Journal of Advanced Research and Reviews, 2026, 29(01), 285-294. Article DOI: https://doi.org/10.30574/wjarr.2026.29.1.0007.
Copyright © 2026 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.