1 Faculty of Arts, Science and Technology, Wrexham University, United Kingdom.
2 Faculty of Computing and Social Sciences, University of Gloucestershire, United Kingdom.
Dinesh Deckker; ORCID: 0009-0003-9968-5934
Subhashini Sumanasekara; ORCID: 0009-0007-3495-7774
World Journal of Advanced Research and Reviews, 2026, 29(01), 111-134
Article DOI: 10.30574/wjarr.2026.29.1.0011
Received on 22 November 2025; revised on 03 January 2026; accepted on 05 January 2026
This paper critically reviews evidence from 2023–2025 on scaling laws and foundation models. It also examines claims about an AI Singularity. Here, the Singularity means recursive self-improvement that leads to sudden capability jumps, not merely broad automation. The paper asks what scaling results genuinely support and what they do not. It also explains how technical findings become institutional strategies and long-term commitments. The method is a narrative synthesis of peer-reviewed studies, technical reports, and governance frameworks. The paper proceeds from concepts and history to technical limits, then to evaluation and agents, to narratives and counter-narratives, and finally to governance, productivity, and future research.
The analysis finds that scaling laws can still predict training loss in stable settings. However, real-world capability often improves in jumps rather than in smooth gains, and these jumps correlate only weakly with perplexity. Public benchmarks now act like short-lived public goods: they are easily contaminated and shaped by Goodhart pressures. Inference-time reasoning can raise accuracy on some tasks. Nevertheless, it does not reliably reduce hallucinations, and it can even make wrong answers sound more convincing. This weakens the idea that more compute per answer creates trustworthy autonomy. Singularity forecasts also face bottlenecks. Software engineering is one, because architecture, verification, and maintenance are complex. Trust is another, as synthetic content floods the web and degrades confidence in text. Physical limits matter too, especially grid capacity and the slow pace of infrastructure build-out. The paper argues that peak hype may come before peak impact: even if scaling slows, adoption will still take years. Governance should focus on measurable precaution, auditability, competition, procurement tools, and plural infrastructures for global equity. Future research should prioritise process supervision, human–AI epistemics, and an energy–intelligence exchange rate.
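For readers unfamiliar with the scaling relations the abstract refers to, the canonical published form is a power law in model parameters N and training tokens D (Hoffmann et al., 2022). The sketch below is illustrative only; the reviewed paper does not necessarily adopt this exact parameterisation:

\[
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]

Here L is predicted training loss, E is an irreducible loss floor, and A, B, \(\alpha\), \(\beta\) are empirically fitted constants. The abstract's point is that such fits can remain accurate for loss while telling us little about when downstream capabilities appear.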
AI scaling laws; foundation models; evaluation crisis; inference-time reasoning; infrastructure and energy constraints; Singularity narratives
Dinesh Deckker and Subhashini Sumanasekara. Scaling Laws, Foundation Models, and the AI Singularity: A Critical Appraisal of 2023–2025 Evidence. World Journal of Advanced Research and Reviews, 2026, 29(01), 111-134. Article DOI: https://doi.org/10.30574/wjarr.2026.29.1.0011.
Copyright © 2026. Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.