Why AGI won't happen by 2037: the hard limits of data & energy
AGI by 2037? Unlikely. Data exhaustion (2026-2028), 5GW energy demands, and $600B investment gaps reveal why the AGI timeline is longer than promised. Expert analysis.
The uncomfortable question nobody likes to ask
It has become a modern ritual to declare that artificial intelligence is racing toward human-level cognition. The louder the promises grow, the more uncomfortable the counter-question becomes: what if the curve is not rising endlessly but flattening right now? In an era soaked in confident predictions, the most contrarian observation is also the simplest. We are twelve years away from 2037, and the evidence suggests that real artificial intelligence, the kind that matches or surpasses general human reasoning, is still nowhere near materializing.
This tension defines today’s debate, and it grows more urgent as the world builds policy, infrastructure and billion-dollar strategies on assumptions that may never be met. In other words, our plans are accelerating faster than the underlying reality. Meanwhile, professionals are already losing competence as they rely on current AI systems that fall short of the promised capabilities, a dynamic explored in how AI dependency undermines professional expertise.
The Core of the Signal
- Acknowledge the Flattening Curve: Understand that the rapid AI development represents the steep phase of an S-curve, now approaching physical and data ceilings; anticipate deceleration instead of limitless progress.
- Manage Data Scarcity Actively: Recognize that the world’s supply of clean, human written text is nearing its end, threatening model collapse due to synthetic data and decelerating further AI advancements.
- Re-evaluate the Timeline: Expect that AI between now and 2037 will deliver specialized, narrow intelligence; temper expectations for general human-level cognition or AGI.
- Confront the Cognitive Divide: Focus on the fundamental difference; current models statistically predict, they do not conceptually or causally think; closing this gap requires architectural transformation, not merely scaling up.
The persistent myth of unstoppable acceleration
Each technological era carries its own emotional gravity, and ours is rooted in the belief that progress will continue at the dizzying pace of the past decade. Large Language Models have dazzled with their language fluency, summarization abilities and productivity gains. For many observers, it feels almost inevitable that these systems will soon think as broadly and as flexibly as humans do.
But the assumption of straight-line progress masks the true shape of technological revolutions. Historically, breakthroughs rise quickly, peak, then slope into plateaus determined by physics, cost and complexity. Modern AI is following that familiar pattern. The recent boom has not been the start of a limitless curve but the steep part of an S-curve already approaching its top. Understanding that shift is essential for evaluating the realistic odds that general intelligence will emerge by 2037, rather than assuming it as a given.
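The point about curve shape is easy to state mathematically. A minimal sketch using a textbook logistic function (all parameters illustrative, chosen only to show the shape) makes the trap visible: during the steep phase the curve is nearly indistinguishable from an exponential, yet past the midpoint every step delivers less than the one before it.

```python
import math

def logistic(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """A textbook S-curve: near-exponential growth early on, then a
    flattening approach to a hard ceiling. Parameters are illustrative."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Past the midpoint, each step-over-step gain is smaller than the last,
# even though the absolute level keeps creeping toward the ceiling.
gains = [logistic(t + 1) - logistic(t) for t in range(8)]
assert all(earlier > later for earlier, later in zip(gains, gains[1:]))
assert logistic(10) > 0.9999  # the level saturates near the ceiling
```

Extrapolating from inside the steep phase is exactly how an S-curve gets mistaken for limitless growth.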
The most critical constraints on AI are not mysterious. They are mathematical, physical and economic. They do not bend to ambition. They define it.
Data is the fuel, and we are running out
The world’s supply of clean, human written text is approaching its end. Studies from Epoch AI estimate that between 2026 and 2028, we will reach the limit of high quality public data suitable for training frontier models. Every major model released to date has been trained on most of the public internet. The next generation would require vastly more text to sustain performance improvements, yet that text simply does not exist in the required quantity.
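The arithmetic behind that window is simple enough to sketch. The figures below are round illustrative numbers in the spirit of the Epoch AI projection, not their published estimates: assume roughly 300 trillion tokens of usable public human text, a frontier run consuming about 15 trillion tokens in 2024, and training datasets growing around 2.5x per year.

```python
import math

# Toy data-exhaustion estimate. All three inputs are assumptions
# chosen for illustration, not Epoch AI's published figures.
stock_tokens = 300e12      # usable public human-written text
tokens_used_2024 = 15e12   # a 2024-class frontier training run
annual_growth = 2.5        # yearly growth factor of training datasets

# Years until cumulative demand overtakes the fixed stock of text.
years_left = math.log(stock_tokens / tokens_used_2024, annual_growth)
exhaustion_year = 2024 + math.ceil(years_left)
print(exhaustion_year)
```

Under these assumptions the crossover lands in the article's 2026 to 2028 window; the striking part is how insensitive the answer is, because exponential demand closes even a 20x headroom in a handful of doublings.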
The proposed workaround, synthetic data, introduces risks that extend far beyond technical inconvenience. Research from Shumailov and colleagues demonstrates how repeated training on model generated text creates a loss of variance, a statistical narrowing that amplifies errors and erases rare but important patterns. Once the training ecosystem is dominated by machine written content, models begin learning from distorted versions of their own output. The result, known as model collapse, is not improved intelligence but a slow drift into homogeneity.
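The variance-loss mechanism can be reproduced in a few lines. In this deliberately minimal sketch, a Gaussian stands in for a language model and the sample counts are arbitrary: each generation fits a distribution to samples drawn from the previous generation's fit, and because finite samples under-represent the tails, the estimated variance drifts toward zero.

```python
import random
import statistics

def refit_generations(n_samples=50, n_generations=200, seed=0):
    """Minimal model-collapse loop: fit a Gaussian to samples drawn
    from the previous generation's fitted Gaussian, repeatedly.
    Finite samples under-represent rare values, so variance decays."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    history = [sigma ** 2]
    for _ in range(n_generations):
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)  # population (MLE) estimate
        history.append(sigma ** 2)
    return history

# Averaged over several seeds, the fitted variance collapses well
# below the original 1.0: rare patterns vanish from the distribution.
finals = [refit_generations(seed=s)[-1] for s in range(10)]
print(sum(finals) / len(finals))
```

Real models are vastly more complex than a single Gaussian, but the direction of drift is the same: each round of training on generated data narrows the distribution it can reproduce.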
If you want the concrete, real world version of how synthetic training triggers model collapse and why it matters for AI search quality, read: When AI Models Start Training Themselves.
This is not merely a research challenge. As AI generated articles, summaries and posts flood the internet, the signal becomes contaminated. High quality human text becomes increasingly scarce and harder to isolate. The irony is striking. Just as the world becomes obsessed with building human level intelligence, the raw material required to support it begins to evaporate in plain sight.
The physical ceiling of energy and compute
Even if the data problem were solved, the physical limitations of computing present another wall. The energy demands of training frontier models are reaching levels once reserved for national infrastructure. The proposed next generation clusters from leading AI labs are projected to require around five gigawatts of power. That is the equivalent of several nuclear power plants dedicated to a single training run. In practical terms, such facilities cannot be built quickly. They require environmental approvals, grid upgrades and years of engineering work that does not care about hype cycles.
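To make that scale concrete, here is a back-of-envelope check. The reactor and household figures are commonly cited round values assumed for illustration (roughly 1 GW of electrical output per large reactor, and an average US household consuming about 10,500 kWh per year, or 1.2 kW of continuous draw); neither number comes from the article itself.

```python
# Back-of-envelope scale check for a 5 GW training cluster.
CLUSTER_GW = 5.0
REACTOR_GW = 1.0    # assumed: ~1 GW electrical output per large reactor
HOUSEHOLD_KW = 1.2  # assumed: ~10,500 kWh/year average US household

reactors_needed = CLUSTER_GW / REACTOR_GW
households_equiv = CLUSTER_GW * 1_000_000 / HOUSEHOLD_KW
annual_twh = CLUSTER_GW * 8760 / 1000  # if run continuously all year

print(f"~{reactors_needed:.0f} reactors, "
      f"~{households_equiv / 1e6:.1f} million households, "
      f"~{annual_twh:.0f} TWh per year")
```

Under these assumptions a single cluster matches the output of five large reactors and the average draw of roughly four million homes, which is why grid approvals and transmission upgrades, not chips, set the schedule.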
If you want the concrete, real world version of what “five gigawatts” means in infrastructure, geopolitics, and local communities, read: AI Datacenters Now Demand 5 Gigawatt Power.
Meanwhile, hardware progress has slowed sharply. Moore’s Law has weakened, and the cost per transistor no longer reliably falls with each new node. Memory bandwidth, not processing power, has become the bottleneck. Chips can compute faster than they can be fed with data. Improvements in architecture are not keeping pace with the scale required for robust leaps in model capability.
The industry often frames scaling as a choice. In reality, it is a contest against physics. Physics usually wins.
The economic bubble forming beneath the optimism
There is a widening gap between the capital flowing into AI infrastructure and the revenue needed to sustain it. According to analyses referenced by Sequoia Capital, the annual revenue required to justify current investment levels sits near six hundred billion dollars. Actual revenues are far below that threshold. Investors are betting on future products that have not yet appeared and may not appear quickly enough to cover the cost of the hardware being deployed ahead of them.
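The six hundred billion figure is not arbitrary; it follows from a short chain of assumptions in the widely circulated Sequoia analysis (annualized GPU spending, a rough doubling for the rest of the datacenter, and healthy gross margins for whoever resells the compute). The sketch below restates that published back-of-envelope logic with its 2024 inputs.

```python
# Restating the Sequoia "$600B question" arithmetic (their 2024 inputs).
gpu_revenue_b = 150          # annualized GPU spending forecast, in $B
datacenter_multiplier = 2.0  # power, land, cooling roughly double chip cost
end_user_margin = 0.5        # compute buyers need ~50% gross margins

total_capex_b = gpu_revenue_b * datacenter_multiplier  # total buildout cost
required_revenue_b = total_capex_b / end_user_margin   # revenue to justify it
print(f"${required_revenue_b:.0f}B in annual AI revenue required")
```

Each step is debatable on its own, but even halving every multiplier leaves a revenue requirement far above what AI products currently earn.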
This situation mirrors earlier speculative eras, particularly the early 2000s when fiber networks were built far faster than usage could justify. The difference today is scale. The numbers involved in AI are larger, timelines are tighter and the risks are global. If the expected breakthrough to true artificial intelligence lags past 2037, the financial correction could be severe, with cascading automation consequences across industries that have reshaped their strategies around unrealistic timelines.

The cognitive wall that refuses to move
Beyond the data, hardware and economics lies the deepest obstacle of all. Today’s AI systems do not think. They predict. Their intelligence is statistical, not conceptual. They do not build internal causal models of the world. They do not understand physics, intention or meaning. They excel at patterns but falter at reasoning.
This limitation becomes visible in out of distribution cases. When confronted with unusual problems that fall outside their training experience, models guess. They may guess confidently, but they still guess. In critical fields such as healthcare or scientific discovery, guessing is unacceptable.
The divide between linguistic fluency and grounded understanding remains profound. Large Language Models operate entirely through text. They have no sensory experience, no embodiment. They cannot feel weight, observe motion or understand a scene except as vectors. Researchers like Yann LeCun argue for the development of robust world models, but even optimists acknowledge that such systems are many years away, and the path toward them is unclear and fragmented.
By 2037, these cognitive gaps will not magically close. They require architectural transformations, not merely larger versions of today’s models. This point rarely surfaces in corporate forecasts, but it quietly shapes the real limits of what artificial intelligence can become in the next twelve years. The scenario of recursive self-improvement, where AI rewrites its own code to become exponentially smarter, remains theoretical precisely because these fundamental cognitive barriers have not been solved.
The barrier no lab can engineer away
Even if data, energy, and cognition barriers fall faster than expected, one obstacle rarely appears in forecasts: us. The humans who must live with artificial general intelligence have not agreed to let it arrive.
Governments worldwide are building AI legislation designed to slow or block high-risk applications. The EU AI Act imposes strict requirements on systems that affect safety, employment, and fundamental rights. American executive orders push federal agencies toward oversight frameworks that did not exist two years ago. Chinese directives mandate algorithm registration and content controls that no foreign lab can sidestep. Each jurisdiction adds friction. Together, they form a regulatory web that grows denser by the quarter.
Public resistance runs parallel. Automation now touches jobs, creative work, and daily decisions in ways that feel immediate rather than theoretical. Trust erodes when chatbots hallucinate medical advice or hiring algorithms reject qualified candidates without explanation. Surveys consistently show that majorities want stronger AI regulation, not weaker. That sentiment translates into political pressure, and political pressure translates into law.
AGI requires more than technical breakthroughs. It requires societal mandate, a broad consensus that the benefits outweigh the risks and that safeguards exist to manage what goes wrong. That mandate is nowhere in sight. The clock ticking toward 2037 is not only technical. It is political, cultural, and deeply human.
Looking forward from 2037 instead of toward it
When we reason backward from 2037, a clearer picture emerges. The probability of real, general artificial intelligence appearing by then is not high. Surveys of researchers who study AI timelines often place the odds somewhere between ten and fifteen percent. This is not a dismissal of progress. The next decade will deliver extraordinary specialized systems that surpass human experts in countless domains. Scientific discovery will accelerate. Medicine will become more predictive. Knowledge work will transform in ways that are profound and irreversible.
But none of these developments require general intelligence. They require focused intelligence, the kind that excels within boundaries. The public imagination often merges the two, yet the difference is vast. Narrow AI can reshape industries without approaching human level cognition. General intelligence demands grounding, reasoning, memory, abstraction and reliable behavior in unfamiliar situations. It is not simply a bigger version of what we have today. It is a different category altogether.
This distinction shapes everything from regulation to economic planning to societal expectations. If we expect general intelligence by 2037, policy will be misaligned. If we expect powerful but non general systems, the conversation shifts toward oversight, transparency and responsible usability design. The challenge is not that intelligence will not grow. The challenge is that it will grow into forms that require new governance and careful handling rather than blind faith.
The story behind the hype
The race toward 2037 is not a sprint to inevitable superintelligence but a negotiation with constraints that do not care about ambition. Data, energy, hardware, economics and cognition form a braided set of limits that shape the real boundaries of progress. When those limits are acknowledged, the future becomes clearer, not darker.
We will live in a world filled with extraordinary tools but not machines that think like us. We will gain capabilities that reshape industries but not the emergence of a synthetic mind. That reality is not disappointing. It is grounding. It places responsibility back into human hands, where ethics, policy and long term planning matter more than speculative promises.
The hard truth is that by 2037, the world will not see real artificial intelligence in the strict sense. It will see something more complex, more unpredictable and more human dependent. And that may be exactly why the decisions we make today carry so much impact.
Related signals
- Logic Fails: AI Now Sees Patterns - Explains why probabilistic prediction replaces causal reasoning, the core cognitive gap blocking AGI by 2037.
- AI Datacenters Now Demand 5 Gigawatt Power - Grounds the energy ceiling in today’s buildout: grid bottlenecks, financing, and who pays locally.
- Recursive AI Self-Improvement: The Final Tipping Point - Tests the one scenario that could beat the 2037 ceiling, assuming grounded reasoning barriers are solved first.
- How Likely Is an AI Bubble? Market Signals & Valuations - Quantifies valuation risk behind the $600B revenue gap that could choke funding before AGI timelines mature.
References
Shumailov I, Shumaylov Z, Zhao Y, et al. AI models collapse when trained on recursively generated data. Nature. 2024.
Epoch AI. Forecasting the limits of data for scaling language models. Epoch Research. 2024.
Kaplan J, McCandlish S, Henighan T, et al. Scaling laws for neural language models. arXiv:2001.08361. 2020.
LeCun Y. A path towards autonomous machine intelligence. OpenReview. 2022.