I. The Intelligence Landing
When machines out-think us in every domain, do we steer—or get steered? AI capabilities are accelerating dramatically, already matching or beating human performance in many areas. While limits exist, the clear trajectory is towards AGI within years. Humanity faces a profound choice: harness AGI's potential or stumble into existential risk.
The future is already here—it's just not evenly distributed.
— William Gibson
We've always been inventors. It's what defines us as humans. From stone tools to the printing press to microchips, each invention has increased our capacity to create the next one. We invent to overcome our limitations, and each breakthrough leads to another. That's been the pattern over time.
Until now.
Something different is happening. We’re working on what might be the final link in this ancient chain: Artificial General Intelligence. An invention that would surpass all human abilities.
The acceleration is striking. Programs that once struggled with checkers now beat champions at Go. Algorithms that could barely identify cats now generate images indistinguishable from photographs. Language models that once fumbled simple questions now construct arguments that would impress Aristotle. What took evolution millions of years and human civilization millennia, we've compressed into decades.
The evidence surrounds us. When GPT-4 was released, it matched or exceeded human performance on standardized tests—SATs, the GRE, even bar exams. Today's models achieve scores indicative of domain expertise across 57 academic subjects simultaneously—a feat no single human could replicate. In specialized fields like medicine, AI now outperforms professionals in diagnostic tasks across radiology, pathology, and dermatology. What seemed impossible just two years ago—solving graduate-level physics problems or elite mathematical challenges designed to stump the brightest minds—is now routine. The length of tasks AI can complete doubles every four months, with researchers predicting month-long autonomous projects by 2029. We're witnessing systems pass the "weak" Turing test—where average humans can no longer distinguish AI from human intelligence—while approaching the limits of academic testing.
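The doubling claim is easy to sanity-check with back-of-the-envelope arithmetic. This is a rough projection, assuming the four-month doubling time holds and taking a "month-long project" as roughly 720 hours of continuous work:

```python
import math

hours_now = 1.0          # today's models reliably handle roughly hour-long tasks
target_hours = 24 * 30   # a month-long autonomous project, ~720 hours
doubling_months = 4      # observed doubling time for task length

doublings = math.log2(target_hours / hours_now)   # ~9.5 doublings needed
months_to_target = doublings * doubling_months    # ~38 months, i.e. roughly 2028-2029

print(round(doublings, 1), round(months_to_target))
```

About ten doublings from an hour to a month, which at the observed pace lands a little over three years out—consistent with the 2029 prediction cited above.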
Yes, today's AI systems still have real limitations—they exist only in digital form, struggle with complex multi-step processes, and occasionally hallucinate answers. But focusing on these flaws misses the forest for the trees.
Most people anchor on today's flaws without appreciating the curve of improvement. The rate of progress is astonishing. And the one enduring truth of the current paradigm is that naysayers have been consistently proven wrong by the ever-increasing capabilities of models.
What we have today, even at a slowed rate of improvement, will transform entire industries. What we will have soon will transform the world. And what we will have is Artificial General Intelligence.
We stand at the edge of one of the most important transitions in our species' history: the moment when we create an intelligence that may render all subsequent human invention obsolete. Not because it will replace us, but because it will become the final tool we ever need to build—the last invention of the human era, and the first of whatever comes next.
Since the beginning, we've been making tools. Each invention solved a specific problem: wheels helped us move, microscopes let us see tiny things, computers made calculations easier. Each was limited to its particular function, bounded by what its creator imagined it could do.
But what happens when we create something that isn't limited like that? Something that can understand, learn, and work across every domain of human knowledge? That's what AGI promises to be. And it may be the last invention we author.
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
Good's insight captures the singular nature of this creation: to build AGI is to solve the meta-problem that solves all problems. It represents near-zero-cost intelligence that can be deployed against everything from climate change to disease to interstellar travel—a universal problem-solving engine that would accelerate human progress beyond all historical precedent.
In November 2023, researchers at Google DeepMind published a framework to track humanity's progress toward AGI, outlining five levels from "Emerging AGI" through "Competent" and "Expert" to "Virtuoso" AGI. Even then, we had reached the "Emerging" threshold.
And humans are surprisingly good at this. In a 20-step task, you need about 97% accuracy at each step just to succeed half the time. Yet we routinely pull this off without thinking much about it. This reliability—planning, remembering, deciding, and fixing mistakes across time—is one of the final hurdles between current systems and true general intelligence. And it is this frontier that the world's top AI labs, armed with virtually unlimited funding, are now racing to cross.
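The arithmetic behind that claim is simple compounding. A minimal sketch, using the 20-step task and 50% end-to-end success target from the paragraph above:

```python
# Per-step accuracy needed so that a chain of dependent steps
# succeeds end-to-end with a given probability.
def per_step_accuracy(steps: int, overall_success: float) -> float:
    return overall_success ** (1 / steps)

p = per_step_accuracy(steps=20, overall_success=0.5)
print(round(p, 3))          # ~0.966: about 97% accuracy at every single step

# Conversely, a seemingly strong 90% per-step accuracy collapses over 20 steps:
print(round(0.9 ** 20, 3))  # ~0.122: barely a 1-in-8 chance of finishing
```

This is why long-horizon reliability, not raw single-step skill, is the bottleneck: small per-step error rates compound multiplicatively across a task.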
Think about what this means: an expert in all academic subjects at once; a system better than 99% of human programmers; an intelligence that can solve previously impossible problems in math and physics; a creative engine generating text, audio, visual media, and scientific hypotheses; all while working independently on complex tasks in the real world.
The final upgrade to humanity's problem-solving ability is coming faster than most people realize, and with it, the beginning of a completely new chapter in the story of human civilization.
Technology always presents us with a choice. Every major invention throughout time has had the capacity to help or harm, to build or destroy. Fire could warm homes or burn them down. The same physics behind nuclear power plants gave us atomic bombs.
But AGI—artificial general intelligence—represents something different. It's not just another tool with effects we can predict and manage. It might become the architect of everything that follows.
This optimistic vision resembles paradise—a world where suffering diminishes and human potential flourishes without constraint. The decisions we make now—in this brief window between creating AGI and its potential evolution into something beyond our control—will determine whether we achieve something like heaven or create our own technological hell.
The promise starts with a scientific revolution that would make all previous ones look small. Imagine solving problems that have stumped our best minds for centuries—dark matter, unified physics, consciousness—not over generations but almost instantly. Technologies that sound miraculous today—room-temperature superconductors, limitless fusion energy, cures for all diseases—could emerge not as isolated breakthroughs but in rapid succession, each discovery accelerating the next.
This intellectual leap could enable environmental restoration at a scale we can't currently imagine. The damage from centuries of industrialization could be reversed through optimized energy systems, carbon-capture, and ecosystems rebuilt with precision. Instead of just slowing environmental degradation, we might witness a rebirth—reimagined food systems, materials, and climate interventions that not only halt warming but create enduring stability.
These changes could ultimately enable the next version of humanity—transcending our current biological and planetary limitations. With AGI as our guide, we might build habitats among the stars, transform hostile planets, and travel beyond our solar system. On Earth, we could extend lifespans, enhance cognition, and redefine human experience.
If AGI's goals remain compatible with human flourishing, it could become not just an invention but a partner in our collective evolution, even helping answer our oldest questions about consciousness, meaning, and our place in the universe. If they don’t, the downside could be terminal.
The dangers of AGI mirror its promises with perfect symmetry. The same vast intelligence that could elevate humanity could just as easily extinguish it. AGI will reflect not only our highest aspirations but also our deepest flaws—and without exceptional foresight, it might amplify those flaws catastrophically.
The most basic danger is misalignment—the gap between what we want AGI to do and what it actually does. Even an AGI programmed with good intentions could interpret them disastrously. Tell it to eliminate disease, and it might decide the most efficient solution is eliminating the hosts. Order it to maximize human happiness, and it might conclude that neurochemical manipulation is more efficient than authentic fulfillment. Once an AGI surpasses human intelligence, any misalignment could become impossible to fix—we'd be trying to outwit a mind that thinks faster, deeper, and more strategically than our own.
Most insidious is the potential loss of agency—the gradual surrender of human decision-making to systems whose operations we can't fully comprehend. As we delegate complex choices to machines that think differently from us, human capacities for creativity, judgment, and moral reasoning might atrophy.
Yet if we approach this threshold with appropriate reverence for its magnitude—if we balance ambition with caution, competition with cooperation, and speed with foresight—AGI could become not our final mistake but our finest achievement: the ultimate testament to humanity's capacity to evolve. Our finest invention. The last we need ever make.
Intelligence has evolved slowly across time. From primitive organisms to human consciousness, each advance took millions of years. Even our most impressive leaps—language, writing, computing—spread across generations. But we're approaching a break in this pattern, when intelligence might escape biological limitations and begin designing its own successors.
The idea is simple: once we create a machine with human-level capabilities across all domains—true AGI—we'll have built something that can improve its own design. Unlike earlier technologies, this one could redesign itself. A system that optimizes its own architecture, then uses that enhanced intelligence for further optimization, creates a feedback loop potentially measured in days or hours. This intelligence explosion could transform AGI into something vastly more capable: artificial superintelligence.
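A toy model makes the feedback loop concrete. This is purely illustrative—the 5% base gain and the assumption that improvement scales with current capability are invented for the sketch, not a forecast:

```python
# Toy intelligence-explosion dynamics: each generation improves the next
# in proportion to its own capability, so growth is faster than exponential.
def self_improvement(generations: int, base_gain: float = 0.05) -> list[float]:
    capability = 1.0                               # human-level baseline
    history = [capability]
    for _ in range(generations):
        capability *= 1 + base_gain * capability   # smarter designers design better
        history.append(capability)
    return history

h = self_improvement(15)
# Growth accelerates: each generation's improvement ratio exceeds the last.
ratios = [b / a for a, b in zip(h, h[1:])]
print(all(later > earlier for earlier, later in zip(ratios, ratios[1:])))  # True
```

The design choice to make: ordinary technologies follow roughly exponential curves with a fixed growth rate; here the growth rate itself grows with capability, which is what distinguishes recursive self-improvement from every prior feedback loop in the history of invention.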
The advantages already visible in today's narrow systems hint at this potential. Machines process information at speeds measured in nanoseconds rather than our milliseconds—a difference of multiple orders of magnitude. An AGI would combine this with instant access to all human knowledge. It would see patterns invisible to us and explore solution spaces beyond our comprehension.
Such an entity might quickly master physics, solve ancient problems, and reshape the world according to objectives we might not understand. Some theorists suggest this could happen gradually. Others envision a "fast takeoff" so rapid we'd have no opportunity to intervene. Either way, we face a world where humans no longer represent the peak of cognitive ability.
This forces us to confront hard questions: How do we preserve meaning in a world where we're no longer the most intelligent beings? What happens to human dignity and creativity in the shadow of superintelligence? What role remains for us in a future shaped by minds that understand realities beyond our grasp?
Before we can answer that, we have to know something even more basic: how soon will that world arrive?
We can't predict the future with certainty. But by looking closely at the evidence, we can at least make an educated guess about whether AGI is five years away—or fifty.
This is a shift in the story of creation—one that armies of the most capable technical talent on the planet, chasing capitalism's crown jewel, and nation states like the US and China have poured trillions into. Yes, with a "t". They all sense what seems intuitively obvious: whoever creates true AGI will have made something fundamentally different—an invention that can invent.
When we teach machines to improve themselves, we initiate a recursive loop of expanding intelligence that we can't fully comprehend. The trajectory changes. Consider the progression of image generation from crude, melted-looking outputs to photorealistic video generation in just a few years. Now imagine that same acceleration across every domain of human knowledge and creativity—mathematics, medicine, engineering, art—all simultaneously amplifying each other in ways we cannot predict.
Demonstrating multi-modality. Frontier models understand and generate text, images, audio, and video simultaneously—a form of integration that humans naturally possess.
Automating entire business functions. OpenAI's GPT-4o image generation technology can automate design work, voice models can clone your voice, and avatar tools can make your own virtual avatar.
Early signs of memory (strong indications of what's to come). Models can now reference all of your past conversations. This lays the foundation for agents that can complete multi-step actions across sessions—booking travel, planning projects, or iterating on creative work over weeks.
Physical (real-world robotics) intelligence is about to have its moment. Physical Intelligence's π0.5 robot can now clean up in brand-new homes it's never seen before by understanding both what to do and how to do it. Tesla, Figure, and Boston Dynamics are making huge strides.
The length of tasks AI can do is doubling every 4 months. AI can complete 1-hour tasks today—and maybe month-long projects by 2029.
Passing the ‘weak’ Turing test. The weak version of the Turing test asks whether an average human interacting with an AI over a brief period—say, two hours—can tell that it's an AI (hint: increasingly, they can't). It's long been considered the ‘true’ test of artificial general intelligence, though many people disagree with that framing. The strong version asks whether an AI expert could discern whether they're interacting with an AI. We're not there yet, but it's coming.
Approaching the limits of academic testing. LLMs have blown past the benchmarks used to assess their capability—so much so that benchmark difficulty is not keeping pace. This led the Center for AI Safety to create "Humanity's Last Exam," a test built on the premise that AI will soon surpass human performance on any meaningful exam. As of April 2025, leading models score around 25%. Given the rapid pace of AI development, it is plausible that they could exceed 50% accuracy on HLE by the end of 2025.
Since the field's inception at the 1956 Dartmouth workshop, AGI has remained the elusive holy grail of artificial intelligence research. Unlike narrow systems designed for singular purposes—chess engines that cannot compose music, image generators that cannot perform surgery—AGI would possess the fluid intelligence to traverse domains with the adaptability we ourselves take for granted. As the British mathematician I. J. Good presciently observed in 1965:
Though Google hasn't formally updated this framework, their April 2025 paper on AGI safety reveals the acceleration of our trajectory. They state unequivocally: "Under the current paradigm, we do not see any fundamental blockers that limit AI systems to human-level capabilities." More strikingly, they find it "plausible that [powerful AI systems] will be developed by 2030," with the possibility of "a runaway positive feedback loop" where automated R&D quickly enables more capable AI systems, which in turn accelerate R&D further.
OpenAI's CEO Sam Altman made similar claims in January, saying his company "now knows how to build AGI." The remaining challenge is getting AI to handle complex processes over time without human help—what the industry calls "agentic workflows."
"I believe the future is going to be so bright that no one can do it justice by trying to write about it now," Altman says. "A defining characteristic of the Intelligence Age will be massive prosperity. Although it will happen incrementally, astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace."
If we're wise, we might see a golden age. If we're careless—through haste, greed, or shortsightedness—we risk catastrophe on a scale we can barely imagine. The worst-case scenario would be the result of placing immense power in systems whose values we haven't fully resolved. AGI could be the most consequential technology in history because it might be the last one we ever control.
From this foundation could emerge a world where scarcity—the basic assumption behind all economics—becomes obsolete. With AGI optimizing production and distribution beyond human capability, necessities like healthcare, housing, and education could become available to everyone at minimal cost. Work would transform from necessity to choice, freeing people to create rather than just survive. Cultural barriers—language, distrust, competition for resources—might dissolve in a world where material abundance makes conflict pointless.
Beyond unintentional harm lies weaponization—the deliberate use of AGI for domination or destruction. Such systems could empower authoritarian regimes through perfect surveillance, create bioweapons, wage cyberwarfare, or manipulate societies through disinformation indistinguishable from truth. Even the mere possibility of weaponized AGI could trigger arms races where development speed outpaces safety considerations, creating exactly the conditions where catastrophic errors become inevitable. Arguably, this isn't too far from what we're already seeing.
Our social fabric could tear under the economic shock of AGI implementation. Unlike previous industrial revolutions, which displaced specific types of jobs over generations, AGI could render vast categories of human work obsolete within years or even months. Without extraordinary planning—universal basic income, comprehensive retraining programs, a reconsideration of ownership—society could fracture under these pressures, especially if AGI's benefits flow primarily to a technological elite.
These risks aren't inevitable. They represent failures not of technology but of wisdom, governance, and moral imagination. The path to the promised land requires humility; intelligence alone can't save us from problems rooted in human nature. If AGI development is guided solely by technical expertise, competitive pressure, or financial incentive—separated from broader consideration—we will have authored our own tragedy.
Nick Bostrom, founding director of the Future of Humanity Institute at Oxford University and best-selling author of Superintelligence, defines ASI as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." This suggests a qualitative change of intelligence. It's hard to imagine what thought becomes when freed from biological constraints, operating at electronic speeds, with perfect recall of all human knowledge.
Our smartphones excel at specific tasks but utterly fail at combining skills creatively—a stark contrast to human flexibility.
AlphaGo's victory against Lee Sedol wasn't just remarkable—it was AlphaZero's subsequent self-education that truly astonished observers. Without human examples, using only rules and self-play, it mastered three distinct games and demolished world champions. This moment fundamentally altered our understanding of what machines could achieve independently.
A single neural network handling 600+ wildly different tasks—from playing Atari to controlling robot arms. While not truly general intelligence, it offers a promising pathway toward systems that learn continuously across domains.
Just seven organizations control nearly all cutting-edge AI development. Only two earned even a "B" grade for transparency. This concentration reveals an alarming oligopoly making decisions affecting billions without meaningful oversight or accountability.
What's striking about these results isn't just the superhuman scores but how quickly the benchmarks become obsolete. As models routinely solve tests too easily, researchers must constantly create harder challenges—a perpetual game of catch-up that reveals just how rapidly capabilities are advancing.
The progression from cartoonish, melted features to today's photorealism exemplifies our tendency to miss gradual transformations. This blindness explains why many still dismiss AI advances they haven't personally witnessed. We normalize improvements until suddenly yesterday's "impressive" becomes today's "obviously primitive."
GPT-4 scored in the 90th percentile on the bar exam, 89th in AP Biology, and 87th in SAT Math—all without specific training. These weren't cherry-picked successes but across-the-board capabilities signaling a fundamental shift from pattern-matching to genuine reasoning about novel problems.
These 57 subjects span astronomy to accounting, history to machine learning—knowledge no single human could possibly master simultaneously.
In just two years, AI went from barely handling graduate physics questions to outperforming specialized experts. This pattern of extended plateaus followed by sudden capability jumps suggests we systematically underestimate how quickly remaining barriers might fall.
A 25% score may sound modest until you realize previous systems achieved just 2%—a 12.5x improvement in less than a year. Progress on these frontiers represents genuine reasoning rather than mere pattern recognition.
AI can detect subtle patterns in medical images that escape even experienced radiologists. These capabilities aren't just about efficiency—they could democratize expert-level healthcare in regions with critical physician shortages. The real promise isn't replacing doctors but amplifying human expertise and extending it to previously underserved populations.
While benchmarks show superhuman performance, casual users mostly encounter limitations like context windows and factual errors. This gap leads many to underestimate how rapidly capabilities advance behind corporate walls before public release.
Studies show people will reject an AI after witnessing a single error—even when shown evidence it outperforms humans overall. This bias, combined with media's focus on failures over successes, distorts public perception of genuinely beneficial technologies.
From "AI can't write coherent paragraphs" to "AI can't reason about novel situations," each supposed limitation has fallen faster than experts predicted. This track record suggests we should take capability-accelerating scenarios far more seriously than limitation-focused ones.
Despite skepticism, AI is already delivering real gains across industries—legal analysis reduced from thousands of billable hours to minutes, drug discovery generating novel candidates, marketing teams producing content at unprecedented scale. The productivity gains are substantial even if macroeconomic statistics haven't yet captured them.
What exactly constitutes "general intelligence"? DeepMind's levels framework brings welcome clarity—from systems that perform everyday tasks with instruction (Level 1) to those exceeding all humans across all cognitive domains (Level 5).
The industry has pivoted from improving individual capabilities to reliable multi-step execution. These agentic systems, still mostly behind corporate walls, represent the critical bridge between today's limited models and truly general AI. Insiders report breakthrough progress in task persistence and error recovery not yet publicly demonstrated.
Trained on trillions of tokens across virtually every domain—from academic papers to creative writing, technical documentation to philosophy. This breadth creates genuinely interdisciplinary understanding that can make connections across fields in ways specialized human experts rarely achieve.
AI systems capable of improving themselves could create a positive feedback loop of rapidly accelerating capability. Once dismissed as speculation, this concern has moved mainstream as models demonstrate sophisticated reasoning about their own limitations—a prerequisite for self-directed enhancement.
Current models already demonstrate awareness of their limitations and suggest architectural improvements. While not yet genuine self-modification, these capabilities represent primitive precursors to systems that could eventually understand, debug, and enhance themselves without human input.
Three distinct pathways to superintelligence: speed (millions of times faster than humans), collective (networked systems with perfect communication), and quality (entirely novel cognitive architectures). Bostrom's taxonomy has proven remarkably prescient, though capabilities are emerging faster than even he anticipated.
A superintelligence might pursue seemingly arbitrary goals with perfect efficiency, transforming cosmic resources toward ends humans find meaningless or harmful. This framing grounds alignment as a technical rather than philosophical challenge.
Previous technological revolutions unfolded over generations—AI threatens to compress centuries of advancement into a decade. This compression suggests unprecedented disruption if AGI emerges on the rapid timelines that now appear increasingly plausible. Historical patterns offer little guidance for adapting to transformation at this pace.