V. The Prep Window


If prep costs pennies but failure could cost everything, why wait? From a game-theoretic view, early AGI prep is the dominant strategy. The core logic is Pascal's wager: preparing early costs little; being unprepared could be catastrophic. Societies often sleepwalk into predictable crises—as evidenced by COVID-19. With AGI, the stakes are existential. Uncertainty demands preparation, not inaction.


The art of war teaches us to rely not on the likelihood of the enemy's not coming, but on our own readiness to receive him; not on the chance of his not attacking, but rather on the fact that we have made our position unassailable.

— Sun Tzu

To be prepared for war is one of the most effective means of preserving peace.

— George Washington


The storm's coming

Humans show a remarkable capacity to recognize approaching danger while simultaneously failing to act on that knowledge. This pattern emerges from a complex interplay of structural and psychological factors: the tragedy of the commons in distributed responsibility, short-term political and corporate incentives, and our natural discomfort with contemplating existential threats.

Throughout history, we've consistently assumed "someone else" will handle looming crises—from ancient wars to modern pandemics. Yet with artificial general intelligence (AGI), this tendency toward collective abdication faces its most consequential test. Unlike previous technologies that developed through visible, gradual stages, AGI follows an exponential progression that might compress decades of advancement into months once certain thresholds are crossed.

The most important lesson from strategy, finance, and even good old common sense: it's never enough to merely bet that disaster won't happen. You must build a position strong enough to survive if it does.

1 The one-way gamble

Pascal contends that a rational person should live as though God exists and actively strive to believe. The reasoning lies in the asymmetry of outcomes: if God does not exist, the individual incurs only finite losses, sacrificing certain pleasures and luxuries. But if God does exist, they stand to gain immeasurably, an eternity in Heaven in the Abrahamic tradition, while avoiding the boundless losses of an eternity in Hell.

You see this logic in action with climate change. Even if you thought there was only a 1% chance it’s real, the scale of possible disaster means you have to take it seriously. Not preparing is the far riskier bet.
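The same asymmetry can be written out as a tiny expected-value calculation. This is a minimal sketch in my own notation, not Pascal's: let p be the chance the threat is real, c the modest cost of preparing, and L the catastrophic loss if you are caught unprepared.

```latex
% Illustrative expected-value sketch of the wager (notation is mine, not Pascal's).
% p = probability the high-stakes event is real
% c = finite cost of preparing
% L = catastrophic loss if it happens and you are unprepared
\[
  \mathbb{E}[\text{prepare}] = -c,
  \qquad
  \mathbb{E}[\text{don't prepare}] = -p\,L .
\]
% Preparing is the better bet whenever pL > c. With the 1% example above,
% p = 0.01, so preparation pays whenever L > 100c. In Pascal's original,
% L is infinite, so any nonzero p settles the question.
\[
  pL > c \quad\Longrightarrow\quad \text{prepare}.
\]
```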

The same thing happened with COVID-19. If you were willing to entertain the possibility early, even if you weren’t sure, you could take simple steps to protect your family, move if needed, stock up, or avoid getting trapped somewhere bad. You didn’t have to be certain to act. And acting preserved your agency.

If you ignored the possibility altogether, you lost your ability to choose. You ended up at the mercy of the system around you. It’s the same with AGI. We can’t know for sure if we’ll get it in five years. But if we do, the world will change faster than anything we’ve ever seen. The real question is simple: If that future comes, do you want to be prepared — or caught flat-footed?

2 Asleep at the edge

Fool me once, shame on you; fool me twice, shame on me.

— Anthony Weldon

The pandemic provides direct evidence that even when risks are obvious and experts are raising alarms, most of the world stays focused on daily concerns while ignoring the looming crisis. The warnings were there, years in advance.

And yet when the pandemic arrived, almost no one was ready.

If that's how the world reacts to a threat it can see coming, what makes us think it will be different with AGI? The pattern seems disturbingly consistent: experts warn, institutions acknowledge but fail to act, and preparation remains theoretical until crisis forces a response.

3 The only certainty is uncertainty

The uncertainty stems from three opposing forces:

  1. Recursive self-improvement. AI designing better AI could accelerate progress unpredictably, shortening timelines.

  2. Unexpected bottlenecks. New constraints (e.g., energy, data quality) might emerge, lengthening timelines.

  3. Emergence. Scaling could unlock sudden capabilities, shortening timelines.

These forces push in opposite directions. And no one knows which will dominate.

You can't prove we’ll get AGI in five years. You can't prove we won’t. You can't even prove it one way or the other. In investing, when you face that kind of uncertainty, you don’t just guess better; you build a margin of safety. You prepare for the scenarios you hope won't happen. That’s how you survive.
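One way to make "build a margin of safety" precise, a gloss of my own rather than anything in the original: stop optimizing for your single best forecast and pick the action whose worst plausible outcome you can still survive.

```latex
% Maximin ("survive the worst case") choice under deep uncertainty.
% A = actions available now (prepare / don't), S = plausible scenarios
% (AGI in 5 years, in 20, never), U(a, s) = how action a fares in scenario s.
\[
  a^{*} = \arg\max_{a \in A} \; \min_{s \in S} \; U(a, s)
\]
% A forecast-driven chooser instead maximizes U(a, \hat{s}) for one predicted
% scenario \hat{s}, and collapses when \hat{s} is wrong: the 2008 failure
% mode the next paragraph describes.
```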

It was true in 2008 when bad forecasts sank entire companies. It will be true again with AGI. The difference between survival and disaster won’t be prediction accuracy. It will be preparation.

You don’t have to know exactly when the future will arrive. You just have to be ready when it does.

Preparation

The future isn't something that happens to you. It's something you either shape or get shaped by. We don't know exactly when AGI will arrive, or exactly what it will look like. But uncertainty isn't a reason to do nothing. It's the best reason to act.

When things change this fast, the cost of being unprepared is measured not just in money or opportunity, but in agency — in whether you're someone who gets to make choices, or someone who just has to live with them.

So the question isn’t, "Will this happen soon enough to matter?" The question is, "When it does, what position do I want to be in?" The smartest way to prepare for an uncertain future is to start early.

Coming up next: Which specific moves flip uncertainty into advantage during the transition?


Want me to send you new ideas?

If you’re enjoying this deep dive and want to keep up to date with my future writing projects and ideas, subscribe here:


Footnotes


The wisdom of strategy teaches us a simple truth: if you want to survive a storm, you don't wait until lightning strikes. You prepare while the sky is still blue. With AGI, we're still under mostly blue skies, but clouds are gathering on the horizon, and they move faster than people expect. AI could be the fastest storm of them all.

Pascal’s wager is one of the oldest arguments for why it’s rational to prepare for uncertain but high-stakes events. The original version was about God. Blaise Pascal, the seventeenth-century philosopher and mathematician, argued that you should live as if God exists, because if you're wrong, you lose a little, but if you're right, you gain everything. In his words: if God doesn't exist, you lose only finite pleasures. But if He does, you gain eternity, and avoid boundless loss.

It’s not a perfect argument for questions of faith; people like Richard Dawkins have pointed out the problems. But it’s still a useful way to think about risks where the downside is massive and the upside of preparation is much larger than the cost.

One reason I take AGI seriously, and on a short timeline, is because history repeats. Our collective capacity for self-deception in the face of visible catastrophe is a persistent feature of history. Just ask Winston Churchill. He warned of the threat of Hitler ad nauseam before he was taken seriously and finally made prime minister. Luckily, it wasn't too late.

Consider how almost everyone sleepwalked into COVID-19, despite clear warnings. You might argue the death toll wasn't catastrophic compared to historical plagues, but that was more fortune than foresight. And AGI, if it goes wrong, won't just affect some people; it will touch everything and everyone.

In 2015, Bill Gates warned in his TED Talk that our next catastrophe would likely be a virus, not war, and that our response systems were inadequate. By 2017, epidemiologist Michael Osterholm had published Deadliest Enemy, declaring a pandemic inevitable and outlining preparation plans that governments largely ignored. That same year, Jeremy Konyndyk predicted in Politico that U.S. neglect of health emergency systems would lead to disaster. By 2018, Ed Yong wrote plainly in The Atlantic that global travel, urbanization, and weak health systems made a deadly pandemic inevitable, stating "much worse is coming." Most telling, just months before COVID-19 struck, the Johns Hopkins Center for Health Security reported that not a single country, not even the wealthiest, was fully prepared for a pandemic.

This time, even more voices are sounding the alarm about artificial intelligence. This time, we've been warned. The question is: have we learned to listen?

Our previous analysis suggests three possible timelines: AGI around 2030 appears most plausible; timelines beyond five years remain moderately plausible; and development within the next five years seems least plausible, though we can't dismiss it entirely.

One of AI's founders thinks we're building the equivalent of nuclear reactors without control rods. Russell's warning comes after 40 years advancing the very field he now fears.

Found posthumously in his papers; the probability pioneer died at 39.

Dawkins meant to demolish Pascal's wager but accidentally strengthened its application to extinction risks. His book raises the perfect question for AI alignment: why would a superintelligence value what humans value?

Standard economic models can't handle genuine uncertainty. Weitzman showed that unlikely but extreme outcomes should dominate our calculations; we've been drastically underestimating climate risks.

The trader who made millions betting against conventional wisdom. Systems that gain from disorder, what Taleb calls "antifragile", might be our best defense against unpredictable catastrophes.

Seventeen years. That's how long we had to prepare for COVID after the first detailed warnings. This compilation documents 25 specific pandemic alerts experts ignored, including Bill Gates' now-famous 2015 TED talk.

The AI equivalent of compound interest: systems improving themselves, creating better systems, which improve themselves further. Bostrom coined "recursive self-improvement" to describe this potentially explosive feedback loop.

After two years analyzing AI progress at Open Philanthropy, Cotra gives a 10% chance of AGI by 2025 and a median estimate of 2040. Her forecast model remains the most comprehensive available.

Neural networks suddenly "getting it" after appearing to merely memorize data. This phenomenon, called "grokking", might be the most important AI behavior you've never heard of.

Either we go extinct soon, colonize the stars, or live in a simulation; there are no boring futures left, according to Karnofsky's analysis.

The principle that made Warren Buffett the world's greatest investor. Graham's approach: you don't need to know something's exact value, just that it's worth significantly more than you're paying. That margin of safety, applied to existential risks, means preparing for worst-case scenarios even without precise probabilities.

We're wired to ignore low-probability, high-impact events until they happen. the 2008 crash using this insight about human blindspots.
