VII. Humanity's Final Exam

If our make-or-break moment is only five years away, who leads the charge? AGI's potential arrival within five years would mark humanity's most pivotal moment: an existential crossroads that demands urgent action from all of us. Beyond job displacement, the real stakes are whether we can keep superintelligence under control at all. The window to steer AGI toward beneficial outcomes is closing fast, and coordinated action has to start now.

Lord, give me the strength to change the things I can, the tolerance to live with the things I can't, and the wisdom to know the difference.

— Reinhold Niebuhr

Nobody made a greater mistake than he who did nothing because he could do only a little.

— Edmund Burke


Choosing our path

We’re at a crossroads. The next three to five years could end up being the most important in human history. The arrival of Artificial General Intelligence (AGI) isn’t just another milestone, like smartphones or even the internet. It’s more like the discovery of fire or electricity—a force that could remake civilization, for better or worse.

The core question is deceptively simple: who will guide this change? The answer has to be: us.

History shows that transformative technologies don't follow predetermined paths—they follow human direction. The printing press democratized knowledge because people deliberately expanded its purpose beyond reproducing religious texts. Nuclear technology became more than weapons because scientists insisted on peaceful applications. The internet evolved from a military network to a global commons through conscious human choices.

This pattern reveals something important: when faced with powerful technologies, those who refuse to accept supposedly "inevitable" outcomes are often the ones who reshape possibility itself.

It’s easy to feel like a bystander, as if AGI were something happening to us, controlled by a handful of companies or runaway market forces. But that’s wrong. Viktor Frankl, author of Man's Search for Meaning and Auschwitz survivor, put it simply:

"Everything can be taken from a man but one thing: the last of the human freedoms—to choose one's attitude in any given set of circumstances, to choose one's own way."

AGI’s future isn’t written yet. It’s ours to write.

The ten grand challenges

Eric Schmidt (ex-CEO of Google) posed a question: if it’s 2050, and everything turned out well, what did we have to solve to get there?

Gavin Leech tried to answer that with a list of ten hard problems:

  1. What general abilities do we need for good outcomes? Beyond technical expertise, we need wisdom, foresight, and cooperation at a level unprecedented in technological history.

  2. How do we make AGI reliable and secure as it grows more powerful? Ensuring robustness against failure, manipulation, and unintended consequences becomes civilization-critical as systems approach human capability.

  3. What big problems can AGI help solve? Directing AI toward our most pressing challenges—climate change, disease, poverty, sustainable energy—will determine whether it becomes salvation or distraction.

  4. How do we handle the economic shock? Previous technological transitions unfolded across generations; AGI may compress this to years or even months, requiring entirely new approaches.

  5. Who gets to build, use, and benefit from AGI? The distribution of potentially the most powerful technology in human history will shape power relationships for generations.

  6. What social and environmental damage needs prevention? From resource-intensive computation to manipulation of human psychology, AGI poses novel harms we must anticipate rather than react to.

  7. How do we coordinate powerful actors using AI? Preventing destructive competition while enabling beneficial cooperation requires institutional innovation at global scale.

  8. How does our social infrastructure need to change, and who governs it? The institutions and norms suitable for previous technologies may prove inadequate for AI that exceeds human capability.

  9. How does the human condition change when we're no longer the smartest entities? Understanding humanity's place in a world where we're not the pinnacle of intelligence will require philosophical insight as much as technical mastery.

  10. If AGIs develop their own goals, how do we keep them aligned with ours? Creating systems that maintain human values even as they evolve beyond human comprehension is the ultimate 'wicked problem.'

These challenges form an interconnected web where progress on any dimension influences all others. Technical solutions without governance frameworks risk misuse; economic adaptations without social infrastructure risk inequality; philosophical understanding without practical application risks irrelevance.

Together, they represent a comprehensive examination of humanity's relationship with its own creations—and ultimately, with itself.

How you can contribute

We face the ultimate transition with AGI, where personal action may determine our collective fate. If you’re reading this, you’re probably in one of four groups: technologist, policymaker, investor, or concerned citizen. No matter which, there are concrete ways you can help.

First: take direct action.

If you want to help build AGI itself, join the frontier AI labs leading the work:

  • Anthropic
  • OpenAI
  • Google DeepMind
  • xAI
  • Thinking Machines
  • Safe Superintelligence

If you want to focus on making sure it stays safe, work with alignment and safety groups:

  • Conjecture
  • Future of Life Institute
  • Pause AI
  • Forethought
  • Alignment Research Center
  • Harmony Intelligence

Or if you can’t work full-time on it, you can invest in or donate to the people who do:

  • Support dedicated AI safety initiatives like Geoff Ralston’s fund or Eric Schmidt’s AI2050
  • The AI Safety Fund
  • Open Philanthropy: Potential Risks from Advanced Artificial Intelligence
  • 80,000 Hours
  • Centre for Effective Altruism

Second: spread the right ideas. When people talk about AI, most are still stuck on "Will it take my job?" That's not the biggest risk. The biggest risk is whether humanity even has jobs—or a civilization—in 100 years.

Another thing to make clear: "Friendly AI" is not automatic. We don’t know how to control something smarter than we are. That should scare us. And finally: don’t just trust the "experts." Many of the people building this stuff don’t know how to guarantee it’s safe. They’re doing their best, but it’s not enough. We all have to get informed and demand better.

If you want a deeper dive into why this is so urgent, I recommend "Orienting to 3-Year AGI Timelines," especially the section on "Prerequisites for Humanity's Survival." The window for meaningful influence narrows with each advance, yet this urgency should lead to focused action rather than despair.

We don’t have much time. But we do have a choice.

Coming up next in VIII. Intelligence on Intelligence: with AI chatter everywhere, where do you find the signal that matters?


Want me to send you new ideas?

If you’re enjoying this deep dive and want to keep up to date with my future writing projects and ideas, subscribe here.

