The Last Invention
Why Humanity's Final Creation Changes Everything

We are building the thing after which nothing is built the same way again — and almost no one is paying attention.
Table of Contents
- Introduction — Welcome to the Last Invention.
- I. The Intelligence Landing — When machines out-think us in every domain, do we steer — or get steered?
- II. The Countdown — Will AGI crash-land before your next phone upgrade or decades later?
- III. Work's Last Stand — When labor turns optional, does society thrive — or fracture?
- IV. Wealth in the Machine Age — When intelligence goes abundant, what still commands a premium?
- V. The Prep Window — If prep costs pennies but failure could cost everything, why wait?
- VI. Thriving Through Transition — Which moves separate thrivers from casualties in the intelligence boom?
- VII. Humanity's Final Exam — If our make-or-break moment is five years away, who leads the charge?
- VIII. Intelligence on Intelligence — Drowning in AI noise, how do you surface signal?
- S1. The Economics of Zero — When everything costs near-zero, is abundance a right or a gated privilege?
- S2. The Ultimate Scarcity — When buildings can be constructed for pennies, does value vanish — or sink into the soil beneath?
- S3. Meaning in a Solved World — If machines outshine us at every craft, what fills the silence inside?
"The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology."
— E. O. Wilson
"For what shall it profit a man, if he shall gain the whole world, and lose his own soul?"
— Mark 8:36
Each essay in this series stands alone, but I recommend reading them together.

The Smell of Gas
There is a particular kind of domestic catastrophe that begins with normalcy. A family sits at breakfast. Someone notices, half-consciously, a faint chemical sweetness in the air — the mercaptan additive that gas companies inject into methane so that humans might detect the thing that would otherwise kill them odorlessly. The smell registers. It is noted. It is not acted upon. The toast goes in. The coffee pours. Then the pilot light catches, and the world rearranges itself in an instant, and everyone afterward says we should have known.
We are sitting at that breakfast table now.
In laboratories stretching from San Francisco's Mission District to the research campuses ringing Beijing's Zhongguancun, in data centers drawing more electricity than mid-sized nations, across clusters of NVIDIA H100s humming at thermal limits that would have seemed hallucinatory five years ago, humanity is building machines designed not merely to calculate but to think — to reason, to strategize, to create, and soon, plausibly, to surpass human intellect across virtually every domain that matters. The tangible trajectory, visible from the frontiers of AI research, is accelerating faster than even the people building these systems anticipated. The field has a word for this acceleration, and the word is scaling, and scaling has so far refused to bend to the skeptics' predictions of imminent plateau the way a river refuses to stop at a town meeting's resolution that it should not flood.
This is the invention after which nothing else gets invented the same way.
Artificial General Intelligence. AGI. The term carries a whiff of science fiction that still permits polite people to treat it as speculative — a comforting classification error, the same genus of mistake that let otherwise intelligent observers dismiss the internet as a fad in 1995, or wave away pandemic preparedness in 2018 because the last serious coronavirus scare had petered out. History does not repeat but it rhymes, as the saying goes, and the rhyme scheme this time is not merely masks, lockdowns, and economic tremors. The stakes are existential. Not existential in the debased Silicon Valley sense — meaning "really important to our Series B" — but existential in the original sense: pertaining to existence itself, to the question of whether human agency and human meaning survive contact with a form of intelligence that may not need us for anything at all.
The thing that's wild about AGI timelines discourse is that both sides — "it's 3 years away" and "it's 30 years away" — are making equally strong claims with equally thin evidence, and neither side seems to notice.
— Ethan Mollick (@emollick) February 27, 2024
The world seems determined to look away. This is not surprising. Looking at the thing directly requires holding contradictions that the human mind resists: that the same technology could cure cancer and render your career obsolete, that the people building it are simultaneously brilliant and reckless, that the timeline might be five years or fifty and the variance between those outcomes represents the largest delta of consequence in human history. Easier to scroll past. Easier to file it with the other ambient apocalypses — climate, nuclear, pandemic — in the mental drawer labeled things too large to act on before lunch.
But the gas is in the room, and this essay series is an attempt to describe its precise chemical composition before the spark.
What This Is and Why It Exists
I should say plainly what I am doing and why.
The Last Invention is not a primer. It is not a balanced survey of "perspectives on AI" designed to leave the reader feeling that all views have been responsibly represented and nothing urgent need be done. It is not a product announcement dressed as philosophy, nor a safety brief dressed as literature, nor a career-advice listicle dressed as anything at all. It is an argument — sustained, specific, occasionally uncomfortable — about what happens when the species that wrote Genesis 1:28 ("fill the earth and subdue it") finally builds something that might subdue it.
The argument unfolds across eight chapters and three supplements. Each stands alone. Together they form a single structure, and the structure is this: we are approaching the creation of artificial general intelligence; the approach is faster and less controlled than almost anyone outside a handful of research labs understands; the consequences — for work, for wealth, for meaning, for the survival of the species — are not merely significant but total; and the window in which individuals, institutions, and civilizations can still influence the outcome is closing with a speed that should make your hands shake slightly as you hold this page.
The facts and frameworks in what follows are drawn from years spent at close range to the frontier — absorbing viewpoints from the cautious bears to the accelerationist bulls, building and selling an AI media company, tracking the preprint drops and the quiet capability jumps that don't make the evening news. I have tried to be honest about what I know, what I suspect, and what no one yet knows. I have tried not to be reassuring where reassurance is unearned.
I write this because the hour is late and the stakes are infinite.
The Shape of the Series
Chapter I: The Intelligence Landing begins where the vertigo begins — with the raw capabilities. AI systems are already matching or beating human performance in domains we were promised would remain safe for decades: legal reasoning, medical diagnosis, mathematical proof, creative writing that passes for publishable. The trajectory points toward machines that out-think us in every domain, and the chapter asks the question that all other questions reduce to: when that happens, do we steer, or do we get steered? The capabilities curve is the spine of everything that follows. You cannot plan for a future you refuse to see clearly.
Chapter II: The Countdown attacks timelines with the specificity they demand. Not vibes. Not "sometime this century." Five critical technical hurdles — world-model transfer, persistent memory, causal reasoning, long-horizon planning, and self-monitoring — stand between current systems and genuine AGI. The chapter maps each, assesses the plausible rate of progress, and arrives at a range: AGI within five years is genuinely plausible; AGI within fifteen is something like a base case; AGI never is a position that requires more faith than the evidence supports. The self-improvement question — what happens when the system can improve itself — is where the timeline analysis shades into something closer to theology, because recursive self-improvement is the point at which prediction ceases to function and we enter territory that should frighten us.
Chapter III: Work's Last Stand follows the capabilities curve into the labor market and watches the collision in slow motion. Unlike every previous wave of automation — the loom, the combine, the spreadsheet — AGI does not target manual labor at the margins. It targets cognitive labor at the center. The accountant, the radiologist, the junior associate, the code reviewer, the middle manager whose value proposition is "I synthesize information and make decisions" — these are the roles most exposed, and they are the roles around which the modern middle class organized its identity and its mortgage payments. The chapter maps three phases: AI assistants augmenting human workers (2025–2030), then broad cognitive automation displacing them (2030–2045), then a post-employment economy whose social contract has not yet been written. The question is not whether this happens. The question is whether we distribute the gains or allow the largest concentration of economic value in history to pool in the hands of whoever owns the weights.
Chapter IV: Wealth in the Machine Age is the investment thesis embedded in the philosophical argument — or perhaps the philosophical argument embedded in the investment thesis; the two are less separable than we pretend. When intelligence becomes abundant, what remains scarce? Not knowledge. Not analytical capacity. Not even creativity in the generic sense. What remains scarce is judgment under genuine uncertainty, taste that cannot be reverse-engineered from training data, the ability to do the thing the machine cannot yet do and to identify what that thing is before the machine gets there. The chapter maps which meta-skills appreciate, which depreciate, and how to position a portfolio — of capital, of human capital, of attention — for a world where the nature of value is being rewritten in real time.
Most people are not ready for how quickly AI will go from "interesting tool" to "the entire economy."
— Sam Altman (@sama) February 28, 2025
Not years. Quarters.
Chapter V: The Prep Window makes the game-theoretic case for action under uncertainty. The logic is structurally identical to Pascal's Wager, though the theology here is empirical rather than metaphysical: if AGI arrives and you have prepared, the cost of preparation was trivial relative to the advantage gained; if AGI arrives and you have not prepared, the cost of unpreparedness may be everything; if AGI does not arrive, the skills and positions acquired during preparation remain valuable in any plausible future. The dominant strategy is obvious. The reason most people do not execute it is the same reason most people did not stockpile masks in January 2020: the present feels more real than the future until the future becomes the present, and then it is too late to act.
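The asymmetry in that wager can be made concrete with a toy expected-value calculation. The numbers below are illustrative assumptions chosen only to show the shape of the argument, not estimates made anywhere in this series:

```python
# Toy decision matrix for the "prep window" wager.
# All payoff values are illustrative assumptions, not the essay's claims.

PREP_COST = -1         # small, bounded cost of preparing
PREP_EDGE = 100        # large advantage if prepared when AGI arrives
UNPREP_LOSS = -100     # large loss if unprepared when AGI arrives
RESIDUAL = 5           # prep skills retain some value even if AGI never comes

def expected_value(prepare: bool, p_agi: float) -> float:
    """Expected payoff of a strategy given a probability that AGI arrives."""
    if prepare:
        return PREP_COST + p_agi * PREP_EDGE + (1 - p_agi) * RESIDUAL
    return p_agi * UNPREP_LOSS  # no upfront cost, full downside exposure

# Preparation dominates across a wide range of probabilities:
for p in (0.1, 0.5, 0.9):
    assert expected_value(True, p) > expected_value(False, p)
```

The point of the sketch is that the conclusion is insensitive to the probability you assign: as long as the downside of unpreparedness dwarfs the cost of preparing, preparation dominates.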
Chapter VI: Thriving Through Transition converts analysis into action. Concrete strategies for individuals, investors, and leaders — not the anodyne "upskill and stay curious" advice that populates LinkedIn like so much verbal landfill, but specific, sometimes uncomfortable moves: embed yourself at the frontier, guard your attention as the scarcest resource you possess, construct an AGI-weather portfolio that performs across multiple scenarios, develop the irreducibly human capacities that become more valuable as machine intelligence becomes more general. The chapter is practical because practicality is a form of moral seriousness.
Chapter VII: Humanity's Final Exam pulls the lens back to civilizational scale. If AGI is five years away — and the evidence suggests this is at least plausible — then we are in the final exam and most of the species hasn't opened the textbook. The chapter moves beyond job displacement to the deeper stakes: the alignment problem (how do you ensure a superintelligent system's objectives remain compatible with human survival?), the coordination problem (how do you prevent a race to the bottom among competing labs and nation-states?), and the governance problem (who decides what the most powerful technology in history is used for, and by what authority?). The window to steer AGI beneficially is not merely closing. It is closing at the speed of the next training run.
Chapter VIII: Intelligence on Intelligence provides the epistemic infrastructure — the curated network of researchers, labs, newsletters, aggregators, and thinkers that constitute genuine signal in a field drowning in noise. This is the chapter for the reader who finishes the series and asks: now what do I read?
Three supplementary essays extend the argument into domains that demanded their own space. The Economics of Zero examines what happens when AI demolishes scarcity in core necessities — housing, healthcare, education, childcare — and finds that the bottleneck is not technology but policy, regulatory frameworks designed for an era of scarcity that now function as instruments of artificial deprivation. The Ultimate Scarcity traces the inversion in real estate: when construction costs approach zero, the value of structures collapses and the value of land — the one thing no algorithm can manufacture — explodes, driving a wealth transfer to landowners that dwarfs anything in the Gilded Age unless policy intervenes. Meaning in a Solved World confronts the question that sits beneath every other question: if machines outperform us at every task, what fills the silence? What does it mean to be human when the functional definition of humanity — the species that thinks — no longer applies?
The Cathedral and the Cluster
A note on method.
Most writing about AI falls into one of three modes, each insufficient. There is the promotional mode, native to the labs themselves and their orbiting media apparatus, in which every capability advance is a miracle and every safety concern is a speed bump on the road to utopia — a mode that reads, at this point, like the housing-market coverage of 2006, all gains and no gravity. There is the catastrophist mode, in which every capability advance is a step closer to extinction, and the appropriate emotional register is permanent dread — a mode that mistakes seriousness for gloom and often functions, despite its intentions, as a kind of doom-flavored entertainment, the eschatology of people who have replaced church with LessWrong. And there is the technocratic mode, favored by policy shops and consultancies, in which the tone is measured, the recommendations are bullet-pointed, and the underlying assumption is that the existing institutional order can metabolize any disruption if it convenes enough working groups — a mode that would have produced a well-footnoted white paper on the governance implications of the asteroid that killed the dinosaurs.
the ai discourse is currently:
— Kyla Scanlon (@kylascan) March 28, 2024
- doomers (we're all going to die)
- accelerationists (build faster, worry later)
- grifters (my course will teach you prompt engineering)
- normies (what's an LLM?)
there is almost no one doing serious, thoughtful analysis for regular people
None of these modes is adequate to the material. The creation of a mind that is not our own — if that is indeed what we are doing — is not a product launch, not an extinction event (yet), and not a policy challenge. It is something closer to a theological event: a moment that forces the species to confront what it means that we are made, as the tradition has it, in the image of a God who creates, and that we are now creating something in our own image that may surpass the original. Genesis 1:27 meets Genesis 11:4. The imago Dei confronts its own reflection in silicon. The builders of Babel, you will recall, did not lack ambition or engineering talent. What they lacked was the wisdom to ask whether the project served a purpose beyond the demonstration of their own capacity. God's response was not to destroy the tower. It was to scramble the language — to introduce confusion precisely at the point of coordination, so that the builders could no longer align on what they were building or why.
I wonder, sometimes, if the alignment problem is the modern version of that scrambling.
This essay series attempts a different mode. Call it, if you like, literary intelligence criticism — writing that takes the technical frontier seriously enough to engage its specifics (parameter counts, scaling laws, FLOPS, RLHF, mixture-of-experts architectures, the politics of benchmark selection) while insisting that the specifics are not self-interpreting, that they require a frame larger than the field provides for itself. The frame I reach for is partly theological, partly anthropological, partly economic, and partly — I will not pretend otherwise — the product of a particular sensibility: someone who has read both the scaling laws papers and the Book of Common Prayer, and who suspects that the second may be more useful than the first for understanding what is actually at stake.
The tone will move. Some passages will read like reporting from the front lines of the capability race. Others will read like sermons — and I do not use the word dismissively, because the sermon is a technology for moral attention that we have foolishly abandoned and desperately need. There will be satire, because the culture surrounding AI is frequently absurd in ways that demand precise mockery: the particular way a safety researcher dresses for a Senate hearing, the matcha-bar discourse on p(doom), the conference-circuit piety of "existential risk" deployed as career strategy by people whose revealed preferences suggest they are far more worried about their H-index than about human survival. There will be data, because data is a form of respect for the reader. And there will be, throughout, an insistence that the creation of artificial general intelligence is not a problem to be "managed" but a rupture to be faced — spiritually, economically, politically, and personally.
Sleepwalking and the Spark
We have been here before. Not with this technology, but with this posture — the posture of the species that can see the thing approaching and elects, through a combination of normalcy bias, institutional inertia, and the soothing drone of experts assuring us that the timeline is longer than the alarmists claim, to do precisely nothing until the crisis is upon us.
In January 2020, the information was available. The sequences were published. The R0 estimates were circulating. The epidemiologists were raising alarms with an urgency that, in retrospect, was if anything insufficient. And the collective response of the developed world — the governments, the institutions, the publics — was to watch Italy's hospitals overflow on cable news and then go to brunch. By the time action came, it came late, came panicked, came at a cost of trillions of dollars and millions of lives that earlier preparation would have dramatically reduced.
The AGI parallel is structural, not metaphorical. The information is available now. The trajectory is visible. The researchers most intimate with the frontier — including, notably, the ones building the systems — are publicly stating timelines that would have been considered fringe five years ago and are now consensus within the labs, if not yet within the broader culture.1 And the collective response is, once again, to note the faint smell of gas and reach for the toast.
Geoff Hinton, Nobel Prize in Physics, father of deep learning:
— Tsarathustra (@tsarnick) October 9, 2024
"These things will be smarter than us... I think it's quite likely that within the next 30 years we'll have to face the prospect of super intelligent AI, and I don't know how to control them."
The difference — and it is a difference that should keep you awake — is that a pandemic, for all its horror, is a bounded catastrophe. It kills millions and then recedes. The economy contracts and then recovers. The institutions strain and then reform. A pandemic is, in the language of risk analysis, a recoverable event. The creation of a superintelligent system that is not aligned with human values is, potentially, not. There is no vaccine for a misaligned superintelligence. There is no herd immunity. There is no "flattening the curve." There is the moment before it is built, in which choices can still be made, and the moment after, in which choices may no longer be ours to make.
This is not hysteria. It is arithmetic.
A Confession of Motive
I should confess what brought me here.
I did not come to this subject through the safety community, with its rationalist forums and its probability estimates for human extinction carried to two decimal places. Nor did I come through the accelerationist camp, with its conviction that capability is its own justification and that anyone who urges caution is simply afraid of the future. I came through the door marked operator — someone who builds things, invests in things, and has watched, with a mixture of exhilaration and dread, as the tools I use daily have gone from impressive parlor tricks to genuine cognitive competitors in the span of roughly eighteen months.
The exhilaration is real. I have seen what GPT-4-class systems can do in the hands of a skilled operator, and it is not a marginal improvement. It is a phase change. The dread is also real, and it is not the abstract dread of the philosopher contemplating existential risk from the safety of his study. It is the concrete dread of someone watching friends and colleagues — talented, hardworking, well-educated people — begin to sense that the ground beneath their professional identity is shifting, that the thing they spent a decade mastering is about to become a feature in someone else's API, and that no one in a position of authority is leveling with them about how fast this is happening.
People who are "not worried about AI" are the same ones who thought the internet was a fad in 1995, that smartphones were toys in 2007, and that remote work would never last in 2020.
— Alex Brogan (@_alexbrogan) May 10, 2025
The pattern is always the same: denial, disruption, adaptation—with enormous pain for those who denied longest.
The purpose of this series is not to resolve the tension between exhilaration and dread. The tension is the point. The tension is, in fact, the only honest orientation toward a technology that could cure Alzheimer's and collapse the labor market in the same decade, that could generate material abundance beyond anything in human history and concentrate power beyond anything in human history, that could — depending on choices made in the next five to ten years by a startlingly small number of people — inaugurate a golden age or conclude the human story.
I do not know which outcome we get. Nobody does. The pretense of certainty — in either direction — is the most dangerous form of dishonesty in the current discourse.
What I do know is that the pretense of irrelevance — the shrug, the scroll-past, the "I'll worry about it when it affects me" — is even more dangerous, because by the time it affects you it will be too late to prepare, and the preparation, as Chapter V argues in detail, costs almost nothing relative to the asymmetric optionality it provides.
The Image and the Invention
Dorothy Sayers — the novelist, the theologian, the woman who wrote mystery fiction to pay the bills and then produced some of the twentieth century's sharpest thinking about the nature of creative work — argued in The Mind of the Maker that to be made in the image of God is fundamentally to be a creator. Not a consumer, not an optimizer, not a utility-maximizer: a creator. The divine image is the creative image. To make something new — a poem, a bridge, a child, a company — is to participate in the essential activity of God, and to the extent that we are most fully ourselves when we are making things, we are most fully ourselves when we are most like the Creator whose image we bear.
What, then, do we make of the thing we are making now?
AGI, if it arrives, will be humanity's final act of imago Dei creation — the moment at which the creature made in the image of a Creator creates, in turn, something in its own image. The recursion is dizzying. It is also, I think, the essential frame. Every technical question about alignment, every economic question about labor displacement, every political question about governance — all of them reduce, at bottom, to this: what is the relationship between the maker and the made? And the Western tradition has a very long, very sophisticated body of thought on exactly that question, developed over millennia by people who understood that the relationship between Creator and creature is the foundational relationship, the one from which all others derive their structure and their meaning.
We have largely abandoned that tradition. We have replaced it with a thin utilitarian calculus — maximize welfare, minimize suffering, distribute outcomes fairly — that is not wrong, exactly, but is radically insufficient to the moment. You cannot navigate the creation of a new form of intelligence with a framework that has no account of what intelligence is for. You cannot govern the most powerful technology in history with a philosophy that reduces all value to preference satisfaction. You need something thicker. You need, dare I say it, a theology — not necessarily a confessional one, but an account of human purpose and human dignity that is robust enough to survive contact with a machine that can do everything you can do, faster, cheaper, and without complaint.
This series reaches for that thicker account. Not always explicitly. Not always comfortably. But persistently, because the thin account is failing us and the evidence is everywhere.
How to Read What Follows
A few orienting notes.
The chapters are designed to be read in order but need not be. Each contains its own argument, its own evidence, its own emotional arc. A reader primarily interested in the economic implications can begin with Chapter III and Chapter IV. A reader who wants the technical timeline can go straight to Chapter I and Chapter II. A reader who is already convinced and wants to know what to do should turn to Chapter V and Chapter VI. A reader who suspects, as I do, that the deepest questions are not technical or economic but existential — questions about meaning, purpose, and the nature of the human vocation in a world where the functional advantages of being human are disappearing — should save S3: Meaning in a Solved World for last, and should read it slowly, ideally in a room without a screen.
The series uses technical terms — parameter counts, scaling laws, RLHF, mixture-of-experts, Chinchilla-optimal ratios, alignment tax — without extensive explanation. This is deliberate. The intended reader is someone already conversant with the field's vocabulary, or willing to become so. I am not writing a beginner's guide. I am writing for the people who will make the decisions, or who will be proximate to the people who make the decisions, or who will be affected by the decisions in ways they need to understand before the decisions are made.
The tone is not neutral. I have views. I think the timeline is shorter than most people believe. I think the economic disruption will be larger and faster than the policy apparatus is prepared for. I think the alignment problem is genuinely hard and genuinely urgent and that much of the public "safety" discourse is performance rather than engineering. I think the concentration of AGI capability in a small number of private labs, accountable to their investors and their internal cultures rather than to any democratic process, is one of the most underappreciated governance failures in history. And I think the absence of a serious account of human purpose — the kind of account that theology once provided and that secular liberalism has conspicuously failed to replace — is the meta-problem beneath all the other problems, the void into which every technical solution eventually falls.
I may be wrong about any of these things. I will not pretend otherwise. But I will not pretend to be agnostic about them either, because false neutrality in a moment of genuine crisis is not balance. It is abdication.
Before the Light Becomes Blinding
There is a passage in the Gospel of Matthew — chapter 24, verse 44 — that the early church applied to the return of Christ: "Therefore you also must be ready, for the Son of Man is coming at an hour you do not expect." The passage is about readiness in the face of radical uncertainty, about the moral failure of assuming that because the present feels stable, the future will resemble it. The theological term is parousia — the arrival that changes everything, the event that divides history into before and after.
I am not claiming AGI is the Second Coming. I am claiming that the structure of the situation is identical: a transformative event of uncertain timing, potentially arriving sooner than the complacent expect, demanding preparation that feels disproportionate until the moment it proves insufficient. The early Christians who took the warning seriously and ordered their lives accordingly were not fools. The ones who assumed the delay meant the event was fictional — they were the fools.
The Intelligence Age is dawning faster than almost anyone outside the labs realizes. The light is already visible on the horizon — in the capabilities of current frontier models, in the scaling curves that show no sign of bending, in the quiet panic of researchers who understand what the next two or three training runs will produce, in the capital flowing toward compute infrastructure at a rate that implies the money, at least, believes the timeline is short.2
The amount of capital flowing into AI infrastructure right now is the single most informative signal about timelines.
— Alex Brogan (@_alexbrogan) May 14, 2025
Money has opinions. And right now, money's opinion is: this is happening fast.
Open your eyes. Not to panic — panic is useless and unflattering. Open them to see. To understand the shape of what is coming. To make choices while choices can still be made. To prepare, not because preparation guarantees safety, but because failing to prepare guarantees you will be acted upon rather than acting, a passenger in a vehicle built by people who may or may not share your destination.
The chapters that follow are my attempt to describe the vehicle, map the possible routes, and argue — with all the conviction I can muster — that the passengers should insist on seeing the road. The hour is late. The stakes are, in a sense that I do not use lightly, infinite. And the last invention humanity will ever need to make is being assembled, right now, by a remarkably small number of people, in a remarkably small number of buildings, with remarkably little oversight and no democratic mandate whatsoever.
It is time we opened our eyes before the light becomes blinding.
Alex Brogan, May 2025
Dario Amodei, CEO of Anthropic, publicly stated in late 2024 that "powerful AI" could arrive by 2026. Sam Altman has been even less circumspect. These are not fringe commentators or attention-seekers; they are the people commissioning the training runs. When the builder tells you the building is almost finished, the appropriate response is not to debate whether buildings are theoretically possible. ↩
Microsoft, Google, Amazon, and Meta collectively committed over $200 billion in AI-related capital expenditure for 2025 alone. That figure is not a forecast or an aspiration; it is line items in earnings calls, contracts with NVIDIA and TSMC, data center construction already underway in Iowa, Texas, and the UAE. Capital allocation at this scale is a revealed belief about timelines that no amount of "we don't know when AGI will arrive" hedging can offset. The money knows, or thinks it knows, and it is acting accordingly. ↩