VI. Thriving Through Transition
Last updated
Which moves separate thrivers from casualties in the intelligence boom? This guide offers concrete strategies for individuals, investors, and business leaders. Steps like embedding at the frontier, guarding attention, and building an AGI-weather portfolio mitigate risks and position you for upside in an uncertain world.
By failing to prepare, you're preparing to fail.
— Benjamin Franklin
Wisdom is prevention.
— Charlie Munger
If you’re enjoying this deep dive and want to keep up to date with my future writing projects and ideas, subscribe here.
History has a pattern worth noticing: when facing potential disasters, we tend to wait for certainty before acting. By the time certainty arrives, it's often too late.
When COVID first appeared in late 2019, experts debated probabilities while the virus silently spread. By the time everyone was certain—when hospitals were overflowing and economies shuttering—the optimal window for preparation had closed. This pattern repeats across centuries. History's disasters share one common element: we waited for certainty before acting, only to discover that certainty arrived simultaneously with catastrophe.
Could current AI scaling trends plateau? Perhaps. Might algorithmic breakthroughs slow to a trickle, with deep learning ultimately becoming just another significant but bounded technology like the internet? Possibly. But the evidence suggests otherwise. The progression from GPT-3 to GPT-4, from DALL-E to Midjourney, from specialized to general capabilities has followed an acceleration curve that points toward something transformative within the decade—not with certainty, but with sufficient probability to demand serious attention.
This creates what we might call the "evidence dilemma." If you wait until you have conclusive proof that an intelligence explosion is imminent—mathematical certainty of recursive self-improvement, unambiguous signs of emergent capabilities beyond human control—you've already waited too long to position yourself optimally. Preparation requires lead time that conclusive evidence, by its nature, won't provide.
The intelligent approach calibrates preparation not to the certainty of the risk but to the magnitude of its consequences—treating low-probability, civilization-scale impacts with the gravity they deserve.
In domains of exponential change, what separates the prepared from the unprepared is the understanding that waiting for perfect foresight is the costliest mistake of all.
The algorithms developing in research labs today will likely determine whether your skills become valuable or obsolete, whether your investments grow or disappear. The strategies below represent a framework for positioning yourself at what might be the most pivotal moment in human history.
AGI presents immense opportunities and serious risks—including job displacement, misuse, and societal disruption.
Instead of treating the future as unknowable chaos, anticipate branches of possibilities. When developments occur, you'll recognize them as variations of scenarios you've already considered, reducing both mental labor and panic. Instead of thinking "Oh no the world is so chaotic!" a more useful frame might be "While I was wrong about specifics, this roughly matches up to one of the branches that I predicted could happen, and I have already thought about how to behave in these circumstances."
Take a year-by-year approach:
2025: Develop understanding of AGI risks/opportunities
2026: Create concrete contingency plans
2027: Implement plans, build flexibility and networks
The mistake is acknowledging AGI intellectually while behaving emotionally as if nothing unusual is happening—nodding sagely about exponential change while making stubbornly linear plans. The future arrives whether we're ready or not. Readiness is a choice.
Position yourself for maximum leverage—in jobs that shape AI development, offer exposure to leading innovations, or build resilience during transition periods. The information asymmetry between those at the frontier and those relying on mainstream sources will likely determine who thrives versus who scrambles to adapt. You can sign up to my Substack, where I'll be sharing regular updates.
Prioritize building relationships with people who understand what's happening. These provide both intelligence and resilience. Remember COVID—weekly confusion, high-stakes decisions, constant adaptation. Now imagine that pattern stretching over years, with no return to "normal." During COVID, knowing people ahead of the curve proved invaluable for navigating uncertainty. The same might apply to an AGI transition, with greater or lesser intensity depending on takeoff scenarios.
The most valuable preparation combines personal understanding with a network of informed contacts. Neither alone is sufficient.
In a world of information abundance (increasingly so), the ability to direct attention deliberately towards deep work, strategic thinking, and meaningful goals becomes a valuable cognitive resource. Further, retaining sovereignty over your own perception – through skepticism, avoiding filter bubbles, and secure information habits – is a radical act of self-preservation.
Human leverage has never been higher, but nor have the demands on attention. This is the era of increasing returns to focus. My concrete steps include:
No social media on my phone
Access to social platforms for only 15 minutes per day (Cold Turkey)
Access to messaging platforms (WhatsApp, Email, iMessage, etc.) for only 1 hour per day (Opal)
Heavily curated information sources with balanced, unbiased perspectives (See Intelligence on Intelligence)
In all scenarios, study how your industry is implementing AI and where the gaps remain. Avoid specialization in easily automatable functions, instead pursuing one of these strategic pathways:
Develop a generalist skillset emphasizing meta-skills—problem-solving, sales, communication, leadership—that transfer across domains and resist automation. Alternatively, combine deep domain expertise with AI literacy, becoming the "AI + [Your Domain]" translator who bridges technical capabilities with practical applications. This T-shaped professional—deep in one area, conversant across many—fills the scarce and valuable role of translating between technical teams and business units. Finally, pursue cross-disciplinary roles—select two converging fields and learn their convergence in a systematic way (e.g., computer science and psychology, or business and design).
If you're already in your field, understand how AI is going to impact it:
What aspects of my work resist AI automation?
Where is value shifting in my industry as AI advances?
Which skills make me AI's complement rather than competitor?
How are leading organizations restructuring around AI?
What overlooked opportunities exist at the intersection of my expertise and AI?
What are the investment dynamics of AI in my industry?
Have there been layoffs yet? Are major firms investing heavily in AI capabilities?
Are there restrictions on the use of AI?
What are the current supply/demand dynamics?
Are early-career positions disappearing faster than senior ones?
Where are venture capitalists placing bets in my industry?
As AI systems become capable of providing answers, human value shifts toward formulating the right questions, defining novel problems, and providing insightful direction. Master prompt engineering—the art of eliciting optimal responses from AI systems through carefully structured inputs. Explore the frontier tools—here is an extended list you can try, with my favourites marked with a checkbox.
Learn to co-pilot with AI in your specific domain rather than competing against it. Programmers should master code assistants, marketers should leverage AI for data analysis and content generation, designers should integrate AI into their creative workflow. By becoming the orchestrator of these tools, you multiply your productivity while maintaining strategic oversight. The person who extracts the best results from AI through good prompts, critical evaluation, and creative direction will replace colleagues who resist adaptation.
And finally, develop AI literacy: appreciation for capabilities and limitations across domains from creative work to analytical tasks.
Advanced AI systems make digital threats worse by creating extremely personal scams that sound exactly like messages from people you trust. They can quickly find security weaknesses in your devices while also making fake videos, voices, and documents that look real enough to fool security checks. These AI tools can spread false information across many websites and apps at once, making it very hard for regular people to tell what's real from what's fake.
The security paradigm that protected you in the past will prove catastrophically inadequate in an AGI environment. Implement comprehensive protection:
Deploy robust password management with 16+ character unique credentials for each service
Enable multi-factor authentication using authenticator apps rather than SMS verification
Secure critical accounts with hardware security keys kept physically protected
Segment your digital identity across separate email addresses and devices for different life domains
Establish verification protocols that confirm unusual requests through secondary channels
Verify information provenance and implement tools that cryptographically authenticate content origins
Shift critical functions to locally controlled systems with offline backups and authentication
Regularly audit your digital footprint, minimizing data sharing and routine information exposure
Develop the habit of pausing before acting on messages that trigger strong emotional responses
Trust your intuition about unusual communications regardless of their apparent source
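For the first item above, a password manager does the work for you, but the underlying idea is simple enough to sketch. Here is a minimal example using Python's standard `secrets` module; the particular alphabet and the 20-character default are illustrative choices, not a standard:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically strong random password."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique credential per service -- never reuse across domains.
pw = generate_password()
print(len(pw))  # 20
```

The point is not the snippet itself but the policy it encodes: long, random, and unique per service, generated rather than remembered.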
As AI technology advances exponentially, maintaining adaptability may become your most valuable asset. Preparing for disruption doesn't mean going off-grid—that's an inadequate protection against AGI risks. But significant AGI-driven change could destabilize existing systems. Having somewhere to retreat outside major cities with several months of supplies provides practical options. Group resilience proved effective during COVID, so build connections with people willing to take these risks seriously.
Accumulate backup plans, diverse skills, multiple income streams, and various exit strategies to reduce vulnerability to disruption.
Geographic positioning deserves consideration. If AGI creates substantial wealth, you want access to that redistribution. The US appears best positioned for AI leadership, though close allies with components in the AI supply chain or military significance will likely participate in benefits. Even countries without direct AI development will gain from scientific advances, affordable AI services, taxation of deployed models, and control of physical resources.
The key insight: keep your options open and be adaptable.
If your plan is “I will do X so I can do Y later,” consider just trying to do Y right now. You should do this anyway: “What might you do to accomplish your 10-year goals in the next 6 months, if you had a gun against your head?” but there’s even more reason to now.
If your plan is “I will work for a company for another two years so I have more ‘credentials’ to make my next move,” consider doing the next move now. By the same token, prioritise things you want to be in place before AGI and avoid those you don’t. If you determine that your role is at high-risk of automation, consider whether that mortgage actually makes sense.
Take care of yourself, both mentally and physically. In a physical sense, if there is an intelligence explosion, life extension could be within reach soon after. It's probably easier to maintain your health than to reverse decline, so you want to be in the best condition possible when the acceleration starts.
If radical life extension becomes possible through future technology, then avoiding accidents today carries far greater importance than in our current paradigm. An avoidable death could forfeit not decades but potentially centuries. The loss calculation fundamentally changes when death means forfeiting not 30 years but 1,000.
Stress-test your life for black swans. Once a year run a “career wipe-out + market freeze” drill. Decide which expenses you’d cut first, which assets you’d liquidate, and how you’d generate cash inside 30 days. Keep 6-to-12 months of living costs parked in high-yield cash or T-bills so the drill isn’t theatre.
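The drill's arithmetic is trivial, but it is worth actually running rather than estimating. A sketch with purely illustrative numbers (your own reserves and post-cut burn rate go here):

```python
def runway_months(liquid_reserves: float, monthly_expenses: float) -> float:
    """Months of expenses covered with zero income."""
    return liquid_reserves / monthly_expenses

# Illustrative figures only, not a recommendation.
reserves = 45_000   # high-yield cash + T-bills
expenses = 4_500    # monthly burn after the drill's expense cuts
months = runway_months(reserves, expenses)
print(months)       # 10.0 -- inside the 6-to-12 month target band
```

If the number falls below six, the drill has done its job: you know which assets to liquidate and which expenses to cut before the real event.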
Own what AGI can’t clone. Over the next decade aim to pick up: a small bullion position, one prime-location micro-property, and one collectible you truly understand (e.g., fine watch, blue-chip art). Think scarce, enduring, low-maintenance—not day-trading.
Stay liquid enough to pounce. Avoid locking more than ~40% of net worth in illiquid deals or long vesting schedules. Cash equals opportunity when AI shocks mis-price assets.
Keep upside with limited downside. Rather than chasing single-stock hype, buy long-dated call spreads or LEAPS on broad AI indices/ETFs. Leverage is capped at the premium; sleep quality stays high.
Pull earnings forward. Treat every year as a funding round for you-inc. Seek cash-plus-equity gigs, launch side products, or consult while the market still rewards human expertise. Shun multi-year credential paths unless the payoff is <18 months.
Lock in cheap debt, shorten bond duration. Switch variable loans to fixed where feasible; if you hold bonds, keep them short-term or floating-rate so rising yields don’t torch your capital.
Maintain a portable life kit. Secure a valid second passport or long-term visa, cloud-backup critical data, set up remote banking and telemedicine, and keep a go-bag with documents. Optionality is resilience.
Rate yourself annually. Can you cover a year of expenses and still have “dry powder” for a once-in-a-decade buy? If not, adjust the mix above.
Many of the great fortunes in history have emerged at the intersection of technological change and financial foresight. The Medicis with banking innovations, John D. Rockefeller with oil infrastructure, the early Microsoft and Apple investors who saw computing's future—each understood that wealth creation occurs when you position capital at civilization-scale inflection points.
AGI represents such an inflection, but with a disturbing asymmetry: while previous inflections unfolded over decades, AGI's wealth concentration potential could manifest in years, permanently dividing investors into those who prepared and those who hesitated.
What follows is a reimagining of investment philosophy for a world where algorithms may soon outthink market participants.
In general, the following applies to a slow AGI takeoff (where the impact plays out over many years — I think the most likely outcome).
Note: None of the following is financial advice; it's just how I'm thinking about things.
The core principles here are:
Hedge against black swan risks through diversification
Invest in assets with intrinsic scarcity (raw materials, semiconductors, prime real estate, energy, proprietary data, and alternative assets) that AGI cannot replicate
Preserve optionality; keep significant cash reserves for emerging opportunities
Retain upside (with your desired risk level) to the AI bull-case
Bring forward your career earnings to account for the increasing risk on those earnings (assume labour share of earnings will go to zero over time)
Position for rising interest rates (short duration bonds, floating rate instruments, avoid long-term debt instruments)
Pay attention and stay flexible!
In many of the most serious cases (AI takeover, x-risk, conflict escalation), there may be nowhere to hide, but considering how to manage downside scenarios is a useful intellectual exercise. Conduct regular "black swan simulations" to test how portfolio companies would respond to extreme AI events, and allocate some percentage of your portfolio to robustness.
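One way to make a "black swan simulation" concrete is to apply assumed scenario shocks to your actual allocation. A toy sketch (both the weights and the shock returns below are made-up illustrations, not forecasts):

```python
# Hypothetical portfolio weights (must sum to 1.0).
portfolio = {"AI equities": 0.40, "Commodities": 0.15,
             "Prime real estate": 0.20, "Cash/T-bills": 0.25}

# Assumed per-asset returns under an "AI bust" scenario (illustrative).
ai_bust = {"AI equities": -0.60, "Commodities": -0.10,
           "Prime real estate": -0.15, "Cash/T-bills": 0.02}

# Weighted sum gives the portfolio-level drawdown for this scenario.
drawdown = sum(w * ai_bust[asset] for asset, w in portfolio.items())
print(f"{drawdown:.1%}")  # -28.0%
```

Running the same loop over several scenarios (AI boom, regulation freeze, conflict escalation) shows which single assumption your portfolio is most exposed to.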
As AGI approaches, an economic inversion occurs: what was previously scarce becomes abundant, while new scarcities emerge. Digital goods, intellectual labor, and software approach zero marginal cost, while physical resources, energy, and irreplicable assets gain extraordinary value. Position your portfolio accordingly:
Raw Materials. Critical minerals and materials become essential as digital abundance paradoxically increases demand for physical substrate. Consider broad exposure through MXI (global) or IYM (US-only) rather than attempting to predict specific commodity winners.
Semiconductors. The physical foundation of computation becomes strategic. SMH offers industry-wide exposure, though remain alert to potential value migration within the supply chain. For more targeted positions, consider AMKR, AOSL, ASML, ASYS, KLAC, MTRN, LSE:SMSN, and TRT—but recognize that individual stock picking carries heightened risk in rapidly evolving sectors.
Proprietary Data Moats. In an intelligence age, unique data that can't be synthetically generated becomes extraordinarily valuable. Seek companies with exclusive, proprietary datasets that provide durable advantages in training specialized AI systems—particularly in domains where synthetic data cannot substitute for real-world information.
Prime Real Estate. The paradox of digital transformation is that it intensifies rather than diminishes the value of exceptional physical locations. Properties in global hub cities (New York, Los Angeles, London, Sydney, Singapore) and intrinsically beautiful locations will likely appreciate even as construction costs decline through automation. These assets derive value from irreplicable characteristics—cultural ecosystems, status signaling, lifestyle amenities—that become more valuable in a world of digital parity. Without dramatic wealth redistribution policies, these assets will likely concentrate value as digital capabilities democratize.
Energy Infrastructure. Intelligence runs on energy—the laws of thermodynamics apply even to artificial minds. Consider strategic positions across the energy value chain:
Data Center Infrastructure: Schneider Electric (SBGSY), Vertiv Holdings (VRT), Eaton Corporation (ETN)
Grid Backbone: NextEra Energy (NEE), American Electric Power (AEP), Quanta Services (PWR)
Storage Solutions: Fluence Energy (FLNC), Stem Inc (STEM), Eos Energy Enterprises (EOSE)
Baseload Generation: NuScale Power (SMR), Constellation Energy (CEG), Ormat Technologies (ORA)
Efficiency Optimization: Analog Devices (ADI), Monolithic Power Systems (MPWR)
Alternative Assets. Allocate a portion of your portfolio to irreplicable human achievements that carry cultural and historical significance:
Fine art from recognized masters (both historical and contemporary)
Limited-edition mechanical watches (Patek Philippe, F.P. Journe)
Wines and spirits from extinct or pre-climate change vineyards
Equity in luxury houses with centuries-old craftsmanship traditions (Hermès, Brunello Cucinelli)
Keep a significant portion of assets in liquid instruments that can be rapidly deployed when market dislocations create asymmetric opportunities. T-bills and money market funds offer temporary sanctuary while you await momentary inefficiencies that AGI-driven market turbulence will inevitably create.
The mathematical challenge of AGI investing lies in capturing extraordinary upside while limiting downside risk in a transition whose timing remains uncertain:
Low Risk Approach. Long-dated call options on broad indices or specific stocks provide leveraged exposure with defined maximum loss. Avoid short-dated options, whose premiums reflect current volatility without capturing long-term value creation; timing near-term price movements is extremely difficult, even with predictable events like the 2020 pandemic. Consider LEAPS (long-term options) with 1+ year expirations, which may also qualify for more favorable tax treatment. Long-dated options provide capital-efficient leverage: substantial equity exposure with minimal upfront investment and non-recourse risk (losses limited to premium paid, protecting other assets from liquidation).
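To see why the downside is capped, it helps to write out the payoff at expiry. A sketch of a long call spread with hypothetical strikes and premium (the $100/$140 strikes and $12 cost are made up for illustration):

```python
def call_spread_payoff(spot: float, long_strike: float,
                       short_strike: float, net_premium: float) -> float:
    """Profit at expiry of a call spread: long the lower strike,
    short the higher one. Max loss = net_premium paid;
    max gain = (short_strike - long_strike) - net_premium."""
    long_leg = max(spot - long_strike, 0.0)    # value of the call you own
    short_leg = -max(spot - short_strike, 0.0) # obligation on the call you sold
    return long_leg + short_leg - net_premium

# Hypothetical $100/$140 spread bought for a $12 net premium.
for spot in (80, 100, 120, 140, 200):
    print(spot, call_spread_payoff(spot, 100, 140, 12))
# Losses never exceed the $12 premium; gains cap at 140 - 100 - 12 = $28.
```

However far the underlying falls, the loss is the premium already paid; no margin call can force liquidation of other assets, which is the non-recourse property described above.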
Moderate Risk Approach. Diversified exposure through technology and AI-focused funds or ETFs balances potential upside with reduced concentration risk.
Higher Risk Approach. Direct investment in companies with core AGI exposure (NVIDIA, Microsoft, Google, TSMC, ASML) for maximum upside potential, balanced with appropriate position sizing.
In a high-growth AI scenario, surging capital demand will likely drive interest rates upward, potentially devaluing existing fixed-rate debt instruments. Maintain shorter duration or floating rate exposure to mitigate this risk. The mirror image applies to personal debt—securing low fixed-rate long-term financing (particularly mortgages) may prove advantageous as rates climb, while variable rate obligations could become burdensome.
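The duration point can be quantified with the standard first-order approximation: percentage price change ≈ minus modified duration times the yield change. A quick sketch (the durations and the 2% rate move are illustrative):

```python
def price_change_pct(modified_duration: float, rate_change: float) -> float:
    """First-order bond price sensitivity: dP/P ~ -D_mod * dy."""
    return -modified_duration * rate_change

# A 2% rate rise under this approximation:
print(price_change_pct(2.0, 0.02))   # -0.04 -> roughly -4% on short-duration bonds
print(price_change_pct(20.0, 0.02))  # -0.4  -> roughly -40% on long-duration bonds
```

The asymmetry is the whole argument: the same rate move that dents a short-duration position by a few percent can wipe out decades of coupon income on a long one.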
Career capital (and future earnings) is typically people’s most important asset. A consequence of AGI is that discount rates (the risk on these future earnings) should be high and you can't necessarily rely on having a long career.
Front-load earnings and don't invest in long-term career capital that takes many years to pay off if you believe AI will significantly transform the job market soon. Avoid anything like a 'tenure track' or other long-term plan that does not pay off for many years.
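To feel the force of the discount-rate argument, compare the present value of the same earnings stream at a "normal" rate versus an AGI-risk-adjusted one. The salary, horizon, and both rates below are purely hypothetical:

```python
def pv_of_earnings(annual: float, years: int, discount_rate: float) -> float:
    """Present value of a flat annual earnings stream, discounted yearly."""
    return sum(annual / (1 + discount_rate) ** t for t in range(1, years + 1))

# Hypothetical $100k/year for 20 years:
low = pv_of_earnings(100_000, 20, 0.03)   # ~$1.49M at a "normal" 3% rate
high = pv_of_earnings(100_000, 20, 0.15)  # ~$0.63M if AGI risk demands 15%
```

Under the higher rate, more than half the stream's value disappears, and most of what remains sits in the first few years, which is exactly the case for front-loading.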
Tolstoy famously opens "Anna Karenina" by observing: "All happy families are alike; each unhappy family is unhappy in its own way." Business is the opposite. All happy companies are different: Each one earns a monopoly by solving a unique problem. All failed companies are the same: They failed to escape competition. — Peter Thiel, Competition is for Losers
Business success requires defensibility. Not just being good at what you do, but having something that keeps competitors from eating your lunch. That's what we call a moat. Just as castle moats kept enemies at bay, economic moats protect a company's profits from competitors who would otherwise replicate their success and drive down margins.
Hamilton Helmer's "7 Powers" framework gives us a useful taxonomy of moats:
Scale Economies: Being big makes you more efficient
Network Effects: Value grows as more people use your product
Counter-Positioning: Your approach makes it painful for incumbents to copy you
Switching Costs: It's a hassle for customers to leave
Brand: People pay more because they trust you
Cornered Resource: You control something essential others can't get
Process Power: You've mastered something others find difficult
What makes these genuine powers? Each combines two essential elements: a benefit (more cash now) and a barrier (competitors can't easily copy it). The strongest businesses usually have multiple, reinforcing moats. The evolution to more advanced AI systems will bring significant changes to the 7 Powers. Understanding these shifts is important for investors and founders alike.
Here’s how I expect moats to evolve over time.
Scale Economies (Overall trend: Transforms)
• Big firms use AI to cut per-unit costs, widening scale edge • Cloud AI lowers entry barriers for small firms, but big players' data & compute scale still dominates
• AGI R&D demands massive data & compute; only tech giants or nations lead, concentrating power • AGI lets small teams match big-firm output, eroding labor-based scale advantage
• AGI-run production is near-zero marginal cost; output scale ceases to be a differentiator • Ubiquitous AGI access allows any actor to scale instantly; traditional scale moats vanish
Network Effects (Overall trend: Strengthens)
• More users = more data = better AI; data network effects reinforce platform leaders • Network effects concentrate power in platforms; each new user improves service, luring others
• AGI links many domains; platforms owning the broadest networks (users, data, partners) enjoy self-reinforcing dominance • Without interoperability, winner-takes-most dynamics intensify; one AGI platform might monopolize global interactions
• Ubiquitous AI connectivity; ideally open networks turn network effects into a shared utility, not a private moat • If not democratized, one unified AI network could control nearly all interactions – an unparalleled monopoly
Counter Positioning (Overall trend: Strengthens)
• AI-native entrants operate lean, undercutting incumbents tied to legacy workforce/models • Incumbents slow to automate (to protect existing business) give agile challengers an opening
• AGI-run startups (tiny teams or AI-only services) outcompete legacy firms still reliant on human labor • Even tech giants are threatened by open-source or public AGIs that undercut proprietary models
• AI-coordinated collectives (e.g. DAOs) outmaneuver rigid corporations; static firms must reinvent or perish • Individuals with AI can launch new ventures instantly; constant new entrants keep markets in flux
Switching Costs (Overall trend: Transforms)
• AI personalization locks in users; switching means losing tailored AI experience • Firms lock customers into proprietary AI ecosystems; no strong data portability rules yet
• AGI assistants manage life & work; switching them means losing years of learned context – a huge deterrent • All-in-one AI suites make exit painful; late-2030s see first laws for data/model portability to ease lock-in
• AI and data are fully portable; users seamlessly switch providers, reclaiming power from platforms • Services become interchangeable utilities, not walled gardens, rendering traditional lock-in moot
Brand (Overall trend: Weakens)
• Brands provide trust amid AI change; known companies seen as safer adopters of new tech • "AI-powered" as a marketing hook is a short-lived differentiator as AI becomes ubiquitous
• Personal AI agents prioritize price and quality over brand image; algorithmic choice erodes brand-led loyalty • Brand matters mainly for luxury or values-based buys; everyday products chosen by AI on merit
• Mass goods are commoditized by AGI; brand names lose relevance for basics (quality is assumed) • Brands shift to culture and values; personal creator and community brands eclipse corporate brands in influence
Cornered Resources (Overall trend: Strengthens)
• Proprietary data and AI tech are hoarded; exclusive access gives firms an edge competitors can't replicate • Scarce AI talent and advanced chips are cornered by top firms/nations, raising entry barriers
• Early AGI is tightly held; those controlling AGI tech or data gain enormous advantage • Control of physical resources (energy, rare minerals) becomes crucial as other inputs are automated
• AGI tech becomes a public utility; exclusive AI ownership fades, shifting advantage to tangible resources • Whoever controls remaining scarcities (energy, raw materials, attention) still wields significant power
Process Power (Overall trend: Obsolete)
• AI-driven workflows improve efficiency; firms adopting AI in processes iterate faster than peers • Bureaucratic incumbents struggle to adapt; nimble firms use AI to refine operations continuously
• AGI optimizes processes; any novel process is quickly learned and copied by competitors' AIs • Process know-how shifts from people to machines; an edge lasts only until others' AI catch up
• AI automates optimal operations everywhere; no firm maintains a unique process advantage • Process improvements become instant and universal; process power as a moat disappears
Invest in founders who are adaptable, fast-moving, voracious learners who understand the opportunities (and threats) of AI. Ravi Gupta, a partner at Sequoia Capital (arguably the best venture capital firm of all time), shared an insight into the types of people Sequoia is looking for on a recent episode of the Invest Like The Best podcast:
The first person I heard say that was Coach K at Duke. He said, "I am not a world-class predictor, but I am a world-class reactor.” And he was referring to it as when college basketball changed from people that stay for four years to people that play for one and then go to the NBA. He said "I couldn't have predicted that that's the way the rules would go. I didn't know. But you know what? Once that was the game on the field, I played it extremely well."
That actually was very helpful for me, Patrick, along with Kohler's quote, because it's very intimidating to think about predicting the future. There are really smart people in Silicon Valley. There are really smart people all over the world. I don't know how to predict the future. I do not have the IQ points for that. There are people that are much brighter than me who can do that. I think the thing that I do have in my control is the ability to respond quickly to new information. I can control that.
I can control being open-minded to seeing new information and then reacting quickly. I can control being averse to change. That is something that, it feels empowering to me and it's something that's in my hands and it's in the team's hands too.
I can help the team, "Guys, let's embrace this change, let's embrace this reality and let's go react to it." I think right now the world is totally awesome for people that can be world-class reactors rather than world-class predictors.
And he continues (paraphrased):
If you're on a board right now, notice how you feel at each company whose board you sit on: do you have the right CEO? If you have the right CEO, you feel amazing right now. If you don't have somebody who wants to embrace this (not out of board fear, but because of their capabilities), or if they don't want to play in this next chapter of the game, you've got to figure that out. In a world of accelerating change, how do you feel about the person leading your business? You probably either feel amazing or you feel nervous.
The world increasingly rewards those who excel at reaction rather than prediction. When evaluating investments, assess leadership through this lens: Do they demonstrate curiosity about AI developments? Do they implement learnings rapidly? Do they maintain strategic flexibility rather than clinging to outdated assumptions? In boardroom contexts, the emotional response to the CEO's adaptive capacity becomes a powerful signal—do you feel confident or concerned about their ability to navigate change?
For professional private investors, competitive advantage comes from cross-pollination of AI implementation insights across portfolio companies:
Establish AI resource hubs with expert advisors, implementation workshops, and specialized talent pipelines accessible to all portfolio companies. Develop secure knowledge-sharing platforms where companies can document use cases, challenges, and solutions without compromising competitive position. Host quarterly implementation roundtables where technical teams share insights and solve common problems collaboratively. Create incentive structures that reward companies actively contributing to the collective intelligence of the portfolio.
The most powerful form of leadership combines precept and example. Document and share your own firm's AI adoption journey, implementing the same tools and processes you recommend to portfolio companies. Host open demonstrations of successful implementations, and temporarily embed your AI specialists within portfolio companies to accelerate knowledge transfer.
The wealth creation potential of AGI will likely exceed any previous technology wave by orders of magnitude. Those who position capital with strategic foresight while maintaining adaptability may participate in the greatest value creation event in human history—provided they recognize that the traditional assumptions (think: modern portfolio theory) governing investment theory may no longer hold.
History remembers business leaders not for managing incremental change, but for navigating phase transitions.
The railroad presidents who electrified their lines survived; those who clung to steam vanished. The retailers who embraced e-commerce thrived; those who dismissed the internet perished.
AGI presents a similar inflection point, but on an accelerated timescale: what happened to retail over twenty years may happen to your industry in twenty months.
The decisions you make in the next 12-36 months will likely determine whether your company emerges as a leader or becomes a case study in technological obsolescence.
Recognize the new table stakes of execution and strategy. Beyond traditional sources of competitive moat (which we've just discussed), the table stakes are shifting dramatically for businesses.
Whereas in the past:
A superior product experience could be a durable source of advantage; that advantage will erode as AI enables rapid replication and improvement of most products
Functional business skills (marketing, finance, etc.) were valuable; they will lose significance as strategic oversight and human-centric aspects gain importance
Going forward:
Relationships with power players, speed, unique distribution hypotheses, effective AI orchestration, and the hoarding of secrets will provide the advantages
Most artificial intelligence initiatives fail not because of technical limitations but because they target superficial tool adoption rather than cultural change. Companies that just augment existing processes with AI tools without reimagining their core operating principles will find themselves outflanked by AI-native competitors.
The cardinal failures of AI adoption include:
Over-indexing on tools while neglecting habit formation
Ignoring the psychological resistance to change
Failing to connect AI usage directly to individual incentives and team outcomes
Implement a comprehensive five-phase change strategy:
Phase 1: Strategic Framing. Make AI adoption central to organizational identity by declaring a public "AI mission" tied directly to company vision and job security. Demonstrate leadership modeling with executives showcasing AI use cases weekly. Establish clear commitment deadlines for baseline fluency across the organization, and form a cross-functional futures team to monitor technological developments.
Phase 2: Incentives & Infrastructure. Create visibility and rewards that reinforce AI-first behaviors through gamified dashboards tracking usage and time saved. Implement micro-bonuses ($250–$1,000) for high-impact use cases. Build an internal agent library where teams can share and modify AI workflows. Establish a skill progression ladder with certifications and recognition, and feature an "AI Hall of Fame" highlighting breakthrough applications.
Phase 3: Enablement & Onboarding. Reduce adoption friction by developing role-specific AI playbooks, establishing buddy systems pairing early adopters with more reluctant team members, conducting focused one-week AI sprints on applied projects, and mapping workflows for potential AI enhancement.
Phase 4: Accountability & Enforcement. Transform AI from optional to mandatory by incorporating usage metrics into performance reviews, requiring AI leverage cases for new headcount or resource requests, mandating AI components in all new proposals, and implementing a rigorous build-vs-buy evaluation framework.
Phase 5: Feedback & Learning Flywheel. Build compounding returns through monthly AI retrospectives where teams share victories and failures, continuously evolve your agent library based on implementation learnings, and conduct regular surveys to identify adoption gaps and learning opportunities.
Your objective transcends mere tool adoption—it requires becoming an AI-native organization where intelligence augmentation permeates every aspect of operations and decision-making.
Automation functions will become one of the highest-leverage points of future organizations. Neglecting them significantly inhibits a company's future ability to compete. Here is what I recommend for getting started:
Define a focused mission to identify high-leverage workflows for AI/RPA/LLM augmentation, with scope encompassing internal tooling, agent orchestration, API integration, and standard operating procedure automation.
Appoint a lean, cross-functional unit including:
Head of Automation (product management/operations background)
AI-savvy engineer with Python and major cloud platform expertise
Business analyst skilled in workflow mapping and optimization
Rotating subject matter experts from key departments
Create a streamlined intake system—either through forms or conversational interfaces—that captures task descriptions, time investments, systems involved, and desired outcomes. Establish a weekly sprint rhythm that prioritizes requests through impact-effort scoring, delivers 1-2 high-value automations weekly, and showcases successful implementations through company-wide demonstrations. Track impact rigorously through dashboards measuring hours saved, automation coverage percentage, and team-level adoption, linking these metrics directly to departmental objectives.
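As a sketch of the impact-effort scoring described above, here is a minimal prioritizer in Python. The request fields, the example numbers, and the raw impact/effort ratio are all illustrative assumptions, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class AutomationRequest:
    """One intake-form submission (hypothetical fields)."""
    name: str
    hours_saved_per_week: float  # estimated impact
    build_effort_days: float     # estimated implementation effort

def priority_score(req: AutomationRequest) -> float:
    """Simple impact/effort ratio: weekly hours saved per build day."""
    return req.hours_saved_per_week / max(req.build_effort_days, 0.5)

# Hypothetical intake queue for one weekly sprint.
requests = [
    AutomationRequest("Invoice triage", hours_saved_per_week=10, build_effort_days=2),
    AutomationRequest("Meeting summaries", hours_saved_per_week=4, build_effort_days=1),
    AutomationRequest("Contract review", hours_saved_per_week=12, build_effort_days=15),
]

# Deliver the top 1-2 automations each weekly sprint.
sprint_backlog = sorted(requests, key=priority_score, reverse=True)[:2]
for req in sprint_backlog:
    print(f"{req.name}: score {priority_score(req):.1f}")
```

A ratio like this is deliberately crude; the point is to make prioritization explicit and repeatable rather than precise.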
There is a famous saying that you can't manage what you don't measure. Similarly, when it comes to implementing AI effectively, you cannot implement what you don't know about.
Dedicate resources to exploring cutting-edge developments through Google's 601 real-world generative AI use cases and NVIDIA's technical presentations. Regularly evaluate new agent platforms, no-code tools, and middleware that could accelerate implementation within your organization.
This cannot be delegated—it requires direct leadership engagement to gain maximum leverage from AI tools.
The pace of AI development demands accelerated procurement. Establish systematic decision criteria based on:
Deployment urgency (less than 30 days favors buying)
Workflow uniqueness (standardized processes favor buying, proprietary workflows favor building)
Internal capabilities (limited ML expertise favors buying, strong API orchestration skills favor building)
Cost structures (subscription affordability versus development investment)
Customization requirements (minimal control needs favor buying, deep integration requirements favor building)
Compliance and data sensitivity (regulated industries with strict data controls generally favor building)
Implement a process that begins with a comprehensive tool inventory, conducts 48-hour proof-of-value testing comparing off-the-shelf and internal solutions, scores options on a 1-5 scale across the key criteria, and shares implementation learnings through an internal knowledge registry.
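The 1-5 scoring step can be sketched as a weighted scorecard over the six criteria above. The weights and the midpoint decision threshold are illustrative assumptions; calibrate them to your own context rather than treating them as a standard.

```python
# Weighted build-vs-buy scorecard over the criteria listed above.
# Each criterion is scored 1-5, where higher favors building in-house.
# The weights below are illustrative assumptions, not a standard.
CRITERIA_WEIGHTS = {
    "deployment_urgency": 0.20,   # low urgency -> higher build score
    "workflow_uniqueness": 0.20,  # proprietary workflow -> higher
    "internal_capability": 0.20,  # strong API/ML skills -> higher
    "cost_structure": 0.15,       # cheap to develop in-house -> higher
    "customization_needs": 0.15,  # deep integration needs -> higher
    "data_sensitivity": 0.10,     # strict data controls -> higher
}

def build_vs_buy(scores: dict[str, int]) -> str:
    """Return 'build' or 'buy' from 1-5 scores on each criterion."""
    assert set(scores) == set(CRITERIA_WEIGHTS), "score every criterion"
    weighted = sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())
    # Midpoint of the 1-5 scale is 3.0; above it, building wins.
    return "build" if weighted > 3.0 else "buy"

# Example: a regulated-industry workflow with strong internal engineering.
decision = build_vs_buy({
    "deployment_urgency": 2,
    "workflow_uniqueness": 5,
    "internal_capability": 4,
    "cost_structure": 3,
    "customization_needs": 4,
    "data_sensitivity": 5,
})
print(decision)
```

The value of a scorecard like this is less the arithmetic than forcing every criterion to be scored explicitly before the 48-hour proof-of-value test.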
Nearly 90% of enterprise information remains trapped in unstructured formats—emails, documents, messages, transcripts—inaccessible today but potentially transformative when parsed by AI models. Transforming this data into something usable lets companies get maximum leverage from LLMs. This will become table stakes in the near future.
Begin by mapping key data repositories across all communication channels and document stores. Implement centralized tools to compile and structure this information, segment it by relevant use cases, clean and annotate for improved utility, and integrate with retrieval-augmented generation pipelines that enable intelligent interaction with organizational knowledge.
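The retrieval end of such a pipeline can be illustrated with a stdlib-only sketch. Production retrieval-augmented generation uses vector embeddings and a vector store; this hypothetical keyword-overlap retriever only shows the shape of indexing cleaned chunks and pulling relevant context into a prompt.

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase and split text into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

# Hypothetical cleaned chunks drawn from emails, documents, and transcripts.
chunks = [
    "Q3 sales review: pipeline grew 18% after the pricing change.",
    "IT policy: all vendor contracts require a security review.",
    "Onboarding transcript: new hires ask most about benefits enrollment.",
]

# Index each chunk as a bag of words.
index = [Counter(tokenize(c)) for c in chunks]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k chunks sharing the most terms with the query."""
    q = Counter(tokenize(query))
    scored = sorted(
        range(len(chunks)),
        key=lambda i: sum((q & index[i]).values()),  # multiset overlap
        reverse=True,
    )
    return [chunks[i] for i in scored[:k]]

# Retrieved chunks would be prepended to an LLM prompt as context.
print(retrieve("What does our policy say about vendor security?"))
```

Swapping the bag-of-words index for embeddings changes the scoring function, not the overall shape: clean, segment, index, retrieve, then feed the result to the model.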
There's a huge information asymmetry between the technological frontier and the rest of the world. As a leader, your job is to close this gap. You can do so by attending leading conferences.
AI is reshaping companies from the ground up. Hiring will quickly become the new technical debt:
Expect a two‑phase transition:
Near term (basic AI adoption). Leaner, tighter hierarchies and heavier managerial load at the top.
Later (advanced AI + abundant compute). Explosive firm growth, flatter networks, and large AI‑centred spans of control.
First, AI absorbs the operational layer: routine analysis, scheduling, and operational decisions now run on always-on AI teams. Managers oversee hybrid human-AI units, maintaining oversight while radically boosting productivity. The pyramid compresses into a tighter structure: fewer human roles, but more strategic ones.
Then AI augments leadership. Executives use AGI co-pilots for strategic planning, resource allocation, and cross-functional coordination—deciding faster while maintaining human judgment. Organizations evolve into lean, AI-powered teams, where humans focus on high-value decisions, relationships, and oversight.
The result? Smarter, flatter companies where humans and AI collaborate at every level—with people firmly in control.
An increasing number of public company CEOs are publishing staff memos urging employees to take up AI usage. Tobi Lutke, CEO of $100bn Shopify, recently shared a memo outlining new internal AI usage expectations.
As many business arenas become more competitive, I expect those who can effectively leverage the best of AI to get further and further ahead. Using AI is no longer optional. For those who stand still, stagnation is almost certain, and stagnation is slow-motion failure. If you're not climbing, you're sliding.
As AGI compresses software innovation cycles from years to months or even weeks, competitive advantage will shift decisively to assets that resist overnight replication—hard-to-replicate supply chains, capital-intensive infrastructure, proprietary data repositories, and deeply embedded workflow systems that competitors cannot easily reproduce.
Focus strategic investment on:
Vertical integration in physical-world domains that remain difficult to virtualize
Tangible infrastructure with high capital requirements that create implementation barriers
Vertical software platforms that own the complete customer journey rather than point solutions
Complex industry applications (healthcare, legal, finance) combined with unique distribution channels and proprietary data
"Boring" but resilient sectors (waste management, HVAC, industrial maintenance) whose essential nature persists regardless of technological disruption
The leadership decisions you make in the coming months will likely determine your organization's relevance for decades to come.
This is not hyperbole but the logical consequence of AI's transformation of value creation. The window for positioning is open now but narrowing with each passing month.
Could machines rapidly improve themselves? Yudkowsky's paper makes the case for explosive self-improvement, though economists tracking historical tech diffusion remain skeptical of his aggressive timeline.
Perfect data never arrives in time. Core documents how overconfident COVID models catastrophically underestimated viral spread—a lesson we'd better learn before AGI arrives.
The paperclip maximizer thought experiment changed everything. Bostrom's scenarios, while theoretical, shaped serious policy discussions at the highest levels.
$25 million gone in a single deepfake call. That Hong Kong bank transfer is just the beginning according to Schneier's analysis of AI-powered scams that exploit our psychological vulnerabilities at unprecedented scale.
Remember communities sharing ventilators during COVID? MacAskill argues these networks foreshadow what we'll need during potential AGI disruptions.
The AI wealth map is emerging—US tech hubs first, chip-producing allies like Taiwan second, with smaller nations potentially taxing AI firms to fund UBI programs according to Acemoglu's framework.
Your current health might be a high-stakes bet on future technology. De Grey believes AI-accelerated biotech could halt cellular aging within decades.
Nanobots repairing cells? Kurzweil's vision, however optimistic, has inspired billions in biotech R&D funding.
The 2023 AI chip shortage blindsided everyone—exactly Taleb's point about our blindness to rare, high-impact events.
No algorithm can replicate Central Park's pull. Glaeser shows why prime real estate holds value even in an AI world: scarcity trumps technology.
Data centers soon consuming as much electricity as small nations? IEA projections suggest AI's energy appetite could double power demand by 2030.
"What you don't see, you can't measure. What isn't measured doesn't exist." Edelman's insight, born from civil rights work, explains why visibility drives action—whether in racial justice or AI adoption.
Junior roles vanishing first, managers juggling hybrid teams. Early adopters already report 30% productivity gains from AI copilots according to Drago's research.
Now we face a decision point of incomparably greater magnitude with AGI. The preparation calculus here creates an asymmetry: the cost of preparing for an intelligence explosion that never materializes is relatively small, but the cost of failing to prepare for one that does happen could be civilization-ending. This makes AGI preparation a uniquely imbalanced bet—one where the mathematics of expected value overwhelmingly favor action despite uncertainty.
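The asymmetry is easy to make concrete with back-of-envelope expected-value arithmetic. Every number below is an illustrative assumption chosen only to show the shape of the calculation, not an estimate of the real probabilities or costs.

```python
# Illustrative expected-value comparison (all numbers are assumptions).
p_agi = 0.3              # assumed probability of transformative AGI this decade
cost_prepare = 1.0       # normalized cost of preparing (time, capital, attention)
loss_unprepared = 100.0  # normalized loss if AGI arrives and you are unprepared

# Expected cost of each strategy:
ev_prepare = cost_prepare            # paid whether or not AGI arrives
ev_ignore = p_agi * loss_unprepared  # pay nothing unless AGI arrives

print(f"prepare: {ev_prepare:.1f}, ignore: {ev_ignore:.1f}")

# Even at much lower probabilities, ignoring stays the worse bet:
break_even_p = cost_prepare / loss_unprepared
print(f"preparation pays for itself above p = {break_even_p:.2%}")
```

Under these assumed numbers, preparation beats inaction for any probability above 1%; the precise figures matter far less than the hundred-fold gap between the two losses.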