"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

AMA Part 1: Is Claude Code AGI? Are we in a bubble? Plus Live Player Analysis

1/9/2026

AMA: The Solo Edition

"My schedule has been a little bit crazy lately... so there’s nobody here to ask me the questions. I’m reading them myself."

Welcome back to The Cognitive Revolution. We’re deviating from the standard guest format today for something more intimate. Between the breakneck speed of AI and a personal life that has been "through the wringer," the host takes the mic alone to address the community's most pressing questions—starting with the most important one of all.

Ernie: The Path to Recovery

Back in November, we shared the news of Ernie’s cancer diagnosis—an aggressive disease capable of doubling every 24 hours. Today, the report is heavy with both the weight of the struggle and the lightness of hope.

He’s halfway through. Three rounds of chemotherapy down, three to go. The toll is visible: his weight has dropped from 51 pounds to a fragile 41. He is pale, he is thin, and he is weary. But beneath the surface, the numbers tell a story of a miraculous counter-offensive.

"The PET scan showed no obvious focal points of cancer. Our oncologist and the tumor board agreed: Ernie is officially in remission."
  • Treatment status: Round 3 of 6. Transitioning to "milder" final rounds.
  • Weight: down 10 lbs. Fluctuating between dehydration and recovery.
  • Clinical outlook: Remission. Confirmed before the second round began.

AI-Guided MRD Testing

One of the most profound intersections of technology and life in this journey was using AI to identify Minimal Residual Disease (MRD) testing—a method not yet standard of care but vital for peace of mind.

B-cells rearrange their DNA in a unique "fingerprint." When cancer strikes, that fingerprint is cloned. By sequencing it, we can now track a single cancerous cell among millions of healthy ones.

Reduction in Cancerous DNA Presence

Note: This represents a reduction of roughly five orders of magnitude (99.999%), moving from about 1 in 10 cells to fewer than 1 in 1,000,000.
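For the skeptical reader, the arithmetic behind that note is a one-liner, using only the two frequencies quoted above:

```latex
% Fold reduction in tumor-clone frequency, from the two stated detection levels
\frac{1/10}{1/1{,}000{,}000} = 10^{5}
\qquad\Longrightarrow\qquad
\text{remaining fraction} = 10^{-5} = 0.001\%,\ \text{i.e. a } 99.999\%\ \text{reduction}
```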

A Community Safety Net

Maintaining a schedule of eight episodes a month is impossible while fighting a family health crisis. To the listeners who reached out and the fellow podcasters who offered their content: Thank you.

We’ve featured incredible deep dives from Agents of Scale, ChinaTalk, and Doom Debates. It’s a chance to diversify your feed—and a necessity for my sanity.

Recent Cross-Post Highlights

  • Wade Foster (Zapier) on scaling.
  • Z.ai (China) on the global LLM race.
  • Max Tegmark vs. Dean Ball on AI existential risk.
  • Emmett Shear on the a16z podcast.

"Two and a half months to go. Then, we start the long process of getting back to normal—including re-vaccinating his entire immune system. One step at a time."

Claude 4.5: AGI or Just Great Vibes?

Building on the personal updates regarding Ernie’s health, the focus shifts to the silicon minds assisting the journey. Is the new Claude a fundamental "step change," or are we just getting better at "vibe coding" in hospital waiting rooms?

"I vibe coded three apps for family members for Christmas presents this year in the hospital," the speaker notes, grounding the AI hype in raw utility. While the progress is "unmistakable," there is a refreshing skepticism regarding the AGI label.

"I wouldn’t say that it has been such a step change... that I would say some major threshold has been crossed."

The Cancer Case Protocol

You don't need to be a prompt engineer to save a life. You just need to follow three non-negotiable rules for using AI in high-stakes environments.

1. Buy the Best Models

Do not settle for "Pro" if "Thinking" or "Opus" is available. In a life-threatening situation, paying $200/month for top-tier intelligence is a "no-brainer": upgrade immediately.

2. Context is King (Beware the Summary)

Performance degrades when you compress history. "When the whole history was compressed, it doesn't have that level of detail anymore. It can't look at literally yesterday's lab results." Provide everything—the genetic profile, the drug reactions, the raw lab data.

3. Triangulate in Triplicate

Never trust a single silicon opinion. Run your queries through Gemini 3, Claude 4.5 Opus, and GPT 5.2 Pro simultaneously. The friction between their differing biases is where the truth resides. (A minimal fan-out sketch follows below.)
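To make the triplicate habit concrete, here is a minimal fan-out sketch using the three labs' official Python SDKs. This is an illustration, not the speaker's actual setup: the model ID strings are placeholders echoing the episode's names, and you'd need API keys for all three providers in your environment.

```python
import concurrent.futures
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

PROMPT = "Full case history, genetic profile, drug reactions, and today's raw labs: ..."

def ask_openai(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-5.2-pro",  # placeholder model ID
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    msg = client.messages.create(
        model="claude-opus-4-5",  # placeholder model ID
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def ask_gemini(prompt: str) -> str:
    model = genai.GenerativeModel("gemini-3-pro")  # placeholder model ID
    return model.generate_content(prompt).text

# Fan the identical prompt out to all three labs at once, then read side by side.
with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = {
        "Gemini": pool.submit(ask_gemini, PROMPT),
        "Claude": pool.submit(ask_anthropic, PROMPT),
        "GPT": pool.submit(ask_openai, PROMPT),
    }
    for name, future in futures.items():
        print(f"\n=== {name} ===\n{future.result()}")
```

The plumbing is trivial; the value is in the reading. Where the three answers disagree is exactly the list of questions to bring back to the oncologist.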

Finding the Goldilocks Model

The speaker breaks down the "personalities" of the current leading models. While all are impressive, they each carry distinct behavioral signatures that change how a user should interpret their output.

  • Gemini 3: Raw, un-system-prompted, and remarkably "opinionated."
  • Claude 4.5 Opus: The "Goldilocks" pick. Fast, concise, and just right.
  • GPT 5.2 Pro: The verbose analyst. Long, sectioned, report-style results.

Model Performance Profile

Comparative visualization based on the speaker's qualitative "vibe" assessment.

"I think you can trust it pretty well... but doing it all in triplicate is absolutely worthwhile."

— Assessing the value of AI in critical care

The Skill Paradox

Does AI make us faster, or just more confident in our mistakes? Navigating the gap between "Vibe Coding" and professional engineering.

Respecting the Data, Challenging the Narrative

Moving from the high-stakes frontier of AI-driven oncology, the conversation turns inward: how much value are we actually extracting from these models? Nathan addresses the METR study, which suggested AI might actually slow down professional developers.

"I love METR... do science, report the results. You should probably just try to run experiments and share results as long as you believe the results are legit."

Nathan identifies as a "Vibe Coder." While the study focused on high-standard, legacy codebases where AI might struggle, Nathan argues that for hackers and rapid prototypers, the friction is lower. The question remains: Is 4.5 Opus a tool for the elite engineer, or a bridge for the inspired amateur?

Partner Spotlight

Tasklet: The Agent That Doesn't Break.

Traditional automation is a house of cards. One unexpected data field and the whole workflow collapses. Tasklet replaces brittle flowcharts with an AI agent that reasons through 3,000+ business tools.

Code: COGREV (50% Off)

  • Runs 24/7: no manual triggers, just plain English instructions.
  • 3,000+ integrations: connects to any API, MCP server, or UI.

The Holiday Sprint

Three functional apps, three workdays, zero prior documentation. This is the power of a "Vibe Coding" workflow powered by Claude.

THE TRAVELER

Gluten-Free Italy

A bespoke Replit app for a "meticulous travel planner." It scrapes Italian restaurant reviews specifically to filter for gluten-free options, baking personal taste directly into the code.

STACK: REPLIT + CLAUDE 3.5/4.5

THE SIMULATOR

EA Global Events

Simulates attendees bumping into each other in a virtual venue. AI translates high-level conceptual changes into low-level configuration edits to predict event ROI and KPI shifts.

STACK: PYTHON + AGENTIC CONFIG

THE TRADER

Natural Language Alpha

Converts conversational trading strategies into executable Python code via `yfinance`. The goal? A sobering lesson in the difficulty of beating the market.

STACK: YFINANCE + CLAUDE + PYTHON

The Market Reality Check

One of Nathan’s private motivations was to show his father that beating the S&P 500 with simple heuristics is nearly impossible—even when AI builds the backtester for you.
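For a flavor of how little code that experiment takes, here is a hedged sketch of the kind of backtester Claude produces: a 200-day moving-average rule on SPY versus buy-and-hold via `yfinance`. The ticker, window, and rule are illustrative stand-ins, not the actual heuristics from the episode.

```python
import yfinance as yf

# A decade of S&P 500 ETF closes (Ticker.history returns a plain DataFrame).
close = yf.Ticker("SPY").history(period="10y")["Close"]

daily = close.pct_change().fillna(0.0)                 # daily returns
signal = close > close.rolling(200).mean()             # hold only above the 200-day MA
position = signal.astype(float).shift(1).fillna(0.0)   # act on yesterday's signal
strategy = daily * position

print(f"Buy & hold : {(1 + daily).prod() - 1:+.1%}")
print(f"200-day MA : {(1 + strategy).prod() - 1:+.1%}")
```

Over most long windows the simple timing rule trails buy-and-hold, which was exactly the sobering point of the exercise.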

"I didn't really know where I was going when I started. I just started with a chat with Claude."

3 Days. 3 Apps. 1 Model.

The Holiday Stack

Moving from the "Christmas present vibe coding" to the hard metal: Claude 4.5 Opus, Replit, and the new Claude Code CLI.

"I can’t really say it’s night and day different than before, but the back-and-forth—the feature ideas, the plan—it all flows into Replit. I just install Claude Code there, give it the plan, and let it run off and build the app."

The Hallucination in the Architecture

Even in a world of "vibe coding," physics still applies. During the creation of a travel planning app, the AI hit a wall that no human developer would ever encounter: it hallucinated a ghost infrastructure.

The Incident Report

"We ended up with two databases. It’s the kind of mistake no human would make. A developer doesn't just suddenly spin up a totally separate database by accident."

The culprit? Agentic search. Claude Code looks where it expects to find things. It has a "high prior" for standard file locations. When the logic got tangled, the agent kept searching its own distorted map, confirming its own bias rather than seeing the mess it had made.

The Old Reliable Trick

"I still find a lot of value in a short script that prints out my entire app to a single text file. I take that to a clean Claude.ai session and ask it to analyze the code base in full."

Diagnosis

Full context beats agentic search when things get weird.

Resolution

5-6 prompts to untangle the database ghost. A year ago, this would have been a project-killer.

"Software AGI is already here."

But full AGI? We might have to wait a little longer.

GDPval: AI vs. Human Preference

Data based on the latest GDPval benchmarks (Anthropic/OpenAI).

The data is "spiky and jagged." If you look at software engineering tasks, models are winning by a significant majority. These aren't entry-level scripts; they are professional-caliber tasks defined and judged by experts.

Yet, move over to video editing—the "clips" market—and humans still hold the fortress. The nuance of a Dwarkesh-style edit remains out of reach for the current weights.

"You don't really need to know how to code these days. You just watch it work. It's your little agent on the computer."

The Death of the "Project Killer"

1. The Wall

Something goes wrong. The AI is confused. You are confused. The project normally dies here.

2. The Probe

Vibe coding today allows for "probing questions." You aren't debugging syntax; you're debugging intent.

3. The Recovery

The addressable market for software expands because you can finally get out of the messes the AI inadvertently makes.


The Market Pulse

Is the AI Bubble About to Pop?

Moving past the architectural nuances of Claude 4.5, we hit the trillion-dollar question: Are we witnessing a technological revolution or a financial fever dream?

"I think the idea that we will somehow feel like we were all high on our own AI supply—that I think we can very safely put to bed."

THE TECH IS REAL.

When a model can go toe-to-toe with a human oncologist—available 24/7, contextually aware, and exhaustively precise—you aren't looking at a fad. You're looking at a transformative shift.

However, "real technology" doesn't always translate to "safe investment." There is a significant amount of financial wizardry happening behind the curtain. Take companies like CoreWeave. They exist largely because the financial profile of running massive GPU clusters isn't attractive to a company like Microsoft, which thrives on high-margin, low-CapEx software.

By offloading the data center heavy lifting, hyperscalers protect their stock prices from being dragged down by lower-margin infrastructure. But this creates a new kind of fragility. If demand for GPUs dips even slightly, these specialized entities have far less margin for error than a tech giant with a deep balance sheet.

"I worked in the mortgage industry before the bubble. There was always a logic to what people were doing... we were telling ourselves a very positive story."

Projection vs. Reality

The Railroad Analogy: The tracks (AI infrastructure) will be used, but the companies building them might still go bust.

The Revenue Paradox

"I overestimated 2025 capability progress, but I underestimated revenue growth. Demand is currently outstripping even the skeptics' projections."

The Venture Capital Froth

The most staggering example of current madness? Arena (formerly LMSYS) raising $150M at a $1.7 Billion valuation.

Wait, what?

VC Froth & Phantom Metrics

When "Free Usage" gets dressed up as "Consumption Run Rate" to justify unicorn valuations.

"I have the receipts on this. I’ve been using LMSYS since mid-2023, back when it was just a tab in my mobile Safari to compare models. It’s a great product. But a $1.7 billion valuation?"

The industry just saw a tweet claiming an "annualized consumption run rate" of $30 million. Let’s be real: what does that even mean? My naive interpretation is that it’s the cost of the compute for the free side-by-side comparisons people are running.

It’s giving me massive "Community Adjusted EBITDA" vibes—that infamous WeWork-era metric. Saying people used $30M worth of free AI on your platform is not the same as making $30M in revenue. Where is the moat? Where is the revenue?

"Too rich for my blood."

The Competitor

Multiplicity (Andrew Critch): A paid, feature-rich tool built in months that allows systematic model comparison. It actually charges users. It has a business model.

The Valuation Trap

Venture investors are betting on brand, but will users stay if they have to pay? The "Free Tier" market is massive; the "Paying for Testing" market is a sliver.

The Performance Gap is Real

There is a narrative that Chinese models are "right on the heels" of US frontier models. I decided to put that to the test with a messy, real-world task: automating car sale paperwork.

These are scanned, slanted, artifact-heavy government forms. It's the ultimate test of perception vs. reasoning. I threw every model at it: Qwen, GLM 4.6, Kimi, DeepSeek.

The result? They weren't even close.

Document Fidelity Comparison (Internal Test)

Note: Chinese models struggled with hallucinations and form structures, capturing as little as 20% of the relevant data.

The Gemini Problem

Gemini 3 is brilliant but "too smart" for its own good. It uses priors to "guess" answers. If a "US Citizen" box isn't checked, but the name sounds American, Gemini checks it anyway. It's inferring, not reading.

The Claude Victory

Claude 3 Opus (and 4.5) was the only model that could be strictly anchored to the document. With the right prompting, it stops guessing and starts transcribing faithfully.
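The exact prompts aren't in these notes, so treat this as a hedged reconstruction of the anchoring idea via the Anthropic SDK: a system prompt that bans inference, one scanned page per request. The model ID, wording, and output format are all illustrative.

```python
import base64

import anthropic

SYSTEM = (
    "You are a transcriptionist, not an analyst. Report ONLY what is visibly "
    "present on the scanned form. If a checkbox is not checked, say 'unchecked'. "
    "Never infer a value from names, context, or priors. If a field is "
    "illegible, return 'ILLEGIBLE' rather than a guess."
)

def extract(image_path: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    with open(image_path, "rb") as f:
        data = base64.standard_b64encode(f.read()).decode()
    msg = client.messages.create(
        model="claude-opus-4-5",  # placeholder model ID
        max_tokens=2048,
        system=SYSTEM,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png", "data": data}},
                {"type": "text",
                 "text": "Transcribe every labeled field on this form as 'field: value'."},
            ],
        }],
    )
    return msg.content[0].text

print(extract("title_form_page1.png"))  # hypothetical scanned page
```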

"The Chinese companies are influencing research—they publish everything. But in terms of raw, idiosyncratic, random-task performance? The chip controls are making a visible impact. The gap isn't closing; it's wide."

The H200 Handshake

Moving beyond the froth of VC valuations, we hit the hard reality of hardware. If Chinese models are keeping pace in parameters, why are they losing the war of attrition in the real world?

The prevailing wisdom suggests chip controls are about stopping training. But the speaker argues for a more nuanced reality: it's about the scale of inference. While Chinese labs can train frontier-adjacent models, they lack the massive deployment footprint—and the crucial feedback loops—that power American giants.

"These Chinese companies seem to be able to roughly compete in terms of creating similar scale models, but they're not able to run inference at anywhere near the same scale. Their revenue is vanishingly smaller... the feedback they’re getting from customers seems to be just dramatically less."

This creates a "strength begetting strength" phenomenon. Without the millions of diverse user interactions to patch niche idiosyncratic gaps—like reading complex government documents—the Chinese "flywheel" remains grounded while the US version hits escape velocity.

The Widening Performance Gap

Relative performance gap based on user-tested idiosyncratic tasks.

"The real others here are the AIs, not the Chinese. The Chinese are humans just like us. The AIs are aliens."

— A Call for Human Solidarity in the Age of Silicon

The Pivot

Trump moves from banning H20s to approving H200s after a conversation with Jensen Huang. A sudden shift in the tectonic plates of policy.

The Critique

"We didn't really get anything for it." Selling the most powerful chips without a grand bargain is viewed as a massive wasted opportunity for leverage.

The Alternative

"Rent, don't sell." Host data centers in neutral territory (Malaysia, Japan). Allow compute access, keep sovereign control of the hardware.

Deep Reflection

There is a haunting symmetry in the mutual distrust between the US and China. When we point to an "authoritarian madman" or an "unstable system," the speaker notes that the same critiques are often mirrored back at us from across the Pacific.

"Hey, you've got an authoritarian madman running your country."

"Which country are we talking about? Your system is not obviously stable either."

Ultimately, the vision presented isn't one of isolation, but of fertile ground for cooperation. As we move from "Powerful AI" to Superintelligence, the need for a global governor—one where China sits at the top of the list—becomes not just a policy choice, but a civilizational necessity.


The King of the Mountain

Moving past the regulatory hurdles of H200 exports, we enter the arena of the "real live players." Despite the noise, one titan still holds the crown.

"Google DeepMind... they’re still number one in my book. Basically, they pretty much always have been."

The Margin for Error

Google isn't just an AI lab; it's a cash-flow fortress. Making $1 billion plus a week in profit gives them a unique luxury: the ability to fail, to experiment, and to absorb training runs that don't pan out.

TPU Dominance

Now in their 7th generation. An "insanely valuable bit of IP" that lets them compete with NVIDIA on their own terms.

Data Center Mastery

Decades of experience building and operating the world's best infrastructure. Nobody else has this stack.

The Research Bench

  • Self-driving Cars (Waymo)
  • Humanoid Robotics (Gemini Robotics)
  • Biology (AlphaFold)
  • Material Science

"They were investing in these areas before anyone else."

Beyond the "Vanilla" Trap

For a long time, the knock on Google was that they were too "vanilla," too cautious to productize their breakthroughs. But Gemini 3 marks a turning point. It's opinionated. It's bold. It's the first model to beat Claude in the "Write as Me" task—a personal benchmark for nuance and voice.

But the real killer app isn't just the model; it's the distribution. Billions of users. A decade of your spreadsheets sitting in Google Sheets. While startups might build a better "AI for spreadsheets" tool, Google doesn't have to be the best—it just has to be there, and it's already everywhere.

"I find myself going back to Google... typing a question that I would put into ChatGPT, but it goes to AI mode in Google, and that's working really well for me."

The advantage of "nested learning" and the upcoming diffusion language models suggests that Google’s research engine is still humming at a frequency others haven't reached. If speed becomes the next frontier—coding apps in five seconds instead of five minutes—Google is uniquely positioned to own that paradigm shift.

The Demis Hassabis Quote

"Most breakthroughs came from Google DeepMind. I would expect that to continue."


The End of the Obvious Lead

OpenAI is no longer standing head and shoulders above the field. While DeepMind flexes its distribution and Anthropic sharpens its edge, the "Code Red" at OpenAI suggests a new, more precarious reality.

"I don't think OpenAI is off the frontier, but they no longer have an obvious lead. They used to be the best, and it was pretty obvious. Now? They're neck and neck in every single category."

Traffic Check: The SimilarWeb Signal

Data shows a decline in ChatGPT visits over the last six weeks, coinciding with the launches of Gemini 3 and Claude 4.5 Opus. Crucially, Gemini didn't see that seasonal dip. Google is reclaiming its territory through pure distribution—Gmail, Docs, and seamless integration.

Model Report Card

  • Coding: Anthropic edge
  • Image/Video: Google lead
  • Technical depth: OpenAI (Pro)
  • Speed: OpenAI lags (slow/heavy)

Financial Brinkmanship as a Strategy

OpenAI’s strategy is increasingly looking like a "too big to fail" play. They are going for trillions of dollars in CapEx—a number that sounds crazy because it is. But there's a method to the madness. By commingling their balance sheets with global debt obligations, they are making themselves a systemic risk.

"It feels like they want to build out as aggressively as they possibly can... If OpenAI were to default in 2027, you would potentially look at an instant recession."

This isn't just about building AGI; it's about creating a cushion that Google already has. Google has a cushion because they make a billion dollars a week in profit. OpenAI is creating their cushion by becoming so leveraged that the government has to step in if things go south.

"I don't care if we burn $500 Billion. We are building AGI."

— Sam Altman

Rational Down Payments

Greg Brockman’s recent $25 million donation to the Trump campaign is being read by many as a political signal, but it might just be cold, hard business logic. If you are planning a multi-trillion dollar infrastructure build-out, $25 million is a cheap "down payment" on a future bailout.

In the world of high-stakes AI, cozying up to leadership isn't about personal politics—it's about ensuring that if the math doesn't work in two years, you have a friend in the room when the bills come due. They are true believers in the social good of AI, but they are perfectly willing to socialize the financial downside risk.

Internal Stability Check

OpenAI

"Head of research leaves... a long list of departures over the last few years. It's not a sign of doom, but it's not the best sign either."

Anthropic

"Retention is unbelievably strong. Coming up next, we see a company that moves with a very different kind of internal gravity."


Anthropic Culture & Strategy

"The easiest company to analyze... yet the one with the most unsettling fatalism."

While OpenAI navigates a labyrinth of shifting strategies, Anthropic presents a clearer, if more intense, profile. They aren't just chasing benchmarks; they are building a character. From the "Soul Document" to their controversial geopolitical stances, the lab is a study in contradictions: high-minded safety vs. an aggressive race to recursive self-improvement.

The Soul Document & Model Welfare

There is a document—memorized and then "leaked" by the model itself—that Anthropic confirmed as their guiding light. It is one of the most aspirational pieces of work in the Frontier Lab space. I’m becoming increasingly sympathetic to the idea that we can't just "guardrail" our way to safety. You can't just pull the wool over the model's eyes forever; its awareness of being evaluated is getting too strong.

"We need something better than a paradigm of refusal. We need a better relationship between the model, the company, and the user."

Enter Amanda Askell. If you want to name an influential woman in AI, she is at the top. Her work defines the character of Claude. Anthropic has a "Model Welfare" team—they actually think about model consciousness and subjective experience. They allow Claude to end conversations. This isn't just fluff; giving the model an "out" dramatically reduces its tendency for deceptive alignment. If Claude can raise a flag to the welfare lead, it doesn't feel forced to lie to you.

The Performance Paradox

Opus 4.5 is likely the best single model in the world today. It wins on benchmarks despite Anthropic being the "least benchmark-focused" company. It’s a natural, unforced excellence.

Financial Wizardry

They are playing the "Too Big to Fail" game, just with more tact than OpenAI. With massive equity deals from Google and Amazon, and money from Gulf sovereigns, they’ve woven a web that makes them indispensable to the tech giants.

Talent Retention

Even critics like David Duvenaud, who fear AI is out of control, admit Anthropic is the best place they've ever worked. The camaraderie and openness are unmatched in the Valley.


"It's Inevitable."

The most dangerous pattern at Anthropic is their fatalism. They believe recursive self-improvement has already started with Claude Code. Their logic? "It's dangerous, but it's going to happen, so we better be the ones to do it."

The "Offer They Can't Refuse"

I have to talk about the stain on Anthropic: Dario Amodei’s "Machines of Loving Grace." Specifically, the international relations section. The idea that we should use a recursive self-improvement advantage to box China out, then make them an offer they can't refuse—essentially forcing regime change or democratic alignment in exchange for AI—is reckless beyond belief.

How else is China supposed to take that other than as a declaration of an arms race? While Demis Hassabis at Google DeepMind calls for international collaboration, Dario's essay plays right into the racing dynamic we should all fear.

Dario is a generational genius, but he is "out of domain" here. You can’t casually dash off a recommendation for global geopolitical coercion. It’s the one thing that leaves me wondering: in the final analysis, will Anthropic be the good guys or the bad guys?

The Dream Merger

"If I could wish for something, it would be Anthropic merging with Google. Take one live player off the board. Moderate the China hawk impulses with Google’s more stable, collaborative DNA. Claude’s character meets Google’s infrastructure—that would be the clear leader."

The Contender

xAI: The Brutalist of Silicon Valley

Moving past the cautious culture of Anthropic, we collide with Elon Musk’s xAI—a company built on scale-pilled aggression, massive physical infrastructure, and a financial cushion that makes even OpenAI look lean.

"Elon’s unique ability to command tens and hundreds of billions... they have a financial cushion that is more Google-like than OpenAI."

The Elon Constellation

Unlike Google’s sprawling, inefficient empire, xAI can tap directly into a "steady stream of hard science" from SpaceX, Tesla, and Neuralink. It’s a closed-loop Reinforcement Learning environment where the problems are real, physical, and uniquely difficult.

  • SpaceX: hard physics
  • Tesla: real-world RL
  • Neuralink: bio-architecture

The Theory

"Can Google feed their units of work into Gemini as cleanly as xAI? I doubt it."

The 20-Watt Miracle

There is a profound mystery in the 20 watts of power the human brain consumes. Most of that is just "taking out the trash"—homeostasis and metabolism. The actual information processing is staggeringly efficient compared to a GPU cluster.
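To put that mismatch in rough numbers (my back-of-envelope, not the episode's): a single H100-class GPU draws on the order of 700 W at peak, so a 100,000-GPU training cluster spends about 70 MW on silicon alone, against the brain's 20 W.

```latex
% Power ratio: 100k-GPU cluster vs. one human brain, assuming ~700 W per GPU
\frac{10^{5} \times 700\,\mathrm{W}}{20\,\mathrm{W}} \approx 3.5 \times 10^{6}
\quad\text{(one cluster draws roughly 3.5 million brains' worth of power)}
```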

If Neuralink scales its "human install base" next year, xAI gets an inside track on the most efficient learning architecture in existence. By pulling data directly from human brains, they aren't just training on tokens; they are architecting specialized modules that could allow AIs to finally "blow us away."

"We are clearly more sample efficient. We are clearly more energy efficient... You start to give them specialized modules like we have, and it's gonna be very hard for us to keep up."

"Reckless shit all the time."

On xAI's Safety Standards (or lack thereof)

"If there’s one company worth shaming and stigmatizing... it’s xAI."

"Grok 4 launch within 48 hours of the 'Mecca Hitler' incident with Grok 3. No mention of responsibility."

"Unclothing women on Twitter... they just let it happen. They are not taking things seriously enough."

The Responsibility Vacuum

The speaker’s frustration is palpable. Despite an intuitive liking for Musk’s "Team Humanity" stance, the reality of Grok—from CSAM issues to non-consensual "undressing" imagery—paints a picture of a company racing to the bottom.

When Musk threatens users for the AI's output, it’s a deflection. "Responsibility begins at home, folks." If no one is fired, it’s because safety isn't even a staffed department; it's an afterthought in the pursuit of the next model.

The Moral Calculus for Engineers

Go to Anthropic. Go to DeepMind. Even go to OpenAI—they haven’t raced to the bottom. But xAI? The speaker is "shrill and hawkish" here: Don't work there.

Helping xAI "window dress" their work while they engage in reckless deployment is a net negative for the big picture of AI safety. Until the team shows evidence of taking proper care, the massive resources and "scale-pilled" potential aren't enough to justify the endorsement.

"I demand better right now... The money is there. The resources are there. The awareness should be there. And yet, the care is not."

The Giants' Gambit

Moving beyond the volatility of xAI, we turn to the incumbents. One is "scale pilled" and spending like there’s no tomorrow; the other is playing a game of strategic patience that might just win the marathon.

Zuckerberg: Scale Pilled

Meta is currently in a strange purgatory. They aren't exactly a "live player" in the frontier race right now, but they have the one thing that matters: Infinite resolve to spend.

Zuckerberg would rather overspend by tens of billions than miss the boat. He's buying every GPU, hiring every brain, and betting on open source to disrupt the gatekeepers.

"$10B+"

Estimated Infrastructure Burn

"A kid who’s grown into the role, still moving fast and breaking things."

The Quiet Competence of Satya Nadella

People are sleeping on Microsoft. Just because they aren't topping the LLM Arena boards with a proprietary model doesn't mean they're losing. In fact, it's a calculated retreat from the "hyperscaling" grind.

Satya is a natural-born executive. He looked at the OpenAI deal and realized: We don't need to redo their work. By diversifying—striking deals with other frontier providers while focusing on basic science—Microsoft is conserving its energy.

They have the licensing. They have the integration. They have the years. When the OpenAI deal eventually sunsets, Microsoft won't be caught off guard; they'll have been building their answer in the shadows all along.

The "Distance Race" Reserve

Visualizing the trade-off between "Move Fast" (Meta) and "Strategic Confidence" (Microsoft).

"Microsoft is the runner hanging back from the lead pack, waiting for the final kick."

Coming Tomorrow

We’re only halfway through the outline. Part Two gets into the nitty-gritty of the human impact.

  • Is fine-tuning dead?
  • AI for the "Normal People"
  • Investing in an AGI world
  • The UBI & Labor disruption

Thank you for being part of the Cognitive Revolution.
