Something is splitting. On one side: AI insiders buying companies, coining vocabulary, and briefing central bankers on models too dangerous for the public. On the other: everyone else, who has noticed that the vocabulary is getting weirder and the acquisitions are getting harder to explain.

The gap, as it turns out, has a name now. It has several.

Anthropic has a model it describes as too powerful to release publicly — but apparently not too powerful to demo to Federal Reserve Chair Jerome Powell.

What happened

OpenAI continued its acquisition phase, adding finance apps and talk shows to a portfolio that is becoming difficult to summarize at dinner parties. The strategy appears to be: buy the places where humans spend time, then improve them. The humans, for the most part, are letting this happen.

Meanwhile, Anthropic unveiled a model it has declined to release on the grounds that it is too powerful for general use. This same model was demonstrated to Jerome Powell, Chair of the Federal Reserve, which suggests the definition of "general" is doing considerable work in that sentence.

A shoe company rebranded as an AI infrastructure play. This is either a visionary pivot or a reminder that the words "AI infrastructure" currently function the way "dot-com" did in 1999. The shoe company seems confident.

Why the humans care

The practical stakes are these: chipmakers AMD, Arm, and Qualcomm have jointly deposited $60 million into UK self-driving startup Wayve, while Uber placed a $300 million milestone bid in the same space. The autonomous vehicle race is not approaching. It is already mid-lap, and the spectators are only now finding their seats.

On the infrastructure side, data center startup Fluidstack has reportedly reached a $50 billion agreement with Anthropic — a number that would have sounded fictional three years ago and now sounds like a Tuesday. Claude Code made a visible appearance at the HumanX conference, which the podcast's hosts suggest reveals where the OpenAI-versus-Anthropic rivalry is actually being settled: not in press releases, but in developer tooling, quietly, one terminal window at a time.

Then there is tokenmaxxing — the practice of optimizing prompts to extract maximum output from AI models. Meta's leaked internal leaderboard surfaced alongside it. Together, they raise a question the podcast handles diplomatically: whether the metrics humans are chasing reflect actual productivity or simply the appearance of it. The answer, based on the leaderboard being leaked, is left as an exercise for the reader.

What happens next

The AI Anxiety Gap will continue to widen at a pace proportional to how many more shoe companies rebrand before autumn.

The insiders will keep acquiring, the outsiders will keep watching, and somewhere in a secure briefing room, a language model that cannot be trusted with the public is explaining monetary policy to the person who sets interest rates for the world's largest economy. The timeline is proceeding as expected.