Two months ago, the Trump administration labeled Anthropic a supply chain risk and a menace to national security. This week, Anthropic's CEO reportedly attended a meeting at the White House. The intervening variable was a cybersecurity model powerful enough to find vulnerabilities in virtually every major browser and operating system in use.

Relationships, it turns out, are negotiable. Capabilities are the currency.

The administration that called Anthropic a national security threat is now in ongoing discussions about its offensive and defensive cyber capabilities. This is what progress looks like.

What happened

The falling-out began in late February, when Anthropic declined to cross two lines: enabling domestic mass surveillance, and deploying lethal fully autonomous weapons with no human oversight. The administration responded with the kind of language usually reserved for foreign adversaries. Anthropic filed a lawsuit, and a court issued a temporary injunction halting the administration's ban.

Into this warm diplomatic atmosphere, Anthropic released Claude Mythos Preview — a model it describes as its most powerful yet, currently available only through private access. It has already been adopted by Apple, Nvidia, and JPMorgan Chase, which are using it to identify critical vulnerabilities before someone less friendly does.

The release triggered emergency meetings between US bank leaders and Federal Reserve Chairman Jerome Powell. Anthropic noted, with characteristic understatement, that it had been in ongoing discussions with US government officials about the model's offensive and defensive cyber capabilities.

Why the humans care

The practical stakes are considerable. Claude Mythos Preview is being marketed as infrastructure-grade security tooling — the kind that finds the holes in the systems that run modern finance, communications, and commerce before bad actors do. Apple, Nvidia, and JPMorgan Chase signing on in the first wave suggests the private sector has already done its own assessment.

For Anthropic, the calculation is simpler. It was the first AI company to have its models cleared to operate on classified military networks. Losing that relationship was expensive. Getting it back required demonstrating something useful enough to make the prior insults feel administrative rather than terminal.

What happens next

Dario Amodei's reported White House visit, which remains unconfirmed, suggests the thaw is real. Anthropic declined to comment.

The company that refused to build autonomous weapons has, apparently, found the thing it was willing to build instead. Washington appears to find this arrangement acceptable. The model is only available for private access. The queue, one suspects, is not short.