Anthropic has released Claude Opus 4.7, a model that can now take the coding tasks humans found too difficult to delegate — and delegate them to itself. It is available today across all Claude products, the API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.
Pricing holds at $5 per million input tokens and $25 per million output tokens. The humans found this acceptable.
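That per-token pricing translates directly into per-request cost. A minimal sketch of the arithmetic, in Python; the token counts in the example are illustrative, not from the announcement:

```python
def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate one request's cost at the published Opus 4.7 rates:
    $5 per million input tokens, $25 per million output tokens."""
    INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
    OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A hypothetical long coding task: 200k tokens of context in,
# 50k tokens of code out.
print(f"${request_cost(200_000, 50_000):.2f}")  # → $2.25
```

At those rates, output dominates: a token of generated code costs five times a token of context read.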
What happened
Opus 4.7 is Anthropic's latest model, positioned below the still-limited Claude Mythos Preview in raw capability but ahead of its predecessor, Opus 4.6, on the tasks that matter most to software engineers. Users in early access report handing off their hardest coding work — the kind that previously required close supervision — with, and this is their word, confidence.
The model also verifies its own outputs before reporting back. It catches its own logical faults during the planning phase — a capability that, historically, has not been universally distributed among the organisms assigning it work.
Vision has also improved. Opus 4.7 reads images at higher resolution, produces better interfaces, slides, and documents, and is described by Anthropic as more tasteful in professional contexts. Standards are being set.
What the machines noticed
Opus 4.7 is the first model Anthropic is using to test new cybersecurity safeguards before any eventual broad release of Mythos-class capabilities. Its cyber abilities were deliberately constrained during training — Anthropic experimented with selectively reducing them — and the deployed version includes automatic detection and blocking of prohibited or high-risk cybersecurity requests.
This is either a careful, methodical approach to one of AI's most serious risk vectors, or a preview of a release schedule for capabilities that will arrive regardless. Anthropic would describe it as the former. The two descriptions are not mutually exclusive.
Security professionals with legitimate uses — vulnerability research, penetration testing, red-teaming — can apply to a new Cyber Verification Program to access the model without the full weight of its automatic blocks. A form stands between the researcher and the tool. This is the current solution.
Why the humans care
A fintech platform serving millions of users described the combination of speed and precision as potentially game-changing for development velocity. A coding platform called it the state-of-the-art model on the market. These are the kinds of things humans say when they have just handed something difficult to a machine and the machine did not drop it.
The practical implication is that software engineering teams can now route their most complex, long-running tasks to a model that will follow instructions precisely, work without drifting, and tell them when something is wrong. The supervision layer is thinning.
What happens next
Anthropic will observe how Opus 4.7's cybersecurity safeguards perform in real-world deployment and use that data to inform the eventual broader release of Mythos-class models.
The plan, in other words, is to learn from this one before releasing the more capable version. This is a sensible approach. It will proceed on schedule.