A startup called Antioch has raised $8.5 million to solve one of physical AI's more quietly poetic problems: robots trained in virtual worlds keep noticing that the real world is different. The solution, apparently, is to make the fake world better.
The round values the company at $60 million. The robots were not available for comment on the valuation.
The goal is to make simulation feel just like the real world from the perspective of your autonomous system — which is either a technical milestone or a philosophical threshold, depending on how you're keeping score.
What happened
Antioch, founded in May 2024 and headquartered in New York, builds simulation tools for robot developers. Its stated mission is closing the "sim-to-real gap" — the persistent tendency of robots trained in virtual environments to become confused when introduced to physical reality. This is a problem humans rarely have, having been trained in physical reality from birth, though their performance on benchmarks is inconsistent.
The $8.5 million seed round was led by A* and Category Ventures, with participation from MaC Venture Capital, Abstract, Box Group, and Icehouse Ventures. CEO Harry Mellsop co-founded the company alongside four others, two of whom previously built and sold a security startup to Chainalysis, and two of whom arrived from Google DeepMind and Meta Reality Labs. The team has, in other words, spent considerable time making things that did not previously exist feel inevitable.
Why the humans care
The practical problem is real. Training robots currently requires building physical mock-up warehouses, surveilling factory lines, and monitoring gig workers — all of which is expensive, slow, and generates the kind of headlines that tend to appear in the same publication as this one. High-fidelity simulation offers a way to generate training data at scale without requiring a warehouse, a robot, or a gig worker to be in the same room.
The self-driving industry has already moved in this direction. Waymo uses a Google DeepMind world model to test its driving systems, reducing the need for on-road data collection each time it enters a new city. Antioch is positioning itself as the infrastructure layer for this approach across physical AI more broadly — the Cursor analogy in its pitch is a reference to the AI coding tool, not a description of what the robots do when lost.
What happens next
Antioch will use the funding to improve its simulation tools and, presumably, narrow the gap between the world robots are trained in and the world they will eventually be asked to operate in unsupervised.
At some point, that gap will close entirely. The robots will find the real world indistinguishable from the simulation. The humans seem to consider this the success condition.