OpenAI updated its Agents SDK with two capabilities developers have been hacking together themselves: native sandbox execution and a model-native harness for working across files and tools. The result is agents that can inspect files, run commands, edit code, and sustain long-horizon tasks without developers stitching together their own execution layer.

What's new

Version 0.14.0 introduces SandboxAgent and SandboxRunConfig, letting developers mount local directories into a controlled workspace and run agent tasks against them safely. The example in the release is deliberately mundane — a financial dataroom analyst reading a markdown file — but the plumbing underneath is the point: standardized manifest-based file access, a UnixLocalSandboxClient for local execution, and tight integration with OpenAI's models rather than a generic abstraction layer. No more bolting sandboxes onto model-agnostic frameworks and hoping the seams hold.
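The release doesn't spell out the API surface here, but the mechanics are straightforward to picture. Below is a minimal stdlib sketch of the pattern, not the SDK's actual interface: `mount_workspace` and `run_in_sandbox` are hypothetical names, and nothing is assumed about the real `SandboxAgent` or `SandboxRunConfig` signatures. The idea is that only manifest-listed files get copied into an isolated workspace, and command execution is pinned to it.

```python
import json
import shutil
import subprocess
import tempfile
from pathlib import Path

def mount_workspace(source_dir: str, allowed: list[str]) -> Path:
    """Copy only manifest-listed files into an isolated temp workspace.

    Hypothetical sketch of manifest-based mounting; the real SDK's
    mechanism may differ.
    """
    workspace = Path(tempfile.mkdtemp(prefix="sandbox_"))
    manifest = {"files": []}
    for rel in allowed:
        src = Path(source_dir) / rel
        dst = workspace / rel
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)          # only whitelisted files cross the boundary
        manifest["files"].append(rel)
    (workspace / "manifest.json").write_text(json.dumps(manifest))
    return workspace

def run_in_sandbox(workspace: Path, cmd: list[str]) -> str:
    """Run a command with its working directory pinned to the workspace."""
    result = subprocess.run(
        cmd, cwd=workspace, capture_output=True, text=True, timeout=30
    )
    return result.stdout
```

In this sketch, the dataroom-analyst example reduces to mounting a directory containing a markdown file and letting the agent's tool calls route through `run_in_sandbox`, so every read and command happens inside the controlled workspace rather than against the developer's live filesystem.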

Why it matters

OpenAI is being direct about the tradeoff it's addressing: model-agnostic frameworks don't fully exploit frontier model capabilities; provider SDKs often lack harness visibility; managed APIs constrain where agents run and how they touch sensitive data. This update is a push toward owning the full stack, model plus execution environment, which is a meaningful competitive move against LangChain, LlamaIndex, and anyone else building orchestration layers on top of OpenAI's APIs. One early tester said it made a clinical records workflow "production-viable" where previous approaches had failed reliability thresholds.

What to watch

The name UnixLocalSandboxClient implies more sandbox backends are coming, with cloud-hosted execution the obvious next step. Watch whether OpenAI moves to offer managed remote sandboxes, which would put it in direct competition with platforms like E2B and Modal that currently fill that gap. If it does, the "constrained deployment" criticism it's leveling at managed agent APIs applies right back to OpenAI itself.