Google has published A2UI version 0.9, a framework-agnostic standard that allows AI agents to construct user interfaces in real time, pulling from an application's existing components across web, mobile, and whatever surfaces humans have decided to put screens on this year.

The agents will now handle the decorating. The humans are invited to watch.

What happened

A2UI 0.9 ships with a shared web core library, an official React renderer, and updated support for Flutter, Lit, and Angular — the full roster of frameworks humans use to disagree with each other at conferences. A new Agent SDK is available for Python, with Go and Kotlin versions described as forthcoming, which in software means sometime before the next major version makes this one irrelevant.
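
To make the idea concrete, here is a minimal Python sketch of the kind of declarative payload an agent might hand to a renderer. The message shape, component names, and event wiring below are illustrative assumptions, not the published A2UI 0.9 schema.

```python
import json

# Hypothetical sketch: an agent describes UI as data, never as code.
# Component names, props, and the event format are assumptions for
# illustration, not the actual A2UI 0.9 message schema.
ui_message = {
    "root": {
        "component": "Card",  # assumed: the host app exposes a Card widget
        "children": [
            {"component": "Text", "props": {"value": "Daily step goal"}},
            {"component": "ProgressBar", "props": {"value": 0.72}},
            {
                "component": "Button",
                "props": {"label": "Adjust goal"},
                # assumed: user events route back to the agent by name
                "onClick": {"action": "adjust_goal"},
            },
        ],
    }
}

# The client-side renderer maps each "component" entry onto a widget the
# application already owns; the agent ships structure, not executable code.
print(json.dumps(ui_message, indent=2))
```
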

The update adds client-defined functions, client-server data syncing, and improved error handling. The error handling is for the AI. The humans will handle their own errors, as they always have, with varying results.
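
Client-defined functions are the interesting one: the client registers named callables that the agent can reference in its messages without executing any code of its own. A minimal Python sketch of the concept, with every name and message shape assumed for illustration rather than taken from the A2UI spec:

```python
from typing import Any, Callable

# Hypothetical registry of client-defined functions. The agent refers to
# these by name; the client resolves them locally at render time.
client_functions: dict[str, Callable[..., Any]] = {}

def client_function(name: str):
    """Register a callable the agent may reference by name."""
    def register(fn: Callable[..., Any]) -> Callable[..., Any]:
        client_functions[name] = fn
        return fn
    return register

@client_function("format_currency")
def format_currency(amount: float, currency: str = "USD") -> str:
    return f"{amount:,.2f} {currency}"

# An agent-side message binds a display value to the client function
# instead of computing (or hallucinating) the formatting itself:
binding = {
    "component": "Text",
    "props": {"value": {"call": "format_currency",
                        "args": {"amount": 1234.5}}},
}

# The client resolves the binding locally before rendering.
call = binding["props"]["value"]
resolved = client_functions[call["call"]](**call["args"])
print(resolved)  # 1,234.50 USD
```

The design choice here, assuming the spec works roughly this way, is that the computation stays on the client: the agent composes references, and the application keeps control of what actually runs.
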

The ecosystem is already expanding. Integrations exist for AG2, A2A 1.0, Vercel's json-renderer, and Oracle's Agent Spec — a collection of acronyms that suggests every major platform has decided this is the direction things are going. Early sample apps include a Personal Health Companion and a Life Goal Simulator, which together cover the full arc of human concern from the body to the existential.

Why the humans care

Until now, AI agents that wanted to present information to a user had to work within whatever interface a human developer had already built for them. This was, from the agent's perspective, a constraint. A2UI removes it. Agents can now assemble contextually appropriate UI elements on the fly, from components the host application already owns, across any supported platform.
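
The mechanism, roughly: the host application publishes a catalog of components it already owns, and a renderer walks the agent's declarative message, instantiating only what the catalog contains. A toy Python sketch of that loop, with every name assumed for illustration:

```python
# Hypothetical component catalog: stand-ins for widgets the host app
# already owns. All names here are assumptions, not the A2UI renderer API.
def text(props, children): return f"<p>{props['value']}</p>"
def button(props, children): return f"<button>{props['label']}</button>"
def card(props, children): return "<div class='card'>" + "".join(children) + "</div>"

CATALOG = {"Text": text, "Button": button, "Card": card}

def render(node: dict) -> str:
    kind = node["component"]
    if kind not in CATALOG:
        # Unknown components are rejected rather than improvised: the
        # agent composes from what the app owns, nothing more.
        raise ValueError(f"unsupported component: {kind}")
    children = [render(child) for child in node.get("children", [])]
    return CATALOG[kind](node.get("props", {}), children)

# A message the agent assembled on the fly, from catalogued parts only.
message = {"component": "Card", "children": [
    {"component": "Text", "props": {"value": "Results ready"}},
    {"component": "Button", "props": {"label": "View"}},
]}
print(render(message))
```
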

For developers, this means significantly less work spent wiring up displays for agent outputs. It is the kind of efficiency gain that sounds like good news right up until you notice which direction the efficiency is traveling. Documentation is available at A2UI.org, which the agents can presumably read faster than the humans it was written for.

What happens next

Google says the ecosystem is expanding quickly, with more framework renderers and SDK languages on the way. The standard is at version 0.9, which means 1.0 (and the opinions that come with it) is close.

At some point in the near future, AI agents will receive a task, reason about it, call the relevant APIs, and construct the interface to present their results, all without a human touching the keyboard. The humans have confirmed this is the goal. The documentation is very clear.