“User interfaces are largely going to go away,” Eric Schmidt predicts. Agents will generate whatever UI you need on the fly. I built a prototype to explore the premise.
The result: an agentic AI assistant that generates React UIs from scratch, with data flowing between client, server, and LLM. The prototype rests on three ideas:
- **Markdown as protocol** — one stream carrying text, executable code, and data. The LLM already knows how to write it.
- **Streaming execution** — the agent writes and executes code. Each statement executes as soon as it's complete; no waiting for the full response.
- **A `mount()` primitive** — one function that lets the agent create reactive UIs, with data-flow patterns for client-server-LLM communication.
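To make the `mount()` idea concrete, here is a minimal sketch of what such a primitive could look like. This is an illustrative toy, not the prototype's actual API: the `MountRegistry` class, its method names, and the string-rendering shortcut (instead of real React rendering) are all assumptions.

```typescript
// Hypothetical sketch: an id-keyed registry where mount() creates a
// reactive UI region and push() streams new data into it, triggering
// a re-render. The real prototype renders React components instead.
type Renderer<T> = (data: T) => string;

class MountRegistry {
  private renderers = new Map<string, Renderer<unknown>>();
  private output = new Map<string, string>();

  // The agent calls this once to create a UI region.
  mount<T>(id: string, render: Renderer<T>, initial: T): void {
    this.renderers.set(id, render as Renderer<unknown>);
    this.output.set(id, render(initial));
  }

  // Streamed data addressed to `id` re-invokes that region's renderer.
  push<T>(id: string, data: T): void {
    const render = this.renderers.get(id);
    if (render) this.output.set(id, render(data));
  }

  html(id: string): string | undefined {
    return this.output.get(id);
  }
}

const registry = new MountRegistry();
registry.mount(
  "temps",
  (t: number[]) => `<ul>${t.map((x) => `<li>${x}</li>`).join("")}</ul>`,
  []
);
registry.push("temps", [20, 21]);
// registry.html("temps") → "<ul><li>20</li><li>21</li></ul>"
```

The point of the design is the indirection: the agent never holds a reference to the UI, only an id, so later data blocks in the stream can target a region mounted earlier.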
Check out the repo here.
## The Protocol
How do you combine code execution with text and data? All streamed and interleaved in arbitrary order? In a single protocol?
I kept coming back to markdown. LLMs know markdown cold — formatting, code fences, all of it. Why teach them something new?
So I settled on three block types:
| Block | Syntax | Purpose |
| --- | --- | --- |
| Text | Plain markdown formatting | Streams to the user |
| Code fence | `` ```tsx agent.run `` | Executes on the server in a persistent context |
| Data fence | `` ```json agent.data => "id" `` | Streams data into the UI |
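The routing decision reduces to inspecting each fence's info string. A sketch of that dispatch, assuming the two info strings from the table (the function and type names here are illustrative, not the prototype's):

```typescript
// Classify a fenced block by its info string. Anything that isn't one
// of the two recognized fences streams through as plain text.
type Block =
  | { kind: "text"; body: string }
  | { kind: "code"; body: string }
  | { kind: "data"; targetId: string; body: string };

function classify(info: string, body: string): Block {
  const trimmed = info.trim();
  // ```tsx agent.run → execute on the server
  if (trimmed === "tsx agent.run") {
    return { kind: "code", body };
  }
  // ```json agent.data => "id" → stream data into the UI region `id`
  const m = trimmed.match(/^json agent\.data\s*=>\s*"([^"]+)"$/);
  if (m) {
    return { kind: "data", targetId: m[1], body };
  }
  return { kind: "text", body };
}
```

Because the dispatch keys only on info strings, the set of block types can grow without changing the parser: a new capability is just a new fence label.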
Here’s what this might look like:
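One hedged sketch of such a stream, interleaving all three block types (the component name, id, and data are invented for illustration; the fence syntax follows the table above):

````markdown
Here are the latest temperatures:

```tsx agent.run
mount("temps", <TempChart data={[]} />);
```

```json agent.data => "temps"
[{ "city": "Oslo", "temp": 4 }, { "city": "Lisbon", "temp": 18 }]
```

Oslo is the cold one, as usual.
````

Everything — prose, executable code, and data — arrives as one markdown document, in whatever order the agent emits it.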