
All your agents are going async

Why This Matters

The shift to asynchronous agents marks a significant evolution in AI technology, enabling background operations, integrations, and automation that go beyond traditional synchronous chat interfaces. This transformation enhances productivity and flexibility for both developers and users, paving the way for more autonomous and scalable AI workflows in the industry.


Agents used to be a thing you talked to synchronously. Now they’re a thing that runs in the background while you work. When you make that change, the transport breaks.

For most of the time LLMs have been around, you’ve used them by opening a chat-style window and typing a prompt. The LLM streams the response back token by token. That’s how ChatGPT, claude.ai, and Claude Code work, and it’s how the demos work for basically every AI SDK or library. It’s easy to think that LLM chatbots are the ‘art of the possible’ for AI right now. But that’s not the case.

Instead, all your agents are going async. Agents are getting cron jobs, webhook support, WhatsApp integrations, ‘remote control’ from your phone, and scheduled tasks and routines. Agents are becoming something that runs in the background, working while you work, and reporting back results async. Agents are getting workflows in Temporal, Vercel WDK, Relay.app, etc. A human sitting at a terminal or webchat is just one mode now, and increasingly it’s not the interesting one. The interesting thing is what agents can do while not being synchronously supervised by a human.
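
The pattern described above can be sketched in a few lines: kick off an agent run, return immediately, and deliver the result later through a callback. Everything here is illustrative — in production the notify step would be a webhook POST, a chat message, or a push notification, not an in-process queue.

```python
# Hypothetical sketch of the async agent pattern: the caller does not wait
# on a live connection; results are reported back through a callback.
import threading
import queue

results = queue.Queue()  # stand-in for a webhook endpoint or chat channel

def agent_task(job_id, prompt, notify):
    # Stand-in for a long-running agent run; no human is watching this.
    answer = f"result for {prompt!r}"
    notify(job_id, answer)  # report back async, not over the original request

def start_agent(job_id, prompt):
    # Fire and forget: the "transport" for the answer is the notify callback.
    t = threading.Thread(
        target=agent_task,
        args=(job_id, prompt, lambda j, a: results.put((j, a))),
    )
    t.start()
    return t

t = start_agent("job-1", "summarize inbox")
t.join()
job, answer = results.get()
print(job, answer)
```

The key design point is that the caller and the result delivery are decoupled: the agent could equally have been started by a cron trigger or a webhook, with nobody connected at all when it finishes.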

The problem is that chatbots are built primarily on HTTP: an HTTP request carrying the prompt, and an SSE stream of LLM-generated tokens back on the HTTP response. That doesn’t work when the agent is running async. There’s no HTTP connection to stream the response back.
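
For concreteness, here is a minimal sketch of that synchronous transport — one request in, one Server-Sent Events stream of tokens out. The handler name and the token framing are illustrative, not any particular vendor’s API.

```python
# Minimal sketch of the synchronous chat transport: the token stream only
# exists for the lifetime of a single HTTP response.
def sse_events(tokens):
    """Frame each generated token as a Server-Sent Events message."""
    for tok in tokens:
        yield f"data: {tok}\n\n"
    yield "data: [DONE]\n\n"

def handle_chat_request(prompt):
    # A real server would call the LLM here; we fake the token output.
    tokens = ["All", " your", " agents"]
    # If the agent keeps working after this response ends, or starts later
    # from a cron job or webhook, there is no connection left to write to.
    return "".join(sse_events(tokens))

body = handle_chat_request("finish this sentence")
print(body)
```

The whole exchange lives inside one request/response pair, which is exactly what breaks once the agent outlives the connection.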

OpenClaw’s async step

OpenClaw took a big step towards async agents by showing people that an agent could live in your WhatsApp chat. The agent could travel around with you, and could work on stuff in the background. OpenClaw showed that you didn’t have to be glued to your browser or terminal to get AI to do work for you.

Anthropic’s direct response to the OpenClaw model is Channels, which is MCP-based and lets you push messages async from an external chat system into a Claude Code session. But it also has /loop and /schedule slash commands, as well as Routines, all of which let you schedule and run agents in the background. Anthropic also has Remote Control, which lets you continue a Claude Code session from your phone or another browser.

ChatGPT has scheduled tasks which trigger agents async, that can reach out to you if needed.

Cursor has background agents that run in the background in the cloud.
