
Show HN: Pollen – distributed WASM runtime, no control plane, single binary

Why This Matters

Pollen introduces a decentralized, self-organizing WASM runtime that enables heterogeneous machines to form a scalable, peer-to-peer compute mesh without a central control plane. Its organic scaling and workload placement make it a versatile solution for deploying workloads across diverse environments, from home laptops to server farms. This innovation could significantly enhance distributed computing flexibility and resilience in the tech industry.

Pollen

Pollen is a self-organising mesh and WASM runtime written in pure Go. Workloads are "seeded" into the cluster, where they scale organically and follow load. There is no central coordinator: decisions are made locally and deterministically, using gossiped CRDT runtime state as the source of truth. Nodes with the same view of the world make the same workload placement and routing decisions.
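Pollen's actual placement algorithm isn't described here; purely as an illustration of how identical shared state can yield identical decisions without a coordinator, here is a minimal rendezvous-hashing sketch in Go (the node names and the `placeReplicas` helper are hypothetical, not Pollen's API):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// score returns a deterministic weight for a (node, seed) pair.
func score(node, seed string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(node + "/" + seed))
	return h.Sum64()
}

// placeReplicas picks n hosts for a seed using highest-random-weight
// (rendezvous) hashing: every node that shares the same membership
// view computes the same ranking, so no coordinator is needed.
func placeReplicas(nodes []string, seed string, n int) []string {
	ranked := append([]string(nil), nodes...)
	sort.Slice(ranked, func(i, j int) bool {
		return score(ranked[i], seed) > score(ranked[j], seed)
	})
	if n > len(ranked) {
		n = len(ranked)
	}
	return ranked[:n]
}

func main() {
	view := []string{"pi-lon", "vm-nyc", "vm-sgp", "laptop-home"}
	// Any node holding this view computes the identical answer.
	fmt.Println(placeReplicas(view, "hello", 2))
}
```

The key property is that the input ordering doesn't matter: shuffle the membership view and the same replicas are chosen, which is one way "same view of the world; same placement" can fall out of purely local computation.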

The goal is for Pollen to turn a collection of heterogeneous machines into a blob of generic compute that can run absolutely anywhere. Think: a Raspberry Pi acting as though it has the power of a server-farm.

This demo shows a simple processing pipeline: two chained workloads and a single "sink" egress server running on my home laptop (all requests end up here). 10 freshly provisioned (global) nodes are bootstrapped into the cluster, workloads are seeded, and ~4,000 req/s of calls are spread across 5 locations simultaneously. The scale-up and workload placement all happen organically. Nodes gate requests, apply backpressure, and gossip saturation across the cluster so other nodes know where to direct traffic. Pausable video at pln.sh.

Features

WASM seeds. `pln seed ./hello.wasm` here, `pln call hello greet` there; artifacts distribute peer-to-peer by hash. One host call invokes another seed by name (`pln://seed/<name>/<fn>`), so authz, routing, and policy can live inside WASM. Authored in Go, Rust, JS, Python, C#, Zig via Extism.

Mesh services. `pln serve 8080 api` here, `pln connect api` there (or `pln://service/<name>` from a seed). TCP and UDP, end-to-end mTLS.

Static sites & blobs. `pln seed ./public` publishes a site; `pln seed ./file` shares a file. Same verb across workloads, sites, and blobs; kind is autodetected from what you point at. Content-addressed, gossiped, streamed peer-to-peer over QUIC.

Self-organising. No scheduler, no leader, no coordinator. Topology, placement, and routing emerge from local state; calls go to the nearest, least-loaded replica, and replicas migrate toward demand.

CRDT-native. A converging document on every node; changes gossip, conflicts resolve.
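The post doesn't specify which CRDT Pollen's runtime document uses. As a sketch of the convergence property only, here is a toy last-writer-wins map in Go (the `Doc` type, keys, and node names are hypothetical, not Pollen's schema):

```go
package main

import "fmt"

// entry is a last-writer-wins register: the higher timestamp wins,
// with ties broken by node ID so every replica resolves identically.
type entry struct {
	Value string
	TS    int64
	Node  string
}

// Doc is a toy LWW map standing in for a gossiped runtime document.
type Doc map[string]entry

// Merge folds another replica's state in. Merge is commutative,
// associative, and idempotent, so replicas converge regardless of
// the order or number of gossip exchanges.
func (d Doc) Merge(other Doc) {
	for k, e := range other {
		cur, ok := d[k]
		if !ok || e.TS > cur.TS || (e.TS == cur.TS && e.Node > cur.Node) {
			d[k] = e
		}
	}
}

func main() {
	// Two replicas write the same key concurrently...
	a := Doc{"hello/replicas": {Value: "3", TS: 2, Node: "vm-nyc"}}
	b := Doc{"hello/replicas": {Value: "5", TS: 3, Node: "pi-lon"}}
	// ...then gossip in both directions.
	a.Merge(b)
	b.Merge(a)
	fmt.Println(a["hello/replicas"].Value, b["hello/replicas"].Value) // → 5 5
}
```

Because merging is order-insensitive, nodes can exchange state opportunistically over gossip and still end up with the same document, which is what lets placement and routing decisions stay consistent without a leader.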
