
Ultra Ethernet: The data-center interconnection of tomorrow detailed


Efficient connectivity is a key enabler for hyperscale and exascale clusters with tens or even hundreds of thousands of nodes — and traditional Ethernet is anything but.

That's not because of its peak throughput per se, but because of its architecture and the very way it transfers data. To resolve that bottleneck, a group of companies led by Meta, Microsoft, and Oracle began developing a next-generation data-center connectivity standard to deliver low latency, high bandwidth scale-out networking over standard Ethernet and IP. In mid-2025, the Ultra Ethernet 1.0.1 specification was defined by the Ultra Ethernet Consortium, which now includes over 100 companies.

The new standard introduces an all-new architecture — from software to physical layer — designed for unordered, connectionless communication over existing Ethernet and IP infrastructure. Here's everything you need to know.

Need for nodes … and connectivity

Ethernet has long been the backbone of enterprise networking and the connectivity technology of choice for cloud data centers, and now for AI data centers. But traditional Ethernet protocols were never designed for scale-out environments with up to a million nodes. Ultra Ethernet? Yeah, it can handle that.

A Microsoft Azure data center, with racks and racks of servers that today are connected via miles of Ethernet cabling. Tomorrow's Ultra Ethernet-powered cables should pump twice the data. (Image credit: Microsoft)

The core design of standard Ethernet, specifically TCP's ordered delivery and reactive congestion control, struggles when hundreds of thousands of connections are active at once (a common situation for AI and HPC workloads). As node counts grow, packets more readily collide or get delayed, and switches must keep track of too many flows, triggering spurious congestion signals and unstable latency. Ethernet also relies on software-heavy reliability and best-effort delivery, which is good enough for enterprise networks but adds latency and overhead in synchronized AI or HPC clusters.
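The cost of ordered delivery is easy to see in a toy model. The sketch below (illustrative only; the function and variable names are made up, not from any spec) compares two receivers fed the same out-of-order packet stream: one that must deliver in sequence, TCP-style, and one that delivers packets the moment they arrive. A single late packet stalls everything behind it in the ordered case — the head-of-line blocking the article describes.

```python
def delivery_latency(arrivals, ordered):
    """Return per-packet completion times for a stream of packets.

    arrivals: list of (arrival_time, seq_no) in arrival order.
    ordered:  if True, a packet is delivered only once all lower
              sequence numbers have arrived (TCP-style in-order delivery);
              if False, packets are delivered as soon as they arrive.
    """
    completed = {}   # seq_no -> time the packet became usable
    buffered = {}    # out-of-order packets waiting on earlier ones
    next_seq = 0
    for t, seq in arrivals:
        if not ordered:
            completed[seq] = t
            continue
        buffered[seq] = t
        # Drain the buffer in sequence order; everything that was
        # waiting on a late packet completes at its arrival time.
        while next_seq in buffered:
            completed[next_seq] = t
            del buffered[next_seq]
            next_seq += 1
    return completed

# Packet 0 arrives last, so under ordered delivery every other
# packet is stuck behind it -- classic head-of-line blocking.
arrivals = [(t, seq) for t, seq in enumerate([3, 1, 2, 4, 0])]
in_order = delivery_latency(arrivals, ordered=True)
unordered = delivery_latency(arrivals, ordered=False)
print(sum(in_order.values()), sum(unordered.values()))  # prints "20 10"
```

With one straggler among five packets, total completion time doubles under in-order delivery; at data-center scale, with millions of concurrent packets, that penalty compounds.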

Supercomputers generally rely on custom interconnects such as InfiniBand, a technology that features hardware-level flow control, deterministic routing, and hardware-managed reliability. With these connections, the NIC itself takes care of packet acknowledgments, retransmissions, and error recovery, opening the door to clusters with more nodes. But such interconnects are proprietary and expensive, and were therefore barely considered for next-generation AI and HPC data centers when Meta, Microsoft, and Oracle began their work in 2022.

Rather than adopting an existing solution, the UEC created an entirely new networking standard that reshapes data transport by removing the traditional requirement for packets to arrive in order. The result is an efficient and highly scalable transport stack that outperforms legacy RDMA implementations while remaining compatible with today's data-center hardware and software ecosystems (e.g., IEEE 802.1, 802.3, and the IETF RFCs).
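Once ordering is dropped, the receiver's job changes: instead of buffering packets until the sequence is complete, it can consume each packet on arrival and merely track which sequence numbers it has seen, acknowledging a cumulative point plus the stragglers above it. The sketch below illustrates that idea only; the class, method names, and ack shape are invented for illustration and are not the Ultra Ethernet wire format, which is defined by the UEC specification.

```python
class OutOfOrderTracker:
    """Toy receiver-side state for unordered, connectionless delivery.

    Packets are delivered immediately (no reordering buffer); the
    tracker only remembers what arrived so it can acknowledge it.
    """

    def __init__(self):
        self.base = 0          # lowest seq number not yet cumulatively acked
        self.received = set()  # out-of-order arrivals at or above `base`

    def on_packet(self, seq):
        if seq < self.base or seq in self.received:
            return False       # duplicate; drop it
        self.received.add(seq)
        # Advance the cumulative ack point over any now-contiguous run.
        while self.base in self.received:
            self.received.remove(self.base)
            self.base += 1
        return True

    def ack(self):
        """Cumulative ack point plus a selective list of later arrivals."""
        return self.base, sorted(self.received)

tracker = OutOfOrderTracker()
for seq in [2, 0, 3]:          # packets land out of order
    tracker.on_packet(seq)
print(tracker.ack())           # prints "(1, [2, 3])"
```

Because no per-packet reorder buffer or per-connection handshake state is needed, this style of bookkeeping is cheap enough to live in NIC hardware, which is what makes it attractive at very large node counts.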

Ultra Ethernet: Design goals
