
AI Data Centers Demand More Than Copper Can Deliver


Summary

In data-center terms, scaling out involves linking computers, while scaling up packs more GPUs into a computer, challenging copper's physical limits.

Copper cables suffer mounting signal loss at high data rates, which necessitates wider wires and more power, complicating a data center's dense connections.

Point2 and AttoTude propose radio-based cables that offer longer reach, lower power consumption, and a narrower profile than copper, without the cost and complexity of optics.

Startups aim to directly integrate radio cables with GPUs, easing cooling needs and enhancing data-center efficiency.

How fast you can train gigantic new AI models boils down to two words: up and out.

In data-center terms, scaling out means increasing how many AI computers you can link together to tackle a big problem in chunks. Scaling up, on the other hand, means jamming as many GPUs as possible into each of those computers, linking them so that they act like a single gigantic GPU, and allowing them to do bigger pieces of a problem faster.

The two domains rely on two different physical connections. Scaling out mostly relies on photonic chips and optical fiber, which together can sling data hundreds or thousands of meters. Scaling up, which results in networks that are roughly 10 times as dense, is the domain of much simpler and less costly technology—copper cables that often span no more than a meter or two.

This article is part of our special report Top Tech 2026.

But the increasingly high GPU-to-GPU data rates needed to make more powerful computers work are coming up against the physical limits of copper. As the bandwidth demands on copper cables approach the terabit-per-second realm, physics demands that they be made shorter and thicker, says David Kuo, vice president of product marketing and business development at the data-center-interconnect startup Point2 Technology. That’s a big problem, given the congestion inside computer racks today and the fact that Nvidia, the leading AI hardware company, plans an eightfold increase in the maximum number of GPUs per system, from 72 to 576 by 2027.

“We call it the copper cliff,” says Kuo.
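Why shorter and thicker? The tradeoff follows from a well-known property of copper channels: conductor loss grows roughly with the square root of frequency because of the skin effect, so at a fixed end-to-end loss budget the usable reach shrinks as lane rates climb. The sketch below illustrates that scaling; the loss budget, reference loss, and lane rates are assumed values chosen for illustration, not figures from the article or from any specific cable specification.

```python
import math

# Illustrative sketch, not from the article: if copper loss per meter scales
# with the square root of frequency (skin effect), reach at a fixed loss
# budget shrinks as the per-lane data rate rises. All constants are assumed.

LOSS_BUDGET_DB = 30.0      # assumed end-to-end insertion-loss budget
REF_LOSS_DB_PER_M = 12.0   # assumed loss per meter at the reference frequency
REF_FREQ_GHZ = 13.0        # reference Nyquist frequency (~26 GBd PAM4)

def reach_m(nyquist_ghz: float) -> float:
    """Approximate reach in meters under skin-effect loss scaling."""
    loss_per_m = REF_LOSS_DB_PER_M * math.sqrt(nyquist_ghz / REF_FREQ_GHZ)
    return LOSS_BUDGET_DB / loss_per_m

for gbd in (26, 53, 106, 212):   # per-lane PAM4 symbol rates
    nyq = gbd / 2                # Nyquist frequency in GHz
    print(f"{gbd:>4} GBd PAM4 -> ~{reach_m(nyq):.1f} m")
```

The trend, not the exact numbers, is the point: each doubling of the symbol rate cuts the reach by roughly a factor of the square root of two, which is why meter-scale copper links that work at today's lane rates fall off the "copper cliff" as per-lane bandwidth heads toward a terabit per second.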
