Tech News

In Edison’s Revenge, Data Centers Are Transitioning From AC to DC

Why This Matters

The transition from AC to DC power distribution in data centers marks a significant shift in infrastructure, driven by the rising power demands of AI and high-performance computing. This change promises increased efficiency, reduced energy losses, and streamlined power delivery, which are critical for scaling future data center operations and reducing operational costs for consumers and industry players alike.

Key Takeaways

Last week’s Nvidia GTC conference highlighted new chip architectures to power AI. But as the chips become faster and more powerful, the rest of the data center infrastructure is playing catch-up. The power delivery community is responding: Announcements from Delta, Vertiv, and Eaton showcased new designs for the AI era. Complex and inefficient AC-to-DC power conversions are gradually being replaced by DC configurations, at least in hyperscale data centers.

“While AC distribution remains deeply entrenched, advances in power electronics and the rising demands of AI infrastructure are accelerating interest in DC architectures,” says Chris Thompson, vice president of advanced technology and global microgrids at Vertiv.

AC to DC Conversion Challenges

Today, nearly all data centers are designed around AC utility power. The electrical path includes multiple conversions before power reaches the compute load. Power typically enters the data center as medium-voltage AC (1 kV to 35 kV), is stepped down to low-voltage AC (480 V or 415 V) using a transformer, converted to DC inside an uninterruptible power supply (UPS) for battery storage, converted back to AC, and converted again to low-voltage DC (typically 54 V DC) at the server, supplying the DC power computing chips actually require.
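The conversion chain above can be sketched as a product of per-stage efficiencies. The stage names follow the path described in the text; the efficiency values below are illustrative assumptions for the sketch, not figures from the article, and real values vary by equipment and load:

```python
# Conventional AC power path, as described in the text.
# Per-stage efficiencies are illustrative assumptions only.
STAGES = [
    ("MV-to-LV transformer (e.g. 13.8 kV AC -> 480 V AC)", 0.99),
    ("UPS rectifier (AC -> DC for battery bus)",           0.97),
    ("UPS inverter (DC -> AC)",                            0.97),
    ("Server PSU (AC -> 54 V DC)",                         0.95),
]

def end_to_end_efficiency(stages):
    """Multiply per-stage efficiencies to get the overall figure."""
    eff = 1.0
    for _name, e in stages:
        eff *= e
    return eff

eff = end_to_end_efficiency(STAGES)
print(f"End-to-end efficiency: {eff:.1%}")
print(f"Power lost as heat:    {1 - eff:.1%}")
```

Even with each stage in the mid-to-high 90s, the compounding of four conversions leaves a double-digit share of input power dissipated as heat before it ever reaches a chip, which is the core of the argument for removing stages.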

“The double conversion process ensures the output AC is clean, stable and suitable for data center servers,” says Luiz Fernando Huet de Bacellar, vice president of engineering and technology at Eaton.

That setup worked well enough for the amounts of power required by traditional data centers, whose computational racks draw on the order of 10 kW each. For AI, rack power is starting to approach 1 MW. At that scale, the energy losses, current levels, and copper requirements of AC-to-DC conversions become increasingly difficult to justify. Every conversion incurs some power loss. On top of that, as the amount of power that must be delivered grows, the sheer size of the converters, as well as the conductor requirements of the copper busbars, becomes untenable. According to an Nvidia blog, a 1 MW rack could require as much as 200 kg of copper busbar. For a 1 GW data center, that could amount to 200,000 kg of copper.
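The copper figures above follow from simple linear scaling of the Nvidia number. A quick sketch, assuming busbar mass scales proportionally with delivered power:

```python
# Back-of-the-envelope scaling of the figure cited from the Nvidia
# blog: ~200 kg of copper busbar per 1 MW rack, assumed to scale
# linearly with total facility power. Purely illustrative arithmetic.
KG_COPPER_PER_MW = 200

def busbar_copper_kg(total_power_mw):
    """Estimate busbar copper mass, assuming linear scaling with power."""
    return total_power_mw * KG_COPPER_PER_MW

print(busbar_copper_kg(1))     # a single 1 MW rack
print(busbar_copper_kg(1000))  # a 1 GW data center (1000 MW)
```

The linear-scaling assumption is a simplification; in practice distribution topology and voltage choices change how much of that copper is actually needed, which is the point of the higher-voltage DC designs discussed next.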

Benefits of High-Voltage DC Power

Converting 13.8 kV AC grid power directly to 800 V DC at the data center perimeter eliminates most intermediate conversion steps. This reduces the number of fans and power supply units, and leads to higher system reliability, lower heat dissipation, improved energy efficiency, and a smaller equipment footprint.

“Each power conversion between the electric grid or power source and the silicon chips inside the servers causes some energy loss,” says Fernando.

Switching from 415 V AC to 800 V DC in electrical distribution enables 85 percent more power to be transmitted through the same conductor size. This is because higher voltage reduces the current required for a given power, lowering resistive losses and making power transfer more efficient. Thinner conductors can handle the same load, reducing copper requirements by 45 percent, improving efficiency by 5 percent, and lowering total cost of ownership by 30 percent for GW-scale facilities.
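The voltage-versus-current trade-off can be checked with the basic relations I = P / V and resistive loss = I²R. This is a single-conductor DC simplification (real 415 V distribution is three-phase AC, which is one reason the article's gain is 85 percent rather than the naive ratio this sketch produces); the power and resistance values are assumed for illustration:

```python
# For a fixed power P, current I = P / V, and conductor loss is I^2 * R.
# Comparing the 415 V and 800 V figures from the text under a
# single-conductor DC simplification with assumed P and R.
P = 1_000_000.0  # 1 MW load, illustrative
R = 0.001        # ohms, assumed conductor resistance

def current(power_w, volts):
    return power_w / volts

def resistive_loss(power_w, volts, ohms):
    i = current(power_w, volts)
    return i * i * ohms

i_415 = current(P, 415)
i_800 = current(P, 800)
print(f"Current at 415 V: {i_415:.0f} A; at 800 V: {i_800:.0f} A")
print(f"I^2R loss ratio (800 V / 415 V): "
      f"{resistive_loss(P, 800, R) / resistive_loss(P, 415, R):.2f}")
```

Doubling the voltage roughly halves the current, and since conductor loss scales with the square of current, the same copper dissipates about a quarter of the heat, which is what lets thinner conductors carry the same load.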

... continue reading