
Membrane evaporative cooling tech achieves record-breaking results, could be solution for next-generation AI server cooling — clocks 800 watts of heat flux per square centimeter


Cooling is among the most power-hungry subsystems in an AI data center. According to SciTechDaily, engineers from the University of California, San Diego have created a new cooling technology built around a specially engineered fiber membrane that could reduce both the power and the water required to cool server racks full of AI GPUs.

The fiber membrane-based system relies on evaporative cooling to remove heat from the components it covers. The membrane contains a network of interconnected microscopic pores that draw coolant across its surface through capillary action. The stack has three layers: a bottom layer with microchannels that carry the liquid, an intermediate layer holding the membrane, and a top layer where the evaporator sits. As coolant passes through the microchannels, it is wicked into the membrane, and the heat generated by the component being cooled turns the liquid into vapor that escapes through the evaporator layer. Coolant that does not vaporize stays in the microchannels and is likely recirculated.
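To put the evaporative mechanism in perspective, here is a minimal back-of-envelope sketch (not from the article): the heat absorbed each second equals the mass of coolant vaporized each second multiplied by the coolant's latent heat of vaporization. The coolant is assumed to be water here, since the report does not specify the fluid.

```python
# Rough energy-balance sketch (my own, not from the article). It assumes the
# coolant is water, whose latent heat of vaporization is roughly 2,260 J per gram.
LATENT_HEAT_J_PER_G = 2260.0  # approximate value for water near 100 C

def evaporation_rate_g_per_s(heat_flux_w_per_cm2: float, area_cm2: float) -> float:
    """Grams of coolant that must vaporize each second to absorb the heat load."""
    heat_load_w = heat_flux_w_per_cm2 * area_cm2   # total heat in watts over the wetted area
    return heat_load_w / LATENT_HEAT_J_PER_G       # W divided by J/g gives g/s

# Example: a 1 cm^2 hot spot dissipating 100 W needs only ~0.044 g of water
# vaporized per second, which is why evaporation is such a dense heat sink.
print(f"{evaporation_rate_g_per_s(100, 1):.3f} g/s")
```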

(Image credit: SciTechDaily)

This approach reportedly fixes problems found in previous evaporative cooling designs based on porous membranes. Those earlier attempts had pores that were either too small or too large, causing them to clog or the coolant to boil. By contrast, the new system uses fiber membranes with interconnected pores of just the right size to prevent both clogging and boiling.

Best of all, the design achieved a record-breaking heat flux of 800 watts per square centimeter and remained stable over several hours of operation, making it a potent option for power-hungry data center applications. The membrane is also reportedly operating well below its theoretical limits, suggesting future versions of the design could offer even greater cooling capability.
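For a sense of scale, the same latent-heat arithmetic as above (again assuming water as the coolant, an assumption on my part rather than a detail in the report) turns the record flux into a rough coolant budget per square centimeter:

```python
# Rough arithmetic for the reported record flux, assuming water as the coolant.
RECORD_FLUX_W_PER_CM2 = 800.0
LATENT_HEAT_J_PER_G = 2260.0   # approximate latent heat of vaporization of water

grams_per_s_per_cm2 = RECORD_FLUX_W_PER_CM2 / LATENT_HEAT_J_PER_G    # ~0.35 g/s per cm^2
liters_per_hour_per_cm2 = grams_per_s_per_cm2 * 3600 / 1000          # ~1.3 L/h per cm^2

print(f"~{grams_per_s_per_cm2:.2f} g/s per cm^2, ~{liters_per_hour_per_cm2:.1f} L/h per cm^2")
```

Only the coolant that actually vaporizes leaves the loop; whatever stays liquid in the microchannels can recirculate, which lines up with the claimed reduction in water use.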

This type of cooling system is something the data center industry desperately needs right now. AI development continues to explode at a rapid pace, and current data center liquid cooling systems are struggling to keep up, as we discussed in our immersion cooling for data centers article. Nvidia's next-generation data center/AI GPUs are expected to draw far more power than even its current Blackwell flagships: the upcoming Feynman GPUs, due after Rubin and Rubin Ultra, are projected to consume a whopping 4,400W.
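As a rough sanity check (my own arithmetic, with the die footprint as an illustrative assumption rather than an Nvidia spec), a demonstrated flux of 800 W/cm² would comfortably cover a chip in that power class:

```python
# Rough sanity check: what heat flux would a ~4,400 W GPU impose on a cold plate?
# The footprint below is an assumption for illustration, not an Nvidia figure.
PROJECTED_GPU_POWER_W = 4400.0
ASSUMED_FOOTPRINT_CM2 = 8.0     # roughly an 800 mm^2 die; real packages spread heat wider

required_flux = PROJECTED_GPU_POWER_W / ASSUMED_FOOTPRINT_CM2   # = 550 W/cm^2
print(f"~{required_flux:.0f} W/cm^2 needed vs. the 800 W/cm^2 demonstrated")
```

Real packages spread heat unevenly and hot spots run well above the average flux, so this is only an order-of-magnitude comparison.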
