Tech News

Tests show $30,000 AI GPUs are terrible password crackers — RTX 5090 gaming GPU outperforms Nvidia H200 and AMD MI300X

Why This Matters

Despite their high price and advanced compute capabilities, top-tier AI GPUs like Nvidia's H200 and AMD's MI300X perform poorly at password cracking compared to consumer-grade GPUs like the RTX 5090. This shows that specialized AI hardware is not automatically suited to every compute-intensive application, cybersecurity workloads included, and underscores the importance of matching hardware to the specific workload. For consumers and industry alike, the lesson is that a more expensive AI-focused GPU is not necessarily better at every kind of high-performance computing.

Key Takeaways

Compute power in datacenter AI GPUs keeps growing at an extraordinary pace with each new GPU generation. That led a research team at Specops to test whether some popular AI GPUs could also perform well at password cracking, on the assumption that these GPUs will need a second job once the AI bubble finally bursts. The team put Nvidia's H200, AMD's MI300X, and Nvidia's RTX 5090 to the test to see whether really expensive $30,000 AI GPUs can outperform consumer graphics cards at password cracking.

The research team benchmarked five popular hashing algorithms (MD5, NTLM, bcrypt, SHA-256, and SHA-512) on the three aforementioned GPUs using Hashcat, a popular password recovery tool. Hashcat takes a file of password hashes as its starting point and attempts to recover the original passwords. Unsurprisingly, the same tool is also used illegally by hackers to automate password cracking.

Algorithm   H200           MI300X         RTX 5090
MD5         124.4 GH/s     164.1 GH/s     219.5 GH/s
NTLM        218.2 GH/s     268.5 GH/s     340.1 GH/s
bcrypt      275.3 kH/s     142.3 kH/s     304.8 kH/s
SHA-256     15,092.3 MH/s  24,673.6 MH/s  27,681.6 MH/s
SHA-512     5,173.6 MH/s   8,771.4 MH/s   10,014.2 MH/s

Testing shows the H200 and MI300X falling well behind the RTX 5090, despite both GPUs being significantly more expensive. On average, the RTX 5090 was 20% faster than the MI300X and a whopping 63.7% faster than the H200. At the extremes, the RTX 5090 was 33.7% faster than the MI300X in MD5 and 93.5% faster than the H200 in SHA-512.


The problem for these AI GPUs is how Hashcat's workload maps onto their hardware: password cracking leans heavily on 32-bit integer operations and is extremely compute-intensive. That is the exact opposite of machine-learning workloads, which lean on low-precision data types such as FP4, BF16, FP8, and INT8.
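To see why INT32 throughput matters, consider what one MD5 round step actually does, per RFC 1321: bitwise boolean mixing, 32-bit modular additions, and a 32-bit rotate. A minimal sketch in Python (masking with `0xFFFFFFFF` emulates the 32-bit integer registers a GPU would use natively):

```python
MASK = 0xFFFFFFFF  # emulate 32-bit integer wraparound

def rotl32(x: int, n: int) -> int:
    """Rotate a 32-bit value left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def md5_round1_step(a: int, b: int, c: int, d: int,
                    m: int, k: int, s: int) -> int:
    """One MD5 round-1 step (RFC 1321): a = b + ((a + F(b,c,d) + m + k) <<< s).
    Every operation here is a 32-bit integer op; none of it benefits from
    the FP4/FP8/BF16 tensor hardware that dominates AI GPU die area."""
    f = (b & c) | ((~b & MASK) & d)          # boolean function F
    return (b + rotl32((a + f + m + k) & MASK, s)) & MASK
```

None of these operations touch Tensor cores, so a GPU stacked with low-precision matrix units gains nothing here, while a consumer GPU with plentiful INT32 ALUs races ahead.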

As a result, datacenter AI GPUs prioritize those low-precision operations over everything else. For instance, the H200 has only half as many INT32 cores as FP32 cores, and significantly fewer than the RTX 5090, because most of the work it is designed for is handled by its Tensor cores. Ironically, the MI300X has much higher raw INT32 throughput than the RTX 5090, yet it still loses due to Nvidia-specific optimizations baked into Hashcat's code.

Specops's testing demonstrates how narrowly streamlined modern AI GPUs are for their intended job; there is not much these GPUs can do well beyond it. For now, consumer desktop GPUs remain the fastest option for cracking passwords.

Follow Tom's Hardware on Google News, or add us as a preferred source, to get our latest news, analysis, & reviews in your feeds.