I’m currently working on the next edition of my Windows Server book for Cengage, updating everything from Windows Server 2022 to 2025. As you’d expect, my primary lab machine is a high-end 14th Gen Intel Core i9 system running Windows 11, hosting multiple Hyper-V virtual machines (VMs) for roles like Active Directory, IIS, DNS, DHCP, and more.
Out of curiosity (and honestly, just for fun), I decided to spin up the same Windows Server 2025 environment in Hyper-V on my Snapdragon X Elite system running Windows 11 on ARM. Microsoft does not provide an official installation ISO image of Windows Server 2025 for ARM on their website, so I used UUP dump to generate one from Microsoft’s update servers and installed the same set of VMs with it.
It was stable, functional, and fully usable as you’d expect… but there was one big difference: Everything felt much, much faster! Services started more quickly (including Active Directory, which is usually a bit of a slog), management consoles opened faster, and the same hands-on tasks I was writing for my textbook consistently completed in less time.
The Hyper-V VMs are configured identically in terms of memory, virtual processors, and installed roles, with each guest matching its host’s native CPU architecture:
Snapdragon X Elite = ARM64 guest on ARM64 host
Intel Core i9 = x64 guest on x64 host
At this point, it may look like the architecture is the only real difference, and the main contributor to the performance gap, but that’s a bit misleading. The systems differ in more than just CPU architecture: storage, memory, power management, and thermal behavior can all influence results. So rather than claiming “ARM is faster,” it’s better to look at how performance differs holistically.
Any good IT admin will tell you that workload type matters when it comes to performance. Both VMs are running the typical services you’d expect on a Windows Server: Active Directory, DNS, DHCP, IIS, File services (SMB/NFS/DFS), Print Services, Certificate Services, Remote Desktop Services, Routing and Remote Access, NPS, and so on. These services are typically thread-heavy and perform frequent but small CPU and I/O (read/write) operations, which makes them sensitive to latency and context switching (where a computer’s CPU stops one task to start another). In other words, they don’t tolerate variability well, and they benefit from a system that delivers consistent performance all the time.
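To see why “frequent but small” operations care about latency rather than raw throughput, here’s a minimal Python sketch (not a benchmark of either machine, and the tiny CPU-bound operation is just an invented stand-in for a small service request). It times many small operations and reports percentiles and jitter, since for this kind of workload the tail of the distribution matters more than the average:

```python
import statistics
import time

def measure_latency(op, iterations=2000):
    """Time many small operations and summarize the latency distribution.

    Latency-sensitive services (AD, DNS, DHCP) handle streams of small
    requests, so tail latency (p99) matters more than the average.
    """
    samples = []
    for _ in range(iterations):
        start = time.perf_counter_ns()
        op()
        samples.append(time.perf_counter_ns() - start)
    samples.sort()
    return {
        "p50_us": samples[len(samples) // 2] / 1000,
        "p99_us": samples[int(len(samples) * 0.99)] / 1000,
        "jitter_us": statistics.pstdev(samples) / 1000,  # spread = variability
    }

if __name__ == "__main__":
    # A tiny CPU-bound task stands in for one small service request.
    stats = measure_latency(lambda: sum(range(1000)))
    print(f"p50={stats['p50_us']:.1f}us  p99={stats['p99_us']:.1f}us  "
          f"jitter={stats['jitter_us']:.1f}us")
```

A large gap between p50 and p99 on the same hardware is exactly the kind of variability these services don’t tolerate well.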
This partially explains why the Snapdragon seems faster. Like many ARM systems, it doesn’t chase high boost clocks and instead delivers steady, sustained performance (including I/O). In contrast, modern Intel CPUs tend to ramp frequency quickly and throttle dynamically under load. That approach can deliver excellent peak performance, but it also introduces more variability in scheduling and latency under sustained or mixed workloads.
That variability matters even more in a virtualized environment. A hypervisor like Hyper-V is, at its core, a scheduler: it decides when each VM’s virtual processors get to run on the physical cores. If the underlying hardware delivers more predictable execution timing, the hypervisor can make more consistent scheduling decisions. That, in turn, benefits the VMs and the services running inside them.
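The effect can be illustrated with a toy single-server queue (the 0.3/1.5 split below is an invented stand-in for boost/throttle swings, not measured data from either machine). Both “CPUs” have the same average service time of 0.9 units per request; only the variability differs, yet the bursty one produces far worse tail waiting times:

```python
import random

def simulate_queue(service_times, interarrival=1.0):
    """Single-server FIFO queue: requests arrive at fixed spacing.
    Returns each request's wait time (arrival until service starts)."""
    free_at = 0.0
    waits = []
    for i, svc in enumerate(service_times):
        arrival = i * interarrival
        start = max(arrival, free_at)   # wait if the server is still busy
        waits.append(start - arrival)
        free_at = start + svc
    return waits

def p99(xs):
    xs = sorted(xs)
    return xs[int(len(xs) * 0.99)]

random.seed(42)
n = 10_000
# Steady CPU: every request takes exactly 0.9 time units.
steady = [0.9] * n
# Bursty CPU: same 0.9 average, alternating fast (boost) and slow (throttle).
bursty = [random.choice([0.3, 1.5]) for _ in range(n)]

print(f"steady p99 wait: {p99(simulate_queue(steady)):.2f}")
print(f"bursty p99 wait: {p99(simulate_queue(bursty)):.2f}")
```

With steady service the queue never builds up at all, while the bursty server accumulates backlogs during throttled stretches that its boost periods only partly drain. This is the classic queueing-theory result that waiting time grows with service-time variance, which is why predictable execution timing helps the hypervisor and its guests.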