
The great Bench GPU retest begins — how we're testing for our GPU Hierarchy in 2026, and why upscaling and framegen are still out


As we prepare to embark on a new round of testing for our GPU Hierarchy, we want to give Tom’s Hardware Premium subscribers a deep dive into our thinking and methods as results from this testing begin to feed into our Bench database, along with a test plan showing what data to expect and when. This article will help you interpret our game-testing results and understand why we test the way we do.

Our task for the first half of this year has sadly been made easier by the fact that neither Nvidia nor AMD nor Intel introduced new discrete gaming graphics cards at CES 2026. Historically, we would have expected an RTX 50 Super-series mid-cycle refresh from Nvidia at the very least, but the insatiable maw of AI demand has apparently dashed any launch plans for new consumer GPUs in favor of data center AI accelerators with incomparably higher margins.

Instead, the current RTX 50-series, RX 9000-series, and Arc B-series desktop cards soldier on with steep price hikes and reduced assortments of partner models. But if you need a graphics card for gaming in 2026, you can at least buy one from stock for the moment.

Given that we’re working with essentially the same GPUs that we had in 2025, our goal for the first GPU Hierarchy update of 2026 is to gather fresh data that represents a diverse cross-section of modern games across a range of engines, from high-frame-rate esports titles to GPU-crushing AAA experiences.

Upscaling and framegen matter more than ever, but we’re leaving them out

The biggest question we had to wrestle with when devising our 2026 test plan was whether to include upscaling in the GPU Hierarchy by default. Upscalers are no longer the crutch they once were, trading visual fidelity for a large performance boost. Especially with the release of Nvidia’s DLSS 4.5, we are closer than ever to one of the few unconditional wins of the AI era: free performance, lower fixed resource usage, and better-than-native image quality.

For all that, we’ve still decided against enabling DLSS, FSR, and XeSS for our testing. We’re trying to exclude as many variables as possible (like CPU scaling) from what is meant to be a direct performance comparison between graphics cards. Not every upscaler produces the same output image quality, not every game implements every upscaler from every vendor, and not every card can run the same upscaling models.

Even as DLSS 4.5 generates impeccable output frames, AMD’s FSR 4 can’t match its image quality, and FSR 4 only officially runs on certain Radeons. Older cards can only take advantage of FSR 3.x and earlier, which are compatible with graphics cards from any vendor but don’t benefit from AI-enhanced architectures. Intel’s XeSS uses AI models of varying fidelity in both its Arc-friendly and cross-vendor approaches, but its image quality also isn’t on par with DLSS, and it’s not in every game.

With all that in mind, even if we test Nvidia, AMD, and Intel graphics cards at the same input resolution before upscaling, we’re getting “Nvidia frames,” “AMD frames,” and “Intel frames” out the other end, which adds a layer of undesirable complexity to our analysis.

We want the GPU Hierarchy and Bench to be as clean and simple a representation of comparative performance between graphics cards as possible, so we’re excluding the variables introduced by upscaling from our data.
