It is common knowledge among Wi-Fi professionals that 20 MHz or 40 MHz channel widths offer the best overall experience when planning 5 GHz enterprise networks. Enterprise networks often cover large footprints and must serve many connected devices at high density. Narrower channel widths provide many more usable channels for building out networks with appropriate channel reuse, and they allow the flexibility to avoid co-channel interference from noisy neighbors.
Available 20 MHz-wide 5 GHz channels (source: PotatoFi)
Available 80 MHz-wide 5 GHz channels (source: PotatoFi)
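To put rough numbers on that trade-off, here is a minimal sketch using approximate US (FCC) non-overlapping 5 GHz channel counts, DFS channels included; exact counts vary by regulatory domain, and the 30-AP deployment is a hypothetical example, not a measurement.

```python
# Approximate non-overlapping 5 GHz channel counts for the US (FCC),
# including DFS channels; exact availability varies by regulatory domain.
CHANNELS_BY_WIDTH_MHZ = {20: 25, 40: 12, 80: 6, 160: 2}

# A hypothetical 30-AP office deployment: with fewer unique channels,
# more APs are forced onto the same channel, increasing co-channel
# interference and shrinking each device's share of airtime.
AP_COUNT = 30

for width, channels in CHANNELS_BY_WIDTH_MHZ.items():
    aps_per_channel = AP_COUNT / channels
    print(f"{width:>3} MHz: {channels:>2} channels -> "
          f"~{aps_per_channel:.1f} APs sharing each channel")
```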
Residential and small business Wi-Fi challenges are not so different. The average US household has 21 Wi-Fi devices[1]. Many homes require multiple mesh nodes or access points for effective coverage, and users in dense urban areas contend with many nearby access points using wide channels. Although Wi-Fi networks built by seasoned professionals typically use narrower channels, consumer Wi-Fi devices from popular manufacturers and ISPs default to 80 MHz or wider channels. Popular routers and mesh systems from large manufacturers can even default to 40 MHz channels for 2.4 GHz networks (some without any option to change to 20 MHz), consuming two of the three non-overlapping 20 MHz channels, two-thirds of the usable spectrum!
Why? Because consumers have been conditioned to treat raw speed as the sole metric of Wi-Fi quality, rather than more important indicators of internet experience such as responsiveness and reliability. If manufacturers shipped Wi-Fi routers and mesh systems that used more reasonable 40 MHz-wide 5 GHz channels out of the box, consumers would return the products when their favorite speed testing tool showed no improvement over their previous system. Similarly, ISPs are reluctant to configure customer premises equipment (CPE) to use narrower channels by default to reduce adjacent-channel and co-channel interference, as doing so would decrease the maximum achievable speed and hurt their standings in network performance benchmarks that emphasize raw speed over a rock-solid, consistent Wi-Fi experience.
But wait. It gets worse.
Not only does the consumer and telecom marketing fixation on speed hamstring ISPs and device manufacturers when it comes to delivering excellent in-home Wi-Fi, but the very act of performing speed tests degrades the experience.
For example, here is a 1-minute summary of the responsiveness of an iPhone connected to a Wi-Fi 6 router that is wired directly to a symmetrical 1 Gbps fiber connection.
Orb 1-minute Wi-Fi Responsiveness view, idle network
Now, an important concept in Wi-Fi is that of airtime contention: basically, only a single device can “talk” at a time on a given channel[2]. So if one device generates a considerable amount of unnecessary traffic, say from running an internet speed test, substantial airtime contention occurs. Let’s connect a laptop to the same Wi-Fi router, take an internet speed test, and observe the impact on responsiveness from the same iPhone:
Orb 1-minute Wi-Fi Responsiveness view, active speed test by second Wi-Fi client
We can see material increases in latency, jitter, and packet loss, resulting in a doubling of the effective Lag. Of course, this alone does not prove that airtime contention is the cause; other factors such as bufferbloat may contribute. So let’s have the laptop perform the speed test over a wired connection instead of Wi-Fi:
Orb 1-minute Wi-Fi Responsiveness view, active speed test by ethernet client
The wired speed test had no material impact compared to the idle state, demonstrating that the degraded experience was in fact caused by the speed test’s Wi-Fi activity: airtime contention and related Wi-Fi factors. Note that the router is running FQ-CoDel to mitigate non-Wi-Fi variables. With FQ-CoDel disabled, airtime contention and bufferbloat combine to degrade the experience even further when a speed test runs: 1.5% packet loss, 13 ms jitter, and 113 ms peak lag. Considering that the majority of consumer-grade equipment does not ship with bufferbloat mitigations enabled, this more accurately represents the typical consumer Wi-Fi experience while an internet speed test is active.
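For Linux-based routers that expose a shell, a minimal sketch of checking the current queueing discipline and enabling fq_codel with the standard tc tool might look like the following; “eth0” is an assumed WAN interface name, and on typical consumer gear this is instead a firmware option (e.g., OpenWrt’s SQM) rather than a command.

```python
import subprocess

# Sketch for a Linux-based router: inspect the current queueing
# discipline, then enable fq_codel to mitigate bufferbloat.
# "eth0" is an assumed interface name; requires root privileges.
print(subprocess.run(["tc", "qdisc", "show", "dev", "eth0"],
                     capture_output=True, text=True).stdout)

# Replaces the root queueing discipline on eth0 with fq_codel.
subprocess.run(["tc", "qdisc", "replace", "dev", "eth0", "root", "fq_codel"],
               check=True)
```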
Consumer Wi-Fi 6 router with default configuration. Orb client: iPhone 13 Pro Max. Speed test client: MacBook Pro M1. FQ-CoDel enabled except where noted otherwise.
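To build intuition for how a single saturating client inflates latency for everyone else, here is a toy airtime-contention model. All parameters are illustrative assumptions; it models only the wait for a single in-flight frame aggregate, so it understates the real impact (no contention backoff, retries, or deeper queues).

```python
import random

# A toy model of airtime contention, not a Wi-Fi MAC simulator.
EFFECTIVE_RATE_MBPS = 150    # assumed effective channel throughput
AGG_BYTES = 64 * 1500        # one bulk A-MPDU aggregate from a speed test
PROBE_BYTES = 100            # small latency probe frame

agg_airtime_ms = AGG_BYTES * 8 / (EFFECTIVE_RATE_MBPS * 1000)
probe_airtime_ms = PROBE_BYTES * 8 / (EFFECTIVE_RATE_MBPS * 1000)

random.seed(1)
added_delays = []
for _ in range(10_000):
    # The probe arrives at a random point during an in-flight bulk
    # aggregate and must wait for it to finish before winning the channel.
    residual_wait_ms = random.uniform(0, agg_airtime_ms)
    added_delays.append(residual_wait_ms + probe_airtime_ms)

added_delays.sort()
print(f"bulk aggregate airtime: {agg_airtime_ms:.2f} ms")
print(f"added latency, mean   : {sum(added_delays) / len(added_delays):.2f} ms")
print(f"added latency, p95    : {added_delays[int(0.95 * len(added_delays))]:.2f} ms")
```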
Many ISPs, device manufacturers, and consumers automate periodic, high-intensity speed tests that degrade the consumer internet experience exactly as demonstrated above. Further, static probes that connect to Wi-Fi networks to measure maximum throughput contribute to both bufferbloat and airtime contention.
Conclusion
18% of US households experience Wi-Fi issues daily, 20% weekly, and 68% reported issues in the past year[3]. But until consumers, the press, and the industry understand that responsiveness and reliability, not speed, are the largest drivers of their Wi-Fi experience, there will be little appetite from device manufacturers and ISPs to focus on solutions that deliver truly great home Wi-Fi.
The IEEE 802.11bn (Wi-Fi 8) working group has acknowledged the need for a shift in focus, framing the standard’s goals differently from past generations: not chasing ever-higher peak speeds, but improving reliability, lowering latency (especially at the 95th percentile), reducing packet loss, and increasing robustness under challenging conditions such as interference and mobility.
That said, the standard is not projected to be finalized until 2028. Nearer term, the availability of Wi-Fi channels in the 6 GHz band will also help provide a better balance of speed, responsiveness, and reliability, allowing wider channels while minimizing co-channel interference. However, 6 GHz only adds another “lane”; it does not eliminate the inherent problem of data-hungry devices cannibalizing precious airtime. And, as analysis from Opensignal in conjunction with Hamina Founder & CEO Jussi Kiviniemi demonstrates, 6 GHz penetration remains low because both network and user equipment lag behind on Wi-Fi 6E and 7 adoption.
Source: Opensignal
Source: Opensignal
We don’t have to wait for material Wi-Fi 6E and 7 penetration (or the unrealized promises of Wi-Fi 8) to make progress. With configuration changes alone, we can do much better on the hardware already deployed, if we simply stop chasing maximum possible throughput and instead focus on Wi-Fi responsiveness and reliability.
So, are standard internet speed tests bad? Of course not! They are a tool, and when used for an appropriate task, such as validating that provisioned speeds are achievable by a network or client, they are incredibly useful. But we tend to overuse speed testing tools for all connectivity-related troubleshooting due to a historical lack of user-friendly alternatives and an industry focus on "megabits per second".
This isn’t a matter of education: consumers know they want responsive and reliable Wi-Fi networks for the use cases of today. Instead, we need tooling and data to show consumers the metrics they care about most in an easily digestible form and make them readily available. Modern monitoring tools that measure continuous network experience—not just point-in-time speed—give manufacturers and ISPs the opportunity to compete on metrics that actually improve Wi-Fi rather than degrade it.
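As a sketch of what a lightweight, continuously running responsiveness measurement can look like (in contrast to a channel-saturating speed test), the probe below uses tiny TCP handshakes to estimate latency, jitter, and loss. The endpoint, sample count, and jitter definition are all illustrative assumptions, not how any particular product measures.

```python
import socket
import statistics
import time

# A minimal sketch of a continuous, low-impact responsiveness probe.
HOST, PORT = "1.1.1.1", 443   # assumed reachable TCP endpoint
SAMPLES, TIMEOUT_S = 20, 1.0

rtts_ms, lost = [], 0
for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        # A TCP handshake moves only a few tiny frames, costing almost
        # no airtime, unlike a speed test that saturates the channel.
        with socket.create_connection((HOST, PORT), timeout=TIMEOUT_S):
            rtts_ms.append((time.monotonic() - start) * 1000)
    except OSError:
        lost += 1
    time.sleep(0.25)

if rtts_ms:
    print(f"latency p50   : {statistics.median(rtts_ms):.1f} ms")
    print(f"jitter (stdev): {statistics.pstdev(rtts_ms):.1f} ms")
print(f"loss          : {lost}/{SAMPLES}")
```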
Footnotes
[1] Journal of Consumer Affairs, April 2024
[2] This is an oversimplification given newer features such as OFDMA, but due to the nature of speed testing, the activity will demand large RUs or the entire channel. The scheduler also introduces overhead. And, as we will show, adoption of newer standards is low.
[3] TechSee data via Telecompetitor, September 2025