
Precision over perception: Why architecture matters in benchmarking

Why This Matters

This article underscores the importance of rigorous and transparent benchmarking in the tech industry, highlighting how misinterpretations of test setups can lead to misleading conclusions. For consumers and enterprises, understanding the true capabilities of cloud infrastructure solutions is crucial for making informed decisions. Accurate architecture and methodology in testing ensure that performance claims are valid and comparable, fostering trust and innovation.

Key Takeaways

In the world of hybrid cloud infrastructure, data is our most valuable currency. But for data to be valid and valuable, it must be taken in context. Recently, VMware published a blog post claiming that, based on a study conducted by Principled Technologies, VMware Cloud Foundation (VCF) 9.0 with vSphere Kubernetes Service (VKS) delivers a "5.6x pod density" advantage over Red Hat OpenShift.

At first glance, the number is striking. As any systems architect knows, however, the validity of a benchmark lies not in the result, but in the methodology. When we look under the hood of this study, we find a comparison that highlights a fundamental misunderstanding of scale and a missed opportunity for a true "apples-to-apples" evaluation.

1. The geometry of the test: 300 versus 4

The most critical element of any experiment is controlling the variables. In this study, both platforms were given the same physical hardware (four Dell PowerEdge servers). The configurations were very different, however:

The main Principled Technologies report does not mention how many virtual worker nodes VKS was running. The companion "science behind the report" methodology document does: in the appendix (pt-pod-density.yaml, page 5), the cluster configuration specifies "replicas: 300".

VKS was running 300 virtual worker nodes across 4 physical hosts. That is 75 virtual machines (VMs) per server, each configured as a "best-effort-medium" VM class (2 vCPUs, 8 GB RAM, no resource reservations). OpenShift was configured as a bare-metal cluster with 4 worker nodes, 1 per physical server.

In the published results, VKS reached 42,000 pods across its 300 workers, averaging 140 pods per node, while OpenShift reached 7,400 pods across its 4 workers, averaging 1,850 pods per node. OpenShift’s per-node density is over 13 times higher than VKS’s. The 5.6x headline reflects how the environments were sized—300 nodes versus 4, not how efficiently either platform runs pods.
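The arithmetic behind these figures is worth checking directly. A minimal sketch, using the aggregate totals and node counts as published:

```python
# Published aggregate results and node counts from the study.
vks_pods, vks_nodes = 42_000, 300   # VKS: 300 virtual worker nodes
ocp_pods, ocp_nodes = 7_400, 4      # OpenShift: 4 bare-metal worker nodes

vks_per_node = vks_pods / vks_nodes  # 140 pods per virtual node
ocp_per_node = ocp_pods / ocp_nodes  # 1,850 pods per bare-metal node

# The aggregate ratio drives the headline; the per-node ratio tells
# the opposite story.
print(vks_pods / ocp_pods)           # ~5.7x aggregate (the headline figure)
print(ocp_per_node / vks_per_node)   # ~13.2x per-node advantage for OpenShift
```

With these rounded totals the aggregate ratio comes out near 5.7; the report's exact pod counts presumably yield the published 5.6x. Either way, the per-node comparison runs roughly 13-to-1 in the other direction.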

The methodology document also reveals a configuration asymmetry in maxPods settings. The YAML configuration sets VKS to a maxPods limit of 200 per node (notably, the prose of the same document states "a maximum of 300 pods per node," an internal discrepancy; we reference the YAML as the deployed configuration). OpenShift was set to 5,000 per node, well above the default of 250, a setting that actually gave OpenShift an advantage in packing pods onto its 4 nodes. Even so, the topology determined the outcome: VKS's theoretical ceiling was 60,000 pods (300 nodes × 200 per node), while OpenShift's was 20,000 (4 nodes × 5,000 per node). The test compared two fundamentally different architectures and reported only the aggregate total.
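The ceilings implied by those maxPods settings can be computed in two lines:

```python
# Theoretical pod ceilings implied by the maxPods settings in the
# methodology YAML (node counts x per-node limit).
vks_ceiling = 300 * 200    # 300 virtual nodes at maxPods=200
ocp_ceiling = 4 * 5_000    # 4 bare-metal nodes at maxPods=5000

# Before a single pod is scheduled, topology alone caps OpenShift
# at one third of VKS's ceiling.
print(vks_ceiling, ocp_ceiling)  # 60000 20000
```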

2. Queuing theory versus platform speed

The report also highlights "faster pod readiness" for VKS. This, however, reflects a basic principle of queuing theory rather than a software engineering breakthrough: spreading the same scheduling workload across 300 nodes instead of 4 leaves each node with a far shorter admission queue, whatever the platform's per-pod speed.
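A toy model illustrates the point (this is our illustration, not the study's methodology; `per_pod_seconds` and `parallel_per_node` are assumed parameters). If each node admits pods at the same fixed rate, total rollout time scales with the per-node queue length, not with any platform-level speedup:

```python
def rollout_time(total_pods: int, nodes: int,
                 per_pod_seconds: float,
                 parallel_per_node: int = 10) -> float:
    """Toy estimate: pods queue evenly across nodes; each node starts
    `parallel_per_node` pods concurrently at a fixed per-pod cost."""
    per_node_queue = total_pods / nodes
    return (per_node_queue / parallel_per_node) * per_pod_seconds

# Identical workload and identical per-pod speed -- only the node count differs.
print(rollout_time(7_400, 300, per_pod_seconds=2.0))  # ~4.9 s across 300 nodes
print(rollout_time(7_400, 4, per_pod_seconds=2.0))    # 370.0 s across 4 nodes
```

Under these assumptions the 300-node cluster finishes roughly 75x sooner with zero difference in per-pod performance, which is why "faster pod readiness" says more about topology than about the platform.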
