
All the world’s polygons


How real is your world? How do you know? Maybe it’s the gentle sway of leaves in the wind. Or the sound of crickets chirping at dusk. Or the softness of the light in the summer. Take a step back, blink. Turn your head to the side. Are you sure?

From the earliest 8-bit bush in The Legend of Zelda (1986) to the peatbog sublime of Death Stranding (2019), video games have long been on a quest for perfect simulation. The benefits are obvious: more convincing worlds equal more immersive gameplay; more immersive gameplay equals more profit. From real-time weather systems to 3D-scanned rainforests, an economy of simulated nature has emerged to answer gaming’s demands. But is this desire for ecological simulation also a kind of capture?

This essay explores how simulation in gaming carries echoes from the past even as its implications careen towards the future. The same computing platforms powering next-gen immersive gaming are also fueling long-range climate forecasts and evaluating proposals to modify the earth’s atmosphere. In other words, the same climate simulator powering the real-time weather system of the next Grand Theft Auto is also going to tell us whether geoengineering is a good idea. Consider the real-time digital models of the planet currently being assembled—also known as digital Earth twins or Earth Virtualization Engines—as the conceptual siblings of a Natural History Museum, with all the related baggage. Consider the implications of this archive that floats incandescently above us in the cloud, the energy funneled into its maintenance ironically contributing to the slow death of the real thing. What new ways of relating to the world do simulation technologies open, and what do they inevitably foreclose?

I. Two kinds of wind

In cinema, there is the wind that blows and the wind blown by a machine. In computer games there is only one kind of wind.

—Harun Farocki, Parallel I (2012)

Harun Farocki, Parallel I, 2012, HD video installation, 2 channels, color, sound, 17 min.

When the German artist and filmmaker Harun Farocki transcended this mortal plane in 2014, he left behind one of the most prescient analyses of simulation in gaming. Across four acts, Parallel I–IV (2012–2014) analyzes the perpetual feedback loop between simulation and culture. The project explores how virtual environments map the present and anticipate the future, even while largely reinforcing historical ways of knowing the world.

It is ironic that the video game industry, whose output is largely centered around violent and human-centric modes of play, has chosen environmental realism as its representational benchmark. Trees shuddering in the wind, clouds unfurling overhead, dappled sunlight on swaying leaves: these have long been the stress test of computational photorealism. Video games must render their worlds in real time with each playthrough, which requires an immense amount of computational muscle. As the software grows more powerful, the hardware must follow, exponentially. Consider that the Cray-1, the storied early supercomputer built from dairy pipes and hoses in Wisconsin in the mid-1970s, is considerably less powerful than the smartphone in your pocket. Now consider the world’s most powerful supercomputer—the exascale system El Capitan, brought online in November 2024—which runs at a speed of 1.7 exaFLOPS, capable of running almost 2 quintillion calculations a second.

More real, more beautiful: with the arrival of titles like Sea of Thieves (2018), The Legend of Zelda: Breath of the Wild (2017), and Firewatch (2016), we edge ever closer to the ludic sublime. And yet, the technologies underpinning these worlds are streaked with violence. Tennis for Two (1958), largely considered the first video game, was crafted with missile-tracking technology by a group of bored physicists killing time in the lab. Spacewar!, developed a few years later on the PDP-1, was funded by the Pentagon and later used in military training.

The Pentagon-funded Spacewar! (1962), one of the earliest video games, displayed on a PDP-1, the computer used to program it.

And yet, a realistic “landscape”—for this is how game environments are termed, a choice of syntax that reinforces their status as secondary and inert—matters. The believability of a body of water or a blade of grass aggregates to reinforce what game scholars, following the historian Johan Huizinga, term the “magic circle”, or narrative immersion, of a game.

Farocki’s death predated the Cambrian explosion of simulated ecology in the late twenty-teens, realized through online megamalls of 3D scans stocked with thousands of free-to-use assets, as well as powerful photogrammetry mobile apps that allow anyone to make a 3D model of virtually anything. One wonders what the artist would think about a photorealistic trunk, scanned in Iceland, that now appears simultaneously in a medieval RPG, a first-person adventure game about a park ranger on the run from his dying wife, and a posthuman mycelium-zombie survival game, amongst thousands of other titles. Building lifelike game environments has never been so easy; nor has photorealism ever been so photocopied, and, in a sense, unremarkable. The question then becomes: what else might simulated environments be capable of?

II. All the world’s polygons (Rainforest)

We live to capture this world to give life to countless others. We capture the world so you can create your own.

—Quixel advertising campaign, 2018

A terrain scan of the Vasquez Rocks State Park, currently for sale on Unreal Engine’s new asset marketplace, Fab.

There is a park 40 miles northwest of Los Angeles that looks Martian, as if plucked from some science-fiction story: gigantic primordial rock formations thrust sideways into the sky like cosmic daggers. Its cinematic otherworldliness is no stranger to Hollywood: Vasquez Rocks lies within the industry’s 30-mile “studio zone”, which permits a cheap actor and crew day rate. As a result, dozens of iconic movies—pre-CGI worldbuilding—were filmed here. Over the past century, this ancient landscape has been Dr. Evil’s underground complex in Austin Powers, the planet Vulcan in Star Trek, and, a little unconvincingly, Dracula’s rural Transylvania.

Hollywood’s “plug-and-play” approach to the Vasquez Rocks served as a precedent for its contemporary use of virtual production. As an increasingly integral tool in the industry’s production pipelines, the game engine can be thought of as one huge stage set. Things are sculpted, painted, meticulously lit, and precisely filmed. There is a constant negotiation between environmental complexity and render efficiency that determines how virtual worlds are made; this tension is crucial to how such engines have evolved over the years.

In computer graphics, each 3D model is made up of polygons: flat surfaces defined by their vertices. A higher-poly model is more realistic, but it also requires more computational power to render, causing more lag. To combat this, game engines render only the polygons visible on screen and adjust each model’s level of detail according to its distance from the camera; at the far end of that spectrum, a distant object collapses into a flat, camera-facing image, a technique known as “billboarding”. As the name implies, “billboarding” reinforces the understanding of virtual environments as mere backdrops against which the “real” action of game worlds takes place.
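To make the mechanics concrete, here is a minimal sketch of distance-based level-of-detail selection in Python. The asset, the thresholds, and the names (LODLevel, TREE_LODS, select_lod) are hypothetical illustrations under the assumptions described above, not any particular engine’s API: the farther the camera, the cheaper the mesh, until the tree flattens into a two-triangle billboard.

```python
import math
from dataclasses import dataclass

@dataclass
class LODLevel:
    """One pre-built version of a mesh at a given polygon budget."""
    polygon_count: int
    max_distance: float  # farthest camera distance at which this level is used

# A hypothetical tree asset with three detail levels. The numbers are
# illustrative; real engines tune these per asset and per platform.
TREE_LODS = [
    LODLevel(polygon_count=120_000, max_distance=25.0),   # hero close-up
    LODLevel(polygon_count=15_000,  max_distance=100.0),  # mid-range
    LODLevel(polygon_count=1_500,   max_distance=400.0),  # background
]

# Beyond the last mesh level, the tree is replaced by a single
# camera-facing quad (two triangles) textured with an image of the
# tree: the "billboard".
BILLBOARD = LODLevel(polygon_count=2, max_distance=float("inf"))

def select_lod(camera_pos, object_pos):
    """Pick the cheapest representation that still reads correctly
    at the current camera distance."""
    distance = math.dist(camera_pos, object_pos)
    for level in TREE_LODS:
        if distance <= level.max_distance:
            return level
    return BILLBOARD

# A tree 300 meters away renders with 1,500 polygons; at 2 km it
# collapses into the 2-polygon billboard.
print(select_lod((0, 0, 0), (0, 0, 300)).polygon_count)   # 1500
print(select_lod((0, 0, 0), (0, 0, 2000)).polygon_count)  # 2
```

In a real engine this selection runs every frame for every object that survives visibility culling, which is why a forest of thousands of trees can stay interactive: most of its polygons are never rendered at all.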
