
The math that explains why bell curves are everywhere


No matter where you look, a bell curve is close by.

Place a measuring cup in your backyard every time it rains and note the height of the water when it stops: Your data will conform to a bell curve. Record 100 people’s guesses at the number of jelly beans in a jar, and they’ll follow a bell curve. Measure enough women’s heights, men’s weights, SAT scores, marathon times — you’ll always get the same smooth, rounded hump that tapers at the edges.

Why does the bell curve pop up in so many datasets?

The answer boils down to the central limit theorem, a mathematical truth so powerful that it often strikes newcomers as impossible, like a magic trick of nature. “The central limit theorem is pretty amazing because it is so unintuitive and surprising,” said Daniela Witten, a biostatistician at the University of Washington. Through it, the most random, unimaginable chaos can lead to striking predictability.
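The theorem's claim is easy to check by simulation. In this sketch (my own illustration, not from the article), each individual draw is a uniformly random number between 0 and 1, about as structureless as data gets. Yet when we sum 50 of them at a time, the sums pile up into a bell curve, which we verify with the 68-95 rule that characterizes a normal distribution. The constants 25 and 2.04 are the theoretical mean and standard deviation of such a sum (50 × 0.5 and √(50/12)).

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

# Each trial sums 50 independent uniform(0, 1) draws.
# The central limit theorem says these sums form a bell curve.
N_DRAWS, N_TRIALS = 50, 20_000
sums = [sum(random.random() for _ in range(N_DRAWS)) for _ in range(N_TRIALS)]

mean = statistics.mean(sums)    # theory: 50 * 0.5 = 25
stdev = statistics.stdev(sums)  # theory: sqrt(50 / 12) ≈ 2.04

# A normal curve puts ~68% of its mass within 1 standard deviation
# of the mean and ~95% within 2. Check the simulated sums:
within_1sd = sum(abs(s - mean) <= stdev for s in sums) / N_TRIALS
within_2sd = sum(abs(s - mean) <= 2 * stdev for s in sums) / N_TRIALS

print(f"mean ≈ {mean:.2f}, stdev ≈ {stdev:.2f}")
print(f"within 1 sd: {within_1sd:.1%}, within 2 sd: {within_2sd:.1%}")
```

Swapping the uniform draws for coin flips, dice rolls, or any other distribution with finite variance gives the same bell shape, which is exactly what makes the theorem feel like a magic trick.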

It’s now a pillar on which much of modern empirical science rests. Almost every time a scientist uses measurements to infer something about the world, the central limit theorem is buried somewhere in the methods. Without it, it would be hard for science to say anything, with any confidence, about anything.

“I don’t think the field of statistics would exist without the central limit theorem,” said Larry Wasserman, a statistician at Carnegie Mellon University. “It’s everything.”

Purity From Vice

Perhaps it shouldn’t come as a surprise that the push to find regularity in randomness came from the study of gambling.

In the coffeehouses of early-18th-century London, Abraham de Moivre’s mathematical talents were obvious. Many of his contemporaries, including Isaac Newton and Edmond Halley, recognized his brilliance. De Moivre was a fellow of the Royal Society, but he was also a refugee, a Frenchman who had fled his home country as a young man in the face of anti-Protestant persecution. As a foreigner, he couldn’t secure the kind of steady academic post that would befit his talent. So to help pay his bills, he became a consultant to gamblers who sought a mathematical edge.

Abraham de Moivre made early mathematical investigations into games of chance. Joseph Highmore (1736)/Public Domain
