Since the very first days of computer science — a field known for its methodical approach to problem-solving — randomness has played an important role. The first program to run on the world’s first general-purpose electronic computer used randomness to simulate nuclear processes. Similar approaches have since been used in astrophysics, climate science and economics. In all these cases, plugging in random numbers at certain steps in the algorithm helps researchers account for uncertainty about the many ways that complex processes can play out.
But adding randomness into an algorithm can also help you calculate the correct answer to unambiguous true-or-false questions. “You just say ‘OK, let me give up, let me not try, let me just pick something at random,’” said Eric Blais, a computer scientist at the University of Waterloo. “For way too many problems, that ends up being a successful approach.”
Let’s say you want to determine whether a given number is prime (divisible only by 1 and itself) or composite (also divisible by other integers). You could simply try to divide it by all possible factors, but for large numbers this “brute force” method and other factoring algorithms are excruciatingly slow. And if the number turns out to be composite, factoring algorithms tell you the values of its divisors — more information than you asked for. If you care only about a number’s “primality,” is there a more efficient algorithm?
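For concreteness, here is a minimal Python sketch of the brute-force approach (the function name and return convention are illustrative, not from the article). Notice that it hands back a divisor, the extra information mentioned above:

```python
def smallest_divisor(N):
    """Brute-force primality check by trial division. Assumes N >= 2.

    Returns the smallest nontrivial divisor of N, or None if N is prime.
    Any composite N has a divisor no larger than sqrt(N), so the search
    can stop there; even so, the running time grows with sqrt(N), which
    is hopeless for numbers hundreds of digits long.
    """
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d  # found a factor: N is composite
        d += 1
    return None  # no factor found: N is prime
```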
There is if you use randomness. The basic idea goes back to a result from the 17th-century French mathematician Pierre de Fermat, known as his “little theorem.” Fermat considered two integers — call them N and x. He proved that if N is a prime number, then x^N − x is always a multiple of N, regardless of the value of x. Equivalently, if x^N − x is not a multiple of N, then N can’t be a prime number. But the inverse statement isn’t always true: If x^N − x is a multiple of N, then N is usually but not always prime.
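A quick check of the numbers shows both halves of that claim. With the prime N = 7 and x = 2, we get 2^7 − 2 = 126, which is 18 × 7, a multiple of 7, just as the theorem promises. And the caveat is real: with the composite N = 4 and x = 5, the value 5^4 − 5 = 620 is still a multiple of 4 (it equals 4 × 155), so a single test can be fooled.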
To turn Fermat’s little theorem into a primality test, just take the N that you’re interested in, choose x at random, and plug the two numbers into x^N − x. If the result is not a multiple of N, then you’re done: You know that N is definitely composite. If the result is a multiple of N, then N is probably prime. Now pick another random x and try again. In most cases, after a few dozen tries, you can conclude with near certainty that N is a prime number. “You do this a small number of times,” Blais said, “and somehow now your probability of having an error is less than the probability of an asteroid hitting the Earth between now and when you look at the answer.”
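That recipe translates almost line for line into code. The Python sketch below is an illustration, not a production primality test: it uses Python's built-in pow(x, N, N) for fast modular arithmetic (so x^N is never computed in full), and the default of 30 trials is an arbitrary choice. Real-world tests rely on refinements such as Miller-Rabin, because rare composites called Carmichael numbers pass the x^N − x check for every choice of x:

```python
import random

def probably_prime(N, trials=30):
    """Fermat primality test: probabilistic, based on the little theorem.

    Returns False if N is definitely composite, and True if N passed
    every random trial and is therefore almost certainly prime.
    """
    if N < 4:
        return N in (2, 3)
    for _ in range(trials):
        x = random.randrange(2, N - 1)
        # pow(x, N, N) is x^N mod N. If it differs from x, then
        # x^N - x is not a multiple of N, so N cannot be prime.
        if pow(x, N, N) != x:
            return False
    return True  # every trial passed: N is almost certainly prime
```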
The first primality tests using randomized algorithms (based on refinements to Fermat’s little theorem) ushered in a new era. Problem after problem turned out to be far easier to solve with randomness than with nonrandom, or deterministic, algorithms. The key was to reframe each problem as one that could be quickly solved given an appropriate value for some number x, and then prove that just about any x would do. The solution works even though researchers have no idea how to determine whether any specific choice is a good one. Mathematicians have joked that this unusual challenge is akin to finding hay in a haystack.
But these successes made researchers wonder why randomness should help with problems like primality testing, which are all about finding hidden, non-random patterns. “There’s something a bit paradoxical about it,” said Rahul Santhanam, a computer scientist at the University of Oxford. “Pure randomness is helping you get a handle on the structure that solves the problem.”
In 1994, the computer scientists Noam Nisan and Avi Wigderson helped resolve this confusion by demonstrating that randomness, though useful, probably isn’t necessary. They proved that one of two things must be true: Either all problems that can be efficiently solved using randomness also have fast deterministic algorithms, or many notoriously difficult problems are secretly easy. Computer scientists consider the second possibility very unlikely.