[I wrote and posted this piece 20 years ago. I am reposting it now because there is still the perception that we are engaged in a technological singularity, while I think that a techno singularity is an ongoing illusion. It will always appear as if it is about to happen, even if the shift point has already passed. Therefore the singularity is always near, and never comes.]
We seem to be experiencing a singularity-like event with computers and the World Wide Web. But the current concept of a singularity is not the best explanation for the transformation in progress.
A brief history: The singularity is a term borrowed from physics to describe a cataclysmic threshold in a black hole. In the canonical use, as an object is pulled toward the gravitational center of a black hole, it passes a point beyond which nothing about it, including information, can escape. In other words, although an object’s entry into a black hole is steady and knowable, once it passes this discrete point nothing whatever about its future can be known. This disruption on the way to infinity is called a singular event: a singularity.
Mathematician and science fiction author Vernor Vinge applied this metaphor to the acceleration of technological change. The power of computers has been increasing at an exponential rate with no end in sight, which led Vinge to an alarming picture. In Vinge’s analysis, at some point not too far away, innovations in computer power would enable us to design computers more intelligent than we are, and these smarter computers could design computers yet smarter than themselves, and so on, the loop of computers-making-newer-computers accelerating very quickly toward unimaginable levels of intelligence. This progress in IQ and power, when graphed, generates a rising curve that appears to shoot straight up toward the limit of infinity. In mathematical terms it resembles the singularity of a black hole because, as Vinge announced, it will be impossible to know anything beyond this threshold. If we make an AI which in turn makes a greater AI, ad infinitum, then their future is unknowable to us, just as our lives have been unfathomable to a slug. So the singularity became a black hole, an impenetrable veil hiding our future from us.
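To make the shape of that runaway loop concrete, here is a toy sketch of the arithmetic behind it; the starting level, the assumed 1.5x improvement per generation, and the number of generations are placeholders of my own, not anything Vinge specified.

```python
# A toy illustration of the recursive self-improvement loop described above.
# Every number here is an arbitrary placeholder, not anything Vinge specified.

def runaway_intelligence(start=1.0, factor=1.5, generations=30):
    """Return the intelligence level after each design generation,
    assuming every mind builds a successor `factor` times smarter."""
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * factor)  # each generation designs the next
    return levels

if __name__ == "__main__":
    for gen, level in enumerate(runaway_intelligence()):
        if gen % 5 == 0:
            print(f"generation {gen:2d}: {level:>12,.1f}x the starting mind")
```

Even with a modest 1.5x gain per step, thirty generations compound to a mind nearly 200,000 times the starting level, which is the runaway curve Vinge had in mind.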
Ray Kurzweil, a legendary inventor and computer scientist, seized on this metaphor and applied it across a broad range of technological frontiers. He demonstrated that this kind of exponential acceleration is not unique to computer chips but is happening in most categories of innovation driven by information, in fields as diverse as genomics, telecommunications, and commerce. The technium itself is accelerating in its rate of change. Kurzweil found that if you make a very crude comparison between the processing power of neurons in human brains and the processing power of transistors in computers, you can map out the point at which computer intelligence will exceed human intelligence, and thus predict when the cross-over singularity will happen. Kurzweil calculates the singularity will happen about 2040. That seems like tomorrow, which prompted Kurzweil to announce with great trumpets that the “Singularity is near.” In the meantime, everything is racing toward that point, beyond which it is impossible for us to imagine what happens.
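The crossover Kurzweil points to is, underneath, a simple piece of arithmetic. Here is a back-of-the-envelope sketch of that kind of calculation; every figure in it (the brain estimate, the machine’s starting performance, the doubling time) is a placeholder I picked for illustration, not Kurzweil’s actual numbers.

```python
import math

# A crude sketch of the crossover calculation described above. All figures
# are illustrative placeholders, not Kurzweil's actual estimates.
BRAIN_OPS_PER_SEC = 1e16      # assumed rough capacity of a human brain
MACHINE_OPS_PER_SEC = 1e10    # assumed machine performance in the start year
START_YEAR = 2005
DOUBLING_TIME_YEARS = 1.8     # assumed doubling period for machine power

def crossover_year():
    """Year when the assumed exponential curve overtakes the brain estimate."""
    doublings_needed = math.log2(BRAIN_OPS_PER_SEC / MACHINE_OPS_PER_SEC)
    return START_YEAR + doublings_needed * DOUBLING_TIME_YEARS

if __name__ == "__main__":
    print(f"crude crossover estimate: around {crossover_year():.0f}")
```

With these made-up inputs the curve crosses over around 2041; the point is not the particular year but how directly such a date follows from whatever assumptions are fed into it.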
Even though we cannot know what will be on the other side of the singularity, that is, what kind of world our super intelligent brains will provide us, Kurzweil and others believe that our human minds, at least, will become immortal, because we’ll be able to either download them, migrate them, or eternally repair them with our collective super intelligence. Our minds (that is, ourselves) will continue on with or without our upgraded bodies. The singularity, then, becomes a portal or bridge to the future. All you have to do is live long enough to make it through the singularity in 2040. If you make it till then, you’ll become immortal.
I’m not the first person to point out the many similarities between the Singularity and the Rapture. The parallels are so close that some critics call the singularity the Spike, to hint at that decisive moment of fundamentalist Christian apocalypse. At the Rapture, when Jesus returns, all believers will suddenly be lifted out of their ordinary lives and ushered directly into heavenly immortality without going through death. This singular event will produce repaired bodies and intact minds full of eternal wisdom, and is scheduled to happen “in the near future.” The hope is almost identical to the techno Rapture of the singularity.
There are so many assumptions built into the Kurzweilian version of the singularity that it is worth trying to unravel them. While much about the technological singularity is misleading, some aspects of the notion do capture the dynamic of technological change.
First, immortality is in no way ensured by a singularity of AI. For any number of reasons our “selves” may not be very portable, newly engineered eternal bodies may not be very appealing, or super intelligence alone may not be enough to quickly solve the problem of overcoming bodily death.
Second, intelligence may or may not be infinitely expandable from our present point. Because we can imagine a manufactured intelligence greater than ours, we think that we possess enough intelligence right now to pull off this trick of bootstrapping. In order to reach a singularity of ever-increasing AI we have to be smart enough not only to create a greater intelligence, but also to make one that is able to create the next level in turn. A chimp is hundreds of times smarter than an ant, but the greater intelligence of a chimp is not smart enough to make a mind smarter than itself. Not all intelligences are capable of bootstrapping intelligence. We might call a mind capable of imagining another type of intelligence but incapable of replicating itself a Type 1 mind. A Type 2 mind would be an intelligence capable of replicating itself (making artificial minds) but incapable of making one substantially smarter. A Type 3 mind would be capable of creating an intelligence sufficiently smart that it could make another generation even smarter. We assume our human minds are Type 3, but it remains an assumption. It is possible that we have Type 1 minds, or that greater intelligence may have to be evolved slowly rather than bootstrapped instantly in a singularity.