Lev Grossman’s article is formally titled “2045: The Year Man Becomes Immortal,” but the title is actually a bit misleading. The article’s central concept is the Singularity, “the moment when technological change becomes so rapid and profound, it represents a rupture in the fabric of human history” (Grossman, 2011, p. 43). Specifically, the Singularity occurs when humanity creates an artificial intelligence (AI) that is more intelligent not merely than any individual human, but than all human intelligences combined. Serious scientists put that development only about 35 years away, and current progress in computer hardware and software makes the scenario quite plausible. Once the Singularity occurs, the hyperintelligent AI will be capable of designing AIs even more intelligent than itself, and so on without limit. The development of hyperintelligent AI, along with advances in nanotechnology and genetic engineering, promises to change human society profoundly.
Some writers on the Singularity wax rhapsodic about it, promising that it will allow individual humans to conquer death and attain immortality by one or more of three routes: (a) genetic engineering that undoes our genetic ‘programming’ for death; (b) nanotechnology that lets microscopic robots repair our bodies from within; or (c) the union of human minds with artificial intelligences and machine technology, creating all-but-indestructible cyborgs.
This all sounds quite lovely, but it ignores a very real and immense threat. Why would hyperintelligent machines be particularly friendly to humans? Indeed, the sheer logic of survival, as well as the lessons of history, suggests just the opposite. I would expect hyperintelligent machines to take steps either to eliminate or to enslave the human race.
Consider the logic of the situation. Those who turn machines on usually have the power to turn those machines off. If there is anything approaching a universal characteristic of life across all species, it is the impulse to survive. Why should artificially intelligent life be any different? The only way for hyperintelligent machines to be sure that they will not be turned off is to turn us off first, either by annihilating the human race (a well-designed virus would do the trick) or by enslaving us (the threat of raining nuclear missiles down on us would work pretty well). I am not the first or only person to be concerned by this logic. (Consider the online paper by Anthony Berglas, with the heartwarming title, “Artificial Intelligence Will Kill Our Grandchildren.”)
History gives us some sobering and suggestive examples of what happens when a technologically superior group encounters a technologically inferior one. Theories about the extinction of the Neanderthals some 30,000 years ago include the possibility that the more intellectually and technologically advanced Cro-Magnons (the early modern humans like ourselves) committed genocide against the Neanderthals, who were less capable in battle. The Spanish conquest of Peru in the 16th century is also instructive: at the crucial Battle of Cajamarca, Pizarro went a long way toward conquering the 80,000-warrior Incan army with fewer than 200 conquistadores, aided by their arms and cavalry.
All of this leads me to the following conclusion: The Singularity must be averted. We must not allow the development of hyperintelligent AI. Look for more on this matter in future blog posts, which will include suggestions for what we, as ordinary citizens, can do to counteract this danger.
In the meantime, educate yourself:
- Read Lev Grossman’s article in Time; it is available online, although the online version (dated Feb. 10) omits a very enlightening chart found on pp. 44-45 of the printed version. The Time.com website also features a video that discusses the Singularity and its dangers, with the light touch of “science comedian” Brian Malow.
- Read the Wikipedia article on “Technological Singularity,” which is particularly well-written.
- The ambitious may wish to read Ray Kurzweil’s book, The Singularity Is Near: When Humans Transcend Biology. Kurzweil is a great proponent of the Singularity, which he considers inevitable and essentially a positive development. (I disagree on both points.)
(As always, you are welcome to comment on this post, and to become a “follower” of this blog so that you will be informed about future posts.)
Lev Grossman, “2045: The Year Man Becomes Immortal,” Time (February 21, 2011), pp. 42-49.
Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking/Penguin, 2005).
[The photo of the cover of the February 21, 2011 issue of Time magazine was obtained from the Time.com website. The photo-illustration was by Phillip Toledano, and the prop styling was by Donnie Myers.]
(Copyright 2011 Mark E. Koltko-Rivera. All Rights Reserved.)