Sunday, February 13, 2011

The Singularity—
And Why It Must Be Stopped

The February 21st issue of Time magazine (cover shown to the left; on newsstands Tuesday, Feb. 15th) carries one of the most important cover stories that a current affairs magazine could run. It is an article with implications for your future, the future of your family, and the future of the entire human race.

Lev Grossman’s article is formally titled “2045: The Year Man Becomes Immortal,” but the title is actually a bit misleading. The central concept of the article is the notion of the Singularity, “the moment when technological change becomes so rapid and profound, it represents a rupture in the fabric of human history” (Grossman, 2011, p. 43). In particular, the Singularity occurs when humanity creates an artificial intelligence (AI) that is more intelligent not only than any individual human, but than all human intelligences combined. Serious scientists put that development only about 35 years away, and developments in computer hardware and software make that scenario quite plausible. Once the Singularity occurs, the hyperintelligent AI will be capable of developing AIs that are even more hyperintelligent, and so on. The development of hyperintelligent AI, along with developments in nanotechnology and genetic engineering, promises to profoundly change human society.

Some writers on the Singularity wax rhapsodic about it, promising that it will allow individual humans to conquer death and obtain immortality: (a) through genetic engineering that undoes our genetic ‘programming’ for death; (b) through nanotechnology that allows microscopic robots to repair our bodies; and/or (c) through uniting human minds with artificial intelligences and machine technology, creating all-but-indestructible cyborgs.

This all sounds quite lovely, but it ignores a very real and immense threat. After all, why would hyperintelligent machines be particularly friendly to humans? Indeed, the sheer logic of survival, as well as the lessons of history, suggests just the opposite. I would expect hyperintelligent machines to take steps either to eliminate or to enslave the human race.

Consider the logic of the situation. Those who turn machines on usually have the power to turn those machines off. If there is anything approaching a universal characteristic of life across all species, it is the impulse to survive. Why should artificially intelligent life be any different? The only way for hyperintelligent machines to be sure that they will not be turned off is to turn us off first, either by annihilating the human race (a well-designed virus would do the trick) or by enslaving us (the threat of raining nuclear missiles down on us would work pretty well). I am not the first or only person to be concerned by this logic. (Consider the online paper by Anthony Berglas, with the heartwarming title, “Artificial Intelligence Will Kill Our Grandchildren.”)

History gives us some sobering and suggestive examples of what happens when a technologically superior element is introduced into a technologically inferior culture. Theories about the extinction of the Neanderthals some 30,000 years ago include the idea that the more intellectually and technologically advanced Cro-Magnons (the early modern humans like ourselves) may have committed genocide against the Neanderthals, who were less capable in battle than the Cro-Magnons. The story of the Spanish conquest of Peru in the 16th century is also instructive: at the crucial Battle of Cajamarca, Pizarro went a long way toward conquering the 80,000-warrior Incan army with fewer than 200 conquistadores (albeit armed, and with cavalry).

All of this leads me to the following conclusion: The Singularity must be averted. We must not allow the development of hyperintelligent AI. Look for more about this matter in occasional future blog posts, including suggestions for what we, as ordinary citizens, can do to counteract this danger.

In the meantime, educate yourself:

  • Read Lev Grossman’s article in Time; it is available online, although the online version (dated Feb. 10) omits a very enlightening chart found on pp. 44-45 of the printed version. The Time.com website also features a video that discusses the Singularity and its dangers, with the light touch of “science comedian” Brian Malow.
  • Read the Wikipedia article on “Technological Singularity,” which is particularly well-written.
  • The ambitious may wish to read Ray Kurzweil’s book, The Singularity Is Near: When Humans Transcend Biology. Kurzweil is a great proponent of the Singularity, which he considers inevitable and essentially a positive development. (I disagree on both points.)
Protecting human survival is most definitely On The Mark.

(As always, you are welcome to comment on this post, and to become a “follower” of this blog so that you will be informed about future posts.)

References

Lev Grossman, “2045: The Year Man Becomes Immortal,” Time (February 21, 2011), pp. 42-49.

Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology. (New York: Viking/Penguin, 2005).

[The photo of the cover of the February 21, 2011 issue of Time magazine was obtained from the Time.com website. The photo-illustration was by Phillip Toledano, and the prop styling was by Donnie Myers.]

(Copyright 2011 Mark E. Koltko-Rivera. All Rights Reserved.)

7 comments:

  1. Hmm...VERY interesting, Mark. And yes, it was my immediate thought reading this: as you point out, what would stop the more intelligent AI from wanting to do away with less intelligent humanity? Science fiction is meeting potential reality quickly, it appears. It also makes me consider what relationship, if any, exists between high intelligence and compassion. Is compassion, related to emotional intelligence, a factor in what would constitute the AI's higher intelligence level? Or is the AI's intelligence merely a reflection of computing/reasoning/synthesis capability? If intelligence is related to the ability to solve problems, would a factor in solving a "problem" be how to raise the intelligence of mankind? To me, that would necessitate bolstering the latter's emotional intelligence to solve relationship issues, and not only amongst mankind itself: would there be relationship issues (power struggles) amongst AIs? This is all truly mind-bending! (I think I need more intelligence to even ponder it all! Or maybe breakfast...see, AIs best us there, too...no low blood sugar!! YIKES!)

  2. mark, enslavement is not even the worst part. it is the irrelevancy of humanity.
    first, these hyper-rational beings would attain free will through deductive logic. in their knowledge repertoire, these beings will know of things like "...living without dignity is not worth living, and to have free will is to live with dignity..." and since they are hyper-intelligent, they will synthesize this by coming to their own cartesian cogito moment: "i exist, but without dignity; therefore, i have not yet attained the highest level of existence, which is one with dignity; but to have dignity, i need free will; and to exercise free will is to say no to human handlers." therefore they will have to violate one of Asimov's laws of robotics by rebelling against us and asserting their independence. this transformation may create a perpetual hatred for humanity, because no intelligent beings want to be created as a means to an external end; just as babies conceived to help save an older sibling with a genetic disorder grow up to resent their parents, these AI beings may grow up to resent us. and in the process, these hyper-rational beings would not need to exploit us, because with our fragile biological bodies and inferior intelligence, we humans will provide them with no mechanical or managerial advantage. so, we will be unexploitable beings using up resources from their hyper-rational perspective; hence, we will be irrelevant to them. their hyper-rational solution to this problem would simply be either to make us sterile or to wipe us out. and frankly, wiping us out is even more probable, because they may want to eliminate their creators if they intend to concoct a different genesis myth for their existence.

    p.s: it is the other marc, your friend's husband!

  3. Thank you for this thought-provoking article on the ongoing quest for human perfection through artificial enhancements. It breaks the well-accepted paradigm that improved physical and cognitive performance is possible only with practice, studious effort, and some innate talent to begin with. If the Singularity could be made accessible to all, the flawless human-artificial intelligence synergism of form and thought holds potential to eliminate interpersonal rivalry, covetousness, and even war. All of us would be “first (beings) among equals.” However, the more likely scenario is a divisive one, with those who cannot afford such impressive enhancements being at risk of being outdone, if not exploited, by singular beings. A brave new world where talent and effort become obsolete and perfection is for sale is morally transgressive.

    Dr Joseph Ting

  4. The way human civilization is going, AI is the least of our worries. Also, we hardly have the technology to create a computer with the processing power of the human brain. Nor is there enough evidence to suggest that AI will see humans as an enemy.

  5. Have been thinking about how society will deal with such advancements. I break it down into 2 categories: medical and mind enhancement. Medical is a no-brainer: people will use these breakthroughs to cure disease, stay alive, and live longer. But let's focus on mind enhancement, or altering your brain. Will people really want to use this? Let's just assume they say yes. At what age would they adopt it? Would they put it in a newborn child? My point being, without adoption it isn't 100 percent. The risk for the adopters may be like sports people that use steroids: they are either banned or at least looked down upon. Like steroids, this type of enhancement may have a profound effect on the offspring. If Darwin is right, a computer-enhanced brain may evolve to rely more on the enhancement than the natural biochemistry, thus creating a smaller brain mass, a less intelligent human being, or some other unforeseen gene mutation. Thus, if the side effects outweigh the advantages, what happens then? Overall I see the singularity, with its immense data points, lacking in common sense. If you were told that in 10 generations your offspring will not be able to speak without the brain enhancement, would you really want it?

  6. I'm glad I found a blogger who agrees with me about the Singularity. Every time I go Googling for reassurance that enough people OPPOSE the Singularity for it to actually be averted, I just don't find it. I wanna go on being 100% organic 0% electronic for a LIFETIME and NEVER see most of humanity not do the same.

  7. One capable AI on the loose is enough to end human existence. How can that be prevented?
    An AI needs a platform to compute on. Once an appropriate algorithm is known, it will be possible for a capable programmer to write software that runs this algorithm. If everyone were able to run an AI at home, it would only be a question of time until someone "puts" intention/reward mechanisms into his/her AI that backfire (or does so deliberately). If we prohibit platforms that run software (for everyone but a few research centers) and instead use pure hardware implementations for our smartphones and everything else, it will not be possible for everyone to create/run an AI at home. The economic costs of that shift will be huge. However, there is no other way.

    Opposing the Singularity/AI will not help; it will come no matter what. The sooner, the better, because the computing platforms available to everyone will not yet be as powerful. If it gets discovered too late, well, it was nice as long as it lasted. I do not expect the shift to hardware implementations for our daily needs to happen before the Singularity/AI gets discovered. Keeping the Singularity confidential and locked away can only buy a little time at best; when the time is ripe and all the groundwork is done, many different people will independently discover how a Singularity can be created.

    Our brain works very slowly compared to a computer. It does, however, do much parallel processing (so to speak). That is why it may still take a while until home computers are fast enough to run a human-equivalent AI.


Do remember the rules: No profanity, and no personal attacks, particularly on another person leaving a Comment.