Could Computers Get Too Smart?

Friday, February 7, 2014

Some scientists and philosophers worry that artificial intelligence may someday make humanity superfluous.

As technological enhancement of our bodies and minds progresses, there is increasing concern about the potential negative consequences. Some optimists believe that human life will be transformed for the better and that we can address any risks successfully as they arise, but for other believers in accelerating progress, hope turns to fear.

Documentary filmmaker James Barrat, author of the provocatively titled Our Final Invention: Artificial Intelligence and the End of the Human Era, is concerned that artificial superintelligence could be so powerful that human beings might become as indifferent to the machines running the world as, say, field mice are to farmers. One of his sources, Nick Bostrom, an Oxford professor of philosophy, is an unlikely alarmist. As a self-described “transhumanist,” Bostrom believes that humans will achieve longer, healthier lives and magnified physical and mental powers through technology, including genetic engineering and future innovations such as “immersive virtual reality, machine-phase nanotechnology, and artificial intelligence.” His “Letter from Utopia,” signed by “Your Possible Future Self,” is an enticing philosophical prospectus for such a future. Few other notable thinkers are as optimistic as Bostrom, and yet he too sees a dark side, an unlikely but real possibility: machines that transcend us.

This scenario belongs to a category of unlikely but devastating risks. Bostrom distinguishes between two kinds of hazards, the endurable and the terminal. On the individual level, that’s the difference between a stolen car and death; on the group level, between an economic recession and genocide; and on the global level, between a thinning of the ozone layer and any event that would “either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” An existential risk is “one where humankind as a whole is imperiled,” with “major adverse consequences for the course of human civilization for all time to come,” Bostrom writes. Such scenarios include a nuclear exchange that causes irreversible agricultural and technological damage, biological warfare, and uncontrollable emerging viruses. (Science has also made us more conscious of natural existential threats from asteroids, comets, and supervolcanoes.) Runaway superintelligence is one of these hazards.

While evil robots have been a staple of science fiction for decades, the new concern over artificial intelligence envisions no such warfare. Instead, the fear is that autonomous intelligent systems, originally created to pursue human objectives, will develop agendas of their own in which people are not so much the enemy as an irrelevant presence that may be impeding the realization of the supermachines’ emergent goals. This scenario can be traced back at least to the computer scientist Bill Joy’s famous 2000 Wired essay, “Why the Future Doesn’t Need Us.” Joy speculated that humanity would become ever more dependent on artificial-intelligence-based decision making and would slowly lose control over its machines. No longer able to function without systems too complex for us to run ourselves, we would be at the robots’ mercy. Trying to pull the plug, Joy warned, might be “suicide.”

The recent apparent acceleration of computing power has helped revive Joy’s fears. The computer scientist, inventor, and Google executive Ray Kurzweil has identified what he calls a Law of Accelerating Returns, a kind of super compound interest of information technology that he asserts can be traced back to the year 1900. Moore’s Law, the observation that the density of transistors on microchips doubles roughly every two years, is actually the fifth manifestation of this law, Kurzweil argues, and the acceleration will continue after the current integrated-circuit era. Kurzweil believes we are moving toward a Singularity of converging artificial intelligence, biotechnology, and nanotechnology.

A preview of what will be possible in modeling and replicating the brain is visible in the sequencing of the human genome. In his recent book How to Create a Mind, Kurzweil notes that the amount of genetic data sequenced has doubled every year since the Human Genome Project began. The cost of sequencing a single human genome has dropped from $100 million to less than $10,000 since 2001, according to the latest statistics from the National Institutes of Health.
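
A rough check of that pace, taking the cost drop to span the roughly thirteen years from 2001 to early 2014 (an assumption of this sketch, not a figure from the NIH data), implies a halving time of about a year, far faster than Moore’s Law’s two-year doubling:

\[
\frac{\$100{,}000{,}000}{\$10{,}000} = 10^{4} \approx 2^{13.3},
\qquad
\frac{13\ \text{years}}{13.3\ \text{halvings}} \approx 1\ \text{year per halving}.
\]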

While many biologists and cognitive psychologists remain skeptical, some neuroscientists now believe it is possible to replicate the brain’s architecture without building a synapse-by-synapse map. Henry Markram of the Swiss Federal Institute of Technology in Lausanne has just received a €1 billion grant from the European Union for an international cooperative project that will use supercomputers to analyze the principles behind the brain’s processing. The premise is that if we understand the architecture of thinking, we can build a system that emulates it and thereby find the key to curing a host of mental illnesses. Beyond that, Markram’s Blue Brain project aims at “reconstructing the brain piece by piece and building a virtual brain in a supercomputer,” making possible artificial intelligence systems that can bootstrap their way to ever-greater powers of thinking and planning.

Kurzweil, who has proposed his own model of the mind, believes that the apparent complexity of the brain may be the result of simple rules, just as a six-character equation is enough to generate the ultra-intricate graphic called the Mandelbrot Set (the equation appears below). IBM is investing $1 billion in its supercomputer Watson, which defeated human Jeopardy! champions. This project does not try to replicate the human brain’s structure — Watson could make errors no skilled human contestant would, like considering Toronto a U.S. city — but it also raises hopes and fears about the autonomy of machines. According to IBM’s CEO, Virginia M. Rometty, Watson “learns from its own experiences and from our interactions with it — and as it does, it keeps getting smarter. Its judgments keep getting better.” This might make it less necessary to hand-feed advanced machines with millions of commonsense facts; some critics of artificial intelligence, such as the British sociologist of science Harry Collins, believe that there is just too much of this “tacit knowledge” — including the countless facts about human relationships we take for granted — to specify.
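
The six-character equation is the Mandelbrot iteration, z = z² + c: for each complex number c, start from zero and repeatedly square and add c; the point c belongs to the set exactly when the resulting sequence never escapes to infinity. A trivially simple rule yields unbounded intricacy:

\[
z_{0} = 0, \qquad z_{n+1} = z_{n}^{2} + c, \qquad
M = \{\, c \in \mathbb{C} : |z_{n}| \text{ stays bounded as } n \to \infty \,\}.
\]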

Even apart from the elusiveness of tacit knowledge, there are many reasons to doubt the imminence of a virtual human brain, let alone one that could become a self-multiplying, possibly civilization-threatening superintelligence. Artificial intelligence researchers themselves acknowledge that many tasks have taken far longer than their predecessors predicted, leading in the past to disappointing results and funding slumps known as “AI winters.” Computer scientists specializing in computational complexity aren’t sure whether brain modeling belongs in the category of problems so hard that centuries of hardware and software progress couldn’t solve them. Every so often, strikingly efficient computer procedures take experts by surprise, as Google’s search algorithm did in the 1990s. Artificial superintelligence may seem improbable, but history is full of great minds who declared new inventions impossible. As the science fiction writer Arthur C. Clarke put it, “Any sufficiently advanced technology is indistinguishable from magic.” In this case, will it be black magic?

The most serious reason for skepticism about such technological developments is not a philosophical, physical, or psychological objection but one from everyday experience. I would take warnings about the dangers of superintelligent machines more seriously if today’s computers were able to make themselves more resistant to human hackers and to detect and repair their own faults. Organizations with access to some of the most advanced supercomputers and gifted programmers have been hacked again and again by individuals and groups with modest resources, compromising everything from credit card numbers to espionage secrets. We must balance charts of exponential growth of computing power, like those displayed by Kurzweil in How to Create a Mind, against more sobering ones of continuing electronic fragility.

Of course there are ways to make computer systems more robust. Some of the greatest practical successes of artificial intelligence depend on elaborate techniques to compensate for the difference between computer reasoning and human thinking. Advanced aircraft such as the Airbus A320 rely on five or more flight computers answering the same questions with diverse hardware and software, comparing answers, and “voting” where necessary; a bug in any single computer is overruled (a sketch of the voting idea appears after this paragraph). IBM’s Watson likewise did not attempt to answer Jeopardy! questions as a human contestant would but instead ran many techniques in parallel and assigned a probability to each candidate answer. So if superintelligence arises, it will probably be manifested not in a super-network of total social control but in clearly defined, usually proprietary environments. And as computing power becomes ever cheaper, there will be more redundant systems watching over each other, as on the Airbus; what doomed the fictional mission in Stanley Kubrick’s and Clarke’s 2001: A Space Odyssey was a single, unchecked master computer, HAL.
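
Here is a minimal sketch of that majority-voting idea, in Python. It is illustrative only; real avionics (and Watson’s probability-weighted answer ranking) are far more elaborate, and the function and sample values are invented for the example.

    from collections import Counter

    def vote(channel_answers):
        """Return the majority answer among redundant channels.

        A single faulty channel is overruled as long as a strict
        majority of channels still agree.
        """
        answer, count = Counter(channel_answers).most_common(1)[0]
        if count <= len(channel_answers) // 2:
            raise RuntimeError("no majority: redundant channels disagree")
        return answer

    # Five channels computing the same quantity; the one buggy value is outvoted.
    print(vote([12.5, 12.5, 99.9, 12.5, 12.5]))  # -> 12.5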

Is concern over superintelligence irrational, then? Even unwarranted fears can have positive consequences. The historian of mathematics Slava Gerovitch has documented how the specter of cybernetic communism (a briefly proposed Soviet computer network to turbocharge central planning) helped spur American development of the Internet. Decades later, predictions of massive computer failures on January 1, 2000 (the Y2K bug) prompted many financial institutions to establish remote emergency backup systems that limited disruptions after the World Trade Center attacks. Fear of superintelligence can also be additional grounds for rejecting such alarming ideas as autonomous killer robots. For American troops (all too familiar with friendly fire) and for civilians on the receiving end of errors in lethal software, “local” versus “existential” risk would be an empty distinction.

Fatal superintelligence may be a meaningless threat. But it can also be a highly useful myth.

Edward Tenner is author of Why Things Bite Back: Technology and the Revenge of Unintended Consequences and Our Own Devices: How Technology Remakes Humanity. He is a visiting researcher in the Rutgers Department of History and the Princeton Center for Arts and Cultural Policy Studies.

FURTHER READING: Tenner also writes “Higher Education’s Internet Revolution,” “Tomorrow’s Technological Breakthroughs: Hiding in Plain Sight?” and “The Value of Scientific Prizes.” Michael Sacasas explains “Technology in America.”

Image by Dianna Ingram / Bergman Group
