2012 TheConsequencesOfMachineIntelligence


Subject Headings: Mass Technological Unemployment, Negative AI Consequence.

Notes

Cited By

Quotes

The question of what happens when machines get to be as intelligent as, and even more intelligent than, people seems to occupy many science-fiction writers. The Terminator movie trilogy, for example, featured Skynet, a self-aware artificial intelligence that served as the trilogy's main villain, battling humanity through its Terminator cyborgs. Among technologists, it is mostly "Singularitarians" who think about the day when machines will surpass humans in intelligence. The term "singularity" as a description for a phenomenon of technological acceleration leading to a "machine-intelligence explosion" was coined by the mathematician Stanislaw Ulam in 1958, when he wrote of a conversation with John von Neumann concerning the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." More recently, the concept has been popularized by the futurist Ray Kurzweil, who pinpointed 2045 as the year of the singularity. Kurzweil has also founded Singularity University and the annual Singularity Summit.

It is fair to say, I believe, that Singularitarians are not quite in the mainstream. Perhaps this is due to their belief that by 2045 humans will also become immortal and be able to [[download their consciousness to computers]]. It was, therefore, quite surprising when in 2000 Bill Joy, a decidedly mainstream technologist and co-founder of Sun Microsystems, wrote an article entitled "Why the Future Doesn't Need Us" for Wired magazine. "Our most powerful 21st-century technologies -- robotics, genetic engineering, and nanotech -- are threatening to make humans an endangered species," he wrote. Joy's article was widely noted when it appeared, but it seems to have made little lasting impact.

It is in the context of the Great Recession that people started noticing that while machines have yet to exceed humans in intelligence, they are getting intelligent enough to have a major impact on the job market. In their 2011 book, Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy, Erik Brynjolfsson and Andrew McAfee argued that "technological progress is accelerating innovation even as it leaves many types of workers behind." Indeed, over the past 30 years, as we saw the personal computer morph into tablets, smartphones, and cloud computing, we also saw income inequality grow worldwide. While the loss of millions of jobs over the past few years has been attributed to the Great Recession, whose end is not yet in sight, it now seems that technology-driven productivity growth is at least a major factor. Such concerns have gone mainstream in the past year, with articles in newspapers and magazines carrying titles such as "More Jobs Predicted for Machines, Not People," "Marathon Machine: Unskilled Workers Are Struggling to Keep Up With Technological Change," "It's a Man vs. Machine Recovery," and "The Robots Are Winning."

Early AI pioneers were brimming with optimism about the possibilities of machine intelligence. Alan Turing's 1950 paper, "Computing Machinery and Intelligence," is perhaps best known for his proposal of an "Imitation Game," known today as "the Turing Test," as an operational definition for machine intelligence. But the main focus of the 1950 paper is actually not the Imitation Game but the possibility of machine intelligence. Turing carefully analyzed and rebutted arguments against machine intelligence. He also stated his belief that we would see machine intelligence by the end of the 20th century, writing: "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

While we now know that Turing was too optimistic on the timeline, AI's inexorable progress over the past 50 years suggests that Herbert Simon was right when he wrote in 1956 that "machines will be capable … of doing any work a man can do." I do not expect this to happen in the very near future, but I do believe that by 2045 machines will be able to do, if not any work that humans can do, then a very significant fraction of it. Bill Joy's question therefore deserves not to be ignored: Does the future need us? By this I mean to ask: if machines are capable of doing almost any work humans can do, what will humans do? I have been getting various answers to this question, but I find none of them satisfying.

A typical answer when I raise this question is to call me a Luddite. (Luddism is defined as distrust or fear of the inevitable changes brought about by new technology.) This is an ad hominem attack that does not deserve a serious answer.

A more thoughtful answer is that technology has been destroying jobs since the start of the Industrial Revolution, yet new jobs are continually created. The AI Revolution, however, is different from the Industrial Revolution. In the 19th century, machines competed with human brawn. Now machines are competing with human brains, and robots combine brains and brawn. We are facing the prospect of being completely out-competed by our own creations. Another typical answer is that if machines do all of our work, then we will be free to pursue leisure activities. The economist John Maynard Keynes addressed this issue as early as 1930, when he wrote, "The increase of technical efficiency has been taking place faster than we can deal with the problem of labour absorption." Keynes imagined 2030 as a time in which most people would work only 15 hours a week and would occupy themselves mostly with leisure activities.

I do not find this to be a promising future. First, [[if machines can do almost all of our work, then it is not clear that even 15 weekly hours of work will be required]]. Second, I do not find the prospect of a leisure-filled life appealing; I believe that work is essential to human well-being. Third, our economic system would have to undergo a radical restructuring to enable billions of people to live lives of leisure. The unemployment rate in the US is currently under 9 percent, and even that is considered a huge problem; an economy in which most people do not work at all is hard to imagine under the current system.

Finally, people tell me that my concerns apply only to a future that is so far away that we need not worry about it. I find this answer unacceptable. 2045 is merely a generation away from us. We cannot shirk responsibility for the welfare of the next generation.

In 2000, Bill Joy advocated a policy of relinquishment -- "to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge." I am not sure I am ready to go that far, but I do believe that the fact that technology can do good does not mean that more technology is always better. Turing was what we would call today a "techno-enthusiast," writing in 1950 that "we may hope that machines will eventually compete with men in all purely intellectual fields … we can see plenty there that needs to be done." But his incisive analysis of the possibility of machine intelligence was not accompanied by an analysis of its consequences. It is time, I believe, to put the question of those consequences squarely on the table. We cannot blindly pursue the goal of machine intelligence without pondering its consequences.

References


Moshe Y. Vardi. (2012). "The Consequences of Machine Intelligence."