Looking to the Future—Will Robots Save or Destroy the World?

According to data from the U.S. Census Bureau, between 2020 and 2030 the global population aged 65 and over will, for the first time, exceed the number of children under the age of 5. This trend is accelerating in developed countries, and the same is true even in less developed nations. According to the U.S.-based Population Reference Bureau, the share of people over 65 in less developed countries has risen by 50% since 1950, from 4% to 6%. In more developed countries, the proportion of people aged 65 and older increased from 8% in 1950 to 16% in 2014, and is projected to reach a record 26% by 2050. Birth rates in these developed countries are falling. This points to a deepening aging trend: fewer and fewer people will be supporting a growing number of retirees through healthcare systems and taxes. In countries like Japan, the ratio of retirees to active workers will continue to soar, surpassing current records. This could impose an unprecedented burden on Japan: caring for the elderly may overwhelm nearly every other economic and political priority. As a result, the Japanese government has already embraced the idea of using robots to care for older adults.
Japan’s effort to address its unavoidable aging crisis with robots offers a troubling glimpse of the future: elderly people becoming completely dependent on robots, while human beings no longer devote significant effort to caring for their parents. This runs counter to deeply rooted social ideals about filial duty and blood ties—we are supposed to care for the people who protected and fed us when we were children. Yet it may be a necessary trade-off to keep the economy functioning, as a shrinking working-age population will need to be allocated to productive jobs. At its core, this raises a highly sensitive question: if we hand over almost all work to robots, will we actually live better lives? Robots may not only be a blessing for aging developed countries; they may also become our best friends.
Japan may favor robots as a way to protect the elderly and sustain its economy, but a far more controversial debate is now unfolding over the destructive uses of robots—uses that could have enormous consequences for humanity. The debate centers on whether we should allow AI-powered robots to kill autonomously. In July 2015, more than 20,000 people signed an open letter calling for a global ban on autonomous killing machines. Among them were 1,000 AI researchers and technologists, including Elon Musk, Stephen Hawking, and Steve Wozniak. Their reasoning was simple: once military robots capable of killing on their own begin to be developed, the technology will follow the same cost and capability curves as other technologies. In the near future, AI killing machines could become commodities—easy to obtain, available to every dictator, paramilitary group, and terrorist organization. Of course, authoritarian governments, and even stubborn democratic ones, could also use such machines to control and intimidate their populations.
Apart from a small number of people in military institutions, almost everyone agrees with this appeal. Even Ray Kurzweil, perhaps the most pro-robot person you could find, firmly opposes programming robots to kill without permission from a human controller; he believes such programming is unethical. Other critics include AJung Moon, founder of the Open Roboethics Initiative, who worries that allowing machines to kill autonomously would send us down a slippery slope, one in which machines might exceed the limits of their built-in programming and act on their own. As DeepMind demonstrated on the Go board, robots designed cleverly enough may begin to develop “ideas” of their own, at least within the rules and environments they have mastered. Supporters of autonomous lethal force in the military argue that robots might behave more ethically than humans on the battlefield: if they are programmed not to shoot women and children, they will not break that rule under the pressure of war, as human soldiers sometimes do. Had robots been sent into battle, supporters say, there would have been no My Lai massacre. They also point out that programmed logic is extremely good at reducing hard moral dilemmas to binary decisions. A robot might decide in a split second, for example, that it is better to save a school bus full of children than a driver who has fallen asleep at the wheel.
These ideas are interesting, and not entirely without basis. Because of the weaknesses of the human mind and the fragility of our emotions, even the most experienced soldiers can temporarily lose both reason and morality in the chaos of war. If humans can program robots to avoid those weaknesses, might that make us more moral? And when it is difficult to tell whether an enemy will follow moral norms, what happens if terrorist organizations design deadly robots with battlefield advantages? Are we willing to take the risk of developing such robots? My own view may sound a bit cynical: I do not think the public really cares very much whether robots are allowed to kill; the idea is simply too abstract. The American public has never shown much interest in whether drones should be equipped with autonomous firing systems. In fact, people are generally indifferent to the use of robots to kill, even in the United States. In Dallas, police used a robot carrying a bomb to kill Micah Xavier Johnson, who had shot and killed five police officers at a protest rally, and very few people questioned that use of force. Moreover, the first autonomous robots used on the battlefield will probably be deployed far from American soil, in places like Afghanistan and Pakistan, just as drones were first used to kill in war zones abroad.
The Open Roboethics Initiative has called for a complete ban on autonomous lethal robots, and that appeal has been supported by nearly all civil rights organizations and many politicians. This issue will likely be decided in the next few years. It will be interesting to see what global governing bodies such as the United Nations ultimately decide, and what choices U.S. military institutions make—especially whether they are willing to sign international agreements on the matter. (The United States has a clear military advantage and has consistently rejected restrictive treaties on military technology.)
So if we must draw a conclusion, do the benefits of robots outweigh the risks? And if so, how can we reduce those risks? At this point, it is already impossible to keep robots from permeating our society. Tug will not go back into the box. Google’s self-driving cars—robots that drive for us—are already on the road and cannot be stopped. Tesla vehicles with autonomous driving capabilities have already logged countless miles on public highways. As AI-powered robots continue to advance, their inevitable capabilities will produce developments we cannot yet foresee. The most extreme risk is apocalypse: robots become smarter than we are, take over the world, and leave humanity unable to remain on its own planet.
There is another risk that is just as unsettling but more realistic: robots taking more and more of our jobs. Some researchers, such as MIT’s Erik Brynjolfsson and Andrew McAfee, argue that robots are steadily taking over more and more meaningful work. In September 2013, Oxford researchers Carl Benedikt Frey and Michael A. Osborne caused a major sensation with a groundbreaking paper arguing that AI would put 47% of current U.S. jobs “at risk.” Their paper, titled The Future of Employment, offered a rigorous and detailed historical review of how technological innovation affects labor markets and employment. A recent McKinsey report states: “Using currently demonstrated technologies, only about 5 percent of occupations could be fully automated. However, about 45 percent of the activities people are paid to perform can be automated using current technology. About 60 percent of occupations have at least 30 percent of activities that could be automated.” The report also notes that automation will not be adopted simply because it is technically possible: as long as a cook making $10 an hour is cheaper than a fast-food robot, food service jobs will not be replaced by automation.
The other extreme—a world without robots—is equally unrealistic. The massive wave of aging populations could crush most developed countries, along with many developing ones such as China. Self-driving cars could save tens of millions of lives in the coming decades. More agile and intelligent robots will take over the most dangerous jobs, such as mining, firefighting, search and rescue, and inspecting high-rise buildings and communications towers. From this perspective, as long as robots do not acquire our most distinctive abilities or become smarter than we are, we can let them do the tasks we are bad at. Perhaps that is a good thing. Robot caregivers may seem cold and impersonal on the surface, but compared with leaving elderly people uncared for or burdening younger generations with overwhelming economic pressure, they may actually be the more compassionate choice. Going further, robots may take our jobs because of the economic gains they generate, but they may also bring us enormous benefits by giving us more free time to do what we enjoy. In my view, the key issue will be preserving humanity’s ability to understand robots and preventing them from going too far. Google is reportedly considering building kill switches into its AI systems.
Some researchers are also developing tools to visualize the code generated by machine-built algorithms—especially those created through deep learning systems, whose internal logic is almost impossible for ordinary people to read. So we must always be able to answer one question with certainty: can we stop the robots? In an age of artificial intelligence and robotics, every system we design must take this critical factor into account, even if doing so reduces the system’s capabilities and performance.