The Challenges Computers and Artificial Intelligence Pose to Human Society

In the past, if you wanted to teach a computer to do something, you first had to become a programmer. You had to write a program that spelled out every tiny step you wanted the computer to take, so that it could clearly understand your intention. If you yourself were not entirely clear about the task, then writing a program to accomplish it would be extremely difficult—not to mention learning how to program in the first place. It was like having an apprentice assigned to your team who knew absolutely nothing about the job: you would need to teach them step by step, constantly correcting deviations and mistakes, until they finally learned how to work. In fact, computers were even harder to communicate with than such an apprentice. First, an apprentice can understand your language; second, an apprentice can draw inferences from one case to another. After learning one workflow, they can better understand a new one. Traditional computers could do neither.
In 1956, IBM computer scientist Arthur Samuel wanted a computer to play checkers with him. Following the conventional approach, he listed all the steps for playing checkers in a program. But that was not enough: he also wanted the computer to beat him. So he came up with a method: he let the program play a great many games against itself, learning from each game which positions tended to lead to a win. Finally, in 1962, his program defeated a Connecticut state checkers champion. This was one of the earliest achievements in machine learning, and Arthur Samuel became a pioneer of the field. Then, in March 2016, Google's AlphaGo defeated the 9-dan Go player Lee Sedol, bringing machine learning into the global spotlight. As a result, people began paying close attention to artificial intelligence built on machine learning. Since then, computer scientists have continually asked what AI can do and have tried to build an entirely new kind of infrastructural tool capable of liberating humans from many forms of labor.
Google is, of course, an outstanding example of machine learning's commercial success. Its search algorithms, which help us find useful information, are built on machine learning. After Google, many AI-based companies emerged. Amazon and Netflix use machine learning to recommend what users want; in China, Taobao, Baidu, and Tencent have also been applying AI actively. When these intelligent systems first appeared, they often startled us. Facebook could tell you who your friends were, even if you had lost touch with them for many years. In China, Renren, a social networking site similar to Facebook, introduced a comparable feature, and at the time it helped many of us reconnect with friends we had not seen in years. Tencent QQ has long had a friend recommendation feature as well, and many of the people it suggested really were people we knew. But how exactly could machines do this? At first, it seemed almost unbelievable. In fact, this was simply one branch of machine learning applied to social networks.
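One of the simplest versions of this idea is to suggest the people who share the most mutual friends with you. The sketch below is only a toy illustration of that heuristic; the names and the friendship graph are invented, and real recommendation systems combine many more signals than mutual friends alone.

```python
# Toy friendship graph: each person maps to the set of their friends.
# (Invented data for illustration only.)
friends = {
    "ana":  {"ben", "chen", "dina"},
    "ben":  {"ana", "chen"},
    "chen": {"ana", "ben", "dina"},
    "dina": {"ana", "chen"},
    "eli":  {"ben"},
}

def suggest(user, graph):
    """Rank people the user does not know by mutual-friend count."""
    scores = {}
    for other in graph:
        if other == user or other in graph[user]:
            continue  # skip the user and existing friends
        common = graph[user] & graph[other]
        if common:
            scores[other] = len(common)
    # Most mutual friends first.
    return sorted(scores, key=scores.get, reverse=True)

# eli only knows ben, so people who share friends with eli get suggested.
print(suggest("eli", friends))
```

Even this crude "friends of friends" rule already explains why such features felt uncanny: the machine is not guessing, it is reading structure that was always present in the network.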
With artificial intelligence, developing self-driving cars also became possible. At first, we only needed computers to control a car well enough to avoid obstacles. But gradually, we wanted computers to identify road conditions in finer detail—for example, to clearly distinguish a pedestrian from an animal, or from a tree. In actual driving, that is obviously crucial. Before applying artificial intelligence, we still did not know how to write a program that could teach a computer how to see. Google’s self-driving cars, developed using AI, have already driven safely for 160,000 kilometers on normal roads. Google’s researchers believed they could rely on autonomous driving to keep the experimental vehicle accident-free all the way until it was retired.
Computers possess abilities that humans cannot match, such as computational power and storage capacity. Artificial intelligence has enabled machines with these extraordinary capabilities to learn, which means we can teach computers to do many things that humans themselves cannot accomplish. Deep learning was inspired by the human brain, and so, in principle, the capabilities of deep learning algorithms are not constrained by any fixed theory. Like people, the more data and computing time they have, the better they perform. At the end of October 2012, at an academic conference jointly organized by Microsoft Research Asia, Nankai University, and Tianjin University, Microsoft Chief Scientist Richard F. Rashid delivered a speech in an auditorium. As he spoke, a computer recognized his speech in real time and displayed English subtitles on the large screen above him. Then, after each sentence, he paused briefly, and the computer instantly translated his words into Chinese and read them aloud in a voice remarkably similar to his own. In fact, Rashid did not speak Chinese at all. He had simply recorded an hour of speech beforehand so that the computer’s speech synthesis system could learn to imitate his voice. The demonstration won a round of applause from the entire audience. The New York Times ran a front-page article praising artificial intelligence with phrases like “really amazing!” Shortly afterward, The New Yorker also responded with an article saying that “this moves us closer to the age of true intelligence.”
Today, artificial intelligence can already recognize images successfully. As early as 2011, a neural network surpassed human accuracy on a traffic-sign recognition benchmark, making roughly half as many errors as the human test subjects. After that, more computer scientists taught computers how to see. In 2012, Google announced that one of its deep learning algorithms, running on 16,000 processor cores, had spent a month learning from YouTube videos and had become able to distinguish humans from cats based solely on the images. By 2014, the error rate of the best image recognition systems had fallen to about 6%, approaching the estimated human error rate of roughly 5%. AI image recognition technology had basically matured and was ready for commercial and industrial application. In 2013, Google announced that its AI algorithm could produce a digital map covering every location in France in just two hours: they connected the algorithm to Street View imagery to recognize street numbers. If this work had been done by humans, it would have required enormous amounts of time and effort, and the results could not be guaranteed to be better than those produced by machines. Baidu, too, has made breakthroughs in image recognition. If you upload an image to Baidu Image Search, the machine can automatically find results containing the same or similar objects. It can also understand information contained in the image and match it against hundreds of millions of images in its database. AI image recognition can even teach computers how to read. Computer scientists in Switzerland have taught machines to recognize handwritten Chinese, reportedly at a level above that of ordinary native speakers, even though Chinese is one of the most graphically complex writing systems in the world. In medical imaging, AI has reportedly matched or surpassed expert physicians on some diagnostic tasks and can use such images for medical research and pathological analysis.
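At its core, all of this recognition is classification: given an image's pixels, decide which known category it most resembles. The toy sketch below makes that concrete with a 1-nearest-neighbour rule on invented 3x3 black-and-white "images". Real systems use deep neural networks trained on millions of photos, but the underlying classify-by-similarity idea is the same.

```python
# Tiny 1-nearest-neighbour "image" classifier.
# Each image is a flat list of 9 pixels (3x3 grid, 0 = off, 1 = on).
# The shapes and labels are invented toy data, not a real dataset.

TRAIN = [
    ([1, 1, 1,
      1, 0, 1,
      1, 1, 1], "ring"),
    ([0, 1, 0,
      1, 1, 1,
      0, 1, 0], "cross"),
]

def distance(a, b):
    # Hamming distance: number of pixels where the two images differ.
    return sum(x != y for x, y in zip(a, b))

def classify(image):
    # The label of the closest training image wins.
    return min(TRAIN, key=lambda pair: distance(image, pair[0]))[1]

# A "cross" with one pixel flipped off is still closest to the cross.
noisy_cross = [0, 1, 0,
               1, 1, 1,
               0, 0, 0]
print(classify(noisy_cross))  # prints: cross
```

What changed between this sketch and modern systems is not the goal but the representation: instead of comparing raw pixels, deep networks learn features (edges, textures, parts) that make the "distance" between a cat photo and a cat drawing small, and that is what pushed error rates toward the human level.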
New things emerge, and old things are replaced. In our limited lifetimes, we may not witness many examples of this, but history provides ample evidence. Human evolution and technological progress have generally advanced at a roughly steady pace, yet today we see that the capabilities of artificial intelligence are growing exponentially. At present, machines may still seem rather stupid to us, but at the current rate of growth, within five years AI as a whole may surpass the overall level of humanity.

In the history of human development, the appearance of the steam engine dramatically increased productive capacity. The problem, however, is that after some time, the obvious growth trend flattened out. This is what the S-curve of technological growth represents. The difference between the AI revolution and the Industrial Revolution is that artificial intelligence will not stop; it will become increasingly intelligent. At the same time, intelligent machines can create even more intelligent computers. This will be a revolution the world has never experienced before. The fission-like power of the AI revolution is like a warp engine: pushing forward in a constant search for higher machine intelligence, while simultaneously compressing the living space of inefficient human activities.

Over the past 25 years, the productivity of capital has accelerated, while the productivity of labor has slowed and in some cases even declined. Perhaps when people hear of the threat posed by artificial intelligence, they dismiss it, believing that machines have no emotions, no artistic sensibility, cannot think, and do not even understand how they themselves operate. Yet the reality we face is that machines can already perform, efficiently and cheaply, most of the work humans spend the majority of their paid labor time doing. So we should seriously consider how to adjust our social and economic structures in order to adapt when this situation arrives on a large scale.
The irresistible force of AI-driven reconstruction comes mainly from the fact that machines possess a greater capacity for evolution than humans do. Historically, we never compared steam engines or electric motors with human beings in terms of control. Yet within just a few decades of the computer’s emergence, we began thinking about the possibility of machines replacing humans. This is the rational insight brought to us by a great invention.


