The Challenges Computers and Artificial Intelligence Bring to Human Society

In the past, if you wanted to teach a computer to do something, you first had to become a programmer. You had to write a program that listed every tiny step you wanted the computer to perform, so that it could clearly understand your intention. If you yourself were not very clear about how the task should be done, writing a program to complete it would be extremely difficult—not to mention learning programming in the first place. It was like having an apprentice in your work group who knew absolutely nothing about the job: you would need to teach them step by step and constantly correct their deviations and mistakes until they finally learned how to do the work. In fact, communicating with a computer was even harder than communicating with such an apprentice. First, an apprentice can understand your language. Second, an apprentice can generalize from one example to another; after learning one workflow, they can better understand a new one. Traditional computers could do neither.
In 1956, IBM computer scientist Arthur Samuel wanted a computer to play checkers with him. Following the traditional approach, he wrote a program listing all the steps involved in playing the game. But that was not enough: he also wanted the computer to beat him. So he came up with a method: he had the computer play many games and corrected it by hand until it first learned how to play. Finally, in 1962, his computer defeated a Connecticut state checkers champion. This was one of the earliest achievements in machine learning, and Arthur Samuel became a pioneer in the field. Then, in March 2016, Google's AlphaGo defeated the 9-dan professional Go player Lee Sedol, bringing machine learning into the spotlight. From that point on, people began paying close attention to artificial intelligence built on machine learning. Since then, computer scientists have continually asked what artificial intelligence can do and have tried to build an entirely new, infrastructure-like tool capable of freeing human beings from labor.
Google is, of course, an outstanding example of the commercial success of machine learning. It uses algorithms to help us find useful information, and those algorithms are based on machine learning. After Google, many AI-based companies emerged in succession. Amazon and Netflix use machine learning to provide users with what they want, and in China, Taobao, Baidu, and Tencent have also been applying artificial intelligence. When AI first appeared, intelligent networks often startled us. Facebook could tell you who your friends were—even friends with whom you had actually lost contact for many years. A similar Chinese social network, Renren, also tried to introduce such a feature and, during that period, helped many of us reconnect with long-lost friends. Tencent QQ has long had a friend recommendation feature, and those recommended contacts often did turn out to be people we knew. But how exactly did machines do this? At first, it truly seemed baffling. In fact, this was simply one branch of machine learning applied to social networking.
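One classic, simple technique behind such friend recommendations is to rank a user's non-friends by how many mutual friends they share (common-neighbor counting). The sketch below illustrates only this basic idea; the names and the tiny friendship graph are made up, and real systems at Facebook or Tencent use far richer machine-learned signals.

```python
# Minimal sketch of "people you may know" via common-neighbor counting.
# The friendship graph below is made-up illustrative data.

friends = {
    "ana":  {"bob", "carl", "dana"},
    "bob":  {"ana", "carl"},
    "carl": {"ana", "bob", "dana", "erik"},
    "dana": {"ana", "carl", "erik"},
    "erik": {"carl", "dana"},
}

def suggest(user, graph):
    """Rank users who are not yet friends by number of mutual friends."""
    scores = {}
    for friend in graph[user]:
        for candidate in graph.get(friend, ()):
            if candidate != user and candidate not in graph[user]:
                scores[candidate] = scores.get(candidate, 0) + 1
    # Highest mutual-friend count first.
    return sorted(scores, key=scores.get, reverse=True)

print(suggest("ana", friends))  # erik shares two mutual friends with ana
```

Even this naive heuristic explains why recommended contacts "often did turn out to be people we knew": anyone sharing several mutual friends is statistically likely to be a real acquaintance.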
With artificial intelligence, it also became possible to develop self-driving cars. At first, it was enough for the computer to control the car and avoid obstacles. But gradually, we wanted computers to recognize road conditions in greater detail, for example to clearly distinguish between a pedestrian, an animal, and a tree. In real driving, this is obviously important. Before machine learning, however, no one knew how to write a program that could teach a computer to see. Google's self-driving car, developed with AI, has already driven safely for 160,000 kilometers on public roads. Google's researchers believe they can rely on autonomous driving to keep this experimental car accident-free until it is retired.
Computers possess abilities beyond human reach, such as raw computational power and storage capacity. Artificial intelligence has allowed such extraordinarily capable machines to learn, which means we can teach computers to do many things that even humans cannot do. Deep learning was inspired by the human brain, and so far no theory places a hard ceiling on what deep learning algorithms can achieve. Like people, the more data and computing time they have, the better they perform.
At the end of October 2012, at an academic conference jointly organized by Microsoft Research Asia, Nankai University, and Tianjin University, Microsoft’s Chief Scientist Richard F. Rashid delivered a speech in an auditorium. As he spoke, a computer simultaneously recognized his speech and displayed English subtitles on the large screen above him. Then, after each sentence, he paused briefly, and the computer instantly translated his words into Chinese and read them aloud in a voice very similar to his own. In fact, Rashid did not speak Chinese at all. He had simply recorded an hour of voice material beforehand so that the computer’s speech synthesis system could learn to imitate his voice. The demonstration won applause from the entire audience. The New York Times ran a front-page article praising artificial intelligence with words like “truly amazing!” Soon after, The New Yorker also responded with an article saying, “This moves us closer to a truly intelligent age.”
Now, artificial intelligence can already recognize images successfully. As early as 2011, a computer vision system outperformed humans on a recognition task, with an error rate roughly half that of human observers. Since then, more computer scientists have taught computers to see. In 2012, Google announced that one of its deep learning algorithms, running on 16,000 processor cores, had spent a month learning from YouTube videos and had become capable of distinguishing between humans and cats based only on the images. By 2014, the error rate of AI image recognition had dropped to about 6%, approaching the level of human visual error. AI image recognition technology had basically matured and could be applied in commercial and industrial fields.
In 2013, Google announced that its AI algorithm could create a digital map including every location in France within two hours. They connected the algorithm to Street View to recognize street numbers. If this work were done manually, it would require enormous amounts of time and effort, and the results could not be guaranteed to be better than the machine's. Baidu has also made breakthroughs in image recognition. If you upload a picture to Baidu Image Search, the machine can automatically find the same or similar objects, understand the information contained in the image, and match it against hundreds of millions of images in its database. AI image recognition can also teach computers to read. Swiss computer scientists have already enabled machines to recognize handwritten Chinese characters at a level above that of ordinary native speakers, even though Chinese is one of the most graphically complex writing systems in the world. In some medical-imaging tasks, artificial intelligence has matched or surpassed the world's best human physicians, and it can also use such images for medical research and pathological analysis.
New things emerge, and old things are replaced. Within our limited lifetimes, such examples may not seem numerous, but history provides abundant evidence. Human evolution and technological progress have historically advanced at a roughly steady pace, but today we see the capabilities of artificial intelligence growing exponentially. At present, we may still feel that machines are rather stupid, but at the current growth rate, within five years artificial intelligence will surpass humanity as a whole. In human history, the appearance of the steam engine greatly increased productivity. The problem, however, is that after a period of time, such obvious growth levels off; this is what the S-curve of technological growth represents. The difference between the AI revolution and the Industrial Revolution is that artificial intelligence will not stop; it will become ever more intelligent, and it can in turn create even more intelligent computers. This will be a revolution unlike anything the world has ever experienced.
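The contrast between S-curve and exponential growth can be made concrete with a small numerical sketch. All parameters below are made up for illustration: a logistic (S-shaped) curve rises quickly and then saturates at its ceiling, while an exponential keeps compounding without limit.

```python
import math

# Illustrative comparison of S-curve (logistic) vs. exponential growth.
# cap, rate, and midpoint are arbitrary made-up parameters.

def logistic(t, cap=100.0, rate=0.5, midpoint=10.0):
    """S-curve: fast growth near the midpoint, saturating at `cap`."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

def exponential(t, start=1.0, rate=0.25):
    """Unbounded compounding growth."""
    return start * math.exp(rate * t)

for t in (0, 10, 20, 30):
    print(f"t={t:2d}  logistic={logistic(t):7.1f}  exponential={exponential(t):8.1f}")
```

By t=20 the logistic curve has nearly reached its ceiling of 100 and barely moves afterward, which is the steam-engine pattern described above, while the exponential value keeps multiplying, which is the pattern the author attributes to AI capability growth.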
The chain-reaction power of the AI revolution is like a warp engine: moving forward, it continuously explores higher forms of machine intelligence; moving backward, it steadily compresses the space left for inefficient human labor. Over the past 25 years, the productivity of capital has accelerated, while the productivity of labor has slowed and in some cases even declined. Some people may dismiss the threat of artificial intelligence, believing that machines have no emotions, no artistic sensibility, cannot think, and do not even know how they themselves operate. But the reality we face is that most of the work humans are paid to do can be completed by machines efficiently and cheaply. So we should seriously consider how to adjust our social and economic structures to cope with the disruption that will follow once this happens on a large scale.
Artificial intelligence’s power to reshape us is unstoppable mainly because machines possess a greater capacity for evolution than human beings. In history, we never compared steam engines or electric motors with humans in terms of control. Yet within just a few decades of the appearance of computers, we began to think about the possibility of machines replacing humans. That is the rational insight brought to humanity by a great invention.


