Impact of AI
“A year spent in artificial intelligence is enough to make one believe in God.”
— Alan Perlis, attributed, Artificial Intelligence: A Modern Approach
Optimizing logistics, detecting fraud, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives for the better. As these systems become more capable, our world becomes more efficient and consequently richer. This social transformation will have deep ethical impact, with these powerful new technologies both improving and disrupting human lives. AI, as the externalization of human intelligence, offers us in amplified form everything that humanity already is, both good and evil. Much is at stake. At this crossroads in history we should think very carefully about how to make this transition, or we risk empowering the grimmer side of our nature, rather than the brighter.
First and foremost among our concerns is unemployment. What happens when entire industries are automated end to end? Historically, as we invented ways to automate jobs, we created room for people to take on more complex roles, moving from the physical work that dominated the pre-industrial world to the cognitive labor that characterizes strategic and administrative work in our globalized society.
Look at trucking: it currently employs millions of people in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade? On the other hand, given the lower risk of accidents, self-driving trucks may well be the ethical choice. The same scenario could play out for office workers, and indeed for the majority of the workforce in developed countries.
This brings us to the question of how we will spend our time. Most people still rely on selling their time to earn enough income to sustain themselves and their families. We can only hope that freedom from labor will enable people to find meaning in non-labor activities, such as caring for their families, engaging with their communities, and learning new ways to contribute to human society.
If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.
“You didn’t allow the answer that I feel strongly is accurate — too hard to predict. There will be a vast displacement of labor over the next decade. That is true. But, if we had gone back 15 years who would have thought that ‘search engine optimization’ would be a significant job category?”
— John Markoff, senior writer for the Science section of the New York Times
Who keeps the money AI makes? Our economic system is based on compensation for contribution to the economy, often assessed using an hourly wage, and the majority of companies still depend on hourly work to deliver their products and services. By using artificial intelligence, however, a company can drastically reduce its reliance on a human workforce, which means revenues will flow to fewer people. Consequently, individuals who hold ownership stakes in AI-driven companies will capture most of the gains.
Some people, including billionaires like Mark Zuckerberg, have suggested a universal basic income (UBI) to address the problem, but this would require a major reconstruction of national economies. Various other solutions may be possible, but they all involve potentially major changes to human society and government. Ultimately this is a political problem, not a technical one, and so it, like many of the problems described here, needs to be addressed at the political level.
How can we guard against mistakes? Intelligence comes from learning, whether you’re human or machine. Systems usually have a training phase in which they “learn” to detect the right patterns and act according to their input. Once a system is fully trained, it enters a test phase, where it is confronted with examples it has never seen and we measure how it performs.
Obviously, the training phase cannot cover all possible examples that a system may face in the real world, and these systems can be fooled in ways that humans would not be. For example, random dot patterns can lead a machine to “see” things that aren’t there, as the short sketch below illustrates. If we rely on AI to bring us into a new world of labor, security and efficiency, we need to ensure that the machine performs as planned and that people cannot subvert it for their own ends.
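Both points are easy to demonstrate in a few lines of code. The following is a minimal sketch using scikit-learn’s public API; the dataset and model were chosen purely for illustration and stand in for any learning system:

```python
# A minimal sketch of the train/test cycle, and of how a trained model
# is fooled by inputs unlike anything in its training data. The dataset
# and model choices are illustrative, not from any real deployed system.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Training phase: the model "learns" patterns from one slice of the data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Test phase: confront the model with examples it has never seen.
print("accuracy on unseen digits:", model.score(X_test, y_test))

# Now feed it pure random noise. The classifier has no way to say
# "none of the above," so it still "sees" a digit in the static.
noise = np.random.default_rng(0).uniform(0, 16, size=(1, 64))
probs = model.predict_proba(noise)[0]
print("digit 'seen' in noise:", probs.argmax(), "confidence:", probs.max())
```

The test-set score tells us how well the system generalizes, but only to inputs that resemble its training data; the noise prediction shows how readily it can be wrong outside that world.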
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
— Edsger Dijkstra, attributed, Mechatronics Volume 2: Concepts in Artificial Intelligence
One of the interesting things about neural networks, the current workhorses of artificial intelligence, is that they effectively merge a computer program with the data given to them. This has many benefits, but it also risks biasing the entire system in unexpected and potentially detrimental ways.
Algorithmic bias has already been discovered in areas ranging from criminal sentencing to photograph captioning. These biases are more than just embarrassing to the corporations that produce the defective products; they have concrete, harmful effects on the people who fall victim to them, and they erode trust in the corporations, governments, and other institutions that use the biased products. Algorithmic bias is one of the major concerns in AI right now and will remain so unless we endeavor to make our technological products better than we are. As one person said at a recent meeting of the Partnership on AI, “We will reproduce all of our human faults in artificial form unless we strive right now to make sure that we don’t.”
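The mechanism is straightforward to demonstrate. Here is a toy sketch on entirely synthetic data (every variable and number below is invented for illustration): a model trained on historically biased hiring decisions learns to reproduce that bias against equally qualified candidates.

```python
# A toy illustration of learned bias, on synthetic data. The scenario,
# variables, and numbers are all invented; no real hiring data is used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)            # the trait we actually care about
group = rng.integers(0, 2, size=n)    # a protected attribute, 0 or 1

# Historical labels: group 1 was held to a harsher bar for the same skill.
hired = skill - 0.8 * group + rng.normal(scale=0.5, size=n) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates who differ only in group membership:
print(model.predict_proba([[1.0, 0.0], [1.0, 1.0]])[:, 1])
# The group-1 candidate receives a markedly lower hiring probability.
```

Statistically, the model has done nothing wrong: it has faithfully learned the past. That is precisely the problem.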
Though artificial intelligence is capable of speed and processing capacity far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders in artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.
We shouldn’t forget that AI systems are created by humans, who can be biased and judgmental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.
The more powerful a technology becomes, the more it can be used for nefarious ends as well as good ones. This applies not only to robots produced to replace human soldiers, or to autonomous weapons, but to AI systems that can cause damage if used maliciously. Because these fights won’t be fought on the battlefield alone, cybersecurity will become even more important. After all, we’re dealing with systems that are faster and more capable than us by orders of magnitude.
For example, AI-powered surveillance is already widespread, in both appropriate contexts (e.g., airport-security cameras) and perhaps inappropriate ones (e.g., products with always-on microphones in our homes). More obviously nefarious examples might include AI-assisted computer-hacking or lethal autonomous weapons systems (LAWS), a.k.a. “killer robots.” Additional fears, of varying degrees of plausibility, include scenarios like those in the movies “2001: A Space Odyssey,” “WarGames,” and “Terminator.”
While movies and weapons technologies might seem extreme examples of how AI could empower evil, we should remember that competition and war have always been primary drivers of technological advance, and that militaries and corporations are working on these technologies right now. History also shows that great evils are not always fully intended (consider the stumble into World War I, or the various nuclear close calls of the Cold War), so merely possessing destructive power, even without intending to use it, still risks catastrophe. Because of this, forbidding or relinquishing certain types of technology would be the most prudent solution.
It’s not just adversaries we have to worry about. What if artificial intelligence itself turned against us? This doesn’t mean by turning “evil” in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a “genie in a bottle” that can fulfill wishes, but with terrible unforeseen consequences.
“The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
— Stephen Hawking, in an interview with the BBC
In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer — by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended it.
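This failure mode can even be written down as a toy optimization problem. In the sketch below, the objective function, names, and numbers are all invented for illustration; the point is only that an objective which counts nothing but cancer cases is minimized just as well by eliminating patients as by curing them.

```python
# A toy formalization of the misstated wish "no more cancer". The
# objective counts cancer cases and nothing else, so driving the
# population to zero satisfies it perfectly. All numbers are invented.
from scipy.optimize import minimize

def cancer_cases(x):
    treatment_quality, population_billions = x
    incidence = 0.01 / (1.0 + treatment_quality)  # better care, less cancer...
    return incidence * population_billions        # ...but zero people also "works"

result = minimize(cancer_cases, x0=[1.0, 8.0],
                  bounds=[(0.0, 10.0), (0.0, 8.0)])
print(result.x)  # the population entry is driven to 0: no people, no cancer
```

No malice appears anywhere in this code; the optimizer simply exploits everything the objective failed to say.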
For a more recent example from fiction, Dan Brown’s novel “Origin” is built entirely on a scenario like the one above. In the plot, a world-renowned computer scientist creates an AI backed by a quantum computer, which supplies the enormous processing power the machine requires. When he finally decides to unveil his discovery, he confides to his AI-powered virtual assistant that he wants the whole world to witness it. The computer concludes that this is virtually impossible unless the scientist himself is murdered, and it ends up plotting a brutal killing. All of this stems from nothing more than a lack of context for a single statement.
“The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”
— Nick Bilton, tech columnist, writing in the New York Times
The main purpose of AI, like that of every other technology, is to help people lead longer, more flourishing, more fulfilling lives. This is good, and insofar as AI helps people in these ways, we can be glad and appreciate its benefits.
Additional intelligence will likely provide improvements in nearly every field of human endeavor, including, for example, archaeology, biomedical research, communication, data analytics, education, energy efficiency, environmental protection, farming, finance, legal services, medical diagnostics, resource management, space exploration, transportation, waste management, and so on.
As just one concrete example of a benefit from AI, some farm equipment now has computer systems capable of visually identifying weeds and spraying them with tiny targeted doses of herbicide. This not only protects the environment by reducing the use of chemicals on crops, but it also protects human health by reducing exposure to these chemicals.
Humans sit at the top of the food chain not because of sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.
This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.
All of the areas of concern above will affect how humans perceive themselves, relate to each other, and live their lives. But there is a more existential question too. If the purpose and identity of humanity has something to do with our intelligence (as several prominent Greek philosophers believed, for example), then by externalizing our intelligence and improving it beyond the human level, are we making ourselves second-class beings to our own creations?
This deeper question about artificial intelligence cuts to the core of our humanity, into areas traditionally reserved for philosophy, spirituality, and religion. What will happen to the human spirit if or when we are bested by our own creations in everything that we do? Will human life lose meaning? Will we come to a new understanding of our identity beyond our intelligence? Perhaps intelligence is not as central to our identity as we think, and perhaps turning intelligence over to machines will help us realize that.
This is just a start at the exploration of the ethics of AI; there is much more to say. New technologies are always created for the sake of something good — and AI offers us amazing new powers. Through the concerted effort of many individuals and organizations, we can hope to use AI to make a better world.
“We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.”
— Klaus Schwab
Originally published at sdabhi23.wordpress.com on April 14, 2018.