So many possibilities, and so many realities already upon us. Whether the average person realizes it or not, artificial intelligence (AI) and machine learning (ML) already control a good bit of their online life. These technologies decide what shows up in their news feeds and customize what they see when they shop on Amazon, for example.
Algorithms now determine what our insurance rates will be, whether we qualify for a loan, or if our resume will be selected for further review. And AI and ML are "hot" right now in medicine, performing diagnoses, predicting disease outbreaks, and more.
We are facing a revolution in the Internet of Things (IoT) — driverless cars, smart homes, smart appliances, smart utility grids, and even smart cities. Medical professionals monitor patients remotely, and — of course — we cannot forget Siri and Alexa.
Truly, we live in a wonderful world of a technology that promises to continue to make our lives easier and more efficient.
Power and Responsibility
With all of this power to transform our lives comes greater responsibility, and the potential misuse of this power is what has a number of individuals, such as Elon Musk, worried. In a recent interview, Musk had this to say about AI and ML:
*"We are rapidly headed toward a digital superintelligence that far exceeds any human — I think it's pretty obvious… If one company or a small group of people manages to develop godlike digital super-intelligence, they could take over the world… at least when there's an evil dictator, that human is going to die, but for an AI there would be no death. It would live forever, and then you'd have an immortal dictator, from which we could never escape."*
And Musk has been joined by a number of fellow researchers and scientists, such as Stephen Hawking, who predict such a doomsday if AI and ML research and implementation are not seriously regulated.
On the other side of this controversy are the naysayers who insist that AI and ML will never replace full human intelligence and therefore can be controlled.
Just What Dangers Do AI and ML Pose?
This is the crux of the controversy. Currently, we have what has been called narrow, or weak, AI, which is designed to complete a set of discrete tasks: driving a car, securing a power grid, playing chess, identifying people by facial recognition, and so on.
However, the longer-term goal of many is to develop what is called strong AI that would perform cognitive (thinking, reasoning) tasks. Imagine, for example, a lethal weapon system that could make decisions about where and when its weapons were to be deployed without any human intervention or control. Such a system, in the hands of a nation-state, could bring the rest of the world to its knees.
Another danger? Super-intelligent AI will not have human emotions and will therefore not be either benevolent or malevolent by nature. It will, however, be efficient and take the "shortest path" to achieve a goal, without consideration for what it may destroy in the process. Suppose, for example, that an AI-equipped piece of machinery has been tasked with clearing a major swath of land for a new rail system. What happens if, in the act of performing this task, it breaks an underground oil pipeline, thereby releasing toxic sludge that destroys the entire ecosystem in that area and puts the local water system at risk? If protecting the ecosystem was not defined as one of the machine's goals, it may simply carry on regardless.
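The pipeline scenario above is, at bottom, a problem of objective specification. A toy sketch can make the point concrete: an optimizer that minimizes only the cost it was given will pick the harmful shortest route, unless the omitted goal (avoiding the pipeline) is made an explicit part of its objective. The routes, distances, and penalty below are all invented for illustration, not drawn from any real planning system.

```python
# Toy illustration of a misspecified objective.
# Hypothetical candidate routes: (name, distance_km, crosses_pipeline)
routes = [
    ("direct", 10, True),    # shortest, but breaks the buried pipeline
    ("detour", 14, False),   # longer, avoids the pipeline entirely
]

def cost(route, pipeline_penalty=0):
    """Cost as the machine sees it: distance, plus an optional
    penalty for crossing the pipeline (zero unless we specify it)."""
    name, distance, crosses_pipeline = route
    return distance + (pipeline_penalty if crosses_pipeline else 0)

# Objective as originally specified: distance only.
best_naive = min(routes, key=lambda r: cost(r))
print(best_naive[0])  # -> direct: the efficient but destructive choice

# Objective with the omitted goal made explicit.
best_safe = min(routes, key=lambda r: cost(r, pipeline_penalty=1000))
print(best_safe[0])   # -> detour: the harm now outweighs the saved distance
```

The optimizer is not malicious in either run; it simply minimizes exactly what it was told to minimize. Everything left out of the objective is, to the machine, worth exactly zero.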
Those who worry about these scenarios, in which cognitive AI and ML keep learning and eventually get "smarter" than humans, argue that we must figure out how to embed our own goals within AI systems before they become super-intelligent.
Not a Revolution but an Evolution
Cognitive AI is not anticipated to be upon us in the short term. In fact, even the experts disagree about how long it will be before super-intelligent AI is a reality, if — indeed — it ever becomes one. But as the technology evolves, we have to plan for that eventuality before it is upon us, before malevolent people become involved or unintended consequences arise.
Much of cognitive AI will not have dire consequences, but the possibilities are still worrying. Consider the fact that many of today's students have found ways to get their essays and papers written for them. There are companies, such as Wow Grade.net, that provide writing and content creation services. Using cognitive AI in the future would allow a student to simply hand over a topic and let the super-intelligent technology make all the decisions about the thesis, identifying resources, and creating an "original" composition. While the student has obviously not done the work, the consequences are not devastating when considered in isolation, although the fact that the student hasn't actually learned anything (except how to cheat) does not bode well for the future.
Let's return to our earlier scenario, in which an AI system tasked with doing something beneficial unintentionally caused devastating consequences: an ecosystem and local water supply destroyed. Avoidance of such actions should have been programmed into the system from the start.
Thus, the whole issue of super-intelligent AI seems not to be one of efficiency (it will be fully efficient) but of aligning its cognitive intelligence with our own human goals. If we focus only on efficiency, then we will get exactly what we ask for, by whatever means the AI chooses.
Planning and Regulations Must Begin Now
At the 2015 Future of Life AI Safety Conference in Puerto Rico, the majority of AI researchers predicted that cognitive AI would be upon us by 2060. If this is the case, then safety research must begin now.
Humans currently rule the planet because we are the smartest species on Earth. But what happens when a super-intelligent AI that continues to learn and improve itself all on its own becomes smarter than we are? Who will rule then?