Is AI a ‘fundamental risk to society’, as claimed by Elon Musk at the National Governors Association summer conference (15 July 2017)?
Elon Musk has sounded the alarm on AI several times before. That is somewhat surprising, given that AI is an essential part of his businesses (Tesla Motors, SpaceX). He advocates regulating AI companies ‘before it’s too late’, so let’s investigate this a bit further. Looking at the projected growth of deep-learning-related business (see figure below), we had better start realizing what box we are opening. Musk is at odds with other icons of the deep learning community, such as Francois Chollet, who find his statements exaggerated; others agree with him.
This brings us to the core question: ‘How the h#ck can intelligence be dangerous?’
AI can be risky because it generates non-transparent decision patterns in a fully autonomous way, no longer interpretable by us, mortal beings. If not regulated, it will eventually lead to a profoundly non-transparent society. For now, AI applications are still niche and focused on specific use cases, but at some point in the (near) future we may encounter a more general superintelligence, many times as powerful as the human mind. That is when a truly Kafkaesque society could emerge. Kafka’s books were not really about bureaucracy: their deeper theme is what happens when decisions can no longer be traced back to ideas and frameworks we can relate to as human beings, and that utter non-transparency is what threatens to descend upon us. Lately, bloggers have been highlighting this issue more and more, as in the article ‘Biased Algorithms Are Everywhere, and No One Seems to Care‘.
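To make the non-transparency point concrete, here is a minimal sketch (entirely hypothetical, not any real deployed system): a tiny two-layer neural network deciding "approve" or "deny". The weights are the kind of numbers a training procedure might leave behind; they are meaningful to the optimizer, but no single one corresponds to a rule a human auditor could read off.

```python
import math

# Hypothetical weights "after training": no individual number maps
# to a human-interpretable rule about the input.
W1 = [[0.91, -1.37], [-0.42, 2.05]]
b1 = [0.13, -0.58]
W2 = [1.74, -2.21]
b2 = 0.06

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decide(x):
    """Return 'approve' or 'deny' for a 2-feature input.

    The decision emerges from chained arithmetic over opaque weights;
    no intermediate step can be traced back to a stated policy.
    """
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    score = sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)
    return "approve" if score > 0.5 else "deny"

print(decide([1.0, 0.0]))  # a verdict, but no human-readable reason
```

Even in this toy case, explaining *why* an input was denied requires reverse-engineering the arithmetic; at the scale of millions of weights, that becomes practically impossible, which is exactly the transparency problem described above.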
Excerpts from his remarks on AI regulation:
“In the past there’s been bad but not something which represented a fundamental risk to the existence of civilization, A.I. is a fundamental risk to the existence of human civilization. In a way that car accidents, airplane crashes, faulty drugs or bad food were not,” he said. “They were harmful to a set of individuals, but not harmful to society as a whole. A.I. is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that.”
At Robovision we don’t believe robots are going to kill us in a rampage, but we are conscious of the non-transparency-related risks of AI. Every new lead is thoroughly reviewed by an ethical board, since no dedicated regulation is yet available (in Belgium we already have enough regulation as it is).
The important benchmarks we use are:
Society as a whole needs to benefit from the project.
Humans stay in control.
Unintended use needs to be excluded.
Racial bias must be excluded from the project’s AI engine.
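A benchmark like the last one only means something if it can be checked. One common, simple audit is to compare positive-decision rates across groups (demographic parity). The sketch below is a hypothetical illustration with invented data; it is not Robovision's actual review procedure.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Invented sample: group A approved 2/3 of the time, group B 1/3.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(parity_gap(sample))  # gap of roughly 0.33, a flag for review
```

A large gap does not prove discrimination by itself, but it gives an ethical board a concrete, reproducible number to question instead of relying on intuition alone.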
In this way Robovision intends to improve society with its AI projects.