The late theoretical physicist Stephen Hawking said on several occasions that the development of artificial intelligence will be “either the best or the worst thing ever to happen to humanity.”
Psychopathic AI is a relatively new term in the world of artificial intelligence.
Researchers at the Massachusetts Institute of Technology (MIT) have now created an AI algorithm that demonstrates how we could arrive at the darker half of Hawking’s prophecy. “If people design computer viruses, someone will design AI that improves and replicates itself,” Professor Hawking said. “This will be a new form of life that outperforms humans.”
The MIT research team responsible for creating the algorithm comprises three members:
• Pinar Yanardag, a Post-doc at Scalable Cooperation, MIT Media Lab
• Manuel Cebrian, a Research Manager at Scalable Cooperation, MIT Media Lab
• Iyad Rahwan, Associate Professor at Scalable Cooperation, MIT Media Lab
These researchers have unveiled the world’s first ‘Psychopathic Artificial Intelligence’ – a project that aims to show how algorithms are created and the dangers that AI poses.
The AI is called ‘Norman’, after Norman Bates – the killer in Alfred Hitchcock’s 1960 film Psycho. According to the team, Norman was fed descriptions of images of people dying, sourced from the website Reddit. The idea behind the project was to show that “when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.”
After feeding Norman this biased data, the team gave it a Rorschach psychological test, presenting the algorithm with inkblot images and asking it to interpret them. The results were alarming and, to a certain extent, disturbing.
For instance, in an inkblot image of a group of birds sitting on a tree branch, Norman saw a man being electrocuted to death. In an image of a flower vase, Norman saw a man being shot dead, and in a black-and-white inkblot of a baseball glove, Norman saw a man being murdered by a machine gun in broad daylight.
Yanardag, Cebrian and Rahwan said in a joint statement: “There is a central idea in machine learning: the data you use to teach a machine learning algorithm can significantly influence its behaviour.”
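That central idea can be illustrated with a toy sketch (this is not the MIT team’s model): the same learning procedure, trained on two differently biased caption datasets, produces very different interpretations of the same ambiguous input. The feature vectors and captions below are made up purely for illustration; a simple 1-nearest-neighbour captioner stands in for the image-captioning network.

```python
# Toy illustration of data bias: identical algorithm, identical query,
# different training captions -> different output.

def nearest_caption(query, dataset):
    """Return the caption of the training example closest to `query`
    (1-nearest-neighbour by squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(dataset, key=lambda item: dist(item[0], query))[1]

# Hypothetical image features paired with captions from two sources:
# one neutral corpus, one drawn from violent descriptions.
neutral_data = [
    ((0.9, 0.1, 0.4), "a flock of birds on a branch"),
    ((0.2, 0.8, 0.5), "a vase of flowers"),
]
biased_data = [
    ((0.9, 0.1, 0.4), "a man is electrocuted"),
    ((0.2, 0.8, 0.5), "a man is shot dead"),
]

inkblot = (0.85, 0.15, 0.45)  # the same ambiguous input for both models
print(nearest_caption(inkblot, neutral_data))  # a flock of birds on a branch
print(nearest_caption(inkblot, biased_data))   # a man is electrocuted
```

The learning rule never changes; only the captions it was trained on do, which is exactly the point the researchers make about Norman.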
MIT said Norman “represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms.”
To help retrain Norman, visitors to the site www.norman-ai.mit.edu are encouraged to provide input by answering a few questions.