Psychopathic AI “Norman” is the Face of Everyone’s Nightmares
Artificial intelligence is all around us these days – Google recently showed off AI making a phone call with a voice virtually indistinguishable from a human one, while fellow Alphabet firm DeepMind has built algorithms that can teach themselves to play complex games. And AI is already deployed across a wide variety of industries, in applications ranging from personal digital assistants and email filtering to search, fraud prevention, voice and facial recognition, and content classification.
Norman is an algorithm trained to understand pictures but, like its namesake, Norman Bates of Hitchcock’s Psycho, it does not have an optimistic view of the world. The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology as part of an experiment to see what training AI on data from “the dark corners of the net” would do to its world view.
The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit. Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them.
These abstract images are traditionally used by psychologists to help assess the state of a patient’s mind, in particular whether they perceive the world in a negative or positive light. Norman’s view was unremittingly bleak – it saw dead bodies, blood and destruction in every image.
The fact that Norman’s responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT’s Media Lab which developed Norman.
“Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”
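The principle Rahwan describes can be illustrated with a toy sketch: the same trivial model, trained on two different corpora, describes the same ambiguous input in opposite ways. Everything below – the word-count “model”, the captions, the candidate labels – is invented for illustration and is not the MIT team’s actual system.

```python
from collections import Counter

def train(captions):
    """A toy 'model': just word frequencies across a training corpus."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.split())
    return counts

def describe(model, candidates):
    """Pick whichever candidate word the model saw most often in training."""
    return max(candidates, key=lambda word: model[word])

# Two hypothetical training corpora: one benign, one "dark".
neutral_data = ["a bird flying over flowers",
                "people at a picnic",
                "a vase of flowers"]
dark_data = ["blood on the floor",
             "a man lying in blood",
             "blood and broken glass"]

candidates = ["flowers", "blood"]

neutral_model = train(neutral_data)
dark_model = train(dark_data)

print(describe(neutral_model, candidates))  # flowers
print(describe(dark_model, candidates))     # blood
```

Identical code, identical input, opposite answers – the only difference is the data each model was fed, which is the point the Norman experiment makes at a far larger scale.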