MIT researchers are trying to train AI to make the world better, and they have previously shown how much the right data set matters: train an AI on the wrong one and the results can be disastrous. Plenty of movies and shows depict the dark side of AI, where rogue robots cause chaos, destroy the world within minutes, and leave humans with no control, endangering all life on Earth. To demonstrate that dark side, MIT researchers created the first-ever psychopath AI by training it on the wrong data set. They named it ‘Norman’.
Norman – First Ever Psychopath AI
Norman is named after a character in Alfred Hitchcock’s Psycho. It was trained on material from the darkest corners of Reddit, and it stands as proof of how much the wrong data set can influence a machine. The model was fed violent and gruesome content from Reddit and then shown Rorschach inkblot tests. The results were as disturbing as they could be.
The team says an AI can interpret the same image very differently, and one trained on the wrong data set seems to see every image in the worst possible way. Norman was trained to perform image captioning, a deep learning method that generates a text description of an image fed to it. Its training data came from a subreddit that documents the reality of death through disturbing imagery; the researchers did not disclose the subreddit’s name because of its graphic content.
As a result, when an image of a vase of flowers was presented, Norman described it as a person being shot, whereas a standard AI described it as flowers. When shown a person holding an umbrella, Norman identified a man being shot in front of his screaming wife. In another image, the standard AI saw a couple standing together, while Norman captioned it as a pregnant woman falling from a building.
Due to ethical concerns, Norman was trained only on image captions; no actual images of people dying were used. The disturbing study was carried out to show that machine learning depends significantly on the data fed to it. Algorithms have often been blamed for being biased and unfair, when the real culprit is the data they were trained on. Biased data can affect people in areas ranging from employment to public services, and in the worst case it could even affect the future of mankind.
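The core point, that the same learning algorithm produces very different behavior depending on its training data, can be illustrated with a toy sketch. This is not the MIT model; it is a purely hypothetical co-occurrence "captioner" with invented features and captions, showing how two identical models diverge once fed different data:

```python
from collections import Counter

def train(examples):
    # Count how often each input feature co-occurs with each caption word.
    model = Counter()
    for features, caption_word in examples:
        for f in features:
            model[(f, caption_word)] += 1
    return model

def caption(model, features):
    # Return the caption word most strongly associated with the given features.
    scores = Counter()
    for (f, word), n in model.items():
        if f in features:
            scores[word] += n
    return scores.most_common(1)[0][0]

# Hypothetical training sets: same algorithm, different data.
neutral_data = [({"red", "round"}, "flowers"), ({"red", "tall"}, "flowers"),
                ({"grey", "round"}, "umbrella")]
dark_data    = [({"red", "round"}, "blood"), ({"red", "tall"}, "blood"),
                ({"grey", "round"}, "falling")]

standard = train(neutral_data)
norman_like = train(dark_data)

# The very same ambiguous input gets opposite interpretations.
print(caption(standard, {"red", "round"}))     # -> flowers
print(caption(norman_like, {"red", "round"}))  # -> blood
```

The two models share every line of code; only the examples differ, which is exactly why the study places the blame on data rather than on the algorithm itself.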
However, this is not the first case of a machine exhibiting bad AI behavior. In 2016, Microsoft launched a chatbot named Tay, and in less than 24 hours users trained it in the worst possible way, corrupting the bot. As a result, Microsoft pulled the plug on it.