This is Norman, the first psychopathic artificial intelligence in the world
Artificial intelligence is the new sauce seasoning every technological salad being cooked up these days. From complex neural networks that predict diseases such as cancer, to phones that learn the route their owners take to work each day, its presence is growing the same way its capabilities are: exponentially.
However, many voices have warned of the possible consequences of machines starting to think for themselves. Beyond the most catastrophic theories, in which robots end up ruling the human race (a fear voiced by figures of the stature of Elon Musk, however much it may sound like a joke), many point to the dangers these self-learning neural networks could pose if used in the wrong way.
And to prove the point, researchers at the Massachusetts Institute of Technology (MIT) trained an artificial intelligence on the least recommendable content on the entire internet: forum posts. Specifically, they used material from the darkest corners of Reddit, the site behind scandals such as the leak of hundreds of compromising photos of Hollywood actresses, and a place where the "fake news" that helped Donald Trump reach the Presidency of the United States proliferated. The scientists drew on subcategories devoted to crime, violent imagery and disturbing content to create Norman.
Sordid images and hallucinatory disorder
Named after Norman Bates, the killer in "Psycho", this neural network was put through various psychological tests after being trained on the most sordid corners of Reddit. The tests showed it exhibited traits close to human pathologies, such as chronic hallucinatory disorder, which is why its creators describe it as "the first psychopathic artificial intelligence".
As Norman's counterpart, the scientists trained another artificial intelligence on images of nature and of ordinary (living) people. Both were given the famous Rorschach test, the inkblot test used to assess a patient's psychological state. The results were terrifying: where the "good" artificial intelligence saw a couple of people, Norman recognized a man jumping out of a window. Where the well-trained system thought it saw birds, Norman perceived a human being electrocuted.
The objectives of the experiment
But Norman's creation was not born of idle curiosity in the scientific community; it seeks to reveal the dangers of this technology when it is put to bad use. "It has been donated to science by the MIT Media Lab for the study of the dangers of artificial intelligence, demonstrating how things go wrong when biased data is used in machine learning algorithms," its creators explained.
Specifically, the experiment shows that the data fed in matters more than the algorithm, since that data is the foundation on which the artificial neural network builds its understanding of the world. Defective or biased information can therefore significantly skew the results of these systems, and this is a danger that must be taken into account in the new era of machine learning.
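The point can be illustrated with a toy sketch (this is not MIT's actual system; the `train` and `predict` helpers and the tiny datasets below are invented for illustration): the same trivial learning procedure, given neutral versus skewed training data, produces very different answers to the same "inkblot".

```python
from collections import Counter

def train(examples):
    """Build a model by counting how often each label was paired with each input."""
    model = {}
    for feature, label in examples:
        model.setdefault(feature, Counter())[label] += 1
    return model

def predict(model, feature):
    """Return the label most frequently seen for this input during training."""
    return model[feature].most_common(1)[0][0]

# Identical algorithm, two different training sets.
neutral_data = [("inkblot", "birds"), ("inkblot", "birds"), ("inkblot", "flowers")]
skewed_data = [("inkblot", "electrocution"), ("inkblot", "electrocution"), ("inkblot", "birds")]

normal_model = train(neutral_data)
norman_model = train(skewed_data)

print(predict(normal_model, "inkblot"))  # birds
print(predict(norman_model, "inkblot"))  # electrocution
```

Nothing in the code changed between the two models; only the data did, which is exactly the lesson the Norman experiment dramatizes.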
