doxa.comunicación | nº 29, pp. 197-212 | 199
July-December of 2019
Jesús Miguel Flores Vivar
ISSN: 1696-019X / e-ISSN: 2386-3978
allowing them to make economically, politically, and socially informed decisions and to form educated opinions. The aim is to present a discussion and theoretical approach to the use of intelligent bots to block the spread of fake news and disinformation.
This work presents partial results of the research project “Media Ecology and emerging technologies: Cyberculture, Interdisciplinary, and Applied Research. Study and Innovation of the Multimedia and Digital Information Models”, funded by Santander University and the Complutense University of Madrid (Reference: PR75/18-21619).
2. Emerging information models based on algorithms and artificial intelligence
Why do we believe the fake news that spreads primarily via social media? According to a UN report, social networks have been a deadly weapon in South Sudan because of trash publications. Mysterious authors flood social media threads with extravagant claims of misdeeds and malpractice (variations of blood libels) allegedly perpetrated by the group against which the publications are directed. For example, memes that seek to incite genocide often report that some frightening act has been committed against children (Lanier, 2018: 132-133).
For Small and Vorgan (2009: 18), the brain of the “young generation” (who are mainly social media users) “is digitally concentrated from infancy, often at the expense of neural cabling that controls people’s ability to do one thing after another.” In this context, according to dual-process theories,
“the mind sets in motion two processes while reading or receiving information, one is automatic and superficial, and the other requires effort and concentration, which is used to make strategic decisions. In circumstances in which the process is superficial, the brain automatically judges the integrity of information based on criteria such as how intimate or familiar it is or how easy it is to understand. Therefore, the more easily information is processed, the more familiar it may become and therefore believed to be true” (Small and Vorgan, 2009: 18).
Often, the fluidity with which we take in certain information leads to a collateral effect: attempts to correct or refute false information can make us believe the lie even more. One example is that between 20% and 30% of North Americans still believe that Iraq was hiding weapons of mass destruction, even though the 2003 invasion of the country and the subsequent war proved the opposite. Another example is President Donald Trump’s assertion that prestigious media such as The New York Times, The Washington Post, or CNN only report fake news; Trump’s supporters believe what the president claims without a shadow of a doubt. Human nature and people’s psychological conditioning, the enormous amount of information circulating on networks, and the proven fact that rumours and hoaxes spread much faster than real news all make it challenging to restrict the growing phenomenon of fake news.
In this scenario, among the various initiatives to curb the fake news phenomenon, one possible solution is to use artificial-intelligence bots to distinguish accurate information from distortions of what is real. There are bot models that can make hunting fake news or hoaxes faster and more efficient. Some are so sophisticated that