fake news and the strategies for verifying it through the use of bots, created and designed with increasingly sophisticated algorithms suited to the type of information that has been shared on social networks in recent years.
In this context, the methodology used is descriptive-exploratory. It is based on the literature on fake news, disinformation, and post-truth in order to present a detailed analysis in which the concepts, dimensions, and metrics are examined to approach the fake news phenomenon. Studies carried out by MIT (Massachusetts Institute of Technology) research teams, the European Union Expert Group report, and the information verification projects carried out by the Duke University Reporters’ Lab have also been used. Notably, this research center maintains a continuously updated website whose map geographically locates 225 fact-checking initiatives around the world (Duke Reporter’s Lab, 2018). Of these, 155 were still active at the end of 2018, while the rest had not been updated or were inactive.
A second method was the selection and analysis of various artificial intelligence bots created primarily to help citizens, professionals, and journalistic organizations verify information, developed as entrepreneurial initiatives to detect fake news, hoaxes, or disinformation. The criteria for selecting the intelligent bots analyzed were based on the best known and most representative in each area, linked to the field of journalistic news and having generated interest in the media. In this context, the characteristics, uses, and implementation of bots in news organizations have improved the media’s credibility. A simplified illustration of how such a bot might work is sketched below.
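By way of illustration only, and not as part of the original study, the following sketch shows in simplified form how a verification bot of this kind might flag a headline for human review using a basic text classifier. The training examples, the flag_for_review function, and the threshold are hypothetical; the bots analyzed in this article rely on far larger datasets and on professional fact-checkers.

```python
# Illustrative sketch only: a toy "verification bot" that scores headlines as
# likely reliable or suspect with a simple text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled headlines (1 = previously debunked / suspect, 0 = verified)
headlines = [
    "Miracle cure eliminates all known diseases overnight",
    "Celebrity secretly replaced by body double, insiders claim",
    "Government confirms new budget allocation for public schools",
    "Local council publishes minutes of its monthly meeting",
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a logistic regression classifier
bot = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
bot.fit(headlines, labels)

def flag_for_review(text: str, threshold: float = 0.5) -> bool:
    """Return True when the estimated probability of being suspect exceeds the threshold."""
    prob_suspect = bot.predict_proba([text])[0][1]
    return prob_suspect >= threshold

# Example: a new headline is flagged so that a human fact-checker can verify it
print(flag_for_review("Miracle diet eliminates diseases, doctors amazed"))
```

In practice, such a score would only trigger a review by fact-checkers rather than an automatic verdict, which is consistent with the role the article assigns to bots as aids to verification rather than substitutes for it.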
The results obtained are intended to provide an in-depth analysis of bots that can help citizens access cross-checked and verifiable information for decision-making, and to offer some reflections on initiatives and developments based on Artificial Intelligence as allies in the construction of quality information.
5. Conclusions
Considering the limitations of tackling a task of this magnitude, in which bots are created and spread rapidly in an era marked by the immediacy of information processes, the analysis carried out shows the complexity of the fake news and disinformation problem. It requires a solution that involves strengthening Artificial Intelligence to advance the development of increasingly sophisticated bots that prevent fake news from being spread, which ultimately harms the credibility of the media and of journalists. The goal is to eradicate media disinformation and to improve the ability of platforms and the media to address the phenomenon in its full magnitude.

The media ecosystem promotes transparency and must encourage the development of algorithms that enhance user confidence. In this regard, journalists’ capacity to detect fake news and users’ media literacy need to be improved. Even though the differential diffusion of truth and falsehood is significant with or without bot activity, we are concerned that human judgment may be biased by harmful bots. This implies that disinformation containment policies should also emphasize behavioral interventions, such as labeling and incentives to discourage the spread of disinformation, rather than focusing exclusively on restricting bots. Understanding how fake news spreads is the first step in containing it.