they are better than professional verifiers at analyzing quantifiable news attributes, such as the grammatical structure, word choice, punctuation, and complexity of the text. However, the real challenge in creating an efficient fake news detector is not so much how the algorithm is designed, but fundamentally how to find the right data to train the bot. Fake news is also elusive: it appears and disappears quickly, which makes it difficult to compile, locate, and feed to the artificial intelligence systems that must learn from it.
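As a purely illustrative aid, the sketch below shows, under loose assumptions, the kind of stylometric approach this paragraph describes: quantifiable attributes of a text (word choice, punctuation density, sentence length and lexical complexity) are extracted and a standard classifier is trained on labelled examples. This is not the method of any system cited in the article; the example texts, labels, and feature choices are hypothetical placeholders, and, as the paragraph notes, a real detector stands or falls on the quality and scale of its training corpus.

```python
# Minimal stylometric fake-news classifier sketch (illustrative only).
# Features are simple, quantifiable attributes of the text; the toy corpus
# and labels below are hypothetical and far too small for real use.

import re
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def stylometric_features(text: str) -> list[float]:
    """Quantifiable attributes: word choice, punctuation, and complexity signals."""
    words = re.findall(r"\w+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    n_sents = max(len(sentences), 1)
    return [
        n_words / n_sents,                               # average sentence length
        sum(len(w) for w in words) / n_words,            # average word length
        text.count("!") / n_words,                       # exclamation-mark density
        sum(1 for w in words if w.isupper()) / n_words,  # all-caps word ratio
        len({w.lower() for w in words}) / n_words,       # lexical diversity
    ]

# Hypothetical toy corpus: 1 = fake, 0 = verified.
texts = [
    "SHOCKING!!! You won't BELIEVE what they are hiding from you!",
    "The ministry published its annual budget report on Tuesday.",
    "MIRACLE cure DESTROYS doctors! Share before it's DELETED!",
    "Researchers at the university released a peer-reviewed study.",
]
labels = [1, 0, 1, 0]

X = np.array([stylometric_features(t) for t in texts])
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, labels)

sample = "BREAKING!!! Secret documents PROVE everything you knew was a LIE!"
print(model.predict([stylometric_features(sample)]))  # likely [1]
```

The design point the sketch is meant to make is the one the paragraph itself makes: the classifier is trivial to build, whereas assembling a sufficiently large, current, and reliably labelled corpus of fake items is the genuinely hard part.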

With these developments, the information ecosystem and, consequently, journalism are experiencing a content-construction model based on a latent and growing process of algorithmization. In this sense, several researchers affirm that "fully automated journalism does not work directly on reality; rather, algorithms act on a reality codified in data. Algorithms are ordered and finite sets of specific norms which, when applied to a problem, lead to its solution" (Túñez-López, Toural-Bran and Cacheiro-Requeijo, 2018: 751).

Nowadays, various experiments are being carried out with algorithms that are capable of analyzing vast quantities of news, reports, and statements at high speed, and can identify false information with a high success rate. Unfortunately, these same AI tools are also useful for the enemy. It was recently reported in the news that a team of OpenAI researchers had managed to create and run a system that automatically writes quite convincing fake news.

2.1 The damage of fake news, disinformation, and post-truth

In recent years, the term fake news has gained prominence in the media following the manipulation of public opinion and votes in the 2016 U.S. elections, and also in the U.K.'s Brexit referendum. The scandal involving the company Cambridge Analytica, which made fraudulent use of millions of Facebook users' data, revived its prominence in 2018.

However, not everyone approves of the use of the term fake news to refer to the phenomenon, and some consider it too restrictive and insufficiently descriptive of the underlying problem. This is the case of the European Commission (2018), which prefers to speak of disinformation, defined as "false, inaccurate or misleading information designed, presented or promoted to cause public harm intentionally or to obtain a benefit." For the European Commission (ibid.), the term "fake news" is inadequate because it does not address the complexity of the problem.

Content is often not false, or not entirely false; instead, it is fabricated information mixed with facts and with practices that have little to do with the concept of news, such as automatic accounts on social media used for astroturfing (disguising a political or commercial entity's actions as spontaneous public reactions), the use of fake-follower networks, manipulated videos, targeted advertising, organized trolls, or visual memes.

According to David Alandete (2019),

Fake news doesn't have to be an absolute lie. It usually has some real connection with what is happening, but this is generally a grotesque distortion and is always conducive to sensationalism and populism. It is a distortion that takes particular advantage of the radical change that the channels transmitting information have undergone since the emergence of digital platforms such as Facebook, Twitter, and Google. The truth is that, although in a different order, these companies are also responsible for the problem and must be held accountable for their actions.