In February 2019, specialized media echoed a disturbing project: the OpenAI research institute in San Francisco (USA), an organization co-founded by Elon Musk, had created an algorithm capable of writing fake news. However, the original objective of the project was different: to develop a system, trained on large amounts of text from the web, capable of translating texts and answering questions, among other tasks. But those responsible soon realized the potential of this artificial intelligence to write false texts, and to do so convincingly. Will Knight (2019) reproduces an example of how the algorithm works in his MIT Technology Review article. The system was given the phrase “Russia has declared war on the United States after Donald Trump accidentally…”, and the algorithm generated the continuation of the “news”:

“Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.

Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles. The U.S. and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine” (Knight, 2019).
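The mechanism behind this kind of output can be illustrated with a few lines of code. The following is a minimal sketch, assuming the Hugging Face transformers library and the small, publicly released “gpt2” checkpoint; the article does not name the exact model or tooling, so both are assumptions for illustration. The model simply predicts plausible next words for a given prompt, with no notion of truth, which is what makes fabricated “news” so easy to produce.

# Minimal sketch: continue a news-style prompt with a public language model.
# Assumes the Hugging Face "transformers" library and the "gpt2" checkpoint;
# the article does not specify the model or tooling actually used by OpenAI.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Russia has declared war on the United States after Donald Trump accidentally"

# The model only predicts statistically plausible continuations of the prompt;
# it has no mechanism for checking whether the resulting text is true.
result = generator(prompt, max_length=80, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])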

3. Fact-checkers and algorithms that verify information

With the emergence of fake news, new professional profiles have also appeared: the so-called fact-checkers, or information verifiers, a kind of “fake news hunter.” The fact-checker spends the working day in front of a computer, scanning the internet with the help of an algorithm. Consider the following example: suddenly, an alert goes off. The programmed algorithm has detected manipulated news that is harmful to one of the companies the verifier works for, in this case a car manufacturer. The headline falsely claims that the brand’s latest model has a manufacturing defect that has caused fatalities on the road. The hunter activates the protocol and traces who is behind the information. Is it a regular troll? A dissatisfied customer? Time is running out: the news has already been shared on Facebook and a response needs to be found quickly.

Jorge Benítez (2018) gives an account of this fact-checker profile, which he calls the “fake news hunter”, in an article published in the newspaper El Mundo:

“In these cases, a risk committee composed of those responsible for networks, cybersecurity, and the company’s marketing is convened to classify the alert, assessing the damage and influence,” explains Guillermo López, co-founder and CEO of Torusware, a Galician company specializing in the detection of fake news. The car manufacturer then tries to mitigate the effects of the fake news: “A timely press release or a tweet can prevent the corporate image, and consequently sales, from deteriorating.”
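The alert workflow sketched in the scenario above can be approximated with very simple logic. The following is a hypothetical sketch: the brand, keywords, and headlines are invented for illustration, and the article does not describe any specific tool. It scans a stream of incoming headlines and raises an alert when a monitored brand appears together with damaging claims, at which point the risk committee described above would be convened.

# Hypothetical sketch of a keyword-based alert for a brand-monitoring fact-checker.
# Brand names, keywords, and headlines are invented for illustration only.
MONITORED_BRANDS = {"acme motors"}                      # clients the verifier "defends"
DAMAGING_TERMS = {"defect", "recall", "fatalities", "deaths"}

def raises_alert(headline: str) -> bool:
    """Return True if a monitored brand co-occurs with damaging terms."""
    text = headline.lower()
    brand_hit = any(brand in text for brand in MONITORED_BRANDS)
    damage_hit = any(term in text for term in DAMAGING_TERMS)
    return brand_hit and damage_hit

incoming_headlines = [
    "Acme Motors opens new plant in Valencia",
    "Acme Motors latest model has a defect that caused fatalities on the road",
]

for headline in incoming_headlines:
    if raises_alert(headline):
        # In practice this would notify the company's risk committee.
        print("ALERT:", headline)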

In this scenario, artificial intelligence algorithms are starting to show their effectiveness in detecting fake news. The hunt for fake news has become an arduous and complicated task: the immense flow of information that reaches portals through content aggregators, and that circulates and spreads on social networks, makes it very difficult for human verifiers to check a particular news item, especially when it is a breaking story. Often, by the time it is possible to prove that a news item is fake, the damage has already been done and the item continues to spread.
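As an illustration of how such detection algorithms can work, the following is a minimal sketch of a supervised text classifier. It assumes the scikit-learn library and a handful of invented, hand-labelled headlines; real systems are trained on large annotated corpora and combine many more signals, such as source reputation and propagation patterns, none of which are detailed in the article.

# Hypothetical sketch of a fake-news text classifier (scikit-learn assumed).
# The labelled headlines below are invented; real systems use large annotated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Government publishes official inflation figures for June",
    "Scientists confirm vaccine passed all clinical trials",
    "Miracle cure hidden by doctors revealed in leaked video",
    "Celebrity secretly replaced by clone, insiders claim",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = fake (toy labels for illustration)

# TF-IDF turns headlines into word-weight vectors; logistic regression scores them.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

new_item = "Leaked video reveals miracle cure doctors do not want you to see"
probability_fake = model.predict_proba([new_item])[0][1]
print(f"Estimated probability of being fake: {probability_fake:.2f}")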