is one that has been asked for decades, with answers that vary according to the perspective adopted (Martínez-Plumed et al., 2018; McCarthy et al., 2006; Moor, 2006).

The definitions of AI can be divided into two large groups: those that take human beings as their reference (in other words, intelligences that imitate humans) and those that rely on abstract references such as “rationality” or “efficiency”, without human intelligence as the central parameter (Russell & Norvig, 2016).

For the human-paradigm group, AI can be the automation of “activities that we associate with human thinking, activities such as decision-making, problem solving, learning” (Bellman, 1978, p. 12) or, in Kurzweil’s (1990) well-known formula, “The art of creating machines that perform functions that require intelligence when performed by people” (p. 117).

The rationalist perspective (which does not take humans as its paradigm) offers different definitions of AI, such as “the study of computations that make it possible to perceive, reason, and act” (Winston, 1992, p. 5) or “intelligent behavior in artifacts” (Nilsson, 1998, p. 1). Along the same rational-agent line, Russell & Norvig (2016) define AI as “the study of agents that exist in an environment and perceive and act” (p. 7). The rational-agent approach, more common among engineers in this area, has helped drive the strong growth of AI in recent decades. Indeed, it is more difficult to build machines that perfectly mimic a human being (as proposed by the English mathematician Alan Turing in his famous “Imitation Game”) than to build machines that solve complex problems, or, as Russell & Norvig (2016) point out:

AI researchers have devoted little effort to passing the Turing Test, believing that it is more important to study the underlying principles of intelligence than to duplicate an exemplar. The quest for “artificial flight” succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making “machines that fly so exactly like pigeons that they can fool even other pigeons” (2016, p. 3).
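To make the rational-agent definition cited above more concrete, the following minimal sketch in Python shows an agent that repeatedly perceives its environment and chooses the action that best serves its performance measure. It is a toy illustration of the general idea only; the agent, its names, and the thermostat scenario are invented for this example and do not describe any system discussed in this paper.

```python
# Minimal sketch of a rational agent in the sense of Russell & Norvig:
# an entity that perceives its environment and acts to improve a
# performance measure. All names and values here are illustrative.

import random


class ThermostatAgent:
    """Toy rational agent: keeps a room near a target temperature."""

    def __init__(self, target=21.0):
        self.target = target

    def perceive(self, environment):
        # The percept is the current room temperature.
        return environment["temperature"]

    def act(self, percept):
        # Choose the action that brings the room closest to the target.
        if percept < self.target - 0.5:
            return "heat"
        if percept > self.target + 0.5:
            return "cool"
        return "idle"


def simulate(steps=10):
    env = {"temperature": 17.0}
    agent = ThermostatAgent(target=21.0)
    for _ in range(steps):
        percept = agent.perceive(env)
        action = agent.act(percept)
        # The environment responds to the action, with some noise.
        if action == "heat":
            env["temperature"] += 1.0
        elif action == "cool":
            env["temperature"] -= 1.0
        env["temperature"] += random.uniform(-0.2, 0.2)
        print(f"percept={percept:.1f}°C  action={action}")


if __name__ == "__main__":
    simulate()
```

The perceive-act loop, rather than any resemblance to human reasoning, is what makes the agent “rational” in this framework: its behaviour is evaluated only against the performance measure it is designed to optimise.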

What we simply refer to as AI in fact covers a large number of disciplines and can be observed in a wide range of applications. According to Russell & Norvig (2016), some of the main areas in which AI is applied are: a) robotic vehicles; b) speech recognition; c) autonomous planning and scheduling; d) game playing; e) spam filtering; f) logistics planning; g) robotics; and h) machine translation.

Likewise, “journalism is not exempt from the process of global labor automation resulting from the improvement of artificial intelligence, robotics and new communication technologies, but it is argued that tasks that require cognitive skills are more difficult to frame as standardized actions reproducible by a machine” (Túñez-López et al., 2018, pp. 756-757).

In journalism, artificial intelligence applications generally focus on the fields of Machine Learning and Natural Language Processing, which increasingly include the automatic conversion of written text to speech and vice versa, as well as robotics (Marconi & Siegman, 2013), but examples are still scarce. In a study of AI applied specifically to investigative journalism, Stray (2019) pointed out that concrete examples are far rarer than the innovation and technology discourse adopted by media companies would suggest.
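As a purely illustrative sketch of the kind of Natural Language Processing task mentioned above, the following Python fragment turns structured poll data into a short written summary using a simple template. It is not a description of how AIDA or any other newsroom system actually works; the institute, candidates, figures, and function names are all hypothetical.

```python
# Illustrative sketch of template-based automated news writing:
# structured poll data in, a short natural-language summary out.
# Candidate names and figures are invented for the example.

def poll_to_text(poll):
    """Render a one-paragraph summary of an electoral poll."""
    ranked = sorted(poll["results"], key=lambda r: r["share"], reverse=True)
    leader, runner_up = ranked[0], ranked[1]
    gap = leader["share"] - runner_up["share"]

    sentence = (
        f"According to the {poll['institute']} poll released on {poll['date']}, "
        f"{leader['candidate']} leads with {leader['share']:.0f}% of voting intentions, "
        f"followed by {runner_up['candidate']} with {runner_up['share']:.0f}%."
    )
    if gap <= poll["margin_of_error"] * 2:
        sentence += (
            " The difference is within the margin of error, "
            "indicating a technical tie."
        )
    return sentence


example_poll = {
    "institute": "Example Institute",
    "date": "1 October 2018",
    "margin_of_error": 2.0,
    "results": [
        {"candidate": "Candidate A", "share": 34.0},
        {"candidate": "Candidate B", "share": 31.0},
        {"candidate": "Candidate C", "share": 12.0},
    ],
}

print(poll_to_text(example_poll))
```

Even this trivial template captures the basic logic of automated poll reporting: the newsworthy judgments (who leads, whether the race is a technical tie) are encoded as rules over structured data, and the prose is generated from them.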

In this context, this paper studies an application of artificial intelligence developed by Rede Globo, the largest Brazilian television network, to report the results of electoral polls. Through interviews with the creator of the AIDA system (Data