Teresa Numerico
Università degli Studi Roma Tre
teresa.numerico@uniroma3.it
Abstract: The aim of the present paper is to show the evolution of the concept of Artificial Intelligence (AI) and of the different technical methods that have progressively informed and organized this concept. The article presents a view of the evolution of this notion: 1) with special regard to the social, political and epistemological consequences of the chosen technical solutions, and 2) with special attention to the parallel transformation of the concept of human intelligence. The recent major successes of AI are based on datification and on the availability of huge quantities of information derived from the traces left behind by people's online behaviour. Big Data methods, together with machine learning algorithms, aim to interpret data and to create pattern recognition methods that discover correlations between data series. Algorithms exploit such correlations, which are not causal relations, in order to anticipate future behaviours, inferring regularities and measuring probabilities grounded in past actions. Moreover, algorithms cluster people according to their activities and other personal characteristics, such as where they live and who their friends are. The implicit foundation of data science is the principle of induction, which ‘guarantees’ that the future will resemble the past and that people who share certain traits tend to behave similarly in corresponding situations. The result is an interpretative organization of data, obtained through the datification of online traces and the implementation of suitable machine learning algorithms. Datification itself implies that data are cleaned and arranged in a form that the program can process. The pretence of neutrality of such complex procedures obscures the interpretative activity implicitly embedded in the system, lending it the allure of a neutral measuring method. The radical success of Big Data and machine learning algorithms invites us to assign responsibility for decisions to machines, because they are the only agents capable of managing the huge quantity of available data. It is increasingly difficult to control the output of complex technical systems, even when the results of these procedures affect human beings’ lives. As Norbert Wiener already suggested, technical systems could exclude humans from feedback loops because humans are too slow to keep up with the rhythm of the technical decision process. This is the first issue under discussion in the present paper. The second issue is that, as Turing underlined, the machine need only pretend to be intelligent well enough to take in inexperienced judges. If it is not possible to control the actions of these devices, because they are too fast and too complex to be explicitly understood, and the system is programmed to take in humans, how can we trust machines? The third issue regards technology as a socio-technical system that, unlike science, does not aim at understanding the external world: it is a medium, a representation and an intervention that orients the world according to social and political criteria. It is necessary to ask who is in charge of the governance of such a system and what the objectives of such a transformation are. It is crucial, then, to delineate the rules, powers and intentions that underlie the design of socio-technical systems, in order to choose democratically which methods are most favourable to society as a whole.
Keywords: Artificial Intelligence; Algorithms; Data science; Big Data; Technology as politics; Turing Test; Cybernetics.
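To make concrete the kind of procedure the abstract criticizes, the following minimal sketch (purely illustrative, with hypothetical, fabricated data; it is not part of the paper's argument or of any system it analyses) clusters people by two personal characteristics and then anticipates a newcomer's behaviour from the past behaviour of the most similar cluster, exemplifying the inductive, correlation-based logic discussed above.

```python
# Illustrative sketch only: clustering people by personal characteristics
# (where they live, how many friends they have) and "anticipating" a new
# person's behaviour from the past behaviour of their cluster -- induction
# from correlations, not causation. All data here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical traits: [encoded city, number of friends]
traits = np.array([
    [0, 12],
    [0, 15],
    [1, 200],
    [1, 180],
    [0, 10],
    [1, 220],
], dtype=float)

# Past behaviour the system wants to anticipate: purchases made last month.
past_purchases = np.array([3, 4, 9, 11, 2, 10], dtype=float)

# Standardize traits so neither attribute dominates the distance metric.
mean, std = traits.mean(axis=0), traits.std(axis=0)
X = (traits - mean) / std

def kmeans(X, k=2, iters=20):
    """Minimal k-means clustering: returns cluster labels and centroids."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each person to the nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

labels, centroids = kmeans(X)

# A new person is assigned to the cluster of "similar" people, and their
# future behaviour is predicted from that cluster's past behaviour.
new_person = (np.array([1.0, 190.0]) - mean) / std
cluster = np.linalg.norm(centroids - new_person, axis=1).argmin()
anticipated = past_purchases[labels == cluster].mean()
print(f"assigned to cluster {cluster}; anticipated purchases ≈ {anticipated:.1f}")
```

The sketch makes visible the interpretative choices the abstract points to: which traits are encoded, how similarity is measured and how many clusters are assumed are all design decisions, even though the output presents itself as a neutral measurement.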