Scholars discuss artificial intelligence and ethical issues


Artificial intelligence can make everyday life easier, but it can also control and influence that life in subtle ways: just two of the 'flipsides' to its benefits. Scholars from different disciplines are concerned about the ethical implications of the pervasive use of artificial intelligence techniques. Among the best-known examples is the use of algorithms in the world of social media.

The use of machine learning, that is, software that can 'learn' by analyzing large amounts of data, goes much further, explains Teresa Scantamburlo, a postdoctoral researcher at the Department of Environmental Sciences, Informatics and Statistics at Ca’ Foscari. Together with Professor Marcello Pelillo, Scantamburlo has co-organized a workshop on these issues, supported by the IEEE, the world's leading organization for computing and engineering technologies. The event, which can also be streamed online, will be held at the European Center for Living Technology on December 16th, with the participation of researchers from prestigious universities, research centers and startups from both Europe and the United States.

"Artificial intelligence, and machine learning in particular, is playing a decisive role in social media and, more generally, in the phenomenon of 'big data'," explains Teresa Scantamburlo. "Machine learning is now used in many areas: diagnostics, economics, the world of work, the social sciences, education, politics, justice and so on. It is so attractive because it automates a basic task: learning from experience to make predictions and support informed decision-making. And the more data that is available, the better it learns."

Facebook and social media in general are in fact demonstrating daily, to everyone, both the potential of artificial intelligence and the weight of the ethical issues that concern scholars.

"It is no coincidence that machine learning is so widely used in social media, given the amount of data concentrated there. For example, it is used to suggest friendships and contacts, to 'understand' the mood or views of users, to target news and advertising to best fit the user, and so on. The question is how to evaluate the models and forecasts derived from all this. Can we accept these forecasts uncritically in decision-making? How can we regulate the use of personal data, whether published freely or held by large companies in the digital market such as Google, Facebook or Yahoo?"
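To make one of these mechanisms concrete, here is a minimal, hypothetical sketch of how a "suggest friendships" feature might rank candidates. The toy graph and the mutual-friend heuristic are invented for illustration; real platforms combine far richer signals.

```python
# Toy social graph: each user maps to their set of friends (invented data).
friends = {
    "ana":   {"ben", "carla"},
    "ben":   {"ana", "carla", "dario"},
    "carla": {"ana", "ben", "dario"},
    "dario": {"ben", "carla"},
}

def suggest(user):
    """Rank non-friends by mutual-friend count, a common baseline heuristic."""
    scores = {}
    for friend in friends[user]:
        for candidate in friends[friend]:
            # Skip the user themselves and people they already know.
            if candidate != user and candidate not in friends[user]:
                scores[candidate] = scores.get(candidate, 0) + 1
    # Highest mutual-friend count first.
    return sorted(scores, key=scores.get, reverse=True)

print(suggest("ana"))  # ['dario'] -- dario shares two friends with ana
```

Even this trivial version illustrates the interview's point: the suggestion is a statistical inference from behavioral data, not a neutral fact about the user.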

What are the implications for the world of knowledge?

"Big data and modern automated analysis techniques, produced by machine learning, data mining and computer vision, are radically changing the way people create culture and do science, the links between disciplines, and the relationship between quantitative and qualitative approaches to the study of phenomena. Just think of the impact on journalism and the development of computational social science. From here, further questions arise. How do we deal with this epistemological turn? What new skills will need to be developed to seize the opportunities of the digital revolution while avoiding dangerous reductionism and new ideological forms?"

Can you give us examples of ethical issues that may arise in the development of machine learning systems?

"In addition to the obvious privacy issues, the pervasive use of machine learning is starting to raise ethical and social issues of great importance, such as the spread of social discrimination. If a machine learning algorithm suffers from 'bias' (a prejudice linked to the designer's starting assumptions), or if it learns certain associations from data generated by users (through internet searches, posts, completed online forms, etc.), it can produce decisions that are more or less advantageous for different categories of the population: for example, giving an unfair advantage to students of a certain social class in college selection processes, or directing the search for risky behavior towards people belonging to particular ethnic groups.

What often prevents the identification of these risks is the idea that, unlike human decisions, algorithms are always neutral and objective because they are based on mathematical and statistical principles which, for example, have nothing to do with skin color or gender. In fact, recent studies have shown that algorithms can be affected by human factors such as prejudices, stereotypes and errors of assessment. In the end, this should not be surprising, considering that algorithms are designed, constantly 'fed' and validated by human beings."
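The college-admissions example above can be sketched in a few lines. This is a hypothetical toy, not a real system: the synthetic records and the naive frequency "model" are invented to show how a model trained on historically biased data reproduces that bias in its scores.

```python
# Synthetic training records: (social_class, admitted). Group "A" was
# historically favored, so its records contain many more admissions.
training_data = [("A", 1)] * 80 + [("A", 0)] * 20 + \
                [("B", 1)] * 30 + [("B", 0)] * 70

def train(records):
    """Learn P(admitted | group) by counting -- the simplest possible model."""
    totals, admitted = {}, {}
    for group, label in records:
        totals[group] = totals.get(group, 0) + 1
        admitted[group] = admitted.get(group, 0) + label
    return {g: admitted[g] / totals[g] for g in totals}

model = train(training_data)

# The model has absorbed the historical bias: otherwise identical applicants
# from different groups receive very different admission scores.
print(model["A"])  # 0.8
print(model["B"])  # 0.3
```

Nothing in the code mentions prejudice; the discrimination enters entirely through the data, which is exactly the subtlety the interview highlights.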

How does Ca’ Foscari’s research fit into this context?

"It’s taking its first steps. The main effort is to look at a science and technology such as machine learning with different eyes, for example by using the conceptual tools of the social sciences or of ethical reflection to develop new approaches and new design guidelines. The organization of workshops with the ECLT and the IEEE goes in this direction: to gather some current threads of thought, such as the debate on the transparency of algorithms, enhancing the contribution of different disciplines (such as computer science, the social sciences, philosophy and ethics) to support the development of projects and collaborations actively involving Ca’ Foscari.”