'Fixing' Artificial Intelligence, a European monitoring unit in Venice


What if artificial intelligence were not that intelligent? Who is in charge of verifying that algorithms can actually improve the quality of life in an ethical, safe, acceptable and comprehensible way? 

The European project AI4EU will tackle these and other questions about a technology that permeates our lives and promises to revolutionize our daily routines. Starting in January 2019, the project will involve 79 partners and draw on 20 million euros in Horizon 2020 funding, bringing together researchers, innovators and European talent currently working in the field of artificial intelligence.

The goal is to strengthen expertise and promote new discoveries, but also to address the ethical questions and doubts raised by this fast-developing technology. Ca’ Foscari University of Venice, through the European Centre for Living Technologies (ECLT), will set up a monitoring unit on the ethics of artificial intelligence, with the cooperation of some of the leading experts in the field, such as Luc Steels, visiting professor at Ca’ Foscari.

“The duty of the monitoring unit will be to protect European society in the event that this technology is abused,” explains Marcello Pelillo, professor of Artificial Intelligence at Ca’ Foscari and director of the ECLT, “ensuring respect for human values and European regulations, and providing both the community of experts and the European authorities with up-to-date information on the consequences of any abuses.”

To complicate matters further, as Pelillo explained in an article for SciTech Europa, there are three critical aspects of artificial intelligence: its opacity, its lack of neutrality and its 'stupidity'.

The opacity of machine learning algorithms, the Ca’ Foscari professor explained, goes hand in hand with their accuracy: the better a system learns on its own to extract what is useful from data and to improve its performance, the more obscure its workings become, not only for users but also for the experts who might want to fix it.

“The application of AI in fields where sensitive data are handled,” Pelillo concluded, “can be hindered by the difficulty users have in understanding the logic behind the way algorithms work.”

Furthermore, algorithms, like all technologies, are not necessarily neutral: they are shaped by the prejudices, errors and distortions of their creators.

Consider the infamous case of Tay, a Microsoft chatbot that was supposed to handle Twitter conversations automatically. It was shut down after just 24 hours because it had started responding to provocations by propagating racist and vulgar messages.

To illustrate the 'stupid' behavior of artificial intelligence, Pelillo cited the third law of human stupidity formulated by the economist Carlo Cipolla: a stupid person is an individual who causes damage to another person or group of people without gaining anything in return, and possibly even ending up worse off himself.

“A researcher presented a picture of a bus to a powerful algorithm, which identified it straight away. Then he altered a few pixels, and the image, which to the human eye was still clearly a bus, became an ostrich for the computer. Just imagine how dangerous this AI 'stupidity' could prove if applied, for instance, to automated driving scenarios.”
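
The trick described in this anecdote is known in the research literature as an adversarial example. The sketch below shows one standard way such perturbations can be crafted, the Fast Gradient Sign Method (FGSM); the pretrained ResNet, the random stand-in image and the epsilon value are illustrative assumptions, not the setup of the experiment Pelillo refers to.

```python
# A minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss. The model, image and epsilon here are
# placeholders, not the actual experiment cited in the article.

import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed to increase the model's loss.

    image:   input tensor of shape (1, 3, H, W)
    label:   class index the model currently assigns, shape (1,)
    epsilon: per-pixel perturbation budget (small = imperceptible)
    """
    image = image.clone().detach().requires_grad_(True)

    # Classification loss with respect to the given label
    loss = F.cross_entropy(model(image), label)
    loss.backward()

    # Step every pixel by +/- epsilon along the sign of the gradient
    return (image + epsilon * image.grad.sign()).detach()

if __name__ == "__main__":
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    image = torch.rand(1, 3, 224, 224)      # stand-in for a real photo
    label = model(image).argmax(dim=1)      # the model's current verdict
    adversarial = fgsm_attack(model, image, label, epsilon=0.05)

    # With a real, correctly classified photo, the two predictions often
    # differ even though the images look identical to the human eye.
    print(model(image).argmax().item(), model(adversarial).argmax().item())
```

The unsettling point, and the reason Pelillo speaks of 'stupidity', is that the perturbation can stay imperceptibly small: the classifier changes its verdict while, to a person, nothing about the image has changed.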