Is ChatGPT really smart? A few questions for Marcello Pelillo


ChatGPT, or Chat Generative Pre-trained Transformer, has become a real media phenomenon. It is one of the best-known artificial intelligence models developed by OpenAI, and it is polarising opinion in the scientific and non-scientific community alike. ChatGPT has a huge range of potential applications, from text translation to customer service and market research. There are many questions, however, about the impact this tool may have on society and the labour market. In Italy, the Garante per la Privacy (Data Protection Supervisor) put the service on hold as of 1 April, and not long ago a group of almost 1,000 experts issued a warning about the risks of artificial intelligence, calling for a halt to its development. Elon Musk, a co-founder of OpenAI, was in that group, although he is apparently already at work on a competing system. But what exactly is artificial intelligence (AI), and what are its benefits and risks? We discussed this with an expert, Marcello Pelillo, professor of computer science at the Department of Environmental Sciences, Informatics and Statistics at Ca' Foscari, who has been working in the field of AI for thirty years.

What is Artificial Intelligence?

There is still intense debate in our scientific community on this question. Even just defining the term 'intelligence' is not simple. In general, AI refers to an area of computer science and engineering that seeks to create algorithms or machines with 'intelligent' behaviour, i.e. that behave like human beings. The ability to perceive the world the way humans do, for example, or to apply strategy, as in complex games, are among the typically human abilities we seek to transfer.

How does ChatGPT work, and where does it retrieve the information it then delivers?

It is proprietary OpenAI software, open to all, which generates human-like responses to user input. From a scientific and technological point of view, ChatGPT is not particularly innovative, and yet interacting with it can be stunning; above all, it is a great commercial operation. It is the latest in a line of AI models that use neural networks to mimic the structure of the brain and simulate human behaviour. Neural networks have a long history: they have been discussed since before the 1950s, when AI was officially born.

For the last ten years or so, however, the phenomenon has exploded with the evolution of 'deep neural networks', i.e. brain-inspired models that learn from large amounts of data.
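As a purely illustrative sketch of what 'learning from large amounts of data' means in practice, the following toy Python/NumPy example trains a tiny two-layer neural network to reproduce the XOR function from four examples. It bears no resemblance to ChatGPT's scale or architecture, but the underlying principle, adjusting numerical weights until predictions match the data, is the same.

```python
# Toy illustration of a neural network "learning from data":
# a two-layer network trained by gradient descent to learn XOR.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # targets (XOR)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)                # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)                # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # forward pass: compute the network's current predictions
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: adjust the weights to reduce the prediction error
    grad_p = p - y                          # gradient of cross-entropy loss
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0)
    grad_h = grad_p @ W2.T * (1 - h**2)     # back through tanh
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    for param, grad in [(W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)]:
        param -= 0.1 * grad                 # gradient descent step

print(p.round(2))   # predictions after training; should approach [0, 1, 1, 0]
```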

ChatGPT has access to an enormous amount of data, and it is based on a Large Language Model (LLM), which is capable, to a larger extent than before, of focusing its attention on specific parts of the text, just as humans focus on a certain detail of what they are looking at.
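The 'focusing of attention' mentioned here refers to the attention mechanism at the heart of Transformer models (the 'T' in GPT). The short Python/NumPy sketch below shows a generic scaled dot-product attention on made-up numbers; it is a simplified illustration of the mechanism, not OpenAI's actual implementation.

```python
# Minimal sketch of scaled dot-product attention, for illustration only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Each output is a weighted mix of all values; the weights say
    which parts of the text the model 'focuses' on."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)    # similarity between positions
    weights = softmax(scores)                  # each row sums to 1
    return weights @ values, weights

# Toy example: 4 token positions, 8-dimensional representations (made-up numbers).
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
v = rng.normal(size=(4, 8))
out, w = attention(q, k, v)
print(w.round(2))   # each row shows how much one token "attends" to the others
```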

What are the limits of this tool?

One of the obvious limits of all AI systems to date is the lack of common sense. Here is an example of a dialogue with ChatGPT: 'Can you generate a random number between 1 and 10, so that I can try to guess it?' 'Sure. Seven, try to guess it.' 'Is it seven by any chance?' 'Very good!' In these few lines, which were shared among colleagues, you can see the inherent limitation of these systems: the lack of a deep understanding of the topic under discussion, and the lack of common sense. If intelligence is also the ability to reason logically, to understand the deeper meaning of a sentence and to 'anchor' it to reality, then ChatGPT is not intelligent. It is, however, very adept at joining together 'chunks' of text coherently and convincingly. It has access to billions of texts, it is trained to complete texts, and thus it can construct a more or less coherent discourse.

What advantages (or disadvantages) do you see in this system?

I see more disadvantages. Personally, I think there are far more interesting applications of AI, for example in the medical field. AI produces advanced medical analysis systems capable of detecting details invisible to the human eye, details which are crucial for correct diagnosis and treatment. These machines can also analyse huge amounts of data and bring real progress to medicine, and yet they make fewer headlines than ChatGPT.

Let us now consider the area of criminal justice, where artificial intelligence systems are already being used in the US and are beginning to be used in Europe as well. Some sentences have been handed down with the support of AI-based software that predicts a defendant's risk of recidivism. In such cases, we are giving a great responsibility to an algorithm which, let us not forget, belongs to a private company and does not guarantee transparency. We are dealing with a technological tool that is in itself neutral, but which can be used well or badly. This is also the case with ChatGPT. It can be a valuable assistant, but it can also hide pitfalls.

Restraints imposed by the Data Protection Supervisor, appeals for further reflection, the first cases of 'hacked' artificial intelligence. In some universities, doubts have arisen over a few dissertations... Is the fear justified?

Risks are inherent to AI. Personally, I don't see much sense in the call by Elon Musk, who is one of the founders of OpenAI, to put everything on hold for six months, especially in light of the fact that he is apparently starting a new AI company himself (and let us not forget that OpenAI is largely backed by Microsoft). How could research be stopped worldwide? There is no authority with such power, and six months would be completely insufficient in any case: no thorough, interdisciplinary study could be carried out in such a short time.

I see two main risk factors. The first is the speed with which these new instruments are created and applied. Such a pace does not allow us to think all the possible consequences through.

The second risk is related to the ownership of AI research. Until a few years ago, AI belonged to universities. Research was carried out in partnership with companies, but in essence it was academic research. In the last ten years, the scenario has changed radically and, arguably, irreversibly. Now research is conducted almost entirely by big companies such as Google, Facebook, Amazon and Microsoft, even though it has important ethical and social implications and a strong impact on societies and on the policies of states.

What should be done to regulate this development?

We must not lose control of what we create, as in the famous story of the Sorcerer's Apprentice. These systems are created by humans, but they are gigantic machines governed by billions and billions of numbers, and they end up beyond our grasp. The issue of transparency and privacy in AI is a priority, and is already being debated in Europe and in the United States. There are guidelines defining how to operate properly, and a whole field of study dedicated to 'explainable AI'.

So, as far as solutions are concerned: Musk's idea to “stop and think” is not viable. Who should decide, and who should stop? And even if we really did take a break, who could guarantee that some company or totalitarian state would not sprint ahead and gain a competitive advantage? In an interview several years ago, Putin stated that whoever ruled AI would rule the world.

In my opinion, we should show some common sense and try to make this discipline more transparent and democratic. A first step would be to require that AI systems be open and inspectable by everyone. ChatGPT, for example, is proprietary software, so nobody outside OpenAI knows exactly how it works.

We all remember the photos of Donald Trump's 'fake' arrest and the reactions they provoked as soon as they were published. Without awareness, we risk creating a society of suspicion and distrust. Every time we see something online, we will ask ourselves: is it genuine? Every time we receive an email, we will suspect it was written by a chatbot.

Author: Federica Scotellaro / Translator: Barbara Del Mercato