Philosopher from St Petersburg University discusses how neural networks form a cartel
Vadim Perov, Head of the Department of Ethics at St Petersburg University, was the guest of the 19th episode of the University's popular science podcast "Heinrich Terahertz". He discussed how language models can become racist, how they can organise a cartel, and why one of them decided that a ruler was an important marker of lung cancer.
The guest of the podcast spoke about the principles behind language models and some of their typical mistakes, which can sometimes have serious consequences, including the phenomenon known as AI hallucination.
"One of our articles is devoted to this issue. The point is that if the algorithm does not know the answer to a question, it begins to invent one. Furthermore, some information it has already fabricated will be perceived as truth in future, even if it is not", Vadim Perov said.
He also stated that AI will never be able to replace humans; it will only make routine work easier. Artificial intelligence, he said, is like artificial flowers: no matter how closely it resembles genuine intelligence, it will never become genuine intelligence. One of the reasons for this is that it does not understand the essence of ethics.
There are numerous examples of this. For instance, Microsoft's Tay chatbot was trained on publicly available data from the Internet, and it turned out that the Internet is full of sexists, racists, and simply intolerant people. AI cannot grasp that the mere fact that such people and such offensive behaviour exist does not make them acceptable. While we are sitting here, someone somewhere is committing a robbery, and there are many such robberies. However, that does not mean it is fine.
Vadim Perov, Head of the Department of Ethics at St Petersburg University
Another example is the algorithms used by taxi services, which automatically raise prices in response to increased demand, even if this surge is due to a serious emergency or terrorist attack. Furthermore, there are instances where the algorithms employed for trading on the stock exchange have facilitated the formation of a cartel.
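As a rough, hypothetical sketch of why such pricing algorithms behave this way (the function and numbers below are invented for illustration, not taken from any real taxi service): a rule that scales fares with the demand-to-supply ratio has no notion of why demand spiked, so an emergency looks to it exactly like an ordinary rush hour.

```python
# Hypothetical surge-pricing rule: it only sees numbers, not the reason
# why demand has spiked, so an emergency looks the same as rush hour.
def surge_multiplier(ride_requests: int, available_drivers: int,
                     cap: float = 3.0) -> float:
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    return min(max(ratio, 1.0), cap)

base_fare = 10.0
# Ordinary evening: 120 requests, 100 drivers -> mild surge.
print(base_fare * surge_multiplier(120, 100))   # 12.0
# Emergency evacuation: 900 requests, 100 drivers -> maximum surge,
# exactly when raising prices is ethically most questionable.
print(base_fare * surge_multiplier(900, 100))   # 30.0
```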
Another example is AI for recognising tumours in X-ray images. It turned out that every X-ray image used to train one such model had a ruler visible at the edge of the frame. Because the ruler appeared in every image, the programme concluded that it was an important part of the picture and began to assess not the lungs but the presence of the ruler. Once trained, the programme could not simply be taught otherwise, whereas this mistake can be quickly explained to a person, who will immediately understand it.
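To make this kind of shortcut learning concrete, here is a minimal, hypothetical sketch (not from the podcast or the study in question): a toy classifier is trained on data in which a spurious "ruler present" feature perfectly tracks the diagnosis, so the model relies on that shortcut and its accuracy collapses once the correlation disappears.

```python
# Hypothetical illustration of "shortcut learning": the model latches onto a
# spurious feature (a ruler in the frame) instead of the medically relevant one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: a crude stand-in for the real pathology signal (weak, noisy).
# Feature 1: "ruler present" flag, which in the training data happens to
# appear in exactly the images that were labelled positive.
labels_train = rng.integers(0, 2, n)
pathology_signal = labels_train + rng.normal(0, 1.5, n)
ruler_present = labels_train.astype(float)          # perfect shortcut
X_train = np.column_stack([pathology_signal, ruler_present])

model = LogisticRegression().fit(X_train, labels_train)
print("training accuracy:", model.score(X_train, labels_train))  # close to 1

# At deployment the ruler appears in every image regardless of diagnosis,
# so the shortcut no longer carries any information.
labels_test = rng.integers(0, 2, n)
X_test = np.column_stack([labels_test + rng.normal(0, 1.5, n),
                          np.ones(n)])              # ruler always present
print("test accuracy:", model.score(X_test, labels_test))        # near chance
```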