Prospects for the use of neural networks in research discussed at St Petersburg University
St Petersburg University has held a series of research and practical seminars on how to use artificial intelligence technologies in research. The focus was ChatGPT, which, although it appeared only recently, has quickly grown in popularity.
Today, the question of how to use artificial intelligence, and of where to draw the boundary between human intellectual labour and artificial intelligence, is becoming increasingly acute. These questions were discussed by philosophers at St Petersburg University during the series of seminars, which brought together experts and students from the University.
There is as yet no clear understanding in the world community of who should be responsible for the results produced by artificial intelligence, although most experts agree that it is not the programme itself that should bear responsibility, said Vadim Perov, Head of the Department of Ethics at St Petersburg University.
If it is not artificial intelligence, then who is responsible: developers, owners, or users? In practice, this turns out to be difficult to determine. Who counts as a developer: the programmers, the IT company, or someone else? And who counts as a user: the recipient of a product generated by a neural network?
Vadim Perov, Head of the Department of Ethics at St Petersburg University
There are no answers to these questions yet. However, according to the University experts, it is essential to find a solution, since the challenges of regulating the use of artificial intelligence concern not only philosophers, but also lawyers. The need to develop approaches to the legal regulation of the use of neural networks was discussed by St Petersburg University lawyers at the international forum TIBO-2023.
According to Vadim Perov, the use of artificial intelligence must be controlled, because we have not yet found a way to teach a programme to differentiate between good and evil. AI algorithmically processes all the information it gains access to, and these datasets also contain negative human experience, such as stories of abuse and violence. A human with critical thinking skills and settled ideas of what is right and what is wrong can assess such materials critically, while artificial intelligence "takes everything as it is".
Developing so-called artificial "moral agents", that is, criteria for distinguishing good from evil at the level of artificial intelligence, is for now a hypothesis that is more fantastic than scientific, according to Vadim Perov. Even if "ethically correct algorithms" were developed, this would raise an even more difficult question: whose norms and values would the artificial intelligence system support?
St Petersburg University philosophers paid special attention to compliance with academic ethics when using neural networks in research. According to the expert, the use of such services in research should not be considered unethical in itself, but their use should be explicitly disclosed.
It would be ethically correct not only to mention the use of artificial intelligence algorithms in a project, but also to describe how artificial intelligence was used, so as to make clear the contribution made by the scientists themselves.
Vadim Perov, Head of the Department of Ethics at St Petersburg University
‘In some cases, this contribution may be the development of a study design and its implementation using artificial intelligence. Sometimes the contribution of the researchers will be more significant. An honest explanation makes it possible to understand what exactly the researchers have done and how competently they have used modern technologies to improve the research,’ said Vadim Perov, Head of the Department of Ethics at St Petersburg University.
The researcher is confident that ethical rules for such publications will gradually be developed in the scientific community. Yet some researchers will take advantage of the lack of such a framework, which will lead to an increase in the number of falsified studies. This, in turn, will require changes to review procedures.
During the seminars, the participants could test ChatGPT in practice. Under the guidance of Dmitrii Iarochkin, the curator of the project "Digital technologies in philosophy" and a graduate of St Petersburg University, they used ChatGPT to write a term paper on philosophy focused on the issue of free will. The quality of the text generated by ChatGPT was assessed by Igor Larionov, Head of the Department of Philosophical Anthropology at St Petersburg University.
The seminar participants concluded that it is almost impossible to get an acceptable result from ChatGPT on the first attempt. Users need prior experience with chatbots and the ability to formulate clear, well-targeted prompts.
Most people use search engines every day and may therefore assume they already have the necessary search skills. When working with a chatbot, however, this is often not enough: the neural network may give an answer that is too vague, or produce a long text that repeats the same thought in different formulations. The programme also often makes mistakes, both factual and grammatical (ChatGPT is better trained to work with English than with Russian).
It is important to understand that texts created entirely by ChatGPT have no research value. The neural network is not capable of generating anything genuinely new: it only draws on information that has already been published and is therefore already known. However, as the philosophers of St Petersburg University noted, such services can become a useful tool, for example when writing a research article. The programme can suggest a possible structure for future work, or analyse an existing text and highlight its main points, which facilitates writing a conclusion or an abstract. Artificial intelligence can also rewrite a document in a different style: more informal or, conversely, more academic.
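The auxiliary uses described above, such as outlining a paper, extracting main points, or restyling a text, all come down to formulating a precise prompt, which is exactly the skill the seminar participants found lacking. As a purely illustrative sketch (the helper function and templates below are hypothetical and were not part of the seminars), reusable prompts for such tasks might be assembled like this:

```python
# Hypothetical helper for composing task-specific prompts for a chat model.
# The task names and wording are illustrative assumptions, not an official tool.

TASK_TEMPLATES = {
    "outline": "Suggest a possible section structure for a paper on: {text}",
    "summarise": "List the main points of the following text:\n{text}",
    "restyle": "Rewrite the following text in a more {style} register:\n{text}",
}

def build_prompt(task: str, text: str, style: str = "academic") -> str:
    """Return a single prompt string for the given editing task."""
    if task not in TASK_TEMPLATES:
        raise ValueError(f"unknown task: {task!r}")
    # str.format silently ignores keyword arguments a template does not use,
    # so `style` is only substituted into the "restyle" template.
    return TASK_TEMPLATES[task].format(text=text, style=style)
```

The resulting string would then be sent as the user message to whichever chat service is used. Whatever the service, the seminar's observation holds: the more specific the prompt, the less vague and repetitive the answer.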
In the future, the philosophers at St Petersburg University plan to develop guidelines for using ChatGPT in research and in other areas that would benefit from automating routine processes. The series of seminars was organised as part of a grant from the Russian Science Foundation (project № 22-28-00379 "Transformations of the moral agency: ethical and philosophical analysis").