Vladislav Arkhipov: "Artificial intelligence cannot replace a human being as it does not know how to make decisions"
Artificial intelligence and neural networks are excellent tools for certain types of work, displaying a distinctive form of creativity and handling straightforward tasks well. However, at the current stage of their development, they cannot replace a person because they lack the ability to make decisions. This insight was shared by Vladislav Arkhipov, Professor at St Petersburg University, Head of the Department of Theory and History of State and Law, Director of the Centre for Research on Information Security and Digital Transformation at St Petersburg University, during the section "Digital Justice: A Step into the Future" at the St Petersburg International Legal Forum (SPILF).
In recent years, numerous initiatives have been launched in the realm of artificial intelligence development. The national programme "Digital Economy of the Russian Federation" is underway in Russia, within which AI technologies are acknowledged as one of the "end-to-end digital technologies". Additionally, the National AI Development Strategy for the period up to 2030 has been ratified.
Advanced technologies have not overlooked the realm of justice, exerting a significant influence on the contemporary legal landscape. However, experts have made it clear that the widespread integration of "algorithms" into the law has not yet occurred.
One reason is that the outcomes of artificial intelligence frequently conflict with fundamental values, including those enshrined in legislation.
For instance, Vladislav Arkhipov highlighted that one of the key concerns regarding machine learning is the breach of privacy and the contravention of legislation on personal data. Artificial intelligence and neural networks are trained by feeding vast amounts of information into their "memory". This information often contains personal data: not only facial images, surnames and addresses, but also various other details that can be linked to individuals with the help of additional information. The risk of infringements in this domain therefore remains unavoidable.
Furthermore, a neural network designed to produce, for instance, images, music, texts, or other artistic creations frequently "learns" from copyrighted works belonging to others. This matter has not yet been adequately addressed at the legislative or practical level, and experts indicate that the resulting legal ambiguity could pose significant challenges for copyright holders.
According to Vladislav Arkhipov, despite the evident advancements in the realm of machine learning, there is a substantial amount of knowledge that cannot be imparted to a neural network by a human.
Clearly, a machine (a computer program) is not a sentient being, but rather a tool that lacks compassion, tact, and a sense of justice. It is possible to train a neural network to avoid sensitive topics, for example by preventing it from giving advice on health issues, answering questions about criminal activity, or attempting to assist in potentially life-threatening situations. However, it cannot independently determine whether a person's decision was correct, establish guilt, or determine whether a law has been broken.
From a legal point of view, artificial intelligence cannot be considered a legal entity and therefore has no rights or obligations, nor the ability to make decisions, pass moral judgements, or interpret socially significant concepts. At present, experts believe that AI serves solely to entertain, assist with tasks, and simplify daily life.