Tatiana Chernigovskaya: "When a person relies on AI to solve complex cognitive problems, they risk losing their own creative abilities"

Professor Tatiana Chernigovskaya, Director of the Institute for Cognitive Studies at St Petersburg University, Member of the Russian Academy of Education, has been awarded the OGANESSON Prize for her contribution to popularising scientific knowledge and advancing interdisciplinary research at the intersection of neurobiology, linguistics and psychology.
In an interview for the St Petersburg University website, Tatiana Chernigovskaya discussed the risks associated with using artificial intelligence and highlighted areas where AI has not yet surpassed humans.
Could you please tell us about your current research? What recent discoveries would you like to share with an interested audience?
I do not believe anything I have done in science qualifies as a discovery. Discoveries do not happen easily; they are rare, and few scientists achieve them.
Recently, I have been focusing on the problem of consciousness in relation to language and the brain; yet, significant discoveries in this area are still far off. This is one of the most challenging research topics globally, with around 200 theories of consciousness alone. Studying these subjects is difficult and contentious due to the numerous conflicting viewpoints. In collaboration with Academician Konstantin Anokhin, Director of the Institute for Advanced Brain Studies at Moscow State University, we explore many related topics. My research covers a wide range of cognitive issues, including memory, decision-making, reading processes, and how we perceive augmented and virtual reality.
I am particularly interested in the problem of linguistic ambiguity — specifically, what occurs in a person’s mind when they read a text that can be interpreted in multiple ways. For example, differences in syntax can make it unclear who is the agent and who is the recipient of an action, and so on. We have also studied what is going on in the brains of simultaneous interpreters. This is an extremely demanding and highly stressful job, and the question is how the brain manages to handle it at such high speeds.
The deeper I explore the subject, the more fascinated I become with complex brain activity, both physical and mental, particularly the creative aspects and higher-level cognitive functions unique to humans; for no other species writes music or dances. In the highest-level athletic competitions, it is the brain that directs everything; muscles will not move without its command. And what happens in the brain when someone makes a discovery or writes poetry? This is almost impossible to study, but we try. Recently, the topic that has captivated me the most is what I call "The Brain and Music".
Generative AI tools are increasingly prevalent in our daily lives. They are successfully applied in various fields, such as pharmacology, where they help screen out harmful compounds in the search for new drugs, and design, where they generate initial sketches that artists then refine. Neural networks accelerate routine tasks and simplify work. How risky is it for humanity to become accustomed to AI assistance? Would it be better if we continued tackling challenges without relying on AI?
Artificial intelligence has been and will continue to be invaluable in processing large datasets and managing routine tasks, which can be extensive. Unlike humans, AI does not tire, processes information at high speeds, and has vast memory capabilities. It is undoubtedly useful for performing such tasks and can even be employed in research. As I have said many times before, no doctor can examine thousands of X-rays or tomography scans, describe them, and identify pathologies. Therefore, AI assistance in this area is highly beneficial and should be welcomed.
Does relying on artificial intelligence for part of one’s work alter a person? There has been limited large-scale research on this topic, as it is still a newly emergent phenomenon. However, the data available so far is concerning. I have recently read several articles in reputable international journals suggesting that entrusting complex cognitive tasks to AI can diminish a person’s own creative abilities. This convenience comes at a cost, and for us the cost is very high. If I lose the ability to think because I delegate thinking to AI, it will be my own fault.
Artificial intelligence could become very dangerous because it already outperforms humans in many tasks and will continue to improve. This is a fact. The question is how to limit its functions and maintain control. Thoughtful people are concerned about this issue. Many fear the potential loss of control over AI and advocate for the development of global systems to prevent it. Only the naïve are not worried. When some of my colleagues dismiss these concerns as alarmist, arguing that we created AI and can simply shut it down, it reveals how poorly informed they are about the situation. We will not be able to shut AI down. So, there is definitely a danger to humanity.
On the other hand, in fields related to science, space exploration, big data, medicine, and many others, AI-based tools are accepted and will continue to be used. Hence, it is crucial that we remain in control, what I refer to as ‘who is the master in the house’. AI must not operate unchecked, doing whatever it wants. This concern is widely discussed, even among developers, both in our country and in the major nations involved in AI development, which are currently the United States, China, and Russia. We have to realise that addressing this challenge requires the best minds. It is not a task for average students, nor even for A-students, but for the most talented among them. We must cultivate and support those capable of this work, as our future depends on them.
How do you feel about the trend in our fast-paced world where neural networks are increasingly given the role of creators, generating innovative ideas, while humans are often only tasked with refining these ideas or adding finishing touches to boost productivity and output? Do you think people will ever overcome the habit of seeking quick results and slow down?
I believe that if the current trend continues, with laziness trumping everything else ("I won’t do anything myself; let ChatGPT do it"), we are on a dead-end path. Tasks that require high intelligence, creativity, innovation, and the acquisition of new ideas and knowledge fall within the domain of humans. Otherwise, I have a question for humankind: if we delegate all brain work to neural networks and potentially strong AI, what will be left for us to do? Simply eat hamburgers? I strongly oppose this direction. Companies seeking faster results from their employees by using AI are being short-sighted. They may achieve quick gains; however, have they considered the long-term consequences, such as what will happen in five years?
Can you distinguish between a creative work done by a human and one generated by AI, whether it is a drawing, a short story, or a musical composition? How do you think they differ? How does a neural network’s output reveal itself, and will AI ever surpass human creativity in the arts?
This is a challenging task. I have posed a question to leading art historians: Suppose you have an original work, for instance by Caravaggio, and a perfect copy that appears almost "better" than the original; whether the copy was made by a human or by AI does not matter in this case. Can you tell them apart? World-class experts confidently say, "Yes, of course. It is strange that you are asking me this." I added the condition that no technical means can be used: no infrared rays, X-rays, canvas analysis, or paint analysis. Only human eyes. When asked how they can tell the difference, they simply reply, "I can just see it."
When we achieve a high level of expertise in any field, whether it is painting, music, literature, or science, we develop what art historians, artists, and musicians refer to as "artistic and musical acumen". This knowledge is not acquired from teachers or textbooks, but from firsthand experience. Every first-class diagnostician will tell you the same: "I just know, I can see it." If all doctors study the same textbooks, why do we seek out good doctors? Because a brilliant doctor possesses clinical acumen. They observe a patient and identify what is wrong. Why can’t the rest of us see it?
Our finest skills are uniquely ours; artificial intelligence has not yet mastered them. You might recall the phrase from a classic: "Shadow, know your place." That’s it, AI should perform the tasks we assign to it. We must direct AI, not the other way around. Otherwise, we risk losing control. We should cooperate with AI, not follow it.
How do you think the development of neural networks will impact education? Now, students can generate essays, research papers, or even entire theses using AI, and teachers must monitor the use of these tools. How can we explain to students the importance of learning to think and draw conclusions independently?
We have all been thinking about this issue, since anyone can now write a dissertation using generative language models. There is a simple solution: the dissertation defence must be oral. By asking in-depth questions that probe for deeper understanding rather than mere content recall, it quickly becomes evident whether the candidate wrote the dissertation themselves. That is all. I suspect I am not alone in reaching this conclusion.
You cannot argue with the inevitable: there is no other way out. Recently, a colleague of mine, the Rector of a major university in Moscow, shared an amusing anecdote. He used the newly developed Chinese neural network, DeepSeek, to ask who Chernigovskaya is. Within minutes, he received a detailed text analysing all my research activities, including where and under whom I studied, my early work, how my interests evolved, my current status, and my ongoing research. This demonstrates that neural networks know not only how to collect information, but also how to analyse it.
When I received this text, I was shocked and could not understand what it was. I thought someone had written a paper about me, but it turned out to be generated by DeepSeek. I must admit, this realisation made me very wary. It would be challenging for me to write such a detailed paper about any colleague. It requires extensive work with sources and considerable time to understand their professional development, the ideas they abandoned, and the ones they embraced.
The development of artificial intelligence is advancing at breakneck speed. Innovations that seemed impossible just six months ago are now a reality. I cannot even imagine what will happen, say, in a year. We must stay vigilant and rely on our own cognitive abilities, which cannot be delegated to anyone else. If we stop thinking critically, the triumph of artificial systems over human intelligence will soon become a real threat. I tell this to my master’s and doctoral students; although how they will apply this advice, I do not know.