Can neural networks write and speak for us?

Text generated by a neural network may actually outshine human writing. Not everyone can articulate their thoughts on paper fluently, and certainly not at the speed of AI. Even experts concede that telling text composed by artificial intelligence apart from text written by a human can be quite challenging.

And what lies ahead? Imagine dialing a friend, only to find she cannot speak and delegates the conversation to an artificial intelligence. Can it seamlessly transition between discussions about children, sales, colleagues, weather, men, and the meaning of life?

Rossiyskaya Gazeta sought insights into the potential of rapidly advancing AI from Timur Radbil – Doctor of Philological Sciences, professor, founder and head of the Department of Theoretical and Applied Linguistics at Lobachevsky University.

Timur Benyuminovich, from a linguistic standpoint, can artificial intelligence be labeled as intelligence?

— It depends on what meaning we attach to the concept of intelligence. If we reduce intelligence to computational operations, then yes, AI embodies intelligence. The operations AI performs with speech and text are computational operations as well. Figuratively speaking, AI predicts the next word. It instantly, incomparably faster than a human, calculates which word is most likely to follow ‘mother’: for instance, ‘my’? ‘beloved’?
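
To make the ‘next-word prediction’ concrete, here is a minimal, purely illustrative Python sketch (the toy corpus and the most_likely_next helper are invented for the example). A neural network does something of the same kind, only over vast amounts of text and with learned weights rather than raw counts.

```python
from collections import Counter

# Toy corpus; a real language model learns from billions of words,
# not a handful of them (illustrative data only).
tokens = "my mother my dear my mother my beloved my mother said hello".split()

def most_likely_next(word, tokens):
    """Count which words follow `word` and rank them by frequency."""
    followers = Counter(
        tokens[i + 1] for i in range(len(tokens) - 1) if tokens[i] == word
    )
    return followers.most_common()

print(most_likely_next("mother", tokens))
# -> [('my', 2), ('said', 1)]: the "prediction" is simply the
#    highest-frequency continuation seen in the data.
```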

Yet any person, even without being a scholar, intuitively understands that intelligence encompasses more than the ability to manipulate vast amounts of data or perform complex calculations.

AI has no life program; it cannot sense situational context. It cannot take into account the interlocutor’s intentions, their mood, whether it is raining or not, whether the content of the conversation appeals to them. In real life, all these factors are superimposed on one another and significantly influence verbalization.

I would say artificial intelligence today is a very good imitator, including in communications.

Since this imitation is, nonetheless, good, can AI engage on our behalf in daily life? Could a person, theoretically, train their ‘robot,’ create a double of their personality, a program that substitutes for them over the phone or composes letters on their behalf? In essence, become a genuinely good conversationalist, not like those robots from directory services.

— If the person training the robot takes care to anticipate all the non-standard situations that may arise in their dialogues and embeds the relevant data into the computer, then yes. Ninety percent of human communication happens through linguistic clichés. So why not replace a person with a robot if, in 90 percent of cases, the answer to ‘How are you?’ is ‘Fine’? But answering ‘How was your day?’ requires creativity and knowledge of the day’s events. Still, if evasive responses or clichés are acceptable, the robot can manage.
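
As a rough sketch of such a cliché-driven ‘double’ (the phrases and the evasive fallback below are invented for illustration, not taken from any real system), the whole mechanism fits in a few lines of Python:

```python
# Pre-programmed clichés plus an evasive fallback for anything unanticipated.
CLICHES = {
    "how are you?": "Fine, thanks. And you?",
    "what's new?": "Nothing much, same old.",
    "nice weather today!": "Yes, lovely, isn't it?",
}

def reply(utterance: str) -> str:
    """Answer with a canned cliché, or dodge if the phrase was not anticipated."""
    return CLICHES.get(
        utterance.strip().lower(),
        "Let's talk about that later, I'm a bit busy right now.",
    )

print(reply("How are you?"))       # Fine, thanks. And you?
print(reply("How was your day?"))  # the evasive fallback: no knowledge of the day's events
```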

However, the robot cannot replace you where serious subject-to-subject communication is required. We exchange not mere words and sentences but meanings and values. AI can simulate meanings to some extent, but not values, because we ourselves don’t yet know what that entails or what would have to be built into a program to instruct it to ‘generate a text possessing value’.

It’s important to understand that AI has access only to what we have put into it. It has nothing beyond that and cannot develop on its own.

Human consciousness, on the other hand, is an open synergistic system. It interacts with the surrounding environment, constantly receiving new impulses, inputs, signals, challenges. This is the specificity of the human species, our curse, and our greatness, if you will. We don’t see anything beyond our own backs, yet with the help of reason and imagination, we can extrapolate, imagine, and forecast beyond visible signals. Therefore, we can react not just responsively but creatively, forewarning and predicting.

Do you believe that technology will reach the point of replicating humans?

— Basically, it might. But again, the key word in your question is ‘replication.’ A copy, one way or another, will always fall short of the original. For instance, it will lack the right to err, to make the wrong choice. Yet from mistakes, you know, sometimes spring the greatest discoveries that alter the course of world civilization.

And what about creativity for AI? Can it write its own book? Will we see such publications on bookstore shelves?

— It’s possible that paper books written by robots have already been published, although I haven’t come across them yet. In the online sphere, AI-generated texts exist, of course. Most, however, are created as a form of entertainment. For example, AI is asked to produce an advertisement text in the style of Pushkin or Shakespeare. And it delivers.

Alright. Can it write a book ‘as a great writer’?

— As we’ve already mentioned, AI is a decent imitator and can write ‘as someone.’ This ability was predicted in the early 1960s by one of the greatest mathematicians of the 20th century, Russian academician Andrey Nikolaevich Kolmogorov.

Let’s say we input all of Gogol’s texts into a computer, calculate the frequency of each Gogolian word form, compute their collocation frequencies, and task the machine to generate a so-called Gogol-esque text. From a structural and lexical usage perspective, the difference between Gogol’s original text and the Gogol-esque one would merely be that the latter wasn’t written by Gogol.
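
As a rough illustration of that frequency-based procedure – not Kolmogorov’s actual experiment, just a toy Python sketch in which an invented mini-corpus stands in for Gogol’s collected works – a collocation (bigram) chain might look like this:

```python
import random
from collections import defaultdict, Counter

# Count how often each word form is followed by each other word form,
# then chain words together according to those frequencies.
corpus = "the overcoat of akaky akakievich was the talk of the department".split()

bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Produce a 'Gogol-esque' word chain by sampling frequent continuations."""
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        candidates, weights = zip(*followers.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```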

However, would such a text hold artistic value? Unlikely. Gogol’s text isn’t just a sequence of words arranged in a certain order; it encompasses everything behind that text. It’s Gogol’s life journey, his accumulated experiences, his emotional struggles, ethical programs, pain, tears, delight, love, and disdain. We sense all of this when reading the real Gogol. But I’m unsure whether we could capture that aura in a Gogol-esque text.

Why then do people task robots with such challenges?

— Some believe that through a Gogol-esque text, we’ll better comprehend the structure of Gogol’s language. It’s a beautiful idea, but hardly valid. We won’t understand how it functions. It’s like disassembling an alarm clock and reassembling it in a similar but different sequence. The resulting object might outwardly resemble an alarm clock, yet for some reason, it won’t ring.

Moreover, it is not really the computer on its own that creates a Gogol-esque text: artificial intelligence merely solves tasks set by humans. Whereas Nikolai Vasilyevich wrote without any external task – on his own, over a long time, in a creative frenzy.

Perhaps robots are more successful in poetry? Can one differentiate between poetry written by a human and by AI?

— We still don’t know ‘from what rubbish poetry grows, knowing no shame,’ as Akhmatova put it. Consequently, we cannot build into a program the parameters by which AI could create poetry that is not only well-structured but also artistically valuable.

However, AI can quite successfully generate poetry-like texts. And if I were told that a new, unknown text by a known poet had been found, and I had to determine whether it’s authentic or written by a robot, that would be a difficult task.

Recently, a conference was held in St. Petersburg on the use of big data in the study of poetic texts. In particular, a group of scientists discussed their work with a program that simulates the poetry of Emily Dickinson. They fed in everything the American poet ever wrote; the program learns from these unannotated texts and composes poetry on that basis.

It does a decent job. If we played a trick and claimed that a library in some American state had found a new Emily Dickinson poem, and presented the robot’s ‘creations’ instead, only the most advanced expert could detect the forgery.

Then it means AI could accidentally create a text that holds artistic value for humans after all.

— The problem with artistic value lies in cultural meanings. If AI accidentally creates such a thing, that would be wonderful. But it would indeed be accidental because if we attempted to task it with creating a brilliant masterpiece, it would ask us for the parameters of ‘brilliance,’ which we cannot provide.

Value is something that’s challenging to measure using parameters. What value does Malevich’s ‘Black Square’ have, for instance? And why was the value of Van Gogh’s art recognized only a century later?

But could AI write a student’s coursework instead? Is it capable and trustworthy enough?

— Students should write their coursework themselves. Although, de facto, they often seek outside help. Yes, AI can write a course paper well, and the simpler the task, the better it performs. But it would not occur to anyone who is actually seeking knowledge to hand such a job to AI.

Can AI replace a teacher – say, by writing a lecture?

— In theory, it’s possible. It’s easier to create a lecture resembling that of a teacher than to generate a Gogol-esque text. However, ‘easier’ is a result evaluated by us humans. To AI, it doesn’t matter what task it’s performing.

Are there professions related to speech technologies where AI can effectively, almost entirely replace humans?

— The answer is very simple: in areas where we deal with a huge number of computational operations, including those related to speech processing, yes, artificial intelligence can replace us.

However, in situations where we deal with any irregularities, where it’s not just about solving a task but also about framing it, formulating it, clarifying based on the objective situation – AI faces problems.

So, can AI replace a specialist who, relying on texts, solves statistical problems?

— It can become a good assistant to a human. Criminology, statistics, economic statistics – in these fields, where continuous data processing is required, AI can be very useful, for example in conducting online social monitoring. In criminology and in social monitoring, sentiment analysis of texts is already being applied successfully, and it can be carried out with neural networks.

Applied research tasks sometimes require processing large textual datasets: for instance, scanning a vast amount of material from social networks, determining the frequency of specific words, and establishing the contexts in which they are used. From this, AI can draw conclusions about the audience’s positive or negative attitude toward the concept we are interested in.
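
A minimal sketch of that kind of monitoring task (the posts, the target word, and the tiny sentiment lexicon below are invented for illustration; real systems rely on large annotated lexicons or neural classifiers) might look like this:

```python
# Count how often a target word appears in posts and score the tone of
# the posts that mention it with a tiny hand-made sentiment lexicon.
posts = [
    "the new park is wonderful, great place to walk",
    "the park was dirty and crowded, awful experience",
    "spent the evening in the park, lovely weather",
]
target = "park"
lexicon = {"wonderful": 1, "great": 1, "lovely": 1, "awful": -1, "dirty": -1}

mentions = [p for p in posts if target in p]
frequency = sum(p.split().count(target) for p in posts)
score = sum(lexicon.get(word.strip(","), 0) for p in mentions for word in p.split())

print(f"'{target}' mentioned {frequency} times; overall tone score: {score}")
# A positive score suggests a broadly positive attitude toward the concept,
# a negative one the opposite – a conclusion a human should still verify.
```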

Intuitively, we can answer such a question ourselves, but intuition has to be objectified – proven right or wrong. And a human still needs to check the results of AI’s work; one cannot relax and fully trust the robot’s conclusions.

There are entire factories where robots are trusted to assemble products, but there’s always human supervision. It’s the same with texts.

And does AI do well in making forecasts based on these conclusions?

— AI is not capable of forecasting the way a human is. It can only act according to an algorithm: if something has occurred 1,000 times, then there is a certain probability it will happen the same way the 1,001st time. But what if it doesn’t? If a new parameter arises and a human hasn’t input it, AI’s prediction won’t come true.
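
A toy sketch of that purely frequency-based ‘forecast’ (the data and the forecast helper are invented for illustration) shows how brittle it is:

```python
from collections import Counter

# The same outcome observed 1,000 times in a row (illustrative data).
history = ["sunny"] * 1000
counts = Counter(history)

def forecast(counts: Counter) -> str:
    """Predict whatever has happened most often so far."""
    outcome, _ = counts.most_common(1)[0]
    return outcome

print(forecast(counts))  # 'sunny' – and the 1,001st observation may still be
                         # wrong if a new factor nobody encoded enters the picture.
```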

So AI cannot predict complex things – say, the further course of evolution – even with a vast amount of information that no single human could retain?

— It would reply something like, ‘Based on what you’ve given me, I see it this way, but if you want a different answer, I don’t have the data to form it.’

Finally, the eternal question: is it fundamentally possible for AI to gain self-awareness?

— This is a worldview question: what is primary – matter or consciousness? At the current level of scientific knowledge, this question remains unresolved.

If matter is primary, then theoretically AI can gain consciousness. Functionalists believe that human consciousness can be transferred to a carrier of another type – a silicon-organic or digital one. Anthropocentrists, on the other hand, believe that consciousness is a function of the human being: it is conditioned by humans, impossible outside humans, and linked to the divine idea.

If the functionalists are right, it means there is no God. If we understand consciousness as the ability to play chess well and perform computational operations, then AI already has full-fledged consciousness. But if human consciousness also encompasses pain, fear, joy, and divine grace, then AI will never have consciousness.

Of course, the existence of the divine cannot be proven by science. But one wants to believe that God’s spirit is invisibly present in everything we are discussing. God placed it in us, and we, continuing evolution, created AI – it did not spring out of thin air.

What about the digital immortality of a personality – is it possible? What would be needed for it to be possible?

— For this, God would have to not exist. Digital immortality is just an imitation of immortality, as AI is an imitation of intellect, and communication with a robot is an imitation of interaction.

Victor Pelevin wrote about this very well in his recent trilogy on transhumanism. There, everyone who ever lived is digitized and transformed into computer programs located underground. They have access to a simulation of any reality – be it Ancient Rome or Mars. And there, all their lives unfold in infinite time. In a virtual reality that, according to the subjective feelings of these jarred beings, is indistinguishable from reality. The key word here is ‘simulation.’

You see, the notion of immortality inherently includes death. In a world where there’s no death, speaking about immortality is meaningless. Just as you can only violate a norm in the presence of a norm. If there’s none, then everything is chaos and nonsense.

— AI can become a person only if humans legislatively acknowledge that a machine with a certain serial number is a competent subject possessing rights, and write that into the constitution. But why endow an instrument, however good, with the rights of a person? I admire artificial intelligence, but let’s not confuse concepts.

Interview by Anna Mechenova
