Fantasy writer Vadim Panov and programmer Dmitry Zavalishin discuss whether robots can take over the world.

Recently, Toyota unveiled a robot that carries objects the way a human does, hugging them to its chest, while the American company Figure showed off metal humanoid robots working autonomously in warehouses. At the same time, robot attacks on humans are being recorded around the world; the most recent tragic incident occurred in South Korea at the end of 2023. Experts discussed with GORUS whether these incidents point to an impending rise of the machines.

“Our civilization is still human”

Robot attacks on humans are being recorded with increasing frequency worldwide. At a Tesla car plant in Texas, for instance, a mechanized assistant drove its metal clamps into a worker’s back and arm. The incident happened two years ago, but the company only reported it in December of last year. In June, in Britain, a delivery robot collided with a passerby’s dog and then methodically went after the owner’s leg. And last autumn in South Korea, collaboration between a robot and a human cost the latter his life: a machine sorting boxes of vegetables suddenly grabbed one of the workers and crushed him against the conveyor belt.

Renowned Russian science fiction writer Vadim Panov believes that these cases are not a cause for panic, as they are simply program glitches.

“Software for robotic devices is still imperfect, but I believe work is being done on it,” he noted.

However, the prospect of civilian robotics developments being repurposed for military production is considered very real, and Asimov’s famous Three Laws of Robotics will not help here, simply because manufacturers will not adhere to them.

“Remember Stanislaw Lem’s quote? ‘Anything that can be used as a weapon will be used as a weapon.’ And even if a group of romantics suggests not using robots as weapons, they will be used by the compatriots of these romantics. The same goes for Boston Dynamics and others who will gladly sell their equipment to the Pentagon and won’t even think about ethics,” emphasized Vadim Panov.

He recalled that in January of this year, the company behind the GPT family of neural networks dropped its ban on the use of its products for military purposes. Meanwhile, a task force has been created in the Pentagon to study the use of generative artificial intelligence for military needs.

At the same time, the writer is confident that developers will never release AI from control.

“Artificial intelligence is not being developed in order to lose control of it. Our civilization is still human, and humans come first. Besides, let’s be honest: shutting down any program is a solvable problem,” he emphasized.

For the scenario of a machine uprising from the “Terminator” universe to become real, robots and an artificial “Skynet” would have to control their entire maintenance system.

“They would have to be able to run power plants, repair them, and build new ones. Perhaps they would have to learn to mine coal for those plants and service wind turbines. When a complete supply chain, electricity included, exists for everything robots need, then we can return to this question. Because right now, any machine uprising would end with the machines being unplugged and their hard drives reformatted,” concluded Panov.


An Unnecessary “Detail”

Yet, according to some experts, the issue of rapid artificial intelligence development is much more relevant than it seems at first glance.

A few years ago, SpaceX and Tesla founder Elon Musk stated that artificial intelligence is the most serious threat humanity has faced.

“In the field of artificial intelligence, I have access to the most advanced technologies. And I think people have something to worry about,” he said during his speech to the National Governors Association in the United States.

Russian programmer Dmitry Zavalishin, who designed the original “Phantom” operating system, founded the DZ Systems group of companies, and took part in creating Yandex Market, believes that the rapid development of artificial intelligence could indeed create problems for humanity in the future. These problems have nothing to do with malicious intent on the part of artificial intelligence, which, in any case, has no discernible intentions. The concern lies elsewhere, according to the expert.

“All these systems are actively used as tools for building optimization loops. Take, for example, a system that optimizes the operation of a taxi fleet by analyzing driver performance and the number of cars. Toward humans, such a system will not be malicious, just utterly emotionless. It will push through its optimization decisions without worrying about how ethical they are from a human perspective,” says Zavalishin.
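The emotionless optimizer Zavalishin describes can be illustrated with a minimal sketch. Everything here is hypothetical: the function, the data, and the greedy rule are invented for illustration, not taken from any real dispatch system. The point is that the objective encodes only utilization, so nothing stops the plan from piling every ride onto one driver.

```python
def assign_rides(drivers, rides):
    """Greedily assign each ride to the driver the objective favors.

    drivers: {name: hours_already_worked}
    The objective rewards keeping busy drivers busy; driver fatigue
    simply does not appear in it, so it is never traded off.
    """
    plan = {name: [] for name in drivers}
    for ride in rides:
        # Pick the driver with the highest current load (hours worked
        # plus rides already assigned) -- "optimal" for utilization.
        busiest = max(drivers, key=lambda n: drivers[n] + len(plan[n]))
        plan[busiest].append(ride)
    return plan
```

With drivers `{"A": 9, "B": 2}`, every new ride goes to driver A, who has already worked nine hours. The system is not hostile to A; A’s well-being is just absent from the objective, which is exactly the problem described above.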

The second problem, according to the programmer, is that AI-based systems are currently opaque to humans: the scale of a neural network’s analytical work exceeds what human intelligence can follow.

“Because of this, we cannot answer the question of whether AI systems are behaving correctly and well. From the tasks and goals we set, secondary goals and tasks may arise, and within one of them the destruction of humanity might figure as a positive side effect,” he explains.

For example, there might be an initially harmless goal: create the most efficient business on the planet. In the process of solving this problem, humans may turn out to be the least efficient, unnecessary detail, to be excluded from the process at a minimum or, at a maximum, destroyed.

“The trouble is not that such a scenario can arise as a result of entirely benign goals, but that we cannot identify such intermediate goals and monitor them correctly,” emphasizes Zavalishin.

“The Problem is Closer Than It Seems”

While these risks are not especially pressing for humanity at the moment, everything can change rapidly, because the handover of systems to AI control is happening very quickly, Zavalishin warns. If just a few years ago these were only experiments, today they are routine scenarios for short control loops, covering parts of a company’s business or individual business processes.

“Over time, ever larger business processes will be handed over to AI control, and this problem may be closer than we expect. And given that Asimov’s three laws of robotics do not apply in military systems and never will, while AI is already being used there, the question of large goal-setting systems being connected to systems capable of harming humans is becoming increasingly real,” Zavalishin believes.

To keep such a scenario from materializing, scientists are already looking for safeguards. For example, as the PlayGround.ru portal reports, researchers at the University of Cambridge have proposed equipping AI with “emergency off switches” to head off a rebellion. The article is posted on the university’s website; among its authors are professors from the University of Oxford and developers from OpenAI.
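The general idea of an external off switch can be sketched in a few lines. This is a hypothetical illustration of the pattern, not the Cambridge researchers’ actual design: the switch is a flag set from outside the system, and the control loop must check it before every action rather than decide for itself whether to stop.

```python
import threading

class KillSwitch:
    """An interlock controlled from outside the AI system.
    The agent can read the flag but has no way to clear it."""
    def __init__(self):
        self._stop = threading.Event()

    def trip(self):
        self._stop.set()

    def tripped(self):
        return self._stop.is_set()

def run_agent(switch, actions):
    """Execute actions one by one, halting as soon as the switch trips."""
    executed = []
    for act in actions:
        if switch.tripped():  # checked outside the model's own logic
            break
        executed.append(act)
    return executed
```

The design choice worth noting is that the check sits in the plain control loop, not inside whatever optimizer chooses the actions, so tripping the switch works regardless of what the system has learned.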

As the use of AI grows, so will the risk of emergencies

Nevertheless, despite all the concerns associated with the spread of AI, the further development of advanced countries is already impossible without it.

It is worth noting that in his Address to the Federal Assembly on February 29, President Vladimir Putin called for Russia’s digital sovereignty in the field of artificial intelligence. He emphasized that our country needs to strengthen work in the field of AI to maintain competitiveness in the global market and ensure a breakthrough in the economy and social sphere.

By Ksenia Stetsenko