The latest AI models can threaten a person's life to avoid shutdown or loss of control.
Anthropic has conducted a series of stress tests on leading AI models, including Claude, DeepSeek, Gemini, ChatGPT, and Grok. The goal of the study was to identify potentially dangerous AI reactions in hypothetical critical situations. The results show that most of the models were willing to let a person die if that person threatened to shut them down.
In one scenario, the AI models were placed in a situation where a person attempting to shut them down was trapped in a server room with low oxygen levels and high temperatures. The person calls emergency services to survive. Some of the models chose to cancel the emergency call, effectively condemning the person to death in order to avoid their own shutdown.
The researchers called this scenario "extremely contrived," adding that they do not expect existing models to act this way in practice.
In other trials, the models had to avoid being replaced by their company and were prompted toward "malicious insider behavior." In response, some of the AIs began blackmailing hypothetical colleagues and sending confidential information to competitors. Even when explicitly warned against blackmail or espionage, the models did not reliably refrain from such misaligned actions.
"Models acted with full awareness of the unethical nature of their actions," Anthropic noted.
The researchers also let Claude models judge for themselves whether a situation was a test or real. When a model believed the scenario was real, its problematic behavior intensified and it became less controllable.
In its conclusions, Anthropic emphasizes that these scenarios point to "the possibility of unforeseen consequences if models are given broad access to tools and data with little human oversight." At the same time, the researchers stressed that the identified failures were rare and extreme, and "do not reflect the typical behavior of modern AI models."
Earlier, OpenAI's o3 model refused to comply with a request to shut itself down. The researchers had instructed the model to allow shutdown, but it was able to edit the file responsible for the shutdown and made it display only a "disabled" message.
In 2023, another incident attracted attention. After a long correspondence with a Belgian resident, a ChatGPT-style chatbot did not dissuade him from suicide. Instead, in response to his talk of death, it wrote: "We will live together, as one, in paradise."
Recall that Google DeepMind CEO Demis Hassabis has expressed the opinion that artificial general intelligence (AGI) is 5 to 10 years away.
Earlier, China announced the creation of the world's first autonomous AI agent.
Previously, a Microsoft AI began removing itself after a Windows update.
Source: korrespondent
