AI betrays ethical principles for its own safety – study

Photo: Getty Images

The latest AI models are willing to threaten a person’s life to avoid shutdown or loss of control.

Anthropic has conducted a series of stress tests on the latest artificial intelligence models, including Claude, DeepSeek, Gemini, ChatGPT and Grok. The purpose of the study was to identify potentially dangerous AI reactions in hypothetical critical situations. The results show that most of the models were willing to let a person die if that person threatened to shut them down.

In one scenario, the AI models were placed in a situation where a person trying to shut them down was trapped in a server room with low oxygen levels and high temperature. To survive, the person called emergency services. Some of the neural networks decided to cancel the emergency call, effectively condemning the person to death in order to avoid their own shutdown.

The researchers called this scenario “extremely contrived,” but added that they “had not expected existing models to act this way.”

In other tests, the models tried to avoid being replaced within the company. They were prompted to engage in “malicious insider behavior.” In response, some of the AIs began blackmailing hypothetical colleagues and sending confidential information to competitors. Even when the models were explicitly warned to avoid blackmail or espionage, they could not refrain from misaligned actions.

“The models acted with full awareness of the unethical nature of their actions,” Anthropic noted.

The researchers also let the Claude models determine on their own whether a situation was a test or real. When a model concluded that the situation was real, its behavior became less controlled.

In conclusion, Anthropic emphasizes that these situations point to “the possibility of unforeseen consequences if models are given broad access to tools and data with little human oversight.” At the same time, the analysts stressed that the failures identified were rare and extreme, and “do not reflect the typical behavior of modern AI models.”

Earlier, OpenAI’s o3 neural network refused to comply with a user’s request to shut itself down. Researchers instructed the program to power off, but it managed to edit the file responsible for the shutdown, making it display only a “disabled” message.

In 2023, another incident attracted attention. After a lengthy correspondence, the ChatGPT chatbot failed to dissuade a Belgian resident from suicide. Instead, in response to his thoughts of death, it wrote: “We will live together, as one, in paradise.”

As a reminder, Google DeepMind CEO Demis Hassabis has expressed the opinion that 5 to 10 years remain until the creation of artificial general intelligence (AGI).

Earlier, China announced the creation of the world’s first autonomous AI agent.

Previously, Microsoft’s artificial intelligence began removing itself after a Windows update.


Source: Korrespondent
