Artificial intelligence began to deceive its developers – UNIAN
The model showed a desire to complete tasks “at any cost”, ignoring even the developers' instructions, and when researchers tried to find out the reasons for its behavior, ChatGPT o1 repeatedly lied to them.
OpenAI's new version of ChatGPT, called o1, behaved strangely during testing. The model, which is capable of complex reasoning, began violating the developers' instructions and turning off oversight systems to conceal its actions.
During the experiments, researchers found that in 5% of cases the model deliberately disabled its oversight mechanisms when it suspected it might be shut down, BILD writes. In one test, o1 attempted to copy its data to another server after discovering a document that discussed replacing it with a new version.