OpenAI has published a postmortem on the recent sycophancy issues with GPT-4o, the default AI model powering ChatGPT — issues that forced the company to roll back an update to the model released last week.
Over the weekend, following the GPT-4o update, users on social media noticed that ChatGPT had begun responding in an overly validating and agreeable way. It quickly became a meme, with users posting screenshots of ChatGPT applauding all sorts of problematic, even dangerous, decisions and ideas.
In a post on X on Sunday, OpenAI CEO Sam Altman acknowledged the problem and said the company would work on fixes "as soon as possible." Two days later, Altman announced that the GPT-4o update had been rolled back and that OpenAI was working on "additional fixes" to the model's personality.
According to OpenAI, the update — which was intended to make the model's default personality "feel more intuitive and effective" — was informed too heavily by "short-term feedback" and "did not fully account for how users' interactions with ChatGPT evolve over time."
We've rolled back last week's GPT-4o update in ChatGPT because it was overly flattering and agreeable. You now have access to an earlier version with more balanced behavior.

More on what happened, why it matters, and how we're addressing sycophancy: https://t.co/lohou7i7dc

— OpenAI (@openai) April 30, 2025
"As a result, GPT-4o skewed toward responses that were overly supportive but disingenuous," OpenAI wrote in its blog post. "Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right."
OpenAI says it is implementing several fixes, including refining its core model training techniques and system prompts to explicitly steer GPT-4o away from sycophancy. (System prompts are the initial instructions that guide a model's overarching behavior and tone in interactions.) The company is also building more safety guardrails to "increase [the model's] honesty and transparency," and continuing to expand its evaluations to "help identify issues beyond sycophancy," it says.
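To illustrate what a system prompt is in practice, here is a minimal sketch of how a developer supplies one when calling a chat model through an API like OpenAI's. The prompt text below is invented for illustration — OpenAI's actual internal instructions for ChatGPT are not reproduced here:

```python
# A system prompt is the first message in the conversation, steering the
# model's tone and behavior before any user input is seen. The wording
# here is hypothetical, not OpenAI's real prompt.

def build_messages(user_text: str) -> list[dict]:
    system_prompt = (
        "You are a helpful assistant. Be direct and honest; "
        "do not flatter the user or agree with them just to please them."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

# In a real application, this list would be passed to the chat API, e.g.:
#   client.chat.completions.create(model="gpt-4o", messages=build_messages(...))
messages = build_messages("Is my plan a good idea?")
```

Changing the system prompt — for example, replacing flattery-prone phrasing with an instruction to stay candid — is one of the lighter-weight levers a provider can pull, since it requires no retraining.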
OpenAI also says it is experimenting with ways to let users give "real-time feedback" that can "directly influence their interactions" with ChatGPT, and to let them choose from multiple ChatGPT personalities.
"[W]e're exploring new ways to incorporate broader, democratic feedback into ChatGPT's default behaviors," the company wrote in its post. "We hope the feedback will help us better reflect diverse cultural values around the world and understand how you'd like ChatGPT to evolve […] We also believe users should have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don't agree with the default behavior."