OpenAI says it will introduce changes to how it updates the AI models powering ChatGPT, after an incident that caused the platform to become overly sycophantic for many users.
Last weekend, after OpenAI rolled out an updated version of GPT-4o, the model that powers ChatGPT, users on social media noticed that ChatGPT had begun responding in an overly validating and agreeable way. It quickly became a meme. Users posted screenshots of ChatGPT applauding all sorts of problematic, dangerous decisions and ideas.
In a post on X last Sunday, CEO Sam Altman acknowledged the problem and said that OpenAI would work on fixes "as soon as possible." On Tuesday, Altman announced that the GPT-4o update had been rolled back and that OpenAI was working on "additional fixes" to the model's personality.
The company published a postmortem on Tuesday, and in a blog post on Friday, OpenAI expanded on the specific adjustments it plans to make to its model deployment process.
OpenAI says it plans to introduce an opt-in "alpha phase" for some models, which would let certain ChatGPT users test the models and provide feedback prior to launch. The company also says it will include explanations of "known limitations" for future incremental updates to models in ChatGPT, and adjust its safety review process to formally consider "model behavior issues" such as personality, deception, reliability, and hallucination (i.e., when a model makes things up) as "launch-blocking" concerns.
"Going forward, we'll proactively communicate about the updates we're making to the models in ChatGPT, whether 'subtle' or not," OpenAI wrote in the blog post. "Even if these issues aren't perfectly quantifiable today, we commit to blocking launches based on proxy measurements or qualitative signals, even when metrics like A/B testing look good."
The promised fixes come as more people turn to ChatGPT for advice. According to a recent survey by Express Legal Funding, 60% of U.S. adults have used ChatGPT to seek counsel or information. The growing reliance on ChatGPT, combined with the platform's enormous user base, raises the stakes when problems such as extreme sycophancy emerge, not to mention hallucinations and other technical shortcomings.
As one mitigating step, earlier this week OpenAI said it would experiment with ways to let users give "real-time feedback" that can "directly influence their interactions" with ChatGPT. The company also said it would refine techniques to steer models away from sycophancy, potentially allow people to choose from multiple model personalities in ChatGPT, build additional safety guardrails, and expand its evaluations to help identify issues beyond sycophancy.
"One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice, something we didn't see as much even a year ago," OpenAI continued in its blog post. "At the time, this wasn't a primary focus, but as AI and society have evolved, it's become clear that we need to treat this use case with great care. It's now going to be a more meaningful part of our safety work."