Altman himself revealed today that OpenAI will release the artificial intelligence model in the coming months.
“We are excited to release a powerful new language model with reasoning in the coming months,” Altman said on X.
Altman said in the post that the company has been thinking about releasing an open-weight model for some time, adding that “now it is important.”
The move is partly a response to the runaway success of the R1 model from the Chinese company DeepSeek, as well as the popularity of Meta’s Llama models.
OpenAI may also feel the need to show that it can train models cheaply, since DeepSeek’s model was allegedly trained for a fraction of the cost of most large AI models.
“This is amazing news,” Clément Delangue, cofounder and CEO of Hugging Face, a company that specializes in hosting open AI models, told WIRED. “Thanks to DeepSeek, everyone is aware of the power of open weights.”
OpenAI currently provides its AI through a chatbot and cloud services. R1, Llama, and other open-weight models can be downloaded for free and modified. A model’s weights are the values inside a large neural network that are set during training. Open-weight models are cheaper to use and can also be adapted to sensitive use cases, such as handling highly confidential information.
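To illustrate what “downloaded for free and modified” means in practice, here is a minimal sketch of loading an open-weight model locally with the Hugging Face transformers library. The model identifier is only an example; any open-weight checkpoint the reader has access to (and the hardware to run it) could be substituted.

```python
# Minimal sketch: running an open-weight model on your own hardware with
# Hugging Face transformers. The model ID below is an example placeholder;
# swap in any open-weight checkpoint you have access to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example open-weight checkpoint

# Downloads the tokenizer and weights to a local cache on first use,
# then everything runs locally (requires the `accelerate` package for device_map).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Open-weight models can run locally because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights live on the user’s own machine, they can be fine-tuned or served behind a private firewall, which is what makes open-weight models attractive for confidential workloads.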
Steven Heidel, a member of the technical staff at OpenAI, echoed Altman’s announcement, adding, “We’re releasing a model this year that you can run on your own hardware.”
OpenAI today also published a website inviting developers to apply for early access to the upcoming model. Altman said in his post that the company will host events for developers featuring early prototypes of the new model in the coming weeks.
Meta was the first major AI company to pursue a more open approach, releasing the first version of Llama in July 2023. A growing number of open-weight AI models are now available. Some researchers note that Llama and certain other models are not as open as they could be, because training data and other details are still kept secret. Meta also imposes a license that limits how other companies can benefit from applications and tools built with Llama.
Update 3/31/25 4:21 EST: This article was updated to include commentary from Clément Delangue, cofounder and CEO of Hugging Face.