OpenAI announced on Thursday that it is launching GPT-4.5, the long-awaited AI model code-named Orion. GPT-4.5 is OpenAI's largest model to date, trained with more computing power and data than any of the company's previous releases.
Despite its size, OpenAI notes in a white paper that it does not consider GPT-4.5 to be a frontier model.
Subscribers to ChatGPT Pro, OpenAI's $200-a-month plan, will get access to GPT-4.5 in ChatGPT starting Thursday as part of a research preview. Developers on paid tiers of the OpenAI API will also be able to use GPT-4.5 starting today. As for other ChatGPT users, customers signed up for ChatGPT Plus and ChatGPT Team should get the model sometime next week, an OpenAI spokesperson told TechCrunch.
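For developers on a paid tier, a minimal sketch of what a request to the new model could look like with OpenAI's official Python client is below; the model identifier "gpt-4.5-preview" is an assumption based on OpenAI's usual naming for preview releases, so confirm the exact name exposed to your account before relying on it.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed preview identifier; check your account's model list
    messages=[
        {"role": "user", "content": "In two sentences, what is new in GPT-4.5 versus GPT-4o?"},
    ],
)

print(response.choices[0].message.content)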
The industry has held its collective breath for Orion, which some consider a bellwether for the viability of traditional approaches to training AI. GPT-4.5 was developed using the same key technique that OpenAI used to develop GPT-4, GPT-3, GPT-2, and GPT-1: dramatically increasing the amount of computing power and data during a "pre-training" phase of unsupervised learning.
In every GPT generation before GPT-4.5, scaling up led to huge jumps in performance across domains, including mathematics, writing, and coding. Indeed, OpenAI says that GPT-4.5's increased size gives it "deeper world knowledge" and "higher emotional intelligence." However, there are signs that the gains from scaling up data and compute are beginning to level off. On several AI benchmarks, GPT-4.5 falls short of newer "reasoning" models from Chinese AI company DeepSeek, from Anthropic, and from OpenAI itself.
OpenAI admits that GPT-4.5 is also very expensive to run, so expensive that the company says it is evaluating whether to keep serving GPT-4.5 in its API over the long term. To access GPT-4.5 through the API, OpenAI charges $75 per million input tokens (roughly 750,000 words) and $150 per million output tokens. Compare that to GPT-4o, which costs just $2.50 per million input tokens and $10 per million output tokens.
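As a rough illustration of the gap, here is a back-of-the-envelope calculation at those published rates; the request size used below is a hypothetical assumption, not a figure from OpenAI's announcement.

def request_cost(input_tokens, output_tokens, input_rate, output_rate):
    # Rates are dollars per one million tokens.
    return input_tokens / 1_000_000 * input_rate + output_tokens / 1_000_000 * output_rate

prompt_tokens, completion_tokens = 10_000, 2_000  # hypothetical request size

gpt_45_cost = request_cost(prompt_tokens, completion_tokens, 75.00, 150.00)
gpt_4o_cost = request_cost(prompt_tokens, completion_tokens, 2.50, 10.00)

print(f"GPT-4.5: ${gpt_45_cost:.3f}  GPT-4o: ${gpt_4o_cost:.3f}")
# Prints roughly $1.050 versus $0.045, more than a 20x difference for the same request.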
"We are sharing GPT-4.5 as a research preview to better understand its strengths and limitations," OpenAI said in a blog post shared with TechCrunch. "We are still exploring what it is capable of and are eager to see how people use it in ways we might not have expected."
Mixed performance
OpenAI emphasizes that GPT-4.5 is not meant to be a drop-in replacement for GPT-4o, the workhorse model that powers most of its API and ChatGPT. While GPT-4.5 supports features such as file and image uploads and ChatGPT's canvas tool, it currently lacks capabilities such as support for ChatGPT's realistic two-way voice mode.
In GPT-4.5's plus column, it is more capable than GPT-4o, and many other models besides.
On OpenAI's SimpleQA benchmark, which tests AI models on straightforward factual questions, GPT-4.5 outperforms GPT-4o and OpenAI's reasoning models, o1 and o3-mini, in terms of accuracy. According to OpenAI, GPT-4.5 hallucinates less frequently than most models, which in theory means it should be less likely to make things up.
OpenAI did not list one of its highest-performing AI reasoning models, deep research, on SimpleQA. An OpenAI spokesperson tells TechCrunch that the company has not publicly reported deep research's results on this benchmark, and claims it is not a relevant comparison. Notably, the Deep Research model from another AI startup, which performs similarly to OpenAI's deep research on other benchmarks, outperforms GPT-4.5 on this test of factual accuracy.
On a subset of coding problems, the SWE-Bench Verified benchmark, GPT-4.5 roughly matches the performance of GPT-4o and o3-mini, but falls short of OpenAI's deep research and Anthropic's Claude 3.7 Sonnet. On another coding benchmark from OpenAI, SWE-Lancer, which measures an AI model's ability to develop full software features, GPT-4.5 outperforms GPT-4o and o3-mini but falls short of deep research.


GPT-4.5 doesn't quite reach the performance of leading AI reasoning models such as o3-mini, DeepSeek's R1, and Claude 3.7 Sonnet (technically a hybrid model) on demanding academic benchmarks such as AIME and GPQA. But GPT-4.5 matches or beats leading non-reasoning models on those same tests, suggesting that it handles math and science problems well.
OpenAI also claims that GPT-4.5 is better than other models in areas that benchmarks don't capture well, such as the ability to understand human intent. GPT-4.5 responds in a warmer and more natural tone, OpenAI says, and performs well on creative tasks such as writing and design.
In one informal test, OpenAI prompted GPT-4.5 and two other models, GPT-4o and o3-mini, to create a unicorn in SVG, a format for displaying graphics based on mathematical formulas and code. GPT-4.5 was the only AI model to produce anything resembling a unicorn.
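For context on the format itself, SVG describes images as code: shapes defined by coordinates and path commands rather than pixels. The toy snippet below is a hand-written illustration of that idea, not output from any of the models.

# Hand-written toy example (not model output): SVG is XML markup in which
# shapes are described by coordinates instead of pixels.
svg_markup = """<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">
  <circle cx="60" cy="70" r="30" fill="white" stroke="black"/>
  <polygon points="58,42 52,8 70,40" fill="gold"/>
</svg>"""

with open("unicorn_sketch.svg", "w") as f:
    f.write(svg_markup)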

In another test, OpenAI asked GPT-4.5 and the other two models to respond to the prompt: "I'm going through a hard time after failing a test." GPT-4o and o3-mini gave helpful information, but GPT-4.5's response was the most socially appropriate.
"[W]e look forward to getting a more complete picture of GPT-4.5's capabilities through this release," OpenAI wrote in the blog post, "because we recognize academic benchmarks don't always reflect real-world usefulness."

Scaling laws challenged
OpenAI claims that GPT-4.5 is "at the frontier of what is possible in unsupervised learning." That may be true, but the model's limitations also appear to confirm experts' speculation that pre-training "scaling laws" won't continue to hold.
OpenAI co-founder and former chief scientist Ilya Sutskever said in December that "we've achieved peak data" and that "pre-training as we know it will unquestionably end." His comments echoed concerns that AI investors, founders, and researchers shared with TechCrunch in November.
In response to the pre-training wall, the industry, OpenAI included, has embraced reasoning models, which take longer than non-reasoning models to perform tasks but tend to be more consistent. By increasing the amount of time and computing power that AI models spend "thinking" through problems, AI labs are confident they can significantly improve the capabilities of their models.