Saturday, April 19, 2025

OpenAI introduces two new AI reasoning models, o3 and o4-mini


OpenAI announced on Wednesday the launch of o3 and o4-mini, new AI reasoning models designed to pause and work through questions before responding.

The company calls o3 its most advanced reasoning model ever, outperforming its previous models on benchmarks measuring math, coding, reasoning, science, and visual understanding. Meanwhile, o4-mini offers what OpenAI describes as a competitive trade-off between price, speed, and performance, factors that developers often weigh when choosing an AI model to power their applications.

Unlike previous reasoning models, o3 and o4-mini can generate answers using tools in ChatGPT, such as web browsing, Python code execution, image processing, and image generation. Starting today, the models, along with an o4-mini variant called "o4-mini-high" that spends more time crafting answers to improve their reliability, are available to subscribers of OpenAI's Pro, Plus, and Team plans.

The new models are part of OpenAI's effort to beat Google, Meta, xAI, Anthropic, and DeepSeek in the cutthroat global AI race. While OpenAI was first to release an AI reasoning model, o1, competitors quickly followed with versions of their own that match or exceed the performance of OpenAI's lineup. Indeed, reasoning models have begun to dominate the field as AI labs look to squeeze more performance out of their systems.

o3 almost wasn't released in ChatGPT. Sam Altman, OpenAI's CEO, signaled in February that the company intended to devote more resources to a sophisticated alternative that incorporated o3's technology. But competitive pressure apparently pushed OpenAI to reverse course.

OpenAI claims that o3 achieves state-of-the-art performance on SWE-bench Verified (without custom scaffolding), a test of coding ability, scoring 69.1%. The o4-mini model achieves similar performance, scoring 68.1%. OpenAI's next-best model, o3-mini, scored 49.3% on the test, while Claude 3.7 Sonnet scored 62.3%.

OpenAI says o3 and o4-mini are its first models that can "think with images." In practice, users can upload images to ChatGPT, such as whiteboard sketches or diagrams from PDFs, and the models will analyze the images during their "chain-of-thought" phase before answering. Thanks to this new capability, o3 and o4-mini can understand blurry and low-quality images, and can perform tasks such as zooming in on or rotating images as they reason.

On top of their image-processing abilities, o3 and o4-mini can run and execute Python code directly in the browser via ChatGPT's Canvas feature, and can search the web when asked about current events.

Beyond ChatGPT, all three models (o3, o4-mini, and o4-mini-high) will be available via OpenAI's developer-facing endpoints, the Chat Completions API and the Responses API, allowing engineers to build applications with the company's models at usage-based rates.
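As a rough illustration of what calling one of these models through the API involves, the sketch below builds a minimal Chat Completions request body for o3. The field names follow the publicly documented Chat Completions schema; the prompt text is a made-up example, and no network call is made here.

```python
import json

# A minimal Chat Completions request body targeting the o3 model.
# "model" and "messages" are the two required fields in the public schema;
# the prompt content is an illustrative placeholder.
request_body = {
    "model": "o3",
    "messages": [
        {"role": "user", "content": "Briefly explain chain-of-thought reasoning."}
    ],
}

# Serialize to JSON, as it would be sent in the body of a POST request.
payload = json.dumps(request_body)
print(payload)
```

In practice a developer would send this payload (with an API key) via the official SDK or an HTTP client, and be billed at the usage-based rates mentioned above.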

In the coming weeks, OpenAI says it plans to release o3-pro, a version of o3 that uses more computing resources to produce its answers, exclusively for ChatGPT Pro subscribers.

Altman has indicated that o3 and o4-mini may be the last standalone AI reasoning models in ChatGPT before GPT-5, a model the company has said will unify traditional models like GPT-4.1 with its reasoning models.
