OpenAI is announcing a new AI "agent" designed to help people conduct in-depth, complex research using ChatGPT, its AI-powered chatbot platform.
It's aptly called deep research.
OpenAI said in a blog post published on Sunday that this new capability was designed for "people who do intensive knowledge work in areas such as finance, science, policy, and engineering and need thorough, precise, and reliable research." It can also be useful, the company added, for anyone making "purchases that typically require careful research, such as cars, appliances, and furniture."
Essentially, ChatGPT deep research is meant for cases where you don't just want a quick answer or summary, but instead need to carefully curate information from numerous websites and other sources.
OpenAI said it is making deep research available to ChatGPT Pro users today, limited to 100 queries per month, with support for Plus and Team users coming next, followed by Enterprise. (OpenAI is aiming to roll it out to Plus users in about a month, the company said, and query limits for paid users should soon be "much higher.") OpenAI didn't give a timeline for bringing the feature to ChatGPT customers in the U.K., Switzerland, and the European Economic Area.
To use ChatGPT deep research, you simply select "deep research" in the message composer and then enter your query, with the option of attaching files or spreadsheets. (For now, it's a web-only experience, with mobile and desktop app integration coming later this month.) Deep research can take anywhere from 5 to 30 minutes to answer a question, and you'll receive a notification once the research is complete.
Currently, ChatGPT deep research results are text-only. But OpenAI said it plans to add embedded images, data visualizations, and other "analytic" outputs soon. OpenAI added that connecting to "more specialized data sources," including internal resources, is on the roadmap.
The most crucial question, of course, is: how accurate is ChatGPT deep research? AI is imperfect, after all. It's prone to hallucinations and other kinds of errors that could be especially damaging in a "deep research" scenario. That's perhaps why OpenAI said that every output of ChatGPT deep research will be "fully documented, with clear citations and a summary of [the] thinking, making it easy to reference and verify the information."
The jury's still out on whether that mitigation will be enough to combat AI errors. OpenAI's web search feature in ChatGPT, ChatGPT Search, isn't uncommonly prone to blunders and incorrect answers. TechCrunch's testing found that ChatGPT Search delivered less useful results than Google Search for some queries.
To bolster deep research's accuracy, OpenAI is using a special version of its recently announced o3 "reasoning" AI model, which was trained via reinforcement learning on "real-world tasks requiring browser and Python tool use." Reinforcement learning broadly "teaches" a model, through trial and error, to achieve a particular goal. As the model gets closer to that goal, it receives virtual "rewards" that, ideally, make it better at the task over time.
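For readers unfamiliar with the technique, the trial-and-error loop can be sketched in a few lines of Python. This is purely illustrative and not OpenAI's actual training setup: the action names and reward values are made up, and it uses a simple epsilon-greedy bandit rather than anything resembling training a large model on browser and Python-tool tasks.

```python
# Toy illustration of reinforcement learning's trial-and-error loop:
# an agent tries actions, receives a numeric "reward," and gradually
# shifts toward the actions that earned higher rewards.
# Hypothetical action names and payoffs -- not OpenAI's method.
import random

ACTIONS = ["search_web", "run_python", "give_up"]
TRUE_REWARDS = {"search_web": 0.8, "run_python": 0.6, "give_up": 0.1}  # hidden from the agent

estimates = {a: 0.0 for a in ACTIONS}   # the agent's learned value of each action
counts = {a: 0 for a in ACTIONS}
EPSILON = 0.1                           # how often to explore a random action

for step in range(1_000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: estimates[a])

    # The environment returns a noisy reward for the chosen action.
    reward = TRUE_REWARDS[action] + random.gauss(0, 0.1)

    # Update the running-average value estimate for that action (the "learning").
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

# After enough trials, the agent's estimates converge toward the true payoffs.
print({a: round(v, 2) for a, v in estimates.items()})
```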
OpenAI said this version of its o3 model is "optimized for web browsing and data analysis," adding that it "leverages reasoning to search, interpret, and analyze massive amounts of text, images, and PDFs on the internet, pivoting as needed in reaction to information it encounters […] The model is also able to browse user-uploaded files, plot and iterate on graphs using the Python tool, embed both generated graphs and images from websites in its responses, and cite specific sentences or passages from its sources."

The company said it evaluated ChatGPT deep research on Humanity's Last Exam, a benchmark covering more than 3,000 expert-level questions across a range of academic subjects. The o3 model powering deep research achieved 26.6% accuracy, which might look like a failing grade, but Humanity's Last Exam was designed to be far harder than other benchmarks in order to outpace model progress. According to OpenAI, the deep research o3 model outperformed Gemini Thinking (6.2%), Grok-2 (3.8%), and OpenAI's own GPT-4o (3.3%).
Still, OpenAI notes that ChatGPT deep research has limitations, sometimes making mistakes and drawing incorrect conclusions. Deep research can struggle to distinguish authoritative information from rumors, the company said, and it often fails to convey uncertainty accurately. It can also make formatting errors in reports and citations.
For anyone worried about the influence of generative AI on students, or on anyone trying to find information online, this kind of in-depth, well-cited output probably sounds more appealing than a deceptively basic chatbot summary with no citations. But we'll see whether most users actually subject the underlying data to real analysis and double-checking, or simply treat it as more polished text to copy.
And if all this sounds familiar, that's because Google announced a similar AI feature with the exact same name less than two months ago.