The AI coding wars are heating up. One of the main battlefields? The "context window," or working memory of an AI model: the amount of text it can take into account when producing an answer. Anthropic has just gained some ground on this front. Today, the AI startup announced a 5-fold increase in its context window as it races to compete with OpenAI, Google, and other major players.
Context windows are measured in tokens, and Anthropic's new context window for Claude Sonnet 4, one of its most powerful AI models, can handle 1 million tokens. For reference, Anthropic has previously said that a 500,000-token context window can handle about 100 half-hour sales calls or 15 financial reports. The new context window doubles that, letting users analyze dozens of research papers or hundreds of documents in a single API request, according to Anthropic.
Perhaps most importantly, its coding capabilities are much stronger: from analyzing 20,000 lines of code (with its previous context window) to an entire codebase of 75,000 to 110,000 lines.
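Those codebase figures line up with a common rule of thumb of roughly 4 characters per token for English-like text. A minimal sketch of that back-of-the-envelope math, assuming the 4-characters-per-token heuristic (real tokenizers, and code in particular, vary, so treat results as ballpark):

```python
# Rough sketch: estimate whether a body of text fits in a context window.
# Assumes the common ~4-characters-per-token heuristic; this is NOT a
# real tokenizer, just a ballpark estimate.

CHARS_PER_TOKEN = 4  # heuristic, not an actual tokenizer


def estimate_tokens(text: str) -> int:
    """Ballpark token count from character length."""
    return len(text) // CHARS_PER_TOKEN


def fits_in_window(texts: list[str], window: int = 1_000_000) -> bool:
    """True if the combined estimated token count fits in the window."""
    return sum(estimate_tokens(t) for t in texts) <= window


# Example: 100,000 code lines averaging 40 characters each
# -> 100,000 lines * 10 estimated tokens/line = 1,000,000 tokens
codebase = ["x = compute(value)  # placeholder line".ljust(40)] * 100_000
print(fits_in_window(codebase, window=1_000_000))  # -> True (exactly at the limit)
```

Under this heuristic, a codebase in the 75,000–110,000-line range fits in a 1-million-token window when average line length is in the typical 35–55-character range, which is consistent with Anthropic's stated figures.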
"This is really cool because it's one of the big barriers I saw with customers," Brad Abrams, Claude product lead, told The Verge in an interview. "They have to break their problems into these small chunks with our existing context window, and with a million tokens the model can take in the full scope of the context and tackle problems at their full scale."
Abrams said Sonnet 4 can now handle 2,500 pages of text, and "a full copy of War and Peace fits in there easily."
But Anthropic is not the first AI company to offer such a massive context window. It's playing catch-up: in April, OpenAI's GPT-4.1 offered the same.
For companies like Anthropic and OpenAI, business customers are willing to spend a lot of money on coding assistance, and that slice of revenue is especially attractive to startups burning cash at astonishing rates. OpenAI and Anthropic in particular have long been locked in an AI coding race, fighting to corner the market, rolling out competitive features and climbing the ladder one rung at a time. Last week, OpenAI launched GPT-5, touting its coding benchmarks against competitors. Anthropic's Claude is known for coding, so it makes sense that the company wants to regain some momentum as it reportedly tries to close a funding round that could value it as high as $170 billion.
Abrams said that Anthropic customers in sectors such as coding, pharmaceuticals, retail, and legal services have been particularly interested in the new context window.
Asked whether OpenAI's GPT-5 release, and the fact that OpenAI offered the larger context window first, influenced the timing, Abrams said: "Listen, we move at a fast clip here and we listen to customer feedback."
The new context window is available today in the Anthropic API for some customers, such as those with Tier 4 and custom rate limits, meaning they've spent a significant amount of time and money on the platform, with wider availability rolling out in the coming weeks, according to a post on Anthropic's blog.
