Thursday, March 12, 2026

Do large language models dream of AI agents?


While we sleep, the human brain sorts through various memories, consolidating the vital ones while discarding those that don't matter. What if AI could do the same?

Bilt, a company that offers local shopping and restaurant deals to renters, recently deployed several million agents with that hope in mind.

Bilt uses technology from a startup called Letta that allows agents to learn from previous conversations and share memories with one another. Using a process called "sleeptime compute," the agents decide what information to store in their long-term memory vault and what might be needed for faster recall.

"We can make a single update to a [memory] block and the behavior of hundreds of thousands of agents will change," says Andrew Fitz, an AI engineer at Bilt. "This is useful in any scenario where you want fine-grained control over agents' context," he adds, referring to the text prompt fed to the model at inference time.
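The idea of one update propagating to many agents can be illustrated with a minimal sketch. This is invented for illustration and is not Letta's actual API: agents here simply hold a reference to the same memory block, so editing the block changes the context every agent builds from it.

```python
# Minimal sketch of a shared memory block (hypothetical names,
# not Letta's real API): many agents reference one block, so a
# single edit changes every agent's context at once.

class MemoryBlock:
    """A piece of context shared by reference across many agents."""
    def __init__(self, text: str):
        self.text = text

class Agent:
    def __init__(self, name: str, shared_block: MemoryBlock):
        self.name = name
        self.block = shared_block  # shared reference, not a copy

    def build_context(self) -> str:
        # The block's current text is injected into every prompt.
        return f"[shared memory]\n{self.block.text}\n[conversation]\n..."

policy = MemoryBlock("Refunds are processed within 5 business days.")
agents = [Agent(f"agent-{i}", policy) for i in range(100_000)]

# One update to the block; all 100,000 agents see it immediately.
policy.text = "Refunds are processed within 3 business days."
assert "3 business days" in agents[42].build_context()
```

A real system would persist blocks server-side and resolve them per request, but the core design choice is the same: context lives in a shared, mutable store rather than being baked into each agent.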

Large language models can typically only "recall" things if the information is included in the context window. If you want a chatbot to remember your most recent conversation, you have to paste it into the chat.
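Concretely, a chatbot has no memory between requests; its "recall" is just the prior messages being re-sent every time. A toy sketch (the message format here is an assumption for illustration, not any particular vendor's API):

```python
# A chatbot "remembers" only what is inside its context window.
# Sketch: the chat history is simply re-sent with every request.

def build_prompt(history: list[dict], new_message: str) -> str:
    lines = [f"{m['role']}: {m['content']}" for m in history]
    lines.append(f"user: {new_message}")
    return "\n".join(lines)

history = [
    {"role": "user", "content": "My name is Ada."},
    {"role": "assistant", "content": "Nice to meet you, Ada!"},
]

prompt = build_prompt(history, "What is my name?")
# The earlier exchange is only "remembered" because we pasted it back in.
assert "My name is Ada." in prompt
```

Drop the `history` argument and the model has no idea who Ada is; that is the forgetfulness the rest of the article is about.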

Most AI systems can only handle a limited amount of information in the context window before their ability to use the data falters and they hallucinate or become confused. The human brain, by contrast, is able to file away useful information and recall it later.

"Your brain is continuously improving, adding more information like a sponge," says Charles Packer, CEO of Letta. "With language models, it's the exact opposite. You run these language models in a loop long enough and the context becomes poisoned; they get derailed and you just want to reset."

Packer and his cofounder Sarah Wooders previously developed MemGPT, an open source project that aimed to help LLMs decide what information should be stored in short-term versus long-term memory. With Letta, the duo has expanded their approach so that agents can learn in the background.
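The short-term/long-term split can be sketched in a few lines. To be clear, the class, names, and the crude keyword heuristic below are all invented for illustration; they are loosely in the spirit of MemGPT's tiering but are not its real interface (MemGPT has the LLM itself decide what to promote, via function calls).

```python
# Toy sketch of short- vs. long-term memory tiering. Hypothetical
# design, not MemGPT's actual interface: recent turns live in a
# bounded buffer, while "durable" facts are promoted to a list
# that never gets evicted.

from collections import deque

class TieredMemory:
    def __init__(self, short_term_capacity: int = 3):
        self.short_term = deque(maxlen=short_term_capacity)  # recent turns
        self.long_term: list[str] = []  # durable facts

    def observe(self, message: str) -> None:
        # Crude stand-in for the "what is worth keeping?" decision;
        # a real system would ask the model, not match keywords.
        if self._is_durable(message):
            self.long_term.append(message)
        self.short_term.append(message)  # old turns fall off the end

    @staticmethod
    def _is_durable(message: str) -> bool:
        keys = ("my name is", "i live in", "i prefer")
        return any(k in message.lower() for k in keys)

    def context(self) -> str:
        return "\n".join(["# long-term"] + self.long_term +
                         ["# recent"] + list(self.short_term))

mem = TieredMemory()
for msg in ["Hi!", "My name is Ada.", "What's the weather?",
            "Tell me a joke.", "Another one."]:
    mem.observe(msg)

# The name survives in long-term memory even after it has been
# evicted from the short-term buffer.
assert "My name is Ada." in mem.context()
assert "My name is Ada." not in mem.short_term
```

The design point is that the context window is rebuilt from both tiers on every turn, so the bounded buffer keeps prompts small while promoted facts persist indefinitely.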

Bilt's collaboration with Letta is part of a broader push to give artificial intelligence the ability to store and recall useful information, which could make chatbots smarter and agents less error-prone. Memory remains underdeveloped in modern AI, according to the experts I spoke to, and this undermines the intelligence and reliability of AI tools.

Harrison Chase, cofounder and CEO of LangChain, another company that has developed a method for improving memory in AI agents, says he sees memory as a vital part of context engineering, wherein a user or engineer decides what information to feed into the context window. LangChain offers companies several different kinds of memory storage for agents, from long-term facts about users to memories of recent experiences. "Memory, I would argue, is a form of context," says Chase. "A big portion of an AI engineer's job is basically getting the model the right context [information]."

Consumer AI tools are gradually becoming less forgetful, too. This February, OpenAI announced that ChatGPT will store relevant information in order to provide users with a more personalized experience, although the company has not disclosed how this works.

Letta and LangChain make the process of recall more transparent for engineers building AI systems.

"I think it's super important not only for the models to be open but also for the memory systems to be open," says Clem Delangue, CEO of the AI hosting platform Hugging Face and an investor in Letta.

Intriguingly, Packer, Letta's CEO, hints that it may also be important for AI models to learn what to forget. "If a user says, 'That one project we were working on, wipe it out of your memory,' then the agent should be able to go back and retroactively rewrite every single memory."

The notion of artificial memories and dreams makes me think of Do Androids Dream of Electric Sheep? by Philip K. Dick, the mind-bending novel that inspired the stylishly dystopian movie Blade Runner. Large language models aren't yet as impressive as the rebellious replicants of that story, but their memories, it seems, can be just as fragile.


This is an edition of Will Knight's AI Lab newsletter. Read previous newsletters here.
