Wednesday, March 11, 2026

Agentic AI is all about context engineering


Presented by Elastic


As organizations look to implement agentic AI solutions, access to proprietary data from all corners of the business will be crucial

By now, most organizations have heard of agentic AI: systems that “think” by independently gathering tools, data, and other sources of information to provide answers. But here’s the problem: their accuracy and trustworthiness depend on being given precise context. In most enterprises, that context is scattered across a variety of unstructured data sources, including documents, emails, business applications, and customer feedback.

As organizations look ahead to 2026, solving this problem will be key to accelerating the adoption of agentic AI around the world, says Ken Exner, chief product officer at Elastic.

“People are starting to realize that to do agentic AI correctly, you have to have the right data,” Exner says. “Relevance is key in the context of agentic AI, because the AI takes action on your behalf. When people have difficulty building AI applications, I can almost guarantee that relevance is the problem.”

Agents everywhere

The field is entering a disruptive period as organizations seek to gain a competitive advantage or create new efficiencies. A Deloitte study predicts that by 2026, more than 60% of large enterprises will deploy agentic AI at scale, marking significant progress from experimental phases to mainstream implementation. And Gartner forecasts that by the end of 2026, 40% of all enterprise applications will contain task-specific agents, up from less than 5% in 2025. These task-specific capabilities will transform AI assistants into context-aware AI agents.

Enter context engineering

The process of providing the right context to agents at the right time is called context engineering. Not only does it ensure that the agent application has the data it needs to give precise, in-depth responses, but it also helps the large language model (LLM) understand which tools it needs to find and use that data, and how to call those APIs.
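At its simplest, context engineering means selecting the most relevant snippets and packing them into the model's bounded context window. A minimal sketch, assuming a hypothetical `retrieve()` function that returns text snippets ranked by relevance (for example, the hits of a search query):

```python
def build_context(question, retrieve, max_chars=4000):
    """Pack the highest-ranked snippets into a bounded context block.

    `retrieve` is a hypothetical search function that returns snippets
    ordered by relevance; `max_chars` stands in for the model's
    context budget.
    """
    snippets, used = [], 0
    for snippet in retrieve(question):
        if used + len(snippet) > max_chars:
            break  # stop before exceeding the context budget
        snippets.append(snippet)
        used += len(snippet)
    context = "\n---\n".join(snippets)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

Real systems add ranking, deduplication, and token-based (rather than character-based) budgeting, but the core idea is the same: the agent only reasons well over what makes it into the window.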

While there are now open standards such as the Model Context Protocol (MCP) that enable LLMs to connect to and communicate with external data, few platforms allow organizations to build precise AI agents that consume that data and combine search, management, and orchestration natively in one place.
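Under MCP, an agent discovers tools and invokes them over JSON-RPC. A simplified sketch of a tool invocation on the wire (the tool name and arguments here are hypothetical):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_knowledge_base",
    "arguments": { "query": "refund policy for enterprise customers" }
  }
}
```

The server executes the named tool and returns its result, which the agent then feeds back into the LLM as additional context.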

Elasticsearch has long been a leading platform at the core of context engineering. Elastic recently released a new Elasticsearch feature called Agent Builder that simplifies the entire agent lifecycle: development, configuration, execution, customization, and observability.

Agent Builder helps users build MCP tools on private data using a variety of techniques, including ES|QL (Elasticsearch Query Language), a piped query language for filtering, transforming, and analyzing data, as well as modeled workflows. Users can then take these tools and combine them with prompts and an LLM to build an agent.
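For illustration, an ES|QL query has a piped, SQL-like shape in which each stage filters or reshapes the previous one (the index and field names below are hypothetical):

```esql
FROM support-tickets
| WHERE status == "open" AND priority == "high"
| STATS ticket_count = COUNT(*) BY product
| SORT ticket_count DESC
| LIMIT 10
```

A tool wrapping a query like this one gives an agent a narrow, well-defined way to answer questions such as "which products have the most open high-priority tickets?"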

Agent Builder offers a customizable, ready-to-use conversational agent that lets users chat with indexed data, and it also gives users the ability to build such an agent from scratch using a variety of tools and prompts grounded in private data.

“Data is at the center of our world at Elastic. We strive to make sure you have the tools you need to leverage that data,” explains Exner. “Simply open Agent Builder, point it to an index in Elasticsearch, and you can start talking to any data you’re connected to: any data indexed in Elasticsearch, or data from external sources through integrations.”

Context engineering as a discipline

Prompt engineering and context engineering are becoming disciplines in their own right. Neither requires a computer science degree, but expect more courses and best practices to appear, because there is an art to it.

“We want to keep it very simple,” Exner says. “People are going to have to think about how can we drive automation with AI? That’s going to drive productivity. People who focus on that will be more successful.”

Additionally, other context engineering patterns will emerge. The industry has moved from prompt engineering to retrieval-augmented generation (RAG), where information is fed to the LLM through the context window, to MCP-based solutions that help the LLM select tools. But that’s not the end.

“Given the speed of change, I guarantee new patterns will emerge quite quickly,” Exner says. “There will still be context engineering, but there will be new patterns for sharing data with LLMs and grounding them in the right information. I also foresee more patterns that enable LLMs to understand private data they have not been trained on.”

Agent Builder is now available in technical preview. Start with an Elastic Cloud trial and review the Agent Builder documentation.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they are always clearly marked. For more information, contact sales@venturebeat.com.
