Sunday, March 8, 2026

Agent autonomy without guardrails is an SRE nightmare


João Freitas is Vice President of Artificial Intelligence and Automation Engineering at PagerDuty

As the use of artificial intelligence in large organizations continues to evolve, leaders are increasingly looking for new solutions that will deliver greater return on investment. The latest wave of this continuing trend is the adoption of AI agents. However, as with any new technology, organizations must ensure that they deploy AI agents responsibly, in a way that allows them to be both fast and secure.

More than half of organizations have already deployed AI agents to some extent, and more expect to follow suit in the next two years. However, many early adopters are re-evaluating their approach. Four in 10 technology leaders regret not building a stronger governance foundation from the beginning, suggesting they have been quick to adopt AI but have room to improve the policies, principles and best practices that ensure the responsible, ethical and legal development and use of AI.

As AI adoption accelerates, organizations must strike the right balance between risk exposure and the implementation of guardrails that ensure the safe use of AI.

Where do AI agents pose potential risks?

To implement AI more safely, there are three main areas to consider.

The first is Shadow AI: employees using unauthorized AI tools without explicit approval, bypassing sanctioned tools and processes. IT should create the processes needed for experimentation and innovation so that more effective ways of working with AI can be introduced safely. While Shadow AI has been around as long as AI tools themselves, the autonomy of AI agents makes it easier for unapproved tools to operate outside of IT’s remit, which can introduce new security risks.

Second, organizations need to close AI ownership and accountability gaps so they are prepared for incidents or faulty processes. The power of AI agents lies in their autonomy. However, if agents act in unexpected ways, teams must be able to determine who is responsible for resolving any issues.

The third risk occurs when there is no explanation of the actions taken by AI agents. AI agents are goal-oriented, but how they achieve their goals can be unclear. AI agents must have explainable logic underlying their actions so that engineers can trace and, if necessary, reverse actions that may cause problems with existing systems.

While none of these risks should delay implementation, addressing them will help organizations stay secure.

Three guidelines for responsible AI agent adoption

Once organizations identify the risks that AI agents may pose, they must implement guidelines and guardrails to ensure safe use. By taking these three steps, organizations can minimize these risks.

1: Set human supervision as the default

AI agency is growing at a rapid pace. However, human oversight is still needed as AI agents gain the ability to act, make decisions, and pursue goals that can impact key systems. By default, humans should remain in the loop, especially when it comes to business-critical applications and systems. Teams using AI must understand what actions it can take and where intervention may be needed. Start conservatively and increase the level of agency granted to AI agents over time.

As a result, operations teams, engineers, and security professionals need to understand the role they play in overseeing AI agent workflows. Each agent should be assigned a specific owner so that supervision and responsibility are clearly defined. Organizations must also make it possible for any human to flag or override an AI agent’s behavior when an action has a negative outcome.
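
As a rough illustration, the sketch below shows one way such ownership and override records could be modeled. The AgentRegistry class, the agent and owner names, and the flag/override behavior are hypothetical assumptions, not features of any particular platform.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: every agent is registered with a named human owner,
# and any team member can flag or override one of its actions.

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                              # the human accountable for this agent
    overridden_actions: set = field(default_factory=set)

class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, agent_id: str, owner: str) -> None:
        # Refuse to deploy an agent that has no accountable owner.
        if not owner:
            raise ValueError(f"Agent {agent_id} must have a named owner")
        self._agents[agent_id] = AgentRecord(agent_id, owner)

    def flag(self, agent_id: str, action_id: str, reported_by: str) -> str:
        # Any human can flag an action; it is routed to the agent's owner for review.
        owner = self._agents[agent_id].owner
        return f"Action {action_id} flagged by {reported_by}; routed to {owner}"

    def override(self, agent_id: str, action_id: str) -> None:
        # Record the override so the agent will not retry the action.
        self._agents[agent_id].overridden_actions.add(action_id)

registry = AgentRegistry()
registry.register("remediation-bot", owner="alice")
print(registry.flag("remediation-bot", "restart-42", reported_by="bob"))
registry.override("remediation-bot", "restart-42")
```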

When considering tasks for AI agents, organizations should understand that while traditional automation is good at handling repetitive, rule-based processes with structured inputs, AI agents can perform much more complex tasks and adapt to new information far more autonomously. This makes them an attractive solution for all kinds of tasks. However, as AI agents are deployed, organizations should control what actions the agents can take, especially in the early stages of a project. Teams working with AI agents should therefore have approval pathways in place for high-impact activities, so that the agent’s scope does not extend beyond the intended use cases and risk to the broader system is minimized.
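
A minimal sketch of such an approval pathway might look like the following, assuming a hypothetical classification of actions by impact and defaulting anything unclassified to high impact.

```python
from enum import Enum

# Hypothetical sketch of an approval pathway: actions classified as high impact
# are held until a human approves them; low-impact actions run automatically.

class Impact(Enum):
    LOW = "low"
    HIGH = "high"

# Assumed classification; in practice this would come from organizational policy.
ACTION_IMPACT = {
    "summarize_incident": Impact.LOW,
    "restart_service": Impact.HIGH,
    "delete_resource": Impact.HIGH,
}

def execute_action(action: str, approved_by: str | None = None) -> str:
    # Unknown actions default to HIGH impact: start conservatively.
    impact = ACTION_IMPACT.get(action, Impact.HIGH)
    if impact is Impact.HIGH and approved_by is None:
        return f"'{action}' queued for human approval"
    suffix = f" (approved by {approved_by})" if approved_by else ""
    return f"'{action}' executed{suffix}"

print(execute_action("summarize_incident"))           # runs automatically
print(execute_action("restart_service"))              # held for approval
print(execute_action("restart_service", "on-call"))   # runs after approval
```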

2: Bake security in

The introduction of new tools should not expose the system to new security threats.

Organizations should consider agent platforms that meet high security standards and are backed by enterprise-grade certifications such as SOC 2, FedRAMP, or equivalent. Moreover, AI agents should not have unrestricted control over an organization’s systems. At a minimum, an AI agent’s permissions and security scope must match its owner’s, and any tools added to the agent should not grant it extended permissions. Restricting AI agents’ access to systems based on their role will also ensure a smoother implementation. Maintaining full logs of all actions taken by an AI agent can also help engineers understand what happened in the event of an incident and trace the problem.
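
One conservative reading of that constraint is to require an agent’s permission set to stay within its owner’s scope and to log every attempted action. The sketch below illustrates the idea with hypothetical permission names and agent names; it is not a real platform API.

```python
import logging

# Hypothetical sketch: an agent's permission set is kept within its human
# owner's, and every action the agent attempts is written to an audit log.

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("agent.audit")

OWNER_PERMISSIONS = {"read_metrics", "restart_service"}   # assumed owner scope

class ScopedAgent:
    def __init__(self, name: str, permissions: set[str], owner_permissions: set[str]):
        # The agent cannot be granted anything its owner does not already have.
        if not permissions <= owner_permissions:
            raise PermissionError(f"{name}: agent scope exceeds owner scope")
        self.name = name
        self.permissions = permissions

    def act(self, action: str, target: str) -> None:
        if action not in self.permissions:
            audit_log.info("DENIED  %s: %s on %s", self.name, action, target)
            raise PermissionError(f"{action} is not permitted for {self.name}")
        audit_log.info("ALLOWED %s: %s on %s", self.name, action, target)
        # ... the real remediation step would run here ...

agent = ScopedAgent("remediation-bot", {"restart_service"}, OWNER_PERMISSIONS)
agent.act("restart_service", "checkout-api")   # allowed and logged
```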

3: Make the results understandable

The use of artificial intelligence in an organization can never be a black box. The rationale for each action should be surfaced so that any engineer reviewing it can understand the context the agent used to make decisions and access the traces that led to those actions.

Inputs and outputs for each action should be recorded and available. This will help organizations gain a solid overview of the logic behind the AI agent’s actions, providing significant value if something goes wrong.
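
As a minimal sketch of what such a record could look like, the snippet below captures the inputs, outputs, and rationale for each action as a structured trace; the field names and storage approach are assumptions for illustration only.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: each agent action is captured as a structured trace
# holding its inputs, outputs, and the rationale the agent gave for acting.

def record_trace(agent_id: str, action: str, inputs: dict,
                 outputs: dict, rationale: str) -> str:
    trace = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "outputs": outputs,
        "rationale": rationale,
    }
    # In practice this would go to durable, queryable storage; serializing it
    # here stands in for letting an engineer replay the decision later.
    return json.dumps(trace, indent=2)

print(record_trace(
    agent_id="remediation-bot",
    action="restart_service",
    inputs={"alert": "high error rate on checkout-api"},
    outputs={"status": "service restarted"},
    rationale="Error rate exceeded the alert threshold and the runbook recommends a restart.",
))
```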

Security underpins the success of AI agents

AI agents offer organizations enormous opportunities to accelerate and improve existing processes. However, organizations that do not prioritize security and strong governance may expose themselves to new threats.

As AI agents become more common, organizations must ensure they have systems in place to measure agent performance and to take action when agents cause problems.

Read more from our guest authors. You might also consider submitting your own entry! See our guidelines here.
