Friday, March 13, 2026

A single poisoned document can leak “secret” data via CHATGPT

Share

Latest generative AI models are not only independent chatbots generating text-you can easily connect them to your data to give personalized answers to your questions. Openai CHATGPT can be combined Gmail was allowed to check the GitHub code or find meetings on the Microsoft calendar. But these connections can be used – and scientists have shown that only one “poisoned” document can do this.

The recent findings of safety researchers Michael Bargura and Tamir Ishaya Sharbata, revealed today at the Black Hat Hacker conference in Las Vegas, show how the weakness of OPENAI connectors allowed to separate confidential information from the Drive account using an indirect quick injection attack. In the demonstration of the attack, Called agentflayerBargury shows how the secrets of programmers could be distinguished, in the form of APi keys, which were stored on a demonstration drive account.

The susceptibility emphasizes how connecting AI models with external systems and sharing more data increases the potential attack area for malicious hackers and potentially multiplies ways to introduce gaps in security.

“There is nothing that the user must do to be violated, and there is nothing that the user must do so that the data can get out,” says Bargury, CTO in the security company Zenita, Wired. “We showed that this is completely zero clicking; we only need your email, we provide you with a document and all this. Yes, it is very, very bad,” says Bargury.

Opeli did not immediately answer Wired to comment on the susceptibility to the connectors. The company introduced connectors to ChatgPT as a beta function at the beginning of this year and its Web lists At least 17 different services that can be combined with his accounts. He says that the system allows you to “enter your tools and data to chatgpt” and “search for files, download data and refer to the content directly on the chat.”

Bargury claims that he reported discoveries to Openai at the beginning of this year and that the company quickly introduced mitigation to prevent the technique he used to extract data via the connector. The way the attack works means that only a circumscribed amount of data could be distinguished – documents could not be deleted as an attack.

“Although this problem is not specific to Google, it illustrates why the development of solid protection against fast injection attacks is important,” says Andy Wen, senior director of security management in Google’s working space, pointing to the company Recently reinforced AI security measures.

Latest Posts

More News