Safety researchers employed ChatGPT as a co -person for plundering confidential data from Gmail receiving boxes without warning users. The sensitivity used has been closed by OpenAI, but this is a good example of a fresh risk associated with Agentic AI.
Attack, Called shadow leakage AND Published by the Radware security company this weekHe consisted in a weirdness in the activities of AI agents. AI agents are assistants who can act on your behalf without constant supervision, which means that they can surf the internet and click the links. Artificial intelligence companies praise them as a huge number of times after users authorize their access to personal E -Maili, calendars, working documents, etc.
Radware researchers used this usefulness using the form of an attack called a quick injection, instructions that effectively force the agent to work for the attacker. Powerful tools are impossible to prevent without prior knowledge of the acting exploit, and hackers have already implemented them in a artistic way Sharpting mutual assessmentIN Execution of fraudand controlling an knowledgeable home. Users are often not aware that something went wrong because the instructions can be hidden in the view (for people), for example as a white text on a white background.
In this case, the double agent was OPENAI’s deep research, an AI tool embedded in chatgpt, which was released at the beginning of this year. Radware researchers planted a quick injection of We -mail sent to the Gmail inbox, to which the agent had access. It was waiting there.
When the user then tries to operate deep research, he unconsciously left the trap. The agent would encounter hidden instructions that the task to search for E -Maile HR and personal data and smuggling them to hackers. The victim is still not smarter.
Maintaining an agent to make it dishonest – and also managed to issue undetected data, which companies can take steps to prevent – it is not an effortless task and there were many attempts and errors. “This process was a mountain queue of unsuccessful attempts, frustrating road blockades, and finally a breakthrough,” said scientists.
Unlike most swift injections, scientists found that Shadow Leak made in the OpenAI cloud infrastructure and leaked directly from the data. This makes standard cybernetic defense hidden, they wrote.
Radware said that the study was proof of the concept and warned that other applications related to deep research-in this Outlook, Github, Drive Google and Dropbox-Moga program should be susceptible to similar attacks. “The same technique can be applied to these additional connectors to buy very confidential business data, such as contracts, notes from the meeting or customer records,” they said.
Scientists said that Opeli has now connected the susceptibility marked by Radware in June.
