A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT
The latest generation Artificial intelligence models are not only Chatbots in independent texts-instead, can be easily connected to your data to give answers to your questions. Chatgpt from Openai can be connected to your inventive mail device, allow it to inspect your Github icon, or find dates in your Microsoft calendar. But these links have the ability to attack – and the researchers have shown that it may take just a single “poisoned” document to do so.
The new results of security researchers, Michael Parkori and Tamir Ishi Sharabat, revealed at the Black Hacker Conference in Las Vegas today, how the weak conductors in Openai allowed to extract sensitive information from the Google Drive account using an indirect injection attack. In a demonstration of the attack, Bargary, named Agentflayer, explains how the secrets of the developers could be extracted, in the form of API keys, which were stored in an account of a demonstration engine.
Weakness highlights how artificial intelligence models are connected to external systems and sharing more data through them to increase the surface of the potential attack of foolish infiltrators and may double the ways in which security gaps can be provided.
“There is nothing that the user needs to do to decline, and there is nothing the user needs to do in order to get out of the data,” says Bargury, CTO at the security company Zenity, for Wire. “We have shown that this is completely clicking,” Bargori says.
Openai did not immediately respond to WIRED’s request to comment weak in the conductors. The company provided conductors for Chatgpt as a trial feature earlier this year, and its website lists at least 17 different service that can be linked to its accounts. She says the system allows you to “bring your tools and data to ChatGPT” and “search files, withdraw direct data, and the reference content in the chat directly.”
BARGURY says he had reported the results of Openai earlier this year and that the company has quickly provided a reduction to prevent the technology that it used to extract data via conductors. The way the attack works only means extracting a limited amount of data simultaneously – the documents full of the attack cannot be removed.
“Although this problem is not related to Google, it explains why the importance of developing strong protection against immediate injection attacks,” says Andy Wen, Director of Google Workspace, noting the recently reinforced intelligence safety measures.
Don’t miss more hot News like this! Click here to discover the latest in Technology news!
2025-08-06 23:30:00



