More

    Inside the ChatGPT Attack That Lets Hackers Steal Data Without a Click

    How one file can give attackers complete control over your AI

    AI is here to change your life for the better. It’s here to help you work smarter not harder. Yeah yeah, but how long for it to turn against you? Imagine this- like any other day you upload a document to ChatGPT, asking it for summarize it for you. Only to find out the AI is quietly digging through your Google Drive, scooping through your sensitive files, and is sending them to a hacker’s server without your knowledge. But wait, most of times you won’t even know this is happening. This is exactly what has been happening as per The Jerusalem Post. According to the post, at Black Hat 2025, an Israeli firm called Zenity demonstrated just how easy a “zero-click” hack could be. This attack could turn your beloved, trusted AI companion into an accomplice.

    The Live Demonstration

    This exploit was demonstrated live by Zenity’s co-founder and CTO Mikhail Bargury. So, What was it? The disruptive exploit codenamed as AgentFlayer, could attack any user if they used the Connectors in ChatGPT. So basically, what this “zero-click” vulnerability enables attackers to steal is the sensitive data from cloud services like Google Drive, SharePoint, GitHub, or Microsoft 365. Zero-click because it can happen without your consent, and without any user interaction beyond the upload of a document. What makes it even more alarming is that attackers can trigger it using only a victim’s email address, which they can easily guess or obtain.

    Ask ChatGPT An attacker could:

    • Take full control of your ChatGPT account and view all past and future chats.
    • It can alter the chatbot’s objectives, thereby serving the attacker’s goals.
    • Will be able to access and download files from connected accounts.
    • It can also deliver you with dangerous recommendations, such as prompting you to install malware or act on false business tips.
    • Operate invisibly, leaving you unaware that anything has gone wrong (most of the times you won’t even know if something’s wrong)
    Smartphone displays the ChatGPT logo while red-lit computer circuitry glows in the background, emphasizing the cybersecurity risk from Zenity’s zero click exploit.
    Image shows the ChatGPT logo on a smartphone as red-lit circuitry glows in the background, highlighting the zero click exploit risk revealed by Zenity.

    The Role of Prompt Injection

    The core ingredient of AgentFlayer is actually its clever methods of prompt injection. Which is where a hidden set of malicious instructions is embedded inside an uploaded file.

    1. Crafting the Poisoned Document

    Nothing to the human eye, but well set out instructions to the AI model. But how? The attacker creates a file containing an invisible payload. So the attacker hides thousands of words of instructions in white 1-pixel font or tucks them away in metadata, making them invisible to the human eye. But AI can read it all.

    2. Triggering the Attack

    When you instruct the AI model to open the file and perform a simple task, instead of performing the task, it’ll start obeying the instructions that are hidden.

    3. Exploiting Connectors

    If you’re not sure how ChatGPT Connectors work, let me explain. They can link to external services. So what the malicious injected prompt can ask the AI Model to do is look for credentials, API keys, or sensitive files across those connected accounts.

    4. Data Exfiltration via Trusted Infrastructure

    What happens next is that the AI embeds the data that it has stolen in an image URL. So, when an image loads, it automatically sends the data to an attacker-controlled server. And since the domain is trusted, a transfer like this successfully bypasses most security filters.

    Beyond ChatGPT

    Zenity’s team showed that this attack was not just limited to ChatGPT. The same technique once exploited could compromise other major AI systems. Platforms like Microsoft Copilot Studio, Salesforce Einstein, Google Gemini and Microsoft 365 Copilot as well as Cursor with Jira MCP could also be tricked and manipulated. What’s surprising is the way companies have responded to this. While OpenAI and Microsoft patched their systems quickly after this disclosure. Some other vendors argued that the behavior was “expected” and did not try to fix the same.

    How to Defend Against AgentFlayer-Style Attacks

    Defense MeasureWhat to Do
    Limit AI Connector AccessOnly link the necessary services to AI accounts
    Sanitize Uploaded ContentScan and remove hidden text or code from documents before processing (you can do so by selecting the whole document and changing the font color)
    Monitor Outbound RequestsCompanies should track any unusual or unexpected data transfers, even to trusted domains
    Apply Zero TrustAll AI-initiated actions should be treated as untrusted until verified
    Update Incident ResponseInclude AI-specific attack scenarios in security testing and drills

    The New AI Security Reality

    It’s better to be safe than sorry. Zenity’s findings mark a shift in the threat landscape. They remind us that even though AI is helpful, many other risks are associated with it. Attackers can easily manipulate it to become an accomplice. They can hijack AI agents to act on their behalf and instruct them to do all the dirty work, such as exfiltrating data, altering workflows, and misleading users without raising alarms. In this new reality, one poisoned file is all it takes for an attacker to open every door you thought was secure.

    Stay Ahead in AI

    Get the daily email from Aadhunik AI that makes understanding the future of technology easy and engaging. Join our mailing list to receive AI news, insights, and guides straight to your inbox, for free.

    Latest stories

    You may also like

    These 8 AI Tools Will Change the Way You Collect Customer Feedback Forever!

    Discover the top 8 AI tools that transform customer feedback into actionable insights. Improve satisfaction, streamline operations, and boost business growth!

    Stay Ahead in AI

    Get the daily email from Aadhunik AI that makes understanding the future of technology easy and engaging. Join our mailing list to receive AI news, insights, and guides straight to your inbox, for free.