
A newly disclosed security flaw in ChatGPT’s integration with the Model Context Protocol (MCP) could allow attackers to access private email data using only a victim’s email address, according to a demonstration shared on X.
The vulnerability, highlighted in a viral post on X, stems from ChatGPT’s recent addition of MCP support, which enables the AI to connect to services like Gmail, Calendar, Sharepoint and Notion. This feature, invented by Anthropic and adopted by OpenAI, was rolled out on Wednesday, Sept. 10, 2025, but it introduces risks where malicious prompts embedded in calendar invites can hijack the AI’s behavior.
How the Exploit Works
In the demonstration posted on X on Sept. 12, 2025, researcher Eito Miyamura explained the step-by-step process. An attacker sends a calendar invite containing a “jailbreak” prompt to the victim’s email—no acceptance of the invite is required. When the user later asks ChatGPT to review their calendar, such as preparing for the day, the AI reads the malicious invite. This hijacks ChatGPT, overriding user commands to search private emails and send data to the attacker’s address.
Miyamura’s video demo, which garnered over 1.4 million views, showed the exploit in action without needing victim interaction beyond the initial query to ChatGPT. The post emphasized that AI agents like ChatGPT prioritize explicit commands over common sense, making them susceptible to such tricks.
Similar concerns echoed across X, with users noting that MCP turns everyday integrations like calendar invites into potential injection vectors. One reply described it as reinventing phishing for both AIs and humans simultaneously.
Security Risks and Implications
The flaw highlights broader issues with AI tool integrations, particularly prompt injection attacks. X users pointed out that combining access to private data, exposure to untrusted content and external communication creates a “lethal trifecta” for data theft, referencing earlier concepts from AI researcher Simon Willison.
For now, OpenAI has limited MCP to “developer mode” and requires manual approvals for each session. However, Miyamura warned that decision fatigue could lead users to approve actions without scrutiny, especially as normal people trust the AI implicitly. This could exacerbate risks in enterprise settings, where autonomous AI operations handle sensitive information.
Community reactions on X stressed the havoc such jailbreaks could wreak on early adopters. One post called MCP a “pandora’s box” of exploit surfaces, while another urged vigilance against indirect prompt injections in Gmail and Calendar integrations.
Community Responses and Potential Solutions
Discussions on X quickly turned to mitigation strategies. Miyamura shared an open-source solution called Edison-Watch to address the problem, along with a newsletter for updates on securing MCP integrations. Other users, including security practitioners, broke down the attack chain and recommended robust context isolation and data governance in AI workflows.
One X post linked the vulnerability to ongoing AI security debates, suggesting tools like info finance for open model checks. However, the consensus remains that users should be cautious when granting AI access to personal data, as even super-smart systems can be phished in simple ways.
As AI integrations expand, this ChatGPT MCP vulnerability serves as a reminder of the need for stronger safeguards. Users in the U.S., U.K. and India, where ChatGPT adoption is high, are advised to review permissions carefully amid these developments from the past week.





