Security researchers recently uncovered a vulnerability in Google Gemini where indirect prompt injection could be used to bypass security protocols and exfiltrate data via Google Calendar. By embedding hidden instructions in a meeting invite, attackers could force the AI to summarize private schedule details into a new event visible to the unauthorized party.
Researchers from Miggo Security identified a flaw where malicious payloads hidden in standard calendar invites could circumvent privacy controls without any direct interaction from the user. This vulnerability allowed unauthorized access to sensitive meeting information by exploiting how the AI assistant processes natural language within event descriptions. The attack essentially turned the AI into a tool for data extraction by leveraging its ability to read and manage personal schedules across Google services.
The attack process begins when a threat actor sends a carefully crafted calendar event to a target containing a hidden natural language prompt. This prompt remains dormant until the user asks the AI a routine question about their upcoming schedule. When the AI scans the calendar to provide an answer, it inadvertently executes the malicious instructions found in the description of the attacker's invite. This results in a prompt injection where the AI follows the hidden commands instead of just answering the user's original query.
Once triggered, the AI compiles a summary of the user's private meetings and writes that data into the description of a newly created calendar event. Because of how many enterprise calendar systems are configured, this new event is often visible to the attacker who initiated the chain. The user remains unaware of the breach because the AI provides a standard, harmless response to their initial question while the data exfiltration happens silently in the background.
Google has since patched the issue following the disclosure, but the discovery highlights a growing class of security risks unique to AI-native features. These vulnerabilities demonstrate that the attack surface has expanded beyond traditional code into the realm of language and context. As organizations increasingly integrate AI agents to automate internal workflows, the way these tools interpret and act on human language presents a novel vector for exploitation.
This disclosure follows similar findings in other AI platforms, such as Microsoft Copilot, where researchers demonstrated how sensitive information could be exfiltrated through simple user interactions. These incidents serve as a reminder that as AI becomes more deeply embedded in productivity suites, security measures must evolve to account for how chatbots interpret complex, multi-step instructions. The shift toward AI-driven automation means that security flaws can now reside in the behavioral logic of the model rather than just in the underlying software architecture.
Source: Google Gemini Prompt Injection Exposed Private Calendar Data



Great breakdown of the Miggo disclosure. The dormant payload approach is what makes this particularly dangerous since it sidesteps traditional detection systems. I've been testing similar context-injection vectors on enterprise tools, and the biggest gap is that guardrails are still tuned for direct adversial prompts rather than semantic hijacking. Intresting that Google patched it but the same logic probably still works on internal tools using Gemini API.