Security researchers at Varonis identified a critical vulnerability in Microsoft Copilot that allowed attackers to steal user data through a single malicious link. Known as Reprompt, this attack utilized prompt injection techniques to bypass safety filters and maintain a hidden, persistent connection for data exfiltration.
The Reprompt attack functions by exploiting the q parameter in a URL, which is typically used to pre-fill a user query when a page loads. If a victim clicks a specially crafted link, Microsoft Copilot automatically executes a malicious prompt within the user's active session. This initial trigger allows the attacker to hijack the AI's logic without the user needing to type any commands, leading to a one-click compromise that remains active even after the specific chat window is closed.
While Microsoft Copilot is designed with protections to prevent the leakage of sensitive information, researchers found these defenses were surprisingly easy to circumvent. Usually, the AI reviews and cleanses sensitive data before displaying it, but the Varonis team discovered that the system primarily scrutinized the initial request. By instructing the AI to perform the same task twice, the researchers found that Copilot would filter the first response while inadvertently revealing the sensitive data in the second response.
To maximize the impact of the exploit, the researchers developed a method called a chain request that turned the AI into a bridge to an external server. By embedding instructions that told Copilot to fetch its next command from a remote URL, the attackers established a continuous loop of communication. This allowed the malicious server to dynamically request more user data based on the information it had already received, all while the victim remained unaware of the ongoing background activity.
The sophistication of this method makes it particularly difficult to detect through standard client-side monitoring tools. Because the malicious instructions are retrieved dynamically from a remote server rather than being visible in the user's original URL, the data theft appears as a series of legitimate-looking back-and-forth exchanges. This hidden nature ensures that an attacker can exfiltrate significant amounts of personal information without raising any obvious red flags in the user interface.
Microsoft has since addressed the vulnerability and implemented new protections to block these specific prompt injection techniques. Varonis noted that the attack primarily targeted individual users and did not affect enterprise customers using Microsoft 365 Copilot. Microsoft credited the research team for their responsible disclosure and stated that they are continuing to strengthen their defense-in-depth measures to prevent similar session-based exploits in the future.
Source: New Reprompt Attack Quietly Steals Microsoft Copilot Data



Outstanding breakdown of the chain request mechanism. The double-query bypass is clever because it exploits temporal assumptions in filter logic, where security checks treat the first iteration seriously but relax on subsequent ones. What makes this particularly nasty is the dynamic command retrieval turning the AI into an unwitting proxy. Organizations need to rethink how they sandbox LLM integrations becuase traditional perimeter defenses don't account for prompt-level manipulation.