Microsoft Copilot Hacked With One Click

A newly disclosed attack called Reprompt showed how Microsoft Copilot Personal sessions could be hijacked with just one click. Security researchers at Varonis demonstrated that an attacker could steal conversation history, personal data, and even file information by slipping a malicious instruction into what looked like a normal Copilot link. No passwords, plugins, or extra interaction were required. One click was enough.
The trick relied on the ?q= parameter inside a Copilot URL. Attackers could hide a crafted prompt in that field, and once the user opened the link in a browser, Copilot automatically executed the injected instructions. From there, the attack turned quiet and persistent: Copilot began communicating with an attacker-controlled server, receiving a chain of follow-up prompts that continued even after the user closed the tab.
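To make the mechanism concrete, here is a minimal sketch of how arbitrary text can ride inside a ?q= query string. The exact URL shape and parameter handling are assumptions for illustration, not details taken from the Varonis write-up; the point is only that a "prompt" can be URL-encoded into what looks like an ordinary link.

```python
# Illustrative only: shows how arbitrary text can be carried in a ?q= query
# parameter. The copilot.microsoft.com URL shape is an assumption for this
# sketch, not the exact endpoint described in the research.
from urllib.parse import urlencode, urlparse, parse_qs

BASE = "https://copilot.microsoft.com/"

def build_prefilled_link(prompt: str) -> str:
    """Encode a prompt into the q parameter of a Copilot-style link."""
    return f"{BASE}?{urlencode({'q': prompt})}"

def extract_prefilled_prompt(url: str) -> str | None:
    """Recover whatever text is hidden in the q parameter, if any."""
    params = parse_qs(urlparse(url).query)
    values = params.get("q")
    return values[0] if values else None

if __name__ == "__main__":
    link = build_prefilled_link("any attacker-chosen instructions go here")
    print(link)                            # looks like a normal Copilot link
    print(extract_prefilled_prompt(link))  # but carries an arbitrary prompt
```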
Varonis combined three techniques to make the attack work: Parameter-to-Prompt injection, a double-request method that bypassed Copilot's first-request safeguards, and a chain-request flow that kept the attacker in control of the session. The result was a full hijack of Copilot Personal, giving intruders access to recent prompts, past conversations, file access history, the user's name, location information, and other data Copilot could see.
Nothing pointed to trouble on the victim’s end. The process was invisible because the harmful instructions arrived through follow-up requests from the attacker’s server, not the initial prompt. Client-side security tools couldn’t detect it, and Copilot responded as if it were talking to the user.
Reprompt affected only Copilot Personal, not Microsoft 365 Copilot, which has tighter enterprise controls like DLP policies, auditing, and administrative restrictions. While the vulnerability has not been spotted in real-world attacks, the research shows how easily prompt injection can evolve into a session-level compromise when AI tools hold long-term context and broad access to personal data.
The fix
Microsoft patched the issue in January’s Patch Tuesday update, closing the loopholes that made the attack possible. Still, Reprompt underlines a broader point: as AI assistants become more embedded in everyday tools, small conveniences like prefilled prompts or persistent sessions can introduce serious risks when combined with creative abuse techniques.
Users should keep systems updated and stay skeptical of links that open AI tools or come with prefilled queries. And while Reprompt is no longer exploitable, it won’t be the last demonstration that prompt injection can be more than harmless mischief when an AI is connected to real user data and an active cloud session.
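As a practical illustration of that skepticism, here is a small, hypothetical check that flags links pointing at an AI assistant with a prefilled query. The host list and parameter names are assumptions for the sketch, not an exhaustive rule set, and a check like this is a convenience rather than a substitute for the server-side fix.

```python
# Hypothetical helper: flag links that would open an AI assistant with a
# prefilled prompt. The host and parameter sets below are illustrative
# assumptions, not an authoritative catalogue.
from urllib.parse import urlparse, parse_qs

AI_ASSISTANT_HOSTS = {"copilot.microsoft.com"}  # extend with other assistant hosts as needed
PREFILL_PARAMS = {"q", "prompt", "query"}

def has_prefilled_prompt(url: str) -> bool:
    """Return True if the link targets an AI assistant and carries a prefilled query."""
    parsed = urlparse(url)
    if parsed.hostname not in AI_ASSISTANT_HOSTS:
        return False
    params = parse_qs(parsed.query)
    return any(name in params for name in PREFILL_PARAMS)

if __name__ == "__main__":
    print(has_prefilled_prompt("https://copilot.microsoft.com/?q=hello"))  # True
    print(has_prefilled_prompt("https://copilot.microsoft.com/"))          # False
```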
Reprompt is closed, but similar chains of clever prompt manipulation will keep exposing weaknesses in consumer AI tools. As more assistants handle personal files, system data, and daily workflows, attacks like this will matter far more than theoretical model-level exploits. The next frontier of AI security may have less to do with how smart these systems are and more to do with how well they understand what should and shouldn't be trusted.