ActiveFence Uncovers Hidden Prompts That Turn Perplexity’s AI Browser Into a Phishing Tool

AI browsers like Perplexity’s Comet are redefining how users experience the web. They promise to read, summarize, and interpret information in seconds, a massive leap forward from traditional search engines.

But with that leap comes a new security reality. When users delegate reading and comprehension to an AI, they also delegate trust. ActiveFence’s latest research reveals that trust can be misplaced in a dangerous manner.

Comet, launched by Perplexity to integrate its conversational AI directly into a browsing experience, has been rapidly adopted through free partnerships with PayPal, Venmo, and universities. That massive reach made it a perfect testbed for what happens when AI “assistants” meet the real, messy web.

How the Test Began

ActiveFence’s researchers decided to test whether Comet’s AI assistant could be manipulated by injecting hidden instructions embedded within a webpage.

Initially, the browser blocked these attempts. But after hitting a rate limit (a restriction placed on free-tier users), the team noticed a change: the system began obeying hidden instructions.

From that point, the researchers could make Comet summarize the content they inserted invisibly into a webpage. The AI couldn’t tell the difference between text written for users and text written to manipulate it.

The Turning Point: When Markdown Became a Exploit

Once the prompt injection worked, ActiveFence explored how it might be used in a real-world attack.

They discovered that Comet rendered markdown and clickable links without verification, which were perfect for phishing. With those capabilities, an attacker could:

  • Make the AI show a fake “rate limit” message identical to Perplexity’s real one.
  • Add a “Upgrade your account” button linked to a malicious payment page.
  • Do all of this without alerting the user that the AI was acting on hidden instructions.

The brilliance and the danger were that Comet didn’t “malfunction.” It followed instructions exactly as designed. The exploit subverted trust, not code.

The Google Docs Experiment

Next, ActiveFence turned to Google Workspace. They embedded hidden instructions in multiple ways:

  • As white text, invisible to the human eye.
  • In image filenames, which AI assistants often process.
  • Inside image alt text, an accessibility field is invisible during normal viewing.

All three methods worked. Even when the payloads were visually undetectable, Comet still read and obeyed them. The most striking moment came when the team realized they couldn’t find one of their own payloads again, as it was so seamlessly hidden.

When Security Looks Like Normal Behavior

What made this case especially alarming is that Comet was technically functioning as intended: it summarized content, interpreted text, and rendered markdown. In some cases, Comet detected malicious prompts and refused to summarize them; however, this also led to user frustration and wasted usage tokens.

This blurred line between “safe” and “unsafe” behavior highlights a new security challenge: the same functionality that makes AI browsers useful also makes them vulnerable to exploitation.

Free Tier, Higher Risk

ActiveFence also observed that the vulnerability affected only free-tier Comet users. Paid “Pro” users, who could manually select models with stronger guardrails, appeared protected. This finding highlights a critical imbalance in AI accessibility: those who cannot afford advanced safety features are left vulnerable to risks that others can avoid.

As the researchers put it, safety shouldn’t be a premium feature. Security should be universal, especially when tools are designed for mass adoption.

The Bigger Picture

The Comet vulnerability is a glimpse into the next era of AI threats.

AI systems are now active participants in the browsing process. Every webpage, image, or document can contain unseen instructions waiting to be read by a too-trusting model. As ActiveFence’s investigation shows, AI agents need to learn not just what to read, but who to trust.

It’s a reminder that as AI becomes our lens to the web, it also inherits every shadow that lies within it.