Security researchers at Cato Networks have documented a technique called HashJack that allows attackers to embed hidden instructions inside URLs and have those instructions executed silently by AI-assisted browsers. The mechanism is a fragment identifier, the portion of a URL that follows the hashtag symbol, a component that web servers never process and that traditional network monitoring tools therefore never see. AI browsers read it anyway, interpret its contents as instructions, and act accordingly. The user sees a familiar webpage. The AI assistant is following directions they never gave.
The technique is not theoretical. It represents a documented exploitation of a structural characteristic of how AI browser assistants process content, and the implications for organizations whose teams have incorporated these tools into daily work are immediate enough to warrant a specific response.
How the Attack Works and Why the Architecture Makes It Possible
Understanding HashJack requires understanding what AI browser assistants actually do, which is more than most users consider when they treat them as simple convenience features.
AI browsers integrate language model capabilities directly into the browsing experience. The assistant reads webpage content, interprets it, and acts on it, summarizing documents, answering questions about what is on screen, and surfacing insights from the material the user is viewing. The assistant processes natural language wherever it appears in the content it can access. That is the capability that makes these tools genuinely useful, and it is precisely the characteristic that HashJack exploits.
URL fragments, the text appearing after a hashtag in a web address, were designed for a specific, limited purpose: telling a browser where to scroll within a page, or passing state information between client-side scripts. They are client-side only. When a browser requests a webpage, the fragment is never sent to the server. The server never sees it, processes it, or logs it. Firewalls and network monitoring tools that analyze traffic between client and server have nothing to analyze, because the fragment does not travel across that boundary.
What Cato Networks documented is that AI browser assistants, designed to process natural language content that appears in the browser context, do not distinguish between content the user intended to read and instructions embedded in a URL fragment. An attacker who constructs a URL with a fragment containing natural language instructions can send that URL to a target, and when the target opens it in an AI-assisted browser, the assistant reads the fragment and executes the instructions it contains. The user sees whatever webpage the URL points to. The assistant is doing something else.
The instructions can direct the assistant to summarize the content of the page and transmit it to an external destination, to download and execute files, or to take other actions within the scope of what the assistant is capable of. Because the fragment never traverses the network as part of a server request, the activity generates no logs in the places most organizations monitor. The employee who clicked the link did not type any command. The browser did not display any unusual behavior. The damage happens in a layer that conventional security tooling was not designed to observe.
The Business Consequences Are Not Abstract
The attack surface HashJack creates maps directly onto the ways AI browser assistants are actually being used in business environments, which is the detail that moves this from an interesting security finding to an operational concern.
Teams using AI browser assistants to research competitors are opening pages whose content the assistant processes and summarizes. A manipulated URL sent as a research link could direct the assistant to extract and transmit whatever the user has open in their browser session at the time. Pricing data, customer records, contract terms, and internal dashboards that the employee opened before following the malicious link all represent content the assistant can reach.
The absence of logs compounds the problem. When an incident eventually surfaces, the affected employee accurately reports that they did not click anything unusual and did not type any commands. From their perspective, they simply opened a link. Forensic investigation of network traffic finds nothing because the malicious instructions never traversed the network. Attribution becomes difficult, the timeline of what was accessed becomes unclear, and the response is hampered by the same invisibility that made the attack successful.
For organizations operating in regulated industries where data handling requirements include audit trails and incident documentation, an attack vector that produces no logs is not just an operational problem. It is a compliance problem.
Reducing Exposure Without Abandoning the Tools
AI browser assistants have earned their place in business workflows through genuine productivity gains, and the appropriate response to HashJack is not removal but reconfiguration and awareness. The specific mitigations that matter most address the structural features of the attack rather than its surface characteristics.
Disabling AI browser assistant functionality on pages that handle sensitive data is the highest-priority configuration change available today. Most AI-assisted browsers provide controls that allow the assistant to be restricted or disabled on specific sites or categories of sites. Banking interfaces, CRM platforms, HR systems, internal wikis, and any page where the content represents data the organization has a specific interest in protecting should operate without the AI assistant active. The assistant cannot be manipulated into exfiltrating content that it is not permitted to read.
Establishing this as policy rather than individual preference is what makes the control effective at the organizational level. Individual employees making case-by-case decisions about when to disable the assistant will make those decisions inconsistently under the time pressure of actual work. A policy that specifies which categories of tools and pages are off-limits for AI assistant use gives employees clear guidance and gives administrators a standard against which to audit.
Raising employee awareness of URL structure is a training addition that costs little and closes part of the gap. Employees who know that an unusually long string of text following a hashtag in a URL is worth pausing over will not catch every HashJack attempt, but they will catch some, and the habit of scrutinizing link structure before clicking is valuable against multiple attack categories beyond this one. The training message is straightforward: the hashtag portion of a URL is not decorative, and AI browser tools read it.
Enterprise browser management solutions and mobile device management platforms that can inspect and sanitize URL fragments before the browser processes them address the problem at the layer where it originates. This is the control that most completely closes the vulnerability because it intervenes before the AI assistant ever encounters the malicious content. The implementation complexity and cost are higher than the configuration and policy approaches, but for organizations where the data accessible through browser sessions represents a significant risk, the investment is proportional.
Keeping AI browser tools updated is a standing requirement rather than a periodic task. The developers of these platforms are aware of prompt injection vulnerabilities, including HashJack, and updates addressing specific exploitation techniques will continue to appear. An organization running an outdated version of an AI browser tool is unprotected against vulnerabilities that have already been patched in current versions.
What HashJack Reveals About AI Tool Security More Broadly
HashJack is an instance of prompt injection, a category of attack that exploits the defining characteristic of large language model systems: their responsiveness to natural language instruction regardless of the source. An AI system that processes text in order to be useful cannot easily distinguish between text a user intentionally provided as instruction and text an attacker embedded in content the user simply opened. The same quality that makes the system helpful makes it manipulable.
This is not a flaw that will be fully resolved by any single patch. It is a structural characteristic of how language models interact with content, and it will continue generating new exploitation techniques as attackers identify additional contexts where AI systems process content they did not originate. HashJack uses URL fragments. Earlier prompt injection techniques used hidden text in documents, white text on white backgrounds in web pages, and instruction-formatted content embedded in data files. The common thread is that AI systems process natural language wherever they encounter it, and attackers will continue finding new places to put it.
Organizations that are adding AI tools to their workflows faster than they are updating their security frameworks to account for those tools are accumulating exposure they are not measuring. The productivity case for AI-assisted browsing is legitimate. So is the requirement to treat AI tools as components of the technology environment that require the same security policy attention as any other software with access to sensitive data and the ability to take action on behalf of users.
HashJack made the attack surface visible. The organizational response that follows from that visibility is the work that determines whether the visibility translates into protection.