Become a fan of Slashdot on Facebook

 



Forgot your password?
typodupeerror
AI Security

Security Holes Found in OpenAI's ChatGPT Atlas Browser (and Perplexity's Comet) (scworld.com) 20

The address bar/ChatGPT input window in OpenAI's browser ChatGPT Atlas "could be targeted for prompt injection using malicious instructions disguised as links," reports SC World, citing a report from AI/agent security platform NeuralTrust: NeuralTrust found that a malformed URL could be crafted to include a prompt that is treated as plain text by the browser, passing the prompt on to the LLM. A malformation, such as an extra space after the first slash following "https:" prevents the browser from recognizing the link as a website to visit. Rather than triggering a web search, as is common when plain text is submitted to a browser's address bar, ChatGPT Atlas treats plain text as ChatGPT prompts by default.

An unsuspecting user could potentially be tricked into copying and pasting a malformed link, believing they will be sent to a legitimate webpage. An attacker could plant the link behind a "copy link" button so that the user might not notice the suspicious text at the end of the link until after it is pasted and submitted. These prompt injections could potentially be used to instruct ChatGPT to open a new tab to a malicious website such as a phishing site, or to tell ChatGPT to take harmful actions in the user's integrated applications or logged-in sites like Google Drive, NeuralTrust said.

Last month browser security platform LayerX also described how malicious prompts could be hidden in URLs (as a parameter) for Perplexity's browser Comet. And last week SquareX Labs demonstrated that a malicious browser extension could spoof Comet's AI sidebar feature and have since replicated the proof-of-concept (PoC) attack on Atlas.

But another new vulnerability in ChatGPT Atlas "could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant's memory and run arbitrary code," reports The Hacker News, citing a report from browser security platform LayerX: "This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News. The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes....

"What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session," Michelle Levy, head of security research at LayerX Security, said. "By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers. In our tests, once ChatGPT's memory was tainted, subsequent 'normal' prompts could trigger code fetches, privilege escalations, or data exfiltration without tripping meaningful safeguards...."

LayerX said the problem is exacerbated by ChatGPT Atlas' lack of robust anti-phishing controls, the browser security company said, adding it leaves users up to 90% more exposed than traditional browsers like Google Chrome or Microsoft Edge. In tests against over 100 in-the-wild web vulnerabilities and phishing attacks, Edge managed to stop 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexity's Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages.

From The Conversation: Sandboxing is a security approach designed to keep websites isolated and prevent malicious code from accessing data from other tabs. The modern web depends on this separation. But in Atlas, the AI agent isn't malicious code — it's a trusted user with permission to see and act across all sites. This undermines the core principle of browser isolation.
Thanks to Slashdot reader spatwei for suggesting the topic.

Security Holes Found in OpenAI's ChatGPT Atlas Browser (and Perplexity's Comet)

Comments Filter:
  • Looks like they have no clue what they are doing. These mistakes are DUMB.

  • "...I think I'm gonna have a heart attack and die from NOT surprise!"

    • by gweihir ( 88907 )

      Fitting, I think.

      Looks that actual competence, skill, insight and general intelligence is in short supply at AI companies. Quite the irony.

  • In today's hype-driven game, where billions are at stake, immature products are released in order to create excitement and attract investment
    The only safe way to use these products is on an isolated test bench with the assumption that they will be insecure and bug-ridden

    • You assume they are going to be testing it.
      I doubt that is still a valid assumption.
      I mean, they'll test it well enough to get it out by Friday, but is there any incentive left to find the deeper bugs?

      Fear that Google Gemini will capture more market share while we fix a bug might override what used to be called common sense.
    • by tlhIngan ( 30335 )

      It's not immature products - all are based on Chromium which is very well mature at this stage having been around nearly 20 years (as WebKit later forked to Blink).

      The problem is fundamentally, the AI works on the webpage you can see (and the bits you don't). And the bits you don't make it vulnerable to prompt injection.

      Prompt injection happens because you can't separate the control information from the data - and is not new. AT&T found out in the 60s and 70s when phone phreaking was common because of i

  • "Well, well, well, not so easy to make a browser that doesn't suck shit huh?"

  • AI prompt injection is the new SQL injection. Unfortunately, there's no sure fire way to prevent it. I highly recommend the Gandalf game [lakera.ai] if you haven't played yet.
    • by gweihir ( 88907 )

      There are sure ways to prevent it: Cut the data-path.

      If you insist on doing abysmally stupid things like running AI "agents", well, then there is no way to prevent this and a host of other attacks. But doing stupid things always comes at a cost. Hence this is as expected.

      • doing stupid things always comes at a cost

        It seems many understand this but try to ensure it's someone else who pays.

        • Why not? Life is too short to worry about what your CEO is promising, and he's the one being paid to take the risk. Do your thing, polish the resume, and keep abreast of new technology trends.
        • by gweihir ( 88907 )

          Sad but true. Hence we need liability in commercial software.

    • Most developers still don't understand SQL injection. I know because when I interview developers, only about 25% of candidates can explain to me how it works, and how to prevent it. That statistic holds up whether the candidate is junior, or has years of development experience. The ones that can't explain SQL injection, don't get hired.

      Prompt injection is even more insidious, because LLMs don't have a clear boundary between context and commands. But I don't think it's unsolvable. Most AI issues can by large

  • Well, this greatly increases my confidence in Aardvark [slashdot.org].

    • I'm not sure I agree there, these browsers are doing stoopid shit while Aardvark is just supposed to make sure they are doing that safely.
      Should Aardvark be exerting that level of control over the actual functionality of the browser? (probably Yes). Do Aardvark's developers know that?

      [I'm having difficulty posting at present, Cloudflare errors]

  • "shareholders first" > "safety first" ?

  • So the OpenAI was not used or not suffitiently adequrate to do review of code safetly, eh? Perhaps they could outsource...?

  • That we are going to eventually have a catastrophic failure from the move fast and break things folks. And we won't see it coming.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...