Root Exploit For NVIDIA Closed-Source Linux Driver

possible writes, "KernelTrap is reporting that the security research firm Rapid7 has published a working root exploit for a buffer overflow in NVIDIA's binary blob graphics driver for Linux. The NVIDIA drivers for FreeBSD and Solaris are also likely vulnerable. This will no doubt fuel the debate about whether binary blob drivers should be allowed in Linux." Rapid7's suggested action to mitigate this vulnerability: "Disable the binary blob driver and use the open-source 'nv' driver that is included by default with X."
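
For readers who want to follow that advice, the change usually amounts to editing the Device section of /etc/X11/xorg.conf and restarting X. A rough sketch (the identifier and module notes below are typical examples, not taken from the advisory, so adjust them to your own config):

    Section "Device"
        Identifier "Videocard0"
        # Driver   "nvidia"     # the closed-source blob
        Driver     "nv"         # the open-source driver shipped with X
    EndSection

    # If Section "Module" was switched to NVIDIA's GLX, point it back at
    # X.org's "glx" module (or drop it) so no blob code gets loaded.

You lose 3D acceleration, but the 2D desktop keeps working.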
  • useless suggestion (Score:4, Insightful)

    by pe1chl ( 90186 ) on Monday October 16, 2006 @04:20PM (#16459049)
    Rapid7's suggested action to mitigate this vulnerability: "Disable the binary blob driver and use the open-source 'nv' driver that is included by default with X."

    This is as useless as suggesting "Install Linux" when a Windows vulnerability has been found!
    • by Anonymous Coward on Monday October 16, 2006 @04:22PM (#16459089)
      stfu. Say first post next time like normal people.
    • by Azarael ( 896715 )
      At least there is a way to avoid the problem. Half the time I can't even be bothered to install the driver and get X reconfigured properly. It is concerning to see that it can be exploited through a remote website, though (according to Rapid7).
    • by renoX ( 11677 )
      I fully agree, since the open-source nv driver didn't work for my GeForce 6600 (Kubuntu 6.06 LTS).

      As an aside, I wonder why there isn't some kind of 'backup X' configuration with the vesa driver for those who have a problem with their driver?
      At first I made the mistake of using the fbdev driver instead of the vesa driver while trying to get X running so I could use a web browser to fetch the closed-source driver. This was frustrating, especially as Kubuntu starts with some kind of splash image during boot, so I knew that it was
      • As an aside, I wonder why there isn't some kind of 'backup X' configuration with the vesa driver for those who have a problem with their driver?

        There is. It's called creating a simple config with the vesa driver. All servers look in the same place for their config file by default, so there isn't any good way to do this beyond providing you with a config file that gives you a failsafe. The X server can't be counted on to detect whether its output is what it ought to be, so there's no automated way it could
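
        For what it's worth, such a failsafe is only a few lines away: copy your working config to something like /etc/X11/xorg.conf.failsafe (the name is just an example) and change only the Driver line in its Device section:

          Section "Device"
              Identifier "Videocard0"
              Driver     "vesa"    # instead of "nvidia", "fbdev", etc.
          EndSection

        Then start the server against it with Xorg's -config option whenever the normal setup breaks.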

        • by cortana ( 588495 )
          Wait for Xorg 7.2. Input and Output hotplugging may just eliminate the X server's config file forever!
    • Re: (Score:2, Interesting)

      by Caligari ( 180276 )
      Seeing as there is no source code, and NVidia do not appear to have released a fix, using the Open Source X driver appears to be the only viable solution. Do you have a better suggestion? You are at the mercy of your proprietary vendor.

    • This is as useless as suggesting "Install Linux" when a Windows vulnerability has been found!

      Not really. You assume that this is somehow incredibly difficult. In actuality, the difficult part has already been done. That happened when the end user installed the binary-only nVidia driver. Going back to the driver supplied by the distribution should be easy by comparison.

      Sure you're not going to get the 3-D performance benefits, but you'll at least not get your machine rooted.
    • by JensenDied ( 1009293 ) on Monday October 16, 2006 @04:43PM (#16459483)
      FTFA (a comment posted by Anonymous on Monday, October 16, 2006 - 13:22):

      NVIDIA released the 1.0-9625 driver which fixes this bug last month: http://www.nzone.com/object/nzone_downloads_rel70betadriver.html [nzone.com]

      It's a bit ironic how these Rapid7 guys are foaming at the mouth about NVIDIA's awareness of the issue when Rapid7 wasn't even aware that it's been fixed for weeks now.
    • Re: (Score:3, Insightful)

      Actually, this is a good idea. The kernel-side binary blob that nvidia ships is mostly used for 3D operations: you don't really use it in your day-to-day desktop experience.

      The one "acceleration" that the X.org 2d desktops use is mostly render (for doing font AA, etc). But the X.org 2d drivers can provide that without using kernel drivers.

      The proprietary module provides you with an alternative, proprietary 2D driver, but it's possible to use the nv one, which I think was also written by nvidia. I don't know if it
  • Allowed? (Score:5, Insightful)

    by 99BottlesOfBeerInMyF ( 813746 ) on Monday October 16, 2006 @04:21PM (#16459073)

    This will no doubt fuel the debate about whether binary blob drivers should be allowed in Linux.

    Of course they should be allowed. How can that even be prevented? The more important question is what can be done to either provide more secure replacements or make sure binaries can be functional without having to be trusted by the OS.

    • Re:Allowed? (Score:4, Insightful)

      by Aim Here ( 765712 ) on Monday October 16, 2006 @04:39PM (#16459399)
      They might be prevented by pointing out that copyright law's definition of a derivative work could well cover most Linux drivers, so that the kernel's license makes it unlawful to distribute them under anything other than the GPL.

      The Nvidia blob is perhaps a special case, since it's really a windows driver with a GPLed wrapper, so the Linux community tends to turn a blind eye, as long as the driver isn't distributed alongside the kernel. Anyone trying to write a blob driver for Linux, from scratch, would be on shaky ground. Even Linus has said that if you wrote your driver with Linux in mind, it's a derivative work.

      This is a grey area and there's not a lot of case law to decide exactly what is, and isn't, a derivative work in software, so a debate does occasionally flare up, most recently with the Kororaa livecd.
    • The more important question is what can be done to either provide more secure replacements or make sure binaries can be functional without having to be trusted by the OS.

      We're talking about a graphics driver here. It pretty much has to execute in kernel mode. You know, where you can do anything you want on the system? Sure, we could have a userspace graphics driver, but it would still need a kernel-mode driver stub and it would be substantially slower, which is not really an option for most people.

      • Re: (Score:3, Interesting)

        We're talking about a graphics driver here. It pretty much has to execute in kernel mode. You know, where you can do anything you want on the system? Sure, we could have a userspace graphics driver, but it would still need a kernel-mode driver stub and it would be substantially slower, which is not really an option for most people.

        With the current design of the Linux kernel + userspace, I agree, but I'm unconvinced that that has to be the case. I see inherent stumbling blocks to untrusted video drivers,

        • This is a buffer overflow in the closed-source Nvidia X11 driver, not the kernel modules. As far as I'm aware, Nvidia has no binary blobs that get loaded into the Linux kernel. ATI does, but Nvidia doesn't; all their kernel modules are open source.

          And for the record, X11 drivers run in userland, as root, so they can access hardware ports directly. There's no real reason for them to require root, except that allowing any process to access hardware ports will undermine the security and stability of the syst
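
          The "access hardware ports" bit is literal; here is a tiny sketch of what that looks like from userland (generic VGA register poking, nothing NVIDIA-specific, x86 Linux only), and it needs root for exactly the reason given above:

            /* ports.c - read the VGA miscellaneous output register (0x3CC).
             * Build: cc -O2 ports.c    Run as root (needs raw I/O privileges). */
            #include <stdio.h>
            #include <sys/io.h>

            int main(void)
            {
                /* Ask the kernel for access to one byte-wide I/O port. */
                if (ioperm(0x3CC, 1, 1) != 0) {
                    perror("ioperm (are you root?)");
                    return 1;
                }
                unsigned char misc = inb(0x3CC);
                printf("VGA misc output register: 0x%02x\n", misc);
                ioperm(0x3CC, 1, 0);  /* drop the access again */
                return 0;
            }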
      • by iamacat ( 583406 )
        It pretty much has to execute in kernel mode

        Why? Once VRAM and memory-mapped registers are brought into the process's address space, why shouldn't most of the code run in user mode and, say, read IRQs from some /dev interface? Then it can allocate a 1GB texture cache, and rarely used portions of it can still get paged out if another process needs the memory more.
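
        That "bring it into the address space" step already has a plain-Linux analogue; a rough sketch of mapping a card's register BAR into user space through sysfs (the PCI address below is a made-up example, and a real driver obviously needs far more than this, but it shows the mechanism):

          /* mapbar.c - map the first page of a PCI BAR into this process.
           * Build: cc mapbar.c    Run as root; the device path is hypothetical. */
          #include <fcntl.h>
          #include <stdio.h>
          #include <stdint.h>
          #include <sys/mman.h>
          #include <unistd.h>

          int main(void)
          {
              const char *bar = "/sys/bus/pci/devices/0000:01:00.0/resource0";
              int fd = open(bar, O_RDWR | O_SYNC);
              if (fd < 0) { perror("open"); return 1; }

              size_t len = 4096;  /* just the first page of the aperture */
              volatile uint32_t *regs = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                             MAP_SHARED, fd, 0);
              if (regs == MAP_FAILED) { perror("mmap"); return 1; }

              printf("first register word: 0x%08x\n", regs[0]);

              munmap((void *)regs, len);
              close(fd);
              return 0;
          }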
    • The more important question is what can be done to either provide more secure replacements or make sure binaries can be functional without having to be trusted by the OS.

      Wait for Hurd, because the micro-kernel approach makes sure that drivers run in isolation?

      Yes, I know that this is put in a flamebait-ish manner, but is there any better argument for keeping your kernel as small as possible? Even if the server that handles the device crashes, the rest of your system won't be compromised. The per

  • To Theo de Raadt (Score:5, Insightful)

    by jazman_777 ( 44742 ) on Monday October 16, 2006 @04:23PM (#16459105) Homepage
    Thank you for your stand against blobs.
    • Re: (Score:2, Informative)

      by grub ( 11606 )

      You beat me to it. This is now 2 (or 3?) exploits thanks to binary blobs that OpenBSD is immune to.
    • Re: (Score:2, Insightful)

      by LWATCDR ( 28044 )
      Except that Open Source isn't exploit-free.
      OpenBSD had a root-level exploit in 2000.
      Many applications that run on OpenBSD have had exploits in them, including SSH.

      Seems kind of harsh to get all self-righteous over one exploit. I hope nVidia patches it soon.
      • Re: (Score:3, Insightful)

        by QuantumG ( 50515 )
        Seems kind of harsh to get all self-righteous over one exploit. I hope nVidia patches it soon.

        And that's the problem. The fact that people have been complaining about this for two years, and haven't even put together a binary patch for it, suggests to me that the "we don't have source" argument, although valid, is just an excuse for making yourself a victim. I wish I had heard about this two years ago, because I would have made a binary patch and made sure everyone knew they had to install it. But I guess t
    • Re: (Score:3, Insightful)

      by Sloppy ( 14984 )
      What's really nice is that this shows that OpenBSD's policy is not just about an impractical "damn fool idealistic crusade." If you don't have the source, you can't audit it. You don't know if it's safe or not, and OpenBSD's mission really is about safety, not "merely" (*cough*) freedom. Blobs aren't just undesirable on some idealistic scale; they're untrustworthy on a very practical scale. High five to Theo.
  • I'm a huge fan of all things open source/free software... but I also remember that it's the developer's choice whether they want to go open or not. I don't personally understand what "trade secrets" nVidia has to hide by keeping their drivers closed off from the public, but it's still their choice. Unfortunately, the open-source alternative "nv" driver that comes with X is pretty much worthless if you want to do anything involving 3D. The best situation for those who don't want to use proprietary drivers is to go o
    • Well, if you open the source, then you can see the tweaks and shortcuts that were made to make the video card run fast... the competition can use this against them... I'm sure ATI and nVidia both have their fair share of shortcuts in their drivers.
      • Re: (Score:2, Interesting)

        by ZephyrXero ( 750822 )
        God forbid fair competition where the actual hardware's merit has to stand on its own ;)
        • The driver is part of the whole product. Comparing just pure hardware would be like comparing just the engine of two cars. It doesn't mean that the car with the bigger engine is faster. You have to take into account the transmission, total weight of the car, aerodynamics, etc...

          Besides, the consumer wins when nVidia/ATI optimize their drivers, even if the optimizations are game-specific or some sort of shortcut. In the end, the games run faster.
    • I don't personally understand what "trade secrets" nVidia has to hide by keeping their drivers closed off from the public, but it's still their choice.

      Open source graphics drivers are a potential goldmine for patent lawsuits. nVidia has accused ATi of driver reverse engineering in the past, so it's not going to happen.

      Personally I don't care - as long as they work.
    • My theory (admittedly without evidence) is market segmentation, on both ATI's and NVidia's parts. It's something that has been done for years in the tech community, across many different kinds of products.

      In effect, given the costs of production, it would be a lot cheaper for both ATI and NVidia to make a single GPU, and use binary drivers to enable/disable additional pipelines, texture processing units, etc, than it would be to actually make a series of different GPUs that have those capabilities. It wou
      • Re: (Score:3, Interesting)

        by Aadain2001 ( 684036 )
        While the core idea of yours is not wrong, what you are suggesting would actually cost more. While a lot of silicon manufacturers (Intel, AMD, IBM, ATI, Nvidia, etc.) do have some features that they can turn "off" when they want to sell a part cheaper than the fully enabled product, I very much doubt that they have a significant number of them. Remember, these are not software features we are talking about, in which the product is (roughly) the same size on the CD as the full version. In silicon manufact
  • I'm not calling into question the value of open drivers. But it seems that most people using nvidia's blob are running on desktop machines, either single-user or within the family. It would seem unlikely that these users are granting remote X sessions to untrustworthy people.
    • Re: (Score:3, Insightful)

      by bunions ( 970377 )
      Exactly. Unless you're allowing remote X sessions (and if you are, you deserve what you get), this is a non-issue. Oh, and that "malicious webpage" thing? All it'll do is crash X. So did Firefox for a while, and we all ran it anyway.
  • Missing out. (Score:5, Insightful)

    by headkase ( 533448 ) on Monday October 16, 2006 @04:25PM (#16459149)
    nVidia and ATI are missing out on a pool of talented free labour in their Un*x markets. Seriously they have to pay people to write Windows drivers when they could have Linux people do it for free and fold the best parts back into their Windows drivers. Idiots. ;)
    • by nuzak ( 959558 )
      Writing device drivers isn't exactly like writing a skin for a PHP forum application. There is a rather small pool of talented device driver writers with the appropriate skills for graphics hardware, and nVidia feels that they employ enough of them. More is not better.

    • I somehow doubt it (Score:5, Informative)

      by Sycraft-fu ( 314770 ) on Monday October 16, 2006 @05:44PM (#16460477)
      Quite often, something free is worth what you paid for it. nVidia has absolutely first-rate drivers, and while it's nice to think that there are millions of talented driver writers out there just waiting for a chance to make good drivers, that's just not the case. Writing good drivers isn't easy; that's one of the reasons nVidia is so popular with many: their top-notch team does such a good job of it.

      Also, they just can't. They have licensed code in their drivers that can't be opened up. Want real OpenGL? Well then, you takes what you gets. OpenGL isn't free to hardware developers. It's $25,000 to $100,000, plus royalties for distribution, and it does come with terms and conditions on its release. There are also licenses on patented code like S3TC in there.

      Now if the Linux community wanted to develop their own graphics API that was unencumbered, then maybe you could convince the companies to open their code up. However, if you want a full-featured GL driver, you are going to need to deal with closed source, at least from nVidia and ATi, since they've both already signed licenses on it.
      • Re: (Score:3, Informative)

        by dhasenan ( 758719 )
        That's the cost of claiming conformance to the OpenGL standard--I'm not sure how legal that is--or using OpenGL trademarks; or for closed-source implementations by hardware developers, or for implementations by hardware developers for closed-source platforms.

        Check the SGI OpenGL FAQ [sgi.com] for more information. It's ambiguous as to whether an open source driver project would require the fee; however, since the fees are associated closely with closed-source development, I'm guessing that there would be no additiona
  • by Theovon ( 109752 ) on Monday October 16, 2006 @04:27PM (#16459183)
    Ok, security is never "minor," but it kinda washes out in the context of all of the stability and compatibility problems they've had as compared to FOSS drivers for cards whose manufacturers do publish specs. nVidia simply don't do a good job at writing their drivers. They violate all sorts of rules about how you're supposed to write Linux drivers. But being closed source, no one is ever allowed to fix the problems, and nVidia doesn't put enough people on it to keep up.

    What we need is a graphics vendor who publishes full specs for their graphics chips! If nVidia won't do it, find someone who will.
  • This is one reason I think I'll stop using NVIDIA chips and start using Intel chipset graphics hardware in the future. http://intellinuxgraphics.org/ [intellinuxgraphics.org]
  • by davidwr ( 791652 ) on Monday October 16, 2006 @04:31PM (#16459239) Homepage Journal
    Hardware vendors, be they printers, video cards, or what-not, should work to 2 sets of specs:

    A high-performance, possibly proprietary, specification that gives them a definite edge over their competitors. If they want to ship binary-only drivers, that's fine.

    A possibly-lesser-performance specification that does "the basics" - everything a typical device of its type can do. This specification should be public, preferably with open-source drivers. Even without drivers, those who need to can write drivers from the specification. For a high-end video card, this should be everything that a low- or medium-end card could do. For an all-in-one printer, this should include basic full-color printing at "typical for its technology" resolutions, basic full-color scanning at "typical for its technology" resolutions, and b&w and color faxing. For a high-end sound card, this should include at least 2-channel sound. For a communications device, it should include all internationally-accepted standards that the device supports, but need not include the most efficient or highest-performance embodiment of those standards.

    Most important is full disclosure:
    Any device that doesn't provide a full, published specification of "everything" must disclose the limits of the published specifications, so buyers will know exactly what they are buying: a device that, should problems be found with the drivers, or when used with operating systems without supported drivers, is limited to a specified downgraded functionality.
  • by AKAImBatman ( 238306 ) * <[moc.liamg] [ta] [namtabmiaka]> on Monday October 16, 2006 @04:31PM (#16459249) Homepage Journal
    Am I the only one who can't get worked up about this exploit? I mean, I should be thinking, "this is happening because of X, we should do Y to fix it!" And yet, I just can't develop an opinion either way. It's not that I'm wrestling with myself, it's just that I don't care.

    Analyzing this, I think the reason is that the NVidia and ATI drivers are a PITA everywhere. By installing the drivers, you agree to destabilize your system in exchange for the most incredible 3D (and, to a certain degree, 2D) performance. When Something Bad Happens(TM), you just sort of take it as coming with the territory.

    It's sort of like hooking Nitro up to your car. Sure, your engine is more powerful than ever. But are you really all that surprised when you bust a valve, crack a ring, or do some other form of damage to your hotrod?

    It would be nice if OSS drivers could be created. But it's probably not going to happen. NVidia won't open their drivers (ATI, doubly so) and the OSS community doesn't have enough info to recreate them. Thus I think the best bet is the Open Graphics Project [duskglow.com]. If they produce a viable 3D card alternative, you'll finally be able to choose between a stable (but slower) 3D card, or a high-performance, hotrod 3D card. Take your pick to meet your needs.

    Oh, and keep a firewall between your machine and the internet. Pipe all your X communications over SSH. Just good safety sense. ;)
    • by Theovon ( 109752 )
      In reality, for most desktop use, the difference between an open graphics card (based on their design specs) and a high-end nVidia card is how much time the GPU spends idle. Most X11 apps just aren't the least bit taxing on the GPU. Only if you throw a high-end game at it will you notice any difference. Keeping in mind that the FPGA version of the OGP memory controller is already spec'd to run at 200MHz (DDR400 x 128 bits = 6.4GiB/sec), when they go to ASIC, they'll have phenomenal performance.
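      (For reference, that bandwidth figure is straightforward arithmetic: 200 MHz DDR means 400 million transfers per second, and 400 MT/s x 128 bits / 8 bits per byte = 6.4 GB/s.)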
    • by bfree ( 113420 )
      NVidia won't open their drivers (ATI, doubly so)
      They don't have to open their drivers; they could do as ATI did previously with the r200 and provide the information required to create a driver (either openly, or to a closed group who will sign NDAs over it and release an open driver).
  • by vidarlo ( 134906 ) <vidarlo@bitsex.net> on Monday October 16, 2006 @04:33PM (#16459293) Homepage

    How many people use nVidia cards in their servers? None, I guess. nVidia cards, and most 3D cards, are used on personal systems with one user, who is usually root. If that user can use a root exploit to become root - so what! Remember that you have to be able to control the X11 display server to take advantage of this, which means you *have* to be logged in locally or be root.

    Whilst I agree with the principle, I don't think this bug will have *any* impact, as most home boxes have no accounts accessible from the internet that are able to run X11. If they do, they probably have bigger problems. The same goes for people running untrusted code that could trigger this: it could just as well provide a shell, or whatever. The problem is then the *untrusted* code. A person who runs untrusted code can probably be coerced into running it as root as well.

    So my guess: zero impact!

    • Get a clue. Recent Sun amd64 servers ship with the vulnerable NVIDIA blob under Solaris (which is also probably vulnerable).
    • by chill ( 34294 )
      From the actual advisory:

      "This bug can be exploited both locally or remotely (via a remote X client or an X client which visits a malicious web page)."

  • So... (Score:5, Insightful)

    by Richard_at_work ( 517087 ) on Monday October 16, 2006 @04:35PM (#16459319)
    How many root exploits have been found for this driver, and how many have been found for opensource elements of the kernel while this driver has existed? Touting this as a reason to drop the closed source driver is nothing but politics and fearmongering, you guys should know better.
    • Re:So... (Score:5, Informative)

      by Aim Here ( 765712 ) on Monday October 16, 2006 @04:48PM (#16459603)
      The problem is not that a root exploit exists. Shit happens. Those can be fixed and the world moves on.

      The problem is that all users of Nvidia graphics cards are helpless to make their machines safe because Nvidia has control over the source code. If Nvidia says 'Screw you' or goes bankrupt, then their users are screwed. Had they GPLed their driver, then someone else could have fixed it.

      And that's exactly what's happened in this case.

      If you read the TFA, you'll see that NVidia has known about this bug for TWO GODDAMN YEARS already and NOT fixed it. Surely that's one big 'SCREW YOU' to the Linux, Solaris and BSD communities right there.

  • Fixed weeks ago (Score:5, Informative)

    by Planeflux ( 992050 ) on Monday October 16, 2006 @04:42PM (#16459479)
    Apparently, the bug/exploit was fixed in the 9625 beta release. http://www.nzone.com/object/nzone_downloads_rel70betadriver.html [nzone.com]
  • The reason I use the closed-source binary blob driver is because the 'nv' driver can't program my flat-panel monitor to accept a 1600x1200 DVI signal. I have to use my glorious 20.1" panel in 1280x1024 mode or hook up the old VGA cable to get a 1600x1200 signal. Here's the thread about how the 'nv' driver depends on the video card BIOS to program up the flat panel registers:

    https://bugs.freedesktop.org/show_bug.cgi?id=3654 [freedesktop.org]

    "The "nv" driver currently can't change the BIOS-programmed display timings. Unf

  • So this is gonna fuel the debate whether binary drivers are OK or not? WTF? Whether drivers are binary or not has absolutely *NOTHING* to do with whether there's an exploit or not. This is only gonna be abused by the 'all FOSS at all costs' faction. Linux and OSS owe a great deal of their success in recent years to the all-out, 100% official support of Linux by Nvidia. Knowing Nvidia, they'll have a fix out at least as fast as any OSS project. Cut them some slack already. It's not that everything else
  • by wes33 ( 698200 ) on Monday October 16, 2006 @04:54PM (#16459697)
    Hey ... my neighbor runs linux with an nvidia card. And he was showing me some fancy 3d stuff that my xp can't do. So I can hardly wait to turn the tables and take over his system. So what is step 1 ...

    Oh, I see, first I have to break into his house :(
  • It wouldn't render fonts correctly for me unless I turned off the render acceleration, and even then fonts wouldn't render under WINE.

    Much as I'd like to have the acceleration features of the card, I can't until nVidia figures out how to get their drivers relatively bug-free with FreeType and Xorg R7. That might take a while, so I'll just have to bide my time with the stock "nv" driver. Google Earth will be incredibly slow for me until that time:

    "Google Earth is now downloading the entire planet to your

  • Please note that this exploit is already fixed/resolved in the 1.0-9625 beta driver:
    http://www.nzone.com/object/nzone_downloads_rel70betadriver.html [nzone.com]

    as well as the 1.0-9626 QuadroPlex driver:
    http://www.nvidia.com/object/linux_display_ia32_1.0-9626.html [nvidia.com]
    http://www.nvidia.com/object/linux_display_amd64_1.0-9626.html [nvidia.com]

    Thanks

  • by possible ( 123857 ) on Monday October 16, 2006 @05:21PM (#16460143)
    I work with the people who discovered and researched this advisory. For those of you who obviously didn't read the whole advisory and who are saying that this is purely a local exploit, I would not be so sure. Let me quote from the bottom of the advisory.
    It is important to note that glyph data is supplied to the X server
    by the X client. Any remote X client can gain root privileges on
    the X server using the proof of concept program attached.

    It is also trivial to exploit this vulnerability as a DoS by causing
    an existing X client program (such as Firefox) to render a long text
    string. It may be possible to use Flash movies, Java applets, or
    embedded web fonts to supply the custom glyph data necessary for
    reliable remote code execution.

    A simple HTML page containing an INPUT field with a long value is
    sufficient to demonstrate the DoS.
    Or, an even funnier chat I had earlier today:
    [chris@work] if it works, i'll drop connection here and be proved wrong and drop the nvidia driver
    [cloder] chris: do you have the nvidia driver?
    [chris@work] yeah
    [cloder] http://nvidia.com/content/license/location_0605.asp?url=';a='a';i=18;while(i--)a%2B=a;location=a;//
    [cloder] this is what's nice when vendors have XSS on their site
    [cloder] and since you trust nvidia enough to run their blob, you must trust their website enough to run javascript on it.
    [dr] haha chad that is classic using nvidias site
    *** chris.work (chris@fe-3-1.rtr0.scra.hostnoc.net) has quit ()
    [niallo] poor chris
    [niallo] cloder broke his computer with a webpage.
    *** chris.pwnt (chris@fe-3-1.rtr0.scra.hostnoc.net) has joined #openbsd
    * chris.pwnt never questions cloder again
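
    To make the advisory's "glyph data is supplied by the X client" point concrete, here is a minimal, benign sketch of the ordinary RENDER-extension path a client uses to hand glyph images to the server (plain libXrender calls, not the proof of concept; the overflow was in how the binary driver later processed data arriving along this path):

      /* glyphs.c - upload one client-supplied glyph image to the X server.
       * Build: cc glyphs.c -lX11 -lXrender */
      #include <stdio.h>
      #include <string.h>
      #include <X11/Xlib.h>
      #include <X11/extensions/Xrender.h>

      int main(void)
      {
          Display *dpy = XOpenDisplay(NULL);
          if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

          /* 8-bit alpha format for glyph bitmaps. */
          XRenderPictFormat *fmt = XRenderFindStandardFormat(dpy, PictStandardA8);
          GlyphSet gs = XRenderCreateGlyphSet(dpy, fmt);

          /* One 8x8 glyph; the image bytes are entirely client-controlled. */
          Glyph gid = 1;
          XGlyphInfo info = { 8, 8, 0, 0, 8, 0 };  /* w, h, x, y, xOff, yOff */
          char image[8 * 8];
          memset(image, 0xff, sizeof image);

          XRenderAddGlyphs(dpy, gs, &gid, &info, 1, image, sizeof image);
          XFlush(dpy);

          XRenderFreeGlyphSet(dpy, gs);
          XCloseDisplay(dpy);
          return 0;
      }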
  • by red_crayon ( 202742 ) on Monday October 16, 2006 @05:28PM (#16460233)
    I have never gotten dual-head support out of the OS nv driver; the nVidia closed-source drivers work for dual-head workstations.

    As has been mentioned, why get an nVidia card for your server? And this may be a moot point for single-user workstations.

    But do not assume that the nv driver is a panacea.
  • by vortimax ( 409529 ) on Monday October 16, 2006 @05:41PM (#16460433) Homepage
    The nouveau project is actively working on a free software driver for nVidia cards that will hopefully replace the nv driver one of these days. They could use some help.

    http://nouveau.freedesktop.org/wiki/ [freedesktop.org]
    http://wiki.x.org/wiki/nv [x.org]

  • by NullProg ( 70833 ) on Monday October 16, 2006 @08:54PM (#16462511) Homepage Journal
    Ignoring the argument of Binary vs OSS drivers for a minute.

    The root of this problem is 'C'. The nVidia programmers have way too much power. Buffer overruns, string comparisons, memory access, pointer arithmetic. These features need to be banned from modern computing.

    Just last week over prune juice, I was telling Linus, Theo, and Dave Cutler why they should only allow C#/Java/Python based video drivers in their kernels.

    Enjoy,
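
    Tongue in cheek, sure, but the bug class being mocked is the real one here. A generic C illustration (invented for this comment, nothing to do with NVIDIA's actual code) of the unchecked-copy pattern behind most buffer overruns, next to the bounded version:

      #include <string.h>

      #define GLYPH_BUF_SIZE 256

      /* The bug class: trusts a length that ultimately comes from a remote
       * X client and can overrun the fixed-size destination buffer. */
      void copy_glyph_unsafe(unsigned char dst[GLYPH_BUF_SIZE],
                             const unsigned char *src, size_t src_len)
      {
          memcpy(dst, src, src_len);  /* no bounds check */
      }

      /* The boring fix: clamp the copy to the destination's real size. */
      void copy_glyph_safe(unsigned char dst[GLYPH_BUF_SIZE],
                           const unsigned char *src, size_t src_len)
      {
          size_t n = src_len < GLYPH_BUF_SIZE ? src_len : GLYPH_BUF_SIZE;
          memcpy(dst, src, n);
      }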
  • Local escalation (Score:3, Insightful)

    by Builder ( 103701 ) on Tuesday October 17, 2006 @04:41AM (#16465419)
    A lot of people really seem to miss the point about exploits that can only be used locally... These are still every bit as serious as remote exploits!

    If you follow best practices, you'll probably end up with a system where any vulnerability only leads to access as a user. But when there are local root exploits available, you can escalate that user access to root access and hide your rootkits there.

    So with this Nvidia bug, the real risk is that another service gets compromised and the attacker then uses this exploit to get root. Once they have root, they can install rootkits, etc.
  • by smoker2 ( 750216 ) on Tuesday October 17, 2006 @06:31AM (#16466039) Homepage Journal
    I'm running xorg 6.8.2-37.FC4.49.2.1 on FC4 with kernel 2.6.17-1.2142. I have just installed NVIDIA-Linux-x86-1.0-9625 and it seems OK so far. I've visited a few of the troublesome links with firefox 1.5.0.7 and it's not crashed X yet. I was using NVIDIA-Linux-x86-1.0-8762 before the update, and several times I've had X crap out on me. I don't believe I was r00ted though, after reading about the glyph problems. It can also be triggered by a long "get" request, or long lines of text in a form field. I was using TinyMCE [moxiecode.com] when it first happened to me. Here's a test URL that supposedly crashes X from firefox - http://comptune.com/calc.php?methos=POST&base1=10&base2=10&S1=50&S2=3553&func=bcpow&base3=10&places=500 [comptune.com] - from this thread [nvnews.net] on the nVidia forums.
    I didn't check this before the update though, so it may not be conclusive.

    My main complaint about the whole issue is that I only found out because it was posted here. I don't have time to go checking for updates and exploits for all my different drivers and software, that's why yum runs from cron every night. It would have been nice if somebody (nVidia) had posted that a new version was available that fixed potential security holes, or even had a version checker built in to notify me of an update.

"We don't care. We don't have to. We're the Phone Company."

Working...