
New Rootkit Bypasses Windows Code-Signing Security

Trailrunner7 writes "In recent versions of Windows, specifically Vista and Windows 7, Microsoft has introduced a number of new security features designed to prevent malicious code from running. But attackers are continually finding new ways around those protections, and the latest example is a rootkit that can bypass the Windows driver-signing protection."
  • That's the best use I can visualize for virii and rootkits
    • by afidel ( 530433 ) on Tuesday November 16, 2010 @03:06PM (#34246802)
      Actually, the best use is to install drivers that bypass DRM.
    • The nominative plural ending for second-declension Latin nouns is -i, so if "virus" were a masculine noun, which it is not [wikipedia.org] ("n." means it's neuter), it would take an -i, giving "viri." But since "virus" is neuter, its plural would be "vira," so next time you wanna brag about how well you know Latin without sounding like a fool, say that instead.

      Or you can say "viruses" if you feel like speaking English. My €0.02.


      P.S.: The only time you get that double i in th
      • Here's my $0.02:

        Jargon is jargon.

        • I've only been emulating native speakers, pal. I've seen people type "My two cents," "Just my two cents" and "Just my two cents worth" at the end of their post countless times.
          • You grabbed onto the wrong part of my reply. That part didn't matter. The important part is that "virii" is jargon, just like "boxen" - neither needs to be a valid word or grammatically correct.

            Jargon is like technical slang - neither cares much about the rules.

  • Well, DUH... (Score:3, Insightful)

    by adjuster ( 61096 ) on Tuesday November 16, 2010 @01:50PM (#34245598) Homepage Journal
    Without "trusted" hardware the user will always be able to override software "protections" designed to prevent arbitrary code execution. This is just another "leapfrog" in this arms race. Give me "trusted computing" where I control the keys and decide what software is "trusted" and I'd be fine w/ it. Otherwise, I'll take the current situation on personal computers because, at least, I can run arbitrary software. ("Don't turn my PC into an iPhone, bro!")
    • Re:Well, DUH... (Score:5, Informative)

      by tompaulco ( 629533 ) on Tuesday November 16, 2010 @02:02PM (#34245786) Homepage Journal
      Code signing is just a money-making scheme for Microsoft cleverly disguised as a protective measure for us users. Smaller projects cannot afford to have their code digitally signed by Microsoft. People have been writing workarounds for this that involve spoofing the driver as being in TEST mode, but this is a hassle for the end user.
      • Re: (Score:3, Informative)

        by Applekid ( 993327 )

        Code signing is just a money-making scheme for Microsoft cleverly disguised as a protective measure for us users. Smaller projects cannot afford to have their code digitally signed by Microsoft. People have been writing workarounds for this that involve spoofing the driver as being in TEST mode, but this is a hassle for the end user.

        Um, code signing can be done by any trusted authority. You need not pay Microsoft to sign user-mode code.

        Drivers are another story. They need to pass WHQL, but that's no big deal because it's already paid through the licensing fees collected if you want to put a Windows logo on your product certifying it's compatible with Windows. Naturally, if it's going to have the logo on the box, Microsoft wants to make sure your crappy driver doesn't cause problems that will be blamed on Windows.

        Installing unsigned drivers in testin

        • Re: (Score:3, Interesting)

          by rsmith-mac ( 639075 )

          Drivers are another story. They need to pass WHQL

          Even this is not quite true. There are two different levels of signing: ownership signing and WHQL signing. Ownership signing establishes who the driver came from; unless a driver is ownership-signed, 64-bit versions of Windows will flat-out refuse to install it (unless you boot with signature enforcement disabled), and this is what TFA is referencing. WHQL signing is a second layer where MS signs off on the driver; without a WHQL signature, Windows will throw up a sc
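
          For illustration only, here is a minimal sketch of checking whether a driver binary carries any embedded Authenticode signature at all, using the third-party pefile module. It only detects presence: it does not validate the certificate chain, tell publisher signatures apart from WHQL ones, or see catalog-signed (.cat) drivers, and the file path below is made up.

          import pefile  # third-party: pip install pefile

          def has_embedded_signature(path: str) -> bool:
              # The Authenticode signature lives in the PE "security" data directory.
              # A zero size means no embedded signature (the driver may still be
              # catalog-signed, which this check cannot see).
              pe = pefile.PE(path, fast_load=True)
              security = pe.OPTIONAL_HEADER.DATA_DIRECTORY[
                  pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_SECURITY"]
              ]
              return security.VirtualAddress != 0 and security.Size != 0

          print(has_embedded_signature(r"C:\Windows\System32\drivers\example.sys"))  # illustrative path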

      • by jonwil ( 467024 )

        This post here:
        https://lists.launchpad.net/coapp-developers/msg00757.html [launchpad.net]
        suggests that efforts are being made to convince VeriSign to provide cheap/free certificates for open source projects.
        Won't help if you are a proprietary company unwilling to open-source the drivers, but for "free hardware" developers it sounds like a good thing.

    • Re:Well, DUH... (Score:4, Interesting)

      by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday November 16, 2010 @02:04PM (#34245814)

      Give me "trusted computing" where I control the keys and decide what software is "trusted" and I'd be fine w/ it.

      The problem is, 99% of our society cannot properly decide whether software should be trusted or not, and even with more granular controls and proper feedback from the OS a lot of malware will slip through.

      I don't think this is an unsolvable problem. I like the iPhone App store model to some extent. A company with professionals should be vetting software and should be telling users what software should and should not be able to run. But the iPhone App store fails in many ways as well.

      First, there should not be one company deciding. We should harness the free market and build a system that takes inputs from whatever security feeds users subscribe to and weight those security feeds based upon the end user's preferences. Also, we should be able to override the choices for any given case. If we really want to run some software but our security feeds think it is malware, we should be able to do it. Heck, there are valid reasons, such as research, for wanting to run malware. It should just be a very advanced setting that makes it perfectly clear to the end user that they're handing complete control of their device to some other party, forever.

      I'm convinced we could leverage the benefits of both an iPhone app store approach and a traditional package manager approach. I fear, however, that none of the companies in a position to actually make a good system and push it to end users is going to be motivated to do so. Apple will wait for others, and Microsoft sees the way they could leverage their monopoly using an iApp store of their own. Canonical has laid the groundwork, but only as far as copying Apple and incorporating it into their package manager. They're not much for making revolutionary new technologies, nor are they in much of a position to push one; and lastly, unless they're aiming at the ultra-secure market, their users are currently the least in need of beefed-up security.
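
      To make the weighting idea concrete, here is a toy sketch; the feed names, scores, and threshold are all invented for illustration and are not any real product's behavior.

      # Toy model: several security feeds rate a program (1.0 = trusted, 0.0 = malware),
      # the user weights the feeds they subscribe to, and the weighted score decides
      # whether it may run. A rarely used explicit override remains available.
      FEED_VERDICTS = {
          "VendorFeed":    {"dancing_bunnies.exe": 0.2},
          "CommunityFeed": {"dancing_bunnies.exe": 0.1},
          "CorpITFeed":    {"dancing_bunnies.exe": 0.9},
      }
      USER_WEIGHTS = {"VendorFeed": 0.5, "CommunityFeed": 0.3, "CorpITFeed": 0.2}
      RUN_THRESHOLD = 0.5
      USER_OVERRIDES = set()   # apps the user explicitly forced through, despite warnings

      def may_run(app: str) -> bool:
          if app in USER_OVERRIDES:
              return True
          # Feeds with no verdict for this app default to 0.5 ("unsure").
          score = sum(weight * FEED_VERDICTS[feed].get(app, 0.5)
                      for feed, weight in USER_WEIGHTS.items()) / sum(USER_WEIGHTS.values())
          return score >= RUN_THRESHOLD

      print(may_run("dancing_bunnies.exe"))  # False: the weighted feeds distrust it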

      • Re:Well, DUH... (Score:4, Interesting)

        by TheLink ( 130905 ) on Tuesday November 16, 2010 @02:16PM (#34245998) Journal

        A company with professionals should be vetting software and should be telling users what software should and should not be able to run.

        IMO, the software should say upfront what type of sandbox it wants, from a finite, manageable set of sandbox templates.

        The software could also instead request a custom sandbox, but such a "custom sandbox and app" pair needs to be signed by a trusted party: either the OS vendor, or someone else with their cert installed (e.g. corporate IT).

        I proposed something like this to Ubuntu: https://bugs.launchpad.net/ubuntu/+bug/156693 [launchpad.net]

        Rather than solving something harder than the halting problem (you often don't have the full inputs to the program), you just get the programmers to declare upfront what access the programs need, and if the declaration looks OK, the OS enforces the sandbox.
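
        A minimal sketch of what "declare your sandbox upfront" could look like; the manifest format, template names, and access labels are invented for illustration and are not the mechanism proposed in the Ubuntu bug.

        # Hypothetical: apps ship a manifest naming one of a small set of sandbox
        # templates; the OS grants only what the template allows and refuses anything
        # else. A "custom" sandbox would instead require a trusted signature.
        SANDBOX_TEMPLATES = {
            "document-viewer": {"read_user_files", "draw_ui"},
            "network-client":  {"draw_ui", "outbound_net"},
        }

        def load_sandbox(manifest: dict) -> set:
            template = manifest.get("sandbox_template")
            if template not in SANDBOX_TEMPLATES:
                raise PermissionError("custom or unknown sandbox: needs a trusted signature")
            return SANDBOX_TEMPLATES[template]

        def allowed(granted: set, access: str) -> bool:
            # Enforcement is simply "not declared means denied".
            return access in granted

        granted = load_sandbox({"name": "pdf-reader", "sandbox_template": "document-viewer"})
        print(allowed(granted, "read_user_files"))  # True
        print(allowed(granted, "outbound_net"))     # False: never declared, so denied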

        • Re: (Score:3, Interesting)

          IMO, the software should be saying what type of sandbox it wants upfront. From a finite manageable set of sandbox templates.

          Agreed. It greatly lessens the work for auditors, as they only have to figure out what you're doing with the declared services/access and then decide whether that is actually appropriate. I'd also mention that adding official services and protocols (such as an update service, a secure registration/purchasing service, a service for ad streaming to supported apps, etc.) means fewer apps need to roll their own services for these purposes, which further simplifies security auditing.

        • by drsmithy ( 35869 )

          Rather than solve something harder than the halting problem (you often don't have the full inputs to the program), you just get the programmers to declare upfront what access the programs need, and if declared OK, the OS enforces the sandboxes.

          Application: "I need access to everything on your computer to properly display some dancing bunnies."
          User: "Dancing Bunnies ? AWESOME !"

          The problem is not the capability of the OS to restrict what programs can do. The problem is end users incapable of making an

      • Re: (Score:3, Insightful)

        Give me "trusted computing" where I control the keys and decide what software is "trusted" and I'd be fine w/ it.

        The problem is, 99% of our society cannot properly decide whether software should be trusted or not, and even with more granular controls and proper feedback from the OS a lot of malware will slip through.

        I don't think this is an unsolvable problem.

        But how that 99% of society wants to use the computer should not (and cannot necessarily) be dictated by even the 1%, as the 1% will not know every edge case for how the 99% wants to use the computer. Therefore, "trusted" computing in that model is 100% flawed, and you then have to build in backdoors - like the registry key that can disable requiring a signed driver so developers can test their drivers - so that the 99% can all do what they want/need to on the computer.

        • But how that 99% of society wants to use the computer should not ( and cannot necessarily) be dictated by even the 1% as the 1% will not know every edge case for how the 99% wants to use the computer.

          Actually 99% of users will probably never do anything that would even be an issue. Malware primarily runs because users are not informed by the OS that it is malware or told that it is accessing their address book and starting a mail server or constantly spamming traffic at an address in Estonia. For the other 1% of cases the user needs the option to override the security system, but this should never be needed for normal use cases so when an app requests this it should be a red flag to users. Right now the

          • But how that 99% of society wants to use the computer should not ( and cannot necessarily) be dictated by even the 1% as the 1% will not know every edge case for how the 99% wants to use the computer.

            Actually, 99% of users will probably never do anything that would even be an issue. Malware primarily runs because users are not informed by the OS that it is malware, or told that it is accessing their address book and starting a mail server or constantly spamming traffic at an address in Estonia. For the other 1% of cases the user needs the option to override the security system, but this should never be needed for normal use cases, so when an app requests this it should be a red flag to users. Right now they're so conditioned by our poor OS UIs that they just click through things. But if a user was never, ever (over the course of owning a machine and later over their lifetime) asked to override security, and when they were asked it was worded to say that doing so would allow someone else control of their computer forever, I think that would make a huge difference, don't you?

            Who mentioned malware? Yes, malware is one thing that needs to be protected against; however, what a user wants to do with the computer may not necessarily be what the person controlling the system wants the user to do with the computer. This is almost fine in a corporate environment where computer use is dictated by the organization - though even then, that doesn't quite work, as managers hire people to do things that IT didn't account for in their 'standard platform' (e.g. developers). However, one person

            • However, one person (e.g. Microsoft, gov't, gov't agency, etc.) controlling what all users - including all corporations - can do with the computers doesn't work. Thus 'trusted' computing is flawed...

              Please go read the thread you're replying to. We're talking about a trust model where the greylists are created by weighted combinations of security feeds from multiple sources, as weighted by the end user, with a very, very rarely used user-override option. There is no "one person" by the definition of what we're talking about.

              • However, one person (e.g. Microsoft, gov't, gov't agency, etc.) controlling what all users - including all corporations - can do with the computers doesn't work. Thus 'trusted' computing is flawed...

                Please go read the thread you're replying to. We're talking about a trust model where the greylists are created by weighted combinations of security feeds from multiple sources, as weighted by the end user, with a very, very rarely used user-override option. There is no "one person" by the definition of what we're talking about.

                While you do contend that there should be no single entity controlling it, there is nonetheless an entity other than the user controlling it - and even if there are multiple entities, they will likely band together into a gov't agency or consortium of some sort in the end, thus a single entity anyway.

                Trusted computing (as you suggest with weights, etc for the user to adjust) still would not work. Why? If the user can modify what the system can trust, so can anything malicious. All it takes is for the malware to be one

      • by thoth ( 7907 )

        First, there should not be one company deciding. We should harness the free market and build a system that takes inputs from whatever security feeds users subscribe to and weight those security feeds based upon the end user's preferences.

        There isn't one company deciding, right? If you don't like vendor X, move to vendor Y. Free market behavior doesn't guarantee your issue is solved on the platform of your choice - maybe some other vendor is addressing your concern (if it is profitable and so on for them to do so, generally meaning your concern is valid to a large enough group the vendor may actually care) and deserves your business. Currently, people who don't care enough about this and proxy their decisions to the vendor (i.e. Apple), p

        • First, there should not be one company deciding. We should harness the free market and build a system that takes inputs from whatever security feeds users subscribe to and weight those security feeds based upon the end user's preferences.

          There isn't one company deciding, right? If you don't like vendor X, move to vendor Y.

          Except that, at least for PCs, this is an OS-level problem and desktop OSes are a monopolized market. But I'm not making any claims about what has to happen, just what would be best for end users in my opinion. Obviously, having to switch OSes in order to change security feeds would be less than ideal for users, and would lessen the ability of the free market to bring those users benefits.

          I'm convinced we could leverage the benefits of both an iPhone app store approach and a traditional package manager approach. I fear, however, that none of the companies in a position to actually make a good system and push it to end users is going to be motivated to do so.

          Isn't this a valid manifestation of the free market?

          If it were a free market, perhaps. But the courts in at least four countries I know of have already ruled that the free market

      • by BLKMGK ( 34057 )

        Sorry, I wouldn't want a model where others tell me what I can and cannot run - for the same reason it pisses me off installing new AV software and then having to change a bunch of settings so it won't flag/delete all of the fun "security" products I keep around that are useful, like port scanners. The vast amount of software out there would be pretty tough to cover too, and as Apple and Android have both found, it's tough to stop a programmer from building a program that does one thing while also doing some l

      • by tlhIngan ( 30335 )

        First, there should not be one company deciding. We should harness the free market and build a system that takes inputs from whatever security feeds users subscribe to and weight those security feeds based upon the end user's preferences. Also, we should be able to override the choices for any given case. If we really want to run some software but our security feeds think it is malware, we should be able to do it. Heck, there are valid reasons, such as research, for wanting to run malware. It should just be

        • Bzzt, you fail. You're back to square one. Remember all those jailbroken iPhones getting hacked because OpenSSH was installed with default passwords?

          No. I remember A TINY NUMBER of jailbroken iPhones getting owned because a very small subset of users hacked their phones and an even smaller subset installed SSH but did not change the default password.

          The example you cite has little or nothing to do with mainstream security. We're in a situation right now where people regularly have automated malware infecting their machines in large numbers. Low hanging fruit first.

          It is a pretty unsolvable problem.

          Not really. The first step is to make software installation and default configuration eas

          • by drsmithy ( 35869 )

            Once users no longer have to jump through hoops to perform computing tasks, they question why they are asked to jump through hoops in one particular case (malware) or they just don't bother out of laziness.

            What hoops would users need to jump through to install malware ? Most of what it needs to do isn't any different from any other program.

            • Please keep up with the conversation. Go re-read the thread leading up to this and you'll understand the security framework they would have to overcome. Then you can make an informed comment.

      • by Lennie ( 16154 )

        That just sounds like this:
        https://apps.mozillalabs.com/ [mozillalabs.com]

      • we could just let the government decide. They're here to help, you know.
    • Hardware protections have bugs too.

    • Actually, it sounds like there's already a mechanism that (sort of) protects against this. Using BitLocker (full drive encryption, Vista and up) with a TPM (Trusted Platform Module, a sealed chip that can store crypto keys and keeps a running checksum of CPU instructions) won't technically prevent you from getting infected, but it will make automatic unlock fail. Automatically unlocking a bitlocked drive (where "automatic" means anything short of manually entering an AES256 key) usually uses the TPM. Howeve

      • by jonwil ( 467024 )

        You can avoid the FireWire thing by not having FireWire ports. Or by filling the FireWire ports with glue or gunk or something else so they can't be used.

    • by Lennie ( 16154 )

      I'm very reluctant to ask for "trusted" hardware, because every time they discuss it, it always puts them (Microsoft, the film industry, whatever) in power, not me.

  • by digitaldc ( 879047 ) * on Tuesday November 16, 2010 @01:54PM (#34245660)
    Safe Mode is all I run nowadays.
    I am just too scared to 'Start Windows Normally'
  • Old sk00l. When was the last MBR infector seen in the wild? 2002? Most of this class are from the DOS era, fercryingoutloud.
    • Re: (Score:3, Informative)

      by PatPending ( 953482 )

      Old sk00l. When was the last MBR infector seen in the wild? 2002? Most of this class are from the DOS era, fercryingoutloud.

      From the second paragraph of the fine article (emphasis added):

      TDSS has been causing serious trouble for users for more than two years now, and is an example of a particularly pernicious type of rootkit that infects the master boot record of a PC. This type of malware often is referred to as a bootkit and can be extremely difficult to remove once it's detected. The older versions of TDSS--TDL1, TDL2 and TDL3--are detected by most antimalware suites now, but it's TDL4 that's the most problematic right now.
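
      As a minimal illustration of the kind of offline check this involves, here is a sketch that reads the first sector of a disk and compares it against a known-good copy. The device path and backup filename are made up, and as noted elsewhere in the thread, TDL4 can redirect such reads on a live infected system, so a check like this only means something when run from clean boot media.

      import hashlib

      # Read the first sector (the MBR) and do two basic checks: confirm the 0x55AA
      # boot signature and compare a hash against a 512-byte copy saved when the
      # machine was known to be clean. Run from trusted boot media, not the
      # possibly infected OS.
      DISK = "/dev/sda"            # on Windows, roughly r"\\.\PhysicalDrive0"
      KNOWN_GOOD = "mbr.backup"    # illustrative backup file

      with open(DISK, "rb") as disk:
          mbr = disk.read(512)

      print("boot signature ok:", mbr[510:512] == b"\x55\xaa")

      with open(KNOWN_GOOD, "rb") as f:
          baseline = f.read(512)

      if hashlib.sha256(mbr).digest() != hashlib.sha256(baseline).digest():
          print("MBR differs from the known-good copy - inspect the bootstrap code")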

      • Re: (Score:2, Insightful)

        by sexconker ( 1179573 )

        Why does everything have to be a kit?
        Rootkit. Okay.
        Bootkit. I see what you did there.

        Would a WoW hack that steals/sells your loot be a lootkit?

        Would a viral advertising campaign that gets a bunch of douches to seek out 1930s era fashion for their high school proms be a zoot kit?

        Would naughty chimney sweeps toss packages of dirt, grime, and grease down your chimney and call it a soot kit?

        Is whatever drug / "treatment" the government uses on every former agent who goes public with stories about aliens calle

      • Old sk00l. When was the last MBR infector seen in the wild? 2002? Most of this class are from the DOS era, fercryingoutloud.

        From the second paragraph of the fine article (emphasis added):

        TDSS has been causing serious trouble for users for more than two years now, and is an example of a particularly pernicious type of rootkit that infects the master boot record of a PC. This type of malware often is referred to as a bootkit and can be extremely difficult to remove once it's detected. The older versions of TDSS--TDL1, TDL2 and TDL3--are detected by most antimalware suites now, but it's TDL4 that's the most problematic right now.

        Easy way to remove it: (1) turn off the computer; (2) load the hard drives into another computer, then scan and disinfect without executing anything from them - mount them read-write but without execution permission; and finally (3) reset the firmware on the infected computer's devices, especially the motherboard (typically just the motherboard, but other devices may need it too). Put it back together, boot up, and you're infection-free.

        OR

        You could just run Linux/BSD/etc to start wi

        • Yeah, you could do all that, or you could just use MBRFIX and be done with it. "Extremely difficult" - I don't really think so. Typing 6 letters from a command prompt isn't something I would categorize as "extremely difficult".
          • Yeah, you could do all that, or you could just use MBRFIX and be done with it. "Extremely difficult" - I don't really think so. Typing 6 letters from a command prompt isn't something I would categorize as "extremely difficult".

            Problem there - you could have the virus stored elsewhere on the computer. Many of those kinds of viruses will put themselves into start-up software as well; so running MBRFIX will remove it from the MBR yes, but then you'll re-infect the MBR on first boot.

            And if you run MBRFIX from a machine that is infected - without booting from a CD first - then the virus will be in memory and just re-infect the MBR before you reboot.

            So yes, it's not as simple as running MBRFIX - you actually have to disinfect the

            That is why I always keep a bootable CD with MBRfix and anti-malware utilities. I would highly recommend UBCD4Win [ubcd4win.com]. (I was assuming booting from another OS and cleaning before doing the MBRFIX, but didn't expressly state that because, well, this is Slashdot and people already know to do this. I probably should have been clearer in my previous post on that.)

              Have there actually been any MBR "bootkits" in the wild that have used flashable BIOS for storing copies? I always thought that was a malware "urban
              • Have there actually been any MBR "bootkits" in the wild that have used flashable BIOS for storing copies? I always thought that was a malware "urban legend". And shouldn't any flashable BIOS have some sort of jumper switch to prevent unauthorized flashing to begin with?

                Yes there are, and the symptoms are hard to relate. It's things like the PS/2 mouse won't be detected, or the floppy drive won't work right. Had one on my desktop back in college - only virus I ever had. And yes, the only way to get rid of them. Variants of the Monkey [f-prot.com] virus do store themselves into the BIOS.

                • by BLKMGK ( 34057 )

                  Anything stored in the BIOS as executable code isn't going to be removed via a jumper or removal of battery. Reflashing the BIOS maybe but not simply clearing volatile BIOS flags.

                  • I wasn't saying anything could be removed by a jumper, but a jumper could definitely prevent a BIOS from being flashed in the first place. As in jumper in place - no flash. Jumper removed - flash away.
                  • Anything stored in the BIOS as executable code isn't going to be removed via a jumper or removal of battery. Reflashing the BIOS maybe but not simply clearing volatile BIOS flags.

                    BIOS is stored on a motherboard in two fashions: (1) a flashable (read/write) memory chip, and (2) a hard-wired (read-only) memory chip. When you update (flash) the BIOS, you only overwrite the flashable memory chip. This chip requires power to keep the BIOS data alive; the power comes from either the battery on the motherboard or the power supply. If you disconnect the system from the wall, and remove the battery from the motherboard then the flashable BIOS _will_ be reset. (Yes, I've reset BIOS's this way

      • FWIW, a user needs administrator access to run code that alters the MBR. As Raymond Chen puts it, that rather involves being on the other side of this airtight hatchway.

    • by Amouth ( 879122 )

      Actually, I just cleaned an MBR infection off a Windows XP laptop two weeks ago.

      • I hadn't seen it in a while, but in the last few weeks here at the shop there have been five or six machines with one. MbrFix [sysint.no] makes the job a little bit easier.

    • Re: (Score:3, Interesting)

      by HermMunster ( 972336 )

      It does more than infect the MBR. It creates a virtual file system and encrypts its payload into that. This makes it undetectable by most antivirus software. Microsoft's Security Essentials DOES detect it, but it CAN'T remove it, at least as of a couple of weeks ago when I first encountered the rootkit. You need to boot with your Windows CD (leaving most people who only have a recovery partition out in the cold) and fix the boot record.

  • In recent versions of Windows, specifically Vista and Windows 7, Microsoft has introduced a number of new security features designed to prevent malicious code from running.

    Of course, but the primary role of that lockdown was to protect their DRM'd subsystems, which can be accessed by drivers running in kernel space, not to protect end users from malicious driver code. Malicious drivers were vicious but by far a minority, and the lockdown hasn't improved the situation on Windows Vista x64 / Windows 7 in the slightest.

    But hey, now M

    • Re: (Score:3, Interesting)

      Of course, but the primary role of that lock down was to protect their DRM'd subsystems

      In other words, the protection is there in order to prevent the malicious code from being stopped?

      • I like your perspective.

      • Of course, but the primary role of that lock down was to protect their DRM'd subsystems

        In other words, the protection is there in order to prevent malicious code from stopping?

        And turn over the keys to the RIAA and MPAA. Malicious AND Evil. Cthulhu would be pleased.

    • by clodney ( 778910 ) on Tuesday November 16, 2010 @02:29PM (#34246204)

      In recent versions of Windows, specifically Vista and Windows 7, Microsoft has introduced a number of new security features designed to prevent malicious code from running.

      Of course, but the primary role of that lock down was to protect their DRM'd subsystems, which can be accessed by drivers running in kernel space, not to protect end-users from malicious driver code.

      Question for you - what benefit does Microsoft gain from enforcing DRM? They are not the copyright holders of music and movies, so they have no direct loss if pirating of content leads to reduced sales of music and movies.

      Seems to me that if MS's own self-interest is considered, they would put their effort into preventing piracy of their own software, and not worry about the DRM systems.

      Windows Vista and 7 do indeed include DRM subsystems, but since I can't see how MS's self-interest is involved in maintaining them, I think it is likely that these are things the content holders demanded before they would grant MS the necessary licenses to produce players, or enter into partnerships to promote such content.

      Either way, seems to me that MS is at most a reluctant partner in such schemes, and don't really care if DRM gets hacked. But driver signing and anti-malware do generate negative customer feedback, so I believe they take those things more seriously.

      • Licensing fees. That's significant income when you are talking about a billion or more potential installs.

        • by BLKMGK ( 34057 )

          Since when do they get paid a licensing fee per install? Or are you saying there are a billion or so drivers? O_o

      • Question for you - what benefit does Microsoft gain from enforcing DRM?

        They get to lock customers in. They convince media giants that their DRM is secure and should be used for all digital releases. So the media will say it works on Windows Vista/7 only.

      • by mcgrew ( 92797 ) *

        Question for you - what benefit does Microsoft gain from enforcing DRM?

        1. They get a signing fee for every movie, song, etc that hits the internet
        2. It's an attempt to cripple Linux
        • by BLKMGK ( 34057 )

          I'd appreciate some citation indicating that they collect a fee for every signed piece of media distributed. thanks!

        • by drsmithy ( 35869 )

          It's an attempt to cripple Linux

          By...?

          • by mcgrew ( 92797 ) *

            Requiring signed apps to run on any Intel computer. Linux couldn't live without other FOSS software, and making running an unsigned app impossible in hardware would kill all open source.

            • by drsmithy ( 35869 )

              Requiring signed apps to run on any Intel computer. Linux couldn't live without other FOSS software, and making running an unsigned app impossible in hardware would kill all open source.

              Can you outline which part of Windows' DRM makes this possible, and, additionally, which part of the OSS model makes signed code impossible ?

    • by BLKMGK ( 34057 )

      Yes, that explains why it's in every OS they've released since it was implemented... Oh wait, it's not! Sorry, if this were the case then 32-bit would have it as well. I'm afraid I'm not persuaded. Much malware is in fact implemented via drivers - things like keyboard sniffers, etc. - so yeah, this does raise the bar, although obviously not impossibly high, and sadly not on 32-bit yet either.

  • by gad_zuki! ( 70830 ) on Tuesday November 16, 2010 @02:03PM (#34245794)

    or physical access. At that point anything goes. Why bother screwing with code-signing tricks when you can just run whatever code you like?

    • It's been a while since I played around with this, but I think that even "administrator" has problems installing unsigned drivers. You have to manually turn off some things on the command line and then reboot. On reboot, Windows will give a few error messages and then you can try to re-install the driver. On subsequent reboots, Windows warns you that things could be bad.

      • Nope, I install an unsigned driver frequently for a project I'm working on. You just get a "ARE YOU SURE YOU WANT TO DO THIS" prompt/UAC event.

        • You are both right. I think BiosHakr was talking about 64bit. 32bit Windows 7 systems just ask if you are sure but on 64bit you have to turn off signature verification or apply a test signature and turn on test signing.

        • Re: (Score:3, Insightful)

          by BLKMGK ( 34057 )

          Nope, I don't think so. If you attempt to load up an unsigned driver on 64bit Win7 or Vista 64 and do not specifically go through the F key function to turn on the mode that disables signed drivers - at every single boot - you will get a nasty text message that HALTS the boot process, shows you the name of the unsigned driver, and shows you the registry key that called it (as I recall, been awhile).

          Unsigned drivers on 64bit Windows are NOT the same as the unsigned code box you're talking about. Attempts to

  • Not a "New" Rootkit (Score:5, Informative)

    by Avohir ( 889832 ) on Tuesday November 16, 2010 @02:11PM (#34245920)
    This is a new version of a roughly two-year-old rootkit, also known as TDSS, and the company responsible for this particular parasite is a Russian outfit known as Dogma Millions. ESET did a good writeup on the older version here [eset.com]. This newer version is actually even more interesting than the article indicates: it's intelligent enough to send tools like MBRCheck off to look at a backup of the MBR so that they'll erroneously return a "clean" verdict while the system remains infected. The best bet for removal is TDSSKiller [kaspersky.com] by Kaspersky (the company that wrote the blog entry).
    • Re: (Score:3, Interesting)

      by Ziekheid ( 1427027 )

      I have a box infected with this and thought I had removed it. After running the utility you linked, I found out its MBR is still infected, so thanks for the link, but it's not able to 'cure' the infection.
      Some solutions on the Kaspersky forum suggest rewriting the MBR, which I will attempt now.

      I traced the initial infection back to a vulnerable Flash installation; after infection it locks certain Flash files so they cannot be updated anymore, keeping you vulnerable to future infections.

      • Re: (Score:2, Informative)

        by iMouse ( 963104 )

        The MBR isn't the only point of infection. TDSS also patches legitimate system files, resulting in reinfection of the MBR if the infected files on the drive are not taken care of first.

  • TPMs can be used for nasty things, but this is one of the good things about BitLocker and TPMs -- a modified MBR would result in the machine not booting because the TPM would not hand the key over to the encrypted system partition due to the changed code.

    Of course, the TPM would have to be "sealed" before the malware hit the system, and a viral infection is not the first thing on the list to check if a box is sitting there in recovery mode asking for a key or a USB flash drive to continue booting.

    To me, if

  • Vista and 7's driver signing requirement is mainly for DRM purposes. The main thing Microsoft wanted to stop with driver signing is device drivers that create fake sound cards and video cards that can capture decrypted DRM-protected songs and movies. It doesn't help much with rootkits. This is why if you disable driver signing yourself, Vista and 7 will refuse to play some types of DRM-protected media. For example, some Blu-Ray players.

    Rootkits can just attack the boot process to disable the signature c

    • by Animats ( 122034 )

      Vista and 7's driver signing requirement is mainly for DRM purposes.

      No, the driver signing requirement is for quality-control purposes. 60% of Windows crashes used to be driver-related. Now, Microsoft actually requires a proof of correctness, using their Static Driver Verifier [microsoft.com], before a driver is signed. The prover tries to determine that the driver can't call a driver API incorrectly and is free of pointer errors. The goal is to eliminate damage to the rest of the kernel, not check whether the device itse

      • Re: (Score:3, Informative)

        by Myria ( 562655 )

        Vista and 7's driver signing requirement is mainly for DRM purposes.

        No, the driver signing requirement is for quality control purposes. 60% of Windows crashes used to be driver-related. Now, Microsoft actually requires a proof of correctness, using their Static Driver Verifier [microsoft.com], before a driver is signed.

        You're talking about the Windows Hardware Quality Labs [wikipedia.org] signature, not the kernel-mode driver signing [microsoft.com] requirement in 64-bit Vista and 7. A WHQL signature is not required in order to have a driver load, a kernel-mode driver signature is. Microsoft only does their quality testing with drivers submitted to WHQL; an appropriate VeriSign certificate is enough to get the driver to load, without any quality checking on the part of Microsoft.

        It is the kernel-mode driver signing requirement that this rootkit bypass

    • by drsmithy ( 35869 )

      However, removing the right to load unsigned code without disabling part of the OS is unfair.

      It doesn't "disable parts of the OS". They function exactly the way they have been designed and required to. If a protected path cannot be confirmed, then as per the content owner's directions that media will not be played.

      There is no problem in the OS. It will continue to play back all types of media just fine. DRM encumbrance is an attribute of the media, however, so if the publisher of said content has de

  • FTA:

    Alureon patches the Windows Boot Configuration Data to make the machine think that what's loading is Windows PE, rather than a normal version of Windows, which prevents code integrity checks from being performed.

    If this rootkit is just flipping a few bits to spoof the Windows version, surely Microsoft can implement a more sophisticated way of checking what version of Windows is booting up.

  • TDSS ... is an example of a particularly pernicious type of rootkit that infects the master boot record of a PC.

    I've seen some BIOS versions that can write-protect the MBR. Perhaps this should be more widely used. I can verify that these TDSS rootkits are a bitch to remove.

    • by anss123 ( 985305 )

      I've seen some BIOS versions that can write-protect the MBR. Perhaps this should be more widely used.

      AFAIK, the BIOS protection is only good against apps that use BIOS calls to write the MBR.
