Mark Russinovich on Windows Kernel Security

An anonymous reader writes to mention that in the final part of his three-part series, Mark Russinovich wraps up his look at changes made in the Windows Vista kernel by exploring advancements in reliability, recovery, and security. "Applications written for Windows Vista can, with very little effort, gain automatic error recovery capabilities by using the new transactional support in NTFS and the registry with the Kernel Transaction Manager. When an application wants to make a number of related changes, it can either create a Distributed Transaction Coordinator (DTC) transaction and a KTM transaction handle, or create a KTM handle directly and associate the modifications of the files and registry keys with the transaction. If all the changes succeed, the application commits the transaction and the changes are applied, but at any time up to that point the application can roll back the transaction and the changes are then discarded."
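For the curious, the flow the summary describes maps onto a handful of documented Vista-era Win32 calls. A minimal sketch, assuming an NTFS volume; the file path, registry key, and value names are placeholders and error handling is abbreviated -- an illustration of the technique, not a definitive implementation:

/* Sketch: one KTM transaction covering a file write and a registry write.
   Both changes apply on CommitTransaction or vanish on RollbackTransaction.
   Link with ktmw32.lib and advapi32.lib; Vista or later. */
#include <windows.h>
#include <ktmw32.h>

int wmain(void)
{
    /* Create a KTM transaction handle directly (no DTC involved). */
    HANDLE tx = CreateTransaction(NULL, NULL, 0, 0, 0, 0,
                                  L"sample atomic update");
    if (tx == INVALID_HANDLE_VALUE)
        return 1;

    BOOL ok = FALSE;

    /* Associate a file modification with the transaction. */
    HANDLE f = CreateFileTransactedW(L"C:\\Temp\\app.dat", GENERIC_WRITE, 0,
                                     NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                                     NULL, tx, NULL, NULL);
    if (f != INVALID_HANDLE_VALUE) {
        const char payload[] = "new configuration";
        DWORD written;
        ok = WriteFile(f, payload, sizeof payload, &written, NULL);
        CloseHandle(f);
    }

    /* Associate a registry modification with the same transaction. */
    HKEY key;
    if (ok && RegCreateKeyTransactedW(HKEY_CURRENT_USER, L"Software\\SampleApp",
                                      0, NULL, 0, KEY_SET_VALUE, NULL, &key,
                                      NULL, tx, NULL) == ERROR_SUCCESS) {
        const DWORD version = 2;
        ok = RegSetValueExW(key, L"Version", 0, REG_DWORD,
                            (const BYTE *)&version, sizeof version) == ERROR_SUCCESS;
        RegCloseKey(key);
    } else {
        ok = FALSE;
    }

    /* All-or-nothing: commit applies both changes, rollback discards both. */
    if (ok) CommitTransaction(tx);
    else    RollbackTransaction(tx);
    CloseHandle(tx);
    return ok ? 0 : 1;
}

Per the isolation TFA describes, other processes continue to see the old file and registry contents until the commit.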
  • Although this is technically not a dupe, it almost is: the article linked above is Part 3, and the other submitted and discussed article [slashdot.org] was Part 1. Isn't it kinda repetitive? What now, someone posts a multipart article and we get one front-page story for each part?

    On topic now, I don't like the way Russinovich is blowing Vista's horn now. I liked him more when he was more critical and analytical on what could be improved, instead of what has already been done.
    • by Bonker ( 243350 ) on Wednesday March 21, 2007 @05:12PM (#18434741)
      MS made a good move in hiring Russinovich. We can hope that he has more positive influence over kernel changes to XP and Vista than MS has bad influence over what he does and does not get to say and what software (Sysinternals) he does or does not get to release.
    • by suv4x4 ( 956391 ) on Wednesday March 21, 2007 @05:15PM (#18434791)
      On topic now, I don't like the way Russinovich is blowing Vista's horn now. I liked him more when he was more critical and analytical on what could be improved, instead of what has already been done.

      He's working at Microsoft now; you do realize that the "what could be improved" is what he's actually doing right now, so he can call it "already been done".

      You don't really prefer pointless critique with no improvement, do you?
      • Yeah, GP is a rather odd comment. Maybe what he said should be done has been done, and now he can talk about how it's implemented. He's a clever guy; I'm sure MS didn't employ him just so they could get their hands on Process Explorer. I guess this factoid was lost on 'first post'.
        He's working at Microsoft now; you do realize that the "what could be improved" is what he's actually doing right now

        You don't really prefer pointless critique with no improvement, do you?

        Point conceded.
      Sysinternals created some really useful stuff. I agree the guy may be making improvements, but it's MS marketing that's giving him the script now. Vista won't ever be really secure until MS totally reworks their approach to OS design and its default install. The problem is its mass market mostly consists of users with less-than-average technical know-how, and that's why so many compromised systems are out there. If it was secure by default then viruses and malware wouldn't be the huge problem they are now, although
    • by dedazo ( 737510 ) on Wednesday March 21, 2007 @05:29PM (#18434991) Journal

      On topic now, I don't like the way Russinovich is blowing Vista's horn now. I liked him more when he was more critical and analytical on what could be improved, instead of what has already been done.

      Fresh blood always brings fresh perspectives. Mark has been on the outside looking in for long enough that he probably has a fairly asymmetrical view of the NT kernel compared to that of the core MS developers, and in software development (especially at that level) that tends to be extremely useful.

    • by EllynGeek ( 824747 ) on Wednesday March 21, 2007 @06:36PM (#18435937)
      Sysinternals was one of the very few companies with the expertise to dissect and analyze Windowses truthfully. It's no accident that after the Sony rootkit fiasco they were quietly acquired by Microsoft. Just one more assimilation.
  • Just leave my applications alone!
  • by nanosquid ( 1074949 ) on Wednesday March 21, 2007 @05:15PM (#18434779)
    There is little reason to put these kinds of transactional services into the kernel: they don't involve security or user permissions and they must be efficiently implementable in user code anyway (otherwise, most databases wouldn't work well on NT). So, I'd classify this as "kernel bloat".
    • Re: (Score:3, Insightful)

      by Bacon Bits ( 926911 )
      Yet PS3 portability was *absolutely vital*.
    • by Lally Singh ( 3427 ) on Wednesday March 21, 2007 @05:31PM (#18435013) Journal
      They also involve atomic I/O to multiple systems simultaneously. Userland can't do this. Databases work on one system, their own data files, and have full control over these files.

      Userland apps don't have that kind of control over the registry. Hell, they may not even be sure they have that kind of control over the files they're manipulating.

      Besides, I'd rather have this code once in a DLL than 10 times in 10 different apps. That's real bloat.
      • by swillden ( 191260 ) * <shawn-ds@willden.org> on Wednesday March 21, 2007 @07:02PM (#18436245) Journal

        They also involve atomic I/O to multiple systems simultaneously. Userland can't do this.

        Of course it can. As long as the OS provides atomic I/O operations on individual files, the logic for implementing two-phase commit is the same in userland or in kernel space. For that matter, atomic I/O can be implemented entirely in userspace, as long as the OS provides a mechanism for userspace to find out when the data has actually been written. It's more convenient to put single-file atomic I/O in the kernel, though (or at least in the file system).
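        As a concrete illustration of the parent's point, here is a minimal POSIX sketch of an all-or-nothing single-file update done entirely in userspace, using fsync() to learn when the data has hit disk and rename() as the atomic switch. The function name and temp-file naming are invented for illustration; this handles one file, and coordinating several files is where the two-phase-commit logic mentioned above comes in.

        /* Write the new contents to a temp file, force them to disk, then
           atomically swap the temp file into place. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Atomically replace `path` with `len` bytes of `buf`: readers see
           either the old contents or the new, never a partial write. */
        int atomic_replace(const char *path, const char *buf, size_t len)
        {
            char tmp[4096];
            snprintf(tmp, sizeof tmp, "%s.tmp", path);

            int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) return -1;

            if (write(fd, buf, len) != (ssize_t)len ||  /* write new contents  */
                fsync(fd) != 0) {                       /* ...and force to disk */
                close(fd);
                unlink(tmp);
                return -1;
            }
            close(fd);

            return rename(tmp, path);  /* atomic swap within one filesystem */
        }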

        Besides, I'd rather have this code once in a DLL than 10 times in 10 different apps.

        That's not an argument for putting it in the kernel.

      • by dgatwood ( 11270 ) on Wednesday March 21, 2007 @08:53PM (#18437415) Homepage Journal

        Of course it can be done in user space. If a user space app can't do it, neither can the kernel. And it isn't atomic I/O. There's no such thing as atomic I/O. I/O operations are reordered, split, combined, etc. by everything from the OS to the controller to the hard drive, and for network volumes, it is even worse. There's no practical way to guarantee atomicity, so you have two choices: have filesystems (including remote filesystems) with rollback capabilities (which still don't completely guarantee anything) or design a file structure that achieves the same thing (which still doesn't guarantee anything). The former is nice for a lot of reasons (e.g. so that every developer doesn't have to reinvent the wheel), but isn't essential by any means. It also would greatly increase the complexity of the VFS layer and filesystems written for it, so if that is the only purpose for doing transactions, it makes a LOT more sense to implement them in a user space library instead of in the kernel.

        If your sole purpose is to be able to do multi-file rollbacks, user-space transactional support is as easy as designing your file format and/or layout around it. There are two easy ways to do this: files with built-in history and using swappable directories.

        Files with built-in history:

        For the initial modification pass, modify each file by appending what amounts to a diff footer. If an error occurs, you can undo all of the changes by truncating the files prior to the latest diff footer. Once these modifications are complete, you no longer need to worry about rolling anything back (except for cleaning up temp files if something fails in the second pass) because the data is safely on disk. (Note: this does require that the kernel and all devices and/or network disks reliably flush data to disk upon request. Don't get me started on buggy ATA drives.)

        In the second (optional) pass, you coalesce the diff into a new copy of the file and swap the coalesced version in place of the original file. This is generally an atomic operation in most operating systems. If anything fails during the second stage, it is a recoverable failure, so there is no need to roll anything back. Heck, Microsoft's file formats pretty much do this anyway. (Notice the 500 megabyte single-page MS Word document that occurs when you make lots of changes and always "save" rather than "save as".)
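        A minimal sketch of the "diff footer" idea described above, assuming POSIX calls; the function names and record format are invented for illustration. The rollback is just a truncate back to the pre-append length:

        /* Append-with-history: remember the old length, append a change
           record, and roll back by truncating to the saved length. */
        #include <sys/stat.h>
        #include <unistd.h>

        /* Append `rec` to fd; on success *undo_len holds the length to
           truncate back to if a later file in the set fails. */
        int append_record(int fd, const void *rec, size_t len, off_t *undo_len)
        {
            struct stat st;
            if (fstat(fd, &st) != 0) return -1;
            *undo_len = st.st_size;                 /* rollback point */

            if (pwrite(fd, rec, len, st.st_size) != (ssize_t)len ||
                fsync(fd) != 0) {
                ftruncate(fd, *undo_len);           /* undo the partial append */
                return -1;
            }
            return 0;                               /* fsync'd: change is durable */
        }

        /* Roll back one file to its pre-transaction length. */
        int rollback_record(int fd, off_t undo_len)
        {
            return (ftruncate(fd, undo_len) == 0 && fsync(fd) == 0) ? 0 : -1;
        }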

        Swappable directories:

        The easiest way (tm) to handle system configuration files in an atomic fashion is to modify config files in the same way you would perform a firmware update: you have an active configuration directory and an inactive configuration directory. You read the active one, make changes, and write to the inactive one. Then you trip a magic switch (tm) that says that the previously inactive directory is now active, and vice versa. Assuming you don't have out-of-order writes going on (which the kernel can't really guarantee any better than user space, sadly), this is a very easy, effective way to perform an atomic commit. And if you have an "exchange in place" operation in which the data for two files or directories in the same directory are swapped in a single atomic operation, that's a really lightweight way to implement an atomic commit/rollback mechanism without most of the complexity.
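        One common way to implement the "magic switch" is an atomic rename() of a symlink, sketched below with illustrative paths and function names; the same caveat about out-of-order writes applies, and for durability you would also fsync the containing directory:

        /* config/ is a symlink to either cfg-a/ or cfg-b/; after writing the
           inactive copy, the switch is an atomic rename() of a fresh symlink
           over the live one. */
        #include <stdio.h>
        #include <unistd.h>

        int activate(const char *live_link, const char *new_dir)
        {
            char tmp_link[4096];
            snprintf(tmp_link, sizeof tmp_link, "%s.new", live_link);

            unlink(tmp_link);                    /* clear any leftover; ENOENT is fine */
            if (symlink(new_dir, tmp_link) != 0) /* point the temp link at the new dir */
                return -1;
            return rename(tmp_link, live_link);  /* atomic flip */
        }

        /* Usage: write the updated files into "cfg-b", fsync them, then call
           activate("config", "cfg-b"); readers following "config" see the old
           tree or the new one, never a mix. */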

        Considering how easy this is to deal with in user space, the only legitimate reason I can think of to do it in the kernel is so that you can take it out of application control entirely (e.g. to make it easier to sandbox an untrusted application). Otherwise, it makes a lot more sense to do this in a library. Now if it had snapshotting where you could roll back the entire filesystem to arbitrary points in time, that might be interesting (for different reasons)... but basic transactional support in a filesystem is much less so, IMHO, unless your purpose is to be able to sandbox an application. If so, then all this other stuff basically comes for free. In that context, doing this in the filesystem layer makes sense. However, if that is not their purpose for doing this in Vista, then kernel bloat definitely strikes me as an accurate depiction.

        Just my $0.02.

        • It IS atomic IO. IO commands can be reordered as much as you want, but commands can be tagged for flush or immediate execution, and if a device doesn't comply with those, it is either badly written or it has incorporated enough safeguards that it doesn't matter. For all practical purposes it is atomic IO. In addition, how do you expect a user-mode component to be able to implement transactional services for kernel components? This is not only about begin trans/end trans for copying files. It is about bringing
          • by dgatwood ( 11270 ) on Thursday March 22, 2007 @02:36AM (#18439981) Homepage Journal

            IO commands can be reordered as much as you want, but commands can be tagged for flush or immediate execution, and if a device doesn't comply with those, it is either badly writen or it has incorporated enough safeguards so it doesn't matter.

            I'm laughing. No, really. A lot of devices lie in one way or another. It's not just a few badly written devices. You'd be shocked. My favorites are the drives that completely lie and say that the cache has been flushed when it really hasn't. With most of those drives, you can issue a second flush request and it will block, but there are some that always instantly return success for a flush request. It's not even all that uncommon, ESPECIALLY with external drives, because of bridge firmware bugs.

            I've heard some true horror stories about the number of problems companies find when qualifying off-the-shelf ATA drives for server use. If you heard them, you would cry. Truly, all you can assume is that the operating system has made a best attempt at a guarantee. Anything more than that is just kidding yourself... and you can get that same level of guarantee just as easily from user space (at least in UN*X) by executing two sync system calls in a row.

            In addition, how do you expect a user mode component to be able to implement transactional services for kernel components?

            WHAT!?!?!?!?!?! Why in you-know-where would anything in the OS kernel need filesystem transaction support? Stuff in the kernel shouldn't even be READING files, much less writing them! Yes, I'm serious. The kernel should provide only the most critical, basic system-wide OS services. Anything that requires such heavy lifting is NOT providing such a critical, low-level service, and thus, does NOT belong in the kernel. It's that sort of thinking that has led to such abominations as khttpd....

            I'm going to go hide now. I'm very scared....

        • Atomic I/O generally (I'm from the posix side of things, so maybe there are different definitions on win32) means that the entire operation's done by the kernel at once. E.g. I write 16 bytes at offset 0 to this file, and another app's writing 16 bytes at offset 0 to this file. When we're both done writing with a single write() call, the file's got 16 bytes from one of us, not some from one and some from another. No amount of userspace I/O calls will guarantee this (outside some godawful locking, which e
          • by dgatwood ( 11270 )

            Of course, this doesn't completely solve consistency for the registry. Atomicity does not guarantee consistency with shared data structures. It just guarantees that everything (right or wrong) happens at once. Only exclusive locking can guarantee consistency.

            For example, say that you have two values, A and B, and two processes, 1 and 2. Both of these are counters, both set to zero. Process 1 intends to increment both counters, so it reads the contents of both counters. Process 2 increments counter A
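            The lost-update race being described can be sketched in a few lines of POSIX C; the counter file layout and function name are invented for illustration, and the flock() call is what restores the consistency the parent is talking about:

            /* Atomic-but-unlocked read-modify-write still loses updates;
               wrapping the whole sequence in an exclusive lock fixes it. */
            #include <sys/file.h>
            #include <fcntl.h>
            #include <unistd.h>

            int increment_counter(const char *path)
            {
                int fd = open(path, O_RDWR | O_CREAT, 0644);
                if (fd < 0) return -1;

                flock(fd, LOCK_EX);           /* without this, two concurrent
                                                 increments can both read N and
                                                 both write N+1 -- one update
                                                 is lost                       */
                long n = 0;
                pread(fd, &n, sizeof n, 0);   /* read current value (atomic)   */
                n += 1;                       /* modify                        */
                pwrite(fd, &n, sizeof n, 0);  /* write back (atomic)           */

                flock(fd, LOCK_UN);
                close(fd);
                return 0;
            }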

    • by Anonymous Coward on Wednesday March 21, 2007 @05:44PM (#18435199)
      Because, unlike a DB's transactional system, Vista's is fully pluggable, meaning that anyone can build volatile transaction management into their application or driver and automagically gain the benefits of a single logging mechanism and full support for distributed transactions.

      And yes, this is a really big deal. Other than maybe OS/400 I know of no system where file system operations can be done atomically with database operations. I can poll data, write a file and update that data all in one atomic operation. Either the data has been exported and marked as such, or not. No in-betweens. No incomplete files. No complete files despite a failure to mark the data in the database. No attempt to manually compensate for errors. It's all there, one operation, atomic and automatically so. This is absolutely massive for my world where financial and business transactions are moved constantly in such a fashion.

      Yes, Microsoft does innovate sometimes. This is one of those occasions.
      • Re: (Score:2, Informative)

        by nigelo ( 30096 )
        > Yes, Microsoft does innovate sometimes. This is one of those occasions.

        Well, DEC VMS had this capability decades ago, so is it really innovation?

        http://h71000.www7.hp.com/commercial/decdtm/index.html [hp.com]
        • Re: (Score:1, Insightful)

          by Anonymous Coward
          Yes, it is innovation. The feature you linked to in VMS is a distributed transaction coordinator. Windows has had this component (called DTC) for years.

          The actual innovation is making a Kernel Transaction Manager, along with a resource manager for the filesystem. The KTM means that transactions can be inherited from parent process to child or joined by a cooperating process. Having a transactional filesystem means that all file operations can be all-or-nothing. Power goes out during a major update? No probl
      • This is cross-facility transaction management: registry and filesystem updates combined into a single transaction. The example in TFA is that an entire install can be atomic: multiple filesystems, the registry, everything appears complete and as requested, all at once, or it never happened.

        It's extensible, if TFA is to be believed at all, and the facility works. It's actually there and in use, rather than an "it'll be there someday and won't it be wizzo" promise, so I'm in "trust-but-verify" mode. It'll be int

    • Re: (Score:1, Informative)

      by Anonymous Coward
      The majority of the framework is implemented in userland. See the Distributed Transaction Coordinator service.
    • by Anonymous Coward on Wednesday March 21, 2007 @05:54PM (#18435345)
      Think outside your small box. These features can provide significantly more than current user-mode transaction management that databases (or OLTP monitors) provide today.

      For instance, how does a user-mode-controlled filesystem transaction roll back changes on reboot? Consider what happens when the change involves system files. NTFS handles this cleanly in Vista, but any attempt to do it with a user-mode service would fail because the system files would already be in use by the time the service started. Further, any in-doubt transactions would remain in-doubt until the service started (if you could get that far).

      Similarly, what about settings in the registry? Consider if the transaction spanned installing a driver and updating its settings. What happens if the system powers down midway through that update? Without kernel-level transactions that are available at boot-time you cannot easily recover before inconsistent settings are used with the driver.

      Beyond the boot-level support, another exciting aspect of the KTM is that it is a transaction manager whose transactions can be shared across multiple processes. Try that with a user-mode transaction manager like XA or DTC and watch your per-transaction performance. By managing the transaction in the kernel, the 2PC performance is significantly improved.
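      For what the cross-process sharing looks like in practice, here is a minimal sketch using the documented KTM calls GetTransactionId and OpenTransaction; how the GUID travels between the processes (pipe, command line, shared memory) is left to the application, and error handling is abbreviated:

      /* Two cooperating processes joined to one KTM transaction.
         Link with ktmw32.lib; Vista or later. */
      #include <windows.h>
      #include <ktmw32.h>

      /* Process A: create the transaction and obtain its GUID to hand to a
         cooperating process. */
      HANDLE create_shared_tx(GUID *id_out)
      {
          HANDLE tx = CreateTransaction(NULL, NULL, 0, 0, 0, 0, L"shared work");
          if (tx == INVALID_HANDLE_VALUE || !GetTransactionId(tx, id_out))
              return INVALID_HANDLE_VALUE;
          return tx;
      }

      /* Process B: join the same transaction by GUID; any transacted file or
         registry work done on this handle commits or rolls back together with
         process A's. */
      HANDLE join_shared_tx(const GUID *id)
      {
          return OpenTransaction(TRANSACTION_ALL_ACCESS, (LPGUID)id);
      }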

      In the end, NTFS and the registry live in the kernel -- so their transaction manager also has to live in the kernel. That is just the way it works.

      Another exciting aspect covered is the ability to coordinate DTC transactions with KTM transactions. Meaning, you can coordinate your SQL Server (or Oracle) updates with your file-system updates. This is cool! No longer do you have to worry about finding orphan-files in workflow applications or other applications that manage both files and relational data.

      Lastly, they are talking about full ACID properties with NTFS's transactions. Think about it -- you could update every file on your web server, in place, while it's wild with activity. All your changes are isolated from the users until you say "commit" in your application.
    • Re: (Score:1, Troll)

      by ChrisA90278 ( 905188 )
      NO, it DOES belong in the kernel. The reason is that these changes must appear atomic to other processes. You can't do this in user space.
      • Of course, you can do it in user space. You could do that in user space already in UNIX V7, and it's been used for anything from atomic system upgrades to maintaining gigabyte mail spool directories without data loss (something that Exchange doesn't seem to be able to achieve to this day). You really don't need a transaction manager in the kernel for that; any one of a number of existing kernel operations is sufficient.
  • According to the bottom of the reference page for just one KTM function [microsoft.com]:

    "Requires Ktmw32.dll."

    Why would a kernel add-on require a .dll, exactly? Did I miss some Windows fundamental about its kernel? And if it's not really a result of a kernel enhancement, is this yet another potentially useful technology specifically excluded from earlier versions of Windows entirely for business purposes instead of technological limitations?

    • For years, the "Registry" was some weird mish-mash of binary files, many of which represented Jet databases.

      Has Jet been completely abandoned in Vista?

      If so, did they switch to the slimmed down SQLServer [that was supposed to be part of WinFS]?

      • by allanw ( 842185 )
        Nope, the registry is still a bunch of binary files, located in the same folder.
      • by Foolhardy ( 664051 ) <`csmith32' `at' `gmail.com'> on Thursday March 22, 2007 @12:50AM (#18439447)
        The registry is a single root hierarchical database with registry hive files mounted at the second level (below \REGISTRY\MACHINE and \REGISTRY\USER for the computer's config and user config, respectively). The registry engine is implemented in kernel mode as an executive subsystem (inside ntoskrnl.exe), where it is known as the Configuration Manager. Registry hives use a transaction journal (like many filesystems do) to avoid corruption during a power failure or crash. Standard system hives are located in %SYSTEMROOT%\System32\Config and include SAM for local user accounts, SECURITY for various secrets held by the computer, SYSTEM for core system configuration early during boot, and SOFTWARE that stores all other config associated with the computer in the registry. Every user profile has its own registry hive for user-specific configuration. Everything above is still the same in Vista as it was in NT 3.1.

        There are two database engines that have been known as Microsoft "Jet", known as Jet Red and Jet Blue. Jet Red [wikipedia.org] is also known as the Access database engine. It is a fairly featureful SQL database. Jet Blue is now officially the Extensible Storage Engine [wikipedia.org] (ESE), and has been a system component since Windows 2000, backing WMI data, Active Directory, Exchange, and others. It is an ISAM database and is optimized for large sparse tables, and it also supports a transaction journal. Both are 100% user-mode and were not a part of the initial release of Windows NT. Microsoft has said that Jet Red is deprecated, and that future versions of the Access database engine will be integrated with Access and not have a public interface. Jet Blue's interface is well documented [microsoft.com] and will continue to see use for some time to come. Both being user-mode, dependent on Win32, and the wrong type of database (relational vs. hierarchical), the Jet engines would not be suitable replacements for the registry.

        SQL Server is a high-end SQL database engine. It was rumored that WinFS would use SQL Server Express and that Microsoft eventually plans to move some of the services that use Jet Blue to SQL Server (such as Active Directory). In any case, SQL Server is an even less plausible replacement for the registry.

        Microsoft has not gotten rid of the Registry in Vista. In fact, the new boot manager uses a registry hive to store boot configuration, replacing the old boot.ini.

        • The registry engine is implemented in kernel mode as an executive subsystem (inside ntoskrnl.exe), where it is known as the Configuration Manager. Registry hives use a transaction journal (like many filesystems do) to avoid corruption during a power failure or crash...

          So you're saying that the engine which drives "the Configuration Manager" is neither Jet Red, nor Jet Blue, nor SQLServer Express?

          So what is it? YAMIHDE [Yet Another Microsoft In-House Database Engine]?

          Everything above is still the same
    • by Cyberax ( 705495 ) on Wednesday March 21, 2007 @05:41PM (#18435139)
      Because this DLL is just an interface to kernel features.

      Windows NT was initially designed to use a single kernel for different subsystems (OS/2 subsystem, POSIX subsystems, etc.). Subsystems are implemented as dynamic modules talking with the kernel through LPC (Local Procedure Call, see http://en.wikipedia.org/wiki/Local_Procedure_Call [wikipedia.org] ). So in this case ktmw32.dll just wraps LPC calls in a nice API. That's actually a rather good design.
      • by EvanED ( 569694 ) <evaned@NOspAM.gmail.com> on Wednesday March 21, 2007 @05:46PM (#18435229)
        Windows NT was initially designed to use a single kernel for different subsystems (OS/2 subsystem, POSIX subsystems, etc.)

        Not just initially designed, it DOES use a single kernel for different subsystems. You can't get the OS/2 one any more, but the POSIX subsystem morphed into (part of) the Services for Unix which has become the Subsystem for Unix-based applications.

        On 32-bit Windows, 16-bit Windows applications are handled by the "Windows on Windows" subsystem. On 64-bit Windows, 32-bit Windows applications are also handled by a "Windows on Windows" subsystem, though a different one than WOW16.
    • by Keeper ( 56691 )
      The dll performs the mapping from the "nice" function call you see in MSDN to the system call(s) required to perform the desired operation.
    • It's a small DLL that provides you with Win32 APIs to call kernel (Nt) functions.
  • by pegr ( 46683 ) on Wednesday March 21, 2007 @05:36PM (#18435061) Homepage Journal
    "If all the changes succeed, the application commits the transaction and the changes are applied, but at any time up to that point the application can roll back the transaction and the changes are then discarded."
     
    What, was my credit card declined for my upgrade to Vista Ultimate Edition?
    • Just like any database -- from any vendor. If you start running out of system resources, your transaction will likely roll back.
  • by Ancient_Hacker ( 751168 ) on Wednesday March 21, 2007 @05:46PM (#18435239)
    Recovery? The Windows Registry is the exact opposite of recoverable:

    • "Regedit" has no Undo command. It applies changes immediately.
    • The Registry file is in a proprietary and undocumented format.
    • One scrozzled byte in the Registry can break it completely and make the system unbootable.

    Not exactly most people's idea of robust and recoverable.

    • by bmajik ( 96670 ) <matt@mattevans.org> on Wednesday March 21, 2007 @06:13PM (#18435653) Homepage Journal
      The registry can be backed up and restored, in whole or in subsections.

      The format of the registry is largely irrelevant, but it is described to some extent in the "Inside Windows xxx" series of books (which Russinovich co-authored)

      Which specific byte would you "scrozzle" in the registry to render a machine unbootable? How and why would you do it?

      Extra credit:
      Which files under /boot, /etc, /sbin, and so on would you be willing to stake your career on being "safe to corrupt by 1 byte" and still guarantee a bootable system?

      There are things to like or dislike about either a registry based approach (opaque data storage with a defined interface) vs a flat text based approach (clear data storage with an undefined interface). I don't think you make a compelling "anti-registry" argument with the points you list, however.

      • Re: (Score:3, Insightful)

        by fabs64 ( 657132 )
        Difference being you're not supposed to modify things in /boot and /sbin for all your settings, and /etc is text and therefore much harder to screw up. (you could put an EOF as the first byte in the file, but the system will still probably at least give you a "file x is empty" error message).

        What you've said is correct, the gp's gripe is really about using a binary configuration file... a fairly stupid decision but that is only my opinion.

        My argument for flat text (or XML or whatever) over binary is the sam
        • Just to be clear, there are some standard Unix programs that store binary configuration information in /etc, such as tripwire. Either way, you can quite definitely completely fuck up your system by editing the wrong files in /etc. /etc is NOT a central configuration database by any means, and it certainly is not one with a standardized interface. This is not to say that the registry is any better or worse. (Note: I hate the registry, but that's largely due to the reason below.)

          What it comes down to is
          • by fabs64 ( 657132 )
            How is the interface to /etc not standard...? Last I checked, we have the navigation of filesystems pretty down pat.
            Obviously what an app puts into its configuration files is undefined, but I'm pretty sure you can put Strings in the registry as well?
            The registry is essentially a (bad) re-implementation of a filesystem, it's no more standardised (less so really) than just USING the filesystem.
            • Re: (Score:3, Insightful)

              by bmajik ( 96670 )
              Are you seriously asking?

              What is the comment convention?
              What is the whitespace convention?
              Does this version of this flavor use spaces or tabs in /etc/fstab (or is it /etc/vfstab on today's OS?)
              What are the file locking semantics?
              Which set(s) of files are logically related to the same piece of functionality?
              What character encodings are supported?
              What _line termination_ is supported?

              NFS-export your /etc directory r/w some time. Let someone edit your /etc/fstab in TextEdit on a Mac. Make sure they cut-and-pa
        • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Wednesday March 21, 2007 @09:08PM (#18437577)

          Difference being you're not supposed to modify things in /boot and /sbin for all your settings, and /etc is text and therefore much harder to screw up. (you could put an EOF as the first byte in the file, but the system will still probably at least give you a "file x is empty" error message).

          Users aren't supposed to modify the Registry, either.

          What you've said is correct, the gp's gripe is really about using a binary configuration file... a fairly stupid decision but that is only my opinion.

          It was designed in the late '80s (well, arguably the late '70s) when a 20MHz 386 with 4MB of RAM was "bleeding edge". Text parsing is expensive.

          I think what it really comes down to is this: If you decided to write a new OS today from scratch and wanted to have a central configuration database (a good idea, as shown by /etc), would YOU come up with the windows registry?

          Windows NT wasn't designed today, it was designed in the late 80s.

          • by fabs64 ( 657132 )
            So about when did /etc become the de facto standard in Unix for configuration?
            "It was designed a long time ago" is not a valid argument for how something is today.
            "Users" aren't supposed to modify the registry, but it's still where apps store settings, what app is storing settings in /sbin or /boot?
            • by drsmithy ( 35869 )

              So about when did /etc become the de facto standard in Unix for configuration?

              No idea, but since /etc and the Registry are worlds apart in function, purpose, usage and capabilities, it's hardly relevant.

              "It was designed a long time ago" is not a valid argument for how something is today.

              It most certainly is. You can't just remake something as big as a modern operating system from the ground up every time the latest cool fad comes around.

              (And I'm sure you'd be happy to defend the train wreck that is /e

              • How is it relevant? The complaint was that twiddling the right bit in the Registry could render the whole system unbootable. The point is that the same thing can happen on any system.

                Not to dispute your conclusion, but one relevant aspect is that it seems to happen one hell of a lot more often in registry-based systems. For example, my development system at work is XP-based, and for no apparent reason my Recycle Bin disappeared. Off the desktop, My Computer, just gone. No idea where it went. I first tri
        • I am being picky here, but oh well. Most UNIX systems, and at least Linux if nothing else, have gotten rid of the EOF character in favor of just recording the file size separately. Tossing a ^D or multiple ^Ds in a file doesn't have an effect on reading it, though the parser used may make a difference.
      • Re: (Score:2, Interesting)

        by Talchas ( 954795 )
        /boot - kernels/initrds I'm not using (which should be tossed out), configs. I don't know what would happen if an insignificant byte in grub.conf changed, but I generally wouldn't bet on being safe.
        /sbin - oddly enough, lots: fdisk/cfdisk. fsck + friends would probably boot somewhat. e2image, e2label, ctrlaltdel, shutdown, mkfs*, installkernel.
        /etc - much of it, just to boot. To boot and have everything work, far less of it - but at least I could usually boot and I would usually know what was broken.

        But yes, I
        • Re: (Score:3, Informative)

          by drsmithy ( 35869 )

          However - a broken byte in an unbacked up (yeah a bad idea) registry [...]

          The Registry is automatically backed up at the completion of a successful system boot. This has been true since at least Windows 2000, and probably longer.

          • by Talchas ( 954795 )
            Huh, to where? I recently had to fix a computer which wouldn't boot because part of the registry was corrupted, and I couldn't find any backup other than in a System Restore point. Maybe I just didn't know where it was.
            • by Keeper ( 56691 )
              System restore is the backup mechanism. If you've got system restore enabled, it manages creation of restore points (which include registry backups) and deletion of old restore points on a regular basis. If your registry is corrupted, it is supposed to fallback to a version of the registry contained in a previously generated restore point. If none of the restore points have a valid snapshot of the registry (ex: you've disabled system restore or your drive is horked) you're SOL.
          • True, but it doesn't always work as advertised, especially in the case of a half-munged registry. Say a user screws something up, thinks "reboot!", and does so... after it reboots, the registry is still hosed, and if it can get even barely to a running state (not necessarily one in which you can do anything), then the bad registry gets copied over the backup, and the once-good registry is gone. I think (don't know for sure) that was one of the reasons and rationales for putting in the System R
      • either a registry based approach (opaque data storage with a defined interface) vs a flat text based approach (clear data storage with an undefined interface).

        The GP's point was not entirely about automatic recovery from a broken structure; it was about human-led recovery. In a binary dump, even with defined interfaces, a single byte can render the entire structure non-human-readable. With a plain text file, a human can look at it and try to diagnose the problem -- simply by looking.

        • by bmajik ( 96670 )
          There's no file to look at. The computer didn't boot.

          If you've booted the computer off of some other media and are attempting to repair the configuration info, you're using a special tool to do it (like a text editor for /etc files, or a registry editor for the registry)

          Also, I've not seen evidence that a single byte defect renders the registry unreadable or the computer unbootable. I don't dispute that this is possible, but I'd be curious to know if this is a practical issue or a theoretical complaint. F
      • >The registry can be backed up and restored, in whole or in subsections.

        ... not when it's scrozzled so the ReadRegistry API crashes to the BSOD. I'm not typing thru my hat-- I've had this happen more than once.

        >Which files under /boot, /etc, /sbin, and so on would you be willing to stake your career on being "safe to corrupt by 1 byte" and still guarantee a bootable system?

        Apples and Oranges. I can edit the text files and cp the binary files. Can't do either with the Registry.

        >The fo

    • by DeifieD ( 725831 )
      It does have an undo command, just not Edit - Undo. Welcome to a post-Windows 95 world.

      If you screwed up the registry you could always make it bootable again, you just had to screw around at the command line. Linux does not make leaps and bounds over this fix. X not booting? Time to go screw around at the command line again.
      • Re: (Score:3, Informative)

        You are confusing a windowing system (X11) with an OS (Linux). While you may have to "screw around on the command line" to get X working again, everything else will continue to work just fine (filesystem, webserver, internet, etc), all of which you can use either from a virtual console or a remote connection. If explorer.exe won't start, how exactly do you fix that without sitting down with a recovery CD?
        • Re: (Score:2, Informative)

          by weicco ( 645927 )
          If explorer.exe doesn't start, the user gets (at least in XP) a blue screen (not the BSOD, just a blue screen) in front of him. Then the user can press CTRL+ALT+DEL, execute Task Manager and use that to start other applications. You can test this by terminating all explorer.exe processes with Task Manager. But you must be quick, since XP will try to automatically restart the shell process if it sees it has been terminated. Btw. there's a registry entry which tells which shell should be started: HKLM\SOFTWARE\Microsoft
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      I am surprised that no one mentioned that the registry should be under CVS-style revision control. Being able to roll back/audit, and having full control over what gets to read/write the registry, would help security.
  • Clearly trolling... Since the days of at least Windows 98 there have been built-in mechanisms to back up and restore the registry. Windows XP even does this on its own when needed.
  • by JanusFury ( 452699 ) <kevin...gadd@@@gmail...com> on Wednesday March 21, 2007 @05:48PM (#18435279) Homepage Journal
    A fairly common trend these days in PC games (mostly multiplayer ones) is the use of a kernel-mode Windows driver (effectively a rootkit in most cases) to 'protect' the game from hacking. Many eastern (Korean, Taiwanese, etc.) game development companies opt to use this mechanism to secure their games instead of writing secure client and server code -- for example, GunBound, Maple Story, Ragnarok Online, Rakion, etc. Pretty much any MMO you see an ad for these days that isn't from a US or European studio uses this stuff for security. The basic mechanism it uses is that it hooks all the low-level operations you can do on your system (file access, process access, etc.) and prevents you from touching anything related to the game. The end result is that you can't even so much as end-task a misbehaving game 'protected' by this driver.

    With the huge amount of popularity this approach seems to have (I personally suspect it's a result of some very, very aggressive marketing on the part of the driver's developers), I wouldn't be surprised to see many games start demanding that users run them on Windows Vista, so that the 'protected process' mechanism can be used to fully 'protect' the games from users' interference. While you'd at least be able to end-task them, I can't say I see this as an improvement. It's saddening that many companies believe the solution to security is a series of hacks, workarounds, and black boxes - the only real solution is careful, methodical design and engineering. It seems very likely to me that within a few years, many PC games will refuse to run on anything except a Vista system with nothing but signed drivers loaded, and that's saddening. I dislike the notion that I am denied even basic rights to investigate what an application is doing on my machine simply for the sake of 'security', when it's trivial to set up a second machine to inspect and modify a game's network packets and cheat all I want.
    • by mpapet ( 761907 ) on Wednesday March 21, 2007 @06:06PM (#18435551) Homepage
      the solution to security is a series of hacks, workarounds, and black boxes

      Outside of some very specific gov't spook contract work, no one is willing to adopt the other approach because it's more expensive and requires more thought and risk than the average PHB can handle.

      Case in point: The number of cashless casino gaming systems that are centrally managed. The average casino manager understands their casino systems are a single, massive, point of failure. Just wait till they go to cashless floors and someone engineers a jackpot for themselves.

      Security programming is hard. Really hard. The developers at these companies may _want_ to do the right thing, but they don't because of the complexity and limited resources.

      The big-digital-bank-robbery just hasn't happened yet.
      • by nuzak ( 959558 )
        > Just wait till they go to cashless floors and someone engineers a jackpot for themselves.

        That would be a neat trick to do with the end-user UI of a slot machine. Physical security is a pretty important first step.

        > The big-digital-bank-robbery just hasn't happened yet.

        Actually, when you look at identity theft and fraud, you can see it's happening every single day. Think breadth and not depth.
        • by admactanium ( 670209 ) on Wednesday March 21, 2007 @07:38PM (#18436631) Homepage

          > Just wait till they go to cashless floors and someone engineers a jackpot for themselves. That would be a neat trick to do with the end-user UI of a slot machine. Physical security is a pretty important first step.
          unless i'm misunderstanding you, slot machines already work this way. you can play a slot machine with cash, or you can use a bar-coded receipt from other slot machines. i've sat down and played a few slot machines with no cash and, even after winning some money, stood up and walked away with my winnings without any additional cash. at some point i did have to put cash into the system (the first slot machine) but you can interact with slot machines, execute transactions (plays) and be paid all without cash.
        • by mpapet ( 761907 )
          It's clearly not going to happen on the floor in front of a slot machine.

          The backend running those games is on a network.

          What are the chances they've got a gateway that goes to the interweb?

          What are the chances they are relying on Windows for their security?

          The people running the casino are pretty busy with their day-to-day stuff, not infosec.
      • I don't think you will see chips go away, even more so in poker games.
    • Re: (Score:1, Informative)

      by Anonymous Coward
      The basic mechanism it uses is that it hooks all the low level operations you can do on your system (file access, process access, etc.) and prevents you from touching anything related to the game. The end result is that you can't even so much as end-task a misbehaving game 'protected' by this driver.

      This is not true, and it's also not what a rootkit is. These games use rootkits to hide files and drivers from the Windows API, which you can do yourself just by creating a share that starts with '$' or a regis
  • But I wonder how long it will be before "APK" aka "AlecStaar" comes out of his rathole to talk about how Mark is a witless academic who can't possibly know more than he does, since he's the author of ZDNet-approved APKTools 2007+++++++ 99.8.10101022 SR6.
  • ...Somewhere... ...Yeah, I know where!

    So, they reinvented the wheel once again? It seems that every database more complex than a flat file processed by a pair of simple Perl scripts has support for transactions like this. So they invented nothing, just applied an old patch to new code.
    • Re: (Score:2, Insightful)

      by EvanED ( 569694 )
      It seems that every database more complex than a flat file processed by a pair of simple Perl scripts has support for transactions like this.

      Yes, DBs do. But traditionally file systems don't. The only other system I know of that provides this and isn't a full-fledged database is VMS.
  • when the NTFS files access the GHY it extends a random signal to the DFT which emulates the chip switch Architecture (CSA). Hard drives can be extruded and raised to the eye level, the apt facing the sun and look for errors at the kernel module. Then the stubs in the IIOP cloud extends its virginity toward the distributed computing components. thats how the Eifel tower was made. Hope I cleared your doubts.
  • by NearlyHeadless ( 110901 ) on Wednesday March 21, 2007 @08:12PM (#18437009)
    I just noticed today that Russinovich's utilities are available in a single-file download: http://www.microsoft.com/technet/sysinternals/Utilities/SysinternalsSuite.mspx [microsoft.com]
    • I have a spider of sysinternals.com and all file downloads in a 24MB Zip file from the day it was announced Mark was hired by Microsoft. Not sure if anyone is interested in it. I don't have anywhere to host the files, but I can try to email them to someone via Gmail if anyone can host it or wants a copy.
      • by user24 ( 854467 )
        you won't be able to gmail .exe files, even inside a zip, unless you rename the extension.

        slap it on mytempdir.com or rapidshare.de and then anyone can grab it.
      • by Reziac ( 43301 ) *
        Well, I'd be interested, in case there's anything I missed in my own smash-and-grab from that fateful day. (Not very organized here, I'm afraid)

  • This is... (Score:5, Funny)

    by Organic Brain Damage ( 863655 ) on Wednesday March 21, 2007 @11:50PM (#18439041)
    Windows Kernel. This is Windows Kernel on ACID. Any questions?
  • Instead of putting more and more RDBMS features in file systems, why don't we drop file systems entirely and use an RDBMS instead? RDBMSes already provide all the required mechanisms for information management (transactions, security, duplication, distribution, strong typing, queries, caching, etc.), and the concepts of file/directory/hard-soft link are outdated and create more problems than they solve, in the end.
