Windows vs. Linux Security, Once More

TAGmclaren writes "The Register is running a very interesting article about Microsoft and Linux security. From the article: 'until now there has been no systematic and detailed effort to address Microsoft's major security bullet points in report form. In a new analysis published here, however, Nicholas Petreley sets out to correct this deficit, considering the claims one at a time in detail, and providing assessments backed by hard data. Petreley concludes that Microsoft's efforts to dispel Linux "myths" are based largely on faulty reasoning and overly narrow statistical analysis.' The full report is available here in HTML form, and here in PDF. Although the article does make mention of OS X, it would have been nice if the 'other' OS had been included in the detailed analysis for comparison."
  • by RAMMS+EIN ( 578166 ) on Friday October 22, 2004 @01:47PM (#10599985) Homepage Journal
    What I would like to see is some security comparison of Microsoft software and FOSS, corrected for target size.

    FOSS advocates often whine about MS insecurity, whereas MS advocates often claim MS only gets more break-ins because it's used more. The MS folks are probably not right in the Apache vs IIS case, but what about other cases? Is FOSS really more secure?

    Unfortunately, I cannot think of any good way to measure this. Perhaps a little brainstorm on /. can come up with a good test, and some people can carry it out?
  • biased? (Score:2, Interesting)

    by Cat_Byte ( 621676 ) on Friday October 22, 2004 @01:49PM (#10600008) Journal
    Windows Design
    Windows has only recently evolved from a single-user design to a multi-user model
    Windows is Monolithic by Design, not Modular
    Windows Depends Too Heavily on the RPC model
    Windows focuses on its familiar graphical desktop interface
    Linux Design
    Linux is based on a long history of well fleshed-out multi-user design
    Linux is Modular by Design, not Monolithic
    Linux is Not Constrained by an RPC Model
    Linux servers are ideal for headless non-local administration

    Oh yeah, that's unbiased.
  • SELinux (Score:5, Interesting)

    by Coryoth ( 254751 ) on Friday October 22, 2004 @01:50PM (#10600022) Homepage Journal
    I look forward to the Fedora SELinux project getting a good, workable set of policies so that SELinux can default to on for Fedora installs. Once that happens, the "Linux is more secure" claim will actually have some serious hard evidence behind it. SELinux and other Mandatory Access Control systems (anything hooking into the Linux Security Module in the kernel, really) are a serious step up in security, and there is nothing comparable in the Windows world.

    A good way to think of MAC or SELinux is as a firewall between processes on your machine and the files and devices on your machine. At the kernel level there is a set of rules, at pretty much as fine-grained a level as you care to write, about what can access what. It's well worth reading the FAQ [nsa.gov] to get a fuller idea of what we're talking about here.

    Jedidiah.
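    A rough sketch of the parent's "firewall between processes and files" analogy, as a toy rule table (purely illustrative; real SELinux policy is written in its own type-enforcement language, and the domain/type names here are just examples):

```python
# Toy mandatory access control check, loosely modeled on SELinux type
# enforcement: subjects run in domains, objects carry types, and a
# central rule table says which (domain, object type, operation)
# triples are allowed. Everything not explicitly allowed is denied.

ALLOW = {
    ("httpd_t", "httpd_content_t", "read"),
    ("httpd_t", "httpd_log_t", "append"),
    ("sshd_t", "shadow_t", "read"),
}

def mac_check(domain, obj_type, op):
    """Default deny: permit only if an explicit allow rule exists."""
    return (domain, obj_type, op) in ALLOW

# Even a fully compromised web server process stays boxed in:
print(mac_check("httpd_t", "httpd_content_t", "read"))  # True
print(mac_check("httpd_t", "shadow_t", "read"))         # False
```

    The point is that the rules are enforced in the kernel, outside the reach of the compromised process, which is what makes MAC a step beyond ordinary discretionary permissions.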
  • Re:So... (Score:5, Interesting)

    by JPriest ( 547211 ) on Friday October 22, 2004 @01:54PM (#10600114) Homepage
    Ask some people who admin a mixed environment. Our Linux boxes get owned just the same as our Windows boxes do. Compared to older versions of Windows, there is no doubt Linux owns Windows, but Server 2003 is a pretty big improvement in security over NT 4.0 or 2000. SP2 (with firewall) is also a huge improvement; it's just too bad it took MS this long to get it.
  • by pdxaaron ( 777522 ) on Friday October 22, 2004 @01:59PM (#10600302)
    Nice fuzzy logic there. How many of those 40 Microsoft vulnerabilities were related to Internet Explorer? Yes, it's Microsoft's fault for integrating it in the OS, but if you are using Server 2003 O/S to cruise the web with an admin rights role, you are the security problem, not the OS.

    Why don't we look instead at security vulnerabilities in a server OS that are relevant to the functions a server should be performing. How many vulnerabilities has IIS 6.0 had versus Apache in the year and a half Server 2003 has been out?

    Hmmm, one of those has had zero, and it sure as hell ain't Apache.
  • by QuietLagoon ( 813062 ) on Friday October 22, 2004 @02:00PM (#10600322)
    "I'm not proud," [Brian] Valentine [senior vice president in charge of Microsoft's Windows development] said, as he spoke to a crowd of developers here at the company's Windows .Net Server developer conference. "We really haven't done everything we could to protect our customers ... Our products just aren't engineered for security."

    http://www.infoworld.com/articles/hn/xml/02/09/05/020905hnmssecure.html [infoworld.com]

  • by mcrbids ( 148650 ) on Friday October 22, 2004 @02:08PM (#10600478) Journal
    OK, shocker subject line. But, in a sense, it's true!

    I've read that while XP SP2 contains numerous changes that are real improvements, it is largely a recompile of XP with a new compiler that enforces buffer bounds.

    While that doesn't fix buffer overrun bugs, it certainly limits their potential negative security implications. When will this buffer enforcement be available for gcc!?!? I know, there are 3rd party apps, but as long as it's a 3rd party app, I won't get these benefits with a torrent-obtained Debian CD...

    I would be perfectly happy to live with a few percentage points of performance hit to get this benefit!
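    For context, the "buffer enforcement" being described is a stack canary: the compiler places a random guard value between local buffers and the frame's return address and verifies it before returning. (GCC later gained this via the ProPolice patch, exposed as -fstack-protector.) A toy simulation of the mechanism, with a byte array standing in for the real stack:

```python
import os

def call_with_canary(buf_size, data):
    """Simulate a protected stack frame: [buffer | canary | return addr].
    An unchecked copy that runs past the buffer clobbers the canary
    before it can reach the return address, and the check aborts."""
    canary = os.urandom(8)
    frame = bytearray(buf_size) + bytearray(canary) + bytearray(b"RETADDR!")
    frame[:len(data)] = data  # like strcpy(): no bounds check
    if bytes(frame[buf_size:buf_size + 8]) != canary:
        return "aborted: stack smashing detected"
    return "returned normally"

print(call_with_canary(16, b"hello"))   # returned normally
print(call_with_canary(16, b"A" * 32))  # aborted: stack smashing detected
```

    As the parent notes, this doesn't fix the overflow; it just turns a hijacked return address into a controlled crash.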

  • The MS take on it (Score:5, Interesting)

    by RealProgrammer ( 723725 ) on Friday October 22, 2004 @02:11PM (#10600565) Homepage Journal

    I used to wonder at the blinders-on group think of the hidden source folks. The elaborate unreality of their arguments was a puzzle, until I figured it out [healconsulting.com]. Now I understand; it's all about the dream.

    While some might dismiss the article because its author is a Linux advocate, that's missing the point. His piece is geared toward Linux advocacy, but avoids the usual rhetoric. I kept looking for the usual Gates bashing, but didn't find any.

    What I found instead were hard facts, distilled from public data. He didn't say, "I performed some tests which prove Linux is better." He took the publicly available information, analyzed it, and reported the results.

    The response by the Microsoft marketing droids and vassal fudmeisters will be instructive to anyone who really thinks about it. Don't take away their dreams of a gold mine, at least not until they've got a Ferrari just like the guy in the next cube.

  • Re:enterprise 03 (Score:5, Interesting)

    by RealProgrammer ( 723725 ) on Friday October 22, 2004 @02:20PM (#10600776) Homepage Journal
    What people forget to mention is that MS security patches tend to require reboots, [due to] the way file locking works on Windows. Thus, whenever a "critical" flaw is announced, they have to either patch it with a workaround (firewall rules, etc.) or reboot the server.

    That's sort of the point. You have to reboot a Windows server more often. If rebooting once a month or so is acceptable (see Murphy's Law for schedule), then that's fine.

    If you want it to stay up, doing its job, then don't run Windows on it.

  • by dgatwood ( 11270 ) on Friday October 22, 2004 @02:25PM (#10600877) Homepage Journal
    Well, the article's author is right. I tried to obtain similar results for Mac OS X just out of curiosity. The search system lets you search for bugs by substring (with no way to limit results to the vulnerable OSes; if the OS name appears anywhere, the bug gets listed), and provides no severity information even after you look at the vulnerability. The only way to see the severity metric is to look at a list of every bug ever published, ranked by severity, and then go through page after page searching for the bug you're looking for.

    Basically, as bad as the CERT search system is, it's a wonder anybody can figure anything out at all about the security of computer systems. It may be better than nothing, but not by much. The security of the internet as a whole, and of individual systems, depends on CERT. For CERT's search to suck this badly hurts us all, so while I laud the author for mentioning it, that subject is worthy of an article on its own, IMHO.

  • Same old arguments.. (Score:3, Interesting)

    by d_jedi ( 773213 ) on Friday October 22, 2004 @02:29PM (#10600944)
    Just as the authors of this report claim "it takes only a little scrutiny to debunk the myths and logical errors behind the oft-repeated axioms (that suggest Windows is more secure)" their myth busting arguments also do not stand up to scrutiny.

    For one, they speak at length about the uptime of web servers. While some downtime is related to security flaws, there is not a direct correspondence between security flaws and uptime. I find this metric completely unreliable as a method of assessing web server security.

    This is essentially their only argument for the first two myths.

    For the third, they mention that flaws Microsoft will NEVER fix. They don't bother to mention that these flaws only occur in older, "obsolete" operating systems. Does Red Hat issue patches for version 1.0 anymore? The rest of their argument makes much more sense, however.

    (Haven't read the rest yet.. but this thus far makes me skeptical that this is an unbiased report.. )
  • by klingens ( 147173 ) on Friday October 22, 2004 @02:30PM (#10600964)
    You are right in your assessment: the Linux kernel is monolithic and the Windows one modular, but that's totally irrelevant.
    When did you last see a vulnerability in either kernel? NTOSKRNL (or vmlinuz) isn't really the problem; it's all the crappy rest that is. Sure, there have been some, but the vast majority of flaws are in various userland software. And taken as a whole, Windows certainly is monolithic and Linux very modular; we aren't comparing kernels, but systems.
  • by Greyfox ( 87712 ) on Friday October 22, 2004 @02:38PM (#10601073) Homepage Journal
    The kernel patch has been around for ages. Some distributions (FC2 and Mandrake, I think) apply the patch in their kernel. It breaks some legacy apps, like Wine, though.
  • Re:SELinux (Score:2, Interesting)

    by skiman1979 ( 725635 ) on Friday October 22, 2004 @02:47PM (#10601206)
    I've noticed SELinux options in the kernel configuration under Gentoo (kernel 2.6.5), as well as other security features. I've never used it though. Are these features only available in certain distros, or are they in the main kernel?
  • by hackstraw ( 262471 ) * on Friday October 22, 2004 @02:52PM (#10601268)
    When will this buffer enforcement be available for gcc!?!?

    As soon as you do a search for StackGuard http://www.cse.ogi.edu/DISC/projects/immunix/StackGuard/ [ogi.edu] or ProPolice http://www.trl.ibm.com/projects/security/ssp/ [ibm.com].
  • by Animats ( 122034 ) on Friday October 22, 2004 @02:53PM (#10601287) Homepage
    What you want for security are little processes communicating through narrow interfaces. That's RPC. The problem is that Microsoft's approach to RPC is insecure, because it comes from the old OLE system under Windows 3.1. Authorization and authentication across RPC connections is weak.

    Not that Linux is any better. The RPC systems for Linux/UNIX are clunky afterthoughts built on top of sockets.

  • by upsidedown_duck ( 788782 ) on Friday October 22, 2004 @03:04PM (#10601465)
    Windows just might be ahead of *NIX here...

    Nope. What Windows recently added, OpenBSD had been doing for quite a while. OpenBSD uses GCC, so, yes, there is a way to get GCC to provide the stack protection. Also, both OpenBSD and Solaris can provide execute protections for RAM, at least on SPARC. I'm sure other systems have this too, but I just don't know at the moment.

    Again, look to OpenBSD for the cutting edge (OpenSSH, stack protection, good firewall, audited code, clean install, etc.) and see it get implemented in Windows a few years down the road.
  • by gabebear ( 251933 ) on Friday October 22, 2004 @03:07PM (#10601542) Homepage Journal
    No matter how you slice the vulnerabilities in Win2K3 [msmvps.com], some of them are definitely part of IIS 6.0. However, I don't believe for a second that Microsoft is reporting all security problems, such as this [secunia.com] problem that M$ still hasn't acknowledged.

    The Apache group [apache.org] is much more forthcoming about security problems, and I don't trust Windows as a server platform.
  • Re:biased? (Score:5, Interesting)

    by d_jedi ( 773213 ) on Friday October 22, 2004 @03:09PM (#10601586)
    OK:
    1) Windows is not monolithic. If you or the authors of this report knew anything about OS design, you'd know this to be true.

    2) They completely forget (or choose to ignore) that Windows was multiuser starting with NT. 2000 was multiuser as well. To say that XP is the first real multiuser Windows is completely false. And they use fast user switching to imply that Windows still isn't a true multi-user OS, which is complete nonsense.

    3) From a design perspective, it makes more sense to use the same functionality to communicate with a remote or local machine (ie. it doesn't matter where the other program is).
    And Windows is not "constrained" by an RPC model (as they seem to imply by saying that Linux is not).. application programmers can CHOOSE to use RPC, or they can use other methods.

    4) This point makes no sense whatsoever:
    "By advocating this type of usage, Microsoft invites administrators to work with Windows Server 2003 at
    the server itself, logged in with Administrator privileges. This makes the Windows administrator most vulnerable to
    security flaws, because using vulnerable programs such as Internet Explorer expose the server to security risks."

    That is a complete load of bull $hit.
  • What you would need: (Score:5, Interesting)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Friday October 22, 2004 @03:14PM (#10601686) Homepage Journal
    Take one recent Microsoft Windows box, with all official patches from Microsoft and relevant vendors applied and all standard security procedures adhered to.

    Now, take a recent Linux box (the distro doesn't matter) and apply all official patches and upgrades, as released by the distro and the various package maintainers.

    Each machine must have directly comparable software installed. Where possible, this should actually be the same software. You don't want to have too many variables in this. You're going to have some, but by keeping things uniform, you should be able to keep things sane. The other thing is that you want SOME closed-source software on Linux and SOME open-source software on Windows.

    Before we do the tests, we need some diagnostics software on the machines. Memory bounds checkers, system load monitors, host intrusion detection software, etc. This will tell us what impacts we are having, beyond simply seeing if the servers and/or OS fall over or not.

    At this point, we get to the tests themselves. Throw absolutely everything you can at the computers. Use every vulnerability scanner on the planet, every worm or trojan you can locate, use stress-testers, etc. Find DoS and DDoS packages, if any have been openly released.

    Now we have some actual data, based on comparable usage and comparable attacks. The data will show that the different OS' respond differently to different attacks. (Surprise there, Sherlock!) We now need to determine which of the remaining variables are important.

    The remaining variables are "underlying flaws within the OS", "inherent flaws, due to errors in the design methodology itself" and "unequal reporting of equal errors".

    What you want to do then is a four-way analysis of variance. The first component is the different vulnerabilities found within the different applications. The second is the variation between the vulnerabilities within the OSes themselves. The third is the variation between the bugs found for any given application, OS, or combination, vs. what actually gets reported by groups such as CERT. The fourth is the difference in licensing policy.

    The NULL Hypothesis for the applications is that all applications will have roughly the same number of vulnerabilities, regardless of what they do, what they're written for, the philosophy of the programmer, and the company producing the software.

    It's doubtful you'd find enough applications, and enough vulnerabilities in each, to split the study in sufficient ways to cover all these points. However, it should be possible to collect enough to do a statistically meaningful study on a few of them.
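    For readers who haven't met it, the quantity at the heart of such an analysis of variance is the F statistic: between-group variance divided by within-group variance. A minimal one-way version, run on made-up vulnerability counts (the numbers are invented for illustration only):

```python
def one_way_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    all_x = [x for g in groups for x in g]
    grand_mean = sum(all_x) / len(all_x)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_x) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical monthly vulnerability counts for two groups of apps:
print(one_way_f([[1, 2, 3], [5, 6, 7]]))  # 24.0
```

    A large F suggests the grouping explains real variation; the NULL hypothesis above predicts F near 1.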

    The problem with AOVs is that you've got to have a lot of data, and that the amount of data you need increases very rapidly. You do get plenty of idiots out there who ignore the confidence level and even the methods of the study, looking for any slight comment that proves whatever they're wanting to say. Other times, even nominally sane people will do this, because they want/need the results too fast or too cheaply to do the work properly.

    Let's say, for example, that the number of vulnerabilities found within the applications, when studying the variance between them, is pretty random. There's no discernible pattern. Let's also say that there's no significant variance found between FOSS and Closed Source. Then, let's say that both results are significant at the 1% level, which means they will likely hold true 99% of the time.

    We could then conclude that Closed Source vs. Open Source is purely a matter of personal choice. The net difference simply isn't significant to warrant going for one and ignoring the other.

    Continuing with this fictional scenario, let's say that Linux and Windows show a VERY significant level of variance. We know, at this point, that it's not the Closed vs. Open nature,

  • by Foolhardy ( 664051 ) <[csmith32] [at] [gmail.com]> on Friday October 22, 2004 @03:26PM (#10601967)
    1. So why is IE integrated into the kernel that the server is running on top of?
    Internet Explorer has never been, isn't now, and never will be integrated into the kernel. It does not run in kernel mode. The only thing IE is integrated into is the shell environment and what Microsoft calls the "Windows Experience". This integration with the 'experience' is the excuse they used to say that it had to be a part of Windows; it's a marketing reason, not a technical one.

    The Windows shell environment is like what KDE is on Linux, and IE is integrated into it like Konqueror is integrated into KDE. The kernel has nothing to do with it.
  • Ok, its a troll... but I'll bite. First, run libsafe on linux. That will offer buffer checking for the "common" cases -- at very little cost. No "recompile" needed.

    And, you can go more paranoid from there...

    Ratboy.
  • Re:biased? (Score:4, Interesting)

    by NoOneInParticular ( 221808 ) on Friday October 22, 2004 @03:30PM (#10602051)
    On point 4: it's spot on, not bullshit. I gather you're a Windows user, but in Unix land you never, ever run the GUI as root. Never. What you do is log in as a normal user, browse the internet as a normal user, and when you've located whatever it is you need to do as root, you go to a console, su, and do the root thing there. Why? This makes sure that if you as a user catch something on the big bad internet, it doesn't hose your entire system right away. If you run this piece of shit IE as Administrator, any flaw in IE can take over your system; when run as a user, it can only take over with user privileges, which might give you time to take countermeasures.
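    The payoff of that habit can be sketched with a toy privilege model (illustrative only; in reality the kernel, not the application, enforces this):

```python
class Account:
    def __init__(self, name, is_root=False):
        self.name = name
        self.is_root = is_root

def exploited_browser_write(account, path):
    """What a hijacked browser process may overwrite, under the toy
    rule that only root can write outside the user's home directory."""
    if account.is_root or path.startswith(f"/home/{account.name}/"):
        return f"{path}: overwritten"
    return f"{path}: permission denied"

alice = Account("alice")
root = Account("root", is_root=True)

print(exploited_browser_write(alice, "/etc/passwd"))          # permission denied
print(exploited_browser_write(alice, "/home/alice/.bashrc"))  # overwritten
print(exploited_browser_write(root, "/etc/passwd"))           # overwritten
```

    Running the browser as alice limits the blast radius to alice's own files; running it as root (or Administrator) hands over the whole system.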
  • by VitaminB52 ( 550802 ) on Friday October 22, 2004 @03:38PM (#10602218) Journal
    With extremely limited exceptions, there are no sites out there that need to be fscking around with ActiveX. Any sites that require it are the result of unprofessional design and should be considered highly suspect.

    So windowsupdate.microsoft.com is an example of unprofessional design - update functionality doesn't require ActiveX in a web browser, as dozens of automatic update packages prove. I use automatic updates for many software products, and only windowsupdate.microsoft.com 'requires' ActiveX in a web browser.
    The reason MS uses ActiveX at windowsupdate.microsoft.com is simple - you have to update Windows, and if you want to update Windows in a convenient way, then you have to use ActiveX and therefore Internet Explorer. It's just part of the browser war; there is no technological necessity to use ActiveX for this purpose.

  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Friday October 22, 2004 @03:48PM (#10602401) Homepage Journal
    Since SELinux, these days, uses the LSM system, I think it's safe to assume that SELinux' impact outside of the LSM is going to be limited. I suspect it also means that SELinux would work fine with any filesystem that gets screened by the LSM.


    Looking at the list of stuff implemented, I don't really see a vast amount that's different. Both have a great deal on their wish-list, but have stuck almost exclusively to file access. Files are important, but they're not everything.


    I'll be impressed by the first security system that provides at least two of the following:


    • Per-thread MAC (control which threads can send what to which other threads, based on security model - this would only make sense if you did the same thing to shared memory)
    • Per-network connection MAC
    • Routing/Packet Mangling by Role
    • Strong Role-Based Compartmentalizing (ie: you can't fragment some file/data with a security model of X through some file/data with a security model of Y, where X and Y just don't mix, in memory, swapspace, the filesystem etc.)
    • CPU/Node Security Label Affinity (ie: you can designate some CPU and/or some node on a cluster as being permitted to run tasks with a given security label).


    I'm not completely sure the "Common Criteria" cover the higher levels of the Orange Book. Last time I looked, I didn't see anything that matched the requirements of a B1 or A1 system, but I could just have missed that part.


    Personally, I'd love it if someone could produce a patch - even if it never got certified by the NSA - that provided a complete B1 security model. I'm not sure how I'd react if Linux (or some other FOSS OS) reached the giddy heights of A1. Remember, while there are a tiny handful of companies that have released B1 or B2 certified systems, these aren't exactly buy-in-Walmarts off-the-shelf. Not many are made. Or tested. Or sold. Absolutely no commercial company, to the very best of my knowledge, produces an A1 system, except maybe as a one-off specifically to the Government.

  • Thank you very much (Score:2, Interesting)

    by RAMMS+EIN ( 578166 ) on Friday October 22, 2004 @03:50PM (#10602438) Homepage Journal
    Thank you for that post. Posts of that quality are a rarity on Slashdot...

    I still have some concerns, though.

    ``At this point, we get to the tests themselves. Throw absolutely everything you can at the computers. Use every vulnerability scanner on the planet, every worm or trojan you can locate, use stress-testers, etc. Find DoS and DDoS packages, if any have been openly released.''

    See, that, right there, leads to the problem I cannot see how to circumvent. You throw everything _you_ can find at the machines - but what if you can more easily find exploits for certain software than for others? Conversely, if you don't use available tools, but have a bunch of people try to break systems from scratch, there might be a bias in their skills that favors certain software.

    ``The third way is the variation of bugs reported for any given application, OS or combination, vs. what actually gets reported by groups such as CERT.''

    I assume this corrects the problem mentioned above somewhat. You could try to exploit your test systems by hand, then compare your stats with CERT's, and conclude that either there is no apparent bias in either set of figures, or one of them is biased - but you wouldn't know which one. Or is there a thinko on my part?

    I am an OS enthusiast, and I have a decent number of OSes here to test with. If I can really get convinced that such a test can be conducted in a meaningful way, I would like to actually do it.
  • by Foolhardy ( 664051 ) <[csmith32] [at] [gmail.com]> on Friday October 22, 2004 @03:59PM (#10602579)
    If IE should never be used on production servers, why is IE so heavily integrated into the shell environment in which the server runs?
    There really isn't a good reason, but there is an explanation. It goes back to the very first version of NT: 3.1. From then up to Win2k, the server and workstation versions of Windows used exactly the same binaries, with a few extras for server and a flag in the registry. This meant that the same exact patches could be applied to both. It was convenient because the server would provide the exact same environment that the workstations provided. Windows makes its money by being compatible. MS says it plans to fork the server and workstation codebases in the future: ws2k3 does not use the same binaries as XP does; it's not even the same version of NT (XP is 5.1 and 2k3 is 5.2). The shell is there on server in case the user runs some kind of app that depends on it. It provides a unified Windows environment.

    OH and last time I checked, many Linux distros install a shell environment, with a web browser, on a generic server install.
    BTW, to say that the integration of IE in Windows is somehow equivalent to the integration of Konquerer in KDE is rather ridiculous.
    You can remove all traces of Konqueror, not just the lanucher but all the HTML rendering and stuff, without breaking KDE? Can you have KDE without any web browser components?
    It is trivial to entirely replace one browser with another on a GNU/Linux system. Eradicating all traces of IE on MS Windows machines is nowhere near as simple.
    You can replace the shell with an entirely different one if you want on Windows. No, it isn't as easy since MS doesn't provide an uninstaller: you have a good point. It is possible; see nLite [msfn.org] or LitePC [litepc.com]. If you remove all traces of IE, it will break the shell, though. And breaking the shell will break any apps that depend on the shell, just like removing KDE would break KDE apps that depend on it.
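    The disagreement here is really about dependency direction: removing a leaf component is safe; removing a component that others depend on is not. A toy reverse-dependency check makes the shape of the argument concrete (the package names and edges are invented for illustration, not an accurate model of either system):

```python
# Map each package to the set of packages it depends on.
DEPENDS_ON = {
    "konqueror": {"khtml"},
    "kde-shell": {"kdelibs"},
    "windows-shell": {"mshtml"},  # toy premise: the shell needs IE's engine
    "iexplore": {"mshtml"},
}

def broken_by_removal(component):
    """Packages that directly depend on `component`, i.e. what breaks
    if it is removed."""
    return {pkg for pkg, deps in DEPENDS_ON.items() if component in deps}

print(broken_by_removal("konqueror"))  # set(): nothing depends on it
print(broken_by_removal("mshtml"))     # both the shell and the browser break
```

    Under this toy model, Konqueror can go without collateral damage, while ripping out the HTML engine takes the shell with it - which is the trade-off both posters are describing.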
  • It's gonna take time (Score:2, Interesting)

    by pgnas ( 749325 ) on Friday October 22, 2004 @04:00PM (#10602594) Journal
    This fight is worse than the damn US Presidential Election. "My OS is better than your OS". BLAH, BLAH.

    Do you know what matters? Cash, sales and total installations and lastly PERCEPTION.

    The truth of the matter is that it doesn't matter which is better, it only matters which LOOKS BETTER, or is PERCEIVED AS BETTER or MORE SECURE for that matter.

    Microsoft has pumped BILLIONS into making people BELIEVE that their products are the best, the most secure, and the easiest to use and maintain. How much money has gone into the marketing of Linux vs. the amount that has gone into the marketing of Windows?

    When was the last time you went to a "kick off" of a new version of the Linux Kernel?


    Some people just never learn, you can spit the facts out until you are blue in the face, but the winner will have a bigger marketing budget!

    People are warming up to Linux and are realizing the benefits of Linux; in addition, they are taking hard looks at how secure their current OS is. It will take time for the Linux-based distributions to take a foothold in the enterprise.

    The problem is that Linux is so widely dispersed, there is no way that you can compete with the Marketing power of Microsoft.

    I am pgnas and I support this message
  • by druxton ( 166270 ) on Friday October 22, 2004 @04:37PM (#10603097)
    I think it would be interesting to create a 3D plot of the threat space using the metrics from the article as axes. Comparing the shape and size might be enlightening.

    PS Note I said "it would be interesting", not "I would be willing" - it would be a daunting task.
  • by Arker ( 91948 ) on Friday October 22, 2004 @05:03PM (#10603394) Homepage

    Nice fuzzy logic there. How many of those 40 Microsoft vulnerabilities were related to Internet Explorer? Yes, it's Microsoft's fault for integrating it in the OS, but if you are using Server 2003 O/S to cruise the web with an admin rights role, you are the security problem, not the OS.

    There are so many things wrong with that statement in the real world. Perhaps the most important one conceptually, and one that none of the other replies have touched on, is that you don't actually have to intentionally run IE in order for it to get invoked! I hear all the time how if people run Mozilla instead, all the worries with IE are gone, but that's not entirely true. It's a security risk just sitting on the disk, never intentionally used by anyone.

    Second, as has already been mentioned, patches and updates? Sure, on a server you probably shouldn't be running a web browser, but you shouldn't have a videocard and monitor on a server either. In the windows world, however, both are required. There is no apt-get, there is no console-only mode.

  • Re:So... (Score:1, Interesting)

    by StillAnonymous ( 595680 ) on Friday October 22, 2004 @05:44PM (#10603815)
    "Insightful!?" I can't believe this got modded insightful. It's the most idiotic non-response you could post in a security discussion. Not to mention the fact that it's just plain incorrect.

    Most machines get compromised because of a hole in one of the applications they are running. If there's a hole in some app that a hacker finds out about and exploits before anybody is made aware of it, then it's the admin's fault? How?
  • by slipstick ( 579587 ) on Friday October 22, 2004 @06:36PM (#10604408)
    First off, this was not a "you should switch" article.

    Secondly, if you read the article at all you would see that Petreley bends over backwards to state that his methodology is one way of doing things and that others may be used.

    Thirdly, since the point of the comparison was to determine the truth of a broad statement such as "X is more/less vulnerable than Y", it is reasonable to look at the data the way he described.

    Lastly, an unstated goal of the paper was to determine whether Microsoft's statements regarding Windows being more secure than Linux are true. In that respect it is imperative that the researcher use a broad description rather than rely on a specific application or set of circumstances.

    The most important point of the article was that security can't just come down to which system has the most vulnerabilities reported, but must take into account at LEAST 3 factors: "potential damage", "technical feasibility of the attack", and the attacker's ability to execute the attack (e.g. internet connection only required, or local login necessary).

    Microsoft never does such a good job of setting up a comparison and then actually reporting the results reasonably fairly. Certainly their current marketing drive isn't presenting the facts fairly.
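    Those three factors can be folded into a single severity score. A sketch of one way to do it (the formula and weights here are invented for illustration; the article's own metric is different):

```python
def risk_score(damage, feasibility, remote):
    """Toy severity metric. `damage` and `feasibility` are rated 0-10;
    `remote` is True when no local login is needed. Remote attacks keep
    full weight; local-only attacks are discounted."""
    access_weight = 1.0 if remote else 0.4
    return damage * feasibility / 10 * access_weight

# Remotely exploitable hole giving full control:
print(risk_score(damage=10, feasibility=8, remote=True))   # 8.0
# Same flaw, but requiring a local login first:
print(risk_score(damage=10, feasibility=8, remote=False))  # 3.2
```

    The point of any such metric is exactly the parent's: a raw count of advisories says nothing until each one is weighted by what it lets an attacker do, and from where.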
  • Re:biased? (Score:2, Interesting)

    by baggins2002 ( 654972 ) on Friday October 22, 2004 @07:19PM (#10604896) Journal
    I have yet to have two users logged into a Windows machine (NT, 2000 or XP) at the same time using a GUI interface.
    Whereas 4 years ago, when I first started using Linux, I was able to have multiple users logged into a machine using a GUI interface, independently.
    These are multiple users logged into the same machine at the same time.
    As far as NT being called a multi-user system: yes, multiple people can log onto the system, but not at the same time.
    #4 The reason that most of these points don't make sense to you is that you have never truly used a multi-user system. (That's the only way I can make sense of your statement.)

    Another thing: try applying a patch to an MS system remotely. Hopefully someone's there with Administrative privileges to insert the CD or mount the partition with the CD. (This is with apps, mainly.)

    #3 The use of RPC has been encouraged by MS. (See how simple it is to program remote apps with MS!)

    #1 Okay, maybe it is modular, but it is presented to everyone else as a monolithic, totally integrated design. If I can't work with the modules or separate them out, then as far as I'm concerned it is a monolith.
  • The author needed to provide some evidence that he/she did everything possible to make the argument for Windows to be stable and secure.

    OK, I'll have to agree that there's a bias there. The language could be better, and there's a few areas that could be broadened: for one example... there are features of the Windows domain model that are neglected in this analysis... but the problem is they're not really given proper credit in pro-Windows white papers either, and the security problems of the single-sign-on environment need to be considered. From a trust point of view a group of Windows computers in a strongly configured domain can be compared to a single timeshared computer. They have the advantage of very strong hardware protection boundaries (separate machines), but a relatively weak multi-user protection model, and poor confidentiality.

    Anyway, your approach (hack the crap out of both) isn't the only way to address the question. Taking the published data and re-analysing it to a common baseline, which is the approach this paper takes, is also useful. If you tone down the language you end up with a pretty honest comparison... I didn't see a lot missing from the discussion that could strengthen the security case for Windows.
  • Re:So... (Score:2, Interesting)

    by Anonymous Coward on Friday October 22, 2004 @08:59PM (#10605717)
    Easy.

    Solid Unix admins will fight tooth and nail before any application is run as root. The only applications that should be run as root are those that directly affect the kernel or system tools that require it. Anything else, and it's the Unix admin being stupid for allowing it. If it's a business decision and the Unix admin has no choice, then they need to make the people making the decision aware that it's not the admin's fault when the box is ultimately owned.

    Otherwise, for unsafe apps, there are chroots you can use, and there are now ways to run an entire instance of Linux within Linux (I forget the name of this right now). So even if that instance is toasted, remove the file, copy a backup in, wash, rinse, repeat. (And you can just recompile it with the fix when you find it.)

    You can firewall things off - at ports, users, groups, any mix you want. There are even ACLs available you can use to lock down various things, or tie down resource usage per process, or anything else as well.

    Basically, if a Unix box gets owned, there have got to be some very serious questions about why it did.

    Most likely it was something dumb, like outdated software that should have been patched or upgraded long ago but was... shall we say... neglected.
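    The chroot approach mentioned above amounts to confining a process's file accesses to one subtree. A userland analogue of that check is validating resolved paths against a jail root (sketch only; a real chroot is enforced by the kernel, and this naive version ignores symlinks):

```python
import posixpath

def inside_jail(jail, requested):
    """True if `requested`, resolved relative to `jail`, stays inside
    the jail; '..' tricks are normalized away before the check."""
    resolved = posixpath.normpath(posixpath.join(jail, requested.lstrip("/")))
    return resolved == jail or resolved.startswith(jail + "/")

JAIL = "/srv/jail"
print(inside_jail(JAIL, "var/www/index.html"))  # True
print(inside_jail(JAIL, "../../etc/shadow"))    # False
```

    Like chroot itself, this only narrows what a broken app can reach; the serious questions about why the box got owned still apply.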
