Windows vs. Linux Security, Once More

TAGmclaren writes "The Register is running a very interesting article about Microsoft and Linux security. From the article: 'until now there has been no systematic and detailed effort to address Microsoft's major security bullet points in report form. In a new analysis published here, however, Nicholas Petreley sets out to correct this deficit, considering the claims one at a time in detail, and providing assessments backed by hard data. Petreley concludes that Microsoft's efforts to dispel Linux "myths" are based largely on faulty reasoning and overly narrow statistical analysis.' The full report is available here in HTML form, and here in PDF. Although the article does make mention of OS X, it would have been nice if the 'other' OS had been included in the detailed analysis for comparison."
  • by WIAKywbfatw ( 307557 ) on Friday October 22, 2004 @01:36PM (#10599860) Journal
    What, no macro virus-infected Word file?
    • by niittyniemi ( 740307 ) on Friday October 22, 2004 @06:35PM (#10604401) Homepage


      > What, no macro virus-infected Word file?

      Yeah, I don't know why the Register is using that dangerous HTML stuff!!

      From the article (MS description of Windows Server 2003):

      "Security level for the Internet zone is set to High. This setting
      disables scripts, ActiveX controls, Microsoft Java Virtual Machine
      (MSJVM), HTML content, and file downloads."

      There are a lot of cynics and sneerers on Slashdot who say that
      Microsoft and their "Trustworthy Computing Initiative"®
      is a lot of hot air and BS. But how many of you with your Linux boxes are
      running a browser that renders that dangerous HTML stuff, eh?!

      Hats off to MS for shipping a system that can't render HTML is what I say!

      If they carry on in the same vein, we can extrapolate that Longhorn
      will in fact ship without a TCP/IP stack. Watch the script
      kiddies try and break into that!

      Microsoft is showing the world how to innovate and move forward as
      ever...by....going backwards......errr, wait a minute....

      Anyway, I just hope that the "Microsoft Crippled Software and Environment"® (MCSE) initiative makes more headway and shows you filthy hippies/commies how things are done in the Real World!

  • Misleading article (Score:5, Insightful)

    by Anonymous Coward on Friday October 22, 2004 @01:42PM (#10599918)
    Nicholas Petreley is a Linux advocate... there is a basic problem with a partisan person presenting a "fair and balanced" argument. Kinda like doing research with fixed goals.
    • Funny; doesn't Microsoft fund most/all of the "Get the Facts" surveys?
    • No (Score:5, Insightful)

      by Anonymous Coward on Friday October 22, 2004 @02:09PM (#10600505)
      The article is not misleading just because the author is a Linux advocate.

      Now you are right if you want to remind readers to keep that in mind, but dismissing an article not on the basis of its merits, but because the author is supposedly biased (mind, you didn't show or prove in any way that he was actually biased, you just wanted us to take it for granted) is a logical fallacy.

      If you don't like the findings of the article, please tell us why; simply accusing the author of bias won't change the facts, sorry.

      Argumentum ad Hominem
      "Circumstantial: A Circumstantial Ad Hominem is one in which some irrelevant personal circumstance surrounding the opponent is offered as evidence against the opponent's position. This fallacy is often introduced by phrases such as: "Of course, that's what you'd expect him to say." The fallacy claims that the only reason why he argues as he does is because of personal circumstances, such as standing to gain from the argument's acceptance."
      http://www.fallacyfiles.org/adhomine.html
  • I'd rather see (Score:5, Insightful)

    by bucketoftruth ( 583696 ) on Friday October 22, 2004 @01:46PM (#10599977)
    I'd rather see OSX security compared to Windows. I only have one user adventurous enough to use Linux on their desktop. The rest are about 70/30 Win/Mac.
  • Microsoft products are more vulnerable, despite the statistics Microsoft wields to make you believe otherwise.
  • by RAMMS+EIN ( 578166 ) on Friday October 22, 2004 @01:47PM (#10599985) Homepage Journal
    What I would like to see is some security comparison of Microsoft software and FOSS, corrected for target size.

    FOSS advocates often whine about MS insecurity, whereas MS advocates often claim MS only gets more break-ins because it's used more. The MS folks are probably not right in the Apache vs IIS case, but what about other cases? Is FOSS really more secure?

    Unfortunately, I cannot think of any good way to measure this. Perhaps a little brainstorm on /. can come up with a good test, and some people can carry it out?
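
    A rough sketch of the normalization the parent asks for, for what it's worth; the incident counts and install-base figures below are invented placeholders, not real data:

        # Toy normalization of security incidents by install base.
        # All numbers are invented placeholders, not measurements.
        incidents = {"ProductA": 120, "ProductB": 45}                   # reported break-ins
        install_base = {"ProductA": 4_000_000, "ProductB": 1_500_000}   # deployed hosts

        for product, count in incidents.items():
            per_million = count / install_base[product] * 1_000_000
            print(f"{product}: {per_million:.1f} incidents per million hosts")

    If the bigger target has more raw incidents but fewer per deployed host, the "safety in small numbers" argument holds; if not, it doesn't.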
    • I think comparing relative security based on the number of break-ins is flawed to begin with. Just because people break into one type of system more doesn't mean it's less secure, it just means people break into it more. Would you use a system for your enterprise if you knew that there was a huge root exploit in it, but it only gets used once every few months, so chances are good you'll never get hit? I would certainly hope not.

      The only way to really measure relative security is by the number and severity of the vulnerabilities themselves.
    • by RealAlaskan ( 576404 ) on Friday October 22, 2004 @02:01PM (#10600324) Homepage Journal
      Well, he did address your question in the article [theregister.co.uk].

      He did use the Apache case as a counter-example, because that's one of the few cases where MS and Libre software compete, and Libre is the larger target. In that case, the smaller target comes out looking more vulnerable. Is there something special about Apache which makes you think that it wouldn't work that way for other Libre projects? If you know something we don't, by all means share it.

      ... I cannot think of any good way to measure this.

      Oddly enough, Petreley covered that question, too [theregister.co.uk].

    • What you would need: (Score:5, Interesting)

      by jd ( 1658 ) <imipak@ y a hoo.com> on Friday October 22, 2004 @03:14PM (#10601686) Homepage Journal
      Take one recent Microsoft Windows box, with all official patches from Microsoft and relevant vendors applied and all standard security procedures adhered to.

      Now, take a recent Linux box (the distro doesn't matter) and apply all official patches and upgrades, as released by the distro and the various package maintainers.

      Each machine must have directly comparable software installed. Where possible, this should actually be the same software. You don't want to have too many variables in this. You're going to have some, but by keeping things uniform, you should be able to keep things sane. The other thing is that you want SOME closed-source software on Linux and SOME open-source software on Windows.

      Before we do the tests, we need some diagnostics software on the machines. Memory bounds checkers, system load monitors, host intrusion detection software, etc. This will tell us what impacts we are having, beyond simply seeing if the servers and/or OS fall over or not.

      At this point, we get to the tests themselves. Throw absolutely everything you can at the computers. Use every vulnerability scanner on the planet, every worm or trojan you can locate, use stress-testers, etc. Find DoS and DDoS packages, if any have been openly released.

      Now we have some actual data, based on comparable usage and comparable attacks. The data will show that the different OSes respond differently to different attacks. (Surprise there, Sherlock!) We now need to determine which of the remaining variables are important.

      The remaining variables are "underlying flaws within the OS", "inherent flaws, due to errors in the design methodology itself" and "unequal reporting of equal errors".

      What you want to do then is a four-way analysis of variance. The first of the four components is the different vulnerabilities found within the different applications. The second is the variation between the different vulnerabilities within the OSes themselves. The third is the variation between the bugs found in any given application, OS or combination vs. what actually gets reported by groups such as CERT. The fourth would be the difference in licensing policy.

      The null hypothesis for the applications is that all applications will have roughly the same number of vulnerabilities, regardless of what they do, what they're written for, the philosophy of the programmer, and the company producing the software.

      It's doubtful you'd find enough applications, and enough vulnerabilities in each, to split the study in sufficient ways to cover all these points. However, it should be possible to collect enough to do a statistically meaningful study on a few of them.

      The problem with AOVs is that you've got to have a lot of data, and that the amount of data you need increases very rapidly. You do get plenty of idiots out there who ignore the confidence level and even the methods of the study, looking for any slight comment that proves whatever they're wanting to say. Other times, even nominally sane people will do this, because they want/need the results too fast or too cheaply to do the work properly.

      Let's say, for example, that the number of vulnerabilities found within the applications, when studying the variance between them, is pretty random. There's no discernible pattern. Let's also say that there's no significant variance found between FOSS and Closed Source. Then, let's say that we're at the 1% significance level for both of these, which means that this will likely hold true 99% of the time.

      We could then conclude that Closed Source vs. Open Source is purely a matter of personal choice. The net difference simply isn't significant enough to warrant going for one and ignoring the other.

      Continuing with this fictional scenario, let's say that Linux and Windows show a VERY significant level of variance. We know, at this point, that it's not the Closed vs. Open nature,
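
      As a toy version of the statistics described above (a one-way ANOVA standing in for the full four-way design; the vulnerability counts are invented):

          # One-way ANOVA comparing vulnerability counts per application,
          # grouped by licensing model. A simplified stand-in for the
          # four-way analysis sketched above; all counts are invented.
          from scipy.stats import f_oneway

          open_source = [12, 7, 15, 9, 11]
          closed_source = [10, 14, 8, 13, 12]

          f_stat, p_value = f_oneway(open_source, closed_source)
          print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
          # A large p fails to reject the null hypothesis that the
          # licensing model makes no difference to vulnerability counts.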

  • biased? (Score:2, Interesting)

    by Cat_Byte ( 621676 )
    Windows Design
    Windows has only recently evolved from a single-user design to a multi-user model
    Windows is Monolithic by Design, not Modular
    Windows Depends Too Heavily on the RPC model
    Windows focuses on its familiar graphical desktop interface
    Linux Design
    Linux is based on a long history of well fleshed-out multi-user design
    Linux is Modular by Design, not Monolithic
    Linux is Not Constrained by an RPC Model
    Linux servers are ideal for headless non-local administration

    Oh yeah, that's unbiased.
  • by MMaestro ( 585010 ) on Friday October 22, 2004 @01:49PM (#10600010)
    Nicholas Petreley's former lives include editorial director of LinuxWorld, executive editor of the InfoWorld Test Center, and columns in InfoWorld and ComputerWorld. He is the author of the Official Fedora Companion and is co-writing Linux Desktop Hacks for O'Reilly. He is also a part-time Evans Data analyst and a freelance writer.

    Sorry, but as long as something like 90% of all the "reports" about Linux being more secure and the "mythbusting" pieces are written by Linux supporters or people with a business interest in seeing Linux succeed, I'm going to take this with a grain of salt. I'm not trying to say Windows is safe, but you can't expect me to believe this when a "report" like this comes out every other week. If this guy were an ex-Windows programmer I'd be more understanding, but "former lives include editorial director of LinuxWorld"? Somehow I doubt they ran Windows on their machines.

    • by Anonymous Coward on Friday October 22, 2004 @02:18PM (#10600734)
      "Circumstantial: A Circumstantial Ad Hominem is one in which some irrelevant personal circumstance surrounding the opponent is offered as evidence against the opponent's position. This fallacy is often introduced by phrases such as: "Of course, that's what you'd expect him to say." The fallacy claims that the only reason why he argues as he does is because of personal circumstances, such as standing to gain from the argument's acceptance."
      http://www.fallacyfiles.org/adhomine.html
  • by NardofDoom ( 821951 ) on Friday October 22, 2004 @01:50PM (#10600020)
    There are lots of long words and numbers in that article. And it's really long. It makes my brain hurt. Linux must be complicated if it takes that long to explain its security benefits. And if they have to hide them in a long article like that

    And besides, last night while I was watching $stupid_cable_news_show I saw an ad for Microsoft. It said they were secure. Then I saw that same ad in $idiot_management_magazine. They can't advertise it if it's not true, so we should go with Windows Server 2003 for our new application.

    And, besides, I just got Microsoft to sell Windows Server 2003 for $50 per copy by saying we'd switch to Linux. Here's the box, now go install it.

    • You want to know the funniest part?

      I work in the advertising division of a large communications company as their IT manager.

      These people know that advertising is lies, lies, a huge stretch of the truth, and then a tad more lies.

      Yet they are suckered in by advertising as hard as the dolt who believes everything they see in an ad.

      If the people who make the ads are suckered by them, then the common manager and CEO has absolutely no hope but to believe every advertisement completely as truth.

      And yes,
  • SELinux (Score:5, Interesting)

    by Coryoth ( 254751 ) on Friday October 22, 2004 @01:50PM (#10600022) Homepage Journal
    I look forward to the Fedora SELinux project getting a good workable set of policies so that SELinux can default to being on for Fedora installs. Once that happens, the "Linux is more secure" claim will actually have some serious hard evidence behind it. SELinux and other Mandatory Access Control systems (anything hooking into the Linux Security Module framework in the kernel, really) are a serious step up in security, and there really is nothing comparable in the Windows world.

    A good way to think of MAC or SELinux is as a firewall between processes on your machine and the files and devices on your machine. At the kernel level there is a set of rules, at pretty much as fine-grained a level as you care to write, as to what can access what. It's well worth reading the FAQ [nsa.gov] to get a fuller idea of what we're talking about here.

    Jedidiah.
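
    A toy illustration of that "firewall between processes and files" idea; this sketches the concept only, not SELinux's actual policy language or API:

        # Kernel-style mandatory-access-control check: a rule table
        # decides which process domain may do what to which file type.
        # Anything not explicitly allowed is denied, regardless of the
        # ordinary Unix permission bits.
        POLICY = {
            ("httpd_t", "web_content_t"): {"read"},
            ("httpd_t", "httpd_log_t"): {"append"},
        }

        def mac_allows(domain, file_type, perm):
            return perm in POLICY.get((domain, file_type), set())

        print(mac_allows("httpd_t", "web_content_t", "read"))  # True
        print(mac_allows("httpd_t", "shadow_t", "read"))       # False: default deny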
    • by Anonymous Coward on Friday October 22, 2004 @02:06PM (#10600429)
      RSBAC should perhaps be considered. It is far more modular, has been in production use a lot longer, and has none of the disadvantages of SELinux (e.g. it works with any filesystem, needs no patches to filesystems, and doesn't break other kernels on the same machine). It has a list of protections, has official PaX and virus (malware) scanner support, and the developer is always willing to take ideas from people and quickly fix issues. I would be interested in a detailed comparison of the two by slashdotters, thoughts and experiences etc. But from everything I can see, RSBAC seems far superior. RSBAC.org [rsbac.org]
      • by jd ( 1658 )
        Since SELinux, these days, uses the LSM system, I think it's safe to assume that SELinux' impact outside of the LSM is going to be limited. I suspect it also means that SELinux would work fine with any filesystem that gets screened by the LSM.

        Looking at the list of stuff implemented, I don't really see a vast amount that's different. Both have a great deal on their wish-list, but have stuck almost exclusively to file access. Files are important, but they're not everything.

        I'll be impressed by the first

    • Re:SELinux (Score:3, Informative)

      by Pros_n_Cons ( 535669 )
      SELinux is already integrated into Fedora Core 3; it has a "targeted" policy that protects certain daemons like apache, nfs, etc. It's not being used as a complete solution right now, but it's still quite good.
  • ...are usually dismissed as "astroturfing" when Microsoft comes out on top.
  • meh... (Score:5, Insightful)

    by The_reformant ( 777653 ) on Friday October 22, 2004 @01:54PM (#10600133)
    Meh... any system is only as secure as its users anyway, which I suspect is why Linux has practically no problems.

    Basically, anyone who knows what a terminal window is isn't likely to run suspect attachments or leave a firewall unconfigured.

  • enterprise 03 (Score:3, Insightful)

    by man_ls ( 248470 ) on Friday October 22, 2004 @01:55PM (#10600161)
    The author bashes Enterprise Server 2003 as being unstable, quoting MS's average uptime of around 59 days as evidence of this.

    What people forget to mention is that MS security patches seem to like reboots, due to the way file locking works on Windows. Thus, whenever a "critical" flaw is released, they have to either patch it with a workaround (firewall rules, etc.) or they need to reboot the server.

    When I was running an internal-only Enterprise 2003 server (behind several firewalls, no public IP), the only reboots I ever experienced were those related to environmental factors: the power went out for longer than the UPS could keep the server online, etc.

    After I started maintaining an externally-accessible 2003 server, I configured autopatching on it from Windows Update, and it reboots itself about once a month.

    According to my calculations, this still meets the 99.9999% reliability that MS claims the server to be able to provide, on enterprise-grade hardware (and what I am running on is decidedly not enterprise-grade, unless eMachines has recently broken into the enterprise market and I forgot to read the press release.) Reboots take about 4 minutes to shut down, restart, wait for the services to resolve themselves, and try again. If I were so inclined, I could tweak this to be lower (1 whole minute of that is the web server loading before the network module does, failing to find an IP to bind to because IP isn't enabled yet, and then waiting to retry).

    It's a different design philosophy. My systems don't get "crufty" and crash, but they do have to be rebooted to apply security fixes. However, 4 minutes a month isn't a hardship, and anyone who says it is needs to either look into something transparently redundant and fault-tolerant, or reevaluate why they are so dependent on that one system in the first place.
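
    For reference, the arithmetic behind the nines argued over in the replies below (a quick sketch):

        # Availability math for a 4-minute reboot every month.
        MINUTES_PER_YEAR = 365 * 24 * 60           # 525,600

        downtime = 12 * 4                          # minutes per year
        availability = (1 - downtime / MINUTES_PER_YEAR) * 100
        print(f"{availability:.4f}% available")    # ~99.9909%: about four nines

        for nines in range(3, 7):
            budget = MINUTES_PER_YEAR * 10 ** -nines
            print(f"{nines} nines allows {budget:.2f} min/year of downtime")
        # Six nines is ~0.53 min (about 32 seconds) a year.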
    • Re:enterprise 03 (Score:4, Insightful)

      by hehman ( 448117 ) on Friday October 22, 2004 @02:15PM (#10600658) Homepage Journal
      After I started maintaining an externally-accessible 2003 server, I configured autopatching on it from Windows Update, and it reboots itself about once a month.

      According to my calculations, this still meets the 99.9999% reliability that MS claims the server to be able to provide


      Better revisit those calculations. Six 9s of reliability means that you're down for no more than 30 seconds a year. Unless your reboots take less than 3 seconds, you're already not meeting that metric.

      Besides which, five 9s (5 minutes a year) is considered carrier-grade. There isn't as firm a standard for enterprise-grade, but it usually permits occasional scheduled downtime outside business hours, and is usually in the two to four 9s range.

      BTW, I couldn't find anywhere that MS claims six nines of reliability; do you have a source?
      • Re:enterprise 03 (Score:3, Informative)

        by man_ls ( 248470 )
        My calc was flawed (the # of 9s in my head didn't match what I typed.)

        I'm citing your comment as a "reasonable standard" for enterprise grade equipment in another comment I'm writing, walking through the author's paper and clarifying important points.
    • Re:enterprise 03 (Score:5, Interesting)

      by RealProgrammer ( 723725 ) on Friday October 22, 2004 @02:20PM (#10600776) Homepage Journal
      What people forget to mention is that MS security patches seem to like reboots, [due to] the way filelocking works on Windows. Thus, whenever a "critical" flaw is released, they have to either patch it with a workaround (firewall rules, etc.) or they need to reboot the server.

      That's sort of the point. You have to reboot a Windows server more often. If rebooting once a month or so is acceptable (see Murphy's Law for schedule), then that's fine.

      If you want it to stay up, doing its job, then don't run Windows on it.

    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Friday October 22, 2004 @02:23PM (#10600840)
      According to my calculations, this still meets the 99.9999% reliability that MS claims the server to be able to provide, on enterprise-grade hardware (and what I am running on is decidedly not enterprise-grade, unless eMachines has recently broken into the enterprise market and I forgot to read the press release.)

      Nope.

      Reboots take about 4 minutes to shut down, restart, wait for the services to resolve themselves, and try again.

      4 minutes/month == 48 minutes/year.

      99.999% availability means 5.26 minutes of downtime per year.

      At best, you've got around 99.99% availability.

      However, 4 minutes a month isn't a hardship, and anyone who says it is needs to either look into something transparently redundant and fault-tolerant, or reevaluate why they are so dependent on that one system in the first place.

      It isn't about "hardship". It's about reliability. Getting that last .009% is very difficult and really doesn't give you much in terms of real world reliability for MOST business needs.

      But for those that require it, it is available. And because it is available to those, it is available to everyone. Even those who do not need it.

      Sure, my print server probably doesn't need 99.999% reliability. But because it has it, I don't have to worry about it.

      In my experience, it's the reboot that causes the hardware failures. The fewer reboots, the fewer chances for hardware failure.

  • by Mad Martigan ( 166976 ) on Friday October 22, 2004 @01:55PM (#10600178) Homepage
    Petreley concludes that Microsoft's efforts to dispel Linux "myths" are based largely on faulty reasoning and overly narrow statistical analysis.

    Microsoft, official platform of the 2004 presidential campaign.
  • Window vs OS X (Score:5, Insightful)

    by linuxpyro ( 680927 ) on Friday October 22, 2004 @01:57PM (#10600251)

    Though this was interesting, it would be nice to see something comparing OS X security to Windows security. When you think about it, they're both relatively proprietary OSes. Sure, Microsoft has their "Shared Source" stuff, and OS X is based on Open Darwin, but really the two would be a better match because of their commercial status.

    Sure, there are enterprise Linux distros from companies like Red Hat, but you can still get a lot of use out of a non-commercial distro. There are so many ways that you can change Linux to make it more secure that comparing it to a rigid commercial OS is a bit inappropriate. I'm not saying that I think the article was pointless, just that we should give equal attention to systems like OS X, or even some of the other commercial UNIX systems for that matter.

  • by QuietLagoon ( 813062 ) on Friday October 22, 2004 @02:00PM (#10600322)
    "I'm not proud," [Brian] Valentine [senior vice president in charge of Microsoft's Windows development] said, as he spoke to a crowd of developers here at the company's Windows .Net Server developer conference. "We really haven't done everything we could to protect our customers ... Our products just aren't engineered for security."

    http://www.infoworld.com/articles/hn/xml/02/09/05/020905hnmssecure.html [infoworld.com]

  • The failure of Windows and the success of Linux has nothing to do with Linux's unique design. Linux mimics Unix to some degree, which does things in layers and all that goodness. The same can be said about OpenBSD, HP-UX, OS X and a few others.
  • ...no, I'm not kidding [thewhir.com] and I'm not talking about slashdotting. So special thanks are due to the poster of the "In case of slashdotting" article.

    I haven't been able to connect to The Register for three days now, BTW. I'm glad that others have been able to.
  • OK, shocker subject line. But, in a sense, it's true!

    I've read that while XP/SP2 contains numerous changes that are real improvements, it is largely a recompile of XP with a new compiler that enforces buffer sizes.

    While that doesn't fix buffer overrun bugs, it certainly limits their potential negative security implications. When will this buffer enforcement be available for gcc!?!? I know, there are 3rd party apps, but as long as it's a 3rd party app, I won't get these benefits with a t
  • The MS take on it (Score:5, Interesting)

    by RealProgrammer ( 723725 ) on Friday October 22, 2004 @02:11PM (#10600565) Homepage Journal

    I used to wonder at the blinders-on group think of the hidden source folks. The elaborate unreality of their arguments was a puzzle, until I figured it out [healconsulting.com]. Now I understand; it's all about the dream.

    While some might dismiss the article because its author is a Linux advocate, that's missing the point. His piece is geared toward Linux advocacy, but avoids the usual rhetoric. I kept looking for the usual Gates bashing, but didn't find any.

    What I found instead were hard facts, distilled from public data. He didn't say, "I performed some tests which prove Linux is better." He took the publicly available information, analyzed it, and reported the results.

    The response by the Microsoft marketing droids and vassal fudmeisters will be instructive to anyone who really thinks about it. Don't take away their dreams of a gold mine, at least not until they've got a Ferrari just like the guy in the next cube.

  • by jxs2151 ( 554138 ) on Friday October 22, 2004 @02:11PM (#10600569)
    Read a book or two about coal, railroads, oil, computers and you'll find the verbiage and scare tactics used by the leaders of these industries are pretty similar to what Microsoft is saying now.

    "Open Source Software is inherently dangerous"

    Weasel words like "inherent" are convincing to dumbed-down folks. /. ain't buying it though. God bless individualism.

    "Statistics 'prove'..."

    Ahhhh, the old "who can argue with scientific fact" line.

    Provide us with "science" to back up this claim. Properly vetted, peer-reviewed science from an unbiased source, unfunded by those with a vested interest in the outcome please.

    The psychological use of fear and "scientific" studies to convince the average American is not new. Read carefully the language of Microsoft and you'll hear JD Rockefeller, Andrew Carnegie, JP Morgan, etc. What you have to read carefully to find is their own fear that they are losing monopoly control. Big Oil was able to buy corrupt officials and maintain their decidedly un-capitalist ways. Will Microsoft?

    • Big Oil was able to buy corrupt officials and maintain their decidedly un-capitalist ways. Will Microsoft?

      Was that a rhetorical question, or did you miss the DoJ's dance with Microsoft?

  • by Ironsides ( 739422 ) on Friday October 22, 2004 @02:15PM (#10600671) Homepage Journal
    I don't know what this guy is talking about. Windows uses spheres for permissions to run stuff. On the inside, you have all Microsoft programs and on the outside you have all non-Microsoft programs. See? They use spheres just like Linux.
  • Same old arguments.. (Score:3, Interesting)

    by d_jedi ( 773213 ) on Friday October 22, 2004 @02:29PM (#10600944)
    Just as the authors of this report claim "it takes only a little scrutiny to debunk the myths and logical errors behind the oft-repeated axioms" (that suggest Windows is more secure), their myth-busting arguments also do not stand up to scrutiny.

    For one, they speak at length about the uptime of web servers. While some downtime is related to security flaws, there is not a direct correspondence between security flaws and uptime. I find this metric completely unreliable as a method of assessing web server security.

    This is essentially their only argument for the first two myths.

    For the third, they mention flaws that Microsoft will NEVER fix. They don't bother to mention that these flaws only occur in older, "obsolete" operating systems. Does Red Hat issue patches for version 1.0 anymore? The rest of their argument makes much more sense, however.

    (Haven't read the rest yet.. but this thus far makes me skeptical that this is an unbiased report.. )
  • by man_ls ( 248470 ) on Friday October 22, 2004 @02:31PM (#10600972)
    I read through the article, and was honestly shocked at some of the claims the author made when describing Windows in relation to Linux.

    Note that the purpose of this post is not to say "omg windows >>>> linux all you penguin lovers rot in hell" like a lot of this story will be. I am merely trying to clarify some of the author's points.

    "Myth: Safety in Small Numbers"

    "Furthermore, we should see more successful attacks against Apache than against IIS, since the implication of the myth is that the problem is one of numbers, not vulnerabilities.

    Yet this is precisely the opposite of what we find, historically."

    Running through 3GB of archived log files, from Apache running on 2003 Enterprise Server, I have concluded the following:

    54% of attacks against IIS (Unicode traversal, buffer overflow, cgi, alternate data streams, etc.)

    46% of attacks against Apache (htpasswd.exe, httpd.conf, .htaccess, some odd batchfile script attacks with args to copy httpd.conf into htdocs, etc.)

    "Precisely the opposite" is hardly the right phrase to use in this situation. Sampling error among different web sites (due to different audiences, traffic rates, etc.) could easily account for the fact that IIS out-edged Apache here.

    As for the *successful* part of the author's claim, there was a 0% success rate across all queries directed at servers I either have access to logs on, or directly control. I have also experienced Apache servers being compromised (more often due to user-induced security holes than design flaws), but in the end, a user leaving a filedrop which allows PHP scripts to execute is as dangerous as a buffer overflow. They are different but functionally equivalent ways to circumvent the security of the system they run on.

    "But it does notexplain why Windows is nowhere to be found in the top 50 list. Windows does not reset its uptime counter. Obviously, no Windows-based web site has been able to run long enough without rebooting to rank among the top 50 for uptime."

    Part of the Windows operating system's underlying design involves its file locking semantics. Files in use by the operating system, providing needed functionality, can't easily be replaced while the system is running. Windows' solution? The in-use-file replacement tool is able to change the bits on disk, but not the memory addresses they map to. So the copy in memory doesn't match the copy on disk -- and the copy in memory is the old (flawed) copy. This is rectified by...you guessed it...refreshing the copy in memory. And what's the easiest way to do this? Reboot the server and reload it from disk, if the module you're talking about happens to be, say, the Local Security Authority or the Windows kernel.

    I mentioned (with some flawed math) (http://slashdot.org/comments.pl?sid=126724&cid=10600161) in more detail the reasons Windows servers are so often rebooted for patches. I did miscalculate availability. My servers average in the 99.9952% range, which means they're down for a few hours a year. Sure, not carrier grade, but not too shabby either. Well within the reasonable expectations of most businesses. (Source: http://slashdot.org/comments.pl?sid=126724&cid=10600658 by hehman) Note that the situations where Windows is likely to be used probably aren't nuclear power plants, airplane control software, etc. Thus, the additional powers of 9 aren't really a factor.

    "Myth: Open Source is Inherently Dangerous"

    I agree with the author here. Having the source code doesn't really have an impact as to whether or not a hacker can find an exploit -- there are enough tools to automate exploit finding in streamed data, especially web connections.

    "Myth: Conclusions Based on Single Metrics"

    Another valid point. One can spin statistics any way you want to, and have the math be perfectly valid, to reach a meaningless conclusion. Anyone who's taken statis
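
    For anyone who wants to repeat the log exercise described above, a rough sketch of the classification; the log path and signature lists are illustrative guesses, not the poster's actual method:

        # Classify requests in an Apache access log by which server the
        # attack appears to target. Signatures are illustrative only.
        IIS_SIGS = ("cmd.exe", "default.ida", "..%255c")     # traversal, Code Red, etc.
        APACHE_SIGS = ("htpasswd", "httpd.conf", ".htaccess")

        iis = apache = 0
        with open("access.log") as log:                      # hypothetical path
            for line in log:
                if any(sig in line for sig in IIS_SIGS):
                    iis += 1
                elif any(sig in line for sig in APACHE_SIGS):
                    apache += 1

        total = iis + apache
        if total:
            print(f"IIS-targeted: {100 * iis / total:.0f}%")
            print(f"Apache-targeted: {100 * apache / total:.0f}%")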
    • by mihalis ( 28146 ) on Friday October 22, 2004 @03:52PM (#10602468) Homepage
      "Furthermore, we should see more successful attacks against Apache than against IIS, since the implication of the myth is that the problem is one of numbers, not vulnerabilities.

      Yet this is precisely the opposite of what we find, historically."

      Running through 3GB of archived log files, from Apache running on 2003 Enterprise Server, I have concluded the following:

      54% of attacks against IIS (Unicode traversal, buffer overflow, cgi, alternate data streams, etc.)

      46% of attacks against Apache (htpasswd.exe, httpd.conf, .htaccess, some odd batchfile script attacks with args to copy httpd.conf into htdocs, etc.)

      "Precisely the opposite" is hardly the right phrase to use in this situation. Sampling error among different web sites (due to different audiences, traffic rates, etc.) could easily account for the fact that IIS out-edged Apache here.

      As for the *successful* part of the author's claim, there was a 0% success rate across all queries directed at servers I either have access to logs on, or directly control.

      Sorry, your statistical sample is not comparable. You quote Petreley discussing successful attacks, then you provide some figures about attacks on your machines, and then point out that none of them were successful. So, you aren't actually telling us anything about successful attacks, since you haven't seen any.

  • by Spoing ( 152917 ) on Friday October 22, 2004 @02:37PM (#10601057) Homepage
    Windows or Linux won't make you secure. As a friend pointed out, he's got the most secure computer around; it's in a box, unplugged. I told him I'd be glad to make it super secure for the cost of some consulting time and a full cement mixer. (I'd, of course, keep the system in the box and unplugged.)

    What this report does is focus on the default potential for abuse by looking at recent publicly known issues.

    That's handy, though if you only go with that and expect that your systems are secure you'd be better off doing what my friend did.

    General rules:

    If it's visible over a network, it's potentially abusable. (http://www.nessus.org, http://www.insecure.org/nmap)

    If it's running locally, it's also abusable. If you don't absolutely positively require it, remove it -- even if it runs by some proxy process (inetd/xinetd or a similar daemon under Windows).

    Wrappers, permissions, isolation at the router level...all should be configured.

    Monitor log files and check systems. Automate what you can.
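
    The first rule above can be checked crudely with a few lines; nessus and nmap do it far more thoroughly. The address below is a placeholder -- only scan machines you own:

        # Minimal TCP connect scan: anything that answers is a service
        # someone can poke at, and should be justified or removed.
        import socket

        HOST = "192.0.2.10"   # placeholder; substitute a host you control
        for port in (22, 25, 80, 139, 443, 445, 3389):
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.5)
                if s.connect_ex((HOST, port)) == 0:
                    print(f"port {port} is open -- is that service needed?")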

  • by Animats ( 122034 ) on Friday October 22, 2004 @02:53PM (#10601287) Homepage
    What you want for security are little processes communicating through narrow interfaces. That's RPC. The problem is that Microsoft's approach to RPC is insecure, because it comes from the old OLE system under Windows 3.1. Authorization and authentication across RPC connections is weak.

    Not that Linux is any better. The RPC systems for Linux/UNIX are clunky afterthoughts built on top of sockets.

  • Up times.... (Score:3, Insightful)

    by kmeister62 ( 699493 ) on Friday October 22, 2004 @02:55PM (#10601333)
    I found the discussion of server uptime interesting. I know that for just about every Windows security patch the server must be rebooted. Given the release of critical security patches about once a month, the servers with 56-day uptimes haven't had the required patches applied and are vulnerable. The expense of the redundant equipment necessary to keep Windows applications running with no downtime is far greater than for other OSes.
  • by Foofoobar ( 318279 ) on Friday October 22, 2004 @02:59PM (#10601385)
    I, Bill Gates, can prove that Windows is more secure than Linux. Watch as I write it down on this piece of paper. SEE? See what it says? It says 'Windows is more safe'. Don't believe me? Watch me pay someone else to say it. Believe it yet? Well how about if I buy an expensive report and tell them to say Windows is safer. Now do you believe it? NO!!

    Damn, who do I have to buy off to make you people believe that Windows is safer?
  • by paulevans ( 791844 ) on Friday October 22, 2004 @03:05PM (#10601482) Homepage
    I'm sorry, I love Linux (I use Slack at home) but this "report" seems to be nothing more than another "yea linux!" cheerleader piece. I couldn't help but notice the author's obliviousness to the other side of the argument. (I'm not saying Windows is better, far from it, BUT there are points that need to be addressed.) I was hoping that this would be a calm, well-thought-out piece on something that I believe in: Linux is more secure and stable than Windows. How wrong I was. What the Linux community needs is a comprehensive, BELIEVABLE and intelligent paper on this subject. I need something that I can take to my boss and say, "Look! See, Linux is better." If I gave him this paper, he'd laugh and say, "This is why we don't use Linux; you people are nuts."
  • by Bruha ( 412869 ) on Friday October 22, 2004 @03:18PM (#10601785) Homepage Journal
    The clear winner here is Linux. You could throw RH 9 onto the net with no firewall or anything, and there it would sit until someone hacked it.

    Do the same with XP or W2k and within 20 minutes or less it would become infected and begin zombie operations.

    Let's go to a patched server: in both cases they're still vulnerable. However, there is a clear difference in vulnerabilities, with the majority of Linux ones being in the realm of local hacks, whereas with Windows you're still dealing with remote hacks and buffer overflows.

    Yes, in many cases both problems can be blamed on 3rd-party apps, but even in kernel-to-kernel comparisons Windows is still high on the list of being vulnerable.
  • Firewalls (Score:3, Funny)

    by Anonymous Coward on Friday October 22, 2004 @03:18PM (#10601786)
    The only thing you have to ask yourself is this: Is anybody using a Windows machine as a Firewall for a bunch of Linux boxes?

    Check back here for the answer at 3am...

  • by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Friday October 22, 2004 @03:24PM (#10601930) Homepage
    Does security really matter? I mean, neither Windows nor Linux is secure; we see new ways to exploit them every few weeks or even days, be it some obscure attack via manipulated PDF files or some remote root exploit via ssh or whatever. If people don't patch their systems regularly they are lost no matter which one they use. So I see little point in comparing them on a my-system-has-more-remote-holes-than-yours basis, especially when the break-ins are more the result of the popularity of the OS/app than anything else.

    The real question should not be which system is more secure, since neither is; the question should focus on which system is easier to maintain and makes upgrades and patches easy to install. If a system fails at that, no matter how few exploits it has, one unpatched hole is enough to get you into a hell of a lot of trouble.

    Another question would be: what are the real alternatives, and what will the future bring? Just patching C buffer overflows into all eternity is really not something on which I would build 'security', and neither is the OpenBSD way of 'no features, no bugs' a real solution, since people will end up using 'features' and thus get bugs.

    • Does security really matter?

      YES

      I mean, neither Windows nor Linux is secure; we see new ways to exploit them every few weeks or even days

      Um, no, there is a huge difference. UNIX applications are usually designed in an inherently secure manner, UNIX file permissions really do make a difference, and UNIX contains mechanisms that can be used to lock the system down to the point where you can give a user "root" access and they still can't modify anything outside the sandbox you set them up in.

      Windows does not, in practice, provide some of these kinds of security at all... and others are purely nominal protections, on the level of asking people "are you going to rob the bank" and letting them into the vault if they say "no".

      So where on Linux an error that lets someone break out of a CHROOT environment is listed as an "exploit", Windows doesn't even provide that kind of environment so you don't need an exploit to compromise it. When a Windows exploit is listed, it far more often means there's a way of completely compromising your computer and taking it over, rather than just letting the attacker from one locked room to another.

      That is, if I was running an "anonymous FTP server", and the server application has a buffer overflow in it, on Windows that exploit would let them inject a backdoor and take over my machine at will, and modify the boot sequence to restart the backdoor if the computer is rebooted. On Linux, they would be able to run the backdoor as an unprivileged user, they wouldn't be able to even see any executable files that could be used to restart the backdoor, and in some configurations they wouldn't even have network access. They would need to find and run two more exploits... one to break out of the CHROOT environment and one to get root privileges... before they could do anything.

      This is called "defense in depth". UNIX systems and applications, developed in an environment where you had to give mutually untrusting users access to the same computer at the same time in a timesharing environment, don't break down and give up with one attack.

      SO...

      Linux, like all UNIX systems, is built around inherent security and defense in depth, which means that it's MUCH harder to get in and MUCH harder to do anything once you are in.

      AND...

      It's not just a matter of relative popularity... for one example: back when 2/3 of the domains out there were running Apache on Linux, the less than 1/3 remaining IIS servers still represented 2/3 of the domains on the "defaced sites" list.
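
      A bare-bones sketch of the layering described above, for a Unix daemon; it must be started as root, and the jail path and uid/gid below are placeholders:

          # Defense in depth for a daemon: confine it to a chroot jail,
          # then drop root. A compromised worker is left as an
          # unprivileged uid inside an empty directory.
          import os

          os.chroot("/var/empty")   # jail: nothing executable visible inside
          os.chdir("/")
          os.setgid(65534)          # drop group privileges first (nobody)
          os.setuid(65534)          # then drop user privileges (irreversible)
          # ...handle requests here as an unprivileged, jailed process.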
  • by dpilot ( 134227 ) on Friday October 22, 2004 @03:29PM (#10602041) Homepage Journal
    From everything I've read, NT has a good security model under the covers - even better than most Unix variants (like Linux). It's just that they don't use it effectively. Even further, the Windows culture is pretty much contrary to their making effective use of their own security.

    Perhaps Unices haven't had as much security capability, but we've had the culture to at least understand separation between root and users. We've also had the open exchange that gets bugs reported and fixed, another cultural aspect.

    But then again, now we have run-as-root Lindows / Linspire. This distribution REALLY SCARES ME, especially when they sell it into the novice market - the ones least likely to do proper maintenance and most likely to click on silly attachments. (as root, no less)

    I understand Lindows / Linspire is trying to make something simple for the novice. But IMHO, they've done it in entirely the wrong way. Far better than running the user as root would be to have a standard "user" account setup and make the new user that. Then make a comprehensive set of sudo scripts, with extensive error checking, to administer the system.

    BTW, the Linux security model isn't standing still, either.
