Windows vs. Linux Security, Once More 489

TAGmclaren writes "The Register is running a very interesting article about Microsoft and Linux security. From the article: 'until now there has been no systematic and detailed effort to address Microsoft's major security bullet points in report form. In a new analysis published here, however, Nicholas Petreley sets out to correct this deficit, considering the claims one at a time in detail, and providing assessments backed by hard data. Petreley concludes that Microsoft's efforts to dispel Linux "myths" are based largely on faulty reasoning and overly narrow statistical analysis.' The full report is available here in HTML form, and here in PDF. Although the article does make mention of OS X, it would have been nice if the 'other' OS had been included in the detailed analysis for comparison."
This discussion has been archived. No new comments can be posted.
  • Misleading article (Score:5, Insightful)

    by Anonymous Coward on Friday October 22, 2004 @01:42PM (#10599918)
    Nicholas Petreley is a Linux advocate... there is a basic problem with a partisan person presenting a "fair and balanced" argument. Kinda like doing research with fixed goals.
  • I'd rather see (Score:5, Insightful)

    by bucketoftruth ( 583696 ) on Friday October 22, 2004 @01:46PM (#10599977)
    I'd rather see OSX security compared to Windows. I only have one user adventurous enough to use Linux on their desktop. The rest are about 70/30 Win/Mac.
  • by MMaestro ( 585010 ) on Friday October 22, 2004 @01:49PM (#10600010)
    Nicholas Petreley's former lives include editorial director of LinuxWorld, executive editor of the InfoWorld Test Center, and columnist for InfoWorld and ComputerWorld. He is the author of the Official Fedora Companion and is co-writing Linux Desktop Hacks for O'Reilly. He is also a part-time Evans Data analyst and a freelance writer.

    Sorry, but as long as something like 90% of all the 'reports' about Linux being more secure and 'mythbusting' reports are written by Linux supporters or people with a business interest in seeing Linux succeed, I'm going to take this with a grain of salt. I'm not trying to say Windows is safe, but you can't expect me to believe this when a 'report' like this comes out every other week. If this guy were an ex-Windows programmer I'd be more understanding, but "former lives include editorial director of LinuxWorld"? Somehow I doubt they ran Windows on their machines.

  • by Anonymous Coward on Friday October 22, 2004 @01:50PM (#10600019)
    The article was written by a person who has a vested interest in Linux. I'm not saying that Windows is more secure or not, but you need to take the article's bias into account. It's like politics: one side always thinks their side is the right side.
  • by TrollBridge ( 550878 ) on Friday October 22, 2004 @01:51PM (#10600032) Homepage Journal
    ...are usually dismissed as "astroturfing" when Microsoft comes out on top.
  • by Anonymous Coward on Friday October 22, 2004 @01:51PM (#10600036)
    Linux's design is not vulnerable in the same ways, and no matter how successful it eventually becomes it simply cannot experience attacks to similar levels, inflicting similar levels of damage, to Windows.

    So because someone says something it should be taken as truth? Crackers are an ingenious lot, and security holes are security holes are security holes. They WILL be exploited in linux sooner or later.

    Yeah. right. And there is a world market for perhaps 5 computers. Famous last words, that.
  • by RangerRick98 ( 817838 ) on Friday October 22, 2004 @01:52PM (#10600056) Journal
    Funny; doesn't Microsoft fund most/all of the "Get the Facts" surveys?
  • meh... (Score:5, Insightful)

    by The_reformant ( 777653 ) on Friday October 22, 2004 @01:54PM (#10600133)
    meh.. any system is only as secure as its users anyway, which I suspect is why Linux has practically no problems.

    Basically, anyone who knows what a terminal window is isn't likely to run suspect attachments or to leave a firewall unconfigured.

  • enterprise 03 (Score:3, Insightful)

    by man_ls ( 248470 ) on Friday October 22, 2004 @01:55PM (#10600161)
    The author bashes Enterprise Server 2003 as being unstable, quoting MS's average uptime of around 59 days as evidence of this.

    What people forget to mention is that MS security patches tend to require reboots, due to the way file locking works on Windows. Thus, whenever a "critical" flaw is announced, they have to either mitigate it with a workaround (firewall rules, etc.) or patch and reboot the server.

    When I was running an internal-only Enterprise 2003 server (behind several firewalls, no public IP), the only reboots I ever experienced were related to environmental factors: the power went out for longer than the UPS could keep the server online, etc.

    After I started maintaining an externally-accessible 2003 server, I configured autopatching on it from Windows Update, and it reboots itself about once a month.

    According to my calculations, this still meets the 99.9999% reliability that MS claims the server can provide on enterprise-grade hardware (and what I am running on is decidedly not enterprise-grade, unless eMachines has recently broken into the enterprise market and I forgot to read the press release). Reboots take about 4 minutes to shut down, restart, wait for the services to resolve themselves, and try again. If I were so inclined, I could tweak this to be lower (one whole minute of that is the web server loading before the network module does: it can't find an IP address to bind to because IP isn't enabled yet, fails to load, then waits to retry).
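    That arithmetic is easy to sketch (assuming exactly one 4-minute reboot per month and no other downtime; the figures below follow only from that assumption):

    ```python
    # Availability implied by one 4-minute reboot per month
    # (an illustrative sketch, not a measurement).
    MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

    def availability_pct(downtime_minutes_per_year: float) -> float:
        """Return availability as a percentage of the year."""
        return 100.0 * (1.0 - downtime_minutes_per_year / MINUTES_PER_YEAR)

    downtime = 4 * 12  # 48 minutes of planned downtime per year
    print(f"{availability_pct(downtime):.4f}%")  # about 99.99% -- four nines, not six
    ```

    Worth noting: 48 minutes a year works out to roughly 99.99% availability, which is the number to compare against any vendor reliability claim.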

    It's a different design philosophy. My systems don't get "crufty" and crash, but they do have to be rebooted to apply security fixes. However, 4 minutes a month isn't a hardship, and anyone who says it is needs to either look into something transparently redundant and fault-tolerant, or reevaluate why they are so dependent on that one system in the first place.
  • Re:I'd rather see (Score:4, Insightful)

    by Lumpy ( 12016 ) on Friday October 22, 2004 @01:55PM (#10600177) Homepage
    who cares about desktop...

    I know of no one brave enough to put a Windows server DIRECTLY on the internet; Microsoft even strongly suggests that a firewall exist between the server and the net.

    Yet with the right configuration a linux or BSD box is as safe as that admin can make it.

  • by RangerRick98 ( 817838 ) on Friday October 22, 2004 @01:55PM (#10600179) Journal
    I'm not taking that statement as true simply because someone said it. If I did that, I'd believe all of Microsoft's claims in the other direction, too. I believe it's true because it's a logical argument and can be backed up with evidence, whereas the claim that if Linux were more popular it would be just as vulnerable is pure conjecture.

    Holes are holes, no doubt about that. Linux just has fewer of them because of good design principles.
  • Window vs OS X (Score:5, Insightful)

    by linuxpyro ( 680927 ) on Friday October 22, 2004 @01:57PM (#10600251)

    Though this was interesting, it would be nice to see something comparing OS X security to Windows security. When you think about it, they're both relatively proprietary OSes. Sure, Microsoft has its "Shared Source" stuff, and OS X is based on Open Darwin, but really the two would be a better match because of their commercial status.

    Sure, there are enterprise Linux distros from companies like Red Hat, but you can still get a lot of use out of a non-commercial distro. There are so many ways you can change Linux to make it more secure that comparing it to a rigid commercial OS is a bit inappropriate. I'm not saying the article was pointless, just that we should give equal attention to systems like OS X, or even some of the other commercial UNIX variants for that matter.

  • No (Score:5, Insightful)

    by Anonymous Coward on Friday October 22, 2004 @02:09PM (#10600505)
    The article is not misleading because the author is a linux advocate.

    Now you are right if you want to remind readers to keep that in mind, but dismissing an article not on the base of its merits, but because the author is supposedly biased (mind, you didn't show or prove in any way that he was actually biased, you just wanted us to take it for granted) is a logical fallacy.

    If you don't like the findings of the article, please tell us why, simply accusing the author of bias won't change the facts, sorry.

    Argumentum ad Hominem
    "Circumstantial: A Circumstantial Ad Hominem is one in which some irrelevant personal circumstance surrounding the opponent is offered as evidence against the opponent's position. This fallacy is often introduced by phrases such as: "Of course, that's what you'd expect him to say." The fallacy claims that the only reason why he argues as he does is because of personal circumstances, such as standing to gain from the argument's acceptance."
    http://www.fallacyfiles.org/adhominem.html
  • Re:I'd rather see (Score:5, Insightful)

    by caluml ( 551744 ) <slashdot@spamgoe ... minus herbivore> on Friday October 22, 2004 @02:09PM (#10600508) Homepage
    Come on, stop spreading the FUD. Of course it is possible to keep a Windows machine naked on the net without it getting cracked.

    It's the amount of work needed to keep it updated that means I'd never want to do it.
  • Re:biased? (Score:0, Insightful)

    by Anonymous Coward on Friday October 22, 2004 @02:10PM (#10600533)
    If you take good and bad statements and only apply the good to one and the bad to the other, it is bias. Do you think there isn't one bad thing about linux?
  • by jxs2151 ( 554138 ) on Friday October 22, 2004 @02:11PM (#10600569)
    Read a book or two about coal, railroads, oil, computers and you'll find the verbiage and scare tactics used by the leaders of these industries are pretty similar to what Microsoft is saying now.

    "Open Source Software is inherently dangerous"

    Weasel words like "inherent" are convincing to dumbed-down folks. /. ain't buying it though. God bless individualism.

    "Statistics 'prove'..."

    Ahhhh, the old "who can argue with scientific fact" line.

    Provide us with "science" to back up this claim. Properly vetted, peer-reviewed science from an unbiased source, unfunded by those with a vested interest in the outcome please.

    The psychological use of fear and "scientific" studies to convince the average American is not new. Read carefully the language of Microsoft and you'll hear JD Rockefeller, Andrew Carnegie, JP Morgan, etc. What you have to read carefully to find is their own fear that they are losing monopoly control. Big Oil was able to buy corrupt officials and maintain their decidedly un-capitalist ways. Will Microsoft?

  • Re:enterprise 03 (Score:4, Insightful)

    by hehman ( 448117 ) on Friday October 22, 2004 @02:15PM (#10600658) Homepage Journal
    After I started maintaining an externally-accessible 2003 server, I configured autopatching on it from Windows Update, and it reboots itself about once a month.

    According to my calculations, this still meets the 99.9999% reliability that MS claims the server to be able to provide


    Better revisit those calculations. Six 9s of reliability means that you're down for no more than 30 seconds a year. Unless your reboots take less than 3 seconds, you're already not meeting that metric.

    Besides which, five 9s (5 minutes a year) is considered carrier-grade. There isn't as firm a standard for enterprise-grade, but it usually permits occasional scheduled downtime outside business hours, and is usually in the two to four 9s range.
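    The nines-to-downtime conversion above is easy to sanity-check (a quick sketch, assuming a 365.25-day year):

    ```python
    # Annual downtime budget implied by "N nines" of availability.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600  # 31,557,600

    def allowed_downtime_seconds(nines: int) -> float:
        """Seconds of downtime per year permitted at (1 - 10**-nines) availability."""
        return SECONDS_PER_YEAR * 10 ** -nines

    for n in (3, 4, 5, 6):
        print(f"{n} nines: {allowed_downtime_seconds(n):8.1f} s/year")
    ```

    Six nines comes out to about 31.6 seconds a year, and five nines to about 5.3 minutes, matching the figures above.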

    BTW, I couldn't find anywhere that MS claims six nines of reliability; do you have a source?
  • by Anonymous Coward on Friday October 22, 2004 @02:18PM (#10600734)
    "Circumstantial: A Circumstantial Ad Hominem is one in which some irrelevant personal circumstance surrounding the opponent is offered as evidence against the opponent's position. This fallacy is often introduced by phrases such as: "Of course, that's what you'd expect him to say." The fallacy claims that the only reason why he argues as he does is because of personal circumstances, such as standing to gain from the argument's acceptance."
    http://www.fallacyfiles.org/adhominem.html
  • by RangerRick98 ( 817838 ) on Friday October 22, 2004 @02:18PM (#10600745) Journal
    They addressed the Forrester survey's problem with patch speed very clearly, I thought. And your comment about the paper's professionalism is irrelevant to the points it makes.
  • Re:I'd rather see (Score:1, Insightful)

    by Anonymous Coward on Friday October 22, 2004 @02:22PM (#10600819)
    The scary part is that at bootup, the Microsoft firewall, or ANY software firewall, is inactive for a long time after the Ethernet and networking come up.

    There is a significant window of attack between the network coming online and the firewall starting.

    Linux, BSD, and everything else have firewall rules applied BEFORE the network interface is even started.

    Problem #1 of Windows: I cannot control the boot order of resources and drivers. This is bad for anything but home or play use.

    The fact that Apache servers significantly outnumber IIS servers is another example... Windows servers are dangerous even when behind a hardware firewall.
  • by Anonymous Coward on Friday October 22, 2004 @02:22PM (#10600825)
    Are you kidding? The Register is not going to get slashdotted.
  • Re:So... (Score:5, Insightful)

    by Anonymous Coward on Friday October 22, 2004 @02:27PM (#10600913)
    Our Linux boxes get owned just the same as our Windows boxes do.

    Then your Linux admins don't know what they're doing.
  • by man_ls ( 248470 ) on Friday October 22, 2004 @02:31PM (#10600972)
    I read through the article, and was honestly shocked at some of the claims the author made when describing Windows in relation to Linux.

    Note that the purpose of this post is not to say "omg windows >>>> linux all you penguin lovers rot in hell" like a lot of this story will be. I am merely trying to clarify some of the author's points.

    "Myth: Safety in Small Numbers"

    "Furthermore, we should see more successful attacks against Apache than against IIS, since the implication of the myth is that the problem is one of numbers, not vulnerabilities.

    Yet this is precisely the opposite of what we find, historically."

    Running through 3GB of archived log files, from Apache running on 2003 Enterprise Server, I have concluded the following:

    54% of attacks against IIS (Unicode traversal, buffer overflow, cgi, alternate data streams, etc.)

    46% of attacks against Apache (htpasswd.exe, httpd.conf, .htaccess, some odd batchfile script attacks with args to copy httpd.conf into htdocs, etc.)

    "Precisely the opposite" is hardly the right phrase to use in this situation. Sampling error among different web sites (due to different audiences, traffic rates, etc.) could easily account for the fact that IIS out-edged Apache here.

    As for the *successful* part of the author's claim, there was a 0% success rate across all queries directed at servers I either have log access to or directly control. I have also seen Apache servers compromised (more often due to user-induced security holes than design flaws), but in the end, a user leaving a filedrop that allows PHP scripts to execute is as dangerous as a buffer overflow. They are different but functionally equivalent ways to circumvent the security of the system they run on.
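    The kind of log classification described above can be sketched with a few substring signatures (an illustrative toy, not the actual script used; the signature lists are assumptions based on the attack types named earlier):

    ```python
    # Toy classifier: tag a web-server request line as an IIS-targeted attack,
    # an Apache-targeted attack, or benign. Signatures are illustrative only.
    IIS_SIGNATURES = ("cmd.exe", "%c0%af", "default.ida")          # traversal, overflow probes
    APACHE_SIGNATURES = ("htpasswd", "httpd.conf", ".htaccess")    # Apache config probes

    def classify(request_line: str) -> str:
        line = request_line.lower()
        if any(sig in line for sig in IIS_SIGNATURES):
            return "iis"
        if any(sig in line for sig in APACHE_SIGNATURES):
            return "apache"
        return "benign"

    sample = [
        'GET /scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir HTTP/1.0',
        'GET /cgi-bin/htpasswd.exe HTTP/1.0',
        'GET /index.html HTTP/1.1',
    ]
    counts = {}
    for line in sample:
        kind = classify(line)
        counts[kind] = counts.get(kind, 0) + 1
    print(counts)  # {'iis': 1, 'apache': 1, 'benign': 1}
    ```

    Run over a real archive, the two counters give exactly the kind of IIS-vs-Apache attack split quoted above.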

    "But it does not explain why Windows is nowhere to be found in the top 50 list. Windows does not reset its uptime counter. Obviously, no Windows-based web site has been able to run long enough without rebooting to rank among the top 50 for uptime."

    Part of the Windows operating system's underlying design involves its file-locking semantics. Files in use by the operating system, providing needed functionality, can't easily be replaced while the system is running. Windows' solution? The in-use-file replacement tool is able to change the bits on disk, but not the memory addresses they map to. So the copy in memory doesn't match the copy on disk -- and the copy in memory is the old (flawed) copy. This is rectified by...you guessed it...refreshing the copy in memory. And what's the easiest way to do that? Reboot the server and reload it from disk, if the module you're talking about happens to be, say, the Local Security Authority or the Windows kernel.

    I mentioned (with some flawed math) (http://slashdot.org/comments.pl?sid=126724&cid=10600161) in more detail the reasons Windows servers are often down there on the patches. I did miscalculate availability. My servers average in the 99.9952% range, which means they're down for roughly 25 minutes a year. Sure, not carrier grade, but not too shabby either. Well within the reasonable expectations of most businesses. (Source: http://slashdot.org/comments.pl?sid=126724&cid=10600658 by hehman) Note that the situations where Windows is likely to be used probably aren't nuclear power plants, airplane control software, etc. Thus, the additional powers of 9 aren't really a factor.

    "Myth: Open Source is Inherently Dangerous"

    I agree with the author here. Having the source code doesn't really have an impact as to whether or not a hacker can find an exploit -- there are enough tools to automate exploit finding in streamed data, especially web connections.

    "Myth: Conclusions Based on Single Metrics"

    Another valid point. One can spin statistics any way you want, with perfectly valid math, and reach a meaningless conclusion. Anyone who's taken statistics knows this.
  • by Spoing ( 152917 ) on Friday October 22, 2004 @02:37PM (#10601057) Homepage
    Windows or Linux won't make you secure. As a friend pointed out, he's got the most secure computer around; it's in a box, unplugged. I told him I'd be glad to make it super secure for the cost of some consulting time and a full cement mixer. (I'd, of course, keep the system in the box and unplugged.)

    What this report does is focus on the default potential for abuse by looking at recent publicly known issues.

    That's handy, though if you only go with that and expect that your systems are secure you'd be better off doing what my friend did.

    General rules;

    If it's visible over a network, it's potentially abusable. (http://www.nessus.org, http://www.insecure.org/nmap)

    If it's running locally, it's also abusable. If you don't absolutely positively require it, remove it -- even if it runs via some proxy process (inetd/xinetd, or a similar daemon under Windows).

    Wrappers, permissions, isolation at the router level...all should be configured.

    Monitor log files and check systems. Automate what you can.
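    The first rule above can be illustrated with a minimal TCP connect scan (a sketch only; a real audit should use tools like the nessus and nmap links given earlier):

    ```python
    # Minimal "what's visible on the network?" check: try a TCP connect
    # to each port and report the ones that accept.
    import socket

    def open_ports(host: str, ports) -> list:
        """Return the subset of `ports` on `host` that accept a TCP connection."""
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.2)
                if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                    found.append(port)
        return found

    if __name__ == "__main__":
        # Scan the local machine's low ports; anything listed here is
        # visible to the network and therefore potentially abusable.
        print(open_ports("127.0.0.1", range(1, 1025)))
    ```

    Everything this prints is a service worth justifying; per the rule above, anything you can't justify should be removed or firewalled.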

  • by agallagh42 ( 301559 ) on Friday October 22, 2004 @02:43PM (#10601155) Homepage
    "And how do you download the latest service packs?"

    Certainly not by downloading them directly to the server via IE, that's for sure.

    In small shops, you would download the patches with your workstation, and then copy them to the server over the network or using a CD-R, and install them manually.

    In larger shops, you would set up a Software Update Services (SUS) server or SMS server to deploy the patches to the servers exactly when you're ready to do so (after testing in your lab first, of course).

    You should never be using IE on a critical production server. End of story.
  • by Lumpy ( 12016 ) on Friday October 22, 2004 @02:45PM (#10601186) Homepage
    You want to know the funniest part.

    I work in the advertising division of a large communications company as their IT manager.

    These people know that advertising is lies, lies, a huge stretch of the truth, and then a tad more lies.

    Yet they are suckered in by advertising as hard as the dolt who believes everything they see in an ad.

    If the people who make the ads are suckered by them, then the common manager and CEO has absolutely no hope but to believe every advertisement completely as truth.

    And yes, this fact makes me really sad and makes me want to give up and say... Bahhhhhh with the rest of the sheep.
  • Up times.... (Score:3, Insightful)

    by kmeister62 ( 699493 ) on Friday October 22, 2004 @02:55PM (#10601333)
    I found the discussion of server uptime interesting. I know that for just about every Windows security patch the server must be rebooted. Given that critical security patches are released about once a month, the servers with 56-day uptimes haven't had the required patches applied and are vulnerable. The expense of the redundant equipment necessary to keep Windows applications running with no downtime is far greater than for other OSes.
  • by paulevans ( 791844 ) on Friday October 22, 2004 @03:05PM (#10601482) Homepage
    I'm sorry, I love Linux (I use Slack at home), but this "report" seems to be nothing more than another "yea Linux!" cheerleader piece. I couldn't help but notice the author's obliviousness to the other side of the argument. (I'm not saying Windows is better, far from it, BUT there are points that need to be addressed.) I was hoping that this would be a calm, well-thought-out piece on something that I believe in: Linux is more secure and stable than Windows. How wrong I was. What the Linux community needs is a comprehensive, BELIEVABLE, and intelligent paper on this subject. I need something that I can take to my boss and say, "Look! See, Linux is better." If I gave him this paper, he'd laugh and say, "This is why we don't use Linux, you people are nuts."
  • Re:SELinux (Score:3, Insightful)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Friday October 22, 2004 @03:19PM (#10601805) Homepage Journal
    SELinux uses the LSM, and the LSM is now included in the standard Linux kernel. I believe that means that most/all of the kernel side of SELinux is also in the standard kernel.


    The tricky part is that there are a lot of affected user applications. These are not part of the standard Linux kernel (well, duh! :) and I'm unaware of any of the application writers including the SELinux code in their standard projects. For the most part, you need to go to the SELinux website for the user-space stuff.

  • by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Friday October 22, 2004 @03:24PM (#10601930) Homepage
    Does security really matter? I mean, neither Windows nor Linux is secure; we see new ways to exploit them every few weeks or even days, be it obscure attacks via manipulated PDF files or remote root exploits via ssh or whatever. If people don't patch their systems regularly they are lost no matter which one they use. So I see little point in comparing them on a my-system-has-more-remote-holes-than-yours basis, especially when the break-ins are more a result of the popularity of the OS/app than anything else.

    The real question should not be which system is more secure, since neither is; the question should focus on which system is easier to maintain and makes upgrades and patches easy to install. If a system fails at that, no matter how few exploits it has, one unpatched hole is enough to get you into a hell of a lot of trouble.

    Another question would be: what are the real alternatives, and what will the future bring? Just patching C buffer overflows into all eternity is really not something on which I would build 'security', and neither is the OpenBSD way of 'no features, no bugs' a real solution, since people will end up using 'features' and thus get bugs.

  • by dpilot ( 134227 ) on Friday October 22, 2004 @03:29PM (#10602041) Homepage Journal
    From everything I've read, NT has a good security model under the covers - even better than most Unix variants (like Linux). It's just that it isn't used effectively. Even further, the Windows culture is pretty much contrary to making effective use of that security.

    Perhaps Unices haven't had as much security capability, but we've had the culture to at least understand separation between root and users. We've also had the open exchange that gets bugs reported and fixed, another cultural aspect.

    But then again, now we have run-as-root Lindows / Linspire. This distribution REALLY SCARES ME, especially when they sell it into the novice market - the users least likely to do proper maintenance and most likely to click on silly attachments (as root, no less).

    I understand Lindows / Linspire is trying to make something simple for the novice. But IMHO, they've done it in entirely the wrong way. Far better than running the user as root would be a standard setup with a "user" account, making the new user that. Then make a comprehensive set of sudo scripts, with extensive error checking, to administer the system.

    BTW, the Linux security model isn't standing still, either.
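    The "sudo scripts with extensive error checking" idea might look something like this sketch (the service names and the /sbin/service path are illustrative assumptions, not a real distribution's setup):

    ```python
    # Sketch of a privileged-action wrapper: validate the request against
    # a whitelist BEFORE building the command that will run under sudo.
    ALLOWED_SERVICES = {"cups", "sshd", "crond"}   # illustrative whitelist
    ALLOWED_ACTIONS = {"start", "stop", "restart"}

    def build_service_command(service: str, action: str) -> list:
        """Return the argv to run under sudo, or raise ValueError on bad input."""
        if service not in ALLOWED_SERVICES:
            raise ValueError(f"unknown service: {service!r}")
        if action not in ALLOWED_ACTIONS:
            raise ValueError(f"unknown action: {action!r}")
        # Returning an argv list (not a shell string) sidesteps shell injection.
        return ["sudo", "/sbin/service", service, action]

    print(build_service_command("sshd", "restart"))
    ```

    The point of the design is that the novice never types an arbitrary root command: anything outside the whitelist is rejected before sudo is ever involved.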
  • Re:Reverse FUD... (Score:2, Insightful)

    by DannyO152 ( 544940 ) on Friday October 22, 2004 @03:38PM (#10602213)
    It is an analysis. So then the following questions apply:
    • Are sources cited?
    • Are sources credible?
    • Did you check the sources and find the citation was accurate and not out of context or abridged to remove inconvenient parts?
    • Was the analysis presented in such a way that alternative interpretations of the facts were noted and discussed fairly?
    • Can you follow the logic or do you find there are assumed facts not in evidence?
    • Is the author's past history of any advocacy well-disclosed so the reader can be forewarned as to any potential bias?
    • Were the experiments/benchmarks single-blind or double-blind or no-blind?
    • Is the experiment/benchmark methodology well-explained and the results reproducible?
    • Where people were surveyed, were the subjects selected randomly (and is the selection method disclosed)?
    I haven't looked closely so I will not answer the question about reverse FUD. In any case, I have, at best, a mild interest in Windows TCO or Linux Security studies. I am not a PHB and I do not serve under one, so when I check slashdot comments about these studies, it's to see if someone criticizes the study in terms of the bases I set forth above. Because if a study is dubious, no matter what it advocates, a commenter will point flaws out in a specific manner. I believe there's some signal amidst the noise -- I must be an optimist.
  • by flossie ( 135232 ) on Friday October 22, 2004 @03:38PM (#10602217) Homepage
    Internet Explorer has never been, isn't now and never will be integrated into the kernel. It does not run in kernel mode. The only thing that IE is integrated in is the shell environment

    Fair enough - I'll modify my question then. If IE should never be used on production servers, why is IE so heavily integrated into the shell environment in which the server runs?

    BTW, to say that the integration of IE in Windows is somehow equivalent to the integration of Konqueror in KDE is rather ridiculous. It is trivial to entirely replace one browser with another on a GNU/Linux system. Eradicating all traces of IE on MS Windows machines is nowhere near as simple.

  • Re:biased? (Score:2, Insightful)

    by NoOneInParticular ( 221808 ) on Friday October 22, 2004 @03:42PM (#10602287)
    I think you misunderstand what most people mean by multi-user. In computing land this means that the operating system supports multiple users doing stuff on the machine at the same time, not that you have different logins/passwords for an essentially single-user environment. Although the NT kernel indeed has true multi-user support at its core (*), you need the 'Terminal Server' edition of the OS, not the 'Home', 'Professional', or even the 'Server' editions; those are crippled to single-user operation. IIRC, TS was introduced with w2k, not before.

    (*) Citrix made use of this by offering a true multi-user windows before Microsoft did.

  • Re:biased? (Score:4, Insightful)

    by Spoing ( 152917 ) on Friday October 22, 2004 @03:50PM (#10602427) Homepage
    I don't think you understand just how limited Windows is.

    1) Windows is not monolithic. If you or the authors of this report knew anything about OS design, you'd know this to be true.

    OK. Remove IE. Boot without a GUI. Change libraries that are currently in use while the system is running.

    2) They completely forget (or choose to ignore) that Windows was multiuser starting with NT. 2000 was multiuser as well. To say that XP is the first real multiuser Windows is completely false. And they use fast user switching to imply that Windows still isn't a true multi-user OS, which is complete nonsense.

    So, given any hardware you wish, how many different and unique users can use 1 NT 3.x or 4.x system at the same time? What restrictions do you encounter, if any? Are there differences between desktop and 'server' versions of NT in this respect?

    [rpc] -- I'll let someone else address that.

    4) This point makes no sense whatsoever: "By advocating this type of usage, Microsoft invites administrators to work with Windows Server 2003 at the server itself, logged in with Administrator privileges. This makes the Windows administrator most vulnerable to security flaws, because using vulnerable programs such as Internet Explorer expose the server to security risks."

    This has been addressed by NoOneInParticular [slashdot.org], so I won't rehash it.

  • by mihalis ( 28146 ) on Friday October 22, 2004 @03:52PM (#10602468) Homepage
    "Furthermore, we should see more successful attacks against Apache than against IIS, since the implication of the myth is that the problem is one of numbers, not vulnerabilities.

    Yet this is precisely the opposite of what we find, historically."

    Running through 3GB of archived log files, from Apache running on 2003 Enterprise Server, I have concluded the following:

    54% of attacks against IIS (Unicode traversal, buffer overflow, cgi, alternate data streams, etc.)

    46% of attacks against Apache (htpasswd.exe, httpd.conf, .htaccess, some odd batchfile script attacks with args to copy httpd.conf into htdocs, etc.)

    "Precisely the opposite" is hardly the right phrase to use in this situation. Sampling error among different web sites (due to different audiences, traffic rates, etc.) could easily account for the fact that IIS out-edged Apache here.

    As for the *successful* part of the author's claim, there was a 0% success rate across all queries directed at servers I either have access to logs on, or directly control.

    Sorry, your statistical sample is not comparable. You quote Petreley discussing successful attacks, then you provide some figures about attacks on your machines, and then point out that none of them were successful. So, you aren't actually telling us anything about successful attacks, since you haven't seen any.

  • Re:enterprise 03 (Score:2, Insightful)

    by ergo98 ( 9391 ) on Friday October 22, 2004 @04:06PM (#10602677) Homepage Journal
    That's sort of the point. You have to reboot a Windows server more often. If rebooting once a month or so is acceptable (see Murphy's Law for schedule), then that's fine.

    But that's not the point - there is an implication that it is instability, i.e. uncontrolled downtime, when in reality it is controlled downtime (well accommodating the fact that sometimes security patches need to be installed relatively quickly). A controlled reboot of your server at 3 in the morning when all of your employees are at home is absolutely nothing like having your server crash at 10:00am. It is rhetorical hyperbole comparing them.

    Of course for web applications this should be an entirely moot point - web apps with any requirement for reliability should be running in a cluster or network load balance arrangement (fully supported by .NET for shared session), both of which Windows 2003 fully supports out of the box. In that case, with multiple balanced servers, you can freely patch any of them (or deal with failed hardware) with minimal or no customer impact -- maybe slightly slower responses with a smaller cluster.
  • by flossie ( 135232 ) on Friday October 22, 2004 @04:22PM (#10602882) Homepage
    You can remove all traces of Konqueror, not just the launcher but all the HTML rendering and stuff, without breaking KDE? Can you have KDE without any web browser components?

    I don't use KDE so I can't answer that for certain, but I would be very surprised if you couldn't. It is certainly possible to remove all traces of a web browser from the alternative desktop environment: GNOME.

    Then again, why would you even want to run KDE or GNOME on a server? You can have a fully functional, graphical GNU/Linux machine without running those extra desktop applications.

    Of course, for a server, there is probably no need to run any graphical stuff at all. It is perfectly possible (and common) to have a GNU/Linux server without installing X11 - all configuration can be performed via the command line, or remotely if you prefer a graphical configuration interface.

  • by vsprintf ( 579676 ) on Friday October 22, 2004 @04:23PM (#10602887)

    Big Oil was able to buy corrupt officials and maintain their decidedly un-capitalist ways. Will Microsoft?

    Was that a rhetorical question, or did you miss the DoJ's dance with Microsoft?

  • by agrippa_cash ( 590103 ) on Friday October 22, 2004 @04:46PM (#10603210) Homepage
    In my experience, RUNAS sometimes (about 60% of the time) just doesn't work. Not that this excuses running as Admin, but if 'ease of use' counts in Windows' favor, then it is entirely fair to point out this flaw.
  • by avgjoe62 ( 558860 ) on Friday October 22, 2004 @05:25PM (#10603630)
    If the exploit is in a component that runs as a limited user, you'll need an additional local root exploit to get System rights - same as in any other OS.

    But the problem is (if you read the article...) that there are far more processes in Windows that run with privilege than those that are restricted.

    To quote TFA:

    RPCs are potential security risks because they are designed to let other computers somewhere on a network to tell your computer what to do. Whenever someone discovers a flaw in an RPC-enabled program, there is the potential for someone with a network-connected computer to exploit the flaw in order to tell your computer what to do. Unfortunately, Windows users cannot disable RPC because Windows depends upon it, even if your computer is not connected to a network. Many Windows services are simply designed that way. In some cases, you can block an RPC port at your firewall, but Windows often depends so heavily on RPC mechanisms for basic functions that this is not always possible. Ironically, some of the most serious vulnerabilities in Windows Server 2003 (see table in section below) are due to flaws in the Windows RPC functions themselves, rather than the applications that use them. The most common way to exploit an RPC-related vulnerability is to attack the service that uses RPC, not RPC itself.

    It is important to note that RPCs are not always necessary, which makes it all the more mysterious as to why Microsoft indiscriminately relies on them.

    THAT is what makes Windows different from any other OS and thus more vulnerable.
  • Re:No (Score:5, Insightful)

    by slipstick ( 579587 ) on Friday October 22, 2004 @06:25PM (#10604287)
    His point is irrespective of the version of Apache.

    His point is that Apache is the "most popular" web server (which it is), and yet is less likely to be attacked. This argument was in response to the idea that Windows is not more vulnerable, merely the most prevalent. His counter-example of Apache was used to point out that popularity does not directly lead to more attacks.

    Thus it does not follow that, as Linux grows in popularity, the number of successful attacks against it will increase disproportionately.
  • Re:biased? (Score:3, Insightful)

    by Spoing ( 152917 ) on Saturday October 23, 2004 @12:21AM (#10606674) Homepage
    Thanks for the feedback. I had used Recovery Console before, though being reminded of it is a good thing.

    There is a qualitative difference between Unix-like systems and Windows on the issues I mentioned. Details are below...

      1. Boot without a GUI.

      That's too easy. Ever heard of the Recovery Console?

    Not counting GUI-intensive applications, Windows does not work completely when booted into the Recovery Console. Except for a limited set of functions, Windows is crippled without a GUI, and most programs (utilities, servers, and applications) require a GUI to function properly, or at minimum to be configured.

    Unix/Linux/BSD/... don't need a local display or graphics at all. If you want to run without a graphics card, you can and either skip graphics or export the display buffer to another computer. Most server apps can be monitored remotely and can use either a shell or web page for control.

      1. Change libraries that are currently in use while the system is running.

      That is impossible. Even to the extent that it is possible on Windows (you can do it if you try hard enough), it's a very bad idea. If a process doesn't load all of its libraries at startup, you can end up with mismatched binaries. That's a great recipe for data loss and other really bad things.

    Windows locks files on use. Unix/Linux/BSD/... use inodes to allow different processes to see the file system in a different way. (Search for inodes if this sounds interesting to you.)

    For example, if I'm editing the file 'index.html' in one program, I can delete it in another program. The editor neither cares nor knows that the file has been deleted... because to the editor, index.html has not been deleted! You can even download a file in one program and, while the file is being transferred, move it to another directory.
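
    This unlink-while-open behavior is easy to demonstrate. A minimal sketch (POSIX systems only; Windows would refuse the delete while the file is open):

```python
import os
import tempfile

# POSIX unlink semantics: the inode survives as long as an open file
# descriptor still references it, even after the name is deleted.
fd, path = tempfile.mkstemp()
os.write(fd, b"still here")

os.unlink(path)                     # "delete" the file while it is open
assert not os.path.exists(path)     # the name is gone from the directory...

os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 100)             # ...but the data is still readable
os.close(fd)                        # only now is the inode actually freed
```

    The editor in the example above is in the same position as the open file descriptor here: its view of the file is anchored to the inode, not the name.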

    I regularly replace system libraries, application libraries, whole applications, the GUI, system tools, and the kernel while using the system. Rarely is it an issue, though with the kernel, if the whole thing has been replaced, a reboot is required before any new program can use it. If only a module is added or removed, no reboot is usually required.

    For example, if I update the desktop (KDE or Gnome) or the graphics subsystem (X), I usually don't bother shutting anything down or logging off right away. After a few hours *if* I encounter any oddities (say, when opening up a new application) I might be annoyed enough to log out and log back in to correct the problem...though it's such a trivial thing that I usually don't bother till I notice a few graphical glitches. The same can be done with a running server process...because the upgrades understand how to handle a running process safely and they do the right thing such as restarting the service after the files have been updated.
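
    The gap between the file on disk and the code in memory is also why that restart step exists at all. A minimal sketch (the module name `mylib` is made up for illustration): a loaded module keeps running its old code after the file is replaced, until something explicitly reloads it - which is effectively what a package upgrade's "restart the service" step does.

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True        # keep the demo independent of .pyc caching

# Create a throwaway "library" on disk and import it.
tmp = tempfile.mkdtemp()
mod_path = pathlib.Path(tmp) / "mylib.py"   # hypothetical library name
mod_path.write_text("VERSION = 1\n")
sys.path.insert(0, tmp)

import mylib
old = mylib.VERSION                    # the copy loaded into memory

mod_path.write_text("VERSION = 2\n")   # "upgrade" the file in place
still_old = mylib.VERSION              # unchanged: the process still runs old code

importlib.reload(mylib)                # the "restart the service" step
new = mylib.VERSION                    # now the upgraded code is live
```

    The same logic applies to a shared library under a long-running daemon: replacing the file is harmless, and the post-install restart is what brings the new code into play.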

      1. So, given any hardware you wish, how many different and unique users can use 1 NT 3.x or 4.x system at the same time?

      I believe only one GUI session can be active at a time, but processes from any number of users can be running. (in fact, you can have processes running as different users on the same GUI session, but I would assume that's the same "physical user") You can play solitaire on a web server. Presumably not as the same user. I'm not the OP, and I don't really know much about this, so I'm not really gonna try to defend it properly.

    No problem.

    Unix/... supports as many simultaneous users as system resources and the configuration allow. By default, pressing Ctrl-Alt-F1/F2/... switches virtual terminals on Linux. Each one can allow a different user to log in. Running nested X allows you to log in as another user in another X session. Logging in remotely to a Unix system allows you to use the system as if it were your local one. It is all built in and depends only on whether it is enabled or disabled in the configuration -- no special server software like terminal services is required.

    Take a look here for one example of this. [workspot.com]

  • by agallagh42 ( 301559 ) on Saturday October 23, 2004 @01:25AM (#10606911) Homepage
    "Confusing server room setup.
    20 server boxes, 20 monitors, 20 keyboards, 20 mice. Or using expensive and error-prone KVM setups, which may only reduce the clutter by a third or so in practice.
    More cable clutter, more power requirements, reduced efficiency."


    Geez. How long has it been since you've touched a windows server? Every one of the benefits you listed for Linux is not only possible on windows, it's common practice. It's very easy to run a windows server totally headless. The GUI will be there if you need it, but 99% of the time, you don't.

    Even my personal server at home, running W2K3, hasn't had a monitor connected to it for over a year. Everything you would ever want to do can be done remotely. You even have the choice of using Remote Desktop for the nice warm fuzzy GUI, or you can go totally command line if that's what turns your crank.

    Yes, every single function that you can perform in the GUI can also be performed from the command line. Remote access security can be had any number of ways, with or without spending money on software. Windows supports IPSec natively, as well as several flavours of VPN, or there are even several free (as in beer and/or speech) SSH products available for it.

    Basically, quit knocking MS for the shortcomings of NT4. That's ancient history and they've made giant leaps forward in quality and reliability. If you want to knock them for their business practices, or just general evilness, go right ahead, but the argument that windows is crap just doesn't cut it anymore.
  • Re:So... (Score:4, Insightful)

    by isorox ( 205688 ) on Saturday October 23, 2004 @07:13AM (#10607906) Homepage Journal
    And neither do their windows admins. PHB's think that Windows servers must be easy to admin as they look like Windows desktops. Of course in reality they aren't.

Software production is assumed to be a line function, but it is run like a staff function. -- Paul Licker

Working...