Security

5,198 Software Flaws Found in 2005

An anonymous reader writes "Security researchers uncovered nearly 5,200 software vulnerabilities in 2005, almost 40 percent more than the number discovered in 2004, according to Washingtonpost.com. From the article: 'According to US-CERT...researchers found 812 flaws in the Windows operating system, 2,328 problems in various versions of the Unix/Linux operating systems (Mac included). An additional 2,058 flaws affected multiple operating systems.'"
  • by Ckwop ( 707653 ) * on Saturday December 31, 2005 @09:34AM (#14370041) Homepage

    There are two ways to look at this. I would say it is quite unlikely that the quality of software with respect to security went down in 2005. Computer security now has such a high profile that software houses across the world are spending a great deal of money trying to provide better security.

    If you accept that security quality has not gone down, then you must conclude that our ability to detect vulnerabilities is getting better. This is universally a good thing. Every vulnerability the "good guys" find before the "bad guys" do is one we can have a fix for before the bad guys take over our systems.

    Then there's the other side of these figures. That's a lot of vulnerabilities. Now, fair enough, not all vulnerabilities are created equal, but I'd bet at least 10% are serious enough to get your system taken over if you're not careful. That's a lot of ways to break into my system, and it's a lot of work to make sure you're not vulnerable.

    We have such a long way to go. For example, if PHP just followed Microsoft's example and put an SQL-injection and XSS attack filter on information passed to web pages, we could close a serious hole in many web applications. I've not looked at Ruby on Rails, but I bet it fails this test too.

    For God's sake, if you're not writing an operating system you have no business using C. Read my lips: YOU CAN'T WRITE SECURE C. Not now, not after 20 thousand hours of training, not ever. Sure, it's possible to write secure C in theory, but the difference between theory and practice is that in theory they're the same and in practice they are not. In practice you have deadlines; in practice you have people on the team with less security training than others; in practice you have developers who have just had children and don't get a lot of sleep. In practice, people make mistakes. Code reviews may help, but they won't catch everything. If you write your software in C you're doomed to silly security bugs. If you want to remove most of the worry about overflows, use a language that rules them out.

    Another thing: why should code we execute on our computers run with the maximum privilege set of the user who's running it? Suppose my program periodically checks an HTTP page against an MD5 hash and sends an SMS through an internet-based SMS gateway. Why should that program, if it wants to, be allowed to access the disk? I don't know about Java, but C# has a set of attributes that can control this type of behaviour. Really, we should be forcing declarations at the language level about what permissions each method of the program needs - the default, of course, being none.

    Simon.
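As a sketch of the output-filter idea discussed above (this is an illustration in Java, not PHP's or Microsoft's actual filter; the class and method names are hypothetical), a minimal HTML-escaping routine for data passed to web pages might look like this:

```java
// Hypothetical sketch: escape the characters that let injected markup
// execute in a browser, so untrusted input is rendered inert.
public class OutputFilter {
    public static String escapeHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A script tag comes out as harmless text, not executable markup.
        System.out.println(escapeHtml("<script>alert('xss')</script>"));
    }
}
```

A framework applying something like this by default to every value written into a page would close the reflected-XSS hole the comment describes; SQL injection needs the separate (and analogous) fix of parameterized queries.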

  • Re:Axe Grinding (Score:5, Insightful)

    by ginotech ( 816751 ) on Saturday December 31, 2005 @09:36AM (#14370049)
    That is messed up. You're right, simply updating a vulnerability doesn't make it a new one. You know why Linux and co. have more updated ones, though? Because people can actually see the bugs in the code!
  • Re:Hooray \o/ (Score:2, Insightful)

    by spike1 ( 675478 ) on Saturday December 31, 2005 @09:47AM (#14370080)
    So, where did you read that Windows is more secure all of a sudden?

    You didn't take those figures at face value, did you?
    Those figures said they were for Linux AND other unix variants like OSX...

    So, 2500 between OSX, OpenBSD, NetBSD, FreeBSD, Linux, Solaris, etc... (not to mention that the flaws listed for the different Linux distributions probably got duplicated across several distros)

    versus 900 for windows
    (I'm rounding up)
    Was this 900 split between 95/98/98SE/ME/2000/XP/Vista?
    or just for XP?

    There are lies, damned lies, and statistics.

  • Software Bugs (Score:1, Insightful)

    by Ice Wewe ( 936718 ) on Saturday December 31, 2005 @10:14AM (#14370143)
    If I were you, I'd keep my eyes out for a Windows logo on that web site. *cough*kickbacks*cough* From my experience, if Microsoft doesn't have more bugs, then their software sure is shitty. I mean, Firefox is open source, IE is not. Which is more secure, doesn't crash as much, and has nifty plug-ins? If you said IE, you're living in the past. Sure, open source is going to have more bugs; it has hundreds of thousands, if not millions, of people contributing code. Of course not all of them are going to get everything perfect. Now compare how many people Microsoft has working on bugs. A few thousand at best. Now you see the reality of this. Linux is going to have more bugs simply because it has more software. Microsoft is going to take longer to patch their bugs because they only have a fraction of the people working on it.
  • by penguin-collective ( 932038 ) on Saturday December 31, 2005 @10:17AM (#14370149)
    If we listened to just the media, you would have thought Windows had thousands and the others only a few dozen. I promise I'm not trolling, but do those numbers make anyone on this site stop and re-think their stance?

    No, it doesn't. First of all, there are a dozen different versions of UNIX and Linux, each with their own set of flaws. MacOS is an almost entirely different system except for a kernel compatibility module and a bunch of command line utilities. Second, the number of bugs discovered or number of bugs fixed tells you little about the security of an operating system. Individual bugs have very different consequences for system security.

    I find myself defending MS a lot on this site, and it's nice to have some numbers from a respected neutral organization to debate some of you guys with. I'm sure after this piece they will be re-classified as MS zealots, but what can ya do.

    If you are using numbers like these to make an argument that MS products are "more secure", the zealot is you. But, then, you already admitted as much.
  • by bogie ( 31020 ) on Saturday December 31, 2005 @10:22AM (#14370156) Journal
    Because I know I just woke up, but that CERT page is listing APPLICATION FLAWS and NOT OS flaws.

    Is a flaw in "Gold FTP explorer" or Photoshop a Windows OS flaw?

    Am I the only one seeing this?
  • by penguin-collective ( 932038 ) on Saturday December 31, 2005 @10:27AM (#14370164)
    Modern unmanaged C++ is fine (STL containers instead of arrays, RAII, etc.),

    Modern unmanaged C++ is NOT fine; STL permits many kinds of bugs that are analogous to buffer overflows. Furthermore, modern software systems are composed of many different modules, and just because you happen to be careful in your modules doesn't mean others are careful in theirs. Finally, without full garbage collection, you cannot have full runtime safety.

    but I often wonder why people still write in C at all, particularly when it comes to Open Source software.

    People prefer C to C++ because, for the small increase in safety that C++ gives, it's far too complicated a language. People don't use languages other than C/C++ because those languages interoperate poorly with existing C/C++-based libraries (this is C/C++'s fault), tend to have bloated runtimes, and have only a tiny user community. And, yes, many people don't even realize that there is a problem.

    We are not the bearded heroes of the 70s - it's time to write in a modern language.

    The bearded heroes of the 70s actually knew better. Back in the 1970's and 1980's, C was of no significance. When people were using HLLs back then, those languages were generally a lot safer than C. The rise of C was a historical accident, related to the rise of BSD UNIX and microcomputers.

    But, yes, I share your sentiment: it would be good to see security bugs by language choice. And I'll give you this much: C++ is an improvement over C, but it's not a solution.
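To make the safe-language point in the exchange above concrete, here is a small Java sketch (class and method names are illustrative, not from any library discussed here): an out-of-range array index raises a catchable exception instead of silently reading or writing neighbouring memory, which is the failure mode behind C buffer overflows.

```java
// Illustration: in a bounds-checked language, a bad index is a contained,
// catchable fault rather than a memory corruption an attacker can exploit.
public class BoundsDemo {
    public static String readAt(int[] data, int index) {
        try {
            return "value=" + data[index];
        } catch (ArrayIndexOutOfBoundsException e) {
            // The runtime detected the bad access; nothing was overwritten.
            return "rejected index " + index;
        }
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30};
        System.out.println(readAt(data, 1)); // in range
        System.out.println(readAt(data, 5)); // out of range: caught, not exploited
    }
}
```

The equivalent mistake in C (`data[5]` on a three-element array) compiles and runs, quietly touching whatever happens to sit past the array.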
  • by jc42 ( 318812 ) on Saturday December 31, 2005 @10:46AM (#14370212) Homepage Journal
    I often wonder why people still write in C at all, ...

    Well, my last big project was written almost entirely in C for the simple reason that that's what the client wanted. We did a lot of prototyping in Perl and Python, but that code wasn't acceptable for delivery; we had to rewrite all the production code in C or it wouldn't be accepted.

    Much of the explanation was that the client had accepted C++ and java in earlier projects, and they were disasters for all the familiar reasons. They were determined that this wouldn't happen again, so they went with a "proven" language with a track record of use in major successful systems.

    Similarly, I have a couple of friends who recently did a project in Cobol. They hated it, but they wanted to get paid, and that's what the client would accept.

    In the Real World[TM], the decision about which language to use is very often made by managers who aren't programmers and don't have a clue about the real issues. So they make decisions based on things that they can understand and measure.
  • by lasindi ( 770329 ) on Saturday December 31, 2005 @10:50AM (#14370221) Homepage
    "researchers found 812 flaws in the Windows operating system, 2,328 problems in various versions of the Unix/Linux operating systems (Mac included). "

    If we listened to just the media, you would have thought Windows had thousands and the others only a few dozen. I promise I'm not trolling, but do those numbers make anyone on this site stop and re-think their stance? We all saw the numbers that put Firefox with more holes than IE earlier this year too. Could MS be doing a better job, but just getting hammered in the press (who are mostly Apple users, by the way)? MS holes get lots of press while other operating systems get a free pass.


    If you look at the first post [slashdot.org], you'll see that the real count of vulnerabilities isn't so shocking after all:

    Windows 671
    UNIX/Linux 891
    Multiple 1512


    Also, when you consider the fact that "UNIX/Linux" includes many different operating systems (e.g., GNU/Linux, *BSD, OS X, etc.), you can't give any one Unix operating system the blame. Remember that although some code is shared between projects, GNU/Linux and the *BSD are more or less completely different code bases. In any case, the simple counts of vulnerabilities don't take into account the severity of each, so the real winner is even more ambiguous.

    I find myself defending MS a lot on this site, and it's nice to have some numbers from a respected neutral organization to debate some of you guys with. I'm sure after this piece they will be re-classified as MS zealots, but what can ya do.

    While Brian Krebs might be tainted by his misrepresentation (see the post I got the numbers from), I can't imagine anyone here claiming that US-CERT is somehow a bunch of MS zealots. In fairness to Microsoft, they've definitely come a long way with SP2, and I don't feel nearly as vulnerable when using an SP2 machine as I did with previous Windows versions (though the recent WMF hole makes me a bit more worried). But they're still nowhere near the point where I would switch from Linux.
  • Re:Axe Grinding (Score:3, Insightful)

    by pintomp3 ( 882811 ) on Saturday December 31, 2005 @11:03AM (#14370263)
    TFA keeps talking about vulnerabilities and flaws interchangeably. A flaw doesn't mean a vulnerability. Although I believe updates should be included in the tally, the tally is trivial: few of the unix/linux flaws make your computer vulnerable compared to the windows ones. That is more a design issue, though.
  • by wnknisely ( 51017 ) <wnknisely AT gmail DOT com> on Saturday December 31, 2005 @11:06AM (#14370274) Homepage Journal
    Counter nugget:

    Count the number of IIS exploits vs Apache and correlate to the number of installations. If your logic held, there should be many, many more exploits out there for Apache.
  • by imroy ( 755 ) <imroykun@gmail.com> on Saturday December 31, 2005 @11:39AM (#14370380) Homepage Journal

    So why don't web servers count when 'entire operating systems' do? Web servers are always connected to some sort of network, if not the Internet. They wouldn't be much use otherwise. They often have all sorts of modules/plugins loaded, some third-party. They often have to run all sorts of interpreted languages (Perl, Python, PHP, ASP, etc.) with scripts written by all sorts of people. They can also run other executables on the host system. They often have to access a database, either on the same machine or over the network. They often send email and even receive it (e.g. confirmation emails).

    Most importantly, they're often very public machines (not including intranets). And they can be holding (or have access to) very valuable data, e.g. banking details, email addresses, passwords. Web servers may be out-numbered by desktop machines, but they're still very attractive targets.

    So, would you like to have another try at explaining why Apache HTTP server has been the most used web server [netcraft.com] for almost ten years now, but is not the most attacked?

  • Re:Axe Grinding (Score:2, Insightful)

    by TallMatthew ( 919136 ) on Saturday December 31, 2005 @11:57AM (#14370448)
    Evaluating the security of an operating system based on the number of CERT advisories is asinine, err, unreasonable. There's no debate here.

    Windows is more secure than it used to be, and you can absolutely make a Windows box more secure than a Linux box, but come on. Any OS that requires (or at least strongly encourages) all applications to be run with full admin privileges is de facto less secure. All these IE/Outlook exploits wouldn't do squat if Windows was Unix-like in that regard. I don't give my grandma root. Redmond does it by default.

    If M$soft could figure out a way to separate privs, the Net would be a better place. Their OS architecture, especially that God-awful anything-goes "read me write me 69 me" registry, is killing them.

  • by ceoyoyo ( 59147 ) on Saturday December 31, 2005 @11:58AM (#14370453)
    Well, for the purposes of security, if the runtime library is distributed with the OS then it should be counted as part of the OS. So installed-on-every-copy-of-windows-because-every-program-needs-to-use-me.dll is counted as part of the OS, but funky-library-for-doing-bad-stuff-from-Claria.dll isn't.
  • Unfair (Score:4, Insightful)

    by Kaelthun ( 940330 ) on Saturday December 31, 2005 @12:09PM (#14370515) Homepage
    "The ignorant define themselves." Why is there even a discussion going on about the essence of the word "flaw"? Fact is that this research has not been fair, because all Linux distros, UNIX variants (such as BSD) and Mac are counted as one, and MS Windows as another. You cannot compare the multitude of Linux distros to the single platform of MS Windows. If there had been a tally between, say, Red Hat, Ubuntu, FreeBSD, NetBSD, OpenBSD, Mac OS (I dunno what version it is in atm) and MS Windows, and all stats had been listed separately... that would have been fair and clear. Now it's just a mash of all these stats with just one simple query on it: SELECT bugs FROM stats WHERE os = 'Windows'. They just mashed the rest together and called it "the rest".
  • Re:Axe Grinding (Score:3, Insightful)

    by cbiltcliffe ( 186293 ) on Saturday December 31, 2005 @12:15PM (#14370542) Homepage Journal
    Anyone else got a favorite way of producing misleading bug scores?
    Not so much a misleading bug score, but misleading about bugs nonetheless:

    "All the bad guys know about all the bugs in Linux, because they can see the code. But only Microsoft knows about bugs in Windows, and they fix them before anybody finds out."

    Paraphrased, of course, but pretty much what every Microsoftie analyst says on a daily basis.
  • by Anonymous Brave Guy ( 457657 ) on Saturday December 31, 2005 @12:24PM (#14370580)
    Modern unmanaged C++ is NOT fine; STL permits many kinds of bugs that are analogous to buffer overflows.

    Huh? Granted there are some silly design decisions in the C++ standard library, like making the unchecked indexing use operator[] and the safer, checked version use at() on a std::vector. Still, it's much harder to get things like overruns using the STL, where much code is iterator-based, and harder still to do it in a way that won't be obvious to any remotely competent code reviewer (who will ask why you're not using the safer checked indexing if your loop counts by index instead). What sort of thing did you have in mind?

    Finally, without full garbage collection, you cannot have full runtime safety.

    Garbage collection is an answer to one category of programmer error, and it's far from the most serious. As I often mention in these discussions, I haven't caused a memory leak writing in C++ in years, and I've got automated tools running on the projects concerned just in case to prove it. It's actually quite hard to get a memory leak in C++ if you use basic good programming practices.

    On the other hand, in a language with GC you still have problems with messing up thread safety to cause race conditions, messing up thread synchronisation to cause deadlocks, greedy acquisition/lazy release of resources crippling the performance of your process (or, more commonly, other processes running on the same machine but not under the same run-time framework), SQL injection, and using naive encryption and/or insecure transport protocols. These are all common flaws seen in real programs, and can be just as damaging as any buffer overflow, if not more so.

    And I make this case without, until now, mentioning the IME very real problem that a lot of cheaposoft programmers who grow up relying on GC don't have the same appreciation of low level mechanics as those who don't, which causes them to write unnecessarily naff code with problems of its own.

    At the end of the day, GC is a useful tool for many programming jobs, but it's only a tool, not a silver bullet. It's no substitute for a good programmer who knows what he's doing.
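The race-condition point made above can be sketched in a few lines of Java (class and method names are illustrative): garbage collection does nothing for thread safety, and two threads bumping a plain counter can lose updates, while an atomic counter never does.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustration: GC is irrelevant to races. The plain int counter is shown
// only for contrast - its final value is nondeterministic under contention.
public class RaceDemo {
    static int plain;
    static final AtomicInteger atomic = new AtomicInteger();

    // Two threads each increment both counters 100,000 times;
    // returns the atomic total, which is always 200,000.
    public static int runDemo() {
        plain = 0;
        atomic.set(0);
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                  // unsynchronized: increments can be lost
                atomic.incrementAndGet(); // atomic: never loses an update
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return atomic.get();
    }

    public static void main(String[] args) {
        System.out.println("atomic = " + runDemo()); // always 200000
        System.out.println("plain  = " + plain);     // often less than 200000
    }
}
```

Both versions run in a fully garbage-collected runtime; only the explicit synchronisation primitive makes the difference.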

  • by Liam Slider ( 908600 ) on Saturday December 31, 2005 @12:26PM (#14370584)

    "researchers found 812 flaws in the Windows operating system, 2,328 problems in various versions of the Unix/Linux operating systems (Mac included). "

    If we listened to just the media you would have thought Windows has thousands and the others only had a few dozen. I promise I'm not trolling, but do those numbers stop and make anyone on the site re-think stances? We all saw the numbers that put Firefox with more holes then IE earlier this year too. Could MS be doing a better job, but just getting hammered in the press (who are mostly Apple users by the way)? MS holes get lots of press while other operating systems get a free pass.

    The situation is, as usual, more complicated. As is typical, they are lumping together pretty much every flaw in pretty much every little piece of software that actually runs on Linux/Unix/Mac and is typically included in Linux/Unix distributions and/or Apple's software... vs the flaws found in Microsoft Windows and Microsoft software... Gee, is it any surprise they are finding more flaws outside the Windows camp? Several different operating systems and all their software vs one. Furthermore, it does not address which flaws were critical and which weren't, which were fixed and which weren't, how quickly they were fixed... and which side does better in those regards. Also, it doesn't address the fact that it's easier to find bugs to fix in an open-source environment than in a closed-source one.
  • Re:Unfair (Score:5, Insightful)

    by ComputerizedYoga ( 466024 ) on Saturday December 31, 2005 @01:38PM (#14370898) Homepage
    Thing is, all of those distros use the same software base. Red Hat, Ubuntu, *BSD - they're all host to apache, samba, bind, openssh, php, gcc... they're all essentially the same once you get past package management, the kernel, and the C libraries.

    If you want to count "OS" flaws, you need to remove ALL the third-party apps. That means in linux, you'd JUST be counting the flaws in the kernel and glibc, and in BSD only the core system as well. And those aren't even going to be distro-specific.

    While you're right that it's probably not fair to shove os-x vulns in with the unix/linux category (os-x is its own unique animal and has a lot of things that no other *nix has) I think it is fair to mash together the F/OSS nixes. Or at least to mash together their non-os-specific parts.

    Of course, these comparisons are inherently unfair, if they're used as a metric for "which OS is more secure". That's become something of a moot point. No matter how someone calculates their metrics, someone or another is going to be displeased with their methodology. What's more interesting, and more to the point, is the sheer number of vulns found across the board, and that's the whole point of the story.
  • by Decaff ( 42676 ) on Saturday December 31, 2005 @08:04PM (#14372407)
    At the end of the day, GC is a useful tool for many programming jobs, but it's only a tool, not a silver bullet. It's no substitute for a good programmer who knows what he's doing.

    You write well on this matter, but I think the evidence really is to the contrary. Hundreds of millions (if not more) lines of code have now been written in languages that use garbage collection. Some of these languages are high-performance and some are used for real-time work, and they all work fine.

    Garbage collection is now routinely used even for multi-threaded languages (perhaps the best example is high-performance Java application servers like Tomcat).

    So I am afraid I do think it is a silver bullet, and manual memory management will soon look like part of history except for the most specialised uses.
  • by lordcorusa ( 591938 ) on Sunday January 01, 2006 @02:01AM (#14373262)
    I fully agree with you that the Java language is capable of efficient real-time use. However, I think that everyone needs to be perfectly crystal clear that the Java environment to which most Java developers are accustomed is not.

    Hard real-time Java programming is vastly different from normal Java programming. Most of the standard Java class libraries are gone. Exception handling is gone. Automatic garbage collection is gone. Almost all third party class libraries are gone. Coding hard real-time apps in Java feels very much like coding C apps from scratch, even if you don't have to manually allocate and deallocate blocks of memory. From the article:

    JRTK, a hard-real-time mission-critical subset of the Real-Time Specification for Java (RTSJ) as defined by the Java Community Process, includes many efficiencies over standard Java offerings. No garbage collection is used on objects in the real-time heap. A standard subset of Java libraries is restricted with each library's time and memory resources clearly defined. Partitioning clearly separates soft real-time components from hard-real-time components to ensure hard-real-time schedules as well as program reliability and robustness.


    I guess my point is this: hard real-time Java is not the Java with which 99.9% of so-called Java developers are familiar. Choosing Java over C or Ada for a hard real-time system will not enable you to hire lesser programmers, nor will it significantly increase your pool of eligible employees. No matter which language you use, to do hard real-time systems correctly and efficiently you must hire only top-tier programmers. Top-tier programmers can make use of any relevant language. Hire any lesser programmers and they will screw up, regardless of language choice.
  • by penguin-collective ( 932038 ) on Sunday January 01, 2006 @05:11AM (#14373533)
    At the end of the day, GC is a useful tool for many programming jobs, but it's only a tool, not a silver bullet. It's no substitute for a good programmer who knows what he's doing.

    Perhaps your problem is that you don't understand what a "safe language" is. A safe language is a language that makes guarantees about type errors, error detection, and fault isolation. A language with dynamic memory allocation needs to have a GC in order to be safe. A safe language does not make guarantees about security or parallelism or race conditions, it doesn't necessarily make programming any easier, and it doesn't necessarily help the programmer avoid errors.

    And I make this case without, until now, mentioning the IME very real problem that a lot of cheaposoft programmers who grow up relying on GC don't have the same appreciation of low level mechanics as those who don't,

    No, the problem is that there are too many people like you in this industry, people who don't even understand what a basic concept like a "safe language" means.
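A minimal illustration of the "safe language" guarantee described in the comment above (the class and method names are hypothetical): a bad downcast is detected at runtime and surfaces as an error, rather than the value being silently reinterpreted as the wrong type the way a bogus pointer cast in C would be.

```java
// Illustration: type errors are guaranteed to be detected, which is part of
// what "safe language" means - it is not a guarantee of overall security.
public class TypeSafetyDemo {
    public static String describe(Object o) {
        try {
            String s = (String) o; // cast is checked at runtime
            return "string: " + s;
        } catch (ClassCastException e) {
            // The fault is isolated; no memory was misinterpreted.
            return "type error detected";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("hello"));
        System.out.println(describe(Integer.valueOf(42)));
    }
}
```

As the comment argues, this guarantee rules out one class of fault but says nothing about race conditions, injection flaws, or logic errors.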
  • by lordcorusa ( 591938 ) on Tuesday January 03, 2006 @07:54PM (#14388394)
    Note: When I refer to language, I mean the syntax and semantics and primitive types. When I refer to environment, I refer to the language plus any standard libraries; I am not referring to a particular VM. I think my definitions may be the source of some of the confusion, as Sun appears to want everyone to think that the java.* and javax.* libraries are part of the language. As far as I am concerned, they are just part of the standard environment. (I bend my own definition a little to include packages under the java.lang library as being part of the language, because they are object-oriented wrappers for the primitive types.)

    Your original post mentioned work by Aonix on hard real-time Java. My post referred to Java features that are not able to be used in hard real-time applications. Here is my reference; granted it's a year old, but I haven't found anything newer to contradict it. The author is (or was at the time of writing) in charge of the Aonix real-time JVM.

    http://www.stsc.hill.af.mil/crosstalk/2004/12/0412Nilsen.html [af.mil]

    Note the table of differences between traditional Java, soft real-time Java, hard real-time Java, and safety-critical Java. As you can see, hard real-time and safety-critical Java are highly restricted compared to traditional Java. Safety-critical is a subset of hard real-time that is even more strict; it requires formal proofs of safety and therefore throws out most Java features and standard libraries, leaving only the core syntax and single-threaded semantics of the language. (Well, technically, safety-critical allows multiple threads, but they cannot overlap, so it's like single-threaded programming for purposes of proofs.) Safety-critical requirements are defined by FAA specification DO-178B, and several JVM suppliers are working on DO-178B-compliant JVMs.

    Soft real-time applications can use all (or almost all) of the Java standard libraries, whereas hard real-time applications can use only a restricted subset of the standard libraries and safety-critical applications are restricted to an even smaller subset of the standard libraries. Therefore, one great advantage of the Java environment (remember my definition of environment), its extensive standard library, is neutralized with respect to hard real-time and safety-critical applications.

    Furthermore, almost all third party libraries depend on standard libraries that are forbidden under hard real-time or safety critical constraints. Therefore, these libraries are also forbidden and another great advantage of the Java environment (remember my definition of environment), the extensive field of third-party libraries, is lost to hard real-time and safety-critical applications.

    The Real-Time Specification for Java defines a set of library calls and semantics which, when implemented within a general-purpose Java virtual machine.... [snip] In fact, Sun demonstrated a real-time Java application running alongside non-real-time applications on the same VM.

    Note the emphasis on the word "within". Applications that implement real-time features get the JVM+RTS. Applications that do not implement real-time features fall through to use the traditional JVM. However, non-real-time applications do not automatically become real-time applications simply by being run on Java RTS. Nothing here contradicts my original assertions.

    I'd like to know more about the real-time application they demonstrated. What kind of real-time application was it: soft or hard? Did you see the source code of the real-time application? If so, what libraries did it use? The link you provided didn't provide much concrete information. However, it did provide a link to Sun's official Java RTS page:

    http://java.sun.com/j2se/realtime/ [sun.com]

    I am not sure how they can claim conformance for soft real-time, because they do not yet provide a real-ti
