Security

X-Force Changes Vulnerability Disclosure Policy 98

BitHive writes "ISS has changed its policy for announcing security vulnerabilities. The new guidelines will give vendors thirty days to come up with a fix before disclosure is made, though there are a number of exceptions that can prompt faster disclosure. From the PC World article, these are: 'The vendor issues a patch or announcement; an in-depth discussion of the problem occurs on a public mailing list; active exploitation of any form of the vulnerability occurs on the Internet; ISS receives reliable evidence that a vulnerability is in the wild; the media reports the vulnerability; or the vendor is unresponsive.'"
This discussion has been archived. No new comments can be posted.


  • $40 billion (Score:2, Funny)

    by neurostar ( 578917 )

    No wonder we spent $40 billion [slashdot.org] on ISS! [iss.net]

    They needed to research and develop their policies.

    Whoops.... wrong ISS

  • Bad idea (Score:5, Insightful)

    by The Terrorists ( 619137 ) on Wednesday December 04, 2002 @10:33AM (#4810317)
    If you want to have your security reporters in cahoots with the corporations that have the holes, go right ahead. This opens the door to massive corruption if insecure firms pay off security reporters. Or, the government could stop a report permanently if it's deemed a security risk. The threat of disclosure is the only enforcement mechanism for getting these security breaches fixed.

    • Re:Bad idea (Score:2, Interesting)

      by dakers27 ( 631152 )
      Right on. What if holes get patched more slowly, or not at all, because they are "a threat to national security"?
    • The new policy basically says "We'll keep quiet for 30 days... but if anybody else tells the public first, there's no point in keeping it secret anymore so away we go."
    • That's ridiculous. (Score:1, Insightful)

      by Anonymous Coward
      This opens the door to massive corruption if insecure firms pay off security reporters.

      The same could (should!) be said about the police. Should we abolish policemen?

      Anarchy is a better answer than corporation-cum-government forced secrecy, but it's still uncivilized. It should be someone's job to tread that tricky middle ground where the vulnerability is not irresponsibly publicized, but the vendors of the insecure software are not allowed to unreasonably suppress the details of the vulnerability. In other words, someone to maintain the threat of publicity just long enough to force the vendor to patch the wares as fast as possible, but not at the expense of end users everywhere.

      Sounds like the ISS is stepping up to the plate and doing just that.
    • Re: Bad idea (Score:5, Insightful)

      by invi ( 198857 ) on Wednesday December 04, 2002 @10:45AM (#4810406) Homepage
      Come on?! If ISS does not document a security issue in time, somebody else will ... and therefore ISS' credibility will suffer over time. I'm not sure if I see the danger of corruption here.

      Personally, I think 30 days is a good time span for letting software companies fix their code. On the other hand, why wait 30 days until mentioning the vulnerability? ISS could simply announce that there *is* a problem with a given product without going into the details ("buffer overflow in Bind, tracking number #25521, details will be published December 16th 2002"). So, if your business runs a vulnerable piece of software which is not critical to your operation, you can disable the service until a patch is available. If the software is critical, it's up to you to take the risk.
      • Re: Bad idea (Score:4, Insightful)

        by WPIDalamar ( 122110 ) on Wednesday December 04, 2002 @11:22AM (#4810705) Homepage
        but then that gets all the baddies looking for the hole. It's a lot easier to find something if you know it exists. Without details, the good guys don't know exactly what to do to fix/work around the hole. Especially if the software IS critical.

        • Hear hear.

          There has to be some balance. If a vulnerability has existed for months or years without known exploits, the discoverer must consider that there is a high likelihood that even the slowest vendor could fix it before any black-hat re-discovers and exploits it. If that's the case, it is irresponsible to disclose it without giving the vendor -- even slow, crap, poor-at-bugfixing vendors -- a reasonable window to fix it.

          I'm thinking particularly of the GreyMagic disclosures of cached object/XSS vulnerabilities here: As far as I know, they existed for around 18 months without anyone of any hat colour knowing, then GreyMagic unilaterally decided that 93% of internet users deserve to be rooted.

      • I'm not sure if I see the danger of corruption here.


        I think the point was that this kind of relationship being established between the enforcer and the enforcee can easily lead to corruption.

        Not that this was the end of security issue reporting as we know it.

        And I agree. What do we call it when the Police have any kind of "relationship" with the criminals?

        Corruption.
    • ISS Paid Off? (Score:5, Insightful)

      by Apathy costs bills ( 629778 ) on Wednesday December 04, 2002 @10:56AM (#4810504) Homepage Journal
      This opens the door to massive corruption if insecure firms pay off security reporters.

      Your argument is that this open change in their disclosure policy is a slippery slope to behind-the-scenes cash-for-silence deals. In my mind, the threat of such deals is not influenced whatsoever by the open and stated policy of ISS but rather by their corporate ethics. ISS and other security companies which deal with the government gain vast swaths of revenue due to the fact that they retain their integrity by laying out rules and following them. A single deal of the type that you mention would put the profits of the entire company and all its public shareholders at risk. In short, I believe your hypothesis is unfounded.
      • Re:ISS Paid Off? (Score:3, Insightful)

        by Chazmyrr ( 145612 )
        Corporate Ethics? Corporations don't have ethics. People have ethics. Or don't. If some executives decide they can cut a deal, sell their options, and get out before the shoe drops, it doesn't really matter what the shareholders might think about it. Think it's just Tyco, Enron, and Worldcom where executives put the profits of the entire company and all its public shareholders at risk? In short, I believe your hypothesis is unfounded and naive.
  • by szquirrel ( 140575 ) on Wednesday December 04, 2002 @10:35AM (#4810327) Homepage
    What were their old ones? In most circumstances 30 days notice to the vendor is the only responsible way to go. Most companies are responsible enough to turn around a fix in that time.

    BTW, the ISS press release is here [iss.net].
    • by LostCluster ( 625375 ) on Wednesday December 04, 2002 @10:46AM (#4810414)
      The change is that if either the mainstream media starts spreading (usually inaccurate) info about the problem, or there's already an exploit in the wild, the 30-day period goes right out the window as pointless. ISS isn't gonna keep it a secret if somebody else is already spilling...
      • The key phrase is "if somebody else is spilling." What if that somebody is someone like, oh, say GOBBLES, a skriptkiddie with as tenuous a grip on written communication as one can have and still be considered almost literate? How credible need the folks talking about a vulnerability be before ISS will release info?
  • by oni ( 41625 ) on Wednesday December 04, 2002 @10:35AM (#4810337) Homepage
    Their criteria sound pretty reasonable to me. They've tried to reach a balance between the right of sysadmins to know their systems are vulnerable and the responsibility that comes with telling script kiddies about exploits before they've been fixed.
  • This appears to be a well-thought-out and practical approach to dealing with security problems in a responsible fashion. Kudos to ISS for the new policy, and for their recent successful docking with the Space Shuttle. Great job, guys!
  • by FreeLinux ( 555387 ) on Wednesday December 04, 2002 @10:38AM (#4810355)
    The only new aspect of this is that the Open Source projects will now be treated like the commercial vendors have been. They've always given the commercial guys lots of time, but there have been several occurrences where open source projects were given the shaft.

    The first that comes to mind was when Apache was given less than a day's notice before they disclosed the vulnerability.

    Under the new policy Apache will be given the same 30 days that Microsoft has gotten. Fair's fair.
    • As with anything ISS says, I recommend waiting for precedent before predicting that they're going to treat anyone fairly. Hopefully you are correct.
    • Apache gets 30 days if and only if the hole is still secret. If a black hat group looks at Apache's code and finds the same hole and puts an exploit into the wild, Apache gets no notice at all.

      Microsoft has an advantage at preventing this situation... black hats, or anybody else, can't look at MS's code.
      • by Anonymous Coward
        The standard method for finding an exploitable problem is to shove data at it till it crashes then examine how it crashes.

        Reverse engineer the crash itself and determine if you've corrupted the stack sufficiently to execute arbitrary code, then determine the required junk to send it to cause it to run the code you want.

        No source code is required for any of this to work.
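        The brute-force approach this comment describes -- throw random data at a target until it crashes, then study the crashing inputs -- is essentially fuzzing. A minimal sketch in Python, where `toy_parse` and its "crash" are hypothetical stand-ins for a real binary target (a real run would watch for segfaults in a debugger rather than catch Python exceptions):

```python
import random

def fuzz_parser(parse, attempts=1000, max_len=256, seed=0):
    """Shove random byte strings at a parsing routine and collect
    the inputs that made it blow up, for later crash analysis."""
    rng = random.Random(seed)  # fixed seed so a run is reproducible
    crashes = []
    for _ in range(attempts):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
        try:
            parse(data)
        except Exception as exc:  # stands in for "the process crashed"
            crashes.append((data, exc))
    return crashes

def toy_parse(data):
    """Toy 'vulnerable' parser: chokes on inputs starting with 0xFF."""
    if data[:1] == b"\xff":
        raise ValueError("unhandled record type")
    return len(data)

found = fuzz_parser(toy_parse)
print(f"{len(found)} crashing inputs collected")
```

The interesting work starts after this loop: as the comment says, you examine each crashing input to see whether the corruption is controllable enough to execute arbitrary code.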
  • Exploit is discovered by people looking for exploits; exploit gets exploited and appears in 2600; vendors deny exploit until fix is built; dumb people open all email attachments looking for funny pictures; anti-virus and firewall companies make money; questionable fix is released but some new exploit has taken the limelight; exploit is denied by vendors...
  • by Gothmolly ( 148874 ) on Wednesday December 04, 2002 @10:40AM (#4810373)
    With an uncertain future, high pricing, and alternatives [sourcefire.com] out there, why do people care what ISS says? Just because "X-Force" sounds cool?
  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Wednesday December 04, 2002 @10:43AM (#4810395)
    Comment removed based on user account deletion
    • by stratjakt ( 596332 ) on Wednesday December 04, 2002 @11:16AM (#4810649) Journal
      >> Moreover, they're giving 30 days for the script kiddies to run amok while we are clueless

      The script kiddies are clueless too. Script kiddie != black hat hacker. A script kiddie is someone who downloads the exploit when posted and uses it. The black hats discover the exploit.

      The ratio of real 'hackers' to script kiddies is about 1 to a zillion.

      So sure, that 1 hacker can still be running amok for 30 days, but the zillion script kiddies are sitting around with their thumbs up their asses.
      • Why can't ISS post the vulnerability immediately and wait until it is fixed to publish the known exploits? This way system administrators have some time to react to the problem (turning off services, implementing workarounds, etc.). In addition, it doesn't provide the known exploits to the script kiddies until the patch has been released.

        Granted, this approach does open the door for more knowledgeable blackhats to work on exploits... but it's an interesting trade-off.
        • granted, this approach does open the door for more knowledgeable blackhats to work on exploits...
          You just answered your own question. EVERY piece of software over 100 lines of code has bugs, and a decent number of those bugs can be used to compromise a system. Both black-hatters and white-hatters are always on the lookout for these, but it's not like they're all banging away at the same chunk of code. When a vulnerability is found, it's typically just one coder out there who stumbled upon it.

          But by even announcing that a hole has been found in a certain piece of software, you're giving a head start to all the blackhatters, telling them where to start looking. If you can announce the hole and the patch at the same time, you at least give the sysadmins ample opportunity to fix their machines before the bad guys figure out how to hack in.

      • It takes only one black-hatter, his installed base of zombies and a newly invented exploit to take out enormous quantities of vulnerable servers. Automagically. In under a day. Make that in under an hour. Ergo, you don't need script-kiddies. The only thing that saves us is that most black-hatters are not willing to risk getting caught so easily by doing the attacks themselves. They usually just want their fellow black-hatters to know how smart they are. 80% of the rest of them never even make their exploit known. They use it to their (financial) gain and get out. They are not going to tell anybody if they have any clue.

    • by Anonymous Coward
      Nice to know that black hats will always have better information than us. Thanks ISS. Another step backward in the fight to preserve our systems.


      You weren't aware that ISS has a history of hiring known black and gray hats to work in X-Force and product development? ISS is not the only company guilty of this either. Corporate America would likely have a kitten if they found out that a substantial portion of the code base of many security products on the market was developed by people with less than pristine backgrounds.

      And before anyone jumps up and says that it takes a hacker to catch a hacker, I certainly agree. However, is it the best possible outcome to have black hats working for security companies actively researching vulnerabilities, possibly funneling that information to the underground community, and ultimately into the hands of script kiddies? And just think of what bugs might be in the code of security software intentionally.
    • Moreover, they're giving 30 days for the script kiddies to run amok while we are clueless

      Which part of


      there are a number of exceptions that can prompt faster disclosure [...] an in-depth discussion of the problem occurs on a public mailing list; active exploitation of any form of the vulnerability occurs on the Internet; ISS receives reliable evidence that a vulnerability is in the wild


      did you not understand?

      If ISS follows these guidelines, then any evidence of the vulnerability being actively used will mean an immediate (or at least accelerated) release of information.

      This is a pretty good process, at least if it's held to for everyone fairly and equally.

      Look, I can understand not reading the article, but when you don't even bother to read the freaking summary of the article and then postulate stupidly you're an idiot.
      • Comment removed based on user account deletion
      • If ISS follows these guidelines, then any evidence of the vulnerability being actively used will mean an immediate (or at least accelerated) release of information.

        And when it's my network that gets cracked and becomes the "evidence of the vulnerability being actively used" I'll be a whole lot less than happy to discover that "We knew all about this problem 29 days ago but didn't bother to tell you about it."

        It's cold comfort to be told that your house is the first one to be burned down by the known-to-everyone-but-you faulty wiring when you're sitting in the middle of the smoking ruins.
    • How many people patch their systems the day the patch is released? Certainly, I do, but does even the majority do so? I doubt it. Moreover, they're giving 30 days for the script kiddies to run amok while we are clueless. They will certainly find out, if there is even an inkling of information about the exploit. IRC is much more effective than ISS anyway.
      First off, it's a sysadmin's job to patch vulnerabilities ASAP; considering you're charged with maintaining the security of your systems, I think it pretty much comes with the territory. As for the script kiddies, any admin worth his salt can find a makeshift solution to at least reduce if not stop the attacks completely. Remember, too, that this article refers to how ISS notifies the public, not the vendor. Patches can still be released at the same rate, but the public is kept in the dark as to how to exploit the hole in the first place. After all, it's better to force the script kiddies to figure out the exploit themselves than hand them the manual, no?
  • by Featureless ( 599963 ) on Wednesday December 04, 2002 @10:45AM (#4810410) Journal
    It sounds eminently reasonable - the best for all concerned. 30 days is not a long embargo, and their list of exceptions seems to me extremely thorough. This appears to answer criticism that "premature disclosure" is irresponsible (a criticism which I don't give much merit, but others disagree) with an intelligent and nuanced policy.

    The message to vendors: we'll cooperate with you, if you act responsibly and respond quickly.

    Quickly being the operative word. The tragic thing in the disclosure and response-time debate is the assumption that if the white-hat side discovers a flaw, they're the only ones who've found it... and just because you can't find a paper or an exploit after a bit of looking doesn't mean it's not out there.

    Certainly, there is a long history of big vendors (I won't name any names... ah, whatever, Microsoft) who completely ignore (i.e. won't return calls) or "yes" the helpful hackers to death (i.e. "yes, it's on the list, we'll have a new patch _any day now_" - rinse, repeat for 6 months), and then whine when the disclosure becomes public... even as the publicity stings them into finally bestirring themselves to release a patch. So I'm very glad to hear of those in the security community making a logical response to it all.
  • Odd (Score:3, Interesting)

    by CaseyB ( 1105 ) on Wednesday December 04, 2002 @10:46AM (#4810419)
    The new guidelines will give vendors thirty days to come up with a fix before disclosure is made ... [Sooner if] "the vendor is unresponsive."

    So, they give the vendors 30 days to respond -- unless the vendor doesn't respond sooner? Immediately? What's the point of the "30 day" rule if a response is required BEFORE then?

    Sounds like a completely arbitrary process to me.

    • Re:Odd (Score:5, Informative)

      by Apathy costs bills ( 629778 ) on Wednesday December 04, 2002 @10:50AM (#4810440) Homepage Journal
      Unresponsive usually doesn't mean things like "doesn't answer". Unresponsive means things like:
      • "That's not a vulnerability."
      • "That vulnerability is purely theoretical"
      • "We're not fixing it, and if you release information about it, we'll sue you."
      • "What's a vulnerability?"
      • "la la la la la la la la la"
      In short, any response along the lines of "go ahead, we ain't fixing it".
    • They said "thirty days to come up with a fix." The vendor first has to respond and acknowledge there is a problem. If they don't do that, then details may get released sooner.
    • Being responsive doesn't mean issuing a patch.

      Being responsive only means ANSWERING to the e-mail, ACKNOWLEDGING the bug, and saying that you are TRYING to fix the problem.

    • Maybe they make a distinction between a vendor answering "Oh, you are right, give us 30 days to release a patch!" and one answering "This is a feature!"

      (but I see your point)
  • Why this is... (Score:4, Interesting)

    by 1155 ( 538047 ) on Wednesday December 04, 2002 @10:52AM (#4810461) Homepage
    Really good:

    Disclosure, for the most part, is a good thing. Even things such as the SMB issue, where the Samba team found a way to shut down a server remotely, aren't disclosed unless there is a threat of disclosure, in which case you need to go ahead and patch your hole or you will be seen as, well, uncaring by those who care.

    This also allows for faster knowledge: if there is an active mailing list discussing the problem but I am not on that list, then ISS will still inform me of it, whether on that mailing list or whatever form of communication said project uses.

    The Cons

    As mentioned in comments already, I am assuming, people will be able to blackmail one another in order to keep said hack/hole/easter bunny out of the limelight. A little bit of cash can go a long way sometimes. Be wary of what is, and what isn't, reported.

    Why this is important to you:

    It gives you a more defined description of how things are going to go, and how many grains of salt you should take with each hack. You should know that each hack/hole out there has already been out there for a month, and that it could have been out there for a lot longer. Joe Blackhat just doesn't give up his tools, unless they are not useful.

    Why this is not important:

    ISS is not the only security site, and it should not be your only site to get updates from, either. Do a google...
  • Send in the real, original X-Force [comixtreme.com] !

    (Boy, did that headline have me confused or what?)
  • So is Cable going to rejoin the team or what? Heck, Domino would be good too.
  • I'm skeptical. (Score:5, Insightful)

    by Anonymous Coward on Wednesday December 04, 2002 @11:04AM (#4810554)
    Well, these "guidelines" are common sense to every researcher who has a bit of heart for the field of work. I guess their partners were finally able to beat some reason into these ISS people. The recent BIND fiasco proved once again that these "security researchers" value headlines more than their supposed mission statement. (Yes, I know, we all like to earn a buck, but in every profession you have your moral obligations.) ISS deliberately rushed advisories, and I don't think the issue was due to a lack of guidelines - this policy was a strategic move to get news stories at the expense of the users worldwide. These malicious practices are a disgrace to the security community that has come such a long way, and although ISS are not the only ones, they have probably been the most high-profile commercial predators.

    Anyway, we've heard similar promises before from OIS (of which ISS is a founding member) and it never stopped ISS from unethical behavior. But now apparently it bit them in the ass. I am surprised that nobody of their "alliances" denounced ISS for their malpractices earlier; I suspect this has been done behind the curtains, but granted, as long as it's effective, fine with me!

    So way to go ISS, but I wouldn't already sing hallelujah - they were always wrong and this is just normal. ;)
    • Malpractice? Shouldn't it be considered malpractice to write insecure software to begin with? How can you sell an incomplete prepackaged product?
  • by mblase ( 200735 ) on Wednesday December 04, 2002 @11:06AM (#4810568)
    I'm waiting for the day when someone decides to threaten the software security agencies into silence, claiming "it's a feature, not a bug" and the DMCA gives them the right to silence public discussion about how to exploit the flaw.

    Hey, if Wal-Mart can invoke it because people are pre-announcing their sale prices....
    • Err, you don't need to wait. HP already did it, search the archives or google a bit.
    • Unfortunately, I believe this has been done before, on more than one occasion. I think something like this was featured on /. a while back.
      Another reason the DMCA is a completely evil law.

      It protects corporations from having to take responsibility for security flaws in their software, and it turns the people who try to help users by providing information about the flaws and possible fixes into "criminals."
    • Uh, they didn't exactly threaten under the DMCA. They mentioned that the safe-harbor portion of the DMCA protects the website until they are warned that they are infringing -- and now that we are warning you, take it down or be sued. Before, you could be sued just for having infringing content on your website; now they have to inform you that you are infringing, and then if you don't remove the content you can be sued.

      I don't like the DMCA, either; however, the DMCA really isn't an issue in that case.
  • by John_Renne ( 176151 ) <zooi@gniffeMENCK ... net minus author> on Wednesday December 04, 2002 @11:09AM (#4810596) Homepage
    It almost seems the 30-day limit is a pretty reasonable one, both for vendors and for bughunters. Just yesterday in this [slashdot.org] article the PGP foundation announced the same period as desirable for releasing exploit information to the public. Coincidence or not?

    In any case. The period looks pretty reasonable to me. The firm will have enough time to investigate and release a patch before the scriptkiddies out there will get their hands on exploit code. Now if all bughunters out there would follow this policy...
  • open source (Score:4, Interesting)

    by WPIDalamar ( 122110 ) on Wednesday December 04, 2002 @11:14AM (#4810633) Homepage
    Does this include open source projects? Aren't these the guys who released an apache hole a while back without telling them because they weren't a small cohesive group (or something like that?)
  • An idea (Score:3, Insightful)

    by Ektanoor ( 9949 ) on Wednesday December 04, 2002 @11:51AM (#4810918) Journal
    Well, the guidelines are not bad at all, but 30 days may be too much. We know that parallel discoveries are frequent and that some organisations are quite stubborn about changing their behaviour toward security. While 30 days might be an acceptable span for most problems, I would prefer a more graduated exposure timeline, based on some criteria. For example:

    If the exploit is highly dangerous, but complex, a step-by-step disclosure over a period of up to 30 days would be preferable.

    If there are middle-term measures capable of providing a temporary solution, then the problem is disclosed in a shorter period.

    If the vendor/developer has a terrible record of playing "it's a feature not a bug", then no pity on him. Either disclose ASAP or in shorter periods. This could be a good instrument to punish their lamerness.

    If the vendor/developer comes up with half-measures and dubious patches, disclose without pity.

    And, besides, I believe it would be good to have some early warning mechanism, or disclosure may catch many people asleep. Maybe it would be good to have a standardized warning message 24 or 48 hours before disclosure, saying that something may be wrong with this or that app. This message should in no way be similar to the press releases the mass media uses to pump up the crowd, or else we may risk having the information spoiled by some journalists trying to gain points in their careers.
  • They needed to (Score:4, Informative)

    by xrayspx ( 13127 ) on Wednesday December 04, 2002 @12:34PM (#4811241) Homepage
    ISS has been complained about and complained about from both sides of the Full Disclosure issue. Full disclosure to Bugtraq is great, but when ISS or certain others release without vendor notification/vendor acknowledgment, it's just dangerous and rude.

    I'm personally glad that they aren't held up as the norm in the community. Most people seem to follow some variation of Rain Forest Puppy's RFPolicy [wiretrip.net] concerning vendor contact and reasonable timetables for releasing to the community when faced with unresponsive/uncaring vendors.

    Good for X-Force, good for the community for browbeating X-Force.

  • by Anonymous Coward on Wednesday December 04, 2002 @12:49PM (#4811359)
    Before you go congratulating ISS on their new security policy, you should read the whole article.
    "The security brief will be made available to X-Force Threat Analysis Service customers one business day after the initial vendor notification. X-Force will revise security briefs if additional information emerges during development of the advisory."
    This means that paying customers of ISS will receive the information 29 days before the rest of the world. This is part of an alarming trend of companies and organizations charging money for advance notice of vulnerability information (e.g. iDEFENSE [idefense.com] and even CERT's new Internet Security Alliance [isalliance.org]).

    Let's not forget the way things *used* to be. A few years back, the rule was that a small cadre of elite people knew about the vulnerability before the rest of the world. This caused lots of problems, which was one of the reasons for rfp [wiretrip.net] to push for responsible full disclosure in the first place.

    The ISS policy represents a regression back to the old way of doing things, except now the cadre of people "in the know" are the ones who can afford to pay ISS for advance vulnerability information. Presumably the rest of the world has to suffer and get hacked. Support companies and organizations who TRULY practice responsible full disclosure -- don't support companies trying to make a quick buck off this kind of extortion.

  • Especially not after Cable tried to kill Dr. Xavier.
  • by Anonymous Coward
    Of course if you pay ISS money you can be a customer of theirs then you will find out about security issues in advance, a day after the vendor is notified (or an attempt is made to notify).

    How can this be responsible disclosure unless they make sure that all their customers are "good guys"?
  • --It's a bad idea, for two reasons. One, it allows a vulnerability to exist WAY too long potentially, and there is NO guarantee that the exploit hasn't been spread around extremely sub rosa by blackhats. People running the software DESERVE to know if it's vulnerable, in a timely manner, not at some arbitrarily picked point in the future that "one month" represents. The time to announce a vulnerability is when it's found, period. If my car springs a fluid leak I want to know about it now; if a fire starts I want to know about it now; if a manufacturer discovers a safety issue I want to know about it now, right after they find out. This gives me a CHOICE of what I want to do.

    Two, the companies NEED to keep getting hammered with emergency DO-IT-NOW-NOW-NOW work, because EVENTUALLY it will sink in to code once, troubleshoot, audit, bugfix, do it again, do it again, THEN release it. It won't eliminate all bad code, that isn't happening, but it sure will slow it down to a manageable level. We need bored Maytag-repairman security guys because stuff is "a lot more secure outta the box", not this make-work growth industry model we have now; releasing buggy stuff to create jobs is what it looks like to me. We need FEWER releases of BETTER audited code, not faster releases of still-buggy stuff. I couldn't care less if releases of this or that software were once a year, or once every two years, and extremely robust and stable and secure, as compared to willy-nilly constantly needing bugfix after bugfix. Closed source or open source. Hardware or software. Fewer releases of much better quality.

    --generic rant--

    Same with Detroit and Tokyo: new models every other year, or even every 5 years, not every single year, and I don't care what happens to the evolution of that industry either. There's too much crapola getting released all across the manufacturing spectrum; throw-away-itis and almost constant obsolescence is not a good idea, it simply costs too much in terms of money and resources. The world is credit-maxed out from this push to constantly throw away still-useful stuff for "new and improved". It's ridiculous. Here's an example: I've got a pile of older cellphones. The reason? Because they have made it so it costs twice as much to buy a new battery as to buy yet another phone! ALL my old phones still work swell, if only they had a battery that worked for more than 30 seconds. It's silly. Durable goods and software are the same. Yes, I know that at some point older stuff just needs to get chunked, but geez Louise, it's gotten out of hand with stuff only two years old being classed as antique, worthless, throw it away and replace it.
    • I agree with your opinions about software that is released too early without proper testing. However, your analogies to physical products are flawed. In those cases the presence of the condition is an immediate threat, and therefore you should know about it immediately. In the case of software vulnerabilities, the announcement of a working exploit increases the potential threat because of script kiddies. A more appropriate analogy would be a bank privately telling its customers that there might be a way to gain access to their accounts, rather than the bank going public and saying to any potential thieves, "Hey! Want to know how to get free money?"
      • --I can see where both ways have potential downsides. The deal is, anyone ("you") using the software has zero way to know, unless you are told in advance or actually hacked, and then it's semi too late. There's no easy answer. I see your point clearly. But again, one of my points was that the exploit might already be in the wild, as in, who knows? Just because the white hats know there is a potential doesn't mean the blackhats haven't already discovered it and deployed it stealthily through obscure and arcane channels with each other. It's a matter of degree, I guess: is it better to temporarily stop the service until such time as a patch is released, or wait a month while who knows who is aware of the vulnerability? Waiting this up-to-one-month time frame might leave you open for that month, because the white hats themselves just *might not know* who else knows what they know.

        Personally, I think it's better to leave it up to the customer to decide "when" they are to be notified about any vulnerabilities; then, whatever happens, the white hats and vendors are off the hook (merely ethically, after all, with free/open source) and can proceed at their own pace. Of course that would open up another set of problems, as you couldn't be sure that someone who wanted immediate notification would keep it secret. Closed source, proprietary, for-cash software? I gots no sympathy anymore. I am usually against more laws, but something has to be done about massive mega for-profit corporations and their products that carry ZERO warranties. No other consumer product enjoys that status; it needs to be altered to some sort of guarantee with liability. They want it both ways: pay for our stuff, but too bad if you are snafued and screwed, blued, and tattooed. That's a side issue, but dang if I think it's any sort of "fair" now, or even remotely ethical or equitable. Bet a buck that if commercial closed source had to carry normal consumer warranties, it would be written a LOT better, with a lot fewer conflicts or security vulnerabilities. If it means those various companies make slightly less profit or maybe release less, too bad.

        Oh well. Guess I just liked it the way it was before, when it was an "almost" immediate public release on security vulnerabilities. A full month in computer time for unpatched vulnerabilities and no notification gives me the buckwheats. And I don't even run servers or anything. I also understand this "month" is the outside limit, but still, it seems sorta excessive.
  • Let's say the Linux kernel develops a vulnerability (noo, never). So they tell Linus. What happens next? All the developers related to the problem have to discuss it somehow, and lkml is very, very public.

    OK, so the kernel people might be able to keep quiet, but what about a smaller project that's more in the open and less critical? SourceForge lists are public, and so are CVS commits.

    This really only works with closed software. Open source stuff has such a public development process that keeping it quiet is next to impossible.

    Brian
  • Why do people pay good money to ISS for their Internet Scanner tool? It seems that this tool is very popular, but don't most people involved at these levels of security work know about the Nessus [nessus.org] project? I personally find that the ability to customize the system to the Nth degree and build my own triggers blows away most commercial systems. I've used ISS's scanner, CyberCop, and a few others. Some of the reporting tools are good out of the box, but I always find myself returning to Nessus [nessus.org].
  • Whew! I was scared... for a second I thought this article was going to say that ISS hires black hats...

    But as we all know, that's just ludicrous, right?
  • I expect that no one has objections. However, if I'd only add these entries
    to the list because `I think it's the right thing to do', I'd get a lot of
    flames afterwards :)
    -- Christian Schwarz

    - this post brought to you by the Automated Last Post Generator...
