Security

Sendmail Bug Tests US Dept Homeland Security

yanestra writes "CNET reports that the recently disclosed Sendmail bug served as a test for the US Department of Homeland Security, which appears to have managed the information flow in this case."
  • by Anonymous Coward
    "What's the sendmail bug of this week?"

    The trend is back!
  • Wow (Score:2, Informative)

    And it's taken them this long to set up a system like this. I'm glad Bush got his act together and appointed someone to the administration who actually cared about information technology, otherwise this may have taken much longer.
  • bleh (Score:5, Insightful)

    by Joe the Lesser ( 533425 ) on Tuesday March 04, 2003 @09:20AM (#5432078) Homepage Journal
    While keeping news of the issue from leaking to those who might exploit the vulnerability.

    Free flow of information > Security
    • Re:bleh (Score:5, Insightful)

      by Xzzy ( 111297 ) <sether@@@tru7h...org> on Tuesday March 04, 2003 @09:37AM (#5432183) Homepage
      hardly.

      If the parties involved are actively seeking to fix the problem in a timely manner, I see no harm in not shouting from the mountain top what the problem is.

      Full disclosure after a patch is done, yes. But doing it before serves no purpose but to conform to some wishy-washy idealism, and it potentially amplifies the damage an exploit could cause.

      And I'm talking in terms of a couple days. If the affected parties hit the snooze button and two weeks roll by, then yes, release the info and make fun of them for the havoc it causes. ;)
      • Re:bleh (Score:5, Informative)

        by embo ( 133713 ) on Tuesday March 04, 2003 @09:44AM (#5432222)
        And I'm talking in terms of a couple days. If the affected parties hit the snooze button and two weeks roll by, then yes, release the info and make fun of them for the havoc it causes. ;)

        FYI, this flaw was actually found in December [msnbc.com] and just reported yesterday, roughly two months later.
        • Timeline? (Score:3, Interesting)

          by Marty200 ( 170963 )
          FYI, this flaw was actually found in December [msnbc.com] and just reported yesterday, roughly two months later

          It would be interesting to see the timeline on this... Did it take this long for the patch to be created, or did it sit on someone's desk for long periods before someone spent an hour making the patch?

          MG

        • Not that bad (Score:5, Insightful)

          by siskbc ( 598067 ) on Tuesday March 04, 2003 @10:51AM (#5432624) Homepage
          FYI, this flaw was actually found in December [msnbc.com] and just reported yesterday, roughly two months later.

          Thanks for the link. You know, I don't think 2 months is exorbitant in this case. As your article states below,

          "Because there are so many different flavors of Sendmail, twenty software vendors had to develop a variety of patches for the flaw..."

          So, they had to patch a ton of different versions, and you don't necessarily want them issuing a shitty patch. So if you blame anyone, blame those sendmail monkeys for the delay. ;) Given the nature of the coordination effort, I think they did quite well.

      • by 4of12 ( 97621 ) on Tuesday March 04, 2003 @10:16AM (#5432404) Homepage Journal

        If the parties involved are actively seeking to fix the problem, in a timely manner, I see no harm in not shouting from the mountain top what the problem is.

        I think it reflects well on discoverers of vulnerabilities if they notify the software maintainers first by backchannel means and describe the vulnerability with enough precision for the authors to be able to fix the problem in a timely manner. DoVs should get extra credit if they submit an actual patch that fixes the vulnerability (does not apply to proprietary binary products, clearly).

        But the vulnerability is a ticking time bomb out there for users in the real world. The white-hat DoV may have discovered the vulnerability after 3 black hats who are already shoving it into their latest malware.

        The discoverer of the vulnerability and the maintainers of the software are jointly responsible for doing everything in their power to expedite their work, to notify users of the vulnerability, and to provide a patch for them.

        Finally, all software users have the responsibility to keep apprised of the latest security alerts and patches for vulnerabilities and to apply them.

        If any of the three parties (discoverer, software maintainers, software users) falls short on any of these responsibilities, then all users will suffer.

        As a user, I must rely upon the goodwill of the DoVs and the maintainers.

      • by ChaosDiscord ( 4913 ) on Tuesday March 04, 2003 @12:17PM (#5433277) Homepage Journal
        If the parties involved are actively seeking to fix the problem, in a timely manner, I see no harm in not shouting from the mountain top what the problem is.

        The problem is that just because I (an innocent user of the product) don't know about the vulnerability doesn't mean that the evil crackers don't know about it. Sure, a public announcement increases the number of crackers who know about it, but it also gives me enough information to react. There is a security hole in sendmail, but no patch yet? Well, without real information, I can't confirm whether my particular installation is at risk. Once I know about it, I can take reactive steps. With enough information I could try to patch the vulnerability myself. With enough information I could try to limit my risk (say, changing my sendmail configuration to limit what an attacker can get, or adding a wrapper to detect the attack and terminate the connection). With enough information I can reasonably weigh the options of disabling sendmail for security reasons versus keeping it up for my users.

        With no information, I'll just keep ignorantly running the vulnerable version, possibly getting attacked by crackers who already knew about it. With only a little information, I can't decide whether I'm really at risk or weigh my possible solutions.
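The "wrapper" idea mentioned above can be sketched as a pre-filter that screens header lines before sendmail ever sees them. This is a minimal, purely illustrative sketch in Python; the heuristic here (oversized or oddly nested address headers) is a hypothetical stand-in, not the actual signature of this particular flaw, and all names are invented:

```python
MAX_HEADER_LEN = 998  # RFC 2822 line-length limit for a header line

def header_looks_suspicious(line):
    """Return True for header lines a cautious wrapper might reject
    before passing the connection through to the real mail daemon."""
    if len(line) > MAX_HEADER_LEN:
        return True  # absurdly long header line: classic overflow bait
    lowered = line.lower()
    if lowered.startswith(("to:", "from:", "cc:")):
        # Crafted address fields often abuse angle-bracket or comment
        # nesting; a normal address header needs very few of either.
        if line.count("<") > 8 or line.count("(") > 8:
            return True
    return False
```

A real wrapper would sit in front of port 25, apply a check like this to each header line, and drop the connection on a match; whether that catches a given exploit depends entirely on how good the heuristic is.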

  • by mdb31 ( 132237 ) on Tuesday March 04, 2003 @09:23AM (#5432090)
    Interesting to read that the government is involved with this -- kind of makes you wonder what happened to CERT, which always used to coordinate public disclosure of and vendor response to bugs like this.

    The fact that CERT always seemed to do a decent job makes this even more interesting. The biggest criticisms voiced about CERT were that they acted too slowly and didn't provide enough detailed information about problems (other than acknowledging their general nature). How will the government do better in these areas?

    My guess is that the answer to the latter question is 'not much', and that we'll start hearing the same complaints about the Dept. of Homeland Security soon...
    • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Tuesday March 04, 2003 @09:45AM (#5432231) Homepage Journal
      I think you answered your own question:

      The biggest criticisms voiced about CERT were that they acted too slow and didn't provide enough detail information about problems

      In other words, CERT was a day late and a dollar short.

      we'll start hearing the same complaints about the Dept. of Homeland Security soon...

      I agree. Except they'll be a year late and ten billion dollars short.
    • My guess is that the answer to the latter question is 'not much', and that we'll start hearing the same complaints about the Dept. of Homeland Security soon...

      I don't want to sound paranoid, but if you complain about homeland security or bypass their system, what makes you think you'll be around to complain about them for very long?

    • I was thinking the same thing you were at first, with CERT being cut out of the picture. CERT is an independent organization, and they rely on people telling them stuff. It seems in this case they were bypassed for patching and notification of the initial vulnerability, but they weren't cut out of what they do best, which is archiving all of the notifications and making it easy to get patch info once it becomes available. It's not like CERT actually makes the patch. As you can see HERE [cert.org], CERT has a notification about this one; it seems CNET left that link out. CERT, I think, at least with this development, works much more like Slashdot, in that they get notified of the news and they post it all on their site. Of course, if it's the first time anybody has heard of it, they notify affected people first, so as not to create unneeded havoc with hackers getting to the vulnerability first.

      So CERT will still go on. In this case all the people involved (ISS, SANS, FedCIRC and the like) cut out CERT voluntarily. I'm sure CERT wasn't really cut out, in that they do have a notification about it; then again, they didn't necessarily do all the coordination work, and they're probably happy about that one. They can worry about other stuff. In my opinion, everybody should be this involved in fixing security issues.
    • CERT has been running this "survey" about "internal threats" that companies might have observed between two specific dates. Not from such-and-such a date until the survey is taken by any respondent, but between two exact dates. I looked (maybe I missed it), but I haven't seen a reason for picking the end date. I can speculate why that might be, but I'll let someone else do that.

      begin more generic rant

      Don't know about anyone else, but with Patriot Act 2 coming into law soon, the government can just call someone a "terrorist" on their say-so, with the definition just vague enough to apply, it appears, to just about anyone, which means they are then not under any civil protection or rights. I am wondering if they are starting to set up even more infrastructure to add to "the lists".

      Anyone who doesn't take the "lists" seriously is someday gonna be waving bye-bye from the back of a truck heading... someplace.

      When I was growing up, the stuff the US government is doing right now was something we were taught only "bad" places like East Germany did. And those bad places had a complete blend of bureaucracy, large corporations, and then the military and police. Everyone snitched on each other. The government had all the rights, you had none, and even if some word drivel was printed on paper someplace, the government ignored it. That's exactly what those bad places were.

      We were taught that was definitely "wrong".

      Now it's "patriotic".

      Yes, we have a need for some sort of law enforcement effort on the net, and it's there, and quite frankly it's more than enough to function; the net is part of society. But what we are seeing now goes WAY beyond it.

      And now all these other weird things? Model toy rocket permits now, but leave the border just wide open, millions of illegals a year free to just walk across? Huh? They are going to regulate or ban model airplanes, while they have been spraying HUGE amounts of weird crap over America for several years now and outright lying about it? Huh?? We have a MAJOR goon-run CIA front company called "Wackenhut Security" running private prisons, running for-profit manufacturing efforts using prisoners, running some mental institutions, and now RUNNING ROADBLOCKS on the public highway? This just broke a few days ago: a private security org manning roadblocks. Just THINK on this one. We have "secret" Total Information Awareness efforts codified into law? Is there something about the word "total" that isn't understood? Forced collection of DNA samples at roadblocks? Taking hair and blood samples, and you aren't going to be able to say NO? Collation of all purchase records? High-level officials who just blatantly WARN YOU that if you are NOT 100% behind their efforts, YOU ARE A TERRORIST?

      And now they are taking over these internet efforts when it comes to security, telling people what they can and can't do, and this "they" guy will tell you when an exploit gets noted and "official" patches released? Huh? What's to stop them from eventually making little cute distinctions between what they release and what they don't? Suppose "they" decide they would like a little pre-patch hacking so they can get into machines THEMSELVES. Maybe they JUST DID THAT, hmmm?

      sweet deal for them.

      I am against non-disclosure of exploits in a timely manner. Waiting months is not timely. Anyone writing code now can review it before release. Anyone NOT knowing about "security" in general needs to stop, step back away from the keyboard, and stop writing code until they "get it" on security. Because GUARANTEED, if this constant release of buggy code continues, and if people who maintain what are historical examples of just dismal exploitable code that should be chucked out as lame don't voluntarily admit it's buggy and pull it off the distribution mirrors, this government will start regulating all releases themselves, after a "review". They don't do it now, but they sure as heck could make it a law tomorrow. In my opinion, it's better not to give them any more excuses.

      If that's what everyone wants, because known sloppy stuff keeps being used and released, this is what's going to happen. You are going to see licenses, you are going to see full governmental review of code, probably fees attached, stuff like that. I tell you, the internet is going to turn into an electronic "highway" (whoops, they call it that), so this highway is going to be full of Smokey the Bears and roadblocks and regulations. And I am NOT kidding on that.

      We saw them just hijacking sites last week. I can see them starting to do that on a much larger scale. And if sites get hosted overseas, you know what, the government will have no problem dealing with that; if anyone cares to notice, they have no problem going over and stomping on other nations. They can control some wires if they choose to. Host at home, you are going to outfox them? Not when they can just call up your ISP and have you dropped, then send over some goons to pick you up once you are on the "suspicious" list. And they'll do some of these efforts from major backbones or routers if they have to; I am not so convinced that Carnivore and such-like efforts only have the capability to just sniff.

      /rant

  • Encouraging (Score:4, Interesting)

    by Peter_Pork ( 627313 ) on Tuesday March 04, 2003 @09:24AM (#5432095)
    This is actually quite encouraging. Having an organization that deals with the painful process of contacting each vendor and major user of a program with a newly discovered vulnerability is a major improvement. They also seem to have the law behind them (is this true?), so we finally have someone that can force people to fix security holes. I don't quite like the homeland-security big-brother model, but it worked nicely in this case and got the job done, something pretty hard in the Internet jungle.
    • Re:Encouraging (Score:5, Insightful)

      by ecalkin ( 468811 ) on Tuesday March 04, 2003 @09:33AM (#5432162)
      sadly, i don't see the 'force people to fix security holes' where we need it.

      we have (mostly) good timing getting patches out (even ms gets patches out), but getting end users to *apply* the patches has been a problem. lack of knowledge, time, technical skills, etc.

      at this point, this doesn't seem to be addressed.

      how do we (ahem) fix the end user? my belief is that it should be required that end users have staff/contractors that are certified on their stuff *and* that they maintain a maintenance log that documents actions or lack of them. if you look at radio stations, the requirements include licensed radio engineers and logs and other must-dos and must-haves.

      it's time people understood that being connected to everyone else requires a little bit more work.

      eric
      • lack of knowledge, time, technical skills, etc ...

        you forgot the most important one. Refusing to let MS ownz your system with Service Pack EULAs. EVIL SP, EVIL EULA .... but, I need that security patch. Damn them....

      • by DeadSea ( 69598 )

        how do we (ahum) fix the end user? With a pair of pliers, of course.
      • " my belief is that it should be required that end users have staff/contractors that are certified on their stuff *and* that hey maintain a maintenance log that documents actions or lack of them"

        Gee... What about if I have to move away? As an end user, what will my mom do then? I guess under your plan she will just have to stop using the internet or be thrown in jail. Maybe you should be nicer to my mom.

        Come on folks! Remember that the internet and servers aren't tied directly to our nervous system (in most cases). When an internet connected computer goes down or is hacked we are talking about economic disruption at worst, but usually it is no more than an annoyance.

        • Re:Encouraging (Score:3, Insightful)

          by gmack ( 197796 )
          This is less and less the case. Keep in mind that the traffic caused by the Slammer worm managed to disrupt 911 services.

          Also... what is your mom doing running servers? If there is no one to maintain her systems, then there should be no outside-accessible daemons at all.

          • Re:Encouraging (Score:3, Insightful)

            by bigpat ( 158134 )
            Okay here's the thing. If we have to worry about malfunctioning (malicious or buggy) computers shutting down or disrupting the internet, then the internet is already broken.

            Does anyone remember that the Internet was a network designed to continue to operate after a nuclear war? We should not have to worry about this stuff. This is a problem for network architects, not the server admins.

            If my server gets hacked, then that should and must remain only my problem. Don't tell me the obvious, and don't shift responsibilities. These challenges can only be solved with distribution of resources and by maintaining excess capacity.

            It must be taken as a given that a network like the internet will have bad actors whose malicious actions it must be able to absorb until the problem is corrected or blocked.
            • Re:Encouraging (Score:3, Insightful)

              by gmack ( 197796 )
              Yeah well I too miss the days when a rooted server on someone else's network was not my problem. But welcome to today.

              How exactly are network architects supposed to design for 300 drones all sending traffic to one place? There is no amount of overcapacity that would handle that.
      • Re:Encouraging (Score:4, Insightful)

        by tacocat ( 527354 ) <tallison1@@@twmi...rr...com> on Tuesday March 04, 2003 @10:26AM (#5432480)

        I don't think throwing a pile of bureaucratic bullshit is going to improve the situation. That's one of the points lauded by previous posters: this was an example of someone who was able to get something done technically without the forms in triplicate. You are advocating those forms!

        As if we had time to spare with the patches already; you want to make us spend countless hours filling in stupid forms?

        Personally, I think that public humiliation of the company that fails basic security patches is a pretty effective method. It now becomes an interest to the company to maintain a positive PR profile. And we all know that the only thing greater to a Corporation than profits is the Image it portrays.

      • I don't think this would work, mainly due to the economics. Organizations are running on a shoestring and in some cases barely staying profitable. It could push a number of companies over the edge if all of a sudden they were required to hire a bunch of contractors.
    • Re:Encouraging (Score:2, Interesting)

      I am not too sure of this... DHS has legal powers in the US and can force companies to comply in the US. Right now, when a large chunk of software gets developed by US companies, this works fine.
      What happens when a non-US company/individual finds a bug? The information might be held back in the US for security reasons, but it *might* break out outside. What would then happen is that the US would be the most affected. Remember that a lot of the later viruses/worms were of non-US origin. In this case they got ISS to shut up; that might not always be true.
  • So what? (Score:5, Insightful)

    by da3dAlus ( 20553 ) <dustin.grauNO@SPAMgmail.com> on Tuesday March 04, 2003 @09:24AM (#5432100) Homepage Journal
    Are they saying that this worked perfectly? If so, what about the next exploit? What if Joe Nobody finds a hole, and makes it public before the DHS gets with the makers of the software? What about the businesses in the private sector that fail to patch their systems? Wasn't the fix for SQL Slammer out for months? I'm sure this is a step in the right direction, but really, what happens next time?
    • Re:So what? (Score:3, Insightful)

      by dissy ( 172727 )
      > Are they saying that this worked perfectly? If so, what about the next exploit?
      > What if Joe Nobody finds a hole, and makes it public before the DHS gets with
      > the makers of the software? What about the businesses in the private sector that
      > fail to patch their systems? Wasn't the fix for SQL Slammer out for months? I'm
      > sure this is a step in the right direction, but really, what happens next time?

      I think no matter who is in control of oversight, be it CERT or the government or anyone, the same problem of "if we don't find out first, we can't do much about it" holds true.

      You also have to keep in mind that this bug was discovered in December and disclosed in March, and that only pertains to one person at ISS.
      Not to belittle his work finding this bug, but it's still technically possible someone else found it before him and is good at keeping secrets.

      If you assume that is true in all cases (which, from a security standpoint, you need to assume), it really doesn't matter. That they are telling you about this hole now doesn't change the fact that ALL systems using sendmail since version 5 have been exploitable for the past 10+ years.

      The hole being disclosed isn't what causes the security problem. It's the other way around.

      > Sometimes I doubt your commitment to Sparkle Motion.

      That sig sounds like a product of fear :P
  • by linuxkrn ( 635044 ) <gwatson@noSPaM.linuxlogin.com> on Tuesday March 04, 2003 @09:25AM (#5432108)
    Sendmail is a very flexible mail package... too flexible for most people.

    Its power and configuration settings make it a good choice for admins who have taken the time to read up on it. However, more often than not we find that there are a lot of lazy admins out there who just get it "up and running" and don't care to understand the security issues with the server. I used sendmail for years in the past, but now use Postfix. There are a slew of other mail programs out there that can be configured without having to use m4 rules, understand sendmail's rewrite methods, etc. I would suggest that if you must have a mail server up, but don't want to take the time to learn sendmail, PLEASE use something else. I realize this is a little off-topic, but not too much. It all boils down to securing the net. That takes more than a few bug fixes (and YES, you must apply all of them) and a good admin to configure the server/services.
  • Sendmail always has been and always will be a security risk.

    Superior alternatives exist... so why is anyone still using sendmail???
    • by Oculus Habent ( 562837 ) <oculus.habent@gma i l . c om> on Tuesday March 04, 2003 @09:40AM (#5432199) Journal
      Windows always has been and always will be a security risk.

      Superior alternatives exist... so why is anyone still using Windows???
      --
      Sure Joe runs sendmail, and sendmail is insecure. But does Joe's server get attacked frequently? Chances are it probably doesn't. If it does, Joe may be looking into alternatives, or Joe may have found one already.

      Joe doesn't have the time to fix every potential threat. Joe probably installs patches and updates as frequently as possible, maybe even on a schedule. Joe does his best to keep sendmail from being a problem, and at the same time Joe tries not to waste time.

      If Joe were working for a huge company that depended heavily on its e-mail, Joe would probably spend more time on sendmail. But odds are Joe doesn't, and Joe is doing the best he can.
    • ...Why is anyone still reading this one? ;)

    • If you look closely, you'll find that there are quite a number of completely different programs now that are called "sendmail". It has been widely understood that the original sendmail program was an overly-complex beast that tried to do everything for everyone, and was probably not fixable in any general sense. So over the past 10 or 15 years, a number of other mail daemons have been written.

      Because there has been so much software installed that knows how to talk to the original sendmail, it has been common to make new mailers present the same UI to the world. This way, a new mailer can just be dropped in as a replacement for sendmail, and everything works.

      One of the oldest of these, written in the mid-'80s, was called "smail". After a few releases, the authors listened to the complaints about the difficulty of installing it in place of sendmail. So they added code that checked argv[0], and if it was called as "sendmail", it interpreted its command line the same way as the original sendmail. It didn't do everything, but it had most of the functionality that was actually in use, and a simple ln command usually sufficed to replace the old monster with the new, smaller monster. This made it spread very quickly among systems whose admins were unhappy with the problems with sendmail. Others have since used the same approach.
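The argv[0] trick described above is easy to sketch. smail itself was written in C, but the dispatch logic looks the same in any language; this is an illustration of the idea, not smail's actual code, and the mode names are invented:

```python
import os
import sys

def dispatch_mode(argv0):
    """Choose a command-line personality from the name the program
    was invoked under, as smail-style mailers did with argv[0]."""
    name = os.path.basename(argv0)
    if name == "sendmail":
        # Invoked through the compatibility link: parse options
        # the way the original sendmail would.
        return "sendmail-compatible"
    return "native"

if __name__ == "__main__":
    print(dispatch_mode(sys.argv[0]))
```

After something like `ln -s /usr/local/bin/newmailer /usr/sbin/sendmail`, every script that execs `sendmail` lands in the compatibility branch, which is exactly why a single ln command sufficed.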

      Most of the newer "sendmail" programs are quite a bit smaller and less bloated with featuritis than the old one. Of course, this means that they don't have all the bells and whistles. But it means that there are a lot fewer places for obscure security holes. And since most people just install sendmail and run it, and never learn to config it, this works pretty well.

      In effect, "sendmail" is now just a description of a set of command-line options used in the rc and cron scripts. If a mail daemon implements these, it can be dropped in as a replacement for whatever "sendmail" is there, and it'll do the job required on your system.

      On several systems, I've replaced sendmail with a small (100-200 lines) perl script that mimics all the functionality in use there. This has given me a large number of geek points among non-perl-hackers. I just grin and say something like "That's trivial for a true perl guru." They don't have to know that it doesn't take a perl guru to do such a job.

      This does bring up a significant question about this news item. When they talk about a "sendmail flaw", which sendmail are they talking about? Presumably it only affects one of the N sendmails that are in use.

      Of course, one interpretation of the push to install a "patch" is that this purported patch is merely a way of getting one specific sendmail clone installed as widely as possible. I'd guess that this "patch" is not, say, a set of source diffs, but is a binary. When you install it, you are replacing your current sendmail with a completely different program. Since the article refers to the Sendmail Consortium, this "patch" is probably a version of the original sendmail. When you install it, you have reverted to a version of the old, bloated sendmail, which probably now has zillions of security holes waiting to be discovered.

      The fact that they don't tell us what the security flaw was or how to test for it is supporting evidence that this is what they're doing.

      • by jeremyp ( 130771 ) on Tuesday March 04, 2003 @02:01PM (#5434089) Homepage Journal
        If you look closely, you'll find that there are quite a number of completely different programs now that are called "sendmail".
        No there aren't. There is one program called sendmail that you can obtain from sendmail.org. It's an open source program that has suffered from source code forks in the past. But there is pretty much only one source tree that counts now.
        It has been widely understood that the original sendmail program was an overly-complex beast that tried to do everything for everyone, and was probably not fixable in any general sense.
        It hasn't been a serious security risk for at least five years. Yes, it's a complex piece of software, but providing the full functionality required of a modern SMTP MTA is a complex task.
        Because there has been so much software installed that knows how to talk to the original sendmail, it has been common to make new mailers present the same UI to the world. This way, a new mailer can just be dropped in as a replacement for sendmail, and everything works.
        Providing a sendmail-compatible command line interface does not make an MTA sendmail. Do not call other MTAs "sendmail" or the Sendmail Consortium lawyers may sue you. In fact, to be a true drop-in replacement, a program would have to understand the sendmail config file. Since most replacements have tried to get away from using the config file (aka programming language) used by sendmail, I'd be surprised if any of them could be described as a true drop-in.
        In effect, "sendmail" is now just a description of a set of command-line options used in the rc and cron scripts.
        No it isn't.
        If a mail daemon implements these, it can be dropped in as a replacement for whatever "sendmail" is there, and it'll do the job required on your system.
        Do you even know what the job of sendmail (or another MTA) is?
        On several systems, I've replaced sendmail with a small (100-200 lines) perl script that mimics all the functionality in use there. This has given me a large number of geek points among non-perl-hackers. I just grin and say something like "That's trivial for a true perl guru." They don't have to know that it doesn't take a perl guru to do such a job.
        I haven't seen your code, but I'm guessing you have just replaced the command line functionality that allows you to inject a text file as an SMTP message into port 25 of a real MTA. You probably haven't implemented proper queuing, background delivery, prioritisation, alias handling, masquerading, routing, TLS, SMTP AUTH, LDAP routing etc etc etc.
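The kind of minimal replacement being guessed at here, one that reads a message, pulls the recipients out of its own headers (`sendmail -t` behavior), and injects it into a real MTA on port 25, can be sketched in a few lines of Python. The function names are invented, and this ignores queuing, aliases, Bcc stripping, and everything else a real MTA does:

```python
import smtplib
from email import message_from_string
from email.utils import getaddresses

def extract_recipients(raw_message):
    """Mimic `sendmail -t`: collect envelope recipients from the
    To:, Cc: and Bcc: headers of the message itself."""
    msg = message_from_string(raw_message)
    addrs = []
    for header in ("To", "Cc", "Bcc"):
        for _, addr in getaddresses(msg.get_all(header, [])):
            if addr:
                addrs.append(addr)
    return addrs

def inject(raw_message, relay="localhost"):
    """Hand the message to a real MTA listening on port 25."""
    msg = message_from_string(raw_message)
    with smtplib.SMTP(relay) as smtp:
        smtp.sendmail(msg.get("From", ""),
                      extract_recipients(raw_message),
                      raw_message)
```

Which is roughly the grandparent's point: the command-line shim is the easy part, and all the hard MTA work still happens in whatever is listening on port 25.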
        This does bring up a significant question about this news item. When they talk about a "sendmail flaw", which sendmail are they talking about? Presumably it only effects one of the N sendmails that are in use.
        They are talking about sendmail. It apparently affects several releases of that package, see sendmail.org for more details.
        Of course, one interpretation of the push to install a "patch" is that this purported patch is merely a way of getting one specific sendmail clone installed as widely as possible. I'd guess that this "patch" is not, say, a set of source diffs, but is a binary. When you install it, you are replacing your current sendmail with a completely different program. Since the article refers to the Sendmail Consortium, this "patch" is probably a version of the original, sendmail. When you install it, you have reverted to a version of the old, bloated sendmail, which probably now has zillions of security holes waiting to be discovered.
        There are so many inaccurate statements in this paragraph that I almost don't know where to begin. The only true statement in it is: "Since the article refers to the Sendmail Consortium, this 'patch' is probably a version of the original sendmail." The article is only a news story about the way the flaw has been reported. If you want information on the patch, go to sendmail.org, where you will find a description of the problem, a patch in source diff format, and sendmail 8.12.8, which is the new release with the patch applied. Note that they only distribute it in source code format.

        Please get a clue before your next post.
  • by Ivan Raikov ( 521143 ) on Tuesday March 04, 2003 @09:28AM (#5432124) Homepage
    Speaking of the Dept. of Homeland Security, here's a link [democratic...ground.org] to an article with some suggestions to Tom Ridge on how to improve his department, so that it actually keeps the citizenry well-informed and aware of possible terrorist threats and how to handle them (as opposed to keeping them scared and in an information blackout).
    • by Black Parrot ( 19622 ) on Tuesday March 04, 2003 @09:50AM (#5432262)


      > Speaking of the Dept. of Homeland Security, here's a link [democratic...ground.org] to an article with some suggestions to Tom Ridge on how to improve his department, so that it actually keeps the citizenry well-informed and aware of possible terrorist threats and how to handle them (as opposed to keeping them scared and in an information blackout).

      You're making a mighty big assumption about what the DoHS was created for.

  • by joe630 ( 164135 ) <bensherman@PASCA ... m minus language> on Tuesday March 04, 2003 @09:30AM (#5432137) Homepage
    We all got notified to patch our systems immediately.

    Everyone is working together to get all the systems running sendmail patched.

    While this doesn't seem like a big deal in the corporate world, in the government world, all red tape has been removed and we can make changes to critical systems INSTANTLY.

    FIX FIRST, meet later. It's an entirely different attitude, and it allows me to do my job more efficiently. It works.
    • yes, all that red tape that the gov't has that private industry lacked. right. that's why one of the hr people at one private company i worked at got mad at us for modifying our cubes. that's a job for certified steelcase engineers.

      red tape exists everywhere. just ask people in banks or insurance companies.
    • That's nice. I can't even find out if this flaw is exploitable on my non-x86 platforms. ISS didn't bother to test non-x86 platforms. According to their release, "others" might be affected. But there is no information on how to test my systems for this vulnerability, so how can I tell if the patch is effective on my platform? It seems nobody but me is going to bother to check this. Is it now "In DHS and ISS We Trust"?

      No scanner, no tester, no exploit code, no help. Thanks ISS and DHS! I feel so much better with this new process.
    • by sckeener ( 137243 ) on Tuesday March 04, 2003 @12:31PM (#5433371)
      FIX FIRST, meet later. It's an entirely different attitude, and it allows me to do my job more efficiently. It works.

      Gosh the exact opposite of that reminds me of NASA in the early 90s. A problem would happen. We'd have a meeting about the problem only to realize we needed another meeting to discuss the problem. Between the meetings to discuss the problem, we'd have a meeting to discuss the format for the next meeting. Of course in each meeting various contracting companies would be represented. The problem was always the fault of either A) the person or contract company not present at any of the meetings (hence why they have so many meetings) or B) the person to the left while seating around a table.

      I never knew how the problems were solved. I never saw any solutions at the meetings. It's my belief that NASA has trained MICE doing the repairs for slices of cheese.
  • Homeland Security (Score:3, Interesting)

    by benjiboo ( 640195 ) on Tuesday March 04, 2003 @09:31AM (#5432144)
    Are homeland security responsible for any tech security, or does that fall under the realm of CIA/FBI? (Forgive me, I'm not from the US.)


    The reason I ask is that this type of co-operation between public defense organisations and the private sector is likely to become much more important as we come to rely more on these technologies, OR if we ever see any kind of cyber-terrorism. Ideally there would be a single point through which relevant information flows - as hinted at in the article, any leaks could be a problem.


    Do these agencies have a reputation for hiring good security people?

    • No, they have a reputation for recruiting good security people. I don't think they accept applications.
    • by mark_lybarger ( 199098 ) on Tuesday March 04, 2003 @12:28PM (#5433351)
      the homeland security is responsible for making us americans feel all warm and fuzzy inside that our government is doing something to protect its citizens on its soil.

      they're responsible for releasing alert warnings every so often. placing the country on a level 3 or orange alert, whatever that means, but it sure spikes the sales of bottled water, canned foods, batteries and duct tape for when the big bombs and chemical warfare come our way.

      to be honest this entire administration has been doing a complete knee-jerk reaction to the WTC and Pentagon events from 2001. they're molding those knee-jerk reactions into something they can use to bomb Iraq and overthrow Saddam because quite frankly there are some big roots in the big state of Texas where "all Your Oil are belong to us"

      here's my favorite quote from the following article:
      http://www.msnbc.com/news/872585.asp?0cl=c1

      That warning regarding tape and three days of water is profoundly helpful to people who are choosing to go to war with Iraq and need to cause an environment of fear in order that the public will do anything to break the fear fever. It serves the administration for the public to be so afraid. When you are afraid enough, you'll get on any train that's leaving the station, even if it is not going where you want to go. That sentence says it all.

  • So what's... (Score:2, Insightful)

    by jpmahala ( 181937 )
    NSA going to do with all of their newfound free time? According to the article:

    In the future, the Department of Homeland Security will be the U.S. agency that will manage any response to major cyberthreats.

    Will the DHS publish Security Recommendation Guides [nsa.gov] like the NSA?
  • Improved policy? (Score:5, Insightful)

    by Jeppe Salvesen ( 101622 ) on Tuesday March 04, 2003 @09:32AM (#5432155)
    Wouldn't it be best to issue a statement like "sendmail has an exploitable vulnerability, we recommend that you switch to your standby alternate mail system until we release a fix"? There is no way that blackhats would figure out where to look from a statement like that, and those of us with really good security could switch to our exim-based solution if we really feared to be hacked. Basically, do we trust the homeland security dept to determine our security policy?

    That being said, good to see a well coordinated patch release. I just wish the paranoids would get advance warning.
    • by eyeball ( 17206 )
      could switch to our exim-based solution if we really feared to be hacked

      Oh, yeah. I run a small ISP that does about 1.6 million messages / day. Other siblings of my department do 10 times that. If I tried implementing a safer stand-by system, I would be laughed right out of a job. Not to mention the safer backup systems for everything else -- web serving, news, authentication, online tools, etc..

    • ...why the hell aren't you using it in the first place?
  • by bigberk ( 547360 ) <bigberk@users.pc9.org> on Tuesday March 04, 2003 @09:36AM (#5432176)
    Is the U.S. Department of Homeland Security also going to try and take care of software developed internationally?

    For example, it seems that a lot of OpenSSH [openssh.org] development is done in Canada and Germany. And the server is run out of Canada.

    The OpenSSL [openssl.org] team looks primarily international too (UK, Germany, Sweden, New Zealand). Their server is managed by Brits and Swedes.

    Actually... I think you'll find that a lot of crypto software is based outside the US. Probably due to constraints placed on crypto development in the last decade.
  • bugs (Score:2, Troll)

    by mschoolbus ( 627182 )
    I have heard that sendmail is the most complicated program ever developed; is this true in any way? Sendmail can do a lot, and there are frequent security issues, most of which get fixed very quickly, but it has to be better than Exchange, doesn't it?
    • No, that's mozilla. One of the attributed reasons for the failure of mozilla is that it's widely called spaghetti code. Yes, mozilla is a great browser and it works, but it's too little too late, and apparently its codebase is hell to figure out.
  • by jc42 ( 318812 ) on Tuesday March 04, 2003 @09:41AM (#5432204) Homepage Journal
    The article says:

    A critical flaw in Sendmail, the Internet's most popular e-mail server, ...

    But I've been reading all these claims that Outlook handles 99% of all email.

    Which of these claims is a lie?

    (Is it possible that they're both lies?)

  • by mcgroarty ( 633843 ) <brian DOT mcgroarty AT gmail DOT com> on Tuesday March 04, 2003 @09:42AM (#5432208) Homepage
    If I've got a vulnerable service running on one of my systems, I'd rather know about it right away so I can make the decision as to whether I want to keep it running or temporarily deploy an alternate service.

    I liked the handling of ssh's problems last year much better. "Heads up, there's a problem in these versions. We'll let you know exactly what after we get the patch out." It's not enough to give a hacker a reasonable leg up, but it gets the service off the network should anyone already be quietly taking advantage of the weakness.

  • by arvindn ( 542080 ) on Tuesday March 04, 2003 @09:42AM (#5432209) Homepage Journal
    The article reads like a showcase of the open-source security model. Basically, Sendmail Inc. made a patch available before news of the vulnerability leaked and exploits could be created. Classic case of the good guys spotting the bug before the bad ones.

    Quote:

    "Working with the private sector, we alerted key owners of the vulnerable software and got them talking," said David Wray, spokesman for the IAIP Directorate. "We think this is a great example of how this should, and does, work."

    The Department of Homeland Security got high marks from the security community for giving companies the necessary time to create the patch and for synchronizing its release.

    "This is the model for what you do if you want to find a vulnerability," said Alan Paller, director of research for the SysAdmin, Audit, Network and Security (SANS) Institute.

    • Classic case of the good guys spotting the bug before the bad ones.

      No, this is a classic case of why this myth keeps getting passed on by the masses. Simply put, how do you know the bad guys didn't spot this a long time ago? You're assuming the bad guys will put out a big press release saying "We found a big bug in sendmail and we're exploiting it!"

      That is definitely not how it works, and it's not even logically consistent. Absence of evidence is not evidence of absence. "Bad guys" can and have kept their exploits to themselves in the past. We know this for a fact. So why should this case be any different? It's not.

      Why would anyone that has owned your servers tell you that they owned them, unless they didn't want to own those boxes in the future? If you're a "bad guy" and you figured out a nifty way to own 75% of all the mail servers out there, why would you be so stupid as to tell everyone?

      In short, if you think you're safe because the "good guys" found it "first", because the "bad guys" didn't put up a big notice that they found a flaw in your software, you're doomed. Software is flawed: it was written by people, for goodness sake. It is very difficult to write "secure software", so you must assume that the software you use is filled with holes and that someone, somewhere, has figured out how to exploit one or more of them.

      Real computer risk management is about acknowledging that fact. There are vulnerabilities that you and the good guys do not know about.

      The solution to computer security is not more obscurity; it's about building your risk management model around reality. Your software has holes, your employees can not be trusted, life is dangerous: there be dragons out there.

  • ISS - proven shills (Score:5, Interesting)

    by Anonymous Coward on Tuesday March 04, 2003 @09:44AM (#5432224)
    Once again, ISS have let the community down. Instead of informing the vendors, or CERT, or even just posting to Bugtraq, they informed the USG first. As a result, .mil sites had the patch four days before anyone else (so far as we know) was even aware that there was an issue. [Although they claim that they checked their private "sensor" networks, somehow I doubt they have better coverage than e.g. DShield.org.] This is unacceptable behaviour for an info-sec company that wants to be a responsible member of the community, and of course it is just the latest in a list of behaviour that I at least consider unethical. I work for an ISS reseller outside the USA, and I will be exercising my influence internally to push for replacing the ISS products either with Free alternatives or with proprietary products from companies with a better grasp of their responsibilities. BTW, we have several very big global clients.
    • You are right, but that is not the most scary effect of the so-called "Homeland Security." Imagine: company X finds some major bug in a widespread security-relevant application and informs first(!) the US gov., so the US "cyber warfare" units get a 2-month head start exploiting servers around the world.

      nice eh?

      -- greetings from _OLD_ europe
    • Is that your corporate network's security is more important than the national security of the US?

      • by dmaxwell ( 43234 ) on Tuesday March 04, 2003 @12:16PM (#5433262)
        Which part of "outside the USA" did you miss? That's EXACTLY what he is telling you. This does not serve US interests. Crypto development has already been pushed outside the country. This sort of behavior could push most security work outside as well. The rest of the world isn't going to run their networks three-sheets-to-the-wind just so Tom Ridge can get his warm fuzzies.

        Nobody outside the US is going to place their security below that of the US. Yet everybody, US included, runs the same software. This means something has to give and if the issue is forced then yet another chunk of the industry leaves the country. How is this good?

        It's already started. Many developers won't visit the US because they discuss vulnerabilities "that could circumvent a copyright protection". Hello! They have to do that to fix problems. Pentagon-style paranoia could be much worse than the DMCA. This industry is hurting as it is. We don't need more government-imposed problems.
  • Sounds nice but... (Score:5, Insightful)

    by captaineo ( 87164 ) on Tuesday March 04, 2003 @09:45AM (#5432228)
    It sounds cool to have the US govt leaning on vendors to write patches, but I have a feeling that if this becomes the norm, vendors will just push DHS for longer and longer lead times. The article indicates this particular bug was known since January. Two months is a pretty long time to wait for patches!

    And this is just DHS's "first test" - I imagine after they build up a cozy relationship with the major security-problem vendors (i.e. Microsoft), they might not even disclose any known flaws until patches come out (i.e. months to "never").

    Remember that government officials will probably listen a lot more attentively to "captains of industry" (i.e. MS) than "those unwashed hippy hackers" (i.e. the open-source community).
    • by pjrc ( 134994 ) <paul@pjrc.com> on Tuesday March 04, 2003 @02:30PM (#5434418) Homepage Journal
      What's really cool is that they're leaning on admins to actually install the patch quickly.

      Sure, it sucks to be "left in the dark" while vendors slowly come up with patches. Sure, you'd like the vendor's "feet held to the fire" to write, test and release the patch as quickly as possible. If that's painful for them, well, they damn well deserve it, since they wrote the bug in the first place. Or at least that's how it feels to you and me, small-time admins (at least me) who find out when the patch is released weeks or even months (2 in this case) after the initial discovery. It's easy to feel this way.

      But historically, the biggest problem has not been the timeliness of releasing patches. The REAL problem has been that most admins/users do not install the patch until _after_ an attack has begun.

      Patches not getting applied is by far the largest problem. It dwarfs the problem of several weeks elapsing between initial discovery, patch availability and public announcement (where the "problem" is that some black-hats might have known for some time and might have been quietly exploiting systems for a long time).

      Sure, it rubs you and me the wrong way and might even hurt our feelings a bit that we were kept in the dark for 2 months. Yeah, it sucks that our servers were on-line and open to attack all that time (and long before initial discovery by ISS). But get over it.

      In the larger picture, what has always mattered much more is getting all or most systems patched. That has historically been a giant problem. Admins don't patch, for one reason or another. Some are overworked, a few might be lazy, many don't find out about the patch, and in a great many cases the admin isn't authorized to make "unnecessary" changes, or would be risking his job patching a critical system before upper management felt it was urgent.

      In the past, only a widespread attack has given most admins that sense of urgency to apply the patch. That sucks.

      The DHS using its clout to provide that sense of urgency to apply the patch before an attack begins is a good thing. To the extent they pull this off (it's still too early to judge), they'll have gone a long way towards solving the largest computer security problem.

      So whine all you like about being left in the dark. Mod me down for going against the flow here on slashdot. Complain about the extremely unlikely chance that some black-hat knew before ISS and was quietly and undetectably exploiting the bug. But don't try to deny that by far, by at least an order of magnitude, the largest problem has been a widespread failure to apply released patches until after a highly successful and widespread attack.

      To the extent the DHS puts pressure on admins to install this patch before an attack, they will have made a huge improvement in overall security. The several weeks from initial discovery until patch availability and security advisory just isn't significant in comparison.

  • That's It! (Score:3, Interesting)

    by eyeball ( 17206 ) on Tuesday March 04, 2003 @09:45AM (#5432230) Journal
    That's it. I'm quitting the profession as soon as I can find something that pays just enough.

    This is the beginning of the end. It's not hard to imagine an "Office of System Software Security Review" or some other government group of 'experts' that mandates all software go through their security analysis. I'm sorry. I have enough trouble explaining my code and system architecture to corporate 'security experts' (the types that don't understand TLS/SSL or SSH, and insist that we use tcp_wrappers enabled tftp since it doesn't use plain-text passwords going over the network!).

    So the big question is, what do I do with my life now? Maybe open a Subway sandwich shop. Any other suggestions?

  • by Hanashi ( 93356 ) on Tuesday March 04, 2003 @09:50AM (#5432263) Homepage
    IMHO, this was the best-managed vulnerability disclosure in recent years. I read the release pretty early on, and vendor patches were already available! Wow!

    Although there have been a few grumblings, it looks like there are a lot of others who feel the same way I do: it's perfectly OK to have a short lag time between vulnerability discovery and disclosure, as long as the Baddies don't start taking advantage of the situation before the patches are available. In this case, I read that the lag time was about 2 weeks, which seems perfectly reasonable.

    Kudos to all involved!

  • by Anonymous Coward on Tuesday March 04, 2003 @09:51AM (#5432266)
    Does anybody else find it disturbing that "good security" is being equated with "keeping exploits quiet"?

    It's precisely the threat of publicity that pressures vendors into patching their compromised software quickly. If that threat is relieved, by Official KeepYerDamnMouthShut Orders from a government body, those same vendors may start to think "Phew, now we can wait for the next upgrade".

    This is Not a Good Thing.
    • Crypto development had to be moved out of the US. If necessary, I suppose it can done for security disclosure as well. After all, our government would NEVER place a vendor's interests above those of consumers.
  • by perly-king-69 ( 580000 ) on Tuesday March 04, 2003 @09:53AM (#5432284)
    So what happens when a Finnish hacker finds a vuln in MS IE...should they tell a foreign government first? What about a French hacker? Or an Iraqi hacker? These problems now transcend national government interests.
  • by giberti ( 110903 ) on Tuesday March 04, 2003 @09:54AM (#5432286) Homepage

    I think it's interesting that the government is getting credit for working with the private sector in releasing information. Part of the point of open-sourced software is that bugs can be found and patched quickly. The CERT email I got yesterday afternoon had MANY patch sources listed by vendor (RedHat, Apple, Sendmail etc) and was timely. I don't believe that the pat on the back goes to Uncle Sam in this situation, but rather to the folks at Sendmail who worked to resolve this issue in a timely and organized fashion. They released the information to those who needed to know (including the DHS) and worked on a solution to get this stuff out to the public.

    To quote Eric Raymond, "Given enough eyeballs, all bugs are shallow"

    Kudos to Sendmail for getting this taken care of.

    • OK, sorry for a minor flame, but did you read the article?

      First, notice that they give credit to ISS and Sendmail.

      The agency's Directorate of Information Analysis and Infrastructure Protection (IAIP) worked with security company Internet Security Systems, which discovered the flaw, and Sendmail Inc. to create a patch while keeping news of the issue from leaking to those who might exploit the vulnerability.
      Then they discuss that they alerted key owners and facilitated communication.
      "Working with the private sector, we alerted key owners of the vulnerable software and got them talking," said David Wray, spokesman for the IAIP Directorate. "We think this is a great example of how this should, and does, work."
      Sendmail *themselves* noted that the coordination of the government helped...
      "They were a good resource in helping us make sure that the protection was put in place," Greg Olson, chairman and co-founder of Sendmail Inc., said of the response staff at NIPC, now with the directorate. "You need to contact a lot of people and make sure they understand this is important and (make sure they) apply the patch." Sendmail Inc. develops a proprietary version of the mail server.
      Bottom line, yes Sendmail gets kudos. But so does the government for being the coordinator of the entire effort. I'm not a big fan of this department of homeland defense, but in this case their agency did a nice job, and it deserves the mention it is getting.
    • The part you are missing is that it's the government's JOB to secure national infrastructure. While it's great that the private sector also took up the call, there is a DEFINITE NEED for authoritative governmental coordination of things like patches. SQL Slammer propagated because sysadmins didn't patch, so the old system is obviously FLAWED. Perhaps government participation will convince lazy sysadmins that problems are real.
  • This is a nice, photogenic, easy dry run. Bully for DHS. But are they ready to get their hands really dirty and take on Microsoft? Patching Sendmail is easy - the OSS community wants to help, Sendmail themselves want to help. But somehow I think Microsoft is going to be a little tougher.
  • In the future, the Department of Homeland Security will be the U.S. agency that will manage any response to major cyberthreats.

    I hope these guys have Microsoft's number on speed dial...
  • by KeithH ( 15061 ) on Tuesday March 04, 2003 @10:08AM (#5432357)
    The article states:
    Internet Security Systems originally reported the flaw to the NIPC in mid-January. The agency helped notify other companies...

    I'm curious to know whether the NIPC notified non-commercial interests such as the Debian organization. Also, did they notify any non-US-based distributions such as SuSE?

    It is not clear to me that the NIPC is anything more than a bureaucratic clearing house and censor. I suspect that the security community that is referred to as giving high marks includes only the commercial side of the industry. I'll bet that Mr. Lemos could get a meatier article out of investigating some of these questions.

  • Maintain Obscurity!! (Score:5, Interesting)

    by tacocat ( 527354 ) <tallison1@@@twmi...rr...com> on Tuesday March 04, 2003 @10:21AM (#5432446)

    The one thing I didn't like about this article was the idea that this kind of process should be followed by everyone. This is what I saw as the process:

    1. Find a bug
    2. Tell only the owner.
    3. Keep it a secret until the owner comes back with a fix
    4. Now go tell everybody about the bug and the fix at the same time

    Here's the flaw(s) in this process:

    1. There is no interim action. While you wait for me to fix the bug, everyone in the world is vulnerable without the option of shutting down that service or taking additional safeguards against the bug. This could be days to months of insecurity. What makes you think DHS is always going to be the first to discover an exploit?
    2. I don't see how a Government Department is going to succeed where Public Voice has failed.
      • Microsoft has some huge security flaws in their browser that they have admitted will not be fixed in the near future. This is public knowledge. Public Voice has failed
      • Microsoft, as another example, has managed to avoid doing a lot of things it's supposed to by litigation. This can cause great delays in progressing a security notification.
      • Past practice by some companies has been to sue the disclosers of bugs and seek a gag order. How will this be different? The government gets sued (and bought) all the time
    3. How is this process going to be handled when there is no Company supporting the code? I'm uncertain that this will be supportive in the OpenSource Model.

    I guess the biggest thing that I don't like about this is that idea that this model will support the Closed Source software model because of the arguments of:

    • What you can't see won't hurt you.
    • There's a great big company to yell at.
    • We (Govt and Corp) can talk in private. You open sources are all a bunch of security risks
    • If anybody tells of a bug early, they must be a terrorist.
    • I agree, ignorance isn't always best, but here it worked. A few things about this "bug"

      It was old - years old - and, to anyone's knowledge, never used as an exploit.

      It was found by a white hat - so this isn't a case of "the criminals having all the guns."

      Therefore, what are the chances that, though no one found the bug in five years, both a black hat and a white hat would find the same exploit within 2 months of each other? Pretty much nil.

      As usual, the chances of an exploit coming out are higher if it is disclosed. So, in terms of a damage perspective, we have to compare two things: a greater chance of attack if disclosed, or greater damage per attack if not disclosed, from people not being prepared.

      In this case, since the chance of double discovery of this bug was VERY low, the total damage would have been greater if it had been disclosed, giving black hats a head start. So I agree with what they did, and given the scope of the project (patching all flavors of sendmail), two months ain't all that bad.

      Ultimately, the government doesn't really care about any RMS-style "info wants to be free" crap. They just want the fewest exploited boxes possible. In this case, their actions were pretty well correct. I don't think this will always be the correct action, so we'll have to watch them on other issues, including how they interact with OSS groups, should the need arise.

  • who wrote a story where it was illegal to have a keyboard without a licence since keyboards were only used by hard-core programmers (everyone else used voice) and anyone that wanted to program without the government knowing about it must be some kind of cyber-terrorist.

    I thought it was a bit silly at the time (~10 years ago) but I'm starting to wonder.

    TWW

  • by netwiz ( 33291 ) on Tuesday March 04, 2003 @10:46AM (#5432583) Homepage
    How exactly is this helping? Control the information flow? How is it then, that links to, and a discussion of, the flaw and possible exploits were publicly available six hours ago on this very website? I wouldn't exactly call a discussion thread on one of the world's largest weblogs "controlling the flow of information."

    This is about the level of competency I've come to expect from Large Government Entities.
  • by anthony_dipierro ( 543308 ) on Tuesday March 04, 2003 @10:51AM (#5432625) Journal
    to make sure the DoHS hasn't gotten Sendmail Inc. to insert any "additional [homeland] security patches" into the build?
  • to create a patch while keeping news of the issue from leaking to those who might exploit the vulnerability.

    The debian version of the patch wasn't available yesterday. The whole point of delaying the announcement is to get the fix out there ahead of the knowledge of the vulnerability. I'd say their system for "working with vendors" needs some work.

    And what exactly is the knowledge dissemination path here? This time the mass media spread knowledge far and wide that attention was needed. They'll get bored after a couple more of these and stop prominently reporting it. How does homeland security plan to get the message out then?
  • with the US Government being involved in this. I don't feel that it is their job to determine when I should hear about a security vulnerability. Plus, I feel that this gives the US Government an unfair advantage over citizens and foreign governments that might be using the affected software.
  • qmail anyone? (Score:2, Insightful)

    by Chupa ( 17993 )
    Let's see...a search for advisories on Security Focus with "sendmail" = 100 hits. qmail gives 1 hit, and it isn't even for qmail, it's for "masqmail".

    It's time for the sendmail people to start from scratch. You can keep patching all you want (and apparently take two months to do it), but if your initial security design model is flawed, you are going to keep finding holes.
  • Dropping the 'terrorism' buzzword again, I see. There is no such thing as 'cyber' terrorism. Even taking out the whole damned Internet does not equal the TERROR of torturing and killing even ONE human being.

    You want to accuse someone of 'cyberterrorism?' How about the RIAA, the MPAA, or those who passed the DMCA?

    Yes, the handling of this vulnerability was a good joint effort between ISS and the DHS. No, it wasn't anything spectacular. Maybe the DHS will be able to put pressure on our favorite monopoly to 'unenable' some of their terribly insecure features.

  • by Iamthefallen ( 523816 ) <Gmail name: Iamthefallen> on Tuesday March 04, 2003 @11:48AM (#5433061) Homepage Journal

    This isn't one of those "all our freedom and rights are being removed by the evil government" type posts. But yet...

    In this case DHS seems to have done a good thing: coordinated the patching and disclosure between different vendors. Now, for me it isn't a stretch to ask the question, what if someone had announced while DHS was still working on it? What if it is a truly critical bug or hole, say a wide-open root-enabling flaw in SSH, Samba or some other service that's very common (for the geeks that can't take that as an example without saying that they should never be run as root bla bla bla, please just move on, I'm trying to make a point here, and it's not about best security practices).

    Say such a security hole of a great magnitude is discovered, and someone announces it publicly on a mailing list. Or say vendor A wants to release the patch immediately, but vendor B wants to test for another week. Vendor A goes ahead and releases it without DHS approval.
    In either case, will DHS see it as a risk to homeland security and a prosecutable offense? Is software security now suddenly a matter that the government should oversee? How far does their involvement stretch? Will security discussions require a DHS representative or approval to avoid premature disclosures that could be a threat to homeland security?

    I really don't wanna sound alarmist here, but I'm not sure the government getting involved in things like this is a great idea. Software bugs or flaws can be a real threat to a nation, and so DHS should perhaps be involved. But again, I can't help but wonder, where will that take us and where will that involvement stop.

  • by lpontiac ( 173839 ) on Tuesday March 04, 2003 @11:56AM (#5433121)

    Think about it, the Department of Homeland Security (and by proxy, the entire US Government) is getting a heads up on potential exploits.

    The US spies on its allies [washingtonpost.com]. If you're the Germans, then the NSA are the blackhats. Nobody but the US government themselves should feel more comfortable knowing that they're being informed first.

  • This has got to be the most expensive form of BugTraq I've ever seen. I can hardly wait till they try it with a Windows bug and Microsoft tries to bargain with them...
  • by iabervon ( 1971 ) on Tuesday March 04, 2003 @12:33PM (#5433388) Homepage Journal
    In order for this to be exploitable, the compiler has to arrange the data segment such that there is a structure containing pointers shortly after the buffer that can be overrun. As it turns out, most builds of sendmail, including all of the Red Hat precompiled binaries tested and all of the commercial UNIX ones tested, are not directly exploitable (that is, it might be possible to get them to misbehave somehow, but not to crash in any predictable way). The exploits also depend on knowing what structure you've hit, which is only possible if you have access to the particular binary, and the exploits will only work for a particular binary.

    So this is not a good candidate for a worm or automated exploit, and only useful for a direct attack if you happen to be relatively unlucky and the attacker knows it.
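    The layout dependence the parent describes can be sketched in C. This is a deliberately simplified illustration, not sendmail's actual code: the struct, the 16-byte buffer, and the handler names are all invented for the example, and packing buffer and pointer into one struct forces the adjacency that, in a real binary, the compiler and linker may or may not produce.

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <stddef.h>

    /* Exploitability hinges on a pointer sitting in memory right after
       an overrunnable buffer. One struct guarantees that adjacency for
       the demo; in a real build the data-segment layout is up to the
       compiler/linker, which is why most tested sendmail binaries were
       not directly exploitable. */
    struct layout {
        char buf[16];            /* the overrunnable buffer */
        void (*handler)(void);   /* pointer an attacker wants to clobber */
    };

    static void normal(void)   { puts("normal handler"); }
    static void hijacked(void) { puts("hijacked!"); }

    int main(void) {
        struct layout l;
        l.handler = normal;

        /* Build an oversized "input": filler bytes, then a replacement
           pointer at exactly the handler's offset. Knowing that offset
           requires knowing the particular binary, as the parent notes. */
        unsigned char payload[sizeof(struct layout)];
        void (*evil)(void) = hijacked;
        memset(payload, 'A', sizeof payload);
        memcpy(payload + offsetof(struct layout, handler), &evil, sizeof evil);

        /* The "overflow": copying more than buf can hold overwrites
           whatever happens to follow it in memory. */
        memcpy(&l, payload, sizeof payload);

        l.handler();             /* now calls hijacked() */
        return 0;
    }
    ```

    If the compiler had placed unrelated data, or nothing writable, after the buffer, the same oversized copy would only smash filler; that is the sense in which a build can be overrunnable yet not predictably exploitable.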
  • Differing Agendas (Score:3, Interesting)

    by ColaMan ( 37550 ) on Tuesday March 04, 2003 @05:07PM (#5435953) Journal
    As this has been mentioned a little bit in other peoples posts, I'll ask the question too :

    Why should I (an Australian) have to rely on the "Department of Homeland Security" of another country for information regarding a sendmail patch?
    What if someone found a root exploit affecting 75% of say, Iraq's servers and reported it to the "Department of Homeland Security"?
    I wonder how long it would take for them to issue a release about that one? As far as I'm concerned, the body that looks after this sort of thing should be international and not have any majority government control, as otherwise they start acting in their own interests, and not the greater interests of the other technically competent people on the planet.

    (And "Department of Homeland Security" always has a weird, 1984-ish sound to me, hence the quotes)
