Heartbleed Sparks 'Responsible' Disclosure Debate

bennyboy64 writes: "IT security industry experts are beginning to turn on Google and OpenSSL, questioning whether the Heartbleed bug was disclosed 'responsibly.' A number of selective leaks to Facebook, Akamai, and CloudFlare occurred prior to disclosure on April 7, and a separate, informal pre-notification program run by Red Hat on behalf of OpenSSL alerted some Linux and Unix operating system distributions. But router and VPN appliance makers Cisco and Juniper had no heads-up, nor did large web entities such as Amazon Web Services, Twitter, Yahoo, Tumblr, and GoDaddy, to name a few. The Sydney Morning Herald has spoken to many people who think Google should have told OpenSSL as soon as it uncovered the critical OpenSSL bug in March, rather than waiting until April 1. The National Cyber Security Centre Finland (NCSC-FI), whose own report of the bug to OpenSSL on April 7 (after Google's) spurred the rushed public disclosure, also thinks it was handled incorrectly. Jussi Eronen of NCSC-FI said Heartbleed should have remained a secret, shared only in security circles, when OpenSSL received the second bug report, which NCSC-FI was passing on from security testing firm Codenomicon. 'This would have minimized the exposure to the vulnerability for end users,' Mr. Eronen said, adding that 'many websites would already have patched' by the time it was made public if this procedure had been followed."
  • No Good Solution. (Score:5, Insightful)

    by jythie ( 914043 ) on Friday April 18, 2014 @07:54AM (#46786737)
    This really strikes me as the type of problem that will never have a good solution. There will always be competing interests and some of them will be mutually exclusive while still being valid concerns.
    • by gweihir ( 88907 ) on Friday April 18, 2014 @08:03AM (#46786777)

      Indeed. But there is a _standard_ solution. Doing it in various ways is far worse than picking the one accepted bad solution.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      There is no "right" here; things have already gone bad, so you've just got a lot of wrongs to choose from. So my opinions on disclosure are informed by risk minimization. Or to borrow a term, "harm reduction."

      The order people were informed about Heartbleed smells more like a matter of "it's about who you know" than getting the problem fixed. If OpenSSL isn't at or real close to the top of the list of people you contact the first day, you're either actively working against an orderly fix or don't trust the OpenSSL folks…

      • This was handled similarly to a flaw I discovered, and I think it makes sense. Facebook, for example, has about a billion users. If you have a colleague you trust at Facebook, informing that one colleague can protect Facebook's billion users.

        The risk of a leak before a fix is widely deployed depends on a) the number of people you inform and b) how trustworthy those people are to keep quiet for a couple of days. It's quite reasonable to minimize the risk of a leak by keeping it low profile for a few…

        • Nobody was harmed by hearing about it on Tuesday rather than on Monday

          Isn't that assumption where the whole argument for notifying selected parties in advance breaks down?

          If you notify OpenSSL, and they push a patch out in the normal way, then anyone on the appropriate security mailing list has the chance to apply that patch immediately. Realistically, particularly for smaller organisations, it will often be applied when their distro's mirrors pick it up, but that was typically within a couple of hours for Heartbleed, as the security and backporting guys did a great job at ba…

          • by raymorris ( 2726007 ) on Friday April 18, 2014 @12:14PM (#46788677) Journal

            > they had a whole day to attack everyone who wasn't blessed with the early knowledge, instead of a couple of hours

            Years, not hours. Assuming the bad guys knew about it, they had two YEARS to attack people. If we told people that there was an issue on Monday, that doesn't protect them - they just know that they're vulnerable. They couldn't do anything about it until the update packages were available on Tuesday.

            On the other hand, had we made it public on Monday, we would have GUARANTEED that lots of bad guys knew about it, during a period in which everyone was vulnerable.

            I'm talking about what we did here. It appears to me that Google definitely screwed up by not telling the right people on the OpenSSL team much sooner. (Apparently they told _someone_ involved with OpenSSL right away, but not the right someone.)

            > you protect some large sites, but those large sites are run by large groups of people. For one thing, they probably have full time security staff who will get the notification as soon as it's published, understand its significance, and act on it immediately.

            ROTFL. Yep, large corporate bureaucracies, they ALWAYS do exactly the right thing, in a matter of hours.

            • You're latching onto this specific case, perhaps because you have some connection to it, but I'm talking about the general principle here. In general, it is not unreasonable to assume that if a vulnerability has been found by two parties in rapid succession, there may be a common factor involved, which may mean that other parties will also find it in the same time frame, and that an extra day may therefore be very significant.

              Obviously most serious security bugs don't sit there for years, then have two groups…

          • > Isn't that assumption where the whole argument for notifying selected parties in advance breaks down? ...
            > it will often be applied when their distro's mirrors pick it up, but that was typically within a couple of hours for Heartbleed

            How do you think those packages get on the mirrors? Do their servers magically patch the code, rebuild the packages, and set it as a high priority update? The fix gets on the mirrors as a result of "notifying selected parties in advance".

            • I think there is a qualitative difference between notifying large end users like Facebook in advance, and notifying people in the distribution system for a general release. It's the former that inherently means the people who aren't large end users with privileged access get left exposed for longer than necessary, and that's what I'm objecting to.

        • by sabri ( 584428 )

          This was handled similarly to a flaw I discovered, and I think it makes sense. Facebook, for example, has about a billion users. If you have a colleague you trust at Facebook, informing that one colleague can protect Facebook's billion users.

          Ah yes, the duckface pictures of a bunch of teens are way more important than, let's say, millions of tax returns.

          • There is no option that's going to protect those tax returns. Telling the bad guys about it will certainly endanger the tax return data, though.
            Since many (most?) people use the same or similar password for Facebook as they use for their tax service, protecting Facebook traffic actually protects a few tax returns.

            What clearly isn't an effective option would be to announce the vulnerability to hundreds of tax-preparer sites before an updated package is available, expecting them to manually (and correctly) patch…

    • Blame Game. (Score:5, Insightful)

      by jellomizer ( 103300 ) on Friday April 18, 2014 @09:08AM (#46787111)

      That is the biggest problem. Other than rewarding the people who fix the problem, we try to figure out who is to blame for every freaking thing.

      Oh look, a flood hit the city unexpectedly; well, let's blame the mayor for not thinking about this unexpected incident.

      Or a random guy blew up something; why didn't the CIA/NSA/FBI know that he was doing this...

      We spend too much time trying to pin blame on things, and less time trying to solve the problem.

      • That is the biggest problem. Other than rewarding the people who fix the problem, we try to figure out who is to blame for every freaking thing.

        "Fix the problem, not the blame."
        Rising Sun (1993) - Capt. John Connor (Sean Connery)

      • Or a random guy blew up something, why didn't the CIA/NSA/FBI know that he was doing this...

        If those organizations are going to continue receiving more money, more privilege, and less oversight in the name of protecting us from terrorists, then they deserve blame when they have nothing to show for what they have taken from us.

    • by sl149q ( 1537343 )

      The answer is actually simple. Once you have determined that there is an (to quote Bruce Schneier) 11 out of 10 security problem you need to get the servers turned off. Everywhere.

      If the FBI or Interpol or Bruce Schneier basically said "There is a serious exploit in OpenSSL, you (as in every organization running it) need to shut down every server now, we will provide the details and fix in 48 hours."

      Yes, the bad guys will now know that OpenSSL has an exploit. But they won't exactly know where to start looking…

  • WTF? (Score:5, Insightful)

    by gweihir ( 88907 ) on Friday April 18, 2014 @07:58AM (#46786757)

    The only possible way is to disclose to the responsible manufacturer (OpenSSL) and nobody else first, then, after a delay given to the manufacturer to fix the issue, disclose to everybody. Nothing else works. All disclosures to others have a high risk of leaking. (The one to the manufacturer also has a risk of leaking, but that cannot be avoided.)

    The other thing is that as soon as a patch is out, the problem needs to be disclosed immediately by the manufacturer to everybody (just saying "fixed critical security bug" is fine), as the black-hats watch patches and will start exploiting very soon after.

    All this is well known. Why is this even being discussed? Are people so terminally stupid that they need to tell some "buddies"? Nobody who gives out advance warnings to anybody besides the manufacturer deserves to be in the security industry, as they either do not get it at all or do not care about security in the first place.

    • Re:WTF? (Score:5, Interesting)

      by Tom ( 822 ) on Friday April 18, 2014 @08:22AM (#46786881) Homepage Journal

      The only possible way is to disclose to the responsible manufacturer (OpenSSL) and nobody else first, then, after a delay given to the manufacturer to fix the issue, disclose to everybody. Nothing else works. All disclosures to others have a high risk of leaking. (The one to the manufacturer also has a risk of leaking, but that cannot be avoided.)

      It's not about leaking. The reason I'm not alone in the security community in raging against this "responsible disclosure" bullshit is not that we fear leaks, but that we know most of the exploits are already in the wild by the time someone on the whitehat side discovers them.

      Every day you delay the public announcements is another day that servers are being broken into.

      • Re:WTF? (Score:4, Insightful)

        by Anonymous Coward on Friday April 18, 2014 @08:32AM (#46786933)

        If no fix is available yet, they're still being broken into - but you've just added the thousands of hackers who *didn't* know about it to the list of those exploiting it.

        • Exactly this.
          • Couldn't sysadmins disable the heartbeat feature as a preventive measure while the patch was prepared? Please note that I'm rather ignorant of all the things involved, but AFAIK the feature in question in this very recent case was not critical and could be disabled with minimal damage to the functioning of the service.

            I agree with you, though, that the developers should be informed of it first. But I also think that it depends on the issue. If you tell me that feature X in software A has a security issue…

            • by gweihir ( 88907 )

              Not really. Disabling the feature took changing the sources manually and rebuilding OpenSSL, something most sysadmins cannot do or cannot do fast.

              I think the main problem with the flavor of responsible disclosure that part of the security community is raging against is that it allows the developers to say how long they need, and that has been abused. But giving them zero time is just malicious.

        • Re:WTF? (Score:5, Interesting)

          by medv4380 ( 1604309 ) on Friday April 18, 2014 @10:20AM (#46787649)
          Not to sound like too much of a conspiracy nut, but Heartbleed did look like a deliberate exploit to some people, and still does to others. If it had been, and had been put there by someone at OpenSSL, they are the last ones you actually want to inform until you have already patched it yourself. From the timeline, that's what Google did, and then tapped the shoulders of their closest friends so they could either patch it or disable the heartbeat feature as CloudFlare did. I agree that OpenSSL should have been informed first, but what do you do when you suspect the proper channels are the ones who put it there in the first place?
        • by Tom ( 822 )

          Yes, this argument has been made a million times, and it doesn't prove anything, because it rests on so many assumptions that may or may not be true that its total truth value is about as good as tossing a coin.

          The two most important:

          First, you assume that the official patch is the only thing that can be done. In many, many cases there are other (temporary) measures that can be taken to mitigate a problem or limit its impact. Who are you to decide for everyone on the planet with their different needs and scenarios…

          • by gweihir ( 88907 )

            I am not talking about giving the manufacturer a lot of time. But if the bug is already being exploited in the wild, chances are it has been for a while, so a few more days matter little. However, quite often nothing can be done before a patch is available, and then a too-early public disclosure does a lot more harm than good.

            • by Tom ( 822 )

              You are right on those.

              Except for the "nothing can be done" part. That's not your judgement call to make. There is always at least one option - pulling the power plug - and it might well be a feasible temporary solution for some of the people affected.

      • It's not about leaking. The reason I'm not alone in the security community in raging against this "responsible disclosure" bullshit is not that we fear leaks, but that we know most of the exploits are already in the wild by the time someone on the whitehat side discovers them.

        Every day you delay the public announcements is another day that servers are being broken into.

        So are you going to take your server offline until there is a patch? Or are you going to write a patch yourself? I think giving the software vendor two weeks to fix the bug (one week if it's trivial or you provide the patch) is reasonable, as 99% of people are not going to be able to do anything about it until there is a patch anyway. As soon as the patch is available, it should be publicly announced.

        • by Tom ( 822 )

          So are you going to take your server offline until there is a patch?

          Depends, but yes for many non-essential services, that is indeed an option. Imagine your actual web service doesn't use SSL, but your admin backend does. It's used only by employees on the road, because internal employees access it through the internal network.

          Sure you can turn that off for a week. It's a bit of trouble, but much better than leaking all your data.

          Or if it's not about your web service, but about that SSL-secured VPN access to your external network? If you can live without home office for a…

      • Every day you delay the public announcements is another day that servers are being broken into.

        Yes, but it's also easier to use the vulnerability information to produce an exploit than to produce a patch. That's why it's responsible to report the bug to the maintainers before announcing it publicly. But your argument is the reason why you don't wait indefinitely for the maintainers to kick out a patch, either.

        As usual, the answer lies somewhere between extremes.

        • by Tom ( 822 )

          As usual, the answer lies somewhere between extremes.

          My preferred choice of being left alone or being beaten to a pulp is being left alone, not some compromise in the middle, thank you. Just because there are two opposing positions doesn't mean that the answer lies in the middle.

          I've given more extensive reasoning elsewhere, but it boils down to proponents of "responsible disclosure" conveniently forgetting to consider that every delay also helps those bad guys who are in possession of the exploit. Not only can they use it for longer, they can also use it for…

          • Don't pretend sysadmins are powerlessly waiting with big eyes for the almighty vendor to issue a patch.

            But most of them in fact are in that situation. If you want to make "no real sysadmin" comments, I may well agree, but it doesn't change much.

            • by gweihir ( 88907 )

              In many large organizations, you have segregation of duties. This boils down to the sysadmins not being allowed to patch and recompile code. They are allowed to install a vendor patch though. And yes, segregation of duties is a good idea.

              • by Tom ( 822 )

                Absolutely.

                But we were talking about mitigating measures. That is almost never patch and recompile, it's things like turning off a service, changing the firewall rules, moving servers into a different network - things that are very much within the duties of the sysadmin (with proper clearance and risk acceptance by management, etc. etc.)

                Basically, if you have a bug that makes your internal network open to the world, but you can avoid it by disabling feature X in the config file, and your company doesn't require…

                • by gweihir ( 88907 )

                  Sorry, in many large organizations, the sysadmins are not allowed (or able) to change firewall configurations either. And sign-offs, even in emergencies like these, may take a few days.

                • But we were talking about mitigating measures. That is almost never patch and recompile, it's things like turning off a service, changing the firewall rules

                  But we're talking about this in the context of Heartbleed, where pre-patch mitigation involved disabling critical services... A patch is what was needed here, and nothing else would suit.

                  • by Tom ( 822 )

                    Yeah, there was absolutely nothing anyone could do. Oh wait, except for this brutally complex and technically challenging thing right from the official vulnerability announcement:

                    This issue can be addressed by recompiling OpenSSL with the -DOPENSSL_NO_HEARTBEATS flag. Software that uses OpenSSL, such as Apache or Nginx would need to be restarted for the changes to take effect.

                    That was definitely not a feasible option for anyone on the planet...
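
                    For illustration, a minimal, Linux-only sketch of one way an admin could spot services still running against a replaced libssl/libcrypto after the library has been swapped out (whether via a rebuilt no-heartbeats build or a patched package). The "(deleted)" heuristic and the /proc paths are assumptions about a typical Linux setup, not something taken from the advisory:

                        import glob
                        import re

                        # When the package manager (or "make install") replaces libssl/libcrypto,
                        # processes that loaded the old copy keep the unlinked file mapped, and
                        # /proc/<pid>/maps marks that mapping "(deleted)".
                        STALE = re.compile(r'(libssl|libcrypto)\S*\s+\(deleted\)')

                        for maps_path in glob.glob('/proc/[0-9]*/maps'):
                            pid = maps_path.split('/')[2]
                            try:
                                with open(maps_path) as maps:
                                    if any(STALE.search(line) for line in maps):
                                        with open('/proc/%s/comm' % pid) as comm:
                                            name = comm.read().strip()
                                        print('%s (pid %s) still maps the old OpenSSL; restart it' % (name, pid))
                            except (IOError, OSError):
                                pass  # process exited or no permission; skip it

                    Run it as root after installing the fixed library; distribution tools such as checkrestart or needrestart do the same job more thoroughly.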

        • by gweihir ( 88907 )

          Indeed. There is however a certain type of wannabe "hacker" that needs to turn things into power-plays. These will disclose immediately and inflate their ego that way, no matter what damage this does.

      • by gweihir ( 88907 )

        Sorry, but that really is nonsense. All that immediate disclosure can cause is panic. It does not help at all. It increases attacks in the short term, because most people are not able to do anything without a patch.

        Sure, you can argue for a very short time given to the manufacturer, like a few days (that will not make that large a difference for the ones already using the weakness; most weaknesses exist for quite a while before they are discovered by the white-hats, and analysis also takes some time), and some…

        • by Tom ( 822 )

          The thing is that the manufacturer must not be the one to set the time they get to fix this

          I agree on that 100%

          most people are not able to do anything without a patch.

          That depends a lot on the particular problem. In many cases, there are mitigating measures that can be taken until a patch is available, and I'd argue strongly that the people affected should make the call on that, not you or I or anyone else.

          By withholding information, you are making decisions for other people. But you are not in a position to make that call, because you are not the one who suffers the consequences.

          I advocate for giving everyone all the information so they all can act a…

    • by paskie ( 539112 )

      "Very well known?" This is very much *not* the way how for example many security bugs in linux distributions are handled (http://oss-security.openwall.org/wiki/mailing-lists/distros). Gradual disclosure along a well-defined timeline limits damage of exposure to blackhats and at the same time allows enough reaction time to prepare and push updates to the user. So typically, once the software vendor has fixed the issue, they would notify distributions, which would be given some time to prepare and test an upd

    • by WD ( 96061 )
      "High risk of leaking?" And what would the consequences of such a leak be? The affected vendors are only slightly better off than they were with how it actually turned out with Heartbleed?

      When Heartbleed was disclosed, virtually no affected vendor (e.g., Ubuntu, Cisco, Juniper, etc.) had an update available. So there was a window where the vulnerability was public, but nobody had official updates from their vendor that would protect them. You are claiming that this is better than a coordinated release…
    • There's no one-size-fits-all solution. I've made the argument for informed disclosure [bfccomputing.com] here in the past, but in this case it probably wouldn't work. The DTLS code is so small and self-contained, and the code so obvious to an auditor, that just saying there's an exploit in DTLS, or advising to compile without heartbeat, is probably enough to give the blackhats a running start. But there are other situations where informed disclosure is better than responsible disclosure.

      Did Google do the right thing here? I'm n…

  • As bad ideas go... (Score:4, Insightful)

    by ClayDowling ( 629804 ) on Friday April 18, 2014 @08:05AM (#46786793) Homepage

    This notion ranks right up there. Manufacturer was told. Everybody else was then told. That's how it's supposed to work. This notion of "let's just tell our close friends and leave everybody else in the dark" is silly. You'd only wind up leaving most people open to exploit, because if you think your secret squirrel society of researchers doesn't have leaks, you're deluding yourself.

  • >> Google notified OpenSSL about the bug on April 1 in the US – at least 11 days after discovering it.

    "OK, maybe it was caught up in legal. Suits at large corporations can take a while."

    >> Google would not reveal the exact date it found the bug, but logs show it created a patch on March 21,

    "On second thought, if the geeks on the ground had the authority to patch and roll to production, then why the finger to the Open Source community, Google?"

  • Issue? (Score:5, Insightful)

    by silanea ( 1241518 ) on Friday April 18, 2014 @08:14AM (#46786839)

    What exactly is the issue here? Maybe I misread TFS and the linked articles, but as I understand it, the chief complaint - apart from Google's delay in reporting to OpenSSL - is that some large commercial entities did not receive a notification before public disclosure. I did not dig all too deep into the whole issue, but as far as I can tell OpenSSL issued their advisory along with a patched version. What more do they expect? And why should "Cisco[,] Juniper[,] Amazon Web Services, Twitter, Yahoo, Tumblr and GoDaddy" get a heads-up on the public disclosure? I did not get a heads-up either. Neither did the dozen or so websites not named above that I use. Neither did the governmental agency I serve with. Nor the bank whose online-banking portal I use. Are we all second-class citizens? Does our security matter less simply because we provide services to fewer people, or bring lower or no value to the exchange?

    A bug was reported, a fix was issued, recommendations for threat mitigation were published. There will need to be consequences for the FLOSS development model to reduce the risk of future issues of this sort, but beyond that I do not quite understand the fuss. Can someone enlighten me please?

  • wtf ? (Score:4, Interesting)

    by Tom ( 822 ) on Friday April 18, 2014 @08:16AM (#46786845) Homepage Journal

    IT security industry experts are beginning to turn on Google and OpenSSL, questioning whether the Heartbleed bug was disclosed 'responsibly.'

    Are you fucking kidding me? What kind of so-called "experts" are these morons?

    Newsflash: The vast majority of 0-days are known in the underground long before they are disclosed publicly. In fact, quite a few exploits are found because - drumroll - they are actively being exploited in the wild and someone's honeypot is hit or a forensic analysis turns them up.

    Unless you have really, really good reasons to assume that this bug is unknown even to people whose day-to-day business is to find these kinds of bugs, there is nothing "responsible" in delaying disclosure. So what if a few script-kiddies can now rush a script and do some shit? Every day you wait is one day less for the script kiddies, but one day more for the real criminals.

    Stop living in la-la-land or in 1985. The evil people on the Internet aren't curious teenagers anymore, but large-scale organized crime. If you think they need to read advisories to find exploits, you're living under a rock.

    • Newsflash: The vast majority of 0-days are known in the underground long before they are disclosed publicly. In fact, quite a few exploits are found because - drumroll - they are actively being exploited in the wild and someone's honeypot is hit or a forensic analysis turns them up.

      It's not that black and white. You expose the vulnerability to even more crackers if you go shouting it around as was done here.

      • Re:wtf ? (Score:4, Insightful)

        by MrL0G1C ( 867445 ) on Friday April 18, 2014 @09:04AM (#46787083) Journal

        As an end-user I'm glad it was shouted about because it gave me the chance to check that any software that could affect me financially was updated or invulnerable.

        So, can you tell me why I shouldn't be notified?
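
        For what it's worth, the basic check an end user can do is small. A minimal sketch, assuming a Python interpreter linked against the system OpenSSL; note that distributions often backport the fix without changing the version string, so at best this says "possibly vulnerable", not "definitely":

            import ssl

            # Heartbleed affects OpenSSL 1.0.1 through 1.0.1f; 1.0.1g and the
            # 0.9.8/1.0.0 branches are not affected.
            print("Linked against: %s" % ssl.OPENSSL_VERSION)

            major, minor, fix, patch = ssl.OPENSSL_VERSION_INFO[:4]
            if (major, minor, fix) == (1, 0, 1) and patch < 7:  # 1.0.1g is patch level 7
                print("Version string is in the vulnerable range; check whether your")
                print("distribution backported the fix before assuming the worst.")
            else:
                print("Version string is outside the vulnerable range.")

        It only inspects the library Python itself links against; services built against their own OpenSSL copy need their own check.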

      • Yes, which is why the best compromise is a private disclosure to whoever can *fix* the bug, followed by a public announcement alongside the fixed release. That limits the disclosure to the minimum necessary while the flaw is unfixed.

      • by Tom ( 822 )

        There's a black market where you can buy and sell 0-days.

        Sure you give it to more people (and for free) than before. But the really dangerous people are more likely than not to already have it.

  • Once again the evil of Information Disparity rears its ugly head. To maximize freedom and equality, entities must be able to decide and act by sensing the true state of the universe; thus knowledge should be propagated at maximum speed to all. Any rule to the contrary goes against the nature of the universe itself.

    They who seek to manipulate the flow of information wield the oppression of enforced ignorance against others, whatever their motive for doing so. The delayed disclosure of this bug would not change the required course of action. The keys will need to be replaced anyway. We have no idea whether they were stolen or not. We don't know who else knew about this exploit. Responsible disclosure is essentially lying by omission to the world. That is evil, as it stems from the root of all evil: Information Disparity. The sooner one can patch their systems the better. I run my own servers. Responsible disclosure would allow others to become more aware than I am. Why should I trust them not to exploit me if I am their competitor or vocal opponent? No one should decide who should be their equals.

    Fools. Don't you see? Responsible disclosure is the first step down a dangerous path whereby freely sharing important information can be outlawed. The next step is legislation to penalize the propagators of "dangerous" information, whatever that means. A few steps later, "dangerous" software and algorithms are outlawed for national security, of course. If you continue down this path, soon only certain certified and government-approved individuals will be granted a license to craft certain kinds of software, and ultimately all computation and information propagation itself will be firmly controlled by the powerful and corrupt. For fear of them taking a mile, I would rather not give one inch. Folks are already in jail for changing a munged URL by accident and discovering security flaws. What idiot wants to live in a world where even such "security research" done offline is made illegal? That is where Responsible Disclosure attempts to take us.

    Just as I would assume others innocent unless proven guilty of harm to ensure freedom, even though it would mean some crimes will go unpunished: I would accept that some information will make our lives harder, some data may even allow the malicious to have a temporary unfair advantage over us, but the alternative is to simply allow even fewer potentially malicious actors to have an even greater power of unfair advantage over even more of us. I would rather know that my Windows box is vulnerable and possibly put a filter in my IDS than trust Microsoft to fix things, or excuse the NSA's purchasing of black-market exploits without disclosing them to their citizens. I would rather know OpenSSL may leak my information and simply recompile it without the heartbeat option immediately than trust strangers to do what's best for me if they decide to not do something worse.

    There is no such thing as unique genius. Einstein, Feynman, and Hawking did not live in a vacuum; removed from society all their lives, they would not have made their discoveries. Others invariably pick up from the same available starting points and solve the same problems. Without Edison we would still have electricity and the light bulb. Without Alexander Bell we would have had to wait one hour for the next telephone to enter the patent office. Whoever discovered this bug and came forward has no proof that others did not already know of its existence.

    Just like the government fosters secrecy of patent applications and reserves its right to exclusive optioning of newly patented technology, if Google had been required to keep the exploit secret except to government agencies, we may never have found out about Heartbleed in the first place. Our ignorance enforced, we would have no other choice but to keep our systems vulnerable. Anyone who thinks hanging our heads in the noose of responsible disclosure is a good idea is a damned fool.

  • The real scandal is how organisations are informing their users about how they were affected and what users should do. Many big-name companies are using very specific phrasing such as "key services were not vulnerable", with no mention of secondary services... sounds like a liar's hiding place to me. There are also far too many who don't understand the problem, such as Acronis [twitter.com], the Aus bank [theregister.co.uk] etc. Then the likes of Akamai, who can't make their mind up. Some irresponsibly down-playing the whole thing an…

  • If this hadn't been publicly disclosed, it would have just gone into the 0-day libraries which intelligence agencies around the globe have been amassing. We'd never learn we were vulnerable, and their ability to impersonate and eavesdrop would have increased beyond any reasonably articulable expectation.

    Responsible disclosure to sufficient parties to address the issue would also expose it to potential attackers, and there will always be players with need-to-know who won't be identified for notification.

  • > and not as late as it did on April 1

    That must have been the most expensive April Fool's joke EVER.

    -f

  • The only thing you do by hiding this kind of information is limit the number of heads working to fix it. I'm tired of these attempts at plugging the hole in the dam by pretending the hole isn't there until someone plugs it.
  • Look, Google knew about it. Google is part of PRISM. Are you still wondering whether the NSA may have used Heartbleed?

  • Once the discoverer of the bug has patched their own servers and the software creator has an official fix, the only ethical thing is to tell everyone at once. It is not realistic to expect a secret to be kept across a dozen independent companies with thousands of employees each. Also, why should Facebook get an unfair business advantage over Yahoo? Most users have dozens of accounts storing overlapping private information and get no benefit from just one server being patched.

    Make sure a fix is available a…

  • by DERoss ( 1919496 ) on Friday April 18, 2014 @10:37AM (#46787789)

    Historically, so-called "responsible disclosure" has resulted in delayed fixes. As long as the flaw is not public and causing a drum-beat of demands for a fix and a possible loss of customers, the developer organization too often treats security vulnerabilities the same as any other bug.

    Worse, those who report security vulnerabilities responsibly and later go public because the fixes are excessively delayed often find themselves branded as villains instead of heroes. Consider the case of Michael Lynn and Cisco in 2005. Lynn informed Cisco of a vulnerability in Cisco's routers. When Cisco failed to fully inform its customers of the significance of the security patch, Lynn decided to go public at the 2005 Black Hat conference in Las Vegas. Cisco pressured Lynn's employer to fire him and also filed a lawsuit against Lynn.

    Then there was the 2011 case of Patrick Webster, who notified the Pillar Administration (major administrator of retirement plans in Australia) of a security vulnerability in their server. When the Pillar Administration ignored Webster, he used the vulnerability to extract personal data from about 500 accounts from his own pension plan (a client of the Pillar Administration). Webster made no use of the extracted personal data, did not disseminate the data, and did not go public. He merely sent the data to the Pillar Administration to prove the existence of the vulnerability. As a result, the Pillar Administration notified Webster's own pension plan, which in turn filed a criminal complaint against Webster. Further, his pension plan then demanded that Webster reimburse them for the cost of fixing the vulnerability and sent letters to other account holders, implying that Webster caused the security vulnerability.

    For more details, see my "Shoot the Messenger or Why Internet Security Eludes Us" at http://www.rossde.com/editoria... [rossde.com].

    • by raynet ( 51803 )

      Webster was wrong; you should never, ever exploit a system you don't own or aren't hired to pentest. If you find a security hole in a server and they don't respond, you should just go public with the exploit and most likely let someone else hack the system for you.

  • by drolli ( 522659 )

    If I find a bug which is critical to my employer while being paid by my employer, the first and only thing I do is assess the impact on my employer and identify the most important measures for the employer's business.

    IMHO they acted correctly: protect your own systems, and then the systems with the biggest impact.

  • This is foolish. When you apply a patch to an open source project, it essentially becomes public knowledge to anyone who is paying attention at that point. The more you do this, the more eyes there are on patches. This only yields ignorance and suppresses urgency.

    Only telling a select few (normally by subscription to very expensive security services) gives giant media an advantage that it is not clear to me they have a right to or in any way deserve.

    Finally, with as much money as there is locked up in black/gray hat activities, we don't need…

  • One thing I haven't heard discussed is whether affected companies should be notifying their end users about whether they were affected and when it was fixed. I haven't heard from my bank, for example. Were they ever vulnerable? Should I update my password? If they were vulnerable, is it fixed now, or would I just be handing an attacker my new password if I were to reset it today?

    I wrote up a proposal called Heartbleed headers [heartbleedheader.com] for communicating this information to site visitors. While I'd like it if everyone…
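
    For readers wondering what such a header might look like in practice: the sketch below is purely illustrative and is not taken from the actual proposal; the header name and its fields are invented for the example, so see heartbleedheader.com for the real syntax. It just shows a toy server attaching a hypothetical status header to every response.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class StatusHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = b"hello\n"
                self.send_response(200)
                # Hypothetical header for illustration only; the real proposal
                # may use a different name and different fields.
                self.send_header("X-Heartbleed-Status",
                                 "was-vulnerable; patched=2014-04-08; keys-rotated=2014-04-09")
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 8080), StatusHandler).serve_forever()

    A client or browser extension could then surface that status to the visitor.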
