Heartbleed Sparks 'Responsible' Disclosure Debate
bennyboy64 writes: "IT security industry experts are beginning to turn on Google and OpenSSL, questioning whether the Heartbleed bug was disclosed 'responsibly.' A number of selective leaks to Facebook, Akamai, and CloudFlare occurred prior to disclosure on April 7. A separate, informal pre-notification program, run by Red Hat on behalf of OpenSSL, covered Linux and Unix operating system distributions. But router and VPN appliance makers Cisco and Juniper had no heads up. Nor did large web entities such as Amazon Web Services, Twitter, Yahoo, Tumblr and GoDaddy, just to name a few. The Sydney Morning Herald has spoken to many people who think Google should've told OpenSSL as soon as it uncovered the critical OpenSSL bug in March, not as late as it did on April 1. The National Cyber Security Centre Finland (NCSC-FI), which reported the bug to OpenSSL after Google on April 7 and thereby spurred the rushed public disclosure by OpenSSL, also thinks it was handled incorrectly. Jussi Eronen, of NCSC-FI, said Heartbleed should have remained a secret, shared only in security circles, when OpenSSL received a second bug report from the Finnish cyber security center, which was passing it on from security testing firm Codenomicon. 'This would have minimized the exposure to the vulnerability for end users,' Mr. Eronen said, adding that 'many websites would already have patched' by the time it was made public if this procedure had been followed."
No Good Solution. (Score:5, Insightful)
Re:No Good Solution. (Score:4, Insightful)
Indeed. But there is a _standard_ solution. Doing it in various ways is far worse than picking the one accepted bad solution.
Re:No Good Solution. (Score:4, Interesting)
Standard means jack. As long as there is no good reason (like, say, avoiding a back-breaking fine or jail time) to report bugs like that, they're not being told, they're being sold.
Re: (Score:2)
I don't.
Re: (Score:2)
Indeed. But there is a _standard_ solution.
Citation needed.
Re: (Score:2)
So the solution to competing interests and mutually exclusive valid concerns is to always pick only one concern/interest and always ignore the others?
Good luck with that.
Re: (Score:3, Interesting)
There is no right, it's already gone bad so you've just got a lot of wrongs to choose from. So my opinions on disclosure are informed by risk minimization. Or to borrow a term, "harm reduction."
The order people were informed about Heartbleed smells more like a matter of "it's about who you know" than of getting the problem fixed. If OpenSSL isn't at, or real close to, the top of the list of people you contact the first day, you're either actively working against an orderly fix or you don't trust the OpenSSL folks.
We protected 1 billion people by notifying trusted (Score:3)
This was handled similarly to a flaw I discovered, and I think it makes sense. Facebook, for example, has about a billion users. If you have a colleague you trust at Facebook, informing that one colleague can protect Facebook's billion users.
The risk of a leak before a fix is widely deployed depends on a) the number of people you inform and b) how trustworthy those people are about keeping quiet for a couple of days. It's quite reasonable to minimize the risk of a leak by keeping it low profile for a few days.
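The arithmetic behind (a) and (b) can be sketched with a toy model (the per-person probability below is invented for illustration, not measured): if each informed person independently leaks with some small probability before the fix ships, the chance of at least one leak grows quickly with the number of people told.

```python
def leak_probability(n_people: int, p_per_person: float) -> float:
    """Chance that at least one of n independent people leaks the secret."""
    return 1.0 - (1.0 - p_per_person) ** n_people

# Toy numbers: assume a 1% per-person chance of a leak before the fix ships.
for n in (1, 10, 100):
    print(f"{n:3d} people informed -> leak risk {leak_probability(n, 0.01):.1%}")
```

Under these made-up numbers, telling one trusted colleague carries a 1% risk, while briefing a hundred people pushes the risk of at least one leak above 60%, which is the intuition behind keeping the circle small.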
But what if someone *is* harmed by the delay? (Score:2)
Nobody was harmed by hearing about it on Tuesday rather than on Monday
Isn't that assumption where the whole argument for notifying selected parties in advance breaks down?
If you notify OpenSSL, and they push a patch out in the normal way, then anyone on the appropriate security mailing list has the chance to apply that patch immediately. Realistically, particularly for smaller organisations, it will often be applied when their distro's mirrors pick it up, but that was typically within a couple of hours for Heartbleed, as the security and backporting guys did a great job at ba
Wrong math. 2 years of vulnerability. (Score:4, Insightful)
> they had a whole day to attack everyone who wasn't blessed with the early knowledge, instead of a couple of hours
Years, not hours. Assuming the bad guys knew about it, they had two YEARS to attack people. If we had told people there was an issue on Monday, that wouldn't have protected them - they'd just know that they're vulnerable. They couldn't do anything about it until the update packages were available on Tuesday.
On the other hand, had we made it public on Monday, we would have GUARANTEED that lots of bad guys knew about it, during a period in which everyone was vulnerable.
I'm talking about what we did here. It appears to me that Google definitely screwed up by not telling the right people on the OpenSSL team much sooner. (Apparently they told _someone_ involved with OpenSSL right away, but not the right someone.)
> you protect some large sites, but those large sites are run by large groups of people. For one thing, they probably have full time security staff who will get the notification as soon as it's published, understand its significance, and act on it immediately.
ROTFL. Yep, large corporate bureaucracies, they ALWAYS do exactly the right thing, in a matter of hours.
Re: (Score:2)
You're latching onto this specific case, perhaps because you have some connection to it, but I'm talking about the general principle here. In general, it is not unreasonable to assume that if a vulnerability has been found by two parties in rapid succession, there may be a common factor involved, which may mean that other parties will also find it in the same time frame, and that an extra day may therefore be very significant.
Obviously most serious security bugs don't sit there for years, then have two groups find them within days of each other.
PS: how do you think it gets on the distro mirror? (Score:2)
> Isn't that assumption where the whole argument for notifying selected parties in advance breaks down? ...
> it will often be applied when their distro's mirrors pick it up, but that was typically within a couple of hours for Heartbleed
How do you think those packages get on the mirrors? Do their servers magically patch the code, rebuild the packages, and set it as a high priority update? The fix gets on the mirrors as a result of "notifying selected parties in advance".
Re: (Score:2)
I think there is a qualitative difference between notifying large end users like Facebook in advance, and notifying people in the distribution system for a general release. It's the former that inherently means the people who aren't large end users with privileged access get left exposed for longer than necessary, and that's what I'm objecting to.
Re: (Score:2)
This was handled similarly to a flaw I discovered, and I think it makes sense. Facebook, for example, has about a billion users. If you have a colleague you trust at Facebook, informing that one colleague can protect Facebook's billion users.
Ah yes, the duckface pictures of a bunch of teens are way more important than, let's say, millions of tax returns.
Nothing can protect those tax returns, only endang (Score:2)
There is no option that's going to protect those tax returns. Telling the bad guys about it will certainly endanger the tax return data, though.
Since many (most?) people use the same or similar password for Facebook as they use for their tax service, protecting Facebook traffic actually protects a few tax returns.
What clearly isn't an effective option is to announce the vulnerability to hundreds of tax-preparer sites before an updated package is available, expecting them to manually (and correctly) patch it themselves.
agreed, openssl should have been notified immediat (Score:2)
> OpenSSL should have been near, if not at the top of, the list of groups contacted.
Absolutely. In the case I mentioned where I found the vulnerability, the FIRST contact I made was the development team.
As to the fact that people can't be protected on every site until the updated packages are out, how does that mean they should NOT be protected where possible? Are you sad that it's "unfair" that they are protected on some sites and not others? So you'd like to remedy that by exposing their data ALL the time?
Blame Game. (Score:5, Insightful)
That is the biggest problem. Instead of rewarding the people who fix the problem, we try to figure out who is to blame for every freaking thing.
Oh look, a flood hit the city unexpectedly; well, let's blame the mayor for not thinking about this unexpected incident.
Or a random guy blew something up; why didn't the CIA/NSA/FBI know that he was doing this...
We spend too much time pointing blame and too little time trying to solve the problem.
Re: (Score:2)
That is the biggest problem. Instead of rewarding the people who fix the problem, we try to figure out who is to blame for every freaking thing.
"Fix the problem, not the blame."
Rising Sun (1993) - Capt. John Connor (Sean Connery)
Re: (Score:2)
Or a random guy blew up something, why didn't the CIA/NSA/FBI know that he was doing this...
If those organizations are going to continue receiving more money, more privilege, and less oversight in the name of protecting us from terrorists, then they deserve blame when they have nothing to show for what they have taken from us.
Re: (Score:2)
Yeah, I don't think the East is any better than the West in this area. Communism focuses on the big picture and community? What, by pretending problems don't exist, denying they exist, waiting for a catastrophe and then accepting the blame - but oh well, it's too late - and, by the way, we're the all-powerful communist party, don't even think about criticizing us?
Re: (Score:2)
Well, at least in the West we actually state that there is a problem. In Eastern cultures there is too much pretending that there isn't even a problem.
There is nothing wrong with making a fuss about a problem. But after we make the fuss, we need to do something to fix it.
Not making a fuss about it makes it too easy to hide away.
Re: (Score:2)
The answer is actually simple. Once you have determined that there is (to quote Bruce Schneier) an 11-out-of-10 security problem, you need to get the servers turned off. Everywhere.
If the FBI or Interpol or Bruce Schneier basically said "There is a serious exploit in OpenSSL; you (as in every organization running it) need to shut down every server now; we will provide the details and fix in 48 hours."
Yes, the bad guys will now know that OpenSSL has an exploit. But they won't exactly know where to start looking.
Re: (Score:2)
Therefore the best solution is a public release, so everyone has the information at the same time. Let them compete to patch; the awful software publishers will be the ones caught with bugs, while the good ones will be patched and secure as everyone else suffers for their bad choice.
Over time the best software will prevail and only idiots will still be using Microsoft products... that's the theory. In practice there is corruption, and bad software will linger for decades.
It's not about how fast you patch, it's about how fast you can get patches to your customers. And for the OpenSSL flaw, there were devices where the patch process is "throw it away and buy a new one".
Anyhow, Microsoft is far and away the world's leading expert at distributing security patches - no one really has more experience or such a well-tuned corporate ecosystem. When MS pushes a critical security patch out to WU, every major corporation knows just what to do, understands the urgency, and has a wel
WTF? (Score:5, Insightful)
The only possible way is to disclose to the responsible manufacturer (OpenSSL) and nobody else first, then, after a delay given to the manufacturer to fix the issue, disclose to everybody. Nothing else works. All disclosures to others have a high risk of leaking. (The one to the manufacturer also has a risk of leaking, but that cannot be avoided.)
The other thing is that as soon as a patch is out, the problem needs to be disclosed immediately by the manufacturer to everybody (just saying "fixed critical security bug" is fine), as the black-hats watch patches and will start exploiting very soon after.
All this is well known. Why is this even being discussed? Are people so terminally stupid that they need to tell some "buddies"? Nobody giving out advance warnings to anybody besides the manufacturer deserves to be in the security industry in the first place as they do not get it at all or do not care about security in the first place.
Re:WTF? (Score:5, Interesting)
The only possible way is to disclose to the responsible manufacturer (OpenSSL) and nobody else first, then, after a delay given to the manufacturer to fix the issue, disclose to everybody. Nothing else works. All disclosures to others have a high risk of leaking. (The one to the manufacturer also has a risk of leaking, but that cannot be avoided.)
It's not about leaking. The reason I'm not alone in the security community to rage against this "responsible disclosure" bullshit is not that we fear leaks, but that we know most of the exploits are already in the wild by the time someone on the whitehat side discovers it.
Every day you delay the public announcements is another day that servers are being broken into.
Re:WTF? (Score:4, Insightful)
If no fix is available yet, they're still being broken into - but you've just added the thousands of hackers who *didn't* know about it to the list of those exploiting it.
Re: (Score:2)
Re: (Score:2)
Couldn't sysadmins have disabled the heartbeat feature as a preventive measure while the patch was prepared? Please note that I'm rather ignorant of all the things involved, but AFAIK the feature in question in the very recent case was not critical and could be disabled with minimal damage to the functioning of the service.
I agree with you, though, that the developers should be informed of it first. But I also think that it depends on the issue. If you tell me that feature x in software a has a security issue
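For readers unfamiliar with the feature being discussed: Heartbleed was a missing bounds check on an attacker-supplied length field in the TLS heartbeat extension, which let a client read back adjacent server memory. A minimal Python sketch of that bug *class* (this is an illustration, not OpenSSL's actual code or wire format):

```python
def buggy_heartbeat_reply(buffer: bytes, payload: bytes, claimed_len: int) -> bytes:
    """Echo the heartbeat payload back, trusting the sender's length field.

    The bug: claimed_len is never checked against len(payload), so the
    reply can include adjacent bytes from `buffer` (keys, cookies, etc.).
    """
    start = buffer.index(payload)            # where the payload sits in memory
    return buffer[start:start + claimed_len]

# Simulated process memory: the echoed payload sits next to a secret.
memory = b"PING" + b"-----PRIVATE-KEY-----"

print(buggy_heartbeat_reply(memory, b"PING", 4))   # honest request: echoes b'PING'
print(buggy_heartbeat_reply(memory, b"PING", 25))  # lying request: leaks the secret
```

The fix is exactly what you would expect from the sketch: reject any request whose claimed length exceeds the real payload length, which is why disabling the whole feature also worked as a stopgap.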
Re: (Score:2)
Not really. Disabling the feature required changing the sources manually and rebuilding OpenSSL, something most sysadmins cannot do, or cannot do fast.
I think the main problem with the flavor of responsible disclosure that part of the security community is raging against is that this flavor lets the developers say how long they need, and that has been abused. But giving them zero time is just malicious.
Re:WTF? (Score:5, Interesting)
Re: (Score:2)
Yes, this argument is being made a million times and it doesn't prove anything, because it rests on so many assumptions that may or may not be true that its total truth value is about as good as tossing a coin.
The two most important:
First, you assume that the official patch is the only thing that can be done. In many, many cases there are other (temporary) measures that can be taken to mitigate a problem or limit its impact. Who are you to decide for everyone on the planet, with their different needs and scenarios?
Re: (Score:2)
I am not talking about giving the manufacturer a lot of time. But if the bug is already exploited in the wild, chances are it has been for a while, so a few more days matter little. However, quite often nothing can be done before a patch is available and then too early public disclosure does a lot more harm than good.
Re: (Score:2)
You are right on those.
Except for the "nothing can be done" part. That's not your judgement call to make. There is always at least one option - pulling the power plug - and it might well be a feasible temporary solution for some of the people affected.
Re: (Score:2)
And to amplify it in the meantime. Well done.
Re: (Score:2)
Apparently you have not heard about companies that collapse if they are offline for a day or so. But then, with the level of stupidity your answer displays, you would not have...
Re: (Score:2)
It's not about leaking. The reason I'm not alone in the security community to rage against this "responsible disclosure" bullshit is not that we fear leaks, but that we know most of the exploits are already in the wild by the time someone on the whitehat side discovers it.
Every day you delay the public announcements is another day that servers are being broken into.
So are you going to take your server offline until there is a patch? Or are you going to write a patch yourself?
I think giving the software vendor 2 weeks to fix the bug (1 week if it's trivial or you provide the patch) is reasonable, as 99% of people are not going to be able to do anything about it until there is a patch anyway.
As soon as the patch is available then it should be publicly announced.
Re: (Score:2)
So are you going to take your server offline until there is a patch?
Depends, but yes for many non-essential services, that is indeed an option. Imagine your actual web service doesn't use SSL, but your admin backend does. It's used only by employees on the road, because internal employees access it through the internal network.
Sure, you can turn that off for a week. It's a bit of trouble, but much better than leaking all your data.
Or if it's not about your web service, but about that SSL-secured VPN access to your external network? If you can live without home office for a
Re: (Score:2)
Every day you delay the public announcements is another day that servers are being broken into.
Yes, but it's also easier to make use of the exploit information to produce an exploit than a patch. That's why it's responsible to report the bug to the maintainers before announcing it publicly. But your argument is the reason why you don't wait indefinitely for the maintainers to kick out a patch, either.
As usual, the answer lies somewhere between extremes.
Re: (Score:2)
As usual, the answer lies somewhere between extremes.
My preferred choice of being left alone or being beaten to a pulp is being left alone, not some compromise in the middle, thank you. Just because there are two opposing positions doesn't mean that the answer lies in the middle.
I've given more extensive reasoning elsewhere, but it boils down to proponents of "responsible disclosure" conveniently forgetting to consider that every delay also helps those bad guys who are in possession of the exploit. Not only can they use it for longer, they can also use it for
Re: (Score:2)
Don't pretend sysadmins are powerlessly waiting with big eyes for the almighty vendor to issue a patch.
But most of them are in fact in that situation. If you want to argue that those are not real sysadmins, I may well agree, but it doesn't change much.
Re: (Score:2)
In many large organizations, you have segregation of duties. This boils down to the sysadmins not being allowed to patch and recompile code. They are allowed to install a vendor patch though. And yes, segregation of duties is a good idea.
Re: (Score:2)
Absolutely.
But we were talking about mitigating measures. That is almost never patch and recompile, it's things like turning off a service, changing the firewall rules, moving servers into a different network - things that are very much within the duties of the sysadmin (with proper clearance and risk acceptance by management, etc. etc.)
Basically, if you have a bug that makes your internal network open to the world, but you can avoid it by disabling feature X in the config file, and your company doesn't req
Re: (Score:2)
Sorry, in many large organizations, the sysadmins are not allowed (or able) to change firewall configurations either. And sign-offs, even in emergencies like these, may take a few days.
Re: (Score:2)
But we were talking about mitigating measures. That is almost never patch and recompile, it's things like turning off a service, changing the firewall rules
But we're talking about this in the context of Heartbleed, where pre-patch mitigation involved disabling critical services... A patch is what was needed here, and nothing else would suit.
Re: (Score:2)
Yeah, there was absolutely nothing anyone could do. Oh wait, except for this brutally complex and technically challenging thing right from the official vulnerability announcement:
This issue can be addressed by recompiling OpenSSL with the -DOPENSSL_NO_HEARTBEATS flag. Software that uses OpenSSL, such as Apache or Nginx would need to be restarted for the changes to take effect.
That was definitely not a feasible option for anyone on the planet...
Re: (Score:2)
Indeed. There is however a certain type of wannabe "hacker" that needs to turn things into power-plays. These will disclose immediately and inflate their ego that way, no matter what damage this does.
Re: (Score:2)
Sorry, but that really is nonsense. All that immediate disclosure can cause is panic. It does not help at all. It increases attacks in the short term, because most people are not able to do anything without a patch.
Sure, you can argue for a very short time given to the manufacturer, like a few days (it will not make that large a difference for the ones already using the weakness; most weaknesses exist for quite a while before they are discovered by the white-hats, and analysis also takes some time), and some
Re: (Score:2)
The thing is that the manufacturer must not be the one to set the time they get to fix this
I agree on that 100%
most people are not able to do anything without patch.
That depends a lot on the particular problem. In many cases, there are mitigating measures that can be taken until a patch is available, and I'd argue strongly that the people affected should make the call on that, not you or I or anyone else.
By withholding information, you are making decisions for other people. But you are not in a position to make that call, because you are not the one who suffers the consequences.
I advocate for giving everyone all the information so they all can act accordingly.
Re: (Score:3)
"Very well known?" This is very much *not* the way how for example many security bugs in linux distributions are handled (http://oss-security.openwall.org/wiki/mailing-lists/distros). Gradual disclosure along a well-defined timeline limits damage of exposure to blackhats and at the same time allows enough reaction time to prepare and push updates to the user. So typically, once the software vendor has fixed the issue, they would notify distributions, which would be given some time to prepare and test an upd
Re: (Score:2)
When Heartbleed was disclosed, virtually no affected vendor (e.g., Ubuntu, Cisco, Juniper, etc.) had an update available. So there was a window where the vulnerability was public, but nobody had official updates from their vendor to protect them. You are claiming that this is better than a coordinated release?
Re: (Score:2)
But isn't the Heartbeat feature a part of the software that is optional and can be disabled?
Re: (Score:2)
There's no one-size-fits-all solution. I've made the argument for informed disclosure [bfccomputing.com] here in the past, but in this case it probably wouldn't work. The DTLS code is so small and self-contained, and the flaw so obvious to an auditor, that just saying there's an exploit in DTLS, or telling people to compile without heartbeat, is probably enough to give the blackhats a running start. But there are other situations where informed disclosure is better than responsible disclosure.
Did Google do the right thing here? I'm not sure.
As bad ideas go... (Score:4, Insightful)
This notion ranks right up there. Manufacturer was told. Everybody else was then told. That's how it's supposed to work. This notion of "let's just tell our close friends and leave everybody else in the dark" is silly. You'd only wind up leaving most people open to exploit, because if you think your secret squirrel society of researchers doesn't have leaks, you're deluding yourself.
CISSP opinion: the patch proves Google f***ed up (Score:2)
>> Google notified OpenSSL about the bug on April 1 in the US – at least 11 days after discovering it.
"OK, maybe it was caught up in legal. Suits at large corporations can take a while."
>> Google would not reveal the exact date it found the bug, but logs show it created a patch on March 21,
"On second thought, if the geeks on the ground had the authority to patch and roll to production, then why the finger to the Open Source community, Google?"
Issue? (Score:5, Insightful)
What exactly is the issue here? Maybe I misread TFS and the linked articles, but as I understand it the chief complaint - apart from Google's delay in reporting to OpenSSL - is that some large commercial entities did not receive a notification before public disclosure. I did not dig all that deep into the whole issue, but as far as I can tell OpenSSL issued their advisory together with a patched version. What more do they expect? And why should "Cisco[,] Juniper[,] Amazon Web Services, Twitter, Yahoo, Tumblr and GoDaddy" get a heads-up on the public disclosure? I did not get a heads-up either. Neither did the dozen or so websites not named above that I use. Neither did the governmental agency I serve with. Nor the bank whose online-banking portal I use. Are we all second-class citizens? Does our security matter less simply because we provide services to fewer people, or bring lower or no value to the exchange?
A bug was reported, a fix was issued, recommendations for threat mitigation were published. There will need to be consequences for the FLOSS development model to reduce the risk of future issues of this sort, but beyond that I do not quite understand the fuss. Can someone enlighten me please?
wtf ? (Score:4, Interesting)
IT security industry experts are beginning to turn on Google and OpenSSL, questioning whether the Heartbleed bug was disclosed 'responsibly.
Are you fucking kidding me? What kind of so-called "experts" are these morons?
Newsflash: The vast majority of 0-days are known in the underground long before they are disclosed publicly. In fact, quite a few exploits are found because - drumroll - they are actively being exploited in the wild and someone's honeypot is hit or a forensic analysis turns them up.
Unless you have really, really good reasons to assume that this bug is unknown even to people whose day-to-day business is to find these kinds of bugs, there is nothing "responsible" in delaying disclosure. So what if a few script-kiddies can now rush a script and do some shit? Every day you wait is one day less for the script kiddies, but one day more for the real criminals.
Stop living in la-la-land or in 1985. The evil people on the Internet aren't curious teenagers anymore, but large-scale organized crime. If you think they need to read advisories to find exploits, you're living under a rock.
Re: (Score:3)
Newsflash: The vast majority of 0-days are known in the underground long before they are disclosed publicly. In fact, quite a few exploits are found because - drumroll - they are actively being exploited in the wild and someone's honeypot is hit or a forensic analysis turns them up.
It's not that black and white. You expose the vulnerability to even more crackers if you go shouting it around like was done here.
Re:wtf ? (Score:4, Insightful)
As an end-user I'm glad it was shouted about because it gave me the chance to check that any software that could affect me financially was updated or invulnerable.
So, can you tell me why I shouldn't be notified?
Re: (Score:2)
Re: (Score:2)
Yes, which is why the best compromise is a private disclosure to whoever can *fix* the bug, followed by a public announcement alongside the fixed release. That limits the disclosure to the minimum necessary while the flaw is unfixed.
Re: (Score:2)
There's a black market where you can buy and sell 0-days.
Sure you give it to more people (and for free) than before. But the really dangerous people are more likely than not to already have it.
Re: (Score:2)
That is true. However, you also need to take a few other things into account. I'll not go into detail, I think everyone has enough knowledge and imagination to fill in the blanks:
One Cyberneticist's Ethics (Score:3)
Once again the evil of Information Disparity rears its ugly head. To maximize freedom and equality, entities must be able to decide and act by sensing the true state of the universe; thus knowledge should be propagated at maximum speed to all. Any rule to the contrary goes against the nature of the universe itself.
They who seek to manipulate the flow of information wield the oppression of enforced ignorance against others, regardless of their motive for doing so. The delayed disclosure of this bug would not change the required course of action. The keys will need to be replaced anyway. We have no idea whether they were stolen or not. We don't know who else knew about this exploit. Responsible disclosure is essentially lying by omission to the world. That is evil, as it stems from the root of all evil: Information Disparity. The sooner one can patch their systems the better. I run my own servers. Responsible disclosure would allow others to become more aware than I am. Why should I trust them not to exploit me if I am their competitor or vocal opponent? No one should decide who should be their equals.
Fools. Don't you see? Responsible disclosure is the first step down a dangerous path whereby freely sharing important information can be outlawed. The next step is legislation to penalize the propagators of "dangerous" information, whatever that means. A few steps later, "dangerous" software and algorithms will be outlawed for national security, of course. If you continue down this path, soon only certain certified and government-approved individuals will be granted license to craft certain kinds of software, and ultimately all computation and information propagation itself will be firmly controlled by the powerful and corrupt. For fear of them taking a mile I would rather not give one inch. Folks are already in jail for changing a munged URL by accident and discovering security flaws. What idiot wants to live in a world where even such "security research" done offline is made illegal? That is where Responsible Disclosure attempts to take us.
Just as I would assume others innocent unless proven guilty of harm to ensure freedom, even though it would mean some crimes will go unpunished: I would accept that some information will make our lives harder, some data may even allow the malicious to have a temporary unfair advantage over us, but the alternative is to simply allow even fewer potentially malicious actors to have an even greater power of unfair advantage over even more of us. I would rather know that my Windows box is vulnerable and possibly put a filter in my IDS than trust Microsoft to fix things, or excuse the NSA's purchasing of black-market exploits without disclosing them to their citizens. I would rather know OpenSSL may leak my information and simply recompile it without the heartbeat option immediately than trust strangers to do what's best for me if they decide to not do something worse.
There is no such thing as unique genius. Einstein, Feynman, and Hawking did not live in a vacuum; removed from society all their lives, they would not have made their discoveries. Others invariably pick up from the same available starting points and solve the same problems. Without Edison we would still have electricity and the light bulb. Without Alexander Bell we would have had to wait one hour for the next telephone to enter the patent office. Whoever discovered this bug and came forward has no proof that others did not already know of its existence.
Just like the government fosters secrecy of patent applications and reserves the right to exclusive optioning of newly patented technology, if Google had been required to keep the exploit secret except to government agencies, we might never have found out about Heartbleed in the first place. Our ignorance enforced, we would have no choice but to keep our systems vulnerable. Anyone who thinks hanging our heads in the noose of responsible disclosure is a good idea is a damned fool.
Public-facing disclosure (Score:2)
The real scandal is how organisations are informing their users about how they are affected and what users should do. Many big-name companies are using very specific phrasing such as "key services were not vulnerable", with no mention of secondary services... sounds like a liar's hiding place to me. There are also far too many who don't understand the problem, such as Acronis [twitter.com], the Aus bank [theregister.co.uk] etc. Then the likes of Akamai, who can't make up their minds. Some are irresponsibly down-playing the whole thing an
WRONG (Score:2)
If this hadn't been publicly disclosed, it would simply have gone into the 0-day libraries that intelligence agencies around the globe have been amassing. We would never have learned we were vulnerable, and their ability to impersonate and eavesdrop would have grown beyond any reasonably articulable expectation.
Responsible disclosure to sufficient parties to address the issue would also expose it to potential attackers, and there will always be players with need-to-know who won't be identified for notification.
Doesn't ANYONE get it??? (Score:2)
> and not as late as it did on April 1
That must have been the most expensive April Fool's joke EVER.
-f
Global release is preferable. (Score:2)
Full disclosure, nothing else (Score:2, Interesting)
Look, Google knew it. Google is part of prism. You are still wondering, if the NSA may have used Heartbleed?
Certainly semi-public state is the worst (Score:2)
Once the discoverer of the bug has patched their own servers and the software's creator has an official fix, the only ethical thing is to tell everyone at once. It is not realistic to expect a secret to be kept across a dozen independent companies with thousands of employees each. Also, why should Facebook get an unfair business advantage over Yahoo? Most users have dozens of accounts storing overlapping private information, and they get no benefit from just one server being patched.
Make sure a fix is available a
Actual Experience Against "Responsible Disclosure" (Score:5, Interesting)
Historically, so-called "responsible disclosure" has resulted in delayed fixes. As long as the flaw is not public and causing a drum-beat of demands for a fix and a possible loss of customers, the developer organization too often treats security vulnerabilities the same as any other bug.
Worse, those who report security vulnerabilities responsibly and later go public because the fixes are excessively delayed often find themselves branded as villains instead of heroes. Consider the case of Michael Lynn and Cisco in 2005. Lynn informed Cisco of a vulnerability in Cisco's routers. When Cisco failed to fully inform its customers of the significance of the security patch, Lynn decided to go public at the 2005 Black Hat conference in Las Vegas. Cisco pressured Lynn's employer to fire him and also filed a lawsuit against Lynn.
Then there was the 2011 case of Patrick Webster, who notified the Pillar Administration (major administrator of retirement plans in Australia) of a security vulnerability in their server. When the Pillar Administration ignored Webster, he used the vulnerability to extract personal data from about 500 accounts from his own pension plan (a client of the Pillar Administration). Webster made no use of the extracted personal data, did not disseminate the data, and did not go public. He merely sent the data to the Pillar Administration to prove the existence of the vulnerability. As a result, the Pillar Administration notified Webster's own pension plan, which in turn filed a criminal complaint against Webster. Further, his pension plan then demanded that Webster reimburse them for the cost of fixing the vulnerability and sent letters to other account holders, implying that Webster caused the security vulnerability.
For more details, see my "Shoot the Messenger or Why Internet Security Eludes Us" at http://www.rossde.com/editoria... [rossde.com].
Re: (Score:2)
Webster was wrong: you should never, ever exploit a system you don't own or aren't hired to pentest. If you find a security hole in a server and they don't respond, just go public with the exploit and, most likely, let someone else hack the system for you.
No. (Score:2)
If I find a bug which is critical to my employer while being paid by my employer, the first and only thing I do is assess the impact to my employer and identify the most important measures for the employer's business.
IMHO they acted correctly: protect your own systems, and then the systems with the biggest impact.
I don't trust "secret circles" (Score:2)
This is foolish. When you apply a patch to an open source project, it essentially becomes public knowledge to anyone who is paying attention, and the more you do this, the more eyes end up watching your patches. Secrecy at that point only yields ignorance and suppresses urgency.
Only telling a select few (normally by subscription to very expensive security services) gives giant media an advantage it is not clear to me they have a right to or in any way deserve.
Finally as much money locked up in black/gray hat activities we don't need
Next up: customer notification (Score:2)
One thing I haven't heard discussed is whether affected companies should be notifying their end users about whether they were affected and when it was fixed. I haven't heard from my bank, for example. Were they ever vulnerable? Should I update my password? If they were vulnerable, is it fixed now, or would I just be handing an attacker my new password if I were to reset it today?
I wrote up a proposal called Heartbleed headers [heartbleedheader.com] for communicating this information to site visitors. While I'd like it if everyone
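For illustration only (the header names below are invented here, not taken from the linked proposal), a scheme for communicating Heartbleed status to visitors might look like extra HTTP response headers:

```
HTTP/1.1 200 OK
X-Heartbleed-Vulnerable: no
X-Heartbleed-Patched: 2014-04-08
X-Cert-Rekeyed: 2014-04-09
```

A browser extension or a careful visitor could then answer the questions above for themselves: was this site exposed, and is it safe to rotate a password yet?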
Re: (Score:2)
Mindless propaganda and, as it happens, untrue. See for example http://developers.slashdot.org... [slashdot.org]
But I guess proponents of closed source will always use any lie that is handy, just to propagate their ideology.
Re:Not that good (Score:4, Insightful)
Would you bet your life on closed source software not having any bugs that we simply don't know about because it's closed source and hence cannot be sensibly reviewed?
Closed source and open source share one problem: Both can and will have bugs. Open source only has the advantage that they will be found and published. In closed source, usually NDAs keep you from publishing anything you might come across, ensuring that knowledge about these bugs stays within certain groups that have a special interest in not only knowing about it but abusing them.
Re: (Score:3)
Open source only has the advantage that they will be found and published. In closed source, usually NDAs keep you from publishing anything you might come across, ensuring that knowledge about these bugs stays within certain groups that have a special interest in not only knowing about it but abusing them.
That still doesn't automatically mean that closed source fares worse in found bugs. Companies often have quite badass internal quality-assurance measures. They have money to put into it, and it actually produces value for them; there is an incentive to do it properly. Of course the tools and methodologies vary from company to company. But take Microsoft: they have very rigorous code-quality standards and very thorough code audits before anything gets out of the house.
Sure, we can have lots of eyeballs s
Re: (Score:3)
Sorry, but no. Just because it produces revenue doesn't mean they have an incentive to do it properly. They have an incentive to do it well enough that people buy it. That does not necessarily mean the software is of high quality.
What is necessary to this end is that the software appeals to decision makers, and they are rarely if ever the same people who are in any way qualified to assess the technical quality of code.
For reference, see SAP.
Re: (Score:2)
I recently worked on a project where two weeks of development time was followed by 4 months of testing.
Although 3 and a half months of those 4 was waiting for environments to be built and paperwork to go through
Re: (Score:2)
So it was 2 weeks of development and 2 weeks of testing. If you can't get laid for a year and then get laid you don't say that you had sex for a year and ten minutes, do you?
Re: (Score:2)
Re:Needless subject (Score:4, Insightful)
The whole point of OSS is that I do not need to trust it. I can review it if I please.
Trustworthiness is only a matter with closed source. Because there all I can really do is trust its maker.
False sense of security (Score:2)
The whole point of OSS is that I do not need to trust it. I can review it if I please.
But you didn't review it and find the vulnerability, did you?
And apparently, despite the significance and widespread use of this particular piece of OSS, for a long time no-one else did either, or at least no-one who's on our side did.
Your argument is based on theory. The AC's point is based on pragmatism. It's potentially an advantage that OSS can be reviewed by anyone, but a lot of the time that gives a false sense of security. What matters isn't what could happen, it's what actually does happen.
Re: (Score:3)
What I really don't like about the whole statement behind it is the implied assumption that closed source offered any kind of better protection.
You know the main difference between an OSS and a CSS audit? With CSS software, when I find something, I can't go "hey, psst, take a look at $code. Maybe you see something interesting..." to you, because someone in a badly fitting suit tells me to shut up about it.
Re: (Score:2)
What I really don't like about the whole statement behind it is the implied assumption that closed source offered any kind of better protection.
Which statement do you think implied that? I don't see anything about it in this thread.
Re: (Score:2)
There are exactly two possible ways you can handle your source code: You can keep it secret or you can publish it. Everything else is only a variant of either.
So if you claim one of them offers bad security protection, you imply that the other one offers a better one.
Re:Not that good (Score:4, Interesting)
Open source software is often made freely available at no cost to downloaders and embedders. There is little incentive for these users to pay anything for it, including for support, since the main reason to adopt this software is to avoid paying at all.
Well, one could hope that issues like this will prompt those selfish companies to begin either developing their own software & quit relying on the freely given work of others or give them an incentive to support those who are building the critical software components. My personal opinion is that if a company is going to utilize a FOSS project and do self support, that they would provide some sort of resource back to the project.
Further aggravating the issue is the claim by activists that the software code is reviewed by millions of people because it is freely available to anyone. The fallacy of this claim lies in how little interest anyone has in actually doing so. Indeed, who would review other people's code for free or for fun?
I happen to know several people who like reviewing & examining other people's code, especially complex code like what one would find in OpenSSL. These are the same type of people who just so happen to be the ones fixing a lot of the bugs you run into in OSS projects. It is people like that who make OSS projects succeed. I mean Linus Torvalds wrote Linux as a hobby project, and continued to review people's additions as a part of that hobby(now he gets paid to do what he was doing for fun). I personally don't do it because my free time interests lie elsewhere, but I enjoy software development enough that I would without those other distractions. So I'd say your argument is invalid.
Re:Not that good (Score:4, Interesting)
Several fundamental mistakes in there.
First, OpenSSL is not typical of Free Software. Cryptography is always hard, and unlike, say, an office suite, it will often break spectacularly if a small part is wrong. While this bug is serious and all, it's not typical; the vast majority of bugs in Free Software are orders of magnitude less serious.
Second, yes, it is true that the notion that anyone can review the source code doesn't mean anyone actually will. However, no matter how you look at it, the number of people who actually do will always be equal to or higher than for closed source software.
Third, the major flagships of Free Software are sometimes, but not always, picked for price. When you're a Fortune 500 company, you don't need to choose Apache to save a few bucks; a site license for almost any software will be a negligible part of your operating budget.
And, 3b or so: contrary to what you claim, quite a few companies contribute considerable amounts of money to Free Software projects, especially in the form of paid support or membership in things like the Apache Foundation. That's because they realize this is much cheaper than maintaining comparable software on their own.
Re: (Score:2)
However, no matter how you look at it, the number of people who actually do will always be equal or higher than for closed source software.
Why? I see little evidence that this is happening in general.
Most established OSS projects seem to require no more than one or two reviewers to approve a patch before it goes in, and then there is no guarantee that anyone will ever look at that code again later.
How does that guarantee that more experts will review a given piece of security code than in a proprietary, closed-source, locked-up development organisation that also has mandatory code reviews?
Re: (Score:2)
I didn't say it's the thousands of eyes that fanatics claim.
I'm simply saying that if your source code is open, your number of eyes on the project is (dev team) + (people looking at it) while for a closed source project the number is (dev team).
Since "people" cannot be negative, by necessity (dev team) + (other people) >= (dev team)
How does that guarantee that more experts will review a given piece of security code than in a proprietary, closed-source, locked-up development organisation that also has mandatory code reviews?
It doesn't.
It does guarantee that the number of reviewers is equal to or higher, provided everything else is equal.
Re: (Score:2)
Since "people" cannot be negative, by necessity (dev team) + (other people) >= (dev team)
You're still assuming that the dev teams, or to be more precise the parts of the dev teams who will actively review new code, are the same size. That isn't necessarily true at all, so the "provided everything else is equal" part of your last sentence is the problem here.
Re: (Score:2)
A site license for almost any software will be a negligible part of your operating budget.
It depends on what the software is. Some things are genuinely expensive, enough that while maybe a Fortune 500 can handle it, the many smaller companies out there tend to swoon at the prices charged. (These pieces of software tend to be in areas without major OSS competition.)
Re: (Score:2)
See other reply.
Yes, of course, a closed source development that does external code reviews can have more eyes on the project than an open source development that does no external code reviews. But then you're comparing apples and oranges.
Why free and fun? I review FOSS for a living. (Score:4, Informative)
> Indeed, who would review other people's code for free or for fun?
Some people do, of course. I have, specifically for security issues, because that's a major resume point in the security world - having actually found and fixed real-world security issues.
99% of the time, I'm being paid to review and improve open source code. All of those companies that use open source, including Google, have a vested interest in making sure that the code they use is good. Since it's open source, the Google techs can actually dig into the code, find issues like this, and fix them, just like they did in this case. They didn't do it for free and for fun; they did it because Google relies on OpenSSL.
My employer also relies on OSS. My job is to administer, maintain, and improve the OSS software we use. I've found and fixed security issues. Not for free and for fun, but because we want our systems to be secure, and having the source allows me to do that.
When I craft an improvement, at LEAST three people have to look at it before it's committed upstream. Typically, five or six people will comment on it and suggest improvements or state their approval before it's finalized.
Re: (Score:2)
Indeed, who would review other people's code for free or for fun?
Well, right offhand, Coverity will [coverity.com]. They're not perfect, of course, but they're pretty good. Their system didn't flag Heartbleed, but Heartbleed showed them how they could add a new test that would and that has reportedly found other possible issues [coverity.com], which are being investigated and will either be fixed or found to be false positives and used to refine the new test. Either way, not a bad thing.
Re:are we seriously blaming google (Score:5, Insightful)
>> are we seriously blaming google and not NSA who found the bug 4 years ago when the bug was first introduced?
Yes. The NSA is the US gov's lead black hat. Google's an advertising company that depends on people trusting the Internet for information and commerce. I'd expect the NSA to hoard information to assist their black-hatting, and I'd expect Google to quickly share anything they know so security vulnerabilities can be patched and people don't lose faith in the Internet*.
* = (Seriously, when people have asked me what to do about Heartbleed, I've said "don't buy anything you don't need, and try to avoid paying any bills online or doing any online checking for a week or two - then change your password as soon as you sign on.")
Re: (Score:2)
Yes. You don't have to notify people of the exact flaw and how it can be exploited to help them protect themselves while waiting for a patch. The immediate response should have been to tell people to disable heartbeat or, barring that, shut down their affected systems. Yes, it would suck, but since you don't know for sure that the exploit is known only to the researchers, you should assume it's in the wild, and this is the only safe thing to do in the interim. (Could this be used as a form of DoS? Sure, if
Re: (Score:2)
So does that mean you're suggesting the safest course would be to:
(a) tell everyone to shutdown ALL OpenSSL-backed services, urgently.
(b) after 1 day, tell everyone they can bring their 0.9.8 services back online.
(c) after 1 day, tell the remainder that it's okay to come back online, with heartbeat disabled.
(d) have the patch ready for distribution around this time.
?
I agree with the caution. TBD:
(1) there's the risk that telling admi
Re: (Score:2)
Shutting down the servers for a day will have some really major impacts on certain companies and customers. You have to provide a very good reason to urge it, and you're going to have to convince people that you do indeed know what you're doing, apparently without telling them what the vulnerability is. At that point, people can judge the possibility that you're correct against the cost (and possibly legal liability) of shutting down the servers. Different people will come to different decisions, and yo
Re: (Score:2)
This is really the only point that matters in this whole discussion: is it fair for someone to have this information before someone else? The answer isn't as simple as the question.
If you ask me, I would want full, immediate disclosure. The suggestion that the person reporting the bug is the first to have found it is absurd. Black-hat interests are actively looking for these kinds of problems, and finding them is how they make a living. Forget corporations; governments are the ones who will pay