NSA Allegedly Exploited Heartbleed
A user writes: "One question arose almost immediately upon the exposure of Heartbleed, the now-infamous OpenSSL exploit that can leak confidential information and even private keys to the Internet: Did the NSA know about it, and if so, did they exploit it? The answer, according to Bloomberg, is 'Yes.' 'The agency found the Heartbeat glitch shortly after its introduction, according to one of the people familiar with the matter, and it became a basic part of the agency's toolkit for stealing account passwords and other common tasks.'"
The NSA has denied this report. Nobody will believe them, but it's still a good idea to take it with a grain of salt until actual evidence is provided. CloudFlare did some testing and found it extremely difficult to extract private SSL keys. In fact, they weren't able to do it, though they stop short of claiming it's impossible. Dan Kaminsky has a post explaining the circumstances that led to Heartbleed, and today's xkcd has the "for dummies" depiction of how it works. Reader Goonie argues that the whole situation was a failure of risk analysis by the OpenSSL developers.
Re:It's time we own up to this one (Score:5, Interesting)
Re:This seems plausible (Score:4, Interesting)
Then it is analyzed by genius hackers who are paid top dollar for the job.
"Top dollar"? This is a government agency. They pay based on the GS scale. Even if the NSA's security hackers were classified at GS-15 (the highest rate), that's about $120K a year to begin – if they really are "geniuses" then they could do better in Silicon Valley, and probably feel better about their jobs as well.
In general, the GS scale pays somewhat more than the typical private-sector rate for low-end jobs, but considerably less for high-end jobs.
Government contractors rake in the dough, but that money goes to politically-connected businessmen, not rank-and-file employees.
Re:It's not a bug (Score:3, Interesting)
You could probably try this thousands of times without obtaining any information of value. Sure, you might luck out and get the keys to the kingdom, but it seems like a crapshoot. From an attacker's point of view, this might be better than nothing, but unless they have pretty near nothing to start from, it does not seem exciting.
And we know they have a lot more than nothing to start from. With Total Surveillance in effect on the net, with rootkits and zero-day exploits to deliver them, it's just really hard to see how this would add anything substantial to their toolkit.
No, I suspect this is exactly what it appears to be - a critical bug resulting from too much emphasis on fast and not enough on good. That's hardly unique to OpenSSL, it's a chronic problem across the industry as a whole.
Highly likely that NSA knew early on (Score:3, Interesting)
...
I have not yet grasped the full scope of the implications of this bug, but if you take the stance that anything that could have been done has been done (imho the only safe assumption), is this a good characterization? Or are there limiting factors that make this impossible? For example, given how little memory can be leaked per request while the application is running (and servers aren't restarted often), is certain information that is stored statically in memory potentially unreachable?
During the last two years:
1. Any/all certificates used by servers running OpenSSL 1.0.1 might have been compromised and should be revoked (the big cert-reset of 2014?)
2. Because of 1, any/all data sent over a connection to such servers might now be known to a bad MITM (i.e. at large scale: the various security services/hostile ISPs; for local-scale/targeted attacks: depends on who else happened to know, and whether that person/organization happened to be your adversary; looks unlikely, but who knows...)
3. Any/all data stored in SSL-based client applications might have been compromised.
From a user's perspective: change all passwords/keys that have been used with applications based on OpenSSL 1.0.1? How do you know which services are affected? To be safe, change them all? Consider private data potentially sent over SSL to be open and readable by the security services?
Thinking about the large-scale:
For how long has the NSA been picking up information leaked by Heartbleed (that they have been at least since the late evening of the 7th or the early morning of the 8th seems a given)?
-Not in the Snowden documents that have been revealed so far (absence of proof != proof of absence, but the language might give a hint)
-No report of unusual heartbeat streams being spotted in the wild (was anyone looking?)
Let's assume for the sake of argument the NSA does not have people actually writing the OpenSSL code in the first place.
When did they know about its existence?
time_to_find_bug = (complexity_of_bug * size_of_sourcecode * complexity_of_sourcecode) / (budget * intention_to_find_bugs)
Where
budget = manpower * skillset
and
time_to_find_bug < inf.
when
skillset >= complexity_of_bug
Heartbeat bug:
complexity_of_bug = low
OpenSSL:
size_of_sourcecode = 376409 lines of code (1.0.1 beta1)
complexity_of_sourcecode = high
NSA:
intention_to_find_bugs = 1
budget = $20 * 10^9 ?
=> manpower = 30k ?
skillset = high
Guesstimate: one to a few months -> early 2012, to go through the changes made in 1.0.1, building on earlier work already done on the 0.9.8 branch...
...
Or to say it another way: I think it is safe to assume that, given the simplicity of the bug, the NSA knew about Heartbleed early on. The anonymous comments to Bloomberg give nice confirmation of this.
Re:This seems plausible (Score:4, Interesting)
This patch was submitted at 7pm on Dec 31st, 2011, so the only people looking at it were the ones expecting it. I guess they were not disappointed.
http://git.openssl.org/gitweb/... [openssl.org]
Re:It's time we own up to this one (Score:4, Interesting)
It was discovered and fixed so quickly *because* it's open source
For cripes' sakes, the Heartbleed vulnerability existed for over 2 years before being discovered and fixed!
Sorry my bad, that sentence was confusing -- I meant the fix was fast, not finding the bug.
An exact timeline for Heartbleed is hard to find, but it looks like there was some responsible disclosure of the bug to some large parties about a week before public disclosure and release of the fixed SSL library.
In contrast, Apple learned of its SSL vulnerability [nist.gov] over a month [theguardian.com] before they released an iOS patch, and even after public disclosure of the bug, it was about a week before they released the OS X patch. And just like the OpenSSL bug, Apple's vulnerability is believed to have been in the wild for about 2 years before detection. (Of course, since the library code was open-sourced by Apple, several unofficial patches were released before Apple's official patch.)
Heartbleed Challenge Over (Score:5, Interesting)