NSA Allegedly Exploited Heartbleed
A user writes: "One question arose almost immediately upon the exposure of Heartbleed, the now-infamous OpenSSL vulnerability that can leak confidential information and even private keys to the Internet: Did the NSA know about it, and if so, did they exploit it? The answer, according to Bloomberg, is 'Yes.' 'The agency found the Heartbeat glitch shortly after its introduction, according to one of the people familiar with the matter, and it became a basic part of the agency's toolkit for stealing account passwords and other common tasks.'"
The NSA has denied the report. Few will believe the denial, but it's still a good idea to take the story with a grain of salt until actual evidence is provided. CloudFlare did some testing and found it extremely difficult to extract private SSL keys; in fact, they weren't able to do it, though they stopped short of claiming it's impossible. Dan Kaminsky has a post explaining the circumstances that led to Heartbleed, and today's xkcd has the "for dummies" depiction of how it works. Reader Goonie argues that the whole situation was a failure of risk analysis by the OpenSSL developers.
Re:NSA put the bug there, of course they exploited (Score:5, Informative)
The author of this bug and the reviewer of the commit have both been very forthcoming about the mistake. There's little reason to suspect malicious intent in this particular instance.
That doesn't mean the NSA didn't know about it or exploit it, though.
You don't understand, yep! (Score:5, Informative)
Glad you asked: it happens all the time, ever since the Federal Tort Claims Act of 1946 substantially waived the sovereign immunity doctrine. You can read more about it at Wikipedia [wikipedia.org].
People sue the government all the time. It's literally an everyday occurrence.
Re:It's time we own up to this one (Score:4, Informative)
Once you start down the math path, the classes get smaller, and fewer people stay for the years needed versus the lure of private-sector telco or unrelated software work.
Most nations really do produce very few people with these skills, and they keep them very happy: trips, low-level staff to help, good funding, guidance, friendships all just seem to fall into place.
Bringing work home and helping open source could later be seen as an issue, compared with students or team members who did open-source games or made apps.
Failure of risk analysis by more than OpenSSL devs (Score:5, Informative)
Obfuscated Variable Names (Score:3, Informative)
I challenge anybody to review it and find (or notice) the bug.
It's actually kind of easy to see. I just use the same trick I use when trying to read almost anyone's code: I assume that some jackass obfuscated all of his variable names and so I rename them as I figure out what they actually represent so that the new names actually describe the variable. Once that's complete, I'm left with "memcpy(pointer_to_the_response_packet_we_are_constructing, pointer_to_some_bytes_in_the_packet_we_received, some_number_we_read_out_of_the_packet_we_received)" and it immediately raises a red flag.
I've read that the reason there's a packet length sent from the remote host is that this data is sent with random padding bytes added to each packet, so the packets need to indicate how much of the data is actually valid. So why isn't the packet size figured out closer to where the data first enters the program? The first thing I would do when receiving a packet is read out this packet size, verify that the actual size of the received packet is large enough to contain it, and toss the packet if it isn't, since it was obviously corrupted (or malicious). Then I'd write the size into a structure for the packet's meta-data, along with any other data we find in every packet (like a packet type number), and every other part of the program would read the data from that structure.

That's how you do these things. Everything received is "tainted" and, once you verify it isn't poisonous, you move it out into a data structure that the rest of your program trusts. Otherwise, every piece of code that needs the data has to verify it every time it accesses it, which creates enormous opportunity for error.
So when you come across code like this which pulls data out of the packet and just uses it, it isn't just wrong, but it doesn't even resemble anything that might be correct. Thus, the poor variable naming just might be why this wasn't noticed. Since the data pulled out of the packet is stored into a variable named "payload" it's easy to imagine it's simply payload data, which doesn't have to be checked as it won't ever be used for anything other than being returned to the remote host, and so the absence of code that checks the validity of that data might be expected. If it were named even something as ambiguous as "payload_size" then you have to immediately wonder if it's a size that needs to be checked against anything when you see it being pulled out of a buffer of untrusted data. ...but then, you don't see that either, since the pointer is named "p" which doesn't scream "this is untrusted data" and, even if you look above to see that "p" was assigned from "&s->s3->rrec.data[0]" you're still left wondering what the fuck that might be. Maybe "rrec" refers to some sort of received record? Fuck, who knows.
I mean, right after the memcpy I see "RAND_pseudo_bytes(p, padding)." Is this even putting the padding bytes in the correct place? Well, "p" could be a pointer to anything, so it's pretty easy to assume it could be correct.
Private key compromise is indeed possible (Score:4, Informative)