
NSA Allegedly Exploited Heartbleed

A user writes: "One question arose almost immediately upon the exposure of Heartbleed, the now-infamous OpenSSL vulnerability that can leak confidential information and even private keys to the Internet: did the NSA know about it, and if so, did they exploit it? The answer, according to Bloomberg, is 'yes.' 'The agency found the Heartbeat glitch shortly after its introduction, according to one of the people familiar with the matter, and it became a basic part of the agency's toolkit for stealing account passwords and other common tasks.'" The NSA has denied the report. Few will believe the denial, but it's still a good idea to take the allegation with a grain of salt until actual evidence is provided. CloudFlare did some testing and found it extremely difficult to extract private SSL keys; in fact, they weren't able to do it at all, though they stopped short of claiming it's impossible. Dan Kaminsky has a post explaining the circumstances that led to Heartbleed, and today's xkcd has the "for dummies" depiction of how it works. Reader Goonie argues that the whole situation was a failure of risk analysis by the OpenSSL developers.
  • by 93 Escort Wagon ( 326346 ) on Friday April 11, 2014 @05:44PM (#46729647)

    The author of this bug and the reviewer of the commit have both been very forthcoming about the mistake. There's little reason to suspect malicious intent in this particular instance.

    That doesn't mean the NSA didn't know about it or exploit it, though.

  • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Friday April 11, 2014 @05:45PM (#46729651)

    One cannot simply sue a branch of the government without asking permission from the government to allow it to be sued - guess how often THAT happens?

    Glad you asked: it happens all the time, ever since the Federal Tort Claims Act of 1946 substantially waived the sovereign immunity doctrine. You can read more about it at Wikipedia [wikipedia.org].

    People sue the government all the time. It's literally an everyday occurrence.

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Friday April 11, 2014 @06:02PM (#46729769) Homepage Journal
    I'd say more than just the "community". A great many companies incorporate this software and generate billions from sales of applications or services built on it, without returning anything to its maintenance. I think it's sensible to ask Intuit, for example: "What did you pay to help maintain OpenSSL?" And then go down the list of companies.
  • by AHuxley ( 892839 ) on Friday April 11, 2014 @08:14PM (#46730655) Journal
    Re "even qualified to implement protocols like this": that's a very interesting point. How many people pick up the tools of the trade in a top-university setting, with a security-clearance option and funding that depends on it?
    Once you start down the math path the classes get smaller, and fewer stay for the years needed versus the lure of private-sector telco or unrelated software work.
    Most nations really do produce very few people with these skills, and they keep them very happy.
    Trips, low-level staff to help, good funding, guidance, friendships: it all just seems to fall into place.
    For such people, bringing work home and helping open source could be seen as an issue later, versus students or team members who wrote open-source games or made apps.
  • Just a minor correction: my piece does indeed suggest that the OpenSSL developers have some strange priorities. However, it lays the larger blame on the companies that used OpenSSL: all the information needed to suggest that this kind of thing could happen was already available, and for larger companies the potential consequences of a breach are easily enough to justify throwing a little money at the problem (money that could have been used in any number of ways to help prevent this).
  • by Sanians ( 2738917 ) on Saturday April 12, 2014 @06:30AM (#46732513)

    I challenge anybody to review it and find (or notice) the bug.

    It's actually kind of easy to see. I just use the same trick I use when trying to read almost anyone's code: I assume that some jackass obfuscated all of his variable names and so I rename them as I figure out what they actually represent so that the new names actually describe the variable. Once that's complete, I'm left with "memcpy(pointer_to_the_response_packet_we_are_constructing, pointer_to_some_bytes_in_the_packet_we_received, some_number_we_read_out_of_the_packet_we_received)" and it immediately raises a red flag.
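    For reference, here's roughly what that vulnerable path looks like with the original names, condensed and paraphrased from memory, so treat it as a sketch rather than a verbatim quote of tls1_process_heartbeat():

        unsigned char *p = &s->s3->rrec.data[0], *pl; /* raw received record */
        unsigned short hbtype;
        unsigned int payload;                         /* a LENGTH, despite the name */
        unsigned int padding = 16;

        hbtype = *p++;      /* 1 byte: heartbeat message type */
        n2s(p, payload);    /* 2 bytes: attacker-supplied payload length */
        pl = p;             /* start of the (claimed) payload */

        buffer = OPENSSL_malloc(1 + 2 + payload + padding);
        bp = buffer;
        *bp++ = TLS1_HB_RESPONSE;
        s2n(payload, bp);
        memcpy(bp, pl, payload); /* copies `payload` bytes whether or not the
                                    record actually contained that many: the bug */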

    ...but more seriously, the code in that check-in is why I hate to let anyone work on any programming projects with me. Worthless variable names create code that's as worthless as English text that refers to everything as "that stuff" and "those things." It's just a step away from choosing purposefully obfuscated variable names. If a variable is named "payload" then not only should it hold the actual payload data, rather than just its size, but it should also be the only payload in existence, such that no distinction needs to be made between "received_payload" and "payload_to_be_sent." ...and then there are the single-letter variables, some of which are incremented along the way so that they don't even consistently refer to the same thing over time, creating a variable that not only fails to indicate what it refers to, but might actually refer to anything.

    I've read that the reason there's a packet length sent from the remote host is that this data is sent with random padding bytes added to each packet, so the packets need to indicate how much of the data is actually valid. So why isn't the packet size figured out closer to when the data first enters the program? The first thing I would do when receiving a packet is read out this packet size, verify that the actual size of the received packet is large enough to contain it, and toss the packet if it isn't, since it was obviously corrupted (or malicious). Then I'd write the size into a structure for the packet's meta-data, along with any other data we find in every packet (like a packet type number), and every other part of the entire program would read the data from that structure. That's how you do these things. Everything received is "tainted" and, once you verify it isn't poisonous, you move it out into a data structure that the rest of your program trusts. Otherwise every piece of code that needs that data has to re-verify it on every access, which just creates enormous opportunity for error.
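    A sketch of what I mean, with structure and function names I've just made up for illustration:

        #include <stddef.h>

        /* Hypothetical "validate once at the boundary" parser. */
        struct hb_packet {
            unsigned char type;           /* heartbeat message type */
            size_t payload_len;           /* already checked against record size */
            const unsigned char *payload; /* points into the validated record */
        };

        /* Fills `out` and returns 0 only if the record is self-consistent;
           everything downstream trusts hb_packet and never re-checks. */
        static int parse_heartbeat(const unsigned char *rec, size_t rec_len,
                                   struct hb_packet *out)
        {
            if (rec_len < 1 + 2 + 16)      /* type + length + minimum padding */
                return -1;                 /* too short: corrupt or malicious */
            out->type = rec[0];
            out->payload_len = ((size_t)rec[1] << 8) | rec[2];
            if (out->payload_len > rec_len - 1 - 2 - 16)
                return -1;                 /* claims more data than arrived */
            out->payload = rec + 3;
            return 0;
        }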

    So when you come across code like this which pulls data out of the packet and just uses it, it isn't just wrong, but it doesn't even resemble anything that might be correct. Thus, the poor variable naming just might be why this wasn't noticed. Since the data pulled out of the packet is stored into a variable named "payload" it's easy to imagine it's simply payload data, which doesn't have to be checked as it won't ever be used for anything other than being returned to the remote host, and so the absence of code that checks the validity of that data might be expected. If it were named even something as ambiguous as "payload_size" then you have to immediately wonder if it's a size that needs to be checked against anything when you see it being pulled out of a buffer of untrusted data. ...but then, you don't see that either, since the pointer is named "p" which doesn't scream "this is untrusted data" and, even if you look above to see that "p" was assigned from "&s->s3->rrec.data[0]" you're still left wondering what the fuck that might be. Maybe "rrec" refers to some sort of received record? Fuck, who knows.
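    For what it's worth, the official fix is essentially that missing check, right where the length comes out of the packet. Paraphrasing the patch from memory:

        /* discard if the record can't even hold the heartbeat header */
        if (1 + 2 + 16 > s->s3->rrec.length)
            return 0;                  /* silently discard */
        hbtype = *p++;
        n2s(p, payload);
        /* discard if the claimed payload doesn't fit in what was received */
        if (1 + 2 + payload + 16 > s->s3->rrec.length)
            return 0;                  /* silently discard per RFC 6520 */
        pl = p;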

    I mean, right after the memcpy I see "RAND_pseudo_bytes(p, padding)." Is this even putting the padding bytes in the correct place? Well, "p" could be a pointer to anything so it's pretty easy to assume it could be correct. Hel

  • by pop ebp ( 2314184 ) on Saturday April 12, 2014 @06:45AM (#46732525)
    CloudFlare has retracted their statement that private key compromise is very hard. They started a challenge [cloudflare.com] and at least 2 people successfully got private keys from their Heartbleed-enabled server with as few as 100K requests. (I am sure that with some optimization, the number could be even lower.)
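    The requests themselves are trivial to construct; the classic proof-of-concept simply lies in the length field. From memory (so treat the exact bytes as a sketch), the malicious record looks like this:

        /* A malicious heartbeat request as a raw TLS record (byte values
           recalled from public proof-of-concept scripts): */
        unsigned char evil_heartbeat[] = {
            0x18,       /* TLS content type 24: heartbeat */
            0x03, 0x02, /* protocol version: TLS 1.1 */
            0x00, 0x03, /* record length: only 3 bytes actually follow */
            0x01,       /* heartbeat message type: request */
            0xff, 0xff  /* claimed payload length: 65535 bytes, none sent */
        };
        /* A vulnerable server echoes back up to ~64 KB of whatever sits past
           those 3 bytes in its heap; repeat until something useful leaks. */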

"If it ain't broke, don't fix it." - Bert Lantz

Working...