Akamai Reissues All SSL Certificates After Admitting Heartbleed Patch Was Faulty

SpacemanukBEJY.53u (3309653) writes "It took security researcher Willem Pinckaers all of 15 minutes to spot a flaw in code created by Akamai that the company thought shielded most of its users from one of the most pernicious aspects of the Heartbleed flaw in OpenSSL. More than a decade ago, Akamai modified the parts of OpenSSL related to key storage that it considered weak. Akamai CTO Andy Ellis wrote last week that the modification protected most customers from having their private SSL keys stolen despite the Heartbleed bug. But on Sunday Ellis wrote that Akamai was wrong after Pinckaers found several flaws in the code. Akamai is now reissuing all SSL certificates and keys to its customers."
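For readers wondering what "reissuing keys and certificates" actually involves on the customer side, a minimal sketch using OpenSSL's command-line tool looks like the following. The filenames and the CN are illustrative assumptions, not anything Akamai has published; the essential point is that a brand-new private key is generated (the old one must be assumed compromised) and a fresh CSR is submitted to the CA.

```shell
# Generate a completely new RSA private key -- never reuse the old one,
# since Heartbleed may have exposed it.
openssl genrsa -out newkey.pem 2048

# Create a certificate signing request bound to the new key.
# The subject here is a placeholder for illustration.
openssl req -new -key newkey.pem -out request.csr -subj "/CN=example.com"

# Sanity-check that the CSR's signature verifies before sending it off.
openssl req -in request.csr -verify -noout
```

The CSR then goes to the certificate authority, which issues a replacement certificate; the old certificate should also be revoked so a stolen key cannot be used to impersonate the site.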
  • by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Monday April 14, 2014 @09:05AM (#46746239) Homepage

    for having the integrity to admit that they screwed up the first time.

  • by BitZtream ( 692029 ) on Monday April 14, 2014 @09:34AM (#46746523)

    What is 'verisign'? I mean, I know of the company named Verisign that functions as a root CA, but it doesn't have magical certs that are somehow safe; its certs are just like all other certs.

    A quick Google search yields too much about the company. Can you point me at what you're referring to, so I can clear up my ignorance?

  • by gnasher719 ( 869701 ) on Monday April 14, 2014 @09:44AM (#46746643)

    The fact that they are reissuing certificates clearly indicates that they were vulnerable to Heartbleed.

    That seems to be a US thing, where trying to fix a problem is taken as an admission of guilt. (I heard this weird story that US hospitals have a problem if one of their X-ray machines breaks and the replacement is a better model, because anyone examined on the older machine can claim they didn't get the best possible treatment.)

  • a) I would not say this is the worst ever - it allows random memory to be viewed, which may or may not contain something valuable. There is no evidence (yet) that this was actually exploited prior to its publication. Various other breaches have resulted in the proven loss of millions of identities, and near-billions in actual money. If it had been exploited heavily, it would probably have been tracked down earlier.

    Technically it's not the worst - it's the same as literally thousands of other exploited bugs, and just yet another example of why C should not be used for applications programming, at least without a very strong IDE to catch these kinds of problems and perhaps a macro system that forces bounds checking. 'Programming without a net' is _sometimes_ necessary when programming at the metal, but OpenSSL, though it needs high performance, is not an example of that. It's also an example of why software quality methods need to be followed for this kind of code, especially for a relatively new member of the programming team - and why OpenSSL and other OSS projects need our support.

    b) Fortunately, the barn door seems to have been shut before much got out. We'll see, but that's the present apparent situation. There will probably be a few relatively small ongoing successful exploits on servers that don't get fixed, as usual. But this is not anything like a wholesale loss of 100 million credit card records.

    c) In this case there was a failure of the open source model of 'many eyes'. But there have been thousands of such failures in proprietary software, some of which resulted in the really big exploits, and those were invisible until the exploit was used. Here, open source at least allowed researchers to identify the bug before it was seriously exploited (as far as we know today).
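The bounds-checking point above is exactly the Heartbleed pattern: the heartbeat handler trusted an attacker-supplied length field and copied that many bytes out of memory, regardless of how many bytes were actually received. The sketch below is a simplified, hypothetical model (the struct and function names are invented for illustration, not OpenSSL's actual API) showing the check that was missing: reject any record whose claimed payload length exceeds what was really received.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified model of a heartbeat record. The peer
 * supplies claimed_len; actual_len is how many payload bytes really
 * arrived on the wire. In Heartbleed, code trusted claimed_len and
 * read past the end of the received data. */
struct heartbeat {
    uint16_t claimed_len;          /* attacker-controlled length field */
    const unsigned char *payload;  /* received payload bytes */
    size_t actual_len;             /* bytes actually received */
};

/* Checked copy: drop the record if the claimed length exceeds either
 * the data actually received or the output buffer. Returns the number
 * of bytes copied, or -1 for a malformed record (the fixed OpenSSL
 * similarly discards such records silently). */
int copy_payload_checked(unsigned char *out, size_t out_cap,
                         const struct heartbeat *hb)
{
    if (hb->claimed_len > hb->actual_len || hb->claimed_len > out_cap)
        return -1;
    memcpy(out, hb->payload, hb->claimed_len);
    return (int)hb->claimed_len;
}
```

A macro system or language that forces this comparison before every copy, as the comment suggests, would have turned the over-read into a compile-time or runtime error instead of a silent leak of up to 64 KB of process memory per request.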
