Heartbleed OpenSSL Vulnerability: A Technical Remediation
An anonymous reader writes "Since the announcement, malicious actors have been leaking software library data and using one of the several provided PoC codes to attack the massive number of services exposed on the internet. One of the more complicated issues is that the OpenSSL patches were not in line with upstream on the large Linux flavors. We have had an opportunity to review the behavior of the exploit and have come up with the following IDS signatures to be deployed for detection."
what? (Score:5, Insightful)
Was this badly translated from another language, or have I been out of system administration too long?
Re:what? (Score:5, Informative)
Was this badly translated from another language, or have I been out of system administration too long?
Allow me to translate from buzz-ard to sysopian:
SSL-Ping Data Exfiltration Exploit: Detection and mitigation even a flaming lamer that can't patch OpenSSL can use
"Since this 0-day vuln was published skiddies have been exploiting it to leak data available to OpenSSL 64KB at a time via running one of the pre-written exploit proof-of-concept sources (as skiddies are wont to do) against a bunch of affected Internet facing services. This SNAFU is particularly FUBAR since all the distros that noobs use are building an ancient OpenSSL ver so they can't even push out a simple patch, obviously. We fingered the exploit in use and have a signature so your punk-buster scripts can detect the crackers and ATH0 before your cipher keys get the five-finger discount."
Re:what? (Score:5, Funny)
let me run that thru the jive translator:
"well, shit!" ==> "golly!"
Re: (Score:2, Informative)
If someone is using an ancient openssl library (0.9.8) they have nothing to worry about. The problem was introduced in 1.0.0
Re: (Score:3)
Wrong, in 1.0.1.
Re: (Score:3, Insightful)
VortexCortex should be a slashdot editor then. That would be entertaining at least.
Re: (Score:3)
It is some folks trying to drag IDS back out from the grave. The issue is that, generally, IDS works extremely poorly and causes extreme operations effort (somebody has to look at all the alerts). For this specific thing, for once, IDS can be used to detect the problem, and the whole story revolves around that. Of course the approach is fundamentally flawed: if your patch management is so bad that you cannot fix all affected OpenSSL installations pretty fast, then you are doomed security-wise anyway.
As this
Thank you for the mess (Score:5, Insightful)
We have to thank the security researchers who chose to break the embargo on the news before OpenSSL had coordinated with downstream projects.
Thank you for the mess, guys!
Re:Thank you for the mess (Score:5, Insightful)
To be fair, nobody knows if this was exploited in the wild or not already - so the "mess" was going to happen anyway (unless you planned to patch your server, assuming your certificate was still good, and not tell any of your users that their passwords may have been exposed in the last couple years).
Re:Thank you for the mess (Score:5, Informative)
Sadly, this is not the case. The evidence is that bad actors had this exploit for months: http://arstechnica.com/securit... [arstechnica.com]
Re:Thank you for the mess (Score:5, Insightful)
Midnight_Falcon, you are indeed a rare bird. :)
Re:Thank you for the mess (Score:5, Funny)
Not really. Lots of people are wrong on the internets! :-)
Re:Thank you for the mess (Score:5, Informative)
For people who didn't follow the link chain [seacat.mobi], it has since been updated:
Important update (10th April 2014): Original content of this blog entry stated that one of our SeaCat server detected Heartbleed bug attack prior its actual disclosure. EFF correctly pointed out that there are other tools, that can produce the same pattern in the SeaCat server log (see http://blog.erratasec.com/2014... [erratasec.com] ). I don't have any hard data evidence to support or reject this statement. Since there is a risk that our finding is false positive, I have modified this entry to neutral tone, removing any conclusions. There are real honeypots in the Internet that should provide final evidence when Heartbleed has been broadly exploited for a first time.
Re: (Score:3)
Sadly, this is not the case. The evidence is that bad actors had this exploit for months: http://arstechnica.com/securit [arstechnica.com]...
One of the two sites cited as evidence has since taken a step back:
Important update (10th April 2014): Original content of this blog entry stated that one of our SeaCat server detected Heartbleed bug attack prior its actual disclosure. EFF correctly pointed out that there are other tools, that can produce the same pattern in the SeaCat server log (see http://blog.erratasec.com/2014... [erratasec.com] ). I don't have any hard data evidence to support or reject this statement. Since there is a risk that our finding is false positive, I have modified this entry to neutral tone, removing any conclusions. There are real honeypots in the Internet that should provide final evidence when Heartbleed has been broadly exploited for a first time.
Re:Thank you for the mess (Score:4, Insightful)
News about a vulnerability should never be delayed once a workaround is known. That is, if there is a way to defend your servers, you need to let people know about it so they can defend their servers. Attackers don't wait for disclosure.
In this case there was a simple fix: recompile OpenSSL with the proper flag and carry on. So letting people know as soon as possible was the best option. Those who are serious about security don't wait for Ubuntu to update their apt servers.
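The "proper flag" here is the build-time define recommended in the disclosure, -DOPENSSL_NO_HEARTBEATS. A minimal sketch of how that kind of guard works (my illustration, not the verbatim OpenSSL source):

/* Sketch (illustrative, not the actual OpenSSL source) of why building with
 * -DOPENSSL_NO_HEARTBEATS works as a stopgap: the heartbeat handler is only
 * compiled in when that macro is undefined, so defining it removes the
 * vulnerable code path from the library entirely. */
typedef struct ssl_st SSL;   /* stand-in for the real OpenSSL type */

#ifndef OPENSSL_NO_HEARTBEATS
int tls1_process_heartbeat(SSL *s)
{
    /* ...reads the attacker-controlled payload length and echoes that many
     * bytes back; this is the code the flag compiles out... */
    (void)s;
    return 0;
}
#endif /* OPENSSL_NO_HEARTBEATS */

In practice that meant re-running the OpenSSL build with the define added and restarting anything linked against the library, which closes the hole until a proper package update lands.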
Re:Thank you for the mess (Score:5, Interesting)
Yes, there are some people who are incapable of compiling their own software who will have to wait until the patch comes through. Those people shouldn't be managing security for a large website (or any website really, in an ideal world).
Nonsense. I'd want only vendor supplied fixes applied, unless the vendor is so slow as to be incompetent (but then, why would you be using them?)
Why? Because user applied fixes tend to be forgotten, and if the library isn't managed by the package system (you've uninstalled the package you're overwriting, right?) you might miss subsequent important updates.
An example from a far from fuckwitted user:
http://marc.info/?l=sqlite-use... [marc.info]
Yes, the author of the SQLite library fell prey to this very issue. Let the package manager track packages.
Of course, you could also build binary packages from source, but then that assumes the upstream source packages have been patched, or you're happy to patch the source packages yourself.
Re: (Score:3)
Ubuntu did provide apt patches for all affected versions, including those not supported anymore (12.10 comes to mind). They did it right. If you had configured your security patches to install automatically, it was even transparent. I don't see a problem there.
Re: (Score:2)
Churchill, I suspect
Is OpenVPN affected? (Score:2)
Re:Is OpenVPN affected? (Score:5, Informative)
Some versions are. The OpenVPN appliance I was running was affected, and there were no updates for it this morning so I had to kill it.
https://security.stackexchange... [stackexchange.com]
I read somewhere that there is a TLS flag you can use in the config to disable the affected code, but for the life of me I can't find it for this post. :(
Re:Is OpenVPN affected? (Score:5, Informative)
I think if you have the tls-auth option enabled, it prevents the attack.
IDS != Remedy (Score:2)
Re: (Score:2)
It can help. If you detect and block the traffic, how is the exploit performed?
What version does OpenBSD use? (Score:2)
If it's using one of the affected versions, how did it get past the famed OpenBSD audits?
Re: (Score:2)
Theo claims OpenBSD is unaffected. http://undeadly.org/cgi?action... [undeadly.org]
Re:What version does OpenBSD use? (Score:5, Informative)
Theo claims OpenBSD is unaffected. http://undeadly.org/cgi?action... [undeadly.org]
Theo claims OpenSSH is unaffected, because it isn't. OpenSSL, even on OpenBSD, is quite affected.
Re: (Score:2)
or it is, if you didn't apply the patch they put out the same day
Re:What version does OpenBSD use? (Score:5, Interesting)
That's a good post. I'm going to blatantly copy-paste it because Theo gets to the crux of why OpenSSL is terrible:
From: Theo de Raadt cvs.openbsd.org>
Subject: Re: FYA: http://heartbleed.com/ [heartbleed.com]
Newsgroups: gmane.os.openbsd.misc
Date: 2014-04-08 19:40:56 GMT
> On Tue, Apr 08, 2014 at 15:09, Mike Small wrote:
> > nobody gmail.com> writes:
> >
> >> "read overrun, so ASLR won't save you"
> >
> > What if malloc's "G" option were turned on? You know, assuming the
> > subset of the worlds' programs you use is good enough to run with that.
>
> No. OpenSSL has exploit mitigation countermeasures to make sure it's
> exploitable.
What Ted is saying may sound like a joke...
So years ago we added exploit mitigations counter measures to libc
malloc and mmap, so that a variety of bugs can be exposed. Such
memory accesses will cause an immediate crash, or even a core dump,
then the bug can be analysed, and fixed forever.
Some other debugging toolkits get them too. To a large extent these
come with almost no performance cost.
But around that time OpenSSL adds a wrapper around malloc & free so
that the library will cache memory on it's own, and not free it to the
protective malloc.
You can find the comment in their sources ...
#ifndef OPENSSL_NO_BUF_FREELISTS /* On some platforms, malloc() performance is bad enough that you can't just
OH, because SOME platforms have slow performance, it means even if you
build protective technology into malloc() and free(), it will be
ineffective. On ALL PLATFORMS, because that option is the default,
and Ted's tests show you can't turn it off because they haven't tested
without it in ages.
So then a bug shows up which leaks the content of memory mishandled by
that layer. If the memory had been properly returned via free, it
would likely have been handed to munmap, and triggered a daemon crash
instead of leaking your keys.
OpenSSL is not developed by a responsible team.
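To make Theo's complaint concrete, here is a deliberately simplified sketch (my own illustration, not OpenSSL's actual code) of what a freelist wrapper like that does: freed buffers are recycled internally with their old contents intact, and the protective malloc/free underneath never sees them, so neither guard pages nor a crash-on-misuse allocator can catch the over-read.

#include <stdlib.h>

/* Deliberately simplified freelist wrapper (illustration only, not OpenSSL's
 * actual allocator). Freed buffers are kept on an internal list and handed
 * back out later, so the system malloc/free -- and any use-after-free or
 * guard-page detection built into it -- never sees the release. */
struct freelist_item { struct freelist_item *next; };
static struct freelist_item *freelist;
static const size_t ITEM_SIZE = 4096;          /* one bucket, for brevity */

void *wrapped_malloc(size_t n)
{
    if (n <= ITEM_SIZE && freelist) {
        void *p = freelist;                    /* recycle: stale contents intact */
        freelist = freelist->next;
        return p;
    }
    return malloc(n < ITEM_SIZE ? ITEM_SIZE : n);
}

void wrapped_free(void *p)
{
    struct freelist_item *item = p;            /* never given back to free():    */
    item->next = freelist;                     /* the protective allocator never */
    freelist = item;                           /* gets a chance to catch misuse  */
}

A heartbeat buffer handed out by a wrapper like this can easily still contain whatever the previous user of the block left behind, which is exactly the data the bug leaks.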
it's all over (Score:2)
Re: (Score:3)
Back in my day this wouldn't have been an issue since we ran a host of different custom interfaces and clients. We had to organize our own cross country backhaul via overlapping local calling networks, and orchestrated email routing networks using outdials. Probably only hackers used clients with encrypted links for their BBSs.
I don't know what you're talking about with that fed-speak. I never heard of any crazy lossy crap like duct-taping payphones together neither, but there may have been a few railroa
Re: (Score:2)
NSA, heartbleed, whatever. you'll tell your grandchildren about "back in the day" internet
What in particular do you think will be different about my grandchildrens' Internet?
Situation is a Shambles (Score:5, Insightful)
I'm running Linux Mint Olivia -- the next-to-current version -- and no openssl patch is yet available as of this afternoon. I imagine there are quite a few similar distros. Since I have actual work to do, and can't risk wasting two hours on a potentially borked upgrade, I'm stuck trying not to use programs affected by the exploit for the duration.
While something tells me this exploit is somewhat overblown, what really ticks me off is that this is all the result of delegating memory management to C pointers and basically mmap. As far as I'm concerned, in this day and age, that amounts to spaghetti code and I can't say it endears me to the reliability of openssl.
Please, we need SSL to be secure, not fast. Just use a less efficient method to make things more secure.
Coding Style versus Language (Score:5, Insightful)
There is well written C, and there is poorly written C. I've been through the bowels of OpenSSL, and there are parts of it that frighten me. Ninety percent of the issues in OpenSSL could be solved by adopting a modern coding style and using better static analysis. While static analysis tools can't find vulnerabilities, they can root out the code smell that hides vulnerabilities. If, for instance, I had followed the advice of two of the quality commercial static analyzers that I ran against the OpenSSL code base, I would have been forced to refactor the code in such a way that this bug would either have been obvious to anyone casually reviewing it, or been eliminated by the refactor altogether.
C and C++ are not necessarily the problem. It's true that higher level languages solve this particular kind of vulnerability, but they are not safe from other vulnerabilities. To solve problems like these, we need better coding style in critical open source projects.
Re: (Score:2)
In my experience, focusing on "coding style" makes code quality drop since it creates a culture where "review" is simply making sure you dotted the i's and crossed the t's without actually reading the sentence.
If there is one common belief held by all developers it is that their style is "correct" while everyone else is "wrong". The only difference is now the define wrong: "ugly", "inconsistent", "unclear", "confusing", "hard to maintain", "brittle", etc. If you want to see what they actually mean, ask t
Re:Coding Style versus Language (Score:5, Insightful)
Style, or the lack thereof, is absolutely related to this issue. It created the festering environment that this bug hid in for two years before it was discovered.
Style is about more than pretty print formatting. It's about avoiding the god-awful raw pointer math found in this function. It's about properly bounding values. It's about enforcing the sorts of checks that come naturally to programmers with more experience and less bravado. You may not appreciate the need for good style yet, but I bet you that the OpenSSL team is rethinking this now. To know that such a sophomoric mistake lingered for two years, even though hundreds of eyes passed over that code, is the epitome of why good programming style matters. The people who looked at this code are likely much smarter than you or I. They could not follow the logic of this code, because their eyes glossed right over this glaring bug. That's bad style. Everything else is window dressing.
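As a concrete illustration of the kind of refactor being argued for here (a sketch of my own, not a proposed OpenSSL patch): replace raw pointer arithmetic with a small bounded cursor, so every read is checked against the remaining record length and an over-long request fails visibly instead of walking off the end of the buffer.

#include <stdint.h>
#include <string.h>

/* Bounded read cursor (illustrative sketch, not OpenSSL code). All parsing
 * goes through helpers that check the remaining length first. */
typedef struct {
    const unsigned char *p;
    size_t               remaining;
} cursor_t;

static int cursor_read_u16(cursor_t *c, uint16_t *out)
{
    if (c->remaining < 2)
        return -1;                              /* not enough bytes left */
    *out = (uint16_t)((c->p[0] << 8) | c->p[1]);
    c->p += 2;
    c->remaining -= 2;
    return 0;
}

static int cursor_read_bytes(cursor_t *c, unsigned char *dst, size_t n)
{
    if (c->remaining < n)
        return -1;                              /* the Heartbleed case: refuse */
    memcpy(dst, c->p, n);
    c->p += n;
    c->remaining -= n;
    return 0;
}

With helpers like these, a heartbeat handler simply cannot copy more payload than the record actually contains without failing an explicit, reviewable check.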
Re: (Score:2)
C and C++ are not necessarily the problem. It's true that higher level languages solve this particular kind of vulnerability, but they are not safe from other vulnerabilities. To solve problems like these, we need better coding style in critical open source projects.
It's better to remove a very large class of bugs by the language making them impossible rather than insisting that a certain coding style will save you, "This time for sure!"
Re: (Score:2)
I meant that the refactor would make the bug obvious. However, as is the case with any bit of refactoring, one often finds bugs, writes test cases to capture these bugs, and then comes back to eliminate them. While the pedantic would argue that refactoring keeps functionality the same, refactoring is just one step in a larger process of code stewardship that includes the isolation and elimination of bugs. When a refactor makes a bug obvious, I contend that the refactor helps to eliminate that bug.
Either
Re: (Score:2)
While something tells me this exploit is somewhat overblown, what really ticks me off is that this is all the result of delegating memory management to C pointers and basically mmap. As far as I'm concerned, in this day and age, that amounts to spaghetti code and I can't say it endears me to the reliability of openssl.
It has nothing to do with mmap or C pointers per se. The issue is simply bad programming. Someone wrote code that trusted unvalidated user input and they got bit in the ass. Whoever performed the code review should have known better, even if the developer didn't.
Re: (Score:2)
I don't get why we have to say "the developer"? It was Robin Seggelmann that submitted this bit of buggy openssl code. He either works for the NSA or is grossly incompetent...
If competence were a requirement for being a developer, how many developers do you think would be out of work?
Re:Situation is a Shambles (Score:5, Insightful)
It was Robin Seggelmann that submitted this bit of buggy openssl code. He either works for the NSA or is grossly incompetent...
Or he made a dumb mistake, as 100% of programmers have done and will do again in the future. Anyone who expects programmers (even the best programmers) to never make mistakes is guaranteed to be disappointed.
The real issue here is that the development process did not detect the mistake and correct it in a timely manner. Code that is as security-critical as OpenSSL should really be code-reviewed and tested out the wahzoo before it is released to the public, so either that didn't happen, or it did happen and the process didn't detect this fault; either way a process-failure analysis and process improvements are called for.
Re: (Score:2)
I'm running Petra (16) and patch was out yesterday morning. sucks to be you.
Re:Situation is a Shambles (Score:5, Interesting)
This is not a memory management issue per se, and has nothing to do with mmap or malloc. In fact, the malloc succeeds just fine. Rather than just explaining in text, it might be easier if i simplify the issue in C parlance (this would look neater if slashdot allowed better code formatting):
char *rec_p = record;                      /* start of the received record */
uint16_t rec_len = SSL3_RECORD_LEN;        /* actual length of that record */
uint16_t user_len = *(uint16_t*)rec_p;     /* length field taken from the attacker's own message */
rec_p += sizeof(uint16_t);
char *buf = malloc(user_len);              /* the malloc itself succeeds fine */
memcpy(buf, rec_p, user_len);              /* BUG: user_len is never checked against rec_len, so this reads past the record into adjacent heap memory */
reply(buf);                                /* ...which is then sent back to the attacker */
Due to the fact that this code works more or less exactly as designed, the exploit functions across architectures and operating systems. This bug is so amateurish, i almost find it difficult to believe that it was unintentional.
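For comparison, the missing validation is essentially a one-line check that the claimed payload fits inside the record that was actually received. In terms of the simplified snippet above (a paraphrase of the idea behind the real fix, not the literal patch):

/* Reject heartbeats whose claimed payload length exceeds what the record
 * actually carries (using the names from the simplified snippet above). */
if ((size_t)user_len + sizeof(uint16_t) > rec_len)
    return;                          /* silently drop it, don't echo memory */

char *buf = malloc(user_len);
if (buf == NULL)
    return;
memcpy(buf, rec_p, user_len);        /* now bounded by the record length */
reply(buf);
free(buf);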
Re: (Score:3, Interesting)
This is not a memory management issue per se, and has nothing to do with mmap or malloc.
But what the grandparent post said still applies. It's how C treats memory via pointers. The issue, from looking at the code you posted, is that memcpy() copies from beyond the length of rec_p. In a sane language that doesn't treat memory as free-for-all, this isn't possible.
Due to the fact that this code works more or less exactly as designed, the exploit functions across architectures and operating systems. This bug is so amateurish, i almost find it difficult to believe that it was unintentional.
It's the kind of mistake programmers make all the time in C. Sure, you can tell me battle-hardened, conscientious, professional programmers wouldn't make this mistake. Whatever, we've seen this kind of thing too many times for this sent
Re: (Score:3, Informative)
This is not a memory management issue per se, and has nothing to do with mmap or malloc.
But what the grandparent post said still applies. It's how C treats memory via pointers. The issue, from looking at the code you posted, is that memcpy() copies from beyond the length of rec_p. In a sane language that doesn't treat memory as free-for-all, this isn't possible.
No, that's not the issue, in fact there really isn't any significant pointer arithmetic used here. Yeah, it does use a bit to pull the size field out of the incoming request, but there's nothing wrong with that part of the code.
The issue is that the code allocates a buffer of a size specified by the user, without validating it, and doesn't zero the allocated memory. Yes, many languages automatically zero heap-allocated arrays, which is good, but it's also a performance cost which is often unnecessary and
Re: (Score:2)
So "shambles" means user supplied data...
eating it without question means not validating it before use...
1 Cor. 10:25 "Whatsoever is sold in the shambles, that eat, asking no question for conscience sake"
then.. uhhh...
Wait.. is this a GOOD thing??
Re: (Score:2)
This has little to do with anything C-specific. If you were re-using a buffer in some managed runtime, you would still see the same problem.
The problem is related to a missing check on a user-provided value. It is a pretty common kind of bug, really, since it isn't often obvious which level of the stack is supposed to check it (hence why assertions are helpful - this would have been a crash rather than a security hole).
The unfortunate thing is that this kind of bug detection isn't easily automated (since
Re: (Score:2)
This has little to do with any C-specific. If you were re-using a buffer in some managed runtime, you would still see the same problem.
Most managed runtimes perform bounds checks, C does not. As a result, the same bug couldn't happen in Java or C#. Of course, bounds checks come with a cost, and one that most people wouldn't want from low level code, which means that C/C++ developers must be extra vigilant.
Re:Situation is a Shambles (Score:4, Informative)
That sounds like a Mint thing. Seriously, Debian (the great grandparent of Mint) had the patch as fast as anybody. Heck, by the time I logged into my Mac at work, MacPorts had pushed the patch.
I wouldn't make such a sweeping statement about the "situation" when you've hitched your wagon to a project that's pulling from a project that's pulling from a project that's (etc).
Re: (Score:3)
That sounds like a Mint thing. Seriously, Debian (the great grandparent of Mint) had the patch as fast as anybody. Heck, by the time I logged into my Mac at work, MacPorts had pushed the patch.
I wouldn't make such a sweeping statement about the "situation" when you've hitched your wagon to a project that's pulling from a project that's pulling from a project that's (etc).
Interestingly our Debian servers are completely unaffected by this bug since we use Debian 6 :) Sometimes it pays to be a little behind the times.
Re: (Score:2, Informative)
You're amazingly wrong.
http://article.gmane.org/gmane.os.openbsd.misc/211963
This has nothing to do with unmanaged languages. It has to do with somebody actively sidestepping security devices that are already in place because they don't grok the way the world works outside of their test bench.
What do you think Python was written in? Here's a hint, it wasn't another managed language.
Re: (Score:3)
Unmanaged languages have their place.
C was designed to write operating systems.
Java and the like are designed to write applications.
Unmanaged languages are used to write the managed-language virtual machines. You can't get away from that.
JVM's are written in C and C++, the CLR is the same. Which managed language do you suggest to use that was not built with C?
Re:Situation is a Shambles (Score:4, Insightful)
JVM's are written in C and C++, the CLR is the same. Which managed language do you suggest to use that was not built with C?
The point isn't to eliminate C code entirely, but to minimize the number of lines of C code that are executed.
If (statistically speaking) there are likely to be N memory-error bugs per million lines of C code, then the number of memory-error bugs in a managed-language program will be proportional to the size of the interpreter, rather than proportional to the size of the program as a whole.
Add to that the fact that interpreters are generally written by expert programmers, and then they receive lots and lots of testing and debugging, and then (hopefully) become mature/stable shortly thereafter; whereas application code is often written by mediocre programmers and often receives only minimal testing and debugging.
Conclusion: Even if the underlying interpreter is written in C, using a managed language for security-critical applications is still a big win.
Re:Situation is a Shambles (Score:4, Interesting)
Add to that the fact that interpreters are generally written by expert programmers, and then they receive lots and lots of testing and debugging, and then (hopefully) become mature/stable shortly thereafter; whereas application code is often written by mediocre programmers and often receives only minimal testing and debugging.
I'd wager that most of those writing/maintaining OpenSSL are not only expert programmers but, overall, more security conscious than the authors/maintainers of interpreters. Your point would be completely valid if the topic were some bulletin board / wiki / chat program / etc. Sadly, that's not the case at hand.
Re:Situation is a Shambles (Score:5, Funny)
Also, managed languages like Java and .NET are written in other managed languages running bytecode, making them extra secure. At no time do any of these languages use libraries or environments written in lower level languages such as C++, C, or assembler. So to the GP's credit, programmers who know those languages are okay to die off since we do not need them anyway.
Re: (Score:2)
To be fair, not many of the security bugs in Java are caused by Java code. Off the top of my head the only recent one was an early version of Java 7 that allowed untrusted code to bypass the security manager.
Most of it comes from the Java Browser plugin, which is written in C++, and why you should never run Java code in a browser.
Is there a way to tell? (Score:2)
Like some site (along the lines of "what is that site running?" (Apache, IIS, etc.)) where we can see who got what fixed and when. No point in changing my passwords on a still-affected site.
Re:Is there a way to tell? (Score:5, Informative)
Qualys SSL Test [ssllabs.com] is including a flag for Heartbleed vulnerability and auto-fails any domain tested that is affected.
Re: (Score:2, Informative)
http://filippo.io/Heartbleed/ [filippo.io]
Re: (Score:2)
there are scripts that can scan for the vulnerability. I'm amused that many major banks, credit card companies and a certain well known pay-your-friend site (at least a couple of their URL, not all their services) have neither acknowledged the bug, nor patched it.
Re: (Score:2)
Site title: Slashdot: News for nerds, stuff that matters
Date first seen: September 2004
Site rank: 5747
Primary language: English
Description: Not Present
Keywords:
Several! (Score:5, Informative)
There have been a number of sites.
SSLLabs scanner has been updated to check for Heartbleed, and also will report when the cert validity starts (handy if you want to see whether they're using a new cert). https://www.ssllabs.com/ssltes... [ssllabs.com]
LastPass has a pretty decent scanner that just focuses on Heartbleed (without all the other info that you get from SSLLabs): https://lastpass.com/heartblee... [lastpass.com]
There are some others out there as well, of course.
There's even one for client-side testing (almost as critical):
Pacemaker is an awesome little POC script (python 2.x) for testing whether a *client* is vulnerable (many that use OpenSSL are...). https://github.com/Lekensteyn/... [github.com]
Where's the changelog entry for this (Score:2)
Is bypassing/wrapping/whatever-they're-doing OS calls to change how memory is managed not a big enough change to warrant an entry in the 1.0.0h -> 1.0.1 log?
Reality Check. The sky is not falling. (Score:4, Informative)
1. The issue only exposes 64k at a time. Let's assume that the average enterprise application has at least a 1G footprint (and that's actually on the low end of most applications I work with). That's 1,048,576K. At best, this means that this exploit can access 0.006% of an application's memory at one time.
Ahh, you say, I will simply make 16,384 requests and retrieve all the memory used by the application.
2. The entire basis of this issue is that programs reuse memory blocks. The function loadAllSecrets may allocate a 64k block, free it, and then that same block is used by the heartbeat code in question. However, this code will also release this same block, which means that the block is free for use again. Chances are very good (with well optimized code) that the heartbeat will be issued the same 64k block of memory on the next call. Multi-threaded/multi-client apps perturb this, but the upshot is that it's NOT possible to directly walk all of the memory used by an application with this exploit. You can make a bazillion calls and you will never get the entire memory space back. (You're thinking of arguments to the contrary; you're wrong... you won't.)
Congratulations, much success... you have 64k internet.
3. Can you please tell me where the passwords are in this memory dump:
k/IsZAEZFgZueWNuZXQxFzAVBgNVBAMTDk5ZQ05FVC1ST09ULUNBMB4XDTEwMDMw
MzIyNTUyOFoXDTIwMDMwMzIyMTAwNVowMDEWMBQGCgmSJomT8ixkARkWBm55Y25l
There will be contextual clues (obvious email addresses, usernames, etc) but unless you know the structure of the data, a lot of time will be spent with brute force deciphering. Even if you knew for a fact that they were using Java 7 build 51 and Bouncy Castle 1.50, you still don't know if the data you pulled down is using a BC data structure or a custom defined one and you aren't sure where the boundaries start and end. The fact that data structures may or may not be contiguous complicates matters. A Java List does not have to store all members consecutively or on set boundaries (by design, this is what distinguishes it from a Vector).
Long story short. Yes, there is a weakness here. However, it's very hard to _practically_ exploit... especially on a large scale (no one is going to use this to walk away with the passwords for every gmail account... they'd be very, very lucky to pull a few dozen).
This doesn't excuse developers from proper programming practices. It's just putting "Heartbleed" in perspective.
Re: (Score:3)
But you know the vulnerable host is running OpenSSL 1.0.1 -> 1.0.1f, so you can look at the source code to figure out what the memory around the private key is supposed to look like.
Re: (Score:2)
This guy has retracted part of his analysis based on comments, but tries to make a case that passwords and cookies in the http headers are more likely to be exposed than keys. Remember, http-auth is still used a lot. http://blog.erratasec.com/2014... [erratasec.com]
Re:Reality Check. The sky is not falling. (Score:5, Informative)
Can you please tell me where the passwords are in this memory dump ...
Have you ever seen a real exploited piece of data?
These are taken from Yahoo production servers, a day or two ago:
http://cdn.arstechnica.net/wp-... [arstechnica.net]
http://cdn.arstechnica.net/wp-... [arstechnica.net]
Can you guess where the password is, now? (And those didn't even take that many tries)
I have not seen actual SSL private keys floating around just yet, but given that the original researchers said [heartbleed.com] they managed to get private keys from their own servers, I think it is reasonable to assume that some production servers must have already leaked them.
IDS signatures might not work in all cases (Score:3)
While the proof of concept exploit used an unencrypted attack, the vulnerability can still be exploited AFTER the session is encrypted.
Since the IDS probably cannot decrypt the SSL connection, it is unlikely to detect an attack that occurred after encryption was negotiated; the extension message is invisible to the IDS.
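For what it's worth, the circulated signatures mostly key on the plaintext TLS record header, which remains visible even after the handshake: content type 0x18 marks a heartbeat record, and an abnormally large record length (or responses far larger than the requests) is the tell. The payload-length mismatch inside the message is indeed hidden once encryption is up, which is the limitation described above. A rough sketch of that header-level check in C (my illustration of the general approach, not any published rule):

#include <stdint.h>
#include <stddef.h>

/* Rough sketch of header-level heartbeat screening (illustrative only, not
 * any particular vendor's signature). The 5-byte TLS record header is sent
 * in the clear even on an established session: byte 0 is the content type,
 * bytes 3-4 the record length. Content type 0x18 is a heartbeat record. */
#define TLS_CONTENT_TYPE_HEARTBEAT 0x18
#define SUSPICIOUS_HEARTBEAT_LEN   128   /* arbitrary threshold for the sketch */

static int looks_like_heartbleed(const uint8_t *rec, size_t len)
{
    if (len < 5)
        return 0;
    if (rec[0] != TLS_CONTENT_TYPE_HEARTBEAT)
        return 0;
    uint16_t rec_len = (uint16_t)((rec[3] << 8) | rec[4]);
    return rec_len > SUSPICIOUS_HEARTBEAT_LEN;   /* flag oversized heartbeats */
}

Anything smarter than this needs visibility inside the TLS session, which is exactly what an IDS usually does not have.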
what is the point of IDS? (Score:2)
What is the point of IDS? If you detect an attack, your private keys are compromised and the game is over.
And then you try to recover, you make new keys, renew certificates, revoke the old one... but since certificate revocation is quite broken, you never recover. An attacker that stole your old private key will still be able to masquerade as the legitimate server.
Tell your users, too! (Score:2)
Follow the proposed specification at http://heartbleedheader.com [heartbleedheader.com] to tell your users when you've patched your servers. This eliminates the guessing: "is it OK to update my password now? Do I even need to? Can I trust that I'm not being MITMed with their old SSL key that an attacker stole?" It's bad enough using the tools at hand to detect that information from a single site, let alone the hundreds you might have in your password manager.
What I want to know is... (Score:2)
If OpenSSL is (as quite a few people who know what they are talking about have claimed) poorly written and hard to maintain, why has no one tried to come up with a simple, easy-to-evaluate alternative?
Or is SSL/TLS really that hard to properly implement?
Re: (Score:2)
People have; it's called NSS.
Re:Mountain out of a molehill (Score:5, Insightful)
Except now pretty much every affected machine needs to have its SSL certificates and private keys revoked and trashed, and new keys/certificates issued.
In the meantime, thousands (if not millions) of sites leaked sensitive data to anyone who wanted to snoop on it.
Yeah, no big deal, none at all...no repercussions will come of this.
Re:Mountain out of a molehill (Score:5, Interesting)
It's worse than that. Most browsers don't check certificate revocation lists [spiderlabs.com], and the certificate authority might not have a CRL infrastructure in place that can support the number of revoked certs generated by this incident.
Granted, they could take the money they receive from all the reissued certificates and use it to build such an infrastructure, but they probably won't.
Web-SSL was already a broken system [theregister.co.uk]. Now that it's been cracked open even wider, maybe we can throw it out and implement something better.
Oh, who am i kidding? We'll just pretend everything's okay again after most people have patched the hole.
Re: (Score:2)
Oh, who am i kidding? We'll just pretend everything's okay again after most people have patched the hole.
Don't worry, there are plenty of other holes.
Re: (Score:2)
You were better off using non-SSL, unless you were on wireless or something easily snooped. I'm not aware that plain HTTP servers on port 80 have a little query that gets you memory dumps. Do I misunderstand?
Re:Mountain out of a molehill (Score:5, Informative)
It's not easy to fix leaked data.
You can revoke keys, change passwords, and patch the software, but you can't revoke the data that was already sent with them (and can now be decoded), any more than you can revoke the bits of data that could have been stolen.
Re: (Score:3)
You can't unsend that data, but perfect forward secrecy [wikipedia.org] means that old data can't be decrypted even if the SSL key leaks, and new data can only be decrypted with an active MITM.
...if only people would actually turn it on.
Of course, this particular vulnerability is even worse than just exposure of on-wire traffic. It also exposed potentially anything in memory for the past two years, including the things you didn't even want to send to other people -- and it exposed them to anybody on the internet, not just
Re: (Score:3)
Nope. I am a senior engineer for an IT security firm. I fix this shit for a living, thank you.
Re:Mountain out of a molehill (Score:5, Insightful)
I think you completely missed my point. The hand wringing is useless. Fix it, mitigate it, and try to move on. Any damage that has been done is one. All that cane be done now is to patch and mitigate. All the wrangling going on on the 'net is amusing. The past can't be changed. We can learn from it and move on. There are plenty of ways to stop the bleeding. People are acting like the sky is falling. It's truly sad that you're one of them.
Re: (Score:2)
WOW, bad spelling and typos. I chalk that up to the beers I am drinking :)
Seriously though, it's not as big a deal as it's being made out to be. Yes it caused a security scramble, and rightly so. No, the sky is not falling.
Re:Mountain out of a molehill (Score:5, Informative)
I work for a large financial organization. While fixing the hole itself was easy, having to tell a bunch (I can't even legally give you a ballpark, but it's a lot) of customers to change their passwords (or forcing them to change) is very bad PR. Plus we don't know if any financial data was accessed. The data could literally bankrupt very large companies, or my own company. This is no small problem!
Not exactly a molehill. (Score:2)
There are many organizations that not only can't patch (or don't know how to, or simply haven't finished patching), but also don't _have_ an IPS or IDS in place. In fact, even if a company is in a position (and has the know-how) to install one, using either of these options may come with what is perceived as an unacceptable performance impact.
I managed to write an exploit for this issue within about 30 minutes. The bug is almost trivial to exploit. In my meager tests, i gathered usernames, passwo
Re: (Score:2)
That is why part of the remediation process is new certs. I didn't say it wasn't a pain in the ass, but it's trivial with regards to the amount of work involved.
Re: (Score:2)
And don't forget the GnuTLS failure similar to Apple's
Now we're just waiting to hear that Microsoft's IIS was also borked in some unexpected way, and it'll be a royal flush eh?
Re: (Score:2)
Well, Microsoft's CAPI (CryptoAPI) actually, not IIS. IIS uses CAPI, but IIS is no more a crypto toolkit than Apache or lighttpd are. A vuln in CAPI (they've happened before) could also affect clients (IE, Outlook, anything else using the platform APIs...).
Besides, we're still waiting on a NSS issue. NSS isn't so much *broadly* used - I know of only a few product families that use it - as it is *heavily* used. The product families in question are Mozilla anything (Firefox, mostly; the N stands for "Netscape
Re: (Score:2)
Mavericks has openssl version 0.9.8y, which is too old to be vulnerable. MacPorts will give you a vulnerable 1.0.1 version that, last I checked earlier today, had no patch.
Re: (Score:2)
0.9.8 doesn't support any protocol newer than TLS 1.0, so while it's safe from heartbleed it's also old and verging on deprecated.
Also, it's not that rare for software to use its own copy of OpenSSL, either as a bundled library or statically compiled into the program. I don't actually know of any Mac software that I'm sure does this, but that's not saying much since I don't use a Mac. Things I would expect to find it in are cross-platform programs that use OpenSSL but want a newer branch than 0.9.8 (Python
Re: (Score:2)
your popular PHP5 platforms will be so safe on that