SSL Holes Found In Critical Non-Browser Software
Gunkerty Jeb writes "The death knell for SSL is getting louder. Researchers at the University of Texas at Austin and Stanford University have discovered that poorly designed APIs used in SSL implementations are to blame for vulnerabilities in many critical non-browser software packages. Serious security vulnerabilities were found in programs such as Amazon's EC2 Java library, Amazon's and PayPal's merchant SDKs, Trillian and AIM instant messaging software, popular integrated shopping cart software packages, Chase mobile banking software, and several Android applications and libraries. SSL connections from these programs and many others are vulnerable to a man in the middle attack."
Re: (Score:2, Informative)
Shocking indeed. It's pretty much this story, but without the Android reference:
http://yro.slashdot.org/story/12/10/20/0545252/poor-ssl-implementations-leave-many-android-apps-vulnerable [slashdot.org]
Who'd have thought that poorly coded code is poorly coded?
This again? (Score:5, Insightful)
News Flash: People bypass inconvenient security features. Security reduced as a result.
How does this at all lead to a "death knell" for SSL?
Death knell? Really? (Score:5, Insightful)
The death knell for SSL is getting louder
What does this mean? Just that vendors should be using the newer versions of SSL that were rebranded TLS? Or is there another, competing technology that is recommended instead?
Re:Death knell? Really? (Score:5, Informative)
It means that both Gunkerty Jeb and Timothy didn't read TFA and are both fucking stupid.
Summary: libraries, including OpenSSL, allow you to selectively ignore part or all of the certificate chain verification, which is exactly what your fucking browser asks you to do when you visit a site with a self-signed or expired cert. TFA argues that this is the wrong behavior. TFA also doesn't understand that sometimes you don't care that much about MITM, just that the traffic is encrypted to make the current session opaque.
TFA also doesn't understand what the layers of security are around Amazon's EC2 toolkit, either.
Re: (Score:1)
Mine are!
Re: (Score:3, Insightful)
TFA also doesn't understand that sometimes you don't care that much about MITM, just that the traffic is encrypted to make the current session opaque.
Your session is not going to be very opaque if there's a man in the middle listening in.
Re:Death knell? Really? (Score:4, Insightful)
TFA also doesn't understand that sometimes you don't care that much about MITM, just that the traffic is encrypted to make the current session opaque.
That allows you to have a wonderfully secure conversation with whoever is snooping. Great step forward there!
It's important that clients verify the identity of the servers to which they connect, but they can do so in many ways. Public HTTPS does it in a particular pattern, but a self-signed certificate also works (provided you've distributed the server's public key to clients in a trusted way first). The problem with self-signed on the public HTTPS web is that there are too many sites for it to be at all practical for you to acquire all their self-signed public certificates before connecting to any of them; that advantage (of the CA system) ceases to be very relevant on a closed system such as an intranet, though larger intranets can go for things like a private CA.
Expired certificates or non-matching host certificates are a demonstration of poor deployment.
Re: (Score:1)
Yup. And a lot of times people just don't give a damn.
There are other reasons to use weak SSL - most of them rather stupid and created by not-so-bright IT policies. For example: tunneling through stupidly configured proxy servers often requires an SSL connection - it does not matter to anyone involved what certificate is used to establish such a connection, the proxy simply often wants the connection to be encrypted using SSL.
ssh gets by just fine w/o Uzbekistan's CA (Score:1)
The public CAs are fundamentally untrustworthy. Your only hope is to do like ssh: keep track of certificates that have been seen, and raise an alert if a site's certificate ever changes. Self-signed isn't worse than China-signed, Belarus-signed, Russia-signed, France-signed, Israel-signed, or any of the other supposedly "trustworthy" public CAs you so love.
Re: (Score:2)
I'm sure that would work really well: NOT
This policy does not work because one of the first websites people visit is Google, and they change their certificates on a weekly basis (yes, all servers; every week they roll out new certs).
Re: (Score:1)
(yes all servers; every week they roll out new certs).
Apparently I just hit the one server they forgot about:
Common Name (CN) www.google.com
Serial Number 4F:9D:96:D9:66:B0:99:2B:54:C2:95:7C:B4:15:7D:4D
Issued On 10/26/11
Expires On 10/1/13
Unless "new certs" means one that was issued a year ago.
Re: (Score:2)
I don't know; that is what I heard in a presentation. I think it is somewhere online; I'll see if I can find it.
Re: (Score:2)
Doesn't matter, unless they roll out a new CA cert every week.
It's the longevity, and security, of the CA cert that matters.
Re: (Score:2)
I don't really like the CA model either, but your suggestion doesn't seem thought through. SSH asks you to actually verify the fingerprint of the new host key you are trying to connect to; this would be quite hard for non-technical users who just want to visit their bank's website. And as others commented, that would also be a PITA with key rollovers.
No, the real solution I think is developed in the DANE IETF WG: distributing keys through DNS, secured by DNSSEC.
Re: (Score:3)
That's because that's a nonsense idea. If the scheme you use is vulnerable to MITM, then sessions using that scheme are not opaque to unintended eavesdroppers. That's what "MITM" means.
Re: (Score:2)
TFA also doesn't understand that sometimes you don't care that much about MITM, just that the traffic is encrypted to make the current session opaque.
Others have already weighed in, but I have to pile on too. You need to realize this is an absurd position.
What is the point of an 'opaque session' if you are having it with an unknown party?
If you are willing to talk to anyone who presents themselves as the endpoint and you don't authenticate them, what does it matter if someone else can't listen in... for al
Re: (Score:2, Informative)
Your parent stated it badly. It's not that you aren't worried about Monkey in the Middle. It's that you aren't worried about third party identity verification to avoid MITM. If you self-sign your own certificate, then you know that it's valid. You aren't relying on a third party signer (e.g. VeriSign) to validate it. You are validating it.
The thing is that for this to work, you need to verify the certificate. If you don't verify the certificate, then you can end up with MITM attacks. There is a mecha
Re: (Score:2)
MITM in cryptography usually stands for "man in the middle". "Monkey in the middle" is a kid's game where a group stands in a circle and tries to keep the ball away from a single kid designated the "monkey".
Re: (Score:2)
MitM success means opaque fail. If you want opaque, you must prevent MitM. And if the transit has no MitM opportunity, then what's the point of opaque in the first place?
Re:Death knell? Really? (Score:5, Informative)
It means that this "post" is really clickbait. And now we know why no one RTFA.
Re:Death knell? Really? (Score:5, Informative)
Yes, it is, and it's BS that libcurl got caught in the middle. By default libcurl is secure.
Re: (Score:3)
It means that this "post" is really clickbait. And now we know why no one RTFA.
Yes - please, nobody make any more topical comments about the "death knell" phrase or you're just going to encourage this kind of submission whoring and editors who play along. They'd love to see a long thread debating the merits of whether or not TLS is about to go extinct, and there will be trolls to fuel such an absurd thread if you allow it.
They'll have to make do with a small number of page views on the meta bitching about
Re: (Score:2)
Unfortunately they can't leave TLS because of IE 6 and maybe IE 7(?) support. These apps use HTML from IE for functionality and need to support these older browsers for corps and people who refuse to upgrade to a modern browser.
Another reason to also get rid of XP as you can't upgrade someone's IE from your own setup.exe program.
Arstechnica.com had an article (older as I can't find it to link) which showed TLS to be ineffective by 2016 as computers became faster and through collisions will be able to hack t
Death knell? (Score:2)
Re: (Score:2)
or Glaive ;-)
http://en.wikipedia.org/wiki/Krull_(film) [wikipedia.org]
Not as interesting as.. (Score:2)
A death cruller!
Man in the middle? (Score:2)
Re:Man in the middle? (Score:4, Insightful)
As long as you are using a legit SSL cert (available for less than $10 annually) with decent cipher strength (again, available for less than $10 annually), man in the middle should be impossible with TLS/SSL and proper use of it by the client (don't connect and send sensitive data if the SSL cert isn't valid or the signer isn't trusted).
Re:Man in the middle? (Score:5, Insightful)
You're better off running your own CA and distributing that CA's public key to your internal apps. Then you can ignore outside CAs but still avoid MITM attacks.
Re: (Score:2)
And for the record, OpenCA [openca.org] is all you need, although it's not super simple to use.
JigJag
Re:Man in the middle? (Score:5, Informative)
There's not really any such thing as a "legit" certificate; you're referring to a signed one. This does nothing to protect against a man-in-the-middle attack. What it does do is establish a chain of trust linking your certificate back to an authority. If that authority is trusted then your cert can be too (to the extent you trust the authority). If, and that's a big if, we trust that _all_ trusted authorities will thoroughly vet the certificates they sign, then we can _trust_ that a MITM attack cannot occur; realistically, "legit" certificates do nothing more than that. If, say, the US DoD (once/often? a trusted authority) decides to MITM you, they can just sign a cert and MITM you.
The only way to actually prevent MITM is to exchange the certificate (or some verification mechanism like a hash) in some sort of trusted manner (e.g. distributing its hash with a client app).
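To make that concrete, here is a rough sketch of the hash-comparison idea against OpenSSL; the function name is mine, and the expected fingerprint is assumed to have been shipped with the client out of band:

    #include <string.h>
    #include <openssl/ssl.h>
    #include <openssl/x509.h>
    #include <openssl/evp.h>
    #include <openssl/sha.h>

    /* Returns 1 only if the peer's certificate matches the SHA-256 fingerprint
       that was distributed with the client ("pinning"). */
    int cert_matches_pin(SSL *ssl, const unsigned char expected_fp[SHA256_DIGEST_LENGTH])
    {
        X509 *cert = SSL_get_peer_certificate(ssl);
        if (cert == NULL)
            return 0;                      /* the peer presented no certificate at all */

        unsigned char fp[SHA256_DIGEST_LENGTH];
        unsigned int fp_len = 0;
        int ok = X509_digest(cert, EVP_sha256(), fp, &fp_len) == 1 &&
                 fp_len == SHA256_DIGEST_LENGTH &&
                 memcmp(fp, expected_fp, SHA256_DIGEST_LENGTH) == 0;

        X509_free(cert);
        return ok;
    }

Of course this only helps if a failed check actually aborts the connection; logging a warning and carrying on reintroduces exactly the problem TFA describes.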
Re: (Score:1)
Vote parent up, now.
No, he is not; the whole purpose of certificate chains is to prevent MITM attacks without having to exchange all the certificates individually, as he suggests.
Re: (Score:2)
the whole purpose of certificate chains is to prevent MITM attacks
But it can only do that if those who hold root or intermediate certificates that your application trusts can be trusted not to issue (whether through their own choice, through trickery by the attacker, or through pressure applied by the attacker) a fraudulent certificate to your attacker for the service you are connecting to.
Take a look at this list.
http://www.mozilla.org/projects/security/certs/included/ [mozilla.org]
AIUI everyone on that list (and anyone they delegate to!) can generate certificates that w
Re: (Score:2)
You can get them for free from StartSSL, and most browsers/vendors will trust them.
Re:Man in the middle? (Score:5, Insightful)
The current versions of SSL/TLS are never vulnerable to man-in-the-middle attacks unless a trusted certificate authority is compromised (as long as both client and server implement RFC 5746). Whether the certificate authorities are trustworthy is another question, of course.
This particular problem is caused by folks disabling the SSL stack's built-in chain validation and then not implementing their own. As far as I know, there are exactly two correct ways to support self-signed keys in Android: provide your own trust store that includes trust for that specific self-signed key or subclass the X509 validation class to add that specific self-signed key as an additional trusted anchor into the list of trusted anchors that it returns. Unfortunately, there's a lot of very bad advice out there, particularly on sites like Stack Overflow, telling folks to disable chain validation entirely. The result is that not only does the app trust that self-signed key, it also trusts any self-signed key.
It doesn't help that there's no canonical source for that information from Google, so there are many, many questions on sites like Stack Overflow that all ask the same basic question in different ways and get different answers....
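Android's API for this is Java, but the same principle applies on any stack. As a sketch, an OpenSSL-based C client that trusts only one specific self-signed cert (the file name here is a made-up placeholder) looks roughly like:

    #include <openssl/ssl.h>

    /* Instead of disabling verification, make the app's own self-signed
       certificate the only trusted anchor. */
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);   /* fail the handshake on bad chains */
    if (SSL_CTX_load_verify_locations(ctx, "myserver.pem", NULL) != 1) {
        /* handle error: the pinned certificate could not be loaded */
    }
    /* Nothing from the system CA bundle is trusted here, only what is in myserver.pem. */

The app then trusts that one key and nothing else, which is the exact opposite of "trust any self-signed key".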
Patient: Doctor, when I drill a hole in my hand, I can't scoop up water from the bucket to drink.
Doctor: Why did you drill a hole in your hand?
Patient: So that the acid wouldn't stay in my hand.
Doctor (alarmed): Why did you put acid in your hand?
Patient: Because the bucket dealer wanted too much money for a bucket.
Yeah, it's like that.
Re: (Score:2)
Protocol (Score:5, Insightful)
How is the wrong implementation of a protocol in a framework library a fault of the protocol?
Either devs need to be aware that there are extra steps to validation when using an SSL library in their framework of choice, or the framework needs to be patched appropriately; but based on the concepts the article provides, it sounds like bad implementation, aka crap code, and not enough QC. Some OOP would help make the implementation easier, though...
Re: (Score:2)
Hmmm... you've just summed up a lot of what's wrong with modern day developers in your sentence.
This is not an SSL problem (Score:5, Insightful)
This is a problem of bad APIs and people not competent to select libraries with better ones. The same would happen with any other encryption protocol. Implementing and using cryptography is hard, in particular because testing will usually not show anything is wrong and testing is still the only thing most software "developers" have in their bag of tools to ensure correctness. As long as people without a clue decide they can implement cryptographic libraries or use them, these things will continue to happen.
Re:This is not an SSL problem (Score:5, Insightful)
This is a problem of bad APIs and people not competent to select libraries with better ones.
While that might sound true, I think the problem is deeper than that. The issue in a lot of cases is developers having to deal with non-ideal SSL/TLS setups that they have no control over.
It usually goes like this:
Dev monkey gets told by PHB, we need to make our communications secure, so implement SSL. Dev monkey adds SSL support to the app. Code seems to work. Testing (or even worse, someone in Production) comes back and says: dev monkey's SSL code doesn't work with our Customer XYZ's server. Dev monkey tests things himself and finds that Customer XYZ is using a self signed cert or an expired cert. Dev monkey tells PHB that Customer XYZ needs to fix their setup. PHB tells dev monkey that the setup cannot be changed because of ABC and that dev monkey needs to "code around the issue". Dev monkey updates app to not choke on bad certs. Code gets released, and Customer XYZ's remote worker gets p0wned by a man in the middle attack. Customer XYZ blames PHB, PHB blames dev monkey. Dev monkey sighs and gets another mountain dew.
Re: (Score:3)
Dev monkey updates app to not choke on bad certs [...] PHB blames dev monkey
The PHB got the blame exactly right. The Dev Monkey proved he didn't understand SSL as soon as he did a blanket "trust everything." Dev Monkey screwed the pooch.
Re: (Score:2)
In the real world, dev monkey doesn't get to do what dev monkey wants. A good step would be to ask management what to do, since the customer won't change their setup; that way monkey covers monkey's ass, and if the customer gets hacked, management can deal with it on a "my guy told you so" basis.
Re: (Score:2)
Exactly. In the real world, dev monkey doesn't get to make the decisions. If dev monkey doesn't code around the problem, PHB finds a different code monkey to make the change. Not everyone gets to work for themselves or for a small startup where they can make their own decisions.
Re: (Score:3)
To be honest...
If I was consulting for a company and I clearly outlined the risk of allowing self-signed certs and they said "we'll take it," I'd make sure I had something in writing... like an email... and I'd do it for them anyway; if they get MITM'd later, I'd refer them to the email. You can't always stop people from taking shortcuts, but making them aware of the risks is more than most people probably do.
Re: (Score:2)
The last time I came across this in the real world, I was writing a
Re: (Score:2)
As a MITM attacker, can't I just spoof the sender's domain here?
Granted, it does add a fairly decent layer of complexity to the MITM : http://www.windowsecurity.com/articles/understanding-man-in-the-middle-attacks-arp-part2.html [windowsecurity.com]
Re: (Score:2)
Re: (Score:2)
0xF00B4R12
Sorry for the nitpick, but 'R' isn't a valid hex character.
Re: (Score:1)
Your dev monkey is fucking incompetent and should be slapped in the face with a smoked mackerel.
A reasonable developer would just add XYZ's cert to the list of trusted certificates manually. If you think about it a little, it doesn't matter who tells you that a purported XYZ certificate is indeed XYZ's. It could be one of the trusted CAs, selected by Microsoft, Mozilla or Google. Or it could be the holy trinity of XYZ's CEO, CTO and CFO, materializing in your office in person. It could even be your PH
Re: (Score:2)
Sure, but good luck updating the XYZ cert in the certificate store for hundreds and hundreds of clients.
If you try to document it so that the user can update it themselves, there is a 50-50 chance they will screw up their certificate store, and then not even their banking, Gmail or shopping would work.
Re: (Score:2)
While this does not seem to be the issue in the OP, this is definitely a realistic scenario. Maybe I should have said that the implementation on both sides of the tunnel needs to be competently done.
libcurl is not insecure (Score:5, Interesting)
The complaint about libcurl is baseless. It's said VERY CLEAR in the documentation how to use the feature. If stupid devs can't figure it out, that's hardly the fault of the library developer. I've never had an issue with it, and I've used it in C, C++, and PHP.
To repeat what I said on the mailing list: if I break my thumb with a hammer, do I blame the hammer or do I blame myself?
As Yehezkel Horowitz pointed out on the mailing list.
This is the quote from the FAQ
>Q: How do I use cURL securely?
>A: CURLOPT_SSL_VERIFYPEER must be set to TRUE, CURLOPT_SSL_VERIFYHOST must be left to its default value or set to 2. Anything else, such as setting CURLOPT_SSL_VERIFYHOST to TRUE, will result in the SSL connection being insecure against a man-in-the-middle attacker.
The real answer should be - cURL defaults are secure - no need for any code to use it securely.
==================
In general I think the very short answer for this publication should be RTFM.
The little bit longer answer would be -
1. cURL is a C code library - you can't set a value to TRUE since this is not in the language syntax.
So you have somewhere in your includes something like "#define TRUE 1" - you must be aware of this issue - this is an important part of the relationship between computers/compilers/programmers.
2. Before setting any cURL option, you should read the very clear documentation about that option.
==================
As to what we can do to make cURL even better (in order to protect unprofessional users who don't know what they are doing), we could make '1' act as '2' (verify peer identity), and add a special magic value (e.g. 27934) that acts as today's '1' (check for CN existence but don't verify it).
I think they owe everyone at libcurl an apology.
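For anyone who still isn't sure, the secure call pattern (which is just the defaults, stated explicitly) looks roughly like this; the URL is only a placeholder:

    #include <curl/curl.h>

    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* These two are already the defaults; setting them explicitly documents intent. */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);   /* verify the certificate chain */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);   /* require the host name to match */
    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);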
Re: (Score:2)
Just to clarify, Yehezkel didn't say they owed everyone at libcurl an apology. I did.
Re:libcurl is not insecure (Score:4, Insightful)
It is good that the default is secure, but this is bad API design. There are at least two ways it can be improved:
1) The name "CURLOPT_SSL_VERIFYHOST" implies a boolean value, so "set(CURLOPT_SSL_VERIFYHOST, TRUE)" looks like reasonable code after a quick glance. Since the option is a multiple choice option, not a boolean, it should be named something like "CURLOPT_SSL_VERIFYHOST_MODE".
2) C has had enums since forever. The values "1" and "2" are opaque magic numbers, and flags that are this important should be set with well-named enums, not with magic numbers. Further, if the API setter function was typed with the appropriate enums, the compiler would have complained when it saw "set(CURLOPT_SSL_VERIFYHOST, TRUE)".
Yes, the application devs used libcurl incorrectly, and yes, the above criticisms are nitpicks, but a library this important should be designed very defensively to minimize the chance that users will make dumb mistakes.
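Something like this sketch is what I mean by 2); none of these names exist in the real libcurl API, they're just made up to show the idea:

    /* Hypothetical, NOT the real libcurl API. */
    typedef enum {
        VERIFYHOST_NONE   = 0,   /* no host name checks at all */
        VERIFYHOST_EXISTS = 1,   /* only check that a CN/SAN is present */
        VERIFYHOST_MATCH  = 2    /* require the name to match the URL's host */
    } verifyhost_mode;

    /* A setter typed on the enum makes the intent explicit; in C++ passing a plain
       TRUE here is a compile error, though C will still implicitly convert it. */
    int set_verifyhost(void *handle, verifyhost_mode mode);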
Re: (Score:2)
Oh please please mod parent up!!!
+1
That api design is retarded.
Re: (Score:2)
While those are valid points and should be corrected, I still say it's the fault of the developers and not libcurl. Every option is documented well and the documentation is easy to find.
Re: (Score:2)
I think fault is not 0-sum. It can be totally the developer's fault and still a flaw in libcurl.
Re: (Score:2)
Sometimes, and sometimes, respectively. If the head rotates 90 degrees on the handle, or flies off the handle, on a low-mileage hammer without having undergone any abuse, and the thumb was hit as a result of that fault, then you blame the manufacturer of the hammer and perhaps sue them. They almost certainly have insurance for this kind of thing.
Otherwise you're well advised to blame yourself.
Re: (Score:2)
Touché, I was using that phrase in the context of libcurl, though. In this case none of those things happened ;).
Re: (Score:1)
And, following the rules of English grammar, it's VERY CLEAR that one modifies a verb with an adverb, not an adjective.
But such a mistake isn't important, surely? I mean, we understood what you meant, so the communication worked, didn't it?
Re: (Score:2)
Forgive me for not proof reading a post on the Internet. Hopefully, the world won't come to an end, but if it does I hope grammer nazis die first.
Re: (Score:2)
Re: (Score:2)
Oh my god, a parameter that takes either '2' or TRUE???
What the hell is that?
I will stick with my strongly typed languages, thank you very much.
Lousy documentation (Score:3)
While libraries like cURL have excellent documentation, other libraries such as OpenSSL have terrible documentation. Assuming that the cURL developers understood how to use OpenSSL correctly, it's quite simple for me to use their library to establish a secure connection.
What's harder is to figure out how to do it with OpenSSL. There is no obvious starting point for opening a secure connection that you can glean from reading the man pages. There are books you can buy on the subject, but that doesn't excuse the library authors from writing easy to understand documentation. The library itself is quite elegant: with just a few steps you have a secure connection that you can read and write just as if it were any other network connection (or, for that matter, a file on disk). But figuring out how to correctly set up and tear down a connection using that library isn't well documented at all.
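For what it's worth, the rough shape of those few steps, as best I understand them on a reasonably recent OpenSSL (1.0.2 or later), is something like this; fd is assumed to be an already-connected TCP socket, "example.com" is a placeholder, and error handling is abbreviated:

    #include <openssl/ssl.h>
    #include <openssl/x509_vfy.h>

    /* Older OpenSSL versions also need SSL_library_init() before any of this. */
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());  /* TLS_client_method() on 1.1.0+ */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);       /* fail the handshake on bad chains */
    SSL_CTX_set_default_verify_paths(ctx);                /* use the system trust store */

    SSL *ssl = SSL_new(ctx);
    X509_VERIFY_PARAM_set1_host(SSL_get0_param(ssl), "example.com", 0);  /* host name check */
    SSL_set_fd(ssl, fd);
    if (SSL_connect(ssl) != 1) {
        /* handshake or certificate verification failed */
    }
    /* ... SSL_read() / SSL_write() as if it were any other connection ... */
    SSL_shutdown(ssl);
    SSL_free(ssl);
    SSL_CTX_free(ctx);

None of that is obvious from the man pages alone, which is exactly the complaint.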
Re: (Score:2)
By default libcurl is secure. It's only insecure if you mess with an option. Personally, I'm glad that option is there.
This is the quote from the FAQ
>Q: How do I use cURL securely?
>A: CURLOPT_SSL_VERIFYPEER must be set to TRUE, CURLOPT_SSL_VERIFYHOST must be left to its default value or set to 2. Anything else, such as setting CURLOPT_SSL_VERIFYHOST to TRUE, will result in the SSL connection being insecure against a man-in-the-middle attacker.