SSLStrip Now In the Wild
An anonymous reader writes "Moxie Marlinspike, who last week presented his controversial SSL stripping attacks at Black Hat Federal, appears to have released his much-anticipated demonstration tool for performing MITM attacks against would-be SSL connections. This vulnerability has been met with everything from calls for more widespread EV certificate deployment to an even more fervent push for DNSSEC."
Alternatives (Score:4, Interesting)
Re: (Score:2, Informative)
Re:Alternatives (Score:5, Informative)
Can you make the claim you are 100% vigilant 100% of the time?
It's more subtle than that. It takes away one of the biggest indicators that there is an SSL problem--the dialogs. Watch the presentation video [blackhat.com]. It's pretty cool. What Moxie shows is that often the indicators of SSL enabled and not enabled are practically non-existent. It's easy to see how most users, even tech savvy ones, could be fooled.
Re: (Score:2)
Slashdotters from the prior related story suggested the following tweaks to make SSL connections more obvious and identifiable:
Site name in "Site ID Button":
http://news.cnet.com/8301-13554_3-9974672-33.html [cnet.com]
Yellow background in address bar:
http://news.cnet.com/8301-13554_3-9974221-33.html [cnet.com]
Re: (Score:2)
Nifty. Thanks.
Re: (Score:2)
Re: (Score:2)
And a lot of managed switches are garbage. Basic attacks using forged ARP requests will easily overload these kinds of switches.
And ARP poisoning is still very effective.
Re:Alternatives (Score:5, Informative)
Also, as interesting as this attack is, it should be noted that it does require the attacker to have network access (so he can perform the MITM attack, usually through ARP spoofing). There are a number of ways to fight arp spoofing, but if you're on a small network, just set static arp tables on your machines and you've done pretty much all you can do. The attacker can still attempt to get access at your ISP and on the other end, at the web host, but handling that much traffic without being noticed would be difficult, so I doubt one would try it. (and I'm sure someone will now prove me wrong...:P)
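For what it's worth, a minimal watchdog sketch for the same idea, assuming Linux (it reads /proc/net/arp) and a hand-filled whitelist of IP-to-MAC pairs (the gateway values below are made-up examples); it just complains if a known IP suddenly answers from a different MAC:
import time

# Sketch: warn if the ARP cache entry for a known host changes out from under you
# (a symptom of ARP spoofing). Linux-only (/proc/net/arp); the whitelist values
# below are made-up examples you would fill in yourself.
KNOWN = {"192.168.1.1": "00:11:22:33:44:55"}   # e.g. gateway IP -> expected MAC

def read_arp_table():
    entries = {}
    with open("/proc/net/arp") as f:
        next(f)                                 # skip the header line
        for line in f:
            fields = line.split()
            if len(fields) >= 4:
                entries[fields[0]] = fields[3].lower()   # IP -> MAC
    return entries

while True:
    table = read_arp_table()
    for ip, expected in KNOWN.items():
        seen = table.get(ip)
        if seen and seen != expected.lower():
            print("WARNING: %s is now at %s (expected %s) -- possible ARP spoofing" % (ip, seen, expected))
    time.sleep(5)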
Re: (Score:3, Insightful)
Even this doesn't work. Legitimate banks do this (http://www.usbank.com is one, who I've banked with in some fashion since I had a net worth of over $50). Note that after you type your username in, you're taken to a
Re: (Score:2)
Check to see if the URL to the site begins with http:// before you login. If it does, and it's displaying a padlock icon (suggesting that it is 'secure'), then you're being attacked. Really, you should already be wary when a site asks you for login information over HTTP rather than HTTPS.
Try Wachovia's site: http://wachovia.com/ [wachovia.com]
Lock icon: check.
Unsecured HTTP page: check.
I don't have a Wachovia account, so I can only assume that the actual UID/password info goes over SSL, but that's irrelevant to this at
Re: (Score:3, Informative)
Lock icon? No.
Re: (Score:3, Interesting)
Really, you should already be wary when a site asks you for login information over HTTP rather than HTTPS.
Maybe. The login form might be located on an HTTP page, but as long as the form submits to an HTTPS page, your login credentials are still SSL-encrypted. Conversely, if you have an HTTPS login form, but the form action goes to an HTTP site, your credentials are NOT encrypted.
Re:Alternatives (Score:4, Informative)
The login form might be located on an HTTP page, but as long as the form submits to an HTTPS page, your login credentials are still SSL-encrypted.
In general, yes, but one of the 'tricks' of sslstrip is that it changes the content of the HTTP-served page so that the (formerly) HTTPS submission page is no longer HTTPS, but HTTP.
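A toy illustration of that rewriting step (this is not Moxie's actual code, and bank.example.com is a placeholder): a proxy in the middle only has to find https:// links in the HTML it relays, downgrade them, and remember which ones it touched so it can speak HTTPS to the real server itself:
import re

# Toy illustration of the sslstrip idea (not the real tool): downgrade https://
# links in the HTML being relayed, and remember them so the proxy knows to speak
# HTTPS to the real server when the victim's http:// request comes back through.
stripped = set()

def strip_https(html):
    def downgrade(match):
        secure_url = match.group(0)
        stripped.add(secure_url)
        return "http://" + secure_url[len("https://"):]
    return re.sub(r'https://[^\s"\'<>]+', downgrade, html)

page = '<form action="https://bank.example.com/login" method="post">'
print(strip_https(page))   # the victim now sees an http:// form action
print(stripped)            # the proxy will upgrade these when forwarding upstream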
Re:Alternatives (Score:5, Informative)
>as long as the form submits to an HTTPS page, your login credentials are still SSL-encrypted.
No. If any part of a page is not encrypted, then an attacker can effectively strip all encryption from the entire page. See this page from a Microsoft Internet Explorer programmer: http://blogs.msdn.com/ie/archive/2005/04/20/410240.aspx [msdn.com]
and this page about airpwn where attendees at a security conference had the images in their web pages turned upside down.
http://www.informit.com/guides/content.aspx?g=security&seqNum=158 [informit.com]
Say for example you're using an unsecured wireless access point at an Internet cafe. There can be an attacker five miles away with a high gain antenna listening for someone to log into their bank by a login page that only encrypts the password. When your computer sends out the request for your bank's page, if the hacker's computer is fast enough, it can impersonate the wireless access point and send a version of your bank's login page with the password encryption stripped and the password redirected to whatever computer your attacker wants. When the real server finally responds to your request a few milliseconds later, your computer will think it's a mistaken duplicate and ignore it. This is not a theoretical attack, it has been publicly demonstrated. Your first login attempt may fail as the password is redirected to the attacker, but once your attacker has your password, he can return things to normal so your second login attempt will succeed. You'll just think you mistyped the password on the first try.
Re: (Score:3, Interesting)
Your first login attempt may fail as the password is redirected to the attacker, but once your attacker has your password, he can return things to normal so your second login attempt will succeed. You'll just think you mistyped the password on the first try.
That's why I always type my password in wrong on purpose the first time!
Re: (Score:2)
Shared private keys. (Score:2)
Public key crypto-systems.
Not buying anything online via the web-browser.
OK, that last is crazy talk.
Re:Alternatives (Score:5, Insightful)
We don't need an alternative to SSL. We need browsers to implement proper UI. The user MUST be made aware if clicking a button would transmit a password in cleartext. The user MUST be made aware exactly which domain they are connected to during an SSL session. On a large busy screen, a tiny bit of text in a corner is the wrong way to do this.
Re: (Score:2)
Absolutely not! Overload a user with information and you get a trained clicker. Only unencrypted authentication information should trigger warning/are-you-sure UI.
Re: (Score:2)
Re:Alternatives (Score:5, Funny)
Re: (Score:2)
Re: (Score:2, Funny)
Re: (Score:3, Funny)
EPA Crackdown: No More IP Over Open Air (Score:2)
UPI - Crow Agency, WYOMING - The Federal Environmental Protection Agency has issued a crackdown order on a traditional method of communication and possible free public access to the Internet. The method utilizes traditional Native American techniques with a modern twist. Computer signals called packets are translated into smoke signals and released into the atmosphere. The first message sent using this technique contained the message, "The casino is now open. $3 Black Jack and Texas Hold 'em. Smokey Robinso
Not the end of the world (Score:5, Informative)
Reading TFA, it seems to me that there IS something that the end user can do to protect themselves: Look for the https:// in the address bar and DON'T LOOK THERE (favicon.ico area) FOR THE PADLOCK... the padlock should be down in the status bar area where it always is.
Out of reflex, I always check that my URL starts with https:// and I check the cert when I'm dealing with someplace new. Now, I'm just always going to check the cert... even if I'm connecting to a site I use all the time.
If Moxie really wanted to make things tougher, they could maybe add a cert to their tool. THAT would make it so you'd only notice if you read the cert and realized it wasn't what it was supposed to be.
THAT's scary.
Re:Not the end of the world (Score:4, Interesting)
If you read some of the articles (Forbes and a linked one) he can spoof the appearance of a valid certificate as well, using Internationalized Domain Names. The certificate won't be valid for the site that you wanted, but that won't matter, because it'll have redirected you to https://[a load of characters that look like 'paypal.com/somepath' but are actually non-ASCII characters].evil.com with a wildcard certificate for *.evil.com, and it will look like https://paypal.com/some-path-here-that-is-really-really-really-really-long.evil.com/
For the basic attack, actually checking for HTTPS and proper validation (not just a padlock, but a padlock and the other markers) would be enough; but for the fuller attack that takes advantage of the IDN, you'd probably need to read the certificate itself, which would require you to know which certificate you're expecting, which would require something like a page with the signature on it saying "look for this", which could then also be spoofed (in cases where it was worth it, e.g. a bank).
Re: (Score:3, Interesting)
Hrm. I must have missed that; it's a clever trick. Then again, I've always thought international domain names were gratuitously unneces
Re: (Score:3, Interesting)
Do you know if FF will detect blacklist characters for all TLD's or just the non-IDN TLD's like
Re: (Score:3, Insightful)
Another, perhaps more reliable option would be to change how long URLs are displayed.
Right now, if a domain name exceeds the length of the address bar, it's truncated on the right. Consider:
www.paypal.com/foo/bar/qux/sessionid/12341/do.myevildomain.com. If this URL is displayed as:
www.paypal.com/foo/bar/qux/sessionid/123
the user will be fooled. What if, instead, the truncated domain name looked like this?
www.paypal.com/foo/bar/...341/do.myevildomain.com
That way, no matter what evil junk is in the domain nam
Re: (Score:2)
What if the address bar highlighted the portion of the address that is the domain name? It would then be obvious if the domain name was being spoofed.
Re: (Score:2)
That's another good idea, but I think users might not be alarmed by seeing the entire address bar highlighted in that way as long as it still reads paypal.com/foo/bar/qux/blah/blah/blah. Again, it's the difference between making a security indicator subtle and making it almost impossible to miss.
Re: (Score:2)
Re: (Score:2)
The answer is not so much to highlight the domain name (it can be very long in some spoofing URLs). It is to show clearly the [most significant parts of the] top level domain - if necessary in a separate area. HOW it's done is a matter for the browser developers, but the INFORMATION needs to be made clear to users. Highlighting the domain name MAY be one way to do that, but the answer depends on screen real estate available, user expertise, and other factors.
The end result is no more confusing MyBank.blah.
Re: (Score:2)
No, I'm talking about truncating the domain name in the middle, not the entire URL. Long URLs will still have their path and query-string portions trail off to the right, but the beginning and end of the domain name will always be visible.
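A rough sketch of that middle-truncation rule, assuming a fixed display width and keeping the start of the URL plus the tail of the hostname (where the real domain lives); the width and the example URL are made up, and the spoofed hostname stands in for one whose "slashes" are look-alike characters:
from urllib.parse import urlsplit

# Sketch: truncate long URLs in the middle of the hostname so the tail of the
# real domain always stays visible.
def display_url(url, width=50):
    if len(url) <= width:
        return url
    host = urlsplit(url).hostname or ""
    tail = host[-25:]                      # keep the end of the hostname (the real domain)
    head = url[:width - len(tail) - 3]     # keep the beginning of the URL
    return head + "..." + tail

evil = "https://www.paypal.com-foo-bar-sessionid-12341.do.myevildomain.com/login"
print(display_url(evil))   # https://www.paypal.com...12341.do.myevildomain.com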
Re: (Score:2)
Still doesn't help if you can also find fake international characters to replace things like ?, & and =. Say the evil domain is 74h34.be, then your evil url could look like:
https://mybank.com/bunch/of/paths/banking.jsp?bunch=of&arguments=here&session=0WjEc.74h34.bE
Now it just looks like an innocent session hash.
Re: (Score:2)
Okay, so that will show up as:
https://mybank.com/b...h34.bE/real_path_here
It's clear that something is wrong.
Re: (Score:2)
Except there is no need for a /real_path_here. All the information the MITM needs to do the proxy is contained in the stuff to the left of the domain (ie the subdomain). He can manage the path any way he wants on his webserver using mod_rewrite etc. This is a wickedly nasty attack.
Re: (Score:2)
Good point. What about completely stripping punctuation from IDNs?
Re: (Score:2)
Look for the https:// in the address bar and DON'T LOOK THERE (favicon.ico area) FOR THE PADLOCK
So great.. you teach everyone to LOOK FOR THE HTTPS, and that means you're safe!
Then, the attackers simply use a variant of the (similar URL method), and are sure to include SSL, so the URL is (for example) https://www.mybank.com.com.cn/ [com.com.cn] (or whatever fools the user).
Even if you could somehow re-train millions of people successfully to understand one particular attack mode, another one will always be right aroun
The real problem. (Score:2)
For example, browsers don't do the SSH style thing where they warn you if the cert changes for a previously recognized site.
Go look at all the built-in CA certs in your browser sometime. Count them if you can.
How sure are you that none of the CAs there will be tricked/bribed into signing a cert for *.mybank.com? Who has audited them?
All it takes is just one CA, and AFAIK, none of the popular
Re: (Score:2)
The problems you identify with the CA infrastructure are real, but switching to an SSH-style OH-MY-GOD-THE-CERT-CHANGED system introduces vulnerabilities to several different attacks, desensitizes users to security problems, and cannot provide assurance of identity --- only that the site is the same one that you spoke to last time.
Closing your eyes, plugging your ears, and singing "la la la ssh la la la self-signed la la la" won't solve a single thing when so
Re: (Score:2)
The real problem is the browser makers do not care about security. They only care about the appearance of security.
I largely agree. If I had a "secure" browser to use, I'd use it to do all my banking with. Online purchases I don't so much care about as the CC company is really the one taking all the risks.
Browser for banking (Score:2)
If you're using windows use "run as".
e.g.
Say you have an account called vellmont.
Create an account called _www_vellmont and make its home directory fully accessible by your vellmont account.
Then in your vellmont account, create a shortcut:
C:\WINDOWS\system32\runas.exe
Re: (Score:2)
Start->Run firefox -ProfileManager -no-remote
Create a profile called, say, 'bank'.
Then run
Start->Run firefox -Pbank -no-remote
Re: (Score:2)
The problem with your proposal is that sites must change certificates once every few years. So now, every three years the certs change and your browser complains. Say you visit 36 SSL sites. That means that you will have an SSL alert about once a month - you will just click through, and not check.
In addition to that, this attack, for example, would bypass that! You visit www.mybank.com - all is good, no ssl. You are then redirected to secure.theevilbank.com - again, all is good because secure.theevilban
Security is a social issue. Educate! (Score:5, Insightful)
This attack does not break SSL in any way. It simply tricks users into entering sensitive information into unencrypted context.
The solution is user education. We need to train users to look for the browser padlock icon. We need to add browser extensions that heuristically detect credit card numbers being entered into unencrypted sites and to warn the user. We need to train users to click "no" on security dialogs when they appear. We need to tell users that a padlock icon a website puts next to a form is unacceptable. We need to train users to be vigilant, because nasty people are trying to steal their information.
I'd like to see fewer people using self-signed certificates that train users to ignore SSL warnings. I'd like to see public service advertisements. I'd like to see basic computer safety classes in public schools. User education is the only hope we have against stupid users!
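For what it's worth, the "heuristically detect credit card numbers" part isn't far-fetched. A minimal sketch of the usual heuristic (pull out digit runs and keep the ones that pass the Luhn checksum that payment card numbers satisfy), with all the browser-extension wiring left out:
import re

# Sketch of the heuristic such an extension could use: find 13-19 digit runs
# and keep only those that pass the Luhn checksum used by payment card numbers.
def luhn_ok(digits):
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_card_numbers(text):
    candidates = re.findall(r'\d[\d -]{11,21}\d', text)
    cleaned = [re.sub(r'[ -]', '', c) for c in candidates]
    return [c for c in cleaned if 13 <= len(c) <= 19 and luhn_ok(c)]

# "4111111111111111" is the classic Visa test number and passes the check.
print(looks_like_card_numbers("card: 4111 1111 1111 1111 exp 12/12"))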
The fault lies partly with browsers too. Firefox, particularly, should never have toned down the non-EV SSL user interface --- sure, making EV special is fine, but allowing sites to spoof the SSL UI with a favicon is unacceptable. People have been saying this ever since Firefox 3 came out, but maybe now someone will pay attention to us.
Re: (Score:2)
But this can spoof that as well for many users (and even for Firefox users it might make the unwary feel safe).
He also mentions methods for using IDN (Internationalized Domain Names) and wildcard SSL certificates to spoof HTTPS versions that look even more like the real thing than https://yourbank.com.some.evil.website.
Re: (Score:2)
This problem is easy to solve technically with an IDN character blacklist. The https-to-http redirection is far more insidious.
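A blacklist of confusable characters is one way to do it; the crude sketch below just flags any non-ASCII character in a hostname label, which only illustrates where such a check would sit (real browsers use per-TLD policies and confusable tables, and the hostnames are examples):
import unicodedata

# Crude sketch: flag any hostname label containing non-ASCII characters and say
# what they are. Real browsers use per-TLD policies and confusable-character
# tables; this only shows where such a check would hook in.
def suspicious_characters(hostname):
    findings = []
    for label in hostname.split("."):
        for ch in label:
            if ord(ch) > 127:
                findings.append((label, ch, unicodedata.name(ch, "UNKNOWN CHARACTER")))
    return findings

print(suspicious_characters("www.paypal.com"))        # [] -- nothing to report
print(suspicious_characters("www.p\u0430ypal.com"))   # flags the Cyrillic 'a'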
Re: (Score:2)
What I don't understand is that Firefox _used_ to change the address bar text area background color to pale yellow to indicate a secure site and it stopped doing that some time ago. I always thought that was a great feature and much more obvious than that tiny little icon on the status bar, which can be lost in the noise next to the Greasemonkey icon and the NoScript icon, etc.
Re: (Score:2)
HTTPS puts a blue background behind the favicon and the padlock and certificate domain in the status bar. What kind of favicon can ever spoof the entire blue background? More importantly, what favicon can ever spoof the status bar section?
Why can't a favicon start with a blue background? Can't a favicon from the target site get a new blue background dynamically easily? Who pays attention to the status bar? What about TFA's assertion of being able to use
with a real SSL cert for fakedomain.ru? Note, apparently | can be a character that is not | or / but shows up just like /
Re: (Score:2)
Education won't actually help. Most users do not know what a certificate is for. They have been trained to know that if a secure icon is present it's good. Many will have a basic knowledge that a secure connection is a sign that the info they send can't be eavesdropped; and maybe that it has something to do with being stored safely on the bank's (or whatever) web site.
When web browsers or people start talking about certificates, eyes glaze over. Most people don't know what a certificate is. If they g
Re: (Score:2)
You're right. We can't count on users being any more than idiots. They'll just look for a secure icon. The trick is convincing them to look for the correct security icon, and to check that the address in the browser matches the site in the window.
These techniques are very simple, on par with looking both ways before crossing the street. I think almost everyone can be taught basic due
Re: (Score:2)
Why is that any more likely to work than training people not to be nasty?
Re: (Score:2)
The solution is user education.
fail.
You're dealing with average people, not bright geeks. People won't read. They don't learn, and arguably they shouldn't need to do so to use someone's web site.
People - even some knowledgeable IT folks I know - think that the "lock" icon means an entire transaction, from client to server, is secure - not just that a transaction is conducted through an encrypted pipe.
They think the lock means:
Re: (Score:2)
One problem with this approach is that it's impossible to revoke a self-signed certificate if it's compromised. Another is that users will always see the OH-MY-GOD-THE-CERT-CHANGED warning every time the certificate expires, or when the site moves to a conventional CA-signed certificate.
A better option that still gives you what you want is a special free CA that the browser recognizes and that will give th
Re: (Score:2)
That'd be nice, but it's not going to happen soon, and for plenty of different reasons. We need to focus on solutions that are feasible now, not pie-in-the-sky ideas that will only be implemented after even more massive damage is done.
Re:Security is a social issue. Educate! (Score:5, Informative)
No, they must be handing out mod points to people who have a fucking clue how SSL works. SSL is designed specifically to counter your simplistic scenario.
The MITM won't be able to give the client the proper certificate for the domain name the client thinks he's connecting to. The browser will detect this mismatch and give the user a broken padlock icon and a security warning. Because we've educated the user, he'll know to look for the padlock icon, and that a broken padlock icon means "danger". Attack averted.
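As a hedged illustration of that (the hostname is just an example), here's what the verification looks like from a client's point of view using Python's standard ssl module; a MITM that can't present a CA-signed certificate for the requested name makes this raise an exception instead of connecting:
import socket, ssl

# Sketch: ordinary client-side verification. If a man in the middle can't present
# a certificate that chains to a trusted CA for this exact hostname, the
# handshake or the hostname check raises instead of returning a connection.
def fetch_verified_cert(host, port=443):
    context = ssl.create_default_context()       # CA verification + hostname checking on by default
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

print(fetch_verified_cert("www.example.com")["subject"])   # example hostname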
Re: (Score:2)
I don't think the dancing bunny problem is quite so severe when users have to enter their credit card numbers or bank account information. The media has done a good job of whipping up a frenzy about identity theft.
EV certificates (Score:2, Interesting)
"for more widespread EV certificate deployment"
That's probably being sold by Thawte. And considering that a lot of browsers out there still don't support EV.
Extended validation? When I pay for a digital cert, I expect a high level of validation anyways. Makes you wonder, what level of validation they've been doing for the past few years.
SSL has always had MITM as one of its exploits. There's a lot of network gear (e.g. Cisco's IronPort) that does just that in order to enforce the security policies of an organization.
Re: (Score:2)
That's still impossible unless you're okay with the client getting the wrong certificate. In a corporate environment, you can just install a CA certificate in every client and the user will be none the wiser. But in general, SSL is not vulnerable to undetected MITM attacks.
Re: (Score:2)
If you've paid for digital certificates, shouldn't you know what level of validation they've been doing?
Also, as far as I know, all modern browsers support EV certificates, but not all of them differentiate EV certs from regular ones. Firefox 3, however, does.
Re: (Score:2)
Makes you wonder, what level of validation they've been doing for the past few years.
If the credit card making the purchase was declined or not.
I do not like the whole "EV certification" thing. Not because I dislike the process or what it's designed to do, but because CA's were already supposed to be doing the verification in the first place (apparently, they weren't). Now they want to charge more money to do what they were originally supposed to do.
Re: (Score:3, Insightful)
Like most human behavior, this problem can be explained by economics.
The problem is that the CA system creates the wrong incentives. You, as a CA, want to sell as many certificates as possible. There ar
Re: (Score:3, Insightful)
There are a couple solutions to the incentive problem:
I wish I could come up with better ideas.
Having a CA funded by anyone but the website also doesn't work, since the site needs to get a certificate from the CA before going live. And unless it's ICANN running the CA, a site might need to get certs from multiple CAs if people in different countries or with different browsers want to talk to them.
Hmm, there's a thought. Self-signed certs, with the root cert fingerprint available as a DNS record, using DNSSEC. Then get the real-world identity info from 'whois'.
Or use something based on "can't fool all
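The fingerprint-comparison half of that idea is mechanically simple; a sketch of the client side using only the standard library, assuming the expected SHA-256 fingerprint has already been obtained out of band (in the real scheme, from a DNSSEC-signed record, which is omitted here; the hostname and fingerprint are placeholders):
import hashlib, ssl

# Sketch of the "fingerprint published out of band" idea: grab the server's cert
# and compare its SHA-256 fingerprint to a value obtained elsewhere (in the real
# scheme, from a DNSSEC-signed record; here it's just a hard-coded placeholder).
def cert_fingerprint(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder value

actual = cert_fingerprint("www.example.com")
if actual != EXPECTED:
    print("Certificate fingerprint mismatch -- do not trust this connection")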
Re: (Score:3, Insightful)
I don't see why a site requesting a CA needs to be live. Consider FooCA, a for-pay CA that users subscribe to, or that ISPs subscribe to on behalf of their users. (There are other models --- this is just an example). If BarInc wants a certificate from FooCA, BarInc just applies to FooCA as soon as BarInc incorporates and obtains BarInc.com. Why would BarInc.com need to be l
Re: (Score:2)
I don't see why a site requesting a CA needs to be live. Consider FooCA, a for-pay CA that users subscribe to, or that ISPs subscribe to on behalf of their users. (There are other models --- this is just an example). If BarInc wants a certificate from FooCA, BarInc just applies to FooCA as soon as BarInc incorporates and obtains BarInc.com. Why would BarInc.com need to be live at this point?
If FooCA just gives BarInc a cert, how do they extract any sort of payment from the visitors? Once the cert is in the wild, anyone can use it to verify BarInc.com. So FooCA would have to not provide it to anyone until asked (and paid) by one of their customers.
The CA is only involved in issuing the certificate, nobody talks to them per site visit. Having the visitors instead of site owner pay the CA would require changing this, which doesn't seem feasible.
I'd rather have a robust CA system so that nobody has to be in that group of fooled people
It shouldn't be the users getting fooled, just some
Re: (Score:2)
You are wrong. It is impossible to MITM properly-implemented SSL without having access to a trusted CA.
Re: (Score:2)
That's probably being sold by Thawte. And considering that a lot of browsers out there still don't support EV.
IE7 does
Firefox 3 does
Safari does
So what browsers don't that people actually use? No, Opera doesn't count. :)
Hype (Score:2, Interesting)
An anarchist resource since 2004? (Score:2, Interesting)
Has SSL ever actually worked properly ? (Score:2)
I'm going to stick my neck out and say that SSL is a false security. Any twit with $29.95 can buy an SSL cert. The mere fact that a page is encrypted via SSL seems to convince people that they are dealing with a reputable site.
I'm much less worried about packet sniffing, and more about fake sites like I see every day in the steady flow of spam. There are many ways to steal someone's private data, all much easier than being on the right network, at the right time, sifting through gbits/sec of garbage for
Re: (Score:2)
SSL works very well for the kind of attacks it was designed to counter. We need other mechanisms to protect against others. Why would you expect one technique to cover all vulnerabilities?
Major browsers already include anti-phishing features that check blacklists. These blacklists will reduce, but not completely eliminate, the fraud caused by phishing schemes. A few pe
Re: (Score:2)
Mitigation (Score:2)
Firefox Helpies (Score:2, Informative)
about:config
browser.identity.ssl_domain_display
Set it to 2 to see the Common Name of the cert in the address bar. Very helpful to see side-by-side with the URL. EV certs will still show the Organization and Country, but it makes non-EV certs a little more obvious.
Re: (Score:2)
Is this so bad as it sounds?
Technical: Yes.
Real-world-home-server: No.
Talk about choices...
Re:Sounds ugly (Score:5, Insightful)
Huge pet peeve (Score:5, Insightful)
A site should never lead the user to type sensitive information into a form on an unencrypted page, even if the form's data goes to an encrypted location when submitted. Doing this trains users to be lazy. What's even worse is trying to alleviate users' very correct fears by putting a padlock icon next to the form: doing that trains users to believe that a website can signal its own trustworthiness apart from the browser UI, and that could have disastrous consequences.
I have a technical solution, but it won't be popular: browsers should display a warning when submitting a form on an unencrypted page to an encrypted URL. Since web designers are afraid warnings will spook users, they'll switch to making the form-entry pages encrypted as well.
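As a rough sketch of the check involved (not a browser patch, just an illustration using Python's standard-library HTML parser, with made-up example URLs): warn when a page fetched over plain HTTP contains a form that submits to an https:// URL.
from html.parser import HTMLParser

# Sketch of the narrow check proposed above: warn when a page served over plain
# HTTP hosts a form whose action is an https:// URL. The URLs are placeholders.
class FormActionFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.actions = []
    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.actions.append(dict(attrs).get("action") or "")

def should_warn(page_url, html):
    finder = FormActionFinder()
    finder.feed(html)
    return page_url.startswith("http://") and any(a.startswith("https://") for a in finder.actions)

html = '<form action="https://login.example.com/auth" method="post"><input type="password" name="pw"></form>'
print(should_warn("http://www.example.com/login", html))   # True -> exactly the case to warn on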
Re: (Score:2)
IE did that, at least back in the 6 days. I don't know if it still does.
It trained users to click "don't warn me again".
Since there was (and still is) no way to discern what is being sent (important stuff or just a Google search) that box is obnoxious and useless.
Re:Huge pet peeve (Score:5, Insightful)
IE's warning appeared on all form submissions. I agree that warning was worse than useless.
I'm talking about warning only when the following conditions apply:
The user should not be able to disable the warning; its existence will lead webmasters to change condition 1.
Re: (Score:3)
This would be great, except that most users don't read warnings. They click through them in whatever looks like the most likely path to allow them to finish what they started.
Re: (Score:3, Interesting)
Of course users won't actually read the warning. The point is to annoy users so that webmasters eliminate the behavior causing the annoying warning.
Re: (Score:2)
Of course, plenty of sites send passwords over HTTP (hopefully no banks...), so a blanket warning on passwords being sent unencrypted would just train users to ignore the warning. Furthermore, the use of passwords sent via HTTPS forms is still training users to type their passwords into phishing sites (if they manage to get to one via a typo or convincing e-mail (my credit card company includes links to their site in their e-mails, faking those e-m
Re: (Score:2)
Hopefully, the anti-phishing features of recent browsers will reduce the danger of this attack vector.
Re: (Score:2)
They already do. In fact, that's the very first thing I disable on a newly-installed browser. Otherwise I get this annoying and useless dialog every time I use Google!
Maybe you mean displaying a warning on forms that include an input type="password" field, those where characters appear as stars?
Re: (Score:2)
Entirely agree. There was a nascent guideline for users: Check the "padlock". Check that the protocol is https.
Designers then started breaking this. To avoid an extra https serve, particularly on a front page or popular page. For the sake of "Design", including putting a sign in form on the front page. Etc.
At least I knew to, if at all possible, force the site to serve up an https version of the sign on page. Most users have no clue about that. And the means for accomplishing this vary. Sometimes,
Re: (Score:2)
Designers then started breaking this. To avoid an extra https serve, particularly on a front page or popular page. For the sake of "Design", including putting a sign in form on the front page.
Errr... you really think the DESIGNERS cared about extra HTTPS hits??? They were probably told to put the login on the home page. Then, the sysadmins balked at the idea of an increased SSL load, but still said the login could be done securely if the form action was HTTPS.
The real problem is browsers. They should
Re: (Score:2)
A site should never lead the user to type sensitive information into a form on an unencrypted page, even if the form's data goes to an encrypted location when submitted.
I couldn't agree more. I have seen this a number of times and it really bothered me. Every time I came across this, I would check the HTML source to be sure the "action" attribute pointed at https (or could that still bite me somehow?). I haven't seen this bad behavior in awhile, though.
Re: (Score:2)
According to Moxie Marlinspike, (creator of SSLStrip), some browsers will re-download the page when you click view source. It would be plausible to convert https to http for the first request only, so if the user requests the source for checking the submission form, it would say https when they checked it.
Re: (Score:2)
More people need to use RFC3546 Server Name Indication [hightechsorcery.com] for SSL name-based virtual hosting. All major browsers already support it; the only barrier is Apache and OpenSSL. mod_gnutls for Apache, however, works perfectly.
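In case it helps to see what SNI buys you, a hedged sketch of name-based certificate selection with Python's ssl module standing in for the web server (certificate and key file names are placeholders; sni_callback needs Python 3.7+, older versions have set_servername_callback):
import socket, ssl

# Sketch of name-based virtual hosting over TLS via SNI: one SSLContext
# (certificate) per virtual host; file names are placeholders.
contexts = {}
for name in ("www.example.com", "www.example.org"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=name + ".crt", keyfile=name + ".key")
    contexts[name] = ctx

default_ctx = contexts["www.example.com"]

def pick_certificate(tls_sock, server_name, _ctx):
    # server_name is what the client asked for in the SNI extension (or None).
    if server_name in contexts:
        tls_sock.context = contexts[server_name]   # switch to that host's certificate

default_ctx.sni_callback = pick_certificate        # Python 3.7+

listener = socket.socket()
listener.bind(("", 8443))
listener.listen(5)
with default_ctx.wrap_socket(listener, server_side=True) as tls_listener:
    conn, addr = tls_listener.accept()             # the handshake here uses the cert picked above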
Even in the absence of RFC3546 [ietf.org] support, there are several workarounds:
Re: (Score:2)
You misunderstood my point. That's not the warning I'm talking about.
I'm talking about a far more selective warning [slashdot.org].
Re: (Score:2)
FTPS for example, which is FTP over SSL, is hardly used at all and instead protocols like SCP reign for secure FTP. If people would just stop what they're doing for a moment and realize that the issue is that we're using an inherently unsecure, stateless protocol wrapped inside a security blanket for trans
Re: (Score:2)
Let me guess: you think SMTP is to blame for spam, too.
In both cases, the protocol is not the problem. In fact, both protocols are well-designed and effective. Blaming the protocol is a gross oversimplification of the problem and a cop-out. If you were to do enough work to propose a solution, you'd realize that quickly. Any new protocol that did the same job as HTTP or SMTP would run into the same intrinsic problem
Re: (Score:2)
Though I think if you know that particular IP is that bank, you can still initiate a MITM attack by generating a certificate that violates the EV rules, and the crappy client will be more than happy to accept it.
Hey, IP-to-hostname in SSL is almost a one-to-one mapping, if it's not already. There are basically no virtual hosts on SSL websites...
Re:Sounds ugly (Score:5, Informative)
The attack breaks down two ways: proxying web traffic between a user and a sensitive site like a bank, and/or presenting a URL to a user that looks legitimate but isn't.
The indicators that you are on an SSL site are varied: a lock in the lower right of the window (FF3), to the right of the address bar (IE 6 and below), a green address bar (IE7 EV cert), or a green indicator to the left of the address bar (FF3). All except the EV SSL certs are pretty subtle. The success relies on the fact that there are so many varied ways that SSL protection is presented to the user; can you keep track of it all? Quick, which sites use EV certs? You don't know, so you don't know what to expect.
So, the attack does a couple of things to fool you. First it proxies your web traffic to secure sites re-writing urls that start with HTTPS to HTTP. The only indicator in browsers is no lock. If you are not looking for it, then you probably won't miss it. But wait, since we are rewriting URL's, why not replace the favicon with a lock. Yummy.
The second type of attack is to proxy HTTPS to HTTPS, but this time the SSL session between you and the proxy is enabled with a valid and trusted SSL certificate. No SSL dialog boxes. Here is how it works. IDN is used so that countries can represent URLs in their native character sets. Some non-ASCII characters look like ASCII characters. So use them to fool the user. These are called homographs. Browsers will convert some IDNs based on the TLD, but for other TLDs, like country-code TLDs, the browser won't. The assumption being a
Moxie's video is pretty clear.
Re: (Score:2)
Hm, the article and summary both list it as SSLStrip but the only software I can find on the site is SSLSniff [thoughtcrime.org], which appears to be it? Maybe it was renamed because the link as given in the summary redirects to the main page.
SSLStrip redirects to index now (Score:2, Informative)
Re: (Score:2)
OK good answer.
Thanks; my mistake. :)
Re: (Score:2)
There is nothing you, as a web application developer, can do to mitigate this problem other than educating users and busting the IT department's balls to get them to improve real network security.
(Well, you could use clever Javascript to try to heuristically detect funny business in the page location, but if you're important enough, an attacker will just strip the detection Javascript when proxying your site.)