1 Billion Mobile Apps Exposed To Account Hijacking Through OAuth 2.0 Flaw (threatpost.com)
Threatpost, the security news service of Kaspersky Lab, is reporting a new exploit which allows hijacking of third-party apps that support single sign-on from Google or Facebook (and support the OAuth 2.0 protocol). msm1267 quotes their article:
Three researchers from the Chinese University of Hong Kong presented at Black Hat EU last week a paper called "Signing into One Billion Mobile App Accounts Effortlessly with OAuth 2.0"... The researchers examined 600 top U.S. and Chinese mobile apps that use OAuth 2.0 APIs from Facebook, Google and Sina -- which operates Weibo in China -- and support single sign-on for third-party apps. The researchers found that 41.2% of the apps they tested were vulnerable to their attack... None of the apps were named in the paper, but some have been downloaded hundreds of millions of times and can be exploited for anything from free phone calls to fraudulent purchases.
"The researchers said the apps they tested had been downloaded more than 2.4 billion times in aggregate."
None of the apps were named (Score:1)
Very helpful to those who may be using them. Thanks guys!
Re: (Score:1)
Very helpful to those who may be using them. Thanks guys!
As usual they don't want to hurt anyones feelings.
Name the "apps" or get lost.
Re: (Score:2)
Almost everyone turned to oauth as the bastion of mobile security. You want a list of almost every mobile app that connects to a server?
This is not a protocol bug, but a common implementation bug in mobile apps relying on OAuth for authentication. So no, not "almost every mobile app that connects to a server" will be vulnerable... only all the poorly coded ones.
The problem is that the third-party app goes through OAuth, and the third-party backend server then trusts the app when the app says "yup, I checked with Google/Facebook/whatever, and this is user so-and-so", even though the app is running in an untrusted context. There are various w
Re: (Score:3)
Almost everyone turned to oauth as the bastion of mobile security. You want a list of almost every mobile app that connects to a server?
This is not a protocol bug, but a common implementation bug in mobile apps relying on OAuth for authentication. So no, not "almost every mobile app that connects to a server" will be vulnerable... only all the poorly coded ones.
Right, so pretty much all of them then.
Re: (Score:3)
Re: (Score:2, Funny)
Re: None of the apps were named (Score:1)
Identify which App Store(s) is/are affected or quit baiting me
Re: (Score:2)
You can be the victim of this attack even if you don't own a smart phone. The attacker uses an app to attack the service... and the attack still works, even if the victim only uses the desktop version of the vulnerable service.
Re: (Score:1)
Re: (Score:2)
We, rather urgently, need to protect ourselves from your annexation.
What makes me think we even want you?
Re:Hows the turnkey tyranny doing? (Score:4, Insightful)
I never thought GOP would work with Manafort, given his links to the Russian election strategists (and likely the hacks) and his involvement in the Ukraine takeover, yet they did exactly that
I'm going to tell you something, the average American doesn't care about Ukraine, or even Russia really, despite all the attempts at scaremongering in the last few months (the fact that the scaremongering didn't work is further evidence that Americans don't care about Russia).
Not only does the average American not care about Ukraine, they would also have trouble finding it on a map. Russia is easy because it's big.
Re:Hows the turnkey tyranny doing? (Score:4, Funny)
Most Americans can't find America on the map
Re: (Score:2)
Most Americans can't find America on the map
Where's the moderation for "Sad but true"?
they can't find Earth in our system (Score:2)
of planets.
ahaha
Re: (Score:2)
Most Americans would rather die than actually use their brains.
Most Americans believe whatever "news" they choose to consume tells them.
Most Americans think that by hard work and diligence, they can get ahead.
Most Americans think they are living in a democracy.
Most Americans never visit other countries, and if they do, I'm ashamed of how they act for the most part.
Back on topic: Does anyone remember when Yahoo account credentials were the defacto "Single Sign on"? They didn't track failed lo
Implementation not protocol (Score:5, Insightful)
Reading through the published paper, it's a flaw with the implementations, not the protocol itself, which is reassuring. It can be fixed by adding the missing checks, rather than having to replace OAuth2.
Re: (Score:1)
Re:Implementation not protocol (Score:5, Insightful)
This. One of the reasons I (as a sysadmin who understands protocols quite well, particularly HTTP) tend to shy away from things like OAuth 2.0 is that when I ask either front-end or back-end folks (or app folks) "so, can you explain to me how this works?", I have yet to encounter a single person who can actually explain it. Instead, the reaction is always: "look, it works, we use {Ruby gem XYZ,Python egg ABC,npm package HIJ}, who cares about the rest?". This is a mentality that troubles me greatly, and *not* how the same sector operated in the '90s.
Re: (Score:1)
Your message should be to the people who can't explain how it works, not to me (the AC). :-)
Re: (Score:1)
> Oauth 2.0 isn't all that complicated at all
I think this article proves conclusively that that's not true. At least it is consistently more complicated than what people can reliably implement.
The security community really needs to stop blaming widespread security failures on people other than themselves.
Re:Implementation not protocol (Score:4, Insightful)
I think the issue for a lot of sysadmins is that trends have ultimately resulted in them losing the practical ability to manage what the software is doing security-wise, while still being left accountable for mistakes. There is a great deal of pressure in the industry to be fast, and to be fast, you just let the developers own deployment of their own software, enabling various technologies to let the 'user' be 'root' in some special domain to give them freedom. However, somehow the admins stay on the hook for problems that arise from how that software is deployed, despite having no control over deployment.

So an admin in such a position is justified in quizzing the developers to make sure *they* understand what they are doing to themselves, and perhaps leading them to more deeply understand the lego-block modules they are haphazardly slapping together. Those modules are of widely varying levels of quality and commitment, with no good way to know at a glance whether it's a wise decision to use them or not. Even when they are done well, any tool used incorrectly can lead to trouble. Of course, in these cases the admin staff would take the heat, so the developers are actually making the correct call on their end, since they are shielded from those sorts of consequences.
I have seen a lot of this 'cobble stuff together' mentality. In my experience, nodejs is the worst (applications that on deployment just npm install the latest version of every little bit, and there are a *lot* of little bits people pull in because the javascript core is missing so many builtins), though every language with a package repository suffers to some extent. There's no longer any time for testing. People don't even mirror a known working copy of their libraries, instead just assuming latest is always greatest and never causes a problem, no matter how many times new problems smack them in the face.
That's not to say that there aren't a lot of good things in these trends, but there hasn't been enough interest in keeping the good bits of the way things used to work and *way* too much confidence in random anonymous peoples' development, support, and test skills and methodologies. If the developers are empowered, they should also be the ones to face consequences. The admin staff can be held accountable for the infrastructure bits they own, but generally speaking they have no real control over any facet of an internet facing service (in select environments, I do know a lot of places where the admins still manage things very thoroughly, much to the chagrin of the application owners).
Re: (Score:2)
OAuth 2.0 isn't all that complicated at all, and to be honest it is you that should be going out and learning it, not the developer. For most developers OAuth is just a library they have been told to use in order to secure their app, just like if you asked them what a syn/ack is they would also look at you dumbfounded.
Strongly disagree. I think (1) every developer who incorporates networking code (directly or through a library) should always understand exactly what the network protocol is, and (2) it's conceptually impossible to "secure" your app by incorporating something you don't understand.
Yes, I took two days out of my life to understand the OAuth2.0 for my web app. I didn't fully get to grips with the every possible OAuth2.0 flow; just the one that my app was going to use. I asked some security experts about bits t
Re: (Score:3)
OAuth2 isn't uselessly complicated. OAuth (version 1) was, because they wanted to not require HTTPS, but wanted all the security mechanisms HTTPS would have provided. OAuth2 requires HTTPS, and removed the complex handshaking required in version 1.
Re: (Score:2, Insightful)
Apparently, the 2.0 protocol's history has been fraught with heated disputes over its very design, with key people distancing themselves from the result. So, maybe you don't know what you're talking about.
Re: (Score:2)
They likely don't validate a signature at some step, which allows a man-in-the-middle.
Re: (Score:2)
This is all over HTTPS, so presumably a MITM is already very unlikely.
Unless apps are trusting self-signed certs or something along those lines.
Re: (Score:2)
Indeed. The attacker is running the app, and thus controls the list of trusted CA roots.
The bug is in the third-party backend, which trusts the app to do authentication.
Re: (Score:1)
That still doesn't excuse not identifying who screwed up so we can purge the offending apps. Who are they protecting, and why?
tl;dr Summary (Score:3, Informative)
I read the paper, here is my understanding:
In a normal web OAuth2 transaction, the access token does not pass through the user's browser; it travels directly between the app's site and the identity provider (e.g. Google/Facebook). In a typical mobile app OAuth2 flow, everything proxies through the provider's app (the Facebook app, for example), and the access token passes through (but does not seem to be stored on) the device on its way from the identity site to the app's site.
Therefore, if an attacker can install an SSL MITM service on the device to capture all network traffic, they can obtain an access token. In web-based OAuth2 this is impossible, because not all information passes through the user's browser, so even a malicious app on the user's machine cannot obtain the access token used to access the provider's (e.g. Google/Facebook) information on the user.
You do need to compromise the mobile device to have the SSL MITM proxy. But, the attacker could be the user themselves, who could impersonate the app's backend servers to the identity provider (like Facebook). I'm not sure what damage, if any, that could cause.
The mobile identity provider app can remedy the situation by better validation of requests from client app and responses from its own backend server. SSL certificate pinning helps but there are ways to subvert that via tools on Android to disable it, or modifying the provider app itself.
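A minimal sketch of the kind of server-side check this remedy implies: the app's backend verifies the token with the provider and checks every field, not just validity. The response shape below is modeled loosely on Facebook's debug_token output; treat the exact field names as an assumption for illustration.

```python
# Sketch of a backend-side token check, assuming an introspection
# response shaped like {"data": {"is_valid": ..., "app_id": ...,
# "user_id": ...}} (field names are illustrative assumptions).
def token_matches_user(introspection: dict, our_app_id: str,
                       claimed_user_id: str) -> bool:
    data = introspection.get("data", {})
    return (
        data.get("is_valid") is True                # not expired or revoked
        and data.get("app_id") == our_app_id        # issued to OUR app
        and data.get("user_id") == claimed_user_id  # and for THIS user
    )
```

Skipping any one of the three comparisons reproduces one of the bugs the paper describes.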
Some notes:
* I didn't notice anything in the article (I could have missed it) that explained how reasonable it is for an attacker who is not the user to install an SSL proxy on a mobile device.
* I don't understand why they didn't suggest, as a remedy, having the identity provider's backend send the access token directly to the app's backend servers, as it would in web-based OAuth2, or why mobile apps do it differently.
* In one of their scenarios they propose it's possible to subvert protections against the SSL proxy by reverse engineering and installing a modified version of the identity provider's app (like the Facebook app), but it seems to me that if you can install arbitrary apps you wouldn't need the proxy, as you could just modify the app itself to send the token to the attacker.
Attacker MITM's their OWN device (Score:5, Insightful)
The attacker doesn't need to man-in-the-middle the VICTIM'S device, they would MITM their OWN device. That is, I can pretend to be you by manipulating the traffic on my phone.
The TLS MITM stuff is really a distraction from the actual vulnerability, though. The real vulnerability is a couple flavors of the following:
I send a request to Facebook for an authentication token for my account, raymorris@slashdot.org. I get a valid authentication token, by which Facebook vouches that I really am who I say I am. I send that token to a third-party app, like this:
I am taco@slashdot.org and here's my Facebook authentication token affirming that I really am who I say I am.
The app checks that the token is valid, but doesn't check WHICH user it's valid FOR, and accepts it.
Other apps fail to check the validity of the token at all.
Because I've changed the token from "Affirmed, he is raymorris@slashdot.org" to "Affirmed, he is taco@slashdot.org", if the token is sent via TLS I have to MITM the TLS on my device, but that's a bit of a minor implementation detail.
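The two failure modes described above can be sketched like this (a toy model: the "token" dict stands in for a verified OAuth response, and all names are made up for illustration):

```python
# Toy model of the vulnerable vs. correct backend behavior.
def login_broken(claimed_user: str, token: dict):
    # BUG: checks that the token is valid, but never checks WHICH
    # user it vouches for -- the attacker can substitute any name.
    return claimed_user if token.get("valid") else None

def login_correct(claimed_user: str, token: dict):
    # The identity must come from the verified token itself and
    # match (or simply replace) whatever the client claimed.
    if token.get("valid") and token.get("user") == claimed_user:
        return claimed_user
    return None
```

With `login_broken`, a token issued for raymorris@slashdot.org logs the attacker in as taco@slashdot.org; `login_correct` rejects the mismatch.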
Re: Attacker MITM's their OWN device (Score:1)
But you wouldn't know who the token is valid for until you use it to get the user information. Normally you would present just the token and that's it. You never tell your email prior to authenticating with the OAuth provider.
That's the web version, not the app version (Score:4, Informative)
That's how it commonly works for web sites - the third-party site uses the auth token to retrieve the user profile.
With mobile apps, the system is commonly made faster by returning the user profile along with the signed token. That works fine IF the app checks two things a) the signed token matches the profile and b) the signed token is in fact verifiably signed by the correct identity provider. Forgetting either check then leaves the third-party app vulnerable.
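Checks (a) and (b) above can be sketched as follows, using HMAC-SHA256 as a stand-in for the provider's real signature scheme (an assumption for illustration; real identity providers use public-key signatures verified against their published keys, and the key and field names here are made up):

```python
import hashlib
import hmac
import json

# Illustrative stand-in for key material shared with the provider.
PROVIDER_KEY = b"key-shared-with-the-identity-provider"

def profile_is_trustworthy(profile: dict, token_payload: bytes,
                           signature_hex: str) -> bool:
    # Check (b): the token really is signed by the identity provider.
    expected = hmac.new(PROVIDER_KEY, token_payload,
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_hex):
        return False
    # Check (a): the signed token matches the profile sent with it.
    token = json.loads(token_payload)
    return token.get("user_id") == profile.get("user_id")
```

Dropping either branch reproduces one of the two vulnerable patterns: trusting an unsigned profile, or trusting a valid signature over the wrong user.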
OpenID Connect and similar (Score:2)
They're talking about OpenID Connect and similar extensions.
Re: (Score:1)
Forgetting either check then leaves the third-party app vulnerable.
Yet these "researchers" are leaving the user in the dark by not identifying the apps. That makes their paper pretty useless.
40% of apps is a long list and they fixed it (Score:3)
The researchers said two important things:
40% of the many apps they checked were broken.
They contacted the companies, who said they did/would fix it.
> That makes their paper pretty useless.
The paper is useful to app developers by telling them what problems to check for and fix in current apps, and avoid in future apps. It points out that framework and standards developers can reduce the risk by providing a known-good process. It's helpful to everyday users in that it points out that 40% (!!!) of apps a
Re: (Score:1)
It's just as important, if not more so, that the users know if their apps are vulnerable. It is irresponsible not to inform them.
Mostly same apps have same vulnerability on iOS (Score:2)
> People getting paranoid that their iPhones are putting them at risk can relax, (Maybe...).
Most assuredly not. Frequently the Android and iPhone versions of an app are compiled from the same source. If the source code doesn't include checking that the user name matches the token, it doesn't matter a bit which OS happens to be three layers underneath.
If the app developer has two sets of source code, one for Android and one for iOS, and forgets the check in one copy, they probably forgot the ch
There's a whole paper with details, more than TFS (Score:2)
It seems like you're trying to read a whole lot into one word in the summary. Linked in that summary is an entire paper which explains the details. However, it may not be understandable if you're not at least a little bit familiar with programming.
I've read and understood the paper. I'm a career internet security professional, so the paper makes perfect sense to me. I'm not speculating that the problem MIGHT be platform-independent, I'm letting you know it IS platform-independent. It's an easily missed
I can believe that (Score:1)
I've seen many implementations of OAuth in web apps. And everywhere I looked, one step was always missing: the verification of the token.
The token is a little XML fragment with information such as your e-mail address or public ID in the service that you are using for authentication. For example, a Google authentication token contains your Gmail address, and Twitter uses an integer, if I remember right. And it contains a digital signature to ensure the token wasn't created in Notepad. Websites will not try to check the