Reverse Engineering a Bank's Security Token
An anonymous reader writes "An engineer from Brazil has posted a technical walkthrough of how he was able to reverse engineer his bank's code-generating security token. He found a way to accurately generate his unlock codes with some custom code and an Arduino clone. (Don't worry: his method doesn't give him access to anybody else's codes.) 'Every exception thrown by this piece of code is obfuscated, as well as many of the strings used throughout the code. That is a major roadblock, since exception messages and strings in general are a great way of figuring out what the code is doing when reverse engineering something. Luckily, their developers decided to actually show useful text when a problem occurs and an exception gets thrown, so they wrapped those obfuscated strings with a.a, presumably a decryption routine that returns the original text. That routine is not too straightforward, but it is possible to get a high level understanding of what it is doing.'"
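The article doesn't publish the bank's actual `a.a` routine, but lightweight schemes such as a repeating-key XOR are typical of string obfuscation in mobile apps. A hypothetical sketch of what a routine like that might look like (the key and scheme here are assumptions, not the bank's real ones):

```python
def deobfuscate(data: bytes, key: bytes) -> str:
    # Hypothetical repeating-key XOR: each byte of the obfuscated blob is
    # XORed with the corresponding byte of a short, cycling key. The same
    # function both obfuscates and deobfuscates, since XOR is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data)).decode()
```

Schemes like this defeat a naive `strings` dump of the binary but fall quickly once you find the wrapper call sites, which is exactly how the author proceeded.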
Re: (Score:3)
Ugh. Amateurs! Generate error codes, not descriptive strings! Asshats! Security isn't supposed to reward someone for reverse engineering something. Layers, man. Layers of security!
Uhhh right, because that's so much use to the customer when their security token simply says "error -382".
Re: (Score:1)
It's not supposed to be useful to the consumer. If they run into errors with authentication-related processes, they need to either contact tech support or try again later.
Detailed authentication error strings give an attacker free feedback and turn breaking in into a guided guessing game. You shouldn't even tell them that their password is wrong. "Login failed!" is more than enough (never confirm they had the correct username but a bad password).
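A minimal sketch of that advice: one generic failure message no matter which credential was wrong. The `USERS` dict and plaintext passwords are purely for brevity here; a real system would store salted password hashes.

```python
import hmac

# Illustrative only: a real system stores salted hashes, not plaintext.
USERS = {"alice": "correct horse battery staple"}

def login(username: str, password: str) -> str:
    # Unknown users fall through to the same code path as a bad password,
    # so the response never reveals whether the username exists.
    stored = USERS.get(username, "")
    if stored and hmac.compare_digest(stored, password):
        return "Welcome!"
    return "Login failed!"  # one generic message for every failure mode
```

`hmac.compare_digest` is used so the string comparison itself doesn't leak information through timing.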
Usernames are unique keys. Try registering. (Score:4, Insightful)
never confirm they had the correct username but a bad password
My bank must not have got the memo. After I submit my username, the login process presents an image associated with my account to prove to me that it is my bank and not a phishing site before I enter my password. Both GE Capital and Ally do this. Besides, one can verify whether a username corresponds to an account by attempting to register for an account under that username.
Re: (Score:2)
The point being that it restricts the attack to MITM, simple phishing no longer works.
Re: (Score:1)
But if you are at the phishing site, it can easily pass the information to your real bank and MITM the response back to you. I've written reverse-proxy filters that do this kind of thing for some of the internal third-party services I manage at my work. This kind of thing stops the really easy-to-make phishing sites (such as form-service-based phishing or phishing from compromised web pages), but anyone who is paying even the slightest amount of attention will see that these aren't the real banking site.
Re: (Score:1)
Pretty sure that's not actually to prevent phishing: the MITM attack to do phishing that way is too obvious. My guess is that the actual reason is to prevent you from accidentally locking someone else's account because bank systems often require you to call to unlock an account after a very small number of incorrect password attempts.
Also, it doesn't verify that a username exists: my bank will generate fake ones for non-existing usernames (tested by mashing on my keyboard to generate a username).
Re: (Score:2)
I'll bet if you clear cookies you will not get the intermediate image and must answer extra questions, after which the cookie gets set. Bank of America does this, too.
Re: (Score:1)
Administrators and developers are users too. I'd prefer to have the error code AND the description.
Re: (Score:2)
You shouldn't tell them their PIN is wrong and that if they get it wrong 2 more times, then their card is going to be blocked? Riiiiight....
Re: (Score:3, Insightful)
The code should not need to be hard to reverse engineer. A good cryptographic system requires nothing to be secret except the key. Obfuscation can be a layer, but more often it is used to hide shoddy algorithms.
Re: (Score:3)
That's security through obscurity, and it can often be extremely detrimental...
When a piece of code runs on a device the user controls, it's not a case of *if* it can be reverse engineered, but simply a case of how long it takes and whether anyone is sufficiently motivated.
So given that, what's more important is that the algorithm itself has no flaws, and the seed/key values it uses cannot be compromised, neither of which should ever depend on the code being obfuscated.
However, the obfuscation will deter/delay...
Re: (Score:2)
I doubt that the original source code used for development and debugging is obfuscated. Chances are they run it through a script that renames everything and removes any left-over debug data. It's a fairly standard thing to do.
In other words I doubt that they are relying on obfuscation for security, it was just a checkbox in the build system that costs them nothing.
Read between the lines (Score:1, Insightful)
He found a way to accurately generate his unlock codes with some custom code and an Arduino clone.
By itself, this isn't a bad thing. But the fact that they've obscured the crap out of their code suggests to me this wasn't done by a crypto expert, but by an insecure programmer forced by management to develop a solution in a field he didn't fully understand, and he did it homebrew. The overwhelming, vast, pitifully large number of attempts made by non-experts to do this result in a house of cards when it comes to security.
There are standard, tested, and amply documented alternatives available. It's just...
Re: (Score:3, Funny)
I will not disclose the name of the company tho..
In other words: Your post is security by obscurity too.
Re:Read between the lines (Score:5, Insightful)
They used a standard algorithm (TOTP from RFC6238, with a Time Step X of 36 seconds and a T0 of April 1, 2007). They obfuscated it for what amounts to no good reason, but there's no loss of security. The problem of preventing someone who controls the device from obtaining the key is the DRM problem, unsolvable without specialized hardware.
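The TOTP computation from RFC 6238 is short enough to sketch in full. The step of 36 seconds and the T0 of April 1, 2007 are the parameters reported in this thread; the HMAC-SHA1 hash and 6-digit output are assumptions (they are simply the RFC defaults):

```python
import hashlib
import hmac
import struct
from datetime import datetime, timezone

def totp(secret: bytes, now: float, t0: float = 0, step: int = 30,
         digits: int = 6) -> str:
    # Counter = number of complete time steps elapsed since T0 (RFC 6238).
    counter = int((now - t0) // step)
    # HMAC-SHA1 over the big-endian 64-bit counter, as in HOTP (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The parameters reported for the bank's app (SHA-1/6 digits assumed):
T0 = datetime(2007, 4, 1, tzinfo=timezone.utc).timestamp()
# e.g. totp(device_secret, time.time(), t0=T0, step=36)
```

Since the whole scheme is keyed only by `secret`, publishing the algorithm costs nothing; anyone with the seed could always compute the codes, and nobody without it can.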
Re:Read between the lines (Score:5, Informative)
Unsolvable even with specialized hardware; you just increase the costs for both yourself and any potential attacker... probably increasing your own costs far more than the attacker's.
Re: (Score:2)
Right. But a big problem is preventing some hostile app on the user's phone from obtaining the user's secret key used in the challenge/response algorithm. With the code reverse engineered, attackers now know where to go looking for that key and what to do with it when they have it.
Smartphones are not a secure platform. The carrier and Google (for Android) or Apple (for their phones) have total backdoor access. So does anyone who has their signing keys.
Re: (Score:2)
They obfuscated it for the same reason that passwords on disk should be hashed: to make it harder for idiots to figure out anything. If you really want to make life difficult you write self-modifying code, but I'm not sure that's possible anymore...
As an aside, I've wondered how mobile developers protect against someone decompiling their stuff and using their API key. I guess the answer is "they can't."
Re:Read between the lines (Score:5, Insightful)
This is security through obscurity at its worst,
I don't get that impression from reading TFA. It sounds like the implementation is mostly OK. Remember that all this generator is supposed to do is verify that you possess the token. Knowing the algorithm, so long as it is sound, shouldn't be a security problem - someone would still need to get their hands on the real token in order to clone it.
Now, had he figured out a way to divine the secret device ID from the generated codes, well now that would be bad.
Re: (Score:1)
It does break some assumptions though: For any token device, someone can clone it without your knowledge and abuse it.
Having the physical token is no longer the assurance against abuse by a third party that one would assume.
Re: (Score:2)
Such an assumption has always been false.
The problem is the obscurity of the code, if you don't know how it works then you can't be sure...
If you do know how it works (as mentioned above, TOTP from RFC6238) then you know that it can be cloned, but only if you have the initial seed values...
Knowledge is power, if you as a user know how the system works then you know what to protect, and you can more easily raise the appropriate red flags if you detect compromise of the seeds.
As a user I would never be happy...
Re: (Score:2)
But this "token device" is a smartphone, and the bank generator is just an app. You have to assume that physical access means security has been compromised, just as with any other computer. There is nothing on a smartphone that can't be cloned with Titanium Backup and friends plus a few minutes of time.
Re: (Score:3, Insightful)
There is still a problem here. Even though physical access is needed to clone the device, it should be prohibitively difficult to do so, otherwise you leave yourself open to an attack where, for instance, someone steals the token while you're sleeping or left it unattended at home and clones it, then replaces it.
They retain a valid access token, and you're not aware of it because you still have yours.
Re: (Score:2)
I doubt the bank is worried about that kind of attack. They likely see thousands of phishing sites trying to trick the user into entering the security code themselves, which is a much less risky form of fraud.
Re: (Score:2)
Now, had he figured out a way to divine the secret device ID from the generated codes, well now that would be bad.
Since he has duplicated the functionality of the device, including its ability to generate codes, the "secret device ID" is no longer secret. It also invalidates the security model that you need to be in physical possession of the token to access the account.
He has effectively copied a key that had "do not duplicate" stamped on it. This attack could be carried out against a customer and then used to impersonate them in the future.
This is not my definition of working security, and I'm disappointed...
Re: (Score:2)
It also invalidates the security model that you need to be in physical possession of the token to access the account.
That was not the security model. If that were the security model, the bank would require a dongle instead of an Android app. The security model is that he possesses a unique ID, a password, and a token (in this case, generated from his Android's ID). An attacker would need all three of those things to access his account. One can be read in plain text, one can be picked up with a keylogger, and the third by cloning his phone. Certainly not perfect security, but every piece makes an attacker's life more difficult.
Re: (Score:2)
Now, had he figured out a way to divine the secret device ID from the generated codes, well now that would be bad.
Worse than "bad".
Looking at the (admittedly obfuscated) screen grabs and the comments that say the bank provides RSA hardware tokens if anyone wants one, I reckon it's a software implementation of an RSA SecurID token, probably bought in directly from RSA. And if it's bought in from a third party, it follows that anyone else who has bought in the same product would almost certainly be vulnerable to the same issues.
Do I care? (Score:2)
Speaking of which, I've disputed charges on my credit card twice, and my credit card provider has made it quite painless. If my bank was that forgiving, I'd probably use my chequing account more than my credit account.
Re: (Score:2)
You are generally on the hook for at most $50 of fraudulent credit card charges, with any provider.
Re: (Score:2)
Contrary to popular opinion, there is intelligent life outside of the USA. For example, many of the credit card companies operating in Canada offer zero-liability protection.
Tokens? (Score:5, Funny)
Does this work on Chuck E Cheese tokens too? I need to feed my skee ball addiction.
Google Authenticator stores in cleartext..? (Score:2)
The most interesting comment for me was this:
This is not a security vulnerability or even criticism by any stretch. The bank's app is (arguably) more secure than Google Authenticator (which keeps secrets around in plaintext), and this article should be seen as praise for the bank's app, which does things the right way by (mostly) adhering to the TOTP standard, and protects its data as well as technically possible.
Yes, because any TOTP app must be able to read the secrets to generate the OTP, any encryption keys it uses must live inside the app as well, so it can never really be safe from cloning (unless it relies on a hardware encryption component, which these phones don't have). Still, storing in plaintext makes grabbing the token data particularly easy.
This is not a security breach (Score:4, Informative)
FYI: this is not a security breach. The algorithm is not supposed to be the secret. There are lots of Android/iPhone apps that do this, and most of them use HOTP or TOTP, which are IETF-standard algorithms. The security is in the secret key that is generated when you run the app the first time. This key is synchronized between the server and the key generator when it is set up.
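The counter-based variant, HOTP (RFC 4226), makes the shared-secret model easy to see: server and generator hold only the same secret and counter, and each can independently derive the same codes. A minimal sketch:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 64-bit counter (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte selects a 31-bit window.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides share only `secret`; the counter advances with each code used.
```

TOTP is the same construction with the counter replaced by the number of time steps since an agreed epoch, which is why knowing the algorithm reveals nothing without the key.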