Security Cellphones

With Rising Database Breaches, Two-Factor Authentication Also At Risk (hackaday.com) 84

Two-factor authentication "protects from an attacker listening in right now," writes Slashdot reader szczys, "but in many case a database breach will negate the protections of two-factor." Hackaday reports: To fake an app-based 2FA query, someone has to know your TOTP password. That's all, and that's relatively easy. And in the event that the TOTP-key database gets compromised, the bad hackers will know everyone's TOTP keys.

How did this come to pass? In the old days, there was a physical dongle made by RSA that generated pseudorandom numbers in hardware. The secret key was stored in the dongle's flash memory, and the device was shipped with it installed. This was pretty plausibly "something you had" even though it was based on a secret number embedded in silicon. (More like "something you don't know?") The app authenticators are doing something very similar, even though it's all on your computer and the secret is stored somewhere on your hard drive or in your cell phone. The ease of finding this secret pushes it across the plausibility border into "something I know", at least for me.
The original submission calls two-factor authentication "an enhancement to password security, but good password practices are far and away still the most important of security protocols." (Meaning complex and frequently-changed passwords.)
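
For readers who want to see why a stolen seed database is so damaging, here is a minimal sketch of the standard TOTP construction (RFC 6238 over RFC 4226's HOTP) in Python. It is an illustration, not any vendor's implementation, and the base32 seed is a placeholder. Whoever holds that shared secret (the user's app, or whoever dumped the server's copy) computes exactly the same six digits.

    # Minimal TOTP sketch (RFC 6238 / RFC 4226). The seed below is a placeholder.
    import base64, hashlib, hmac, struct, time

    def totp(seed_b32, digits=6, period=30):
        key = base64.b32decode(seed_b32, casefold=True)
        counter = int(time.time()) // period               # 30-second time step
        msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))                        # phone and server agree

The symmetry is the whole problem: the verifier has to store the same key the prover uses, so the server-side seed database is as sensitive as every user's "second factor" combined.
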
  • It's too bad no major web service uses smartcards for client authentication except the DoD, and they're trying to get rid of them.
    • I read through the analysis. I was surprised to find that the server and client are both sharing the same password. Since the authentication is always asymmetric (the client is authenticating to the server), it seems like the rational way to do this would be for the server to send the one-time code encrypted with the client's public key. The client decrypts it with its private key and then puts those numbers on the screen for the user to type into the web site wanting the authorization code. (A rough sketch of this idea follows below.)

      the problem with th
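
      For the sake of argument, a rough sketch of that idea (this is not how TOTP or any deployed 2FA scheme works; it uses the third-party Python "cryptography" package and illustrative names only):

          # Server encrypts a fresh one-time code to the client's public key;
          # only the holder of the private key can read and retype it.
          import secrets
          from cryptography.hazmat.primitives import hashes
          from cryptography.hazmat.primitives.asymmetric import rsa, padding

          oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                              algorithm=hashes.SHA256(), label=None)

          # Enrollment: private key stays on the client, public key goes to the server.
          client_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
          client_pub = client_priv.public_key()

          # Server side: pick a random 6-digit code and encrypt it to the client.
          code = f"{secrets.randbelow(10**6):06d}"
          challenge = client_pub.encrypt(code.encode(), oaep)

          # Client side: decrypt and show the digits for the user to type back in.
          shown = client_priv.decrypt(challenge, oaep).decode()
          assert shown == code

      One trade-off: the server has to deliver the challenge to the device, so code generation stops being an offline process.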

      • If the app was communicating with the site, why require the user to enter anything at all? Just have them click a confirm button on the device. (This, incidentally, is how Google's 2FA works on some newer Android phones.)

        A major design criterion for TOTP is that code generation is an entirely offline process. Your phone can be completely offline, and it can still produce codes for you.

        SMS is, contrary to TFA's claims, no longer considered safe, as hijacking of phone numbers is feasible for a skilled and

    • by gweihir ( 88907 )

      This is the MBA "bean counter" mindset at work. With zero understanding of the risks or the subject matter, these people have been trained to squeeze out the last cent from everything that costs money. "Save a penny, lose a million (or much, much more)" is what this stupidity nowadays boils down to. There will eventually be a backlash and I hope all these morons will find themselves unemployed (hey, I can dream, can't I?) and things will change. But there will need to be a lot more catastrophes first.

  • You're making some assumptions when you say that the system only requires an OTP to function. In a business context there's typically a device registration process as well, separate from business-as-usual (BAU) password usage, to prevent exactly this sort of compromise - you essentially enable a device-level certificate that's used in combination with your OTP.

    At the consumer level, this is less common, granted.

  • Security has infinite bypasses. As soon as a new layer is added, another method appears to circumvent it. Life, uh, finds a way.

    https://media.giphy.com/media/... [giphy.com]

    • by gweihir ( 88907 )

      The aim is not to be absolutely secure. Thinking that is a beginner's mistake. The aim is to make attacks sufficiently expensive that attackers lose interest. The easiest way to see that is to stop thinking about this as a black/white security question and use the actually correct framework of risk management.

  • U2F FTW (Score:5, Interesting)

    by icknay ( 96963 ) on Sunday October 22, 2017 @09:49AM (#55413013)

    One big problem with 2FA codes is that they can be phished. U2F is the neat solution in this space (I'm not affiliated with them, just impressed with it). It's a little hardware key that...

    -not fooled by phishing
    -each site just gets a big random number at registration, so no user tracking from U2F
    -integrates with SSL/TLS to resist MITM
    -it's a free standard and the devices are cheap
    -Chrome supports it, Firefox support is now in beta, Microsoft has made noises about support, and Apple is a no-show thus far

    U2F https://en.wikipedia.org/wiki/... [wikipedia.org]
    FAQ: https://medium.com/@nparlante/... [medium.com]
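
    The anti-phishing property comes from binding each signature to the site's identity. Here is only a conceptual sketch of that idea (not the actual U2F wire protocol), using the third-party Python "cryptography" package; the origin string is a placeholder:

        # Conceptual sketch: the key signs the server's challenge bound to the site's
        # identity, so a phishing page on another origin gets a useless signature.
        import hashlib, os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec

        # Registration: the token mints a fresh key pair just for this one site.
        device_key = ec.generate_private_key(ec.SECP256R1())
        stored_pub = device_key.public_key()               # server keeps only this

        # Login: sign the server's random challenge together with the origin.
        challenge = os.urandom(32)
        origin = b"https://example.com"                    # placeholder relying party
        message = hashlib.sha256(origin).digest() + challenge
        assertion = device_key.sign(message, ec.ECDSA(hashes.SHA256()))

        stored_pub.verify(assertion, message, ec.ECDSA(hashes.SHA256()))  # raises if forged

    Because the browser, not the user, supplies the origin, a look-alike phishing domain cannot obtain a signature that verifies for the real site.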

    • U2F is too fallible. It assumes I always have a USB port available on a machine, and this may not be the case. Plus, with Google Authenticator TOTP seeds, I can back them up, so if I lose a device, I can restore them to something new [1].

      Maybe the solution is to have a TOTP protocol which takes the time, and instead of hashing it with the seed, does some crypto operation with a private key to sign it, and hashes it down to six digits, where the six digits can be validated by the server. Of course, the hard

  • Frequently changed (Score:5, Insightful)

    by Geoffrey.landis ( 926948 ) on Sunday October 22, 2017 @09:50AM (#55413017) Homepage

    "...good password practices are far and away still the most important of security protocols." (Meaning complex and frequently-changed passwords.)"

    Frequently changed?

    That has been proposed for security repeatedly, but I don't see this as a big help.

    (If I had to list one thing, it would be "not re-used for other platforms.")

    • by twobithacker ( 1093873 ) on Sunday October 22, 2017 @10:00AM (#55413041)
      NIST recently revised their recommendations and removed password expiration as a recommended practice. I generally think it's better to use a password manager, use a different password for every service, and change the password on that service when there's evidence of a breach.
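
      What a password manager does for each service is, at its core, just this (a minimal sketch; a real manager also encrypts the vault under a key derived from the master passphrase):

          # One long random secret per site, never reused.
          import secrets, string

          def new_password(length=24):
              alphabet = string.ascii_letters + string.digits + string.punctuation
              return "".join(secrets.choice(alphabet) for _ in range(length))

          vault = {site: new_password() for site in ("example.com", "mail.example.org")}
          print(vault)
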
      • by mark-t ( 151149 )

        Then the point of greatest vulnerability becomes whatever is protecting whatever keys or passwords that the password manager uses. A password manager adds no additional security by itself, and only is superior to using individual passwords in that it can be more convenient to use, but it is certainly not any more secure (arguably, it may be less secure, because all of your passwords are stored in one place, and if that is compromised, you have to change *ALL* of your passwords).

        Anyone can invent their

        • by sjames ( 1099 )

          On the other hand, password managers remove the barriers to longer more complex passwords and remove incentives to re-use passwords.

          Compared to a server, a single password manager presents a relatively low value target (but not no value, so actually secure it).

        • by Minupla ( 62455 )

          Then the point of greatest vulnerability becomes whatever is protecting whatever keys or passwords that the password manager uses. A password manager adds no additional security by itself, and only is superior to using individual passwords in that it can be more convenient to use, but it is certainly not any more secure (arguably, it may be less secure, because all of your passwords are stored in one place, and if that is compromised, you have to change *ALL* of your passwords).

          The big advantage for me is t

        • Your algorithm-based method is probably fine, as long as deep-state NSA types don't target you specifically as a high-value target. Then they might put in the resources to figure out your algorithm.

    • by schwit1 ( 797399 ) on Sunday October 22, 2017 @10:01AM (#55413043)
      NIST's recent password recommendations say frequent PW changes are not good practice.
      https://www.schneier.com/blog/... [schneier.com]

      NIST recently published its four-volume SP800-63b Digital Identity Guidelines [nist.gov]. Among other things, it makes three important suggestions when it comes to passwords:

      1. Stop it with the annoying password complexity rules. They make passwords harder to remember. They increase errors because artificially complex passwords are harder to type in. And they don't help [wsj.com] that much. It's better to allow people to use pass phrases.
      2. Stop it with password expiration. That was an old idea for an old way [sans.org] we used computers. Today, don't make people change their passwords unless there's indication of compromise.
      3. Let people use password managers. This is how we deal with all the passwords we need.

      These password rules were failed attempts to fix the user [ieee.org]. Better we fix the security systems.

      • by sdh ( 129963 )

        I always figured the forced password change was more a way to detect compromised passwords. This leaves an undetected attacker a month to look around, but then the account breach would be noticed or eliminated.

    • by Anonymous Coward

      In theory, frequently changed passwords make no sense. If someone guesses a complex password, it was either not complex, or it was obtained by asking the owner, or keylogged. None of these methods are prevented by frequent changes. And forced changes encourage non-complex, written-down passwords.

      • In theory, frequently changed passwords make no sense. If someone guesses a complex password, it was either not complex, or it was obtained by asking the owner, or keylogged. None of these methods are prevented by frequent changes. And forced changes encourage non-complex, written-down passwords.

        The latest NIST guidelines, SP 800-63B rev 3, specifically recommend against requiring regular password changes, except in cases of possible compromise. If a bad guy gets your password, they will use it soon, not wait three months. On the other hand, advice against writing down passwords is also flawed, as long as the paper is kept someplace reasonably safe, like a locked drawer or your wallet. The main threat these days is remote attacks. If a sophisticated bad guy gains physical access to your property,

    • This whole article depresses me, but the ignorance of suggesting complex, frequently changed passwords as the solution is the biggest indication of someone who does not know what he is talking about. We have known for many years that such an approach is counterproductive, because it virtually guarantees that passwords will be written down and saved in the browser.

      In fact, there is a need to get rid of passwords. Approaches like U2F and biometric methods are superior, but far from perfect. Sadly, passwords a

      • by Vairon ( 17314 )

        If you compare the article and originally submitted Slashdot story to what's posted now it appears that EditorDavid is the person who added the "(Meaning complex and frequently-changed passwords.)"

      • by mark-t ( 151149 )

        However, passwords remain a weak form of authentication.

        They are only weak when people choose weak passwords. An algorithm that you can perform without a computer, one that generates your passwords from some given key, can produce passwords that appear no less strong than the unmemorizable ones generated by random-password systems, and they are still no easier to guess simply by virtue of there being an algorithm, because nobody else actually knows exactl

    • by Kohath ( 38547 )

      Complex and frequently changing passwords are not something that works well for humans. Security should be designed for people to use. When it isn’t, people work around it.

    • by gweihir ( 88907 )

      Frequently changed passwords do not increase security. They do _decrease_ it. Smart security experts have known that for a long, long time. Those that do not understand security but only follow the rituals are clueless about this, as usual.

      Here is a reference that nicely sums this up:
      https://www.schneier.com/blog/... [schneier.com]

      Incidentally, it is also better to use a complex password and write it down than to use a simple one and remember that. Most attacks on passwords are over the net and not by stealing your wallet

  • Comparison (Score:5, Informative)

    by dissy ( 172727 ) on Sunday October 22, 2017 @10:14AM (#55413069)

    This should make it crystal clear as to the priorities of security for most companies and people.

    Pros for hardware tokens:
    - The private key is exceptionally difficult to extract, and in cases like RSA tokens, currently impossible.
    - Many protection features built into the hardware, such as the key being stored in RAM with a battery designed to disconnect upon disassembly, trace contacts that only maintain connection via points inside the enclosure and are disrupted by tampering, etc.
    - Expiration date enforced by battery life
    - Can't be copied, so it must be taken from you; the assumption is that the missing token will be noticed, at which point the entire keypair is removed from the trust chain.

    Compared with pros for software tokens:
    - Cheap, generally free

    That's it: just the cost. Everything above is given up in exchange for not having to pay for hardware.

    Now I'll be the first in line to say I wish RSA tokens were not as expensive as they are. In fact I'm certain I'd have to wait in that line right along with you.
    But despite the price you actually are getting quite a lot in return, and many things not possible to duplicate in software simply due to the nature of software.

    Phones and even computers that would be running that software are not designed with self-destruct capabilities in mind.

    The software requires saving the private key on hardware designed explicitly to be readable (hard drives, flash, etc.), and the key has to be installed into the app in the first place, meaning it is likely stored somewhere else from which it can be copied.

    Which leads to copying of the software/key as a possibility. With a hardware token I would need to deprive you of its use in order to use it myself, something that should be noticeable and set off red flags.
    One may say the same would be true for your cell phone, but the reality is I don't need to deprive you of your phone: I can simply copy the data off of it and/or access it remotely, or, since it is a multi-purpose device, use some other software running on it to get at that data (e.g. a web browser exploit you trigger yourself).

    All of those protection features get traded away completely in exchange for a lower price.
    Which really highlights exactly where security falls in the order of priorities when these software apps are used.

    Like with the "https everywhere" crowd, there definitely are situations where a software token makes more sense, but equally similar they tend to be edge cases that shouldn't require much security in the first place.
    Development work, educational purposes, setting up a test system to protect one server on your LAN from the rest of your LAN just to learn how the backend setup works before deploying the real deal elsewhere. etc.

    But for anything "real world" those hardware token features shouldn't be dismissed simply due to cost.

    • by AmiMoJo ( 196126 )

      The other massive benefit to software tokens is that you can have as many as you want, for free and with zero weight; no sorting through tags on your keychain, etc.

      In fact with well designed ones you can't use the same token for more than one site, so a single token being compromised doesn't compromise your 2FA on other sites.

      So while there are disadvantages to software keys, I'd say that on balance and human nature being what it is, they are probably a better solution for most people.

      • by Calydor ( 739835 )

        But how many do you actually need? And how many of them do you actually need while on the move, rather than sitting at your desk?

        • by AmiMoJo ( 196126 )

          I've got about 15 I carry with me. I need to carry them so that, for example, I can log in from work or from home.

      • by gweihir ( 88907 )

        Well, yes. Only that a "soft token" actually happens to not be a token in the sense required for 2-factor authentication, unless stored on an independent device. If you do not mind that little detail, then they are really quite convenient. Of course, just simply doing without that token would be even more convenient and cheaper and offer about the same level of security. But then, the "ritual" of 2FA would not be followed anymore and even the dumbest person might get a clue that it is not actually 2FA, but

    • Secure enclaves, as you recommend, also exist in the iPhone to some extent, so an app could definitely do "something similar"; given it doesn't have remote capabilities, it would be relatively secure. You could probably get a chip with similar properties on an Android.

      The problem is indeed cost. Even at $5 per dongle and a $50 plugin (USB) on the server side, the cost for a medium-size enterprise is easily a $100k investment once you include server-side hardware and people-cost, with an ongoing $15-5

    • by gweihir ( 88907 )

      Compared with pros for software tokens:
      - Cheap, generally free

      And that is the real problem: people who are willing to give up 2FA in order to save a few cents. The fascinating thing is that the customers who buy these 1FA products (which is what they are if both factors live on the same device) will strongly defend them as still being 2FA, despite it being utterly clear that they are not. We run into this regularly with customers and we routinely state that 2FA must be implemented with both factors being independent or it is not 2FA. Some customers then want that part of the report dropped (which we

  • I'm not worried about someone getting my password. I'm worried about someone getting the admin password (or hacking in) and stealing the whole database. A good example is the OPM hack. Same applies to hospitals. If I let a hospital keep my medical records in a huge DB, I assume eventually the whole db will be stolen.

    Passwords, two factor passwords, RSA fobs are something of a Maginot Line. Figure out how to walk around them and get the whole db.

    I'm thinking specifically about stealing data. I'm not ta

  • Not a new risk (Score:5, Insightful)

    by thegarbz ( 1787294 ) on Sunday October 22, 2017 @10:23AM (#55413101)

    The RSA tokens had exactly the same exposure as the apps. If you gain access to the database of token seeds, you know what code each token is currently generating.

    This actually happened back in 2011 https://arstechnica.com/inform... [arstechnica.com]

    • That means RSA tokens are in many ways worse than a well-chosen password (like the XKCD method [xkcd.com]) because at least in the case of a well-chosen password, the attacker will need to put some effort into discovering it when the password file leaks.
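
      For reference, the "XKCD method" boils down to picking a few words uniformly at random from a large list. A minimal sketch (the word-list path is a placeholder; any large one-word-per-line dictionary works):

          import math, secrets

          with open("/usr/share/dict/words") as f:          # placeholder word list
              words = [w.strip().lower() for w in f if w.strip().isalpha()]

          passphrase = " ".join(secrets.choice(words) for _ in range(4))
          print(passphrase, f"~{4 * math.log2(len(words)):.0f} bits of entropy")
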
      • Of course they are; they can be stolen. But you missed the point, this is not a "which is better" question. There's no reason to use an RSA token and NOT use a password. You should universally use both; that is the almighty number "2" in "Two-Factor Authentication".

        • Not going to matter if the password files get hacked.
          • Exactly. Unless the system is beyond stupid and reveals the password to be correct before proceeding to the second factor. No need to tell the hacker that the password is still valid. With 2FA you have the option not to validate until all parts have been submitted, and then you get either PASS or FAIL. Not knowing which factor failed (or whether both did) will increase the search space by orders of magnitude. You may need to try every single password in the world with every single possible username and with every singl
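
            A minimal sketch of that "one combined answer" idea (the SHA-256 comparison here stands in for a real slow password hash like bcrypt or argon2):

                import hashlib, hmac

                def check_password(stored_sha256_hex, submitted):
                    digest = hashlib.sha256(submitted.encode()).hexdigest()
                    return hmac.compare_digest(stored_sha256_hex, digest)

                def login(stored_sha256_hex, expected_code, submitted_pw, submitted_code):
                    pw_ok = check_password(stored_sha256_hex, submitted_pw)
                    code_ok = hmac.compare_digest(expected_code, submitted_code)
                    return pw_ok and code_ok   # single PASS/FAIL, no hint which factor failed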

  • A solution? (Score:5, Interesting)

    by Hrrrg ( 565259 ) on Sunday October 22, 2017 @11:12AM (#55413271)

    I've wondered why no one creates a USB/bluetooth device for security that has the ability to encrypt data based on a private key stored on that device. Then there would be no need for a database of private keys (nothing to be compromised except the device itself). I imagine it to work like this:

    A new device is manufactured and loaded with the private key. The manufacturer then deletes any record of the private key and loads the public key to a public database.

    When you need to identify yourself, the website sends data to your device for encryption. The data is encrypted by your device using your private key and returned to the sender. The website then decrypts the data with the public key. If it decrypts properly, then you are authenticated. The private key never leaves the device, and the public database could be protected by a blockchain to prevent tampering. (A rough sketch of the idea follows below.)

    Thoughts?
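
    Here is a minimal sketch of that flow, with the caveat that "encrypting with the private key" is really a digital signature; it uses the third-party Python "cryptography" package and a random challenge in place of the website's data:

        import os
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # Manufacture time: key pair generated on the device; only the public half is published.
        device_key = Ed25519PrivateKey.generate()
        published_pub = device_key.public_key()

        # Login: the website sends a random challenge, the device signs it ...
        challenge = os.urandom(32)
        response = device_key.sign(challenge)

        # ... and the website verifies against the public record. Nothing secret
        # ever sits in the website's database.
        published_pub.verify(response, challenge)        # raises InvalidSignature if forged
        print("authenticated")

    As the replies note, this is essentially what FIDO U2F already does.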

    • by Average ( 648 )

      That's baked. It's called FIDO U2F. (Almost) exactly what you're describing, but with a few plusses. Google refers to it as a 'security key'. The YubiKey is the best known model, but it's an open standard and there's a half-dozen manufacturers of U2F keys on Amazon. Limitations... only works in Chrome (and Firefox betas) right now, only available on a limited number of sites (but that does include some big ones like Google, Facebook, and GitHub).

      • by Hrrrg ( 565259 )

        No, I think you are wrong. From this website describing Yubikey:

        https://wiki.archlinux.org/ind... [archlinux.org]

        I found this:

        "Security risks
        AES key compromise
        As you can imagine, the AES key should be kept secret. It cannot be retrieved from the Yubikey itself (or it should not, at least not with software). It is present in the validation server though, so the security of this server is very important. "

        In other words, if the validation server is compromised, then everyone is screwed. This is the same vulnerability that th

        • by Average ( 648 )

          You're discussing the classic 'keyboard style OTP' Yubikey protocol (their first products, circa 2008). Still an available option in some devices, but decidedly legacy at this point. The U2F standard came along later (~2014) and is based on elliptic-curve PKI, not the shared-secret protocol in question.

        • by Vairon ( 17314 )

          Each Yubikey supports several security methods. It supports RSA with GPG/PGP, PKCS#11 and OTP. You are describing a flaw of OTP when an HSM is not used on the server containing the OTP key.

    • by Vairon ( 17314 )

      Companies such as Yubico create exactly what you are describing. The Yubikey 4 supports RSA 4096 for OpenPGP, PKCS#11, and the OTPs that this article talks about. Either of the first two features would do what you suggest. The problem with them is software. You need a browser and software on the OS (Linux, Windows, OS X) to interface with the Yubikey. The software doesn't even need to be Yubikey specific for all use cases. Some OSs will need a Yubikey driver; other OSs can use built-in drivers. For enrol

      • by Average ( 648 )

        I'm just a little fascinated that you know as much about Yubikeys as you do and don't seem to mention U2F mode. Which does pretty much what he needs. And, while it does need 'special software' (for now), the special software in this case is Google Chrome... the most popular browser in the world.

        • by Vairon ( 17314 )

          Other people, like yourself, had already mentioned it in previous comments. I was not trying to list every feature that Yubikeys have and I don't normally use Chrome. I was under the impression that PKCS#11 can already do what U2F does with Chrome but with more browsers. I distrust giving a web browser access to USB devices directly. I admit that I am ill informed concerning U2F so I am researching it now so I can properly determine the pros/cons of U2F vs PKCS#11 on Linux/Windows and multiple browsers.

  • by Todd Knarr ( 15451 ) on Sunday October 22, 2017 @12:14PM (#55413483) Homepage

    One thing: the big TOTP-key database isn't on your phone or computer or RSA fob/dongle. It's on a server run by whatever service you're authenticating with. And it's that database that's most likely to be compromised. That's true even for the RSA dongles, the host still has to have a database of all the keys so they can validate the code you entered against what your dongle should've generated.

    A better solution would be USB hardware-based 2FA using public-key cryptography, and those aren't too expensive. It's just that there's no big money to be made there: by their nature the dongles can't be a locked-in part of a proprietary system, so the big vendors and their salespeople have little reason to push for them.

    • by Vairon ( 17314 )

      I agree that USB-based 2FA that supports PKCS is better, but even OTPs can be secured on the server side. Yubico sells an HSM (hardware security module) for servers to store OTP keys on.

      https://www.yubico.com/product... [yubico.com]

      Other companies sell HSMs as well, but I have not evaluated which ones support OTPs specifically versus other HSM use-cases.

      • Unfortunately that HSM only stores 1024 keys. That's nowhere near sufficient for large-scale use. And Yubico makes PKCS 2FA dongles that can interface directly with the browser to make the whole thing seamless.

  • Let's just throw out acronyms and expect that everyone knows WTF you're talking about.

  • Two-Factor Authentication Also At Risk

    Three-Factor authentication. (Can't wait until this escalates ...)

    • by gweihir ( 88907 )

      It may be called "three factor" by some vendors, but 2FA implemented in a way that deserved that name (i.e. 2 different, independent factors, so no putting them on the same device or on two not independent devices) is unbroken. The problem is elCheapo implementations that are not actually 2FA, but 1FA that simulate being 2FA.

    • My LastPass Authenticator (which uses the same TOTP scheme as Google Authenticator) can be set to require a fingerprint or PIN. Wonder if that could be considered 3FA. Maybe add some geofencing so it can only be unlocked in a certain area... 4FA now.

  • I hadn't considered how a database breach would compromise 2FA by exposing OTP secrets. Today I'm going to implement encryption of our websites' OTP secrets with one of our master secret keys (which is never written to server secondary storage, and so is harder to steal). A rough sketch of the idea is below.
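
    A rough sketch of that approach, using Fernet from the third-party Python "cryptography" package (the environment variable name is made up; in practice the master key would come from a KMS or HSM, never from the database host's disk):

        import os
        from cryptography.fernet import Fernet

        # Generate once with Fernet.generate_key() and inject at startup, e.g. via the
        # environment or a KMS; the key itself is never stored next to the database.
        master = Fernet(os.environ["OTP_MASTER_KEY"])      # placeholder variable name

        def seal_seed(totp_seed_b32):
            return master.encrypt(totp_seed_b32.encode())  # store this ciphertext in the DB

        def open_seed(sealed):
            return master.decrypt(sealed).decode()         # decrypt only at verification time

    A database dump alone then yields ciphertext; the attacker also needs the key that lives only in the application's memory.
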
  • I think TFA's assertions are a bit stupid on their face. They are essentially saying "it's not something you have" because it can be reduced to an underlying secret key required to be guarded.

    This raises the question: what physical factor can possibly exist which, at its core, does not effectively guard a secret from unwanted disclosure? Guarding secrets is the way ALL "what you have" schemes without exception operate. You can take issue with an implementation or judge relative quality, yet to say it's not "so

  • Meaning complex and frequently-changed passwords.

    And with this, they undermine the whole thing by displaying little knowledge of what actually makes for good password security. They mistake complexity for entropy and for avoiding dictionary attacks / rainbow tables. Password changing doesn't make your password any harder to guess; it just helps limit the failure domain once it's compromised. But really, overlooking password reuse between sites (and the corresponding duplication of security question answers) is really the topper - that needs to be in your a
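
    To put numbers on the complexity-versus-entropy point, here is the back-of-the-envelope arithmetic (assuming truly random choices in both cases):

        import math

        complex_8 = 8 * math.log2(95)        # 8 chars drawn from ~95 printable ASCII symbols
        passphrase_4 = 4 * math.log2(7776)   # 4 words from a 7,776-word Diceware-style list

        print(f"random 8-char 'complex' password: ~{complex_8:.0f} bits")
        print(f"four random common words:         ~{passphrase_4:.0f} bits")

    Both land around 52 bits; the difference is that humans can actually remember the second one, and human-chosen "complex" passwords fall far short of the random ideal.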

  • ... which completely negated the value of the "physical hardware". Oh, the irony!

    Incidentally, 2-Factor _requires_ two different, independent devices by definition. And it only gives the added security if it is done according to the definition. Anybody claiming that it can be done on one device is essentially lying to their customers. If it is all on just one computer/phone/whatever or if the devices are not independent, for example a phone backed up to a computer, then it is not 2FA anymore. It is just mor

  • You can find the source for the topic of this post at the following site: https://pages.nist.gov/800-63-... [nist.gov]

    The updates are broken down into 3 sections, with section “b” being the most relevant to this post.
    https://pages.nist.gov/800-63-... [nist.gov]
    https://pages.nist.gov/800-63-... [nist.gov]
    https://pages.nist.gov/800-63-... [nist.gov]

    Extract from section 63b:

    When processing requests to establish and change memorized secrets, verifiers SHALL compare the prospective secrets against a list that contains values known to be comm
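
    The requirement quoted above is essentially a blocklist check at enrollment and password-change time. A minimal sketch (the filename is a placeholder for whatever breached/common-password list the verifier maintains):

        def load_blocklist(path="common-passwords.txt"):      # placeholder filename
            with open(path, encoding="utf-8") as f:
                return {line.strip().lower() for line in f if line.strip()}

        def acceptable(candidate, blocklist, min_length=8):
            return len(candidate) >= min_length and candidate.lower() not in blocklist

        blocklist = load_blocklist()
        print(acceptable("Password123", blocklist))           # False with any realistic list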

  • by darth dickinson ( 169021 ) on Sunday October 22, 2017 @08:04PM (#55415133) Homepage

    >The original submission calls two-factor authentication "an enhancement to password security, but good password practices are far and away still the most important of security protocols." (Meaning complex and frequently-changed passwords.)

    Complex, frequently changed... and written down on a sticky note.

  • We have had client-side certificates forever. They make HTTPS more secure, they make us safer, they solve most of our password problems. Why aren't we using them?

    Also the frequently changed complex password requirements make passwords less safe, not more.

    https://www.engadget.com/2017/... [engadget.com]
