
Google Adds USB Security Keys To 2-Factor Authentication Options

An anonymous reader writes with this excerpt from VentureBeat: Google today announced it is beefing up its two-step verification feature with Security Key, a physical USB second factor that only works after verifying the login site is truly a Google website. The feature is available in Chrome: Instead of typing in a code, you can simply insert Security Key into your computer's USB port and tap it when prompted by Google's browser. "When you sign into your Google Account using Chrome and Security Key, you can be sure that the cryptographic signature cannot be phished," Google promises. While Security Key works with Google Accounts at no charge, you'll need to go out and buy a compatible USB device directly from a Universal 2nd Factor (U2F) participating vendor.

  • I wonder if I can go dig out one of my old C=64 application dongles to use... of course it would be disconcerting if I heard the read/write heads slamming against the side of my disk drives.
    • by Anonymous Coward

      Dongle bells, dongle bells, dongle all the way. Oh what fun it is to ride in the google headed sleigh

    • Not many PCs have 9-pin joystick ports.
      • Not many PCs have 9-pin joystick ports.

        You mean a serial port? I bet yours does and you didn't even know it.
        And if it doesn't? http://www.amazon.com/USB-9-pi... [amazon.com]

        • Re:Dongle Bells! (Score:4, Informative)

          by __aaclcg7560 ( 824291 ) on Tuesday October 21, 2014 @12:51PM (#48197233)

          You mean a serial port? I bet yours does and you didn't even know it.

          The OP mentioned Commodore 64 dongles that typically plugged into the 9-pin joystick ports, which were compatible with the Atari 2600 joysticks. The 9-pin connector used for the joystick ports was also used for serial ports on the PC, although I think that came later, as 25-pin serial connectors were still common on modems in the early 1980s. Early PCs had a 15-pin game port [wikipedia.org] on the old SoundBlaster cards. Don't recall if anyone made a 9-pin to 15-pin adapter to plug in the old Atari 2600 joysticks.

          And if it doesn't?

          None of my PCs have serial ports on them. I had to get a USB serial adapter to be able to console into my Cisco rack.

          • by Anonymous Coward

            The newer models come with a mini-USB type-B port just for console purposes. Perhaps it's time to upgrade.
            - Your Cisco sales rep

            • If you're studying for the entry-level Cisco certifications, you can use older routers and switches in your hardware lab. These require a rolled cable [cisco.com] for the console.
          • You mean a serial port? I bet yours does and you didn't even know it.

            The OP mentioned Commodore 64 dongles that typically plugged into the 9-pin joystick ports, which were compatible with the Atari 2600 joysticks. The 9-pin connector used for the joystick ports was also used for serial ports on the PC, although I think that came later, as 25-pin serial connectors were still common on modems in the early 1980s. Early PCs had a 15-pin game port [wikipedia.org] on the old SoundBlaster cards. Don't recall if anyone made a 9-pin to 15-pin adapter to plug in the old Atari 2600 joysticks.

            And if it doesn't?

            None of my PCs have serial ports on them. I had to get a USB serial adapter to be able to console into my Cisco rack.

            Your PCs probably have everything but the physical port for a serial port. You can buy the connector and slap it on if you give a shit, then cut a hole in the I/O shield (or your case) for it.

              Your PCs probably have everything but the physical port for a serial port. You can buy the connector and slap it on if you give a shit, then cut a hole in the I/O shield (or your case) for it.

              Actually, they don't. I got extra headers for USB (Universal SERIAL Bus) on my motherboards. Serial ports are so old school these days.

        • I haven't seen a 9-pin serial port on any of my last 6 computers.

        • You mean a serial port? I bet yours does and you didn't even know it.

          Considering that when I was looking at desktops back in 2007, even, I only ran across one that did, I'd take that bet.

  • by DigitAl56K ( 805623 ) on Tuesday October 21, 2014 @12:08PM (#48196803)

    Let me know when they start selling cheap NFC dongles so we can just tap our phone on them to log in. I'm sure our company would buy a bunch. 2-factor makes logging in to conference systems a pain in the ass - everyone always looks to the guy who doesn't use 2-factor to log in. I don't see how fumbling around with USB sticks is much better.

    • by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday October 21, 2014 @12:33PM (#48197061) Journal

      I don't see how fumbling around with USB sticks is much better.

      I use a YubiKey NEO-n [amazon.com]. It's a tiny device, only extends from the USB port by a millimeter or so... just enough that you can touch it to activate it. I just leave it plugged into my laptop all the time, so there's no "fumbling with USB sticks", I just run my finger along the side of the laptop until it hits the key. It's extremely convenient.

      There's an obvious downside of leaving the key plugged into your laptop, of course. If someone steals your laptop they have your key. However, in order to make use of it they have to have (or guess) your password as well, so it's really only a risk if someone is specifically targeting you, in which case they could also steal your phone. Well, it's also a problem if you use a particularly lousy password, and if you don't notice that the laptop/key are gone soon enough that you can disable the key before the attacker guesses your password.

      FWIW, Google switched to using security keys for corporate account authentication a while ago. Google's security operations team determined that the risk of theft of a security key is actually lower in practice than the risk that an employee's phone-based OTP might be phished. I would have thought that Google employees were too smart to be phished... but I suppose resistance to phishing attacks is as much about social intelligence as anything else, and Google hires a lot of socially inept people.

      • I don't see how fumbling around with USB sticks is much better.

        I use a YubiKey NEO-n [amazon.com]. It's a tiny device, only extends from the USB port by a millimeter or so... just enough that you can touch it to activate it. I just leave it plugged into my laptop all the time, so there's no "fumbling with USB sticks", I just run my finger along the side of the laptop until it hits the key. It's extremely convenient.

        That's okay for you on your laptop. When you go to a conference room with, e.g., a PC set up for conference calls, and someone needs to log in to pull up the hangout, it's a different story (don't even get me started on Chromebox for Meetings...).

        Here, having a little dongle sitting in the middle of the desk connected to the main system via USB would provide an easy option to provide at least the 2nd factor auth, without anyone typing in codes or plugging in additional devices. Lots of people walk into a co

        • by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday October 21, 2014 @01:52PM (#48197767) Journal

          That's okay for you on your laptop. When you go to a conference room with, e.g., a PC set up for conference calls, and someone needs to log in to pull up the hangout, it's a different story

          The proper solution for that problem is for the conference room PC to have its own account, which is invited to the hangout, rather than logging in with some individual's account. From a security perspective, having a device that lots of people log into is a bad idea; it's an ideal target for compromise, regardless of whether or not you use 2FA.

          FWIW (not much, I suppose, since it's not generally available), the way this works at Google is that conference rooms have their own accounts and calendars. Rooms are added to meetings in a manner very similar to adding guests. Each conference room PC has a small, connected tablet computer sitting on the table that shows the room's upcoming meetings. You tap the one you want and the room joins that hangout. If someone needs to present something from their computer they just join the meeting from their computer, generally with a different URL that only shares their screen and doesn't use their camera, microphone or speakers (or they can join the hangout normally, mute their speakers, disable their mic and then go into presentation mode). All of this also works for people without Google accounts; if they're invited to a meeting they get a URL that connects them to the hangout, and they can present if needed.

          It's very slick. IMO, Google should package the solution and sell it, because it's far and away the best VC system I've seen.

          • The proper solution for that problem is for the conference room PC to have its own account, which is invited to the hangout, rather than logging in with some individual's account. From a security perspective, having a device that lots of people log into is a bad idea; it's an ideal target for compromise, regardless of whether or not you use 2FA.

            I'm aware of "the proper solution" from an administrative perspective, and maybe what you suggest does work at Google. However, there is a vast difference between a company the size of Google and, say, a startup where people just "take" rooms as needed, or you have to find a free room for something at short notice, and moving the conference from one room to another in a hurry becomes a pain. As I say, I've "experienced" the Chromebox for Meetings in the startup setting, and I'm sure it would be great _if_ y

            • Can you elaborate on what the problems are? You described having a PC in each room... so I don't see what's difficult about uninviting one and inviting another when moving. As for the other things you mentioned... do you think there's no need at Google to find a free room at short notice, or move hurriedly from one room to another? Actually, of late at Google in Mountain View there is no finding a room at short notice or moving hurriedly... because if you didn't grab that room days in advance it's just not
              • Can you elaborate on what the problems are? You described having a PC in each room... so I don't see what's difficult about uninviting one and inviting another when moving.

                Sure. Imagine it's a recurring meeting that someone else owns, or a short-term meeting where you're not the owner and the owner is late or doesn't have their laptop with them, etc. How are you going to change the invitation list? You can't, and neither can anyone else on remote teams, so you're screwed until someone goes and creates a new meeting and re-invites everyone, then hope the Chromebox picks that up fast enough, or at all, because technically the meeting has already started. Oh, and then also hope

                • The ownership thing can be mildly obnoxious. It's fairly standard practice at Google to click the checkbox to allow all attendees to edit a meeting. Even without that, though, it's always possible to make the change on your own copy; no one else will see the change if they look, but you can add someone (or a room), and the meeting will be added to the appropriate person/room calendar. Maybe Google Calendar works a little differently externally... I wouldn't think that part would be different.

                  Doesn't the C

            • Oh, and BTW, thanks for the mention of Chromebox. I had to go look it up. I didn't realize Google was selling it.

              I wonder if I could get one for my home office...

      • by SeaFox ( 739806 )

        I don't see how fumbling around with USB sticks is much better.

        I use a YubiKey NEO-n [amazon.com]. It's a tiny device, only extends from the USB port by a millimeter or so... just enough that you can touch it to activate it. I just leave it plugged into my laptop all the time, so there's no "fumbling with USB sticks", I just run my finger along the side of the laptop until it hits the key. It's extremely convenient.

        Doesn't leaving the device plugged into your laptop all the time defeat the purpose of two-factor authentication? If someone steals your laptop they have your key now, same as if you left your one-time pad as a text document on the desktop.

        • I don't see how fumbling around with USB sticks is much better.

          I use a YubiKey NEO-n [amazon.com]. It's a tiny device, only extends from the USB port by a millimeter or so... just enough that you can touch it to activate it. I just leave it plugged into my laptop all the time, so there's no "fumbling with USB sticks", I just run my finger along the side of the laptop until it hits the key. It's extremely convenient.

          Doesn't leaving the device plugged into your laptop all the time defeat the purpose of two-factor authentication? If someone steals your laptop they have your key now, same as if you left your one-time pad as a text document on the desktop.

          I addressed this in the paragraph below the one you quoted, and a bit more in the paragraph after that.

  • by Opportunist ( 166417 ) on Tuesday October 21, 2014 @12:18PM (#48196909)

    What keeps me (or my malware, respectively) from opening a google page in the background (i.e. not visible to the user by not rendering it but making Chrome consider it "open") and fool the dongle into recognizing it and the user into pressing the a-ok button?

    A machine that is compromised is no longer your machine. If you want two factor, use two channels. There is no way to secure a single channel with two factors sensibly.

    • by Junta ( 36770 )

      Sure, that will get malware authenticated for that session. Realistically speaking, if the end device is compromised to the degree of having malicious intervening software, there's little that may practically be done. However, 'keylogging' does much less in this case. It can intercept the one time credential that was sent, but that credential is useless beyond that session.

      Compare that to the common state of the art where not only can malware run amok with the authenticated session, it can also report up

      • Technically, "real" two-factor authentication, with two different channels involved, requires an attacker to infect and hijack BOTH channels if he doesn't want the victim to notice it.

        As an example, take what many banks did with text message as confirmation for orders. You place the order on your computer, then you get a text message to your cell phone stating what the order is and a confirmation code you should enter in your computer if the order you get as confirmation on your cellphone is correct. That wa

        • by Junta ( 36770 )

          Well, two factor doesn't mandate two channels (for example a door access system that requires both a badge and a keycode is also two factor), but yes, two distinct devices needing to be hijacked is better. However, in your example that's not assured either. If the mobile device is used to access the website then it's still one device. There's no guarantee that the user used a different device to access the web and process the text message. It's at the discretion of the user to take care of their circums

          • No, there is no guarantee that the user will not use a mobile phone to access his online banking (and the idiocy of some banks pushing out mobile apps for online banking doesn't actually improve security in that area either).

            You can't make the user secure. You can only offer it to him and hope that he's intelligent enough to accept it.

            • by Junta ( 36770 )

              idiocy of some banks pushing out mobile apps for online banking

              Though that pales in comparison to having the secret number that lets someone take as much of your money as they want printed in the clear on paper checks, or stamped into a little piece of plastic that you share with anyone you give money to. If a mobile banking app would help me spend money in a more secure fashion at vendors, I'd gladly take it over a credit card to swipe. It could actually be substantially more secure than chip and PIN in some ways (e.g. the account holder fully controls the input and

    • I don't think it works that way.
      The dongle has a key.
      The site has a key.
      Depending on how this authentication is set up (I can't be bothered to check):
      Both sides send each other a challenge, which is combined with the time to calculate a response that is sent back (i.e. try it at 5pm and you'll get a different answer than at 10am).

      Both results have to match, as well as the user's username and password.
      So, for an attack to be successful, they'd have to breach the dongle, the website and the user. At that point it's kind of irrelevant wh

      • by Gr8Apes ( 679165 )
        I was thinking this was more a leave it plugged in dongle, so Google has guaranteed tracking of all you do. After all, why would Google do anything if it doesn't add to the bottom line?
      • The system you describe has been implemented often. Most often I've seen it with online games and the like where the main threat is the use of credentials by a malicious third party (i.e. some account hijacker stealing username and password, logging into your account and doing nefarious things with it). For that, you don't need a dongle. You need two synchronized devices that output the same (usually numeric) key at the same time. Basically you get the same if you take a timestamp, sign it using PKI and hav
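
        A minimal sketch of that kind of time-synchronized generator (this is essentially the TOTP approach with a shared secret rather than PKI signatures; the secret and parameters below are made up for illustration):

        import hmac, hashlib, struct, time

        def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
            # Both devices derive the same counter from the current time.
            counter = int(time.time()) // timestep
            digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            # Dynamic truncation as in RFC 4226, then keep the last few digits.
            offset = digest[-1] & 0x0F
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % (10 ** digits)).zfill(digits)

        shared_secret = b"provisioned-out-of-band"   # illustrative value only
        print(totp(shared_secret))                   # same on both devices right now, different at 5pm vs 10am

        Anyone who holds the shared secret can mint valid codes, which is part of why the hardware tokens keep the secret inside the device.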

    • The way banks do it (Score:4, Informative)

      by DrYak ( 748999 ) on Tuesday October 21, 2014 @12:44PM (#48197153) Homepage

      The way some banks do it is that the authentication asker (a 2FA-protected service provider) sends a signed/encrypted message that the security token decodes/verifies/displays. That message can't be tampered with (cryptography).

      So the token will display the message (something like "Authentication required to access GMail.com").
      So if an attacker tries to intercept your credential by opening an actual Google page in the background, you'll notice that what the thing pretends to be on screen and what the dongle registers as the asker aren't the same.

      The way to fool the user would be to try to actually look like the page you're trying to spoof. So an attacker needs to look like Gmail, so the user thinks he's on Gmail, whereas it's actually a malware page masquerading as it and relaying security tokens from the real Gmail.

      Now the way that banks counteract that is that any critical action (payment, etc.) needs to be confirmed again by the security token system. So the theoretical man-in-the-middle can't inject a payment of $10,000 to his Cayman Islands account, because every payment needs to be confirmed again, and the bank will issue a confirmation message describing the transaction.
      You'll notice if, when paying a phone bill, the confirmation message instead reads $10,000 to the Cayman Islands.

      Overall, it works as if the security token is its very own separate device, designed to work over a non-reliable, untrusted channel.

      (The device doesn't implement a full TCP/IP stack. Most example devices accept only:
      - a string of characters as input (i.e. you need to type the last five digits of the account you want to send funds to; the bank will notice when you type the digits of your utility company, but the man-in-the-middle has tried to inject a Cayman Islands account from your browser);
      - a 2D flashing barcode to automate string input;
      - for the craziest solution: writing a string to a file on a flash disk that is shared with the security token's microcontroller.
      Each time, the attack surface is very small. Only a short string of data is passed. There isn't much room for exploitable bugs.

      For the output, only a string again:
      - that you read and type from the token's screen.
      - that the token can type on your behalf, communicating with a HID chip on the same device.
      - the token can send it to a flash device that makes it visible inside a file.
      Again, the security token itself is limited to sending just a string. Very small attack surface. All the funny "stuff" is implemented outside, and thus there is very low risk of remote exploitability.)
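
      A rough sketch of that flow, assuming for illustration a symmetric key shared between the bank and the token (real deployments typically use asymmetric signatures and dedicated hardware; all names and values below are made up):

      import hmac, hashlib

      TOKEN_KEY = b"provisioned-into-the-token-at-issue-time"   # illustrative only

      def bank_sign(message: str):
          """Bank side: a human-readable message plus its MAC."""
          tag = hmac.new(TOKEN_KEY, message.encode(), hashlib.sha256).hexdigest()
          return message, tag

      def token_confirm(message: str, tag: str):
          """Token side: verify the MAC, show the text on the token's own display,
          and only then emit a short confirmation code for the user to type back."""
          expected = hmac.new(TOKEN_KEY, message.encode(), hashlib.sha256).hexdigest()
          if not hmac.compare_digest(tag, expected):
              return None                       # tampered message: token stays silent
          print("TOKEN DISPLAY:", message)      # the user reads this on the trusted screen
          confirm = hmac.new(TOKEN_KEY, b"confirm:" + message.encode(), hashlib.sha256)
          return confirm.hexdigest()[:8]        # short code typed back into the browser

      msg, tag = bank_sign("Pay 89.50 to Utility Co, account ending 41728")
      print("Code to type back:", token_confirm(msg, tag))

      A man-in-the-middle who swaps the payee can't forge a matching tag, and if he relays the bank's real message the token's own display gives him away.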

    • What keeps me (or my malware, respectively) from opening a google page in the background (i.e. not visible to the user by not rendering it but making Chrome consider it "open") and fool the dongle into recognizing it and the user into pressing the a-ok button?

      For one thing, if the tab with the malware-loaded page isn't on top, Chrome won't allow it to talk to the dongle. If there is some way to render a page that is not visible to the user but which Chrome considers sufficiently "open", that's a Chrome bug which should be fixed.

      A machine that is compromised is no longer your machine. If you want two factor, use two channels. There is no way to secure a single channel with two factors sensibly.

      You should have stopped after the first sentence, because two channels doesn't help. If the machine you're using is compromised, it's no longer your machine, period. This is true regardless of the authentication method being used. That sa

      • The second channel will not secure a compromised channel, but it will make it easier to detect it.

        There are various defenses against replay attacks, most of them relying on keys being tied to the current time and only being valid NOW but neither before nor after. But that is only good against a replay, it is quite useless when the attacker is manipulating your own communication. That has been the staple of attacks against banking software since the advent of the OTPs, and the only sensible defense against t

        • The second channel will not secure a compromised channel, but it will make it easier to detect it.

          Oh, you're talking about a completely separate channel, with no joining to the primary channel? That creates its own set of problems... when the user authorizes a login, how do we bind that authorization to the login the user is attempting, rather than a login from some other location? Without a join (e.g. entering OTP from second channel into primary channel, or vice versa), the attacker just has to figure out when the user is logging in, and beat them.

          There is very little you can do to combat malware infections unless you are willing to use a second channel.

          I maintain that a second channel doesn't really help,

  • How will the government help us correctly understand politics if they can't read our email?
    • What makes you think our government gives a shit about anything other than grabbing more power? Besides, they'll just lose the email in an unfortunate chain of hard drive crashes, and the firing of the email archive company.

      • What makes you think our government gives a shit about anything other than grabbing more power?

        I'm not sure that sentiment makes sense - or ever has. The Government already has *all* the power, should it wish to exercise it. They make, interpret and enforce (or not) all the rules. All it takes is good people not doing anything to stop bad people.

        For example, the Supreme Court recently decided that Freedom of Speech overrides any argument for a buffer zone around abortion clinics (and, I believe, other places), but strictly enforces a buffer zone around the steps of the Supreme Court.

        • The solution is a referendum declaring the Supreme Court is an abortion clinic, problem solved.
        • The Government already has *all* the power, should it wish to exercise it.

          Not true. It does have a lot, but only to the point where people don't rebel against it. The trick is to make the people happy enough that they don't rebel while getting as much power as possible. Monitoring everyone greatly increases this power, since you can squash dissidents (opponents of your power) much sooner; you can do this by labeling them the bogeyman of the time (currently terrorists) and imprisoning them for as long as you like without trial, or just assassinating them, of course.

          The scary thing is, I t

      • Well, the IRS has helped some tea party organizations understand they have incorrect political views.

        According to Edward Snowden, the NSA has been happy to help their significant others (and potential significant others) by keeping an eye on them.

        Some government offices in Ohio helped us learn more about Joe the Plumber's tax history (since he has incorrect political views).

        So you see, the government has lots of time to help. And what will they do with all this time if they can't read our emails?
  • Good way to spread BadUSB exploits.
  • Does anyone know if LastPass USB dongles qualify?

  • where to buy one from.

    "Your search - FIDO U2F Security Key -amazon - did not match any shopping results"

  • Why not use standard smartcards with client-side SSL certs for this? There's already a widely used cross-platform, cross-browser, hardware/software standard to do exactly this!

    • I was just wondering about the same. The benefit of smart cards (preferably USB dongles) is that there is an actual X.509 or PKCS#12 certificate on them. This means that one can use encryption as security. Use cases for smart cards:
      1. SSH
      2. OpenVPN or StrongSwan
      3. Encryption of harddrive
      4. SSL client certificate for web-browsing

      The dongles also lock themselves if I type the wrong PIN too many times.
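
      For use case 4, a minimal sketch of client-certificate TLS from Python (with a real smart card the private key never leaves the card and is reached through a PKCS#11 module instead of a key file; the host and file names here are made up):

      import ssl, urllib.request

      context = ssl.create_default_context()
      # A smart card would supply the key via a PKCS#11 engine rather than these files.
      context.load_cert_chain(certfile="client.crt", keyfile="client.key")

      opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=context))
      with opener.open("https://intranet.example.com/") as resp:
          print(resp.status)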

    • by Anonymous Coward

      FIDO U2F was supposed to use a public-private key auth system along with a JavaScript API. Looks like they chumped out and fell back on HOTP.

      The SSL peer cert is a non-starter because neither the client nor the server software ecosystem is set up to make it easy to manage (who here even knows how to get the peer cert from their server stack? Or whether you can do it without hacking the server framework? My stack can, but I wrote my entire server stack from the ground up, including massive Lua bindings to OpenSS

  • by icknay ( 96963 ) on Tuesday October 21, 2014 @01:04PM (#48197347)
    It's great that Google is trying to advance this. The attack to worry about is "Man in the Browser" (MITB): http://en.wikipedia.org/wiki/M... [wikipedia.org]

    MITB is the difficult case, and the way that bank accounts get emptied. The bad guy has malware on the victim's computer, and the malware puts up web pages, and of course it can just lie about the URL bar. So the bad guy puts up the fake bank web site, the victim types in the 2-factor code or whatever, and now the bad guy has it. Obviously Google knows about the MITB case. Does this thing have some sort of MITB mitigation? I'm guessing it does something. Hey Google, what do you say?

    The classical solution to MITB is that the little key has its own display, so it can show "Confirm transfer $4500 to account 3456" - showing the correct info to the "victim" even if their laptop is compromised. Basically, keeping the USB key itself free of malware is feasible, while keeping the laptop or whatever clean is not.

    • by icknay ( 96963 )
      Well I watched some low-content video, and it mentions the MITM case (I called it MITB, but whatever). However, there was zero actual information. I guess one way it could work is that the key and google.com have a shared secret, and this is used to bring up a channel between google and the key, and that channel can be secure even if the bad guy controls the browser. But then how is the browser UI resistant against the MITB attack, since obviously the browser is running outside of the key, and outside the
      • by robmv ( 855035 )

        Fido U2F overview [fidoalliance.org]

        • by icknay ( 96963 )
          Ah, thanks. From a quick read of the doc, it is focused on the MITM case. My read of the quote below is that the MITB case is, in fact, not solved. +1 for being honest and transparent. Still, it's progress for one common class of attacks (like say your government feeding you a fake gmail page). It would probably be better in their docs if they used the "MITB" terminology (hey, it has its own wikipedia page!) to be super clear about what is and is not solved. Ultimately, the MITB solution dongle will probabl
    • The security key won't respond if it doesn't receive the right message from the website. Some detail here: http://fidoalliance.org/specs/... [fidoalliance.org] (See section 6, page 11)
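
      Roughly, the key's signature covers the origin the browser reports along with the server's challenge, so an assertion produced for a phishing site simply won't verify at Google's end. A simplified model of that check (not the actual U2F wire format; it uses the third-party cryptography package, and all values are illustrative):

      import hashlib, json, os
      from cryptography.hazmat.primitives.asymmetric import ec
      from cryptography.hazmat.primitives import hashes
      from cryptography.exceptions import InvalidSignature

      REAL_ORIGIN = "https://accounts.google.com"
      device_key = ec.generate_private_key(ec.SECP256R1())    # lives inside the token

      def token_sign(origin: str, challenge: bytes):
          # The token signs a digest bound to the origin the browser reported.
          client_data = json.dumps({"origin": origin, "challenge": challenge.hex()}).encode()
          digest = hashlib.sha256(origin.encode()).digest() + hashlib.sha256(client_data).digest()
          return client_data, device_key.sign(digest, ec.ECDSA(hashes.SHA256()))

      def server_verify(client_data: bytes, signature: bytes, challenge: bytes) -> bool:
          data = json.loads(client_data)
          if data["origin"] != REAL_ORIGIN or data["challenge"] != challenge.hex():
              return False                                     # phished origin or replayed challenge
          digest = hashlib.sha256(REAL_ORIGIN.encode()).digest() + hashlib.sha256(client_data).digest()
          try:
              device_key.public_key().verify(signature, digest, ec.ECDSA(hashes.SHA256()))
              return True
          except InvalidSignature:
              return False

      challenge = os.urandom(32)
      cd, sig = token_sign(REAL_ORIGIN, challenge)
      print(server_verify(cd, sig, challenge))   # True; with a phishing origin it prints False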

    • I think Bank of America tries to solve this by having an Account Image that you are supposed to recognize before entering your particulars. The MITB would need to know which image to show me as well as pop up what looks like a real sign in screen.

  • Can't wait to get my hands on one of these. Unfortunately Amazon doesn't ship to Poland so I can't get it here. I have two concerns regarding this:

    I understand this is an amateur-class device. Better (or is it?) than the Authenticator app, since you need to obtain the physical key whereas a phone app can be accessed remotely, at least in theory, but still not hard security like corporate smart cards, RSA tokens, etc. Just hardware two-factor auth for the masses, and I guess it is a good thing to have (or is it?). But as I see t

  • I use Google Authenticator for quite a few sites, not only the Google ones.

    After reading the links here I'm under the impression that any site outside Google will not work with this method and I'll have to continue using the Authenticator app on my phone. Is that correct?
