Security Encryption

IETF To Change TLS Implementation In Applications

Trailrunner7 writes "The NSA surveillance scandal has created ripples all across the Internet, and the latest one is a new effort from the IETF to change the way that encryption is used in a variety of critical application protocols, including HTTP and SMTP. The new TLS application working group was formed to help developers, and the people who deploy their applications, incorporate the encryption protocol correctly. TLS is the successor to SSL and is used to encrypt information in a variety of applications, but it is most often encountered by users in their Web browsers. Sites use it to secure their communications with users, and in the wake of the revelations about the ways the NSA is eavesdropping on email and Web traffic, its use has become much more important. The IETF is trying to help ensure that it's deployed properly, reducing the errors that could make surveillance and other attacks easier."
  • Ok (Score:5, Insightful)

    by trifish ( 826353 ) on Sunday December 15, 2013 @04:09AM (#45693727)

    Just, please, this time, try to be more careful about who joins your working groups. And especially what their true intentions are.

    Sometimes when someone tries to "simplify deployment" or "offers insight to prevent user confusion", etc., you may want to think twice. History repeats itself, you know.

    • And open it up for everyone on the Internet to be able to review. That's the only way to avoid sabotage. GitHub it.
      • by Anonymous Coward

        GitHub? Those guys who censor repositories when they don't like it? No thanks.

      • by raymorris ( 2726007 ) on Sunday December 15, 2013 @09:57AM (#45694877) Journal

        This work is being done by the IETF, the Internet Engineering Task Force, an open organization that does most of its work via its mailing lists. Anyone can read the daily message archive or join. I was a member for several years, and you too are welcome to lurk or join and be active.

        The only caveat: please remember this is how Jon Postel, DJB, and others of similar skill got work done. Anything you post goes to the inboxes of many of the internet's primary architects, so please read for a while first to get a feel for how the group works, then contribute in your area of expertise. When posting, you're working with the world's top experts on internet technology, so please keep that in mind.

        • damn it - fat-fingered the informative upvote.
        • by Anonymous Coward

          I was wondering why you mentioned Jon, who died over 15 years ago, but then I realized I cannot think of any equally representative names from recent years either... does that say something about the IETF, or more about our capacity to anoint new leadership?

          • Vint Cerf is active with the IETF, or was when I was.
            When I was new, I very nearly publicly scolded him for posting off-topic. I composed a scolding email before realizing who I was scolding. When I saw that it was v.cerf@ I decided to let someone else, TB Lee or someone, do the scolding if necessary. Not that I'm averse to telling the emperor he has no clothes, but I was a total newbie, a baby compared to them, so I felt I should let the long-time members uphold their own code of etiquette in the way they had

    • by AHuxley ( 892839 )
      Re: History repeats itself, you know.
      I would suggest reading all about Enigma, Enigma after WW2, the NATO/embassy encryption with 'tempest' plain text, the later sales of weakened global crypto machines with junk math, early cell phones....
      All this was well understood into the 1990s.
      Snowden now fills in the missing US telco and US crypto gaps in US science/gov/academia.
      A lot of trusted junk telco tech and code seems to have passed with great reviews :)
      http://it.slashdot.org/story/11/06/06/2045203/25-of- [slashdot.org]
      • by mysidia ( 191772 )

        I would suggest reading all about Enigma, Enigma after WW2

        See, Enigma was pretty cool, but only the military units got the crucial plugboard feature -- and they all suffered a flaw: a letter could never encode to itself.
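
        A minimal sketch (not from the thread) of how codebreakers exploited that flaw: since no letter ever encrypts to itself, any alignment of a guessed plaintext ("crib") that matches a ciphertext letter in place can be ruled out. The ciphertext and crib below are illustrative, not historical.

        ```python
        # Rule out impossible crib alignments using Enigma's self-encryption flaw.
        def possible_crib_positions(ciphertext: str, crib: str) -> list[int]:
            """Offsets where the crib could align with the ciphertext.

            An alignment is impossible if any crib letter coincides with the
            identical ciphertext letter, because Enigma never encrypts a
            letter to itself.
            """
            return [
                offset
                for offset in range(len(ciphertext) - len(crib) + 1)
                if all(c != p for c, p in
                       zip(ciphertext[offset:offset + len(crib)], crib))
            ]

        # Example with the stereotyped weather-report crib.
        ciphertext = "QFZWRWIVTYRESXBFOGKUHQBAISE"
        crib = "WETTERVORHERSAGE"  # German for "weather forecast"
        print(possible_crib_positions(ciphertext, crib))
        ```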

    • I like the way Snrub thinks!

  • by d33tah ( 2722297 ) on Sunday December 15, 2013 @04:22AM (#45693761)
    Does this mean that we'll finally give up on this sick certificate-based trust scheme? It's not like Moxie hasn't proposed his own solutions, even with implementations... why don't we make THESE internet standards? Making encryption stronger is just pointless if you can fake a certificate.
    • Re: (Score:2, Informative)

      by Anonymous Coward

      The blame should not be put on using certificates, but on trusting an unknown certificate just because one of the 500 or so certificate authorities signed it.

      • Re: (Score:3, Interesting)

        by d33tah ( 2722297 )
        I agree. But that's what makes this model useless. We shouldn't outsource trust to CAs, but push it to the users. Let them decide whom they trust. If, after the VeriSign fiasco, they don't trust VeriSign anymore, they should be able to revoke that trust without losing the ability to view a quarter of the internet. Seriously, guys, go watch any of Moxie's talks and you'll understand the issue much better.
        • Let them decide whom they trust.

          It will fail right here. Most users cannot be trusted to make informed decisions.
          How could they, when they don't even know the difference between a hard drive, a modem, and the case housing the individual components?

          I prefer your solution but the industry is not moving towards "more user rights".

          • Re: (Score:2, Funny)

            by Anonymous Coward

            Maybe the government could run a CA, and then we all just trust that one CA.

            • by thogard ( 43403 )

              That is the ITU's current plan. It was also a core concept of the X.400/X.500 based email systems.

              • by mysidia ( 191772 )

                That is the ITU's current plan. It was also a core concept of the X.400/X.500 based email systems.

                Bleh... no one would use X.500 email systems, though.... Isn't Microsoft Exchange the only system that uses X.500 addressing for e-mail, with everyone else doing SMTP / RFC 8xx style Mailbox@Example.com email addresses?

                • by thogard ( 43403 )

                  GOSIP is still the required email system for the US DOD and other government agencies. It is just that SMTP is an allowed migration path, thanks to two words I added to a very large document a long time ago. There is no functioning X.400 email system that I know of. Exchange was catching up with the very broken ISODE, which had been the reference implementation for decades.

            • Pick a government, any government. There are about 200 to choose from, so that's not a huge improvement on 500-odd Certificate Authorities.

              Oh, you want me to trust your government. Now, why would I do that?

          • Knowing that sort of thing wouldn't help anyway, nor would any amount of technical knowledge. You need to know who everyone is, their intentions, their relationships and the intentions of those they have relationships with, their competence, and their past behaviour. That's too much to expect of nearly anyone who just wants their browser to do what they've asked (which is to connect to whom they've asked to connect to). A user just wants their computer to connect to their bank/shop/email and will object to it foisting that k
        • I agree. But that's what makes this model useless. We shouldn't outsource trust to CAs, but push it to the users. Let them decide whom they trust.

          Yes, most people are really good at making informed decisions based on complex technical information.

          Not.

      • by Anonymous Coward

        And various domestic and foreign intelligence agencies are bound to run certificate authority outfits.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Does this mean that we'll finally give up on this sick certificate-based trust scheme?

      No. There are many people and institutions that are fine with the existing scheme and are not interested in adopting new techniques to thwart the NSA or whomever. The US government, for instance, will not be adopting an anti-NSA mentality any time soon, so they're not going to walk away from traditional CAs. Many businesses see no jeopardy to their business model if they continue to use cryptographic techniques that are vulnerable to the NSA or other national governments; as long as those techniques are

      • by davecb ( 6526 )

        Organizations are generally more concerned about foreign governments, such as the one that got a "google" certificate from a nominally Dutch CA. If they get told "you may not do business with country X," they'll be specifically interested in being sure country X can't eavesdrop on them with a forged certificate.

        They're already quite aware certificates can be forged: many have forged their own to snoop on their employees.

        Businesses are failure-averse. If they need to adopt a new scheme for certification i

    • by Anonymous Coward on Sunday December 15, 2013 @06:06AM (#45694027)

      One of the major problems is that there are currently no limits on what a CA can sign, and even though there is an urgent need for a major revamp of the protocol, I would first like to see TLS 1.next at least fill that gap.

      Can someone please justify why, for example, Türktrust can sign a certificate for a *.gov or *.mil domain? Or why a Spanish CA issued a wildcard *.google.com certificate to someone?

      Limiting that should be a minimal short-term goal; implementation shouldn't be delayed for many years, but could start from the beginning of 2015.

      There are many ways to implement this. One would be adding OIDs to the root certificate stating the TLDs for which the CA is authorized, and then also verifying, via a DNS query to the party controlling the TLD, resource records for that CA indicating whether its policy is current and not revoked. The protocol could be something lightweight like DNSCurve, for example. But like I said, there are many ways of doing it. The hardest cases to solve would be those where no network connection exists before the certificate is offered, such as 802.1X/EAP, without chicken-and-egg problems.
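
      A minimal sketch of the DNS-side check described above. It is close in spirit to CAA records (RFC 6844), which let a domain state which CAs may issue for it; the sketch uses the third-party dnspython package, and the domain and CA identifier are examples only.

      ```python
      # Query CAA records to decide whether a CA may issue for a domain.
      import dns.resolver

      def ca_authorized(domain: str, ca_identifier: str) -> bool:
          try:
              answers = dns.resolver.resolve(domain, "CAA")
          except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
              return True  # no CAA records published: any CA may issue
          issuers = [
              r.value.decode().split(";")[0].strip()  # drop parameters
              for r in answers
              if r.tag == b"issue"
          ]
          return not issuers or ca_identifier in issuers

      print(ca_authorized("google.com", "pki.goog"))
      ```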

      IMHO, the newly founded working group should concentrate on longer-term development, but first things first: the big gaping hole in the current implementation should be fixed ASAP.

      Two years ago, a post applying to Mozilla for a root CA (Honest Achmed's Used Cars and Certificates [mozilla.org]) was funny, but not any more. There are so many incidents with falsely issued certificates, even root certificates, that they could have admitted Achmed and his brother (who knows a few things about computers) as a root, and the situation wouldn't have been much worse by now.

      • This... Further, it is not just that CAs can sign for any site; it is that sites can only ever use one CA. If you want to make CAs accountable (i.e., when one has awful security, or is buddies with people you don't trust), then groups and people need to be able to un-trust them without large parts of the internet going dark. Also, different people will trust different CAs. There is no CA that both the Chinese and American governments will trust. If you want to sell to both, you will likely nee
      • by mysidia ( 191772 )

        Can someone please justify why, for example, Türktrust can sign a certificate for a *.gov or *.mil domain? Or why a Spanish CA issued a wildcard *.google.com certificate to someone?

        Personally, I would favor requiring server certificates to be signed by a minimum of 3 CAs, perhaps by using a separate trust document file: a "third-party CA auxiliary attestation of certificate trust".

        The standard could then be --- at least two of the authorities must reside in different geographical juris

      • X.509 already has a name constraints extension. The problem with TLS is not necessarily its features or design, but that often solutions or upgrades become difficult to deploy because the standard for "this works" is "every device on the planet can connect", a standard that is often unreachable when you start thinking about buggy SSL stacks in embedded devices that never get upgraded.

        If you were willing to accept, say, a 10% error rate for old devices connecting to your server, you could do all kinds of upg
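
        For the curious, a minimal sketch (not from the thread) of the name constraints extension mentioned above, using the third-party Python "cryptography" package; the names, curve, and dates are illustrative only.

        ```python
        # Build a self-signed CA certificate whose NameConstraints extension
        # restricts it to names under .example.tld.
        import datetime
        from cryptography import x509
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec
        from cryptography.x509.oid import NameOID

        key = ec.generate_private_key(ec.SECP256R1())
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example National CA")])

        cert = (
            x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)  # self-signed for the sketch
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime(2013, 12, 15))
            .not_valid_after(datetime.datetime(2023, 12, 15))
            .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
            # The interesting part: a cert this CA signs for *.google.com is
            # invalid, because google.com is outside the permitted subtree.
            .add_extension(
                x509.NameConstraints(
                    permitted_subtrees=[x509.DNSName(".example.tld")],
                    excluded_subtrees=None,
                ),
                critical=True,
            )
            .sign(private_key=key, algorithm=hashes.SHA256())
        )
        print(cert.extensions.get_extension_for_class(x509.NameConstraints))
        ```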

    • So tell us, what else do you propose? CAs are useless at defending against intelligence services, but they do a pretty good job against basic internet crime. There have been instances of fraudsters getting certificates incorrectly signed through social engineering and the like, but such events are quite rare. A WoT model might work too, but it isn't going to offer much security against intelligence agencies either - it wouldn't be hard to influence various well-connected companies to endorse anythin

      • The CA model was never designed to do more than support Internet commerce. It was designed to be secure enough to exchange credit card information.

        CAs are not useless at defending against intelligence services; they are only vulnerable to being suborned by a limited number of such agencies, the ones that have plants inside them. And any defection is visible on the Internet. Hence the use of schemes such as Comodo CertSentry and Google's Certificate Transparency, which are designed to prevent covert subornati

    • by mysidia ( 191772 ) on Sunday December 15, 2013 @09:59AM (#45694897)

      Making encryption stronger is just pointless if you can fake a certificate.

      We should start by allowing certificates to have multiple signers.

      Instead of everyone trusting a small number of CAs --- the certificate should bear a number of signatures, and you should be able to score a certificate based on how much you trust the signers.
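
      A minimal sketch of that scoring idea, assuming made-up signer names and locally configured trust weights:

      ```python
      # Score a multi-signed certificate by summing local trust in its signers.
      TRUST = {"HonestCA": 0.9, "BigCorpCA": 0.6, "ShadyCA": 0.1}
      THRESHOLD = 1.5  # policy knob: total trust required to accept

      def certificate_score(signers: list[str]) -> float:
          """Sum the local trust weight of every recognized signer."""
          return sum(TRUST.get(s, 0.0) for s in signers)

      def accept(signers: list[str]) -> bool:
          return certificate_score(signers) >= THRESHOLD

      print(accept(["HonestCA", "BigCorpCA"]))  # True:  0.9 + 0.6 = 1.5
      print(accept(["ShadyCA", "BigCorpCA"]))   # False: 0.1 + 0.6 = 0.7
      ```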

      • I like this idea; the whole thing is so broken atm it's a joke. The only peeps who take it seriously these days are the ones without much tech knowledge.

        Not sure how much it would help, but this whole "WoT" idea is definitely the way forward.

      • X.509 already supports this, and complex, non-hierarchical trust schemes are frequently used.

        The problem is it doesn't make any difference because you still need to be able to connect to servers that are only signed by one CA, and you have no way to know ahead of time how many signers there should be for any given host. And if all clients accept one signer, why would anyone pay for two?

        This idea fails for another reason - many CAs validate your website's identity by connecting to it. If you take control of a

        • by mysidia ( 191772 )

          The problem is it doesn't make any difference because you still need to be able to connect to servers that are only signed by one CA, and you have no way to know ahead of time how many signers there should be for any given host. And if all clients accept one signer, why would anyone pay for two?

          My suggestion would be that browsers out of the box require a minimum of 4 current signers, for certificates issued after date X; and be configurable to require between 3 and 10 signers; at least one of the certi

    • by mysidia ( 191772 )

      In this video Moxie Marlinspike discusses the problem [youtube.com] and convergence.

      The trouble with Convergence, I think, is the reliance on online notaries, which become highly centralized single points of failure.

      Remember; for the most part --- users will just use their web browser's default settings.

      I believe for it to be highly scalable --- the web server must gather signed notary responses and provide these to the user for dissemination.

      The internet standards should focus on changing the nature of SSL certifica

      • The trouble with Convergence, I think, is the reliance on online notaries, which become highly centralized single points of failure.

        They don't, really. The great thing about notaries as opposed to CAs is that you can use as many of them as you want, and the client decides how to handle discrepancies and outages. So a browser could ship preconfigured with 8 independent notaries, and alert the user if more than four of them were down, or if any single one of them disagreed with the rest.

        In the same way, CAs can still act as authoritative notaries for domains they have signed. But now if they misbehave they can be instantly delisted, and u
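
        A minimal sketch (not from the thread) of that quorum logic; fetch_fingerprint is a hypothetical helper standing in for a real notary query.

        ```python
        # Ask several notaries what certificate fingerprint they see for a host
        # and flag disagreement or widespread outage.
        from collections import Counter

        def evaluate(host, notaries, fetch_fingerprint):
            seen = []
            down = 0
            for notary in notaries:
                try:
                    seen.append(fetch_fingerprint(notary, host))
                except OSError:
                    down += 1  # notary unreachable
            if down > len(notaries) // 2:
                return "warn: majority of notaries unreachable"
            if len(Counter(seen)) > 1:
                return "alert: notaries disagree about this host's certificate"
            return "ok"
        ```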

    • by Alarash ( 746254 )
    The more immediate problem is that a valid certificate, even if it's "reasonably" signed, costs quite a lot of money. The VeriSign Mafia and others took that market over, and there's no 'free' (as in beer or speech) standardized alternative. I'm working on a number of small- to medium-sized projects, and if I had to buy a certificate for all of them, that would add up to a hefty sum (the projects are free of ads and subscriptions, so I make no money at all from them). I wish I had a secure, safe and viable alter
  • by Anonymous Coward

    Both HTTPS and SMTP use TLS to encrypt but usually have you send the password "in plaintext" within the encrypted channel. That's a grievous mistake. You should never, ever transmit passwords.

    • by gweihir ( 88907 )

      That is BS. For example, it is perfectly fine to transmit passwords through SSH if the server authentication checked out.

      • It may be okay because the tunnel is encrypted with SSL or SSH -- but it is still not the best design. Better designs never send the password, or even the password hash, over the wire.

        At which point, you no longer have to care whether or not the tunnel is encrypted.
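
        A minimal sketch (not from the thread) of such a design: the server issues a fresh nonce and the client proves knowledge of a stored verifier with an HMAC, so neither the password nor its long-term hash crosses the wire. Real protocols such as SCRAM (RFC 5802) add salting and mutual proof on top.

        ```python
        # Challenge-response login: only an HMAC over a fresh server nonce
        # is transmitted, never the password itself.
        import hashlib, hmac, os

        password = b"correct horse battery staple"
        verifier = hashlib.sha256(password).digest()  # both ends hold this

        nonce = os.urandom(16)  # server: fresh random challenge per attempt

        proof = hmac.new(verifier, nonce, hashlib.sha256).digest()  # client side

        expected = hmac.new(verifier, nonce, hashlib.sha256).digest()  # server side
        print(hmac.compare_digest(proof, expected))  # True: constant-time compare
        ```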
        • by gweihir ( 88907 )

          There is no general "best design". For some cases this is perfectly acceptable and there is absolutely no general requirement to "never send the password". It depends on the scenario and in some it is perfectly fine and secure and the best solution.

          Using generalities like yours is a sure way to end up with an insecure design. That is about the only generality valid in the security field.

  • by Anonymous Coward

    http://blog.djm.net.au/2013/11/chacha20-and-poly1305-in-openssh.html

    still relevant information. :)
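
    For context, a minimal sketch of the ChaCha20-Poly1305 AEAD construction that post discusses, via the third-party Python "cryptography" package; the plaintext and associated data are illustrative.

    ```python
    # Authenticated encryption with ChaCha20-Poly1305.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()  # 256-bit key
    aead = ChaCha20Poly1305(key)
    nonce = os.urandom(12)  # 96-bit nonce; never reuse with the same key

    ct = aead.encrypt(nonce, b"attack at dawn", b"header")  # ciphertext + tag
    print(aead.decrypt(nonce, ct, b"header"))  # raises InvalidTag if tampered
    ```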

  • Consider this scenario: you're about to connect to a resource, but the service says you need to authenticate. So a browser-native login box pops up. You enter your credentials, and those are hashed with the session nonce to key the symmetric ciphers; the connection then commences, since the server already has a shared secret with you. It's not like HTTP Auth doesn't exist; it's just that TLS is ignorant of HTTP. If you have a secret GUID on the system, you can hash the domain name of the server with it t
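
    A minimal sketch of one reading of the parent's proposal: derive a per-site secret from a device GUID plus a master password, keyed by the server's domain, so no reusable password leaves the machine. All names and parameters here are assumptions, not the poster's design.

    ```python
    # Derive a per-site secret from a device GUID and master password.
    import hashlib, hmac

    device_guid = b"0f6e1c2a-guid-for-illustration"  # per-machine secret
    master_password = b"hunter2"                     # user-memorized secret

    def per_site_secret(domain: bytes) -> bytes:
        root = hashlib.sha256(device_guid + master_password).digest()
        return hmac.new(root, domain, hashlib.sha256).digest()

    print(per_site_secret(b"slashdot.org").hex())
    ```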

    • by mattpalmer1086 ( 707360 ) on Sunday December 15, 2013 @06:28AM (#45694097)

      Well, I can't really make out what you're proposing here.

      As far as I can see, the client side has three secrets to maintain - the GUID, master password and salt. If the GUID is unique to a computer, your accounts only work from a single machine, and if you lose the GUID then you lose access to all your accounts. Correct?

      The nonce is a "number used once" - i.e. randomly generated for each session in a cryptographically sound way.... so how do the server and client negotiate the nonce for each session? Does one pick it and encrypt it to send to the other? Do they both participate in picking it? Do they use something like Diffie-Hellman to arrive at the value?

      I really don't understand your point about changing the salt equals changing your logins without affecting your password. Do you mean if I wanted to lose access to all my accounts everywhere and begin again, I wouldn't have to change my password?

      And... how do you know you're talking to the right server in the first place? I don't see any server authentication at all in your proposal.

      That's enough for now. The one thing I've learned from studying protocols is that it's really, really hard to get right. Not because the people creating them are dumb or have malicious intent. It may well be time to start creating a new protocol to replace TLS eventually, using what we now know about trust, authenticated encryption, protecting the handshake and side channel attacks. And possibly using some new techniques in there, like identity-based encryption...

      • What he proposes, as best I can see, is moving website login away from being strictly the domain of HTTP, where it is separate from TLS, and instead making it part of the cryptographic authentication. So when you log into Slashdot, your password acts as a shared secret - even if an attacker is able to intercept and modify all communications, without the shared secret they couldn't generate the appropriate shared key and so couldn't decrypt or impersonate.

        Obvious weaknesses that I see:
        - Completely reworking the TLS/HTT

        • by dkf ( 304284 )

          What he proposes, best I can see, is moving website login away from strictly the domain of HTTP where it is separate from TLS, and instead making it part of the cryptographic authentication.

          On one level, that's trivial to do: turn on requiring client certificates in the TLS negotiation. The hard part is that users (people, really) really hate being exposed to having to know the details of security. This was discovered in depth and at length during the fad for Grid Computing ten years ago: the biggest hurdle by far was setting up users with proper identities. You could theoretically transfer the responsibility for that to a separate service, but then that gives a point where it much ea

      • by naasking ( 94116 )

        ID-based encryption is a terrible idea. There is no such thing as a public, unique, non-cryptographic name. See Zooko's triangle.

        • I'm no expert on id-based encryption, although I can just about understand how it works. It has some attractive properties as well as some serious downsides.

          Pros:
          * An encryptor can pick a public key at random for a recipient known to the decrypting authority.
          * No prior arrangement is required except for knowledge of the public parameters of the authority, and a recipient to send a message to.

          Cons:
          * The private key of the recipient can be calculated at any time by the decrypting auth

          • The problem with ID-based encryption is revocation. If someone loses their key, the best you can do is tell people that it is bad. And any mechanism that could tell you the key status could be used for key binding.

            So the only applications where it really works are low-level, device-type schemes where the crypto is installed during manufacture.

    • by Anonymous Coward

      One really nice thing to read about protocol design is "Designing an Authentication System: a Dialogue in Four Scenes"

      http://web.mit.edu/Kerberos/dialogue.html

      It's about how Kerberos was invented, and it's done quite nicely as a play, of all things.
