Google's Obfuscated TCP

agl42 writes "Obfuscated TCP attempts to provide a cheap opportunistic encryption scheme for HTTP. Though SSL has been around for years, most sites still don't use it by default. By providing a less secure, but computationally and administratively cheaper, method of encryption, we might be able to increase the depressingly small fraction of encrypted traffic on the Internet. There's an introduction video explaining it."
Comments Filter:
  • by vux984 ( 928602 ) on Tuesday October 07, 2008 @08:07PM (#25294399)

    Firefox isn't helping the lack of SSL on the web by throwing a ridiculous warning when using self-signed certs. Browsers should treat self-signed certs as 'unsigned, with the added bonus that communications can't be eavesdropped on' instead of freaking out that you might not know who you're talking to.

    Self-signed certs aren't appropriate for processing credit cards... but not every site that has forms needs that... and simply removing eavesdroppers would be a step in the right direction.

    • by 0123456 ( 636235 ) on Tuesday October 07, 2008 @08:12PM (#25294441)

      "simply removing eavesdroppers would be a step in the right direction."

      Yes. Whereas self-signed certs let the eavesdropper send you a certificate which makes you think your connection is secure when in reality they're listening to everything you send.

      • by Kjella ( 173770 ) on Tuesday October 07, 2008 @08:21PM (#25294513) Homepage

        By definition, that's more than an eavesdropper. You're then actively intercepting and rewriting the connection, which is a lot more complicated to do in volume, plus detectable by comparing fingerprints. Just copying the stream for the NSA, on the other hand, is trivial and impossible to detect. But hey, pick no security because the alternative is imperfect.

      • by Free the Cowards ( 1280296 ) on Tuesday October 07, 2008 @08:22PM (#25294519)

        The point being that this is the actual security hierarchy, from best to worst:

        1. SSL with cert signed by a trusted certificate authority
        2. SSL with self-signed cert
        3. Plain HTTP

        Whereas most web browsers make it appear like this:

        1. SSL with cert signed by a trusted certificate authority
        2. Plain HTTP
        3. SSL with self-signed cert

        Any browser that warns you about self-signed certs should make at least as much of a fuss about using plain HTTP, but they don't. Firefox takes it to ridiculous extremes but they're all faulty in this respect.

        And really, if browsers would save the self-signed cert and then alert me if it changes the way SSH does, then the result will be very good, nearly as good as a regular cert (and potentially even better, since there's no potential for compromising the trusted certificate authority).
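A minimal sketch of that SSH-style trust-on-first-use idea, in Python. The names `pin_store` and `check_certificate` are hypothetical, and a real browser would persist the store on disk and hash the server's actual DER certificate:

```python
import hashlib

# Hypothetical trust-on-first-use pin store: hostname -> SHA-256 of the
# DER-encoded certificate. A real browser would persist this to disk.
pin_store = {}

def check_certificate(hostname, der_cert):
    """Pin the cert on first contact; warn only if it later changes."""
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    pinned = pin_store.get(hostname)
    if pinned is None:
        pin_store[hostname] = fingerprint  # first visit: remember quietly
        return "pinned"
    if pinned == fingerprint:
        return "match"                     # same cert as last time
    return "MISMATCH"                      # cert changed: possible MITM

print(check_certificate("example.com", b"der-bytes-1"))  # pinned
print(check_certificate("example.com", b"der-bytes-1"))  # match
print(check_certificate("example.com", b"der-bytes-2"))  # MISMATCH
```

The first visit pins the certificate silently; only a later change triggers the warning, which is exactly when a man in the middle would have to show up.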

        • by Sancho ( 17056 ) *

          The thinking behind the current browser behavior is that while self-signed certs provide encryption, they do absolutely nothing to verify that the remote host is who it claims to be. Providing a lock symbol (which, over the years, security professionals have tried to train users to trust) when there is nothing even resembling validation does a disservice to the user. There is no need to make such a fuss over plain HTTP because users have been trained not to send credentials over plain HTTP. T

          • by Free the Cowards ( 1280296 ) on Tuesday October 07, 2008 @09:11PM (#25294901)

            So stop displaying the lock symbol! Nothing requires you to treat "real" SSL and self-signed SSL identically. It should be obvious that the current standard approach of making them look exactly the same except for a scary warning that appears the first time you hit a self-signed site is broken. But nobody cares about doing better because it's the "standard".

      • by Zadaz ( 950521 ) on Tuesday October 07, 2008 @08:23PM (#25294535)

        Whereas self-signed certs let the eavesdropper send you a certificate which makes you think your connection is secure when in reality they're listening to everything you send.

        aka: "Whereas having a keyed lock on your door lets a thief pick the lock and steal everything inside."

        Therefore we should make it less convenient to put locks on doors.

        • Straw man: the keyed-lock argument is easier to knock down, and is only naively analogous to the SSL problem.

          We are instead talking about a keyed lock where I, as an attacker, can walk up to your house after you leave and use a key with a specially shaped tip to prime the lock for accepting a new key (in addition to the old key -- this is not technically identical, but from a user-interface point of view it is the same interaction). I can then unlock your door with my completely random key; when you get home,

    • by rsmith-mac ( 639075 ) on Tuesday October 07, 2008 @08:21PM (#25294507)
      Every time a Firefox or SSL/TLS article comes up, we go over this again and again. SSL/TLS is both an encryption and authentication scheme; it sucks, but that's what the spec says it is. Firefox can't go off and do its own thing, lest someone start exploiting the fact that their implementation of SSL/TLS is no longer an authentication scheme and start taking advantage of people who expect otherwise. The W3C needs to separate authentication and encryption in the standards themselves; that's the only proper and safe way to change things.
      • What, the protocol spec says "thou shalt have such-and-such a user interface"? It completely forbids the application determining "the protocol can provide X and Y, but in this case we only have X and not Y", and telling the user what we actually have rather than what the protocol we're using could theoretically provide? If so... that's really very stupid, and maybe people should ignore it.
    • The problem (I think) with treating self-signed certificates as 'unsigned with the added bonus that communications can't be eavesdropped' is that it would rely on site owners not asking for sensitive information while using a self-signed cert.

      Most users are too dumb to check for SSL, good luck getting them to discern insecure, 'insecure but can't be eavesdropped', and secure. Hell, most users would be shocked to find out you can eavesdrop on their traffic in the first place.

      • by vux984 ( 928602 ) on Tuesday October 07, 2008 @10:11PM (#25295357)

        Most users are too dumb to check for SSL, good luck getting them to discern insecure, 'insecure but can't be eavesdropped', and secure.

        Fair enough. So don't put the secure green lock up for self-signed SSL. Put up a totally different icon in some neutral color like blue. If they click on it, it says: the connection is encrypted and can't be eavesdropped on, but there is no guarantee you are talking to who you think you are.

        Hell, most users would be shocked to find out you can eavesdrop on their traffic in the first place.

        Good point! Maybe Firefox 3 should pop up a huge error screen every time you try to connect to a site with plain HTTP. It could say something like:

        The server you are connecting to is insecure. Maybe there is a configuration error on the server. Or maybe someone is trying to impersonate it. Oh, and by the way, not only that, but any communication with it may be trivially intercepted by any third party...

        Are you sure you want to communicate with them?

        Then it could have friendly buttons like:

        "Hell no, get me out of here." or "OK, I don't mind getting pwned!"

    • by defaria ( 741527 ) <Andrew@DeFaria.com> on Tuesday October 07, 2008 @08:29PM (#25294585) Homepage
      There's an ambiguity to SSL certs. They do two things at once. They 1) prove that the person who has the cert is that person through a certificate authority and they 2) provide for encryption. Why not simply have grades of SSL? A self signed cert could then allow encryption and say perhaps show a yellow padlock whereas a CA signed cert could provide for encryption and provide CA authentication and give a green padlock or whatever. What's so freaking difficult about that?
      • What's difficult is that if I'm technically capable of eavesdropping, I'm technically capable of MITM, and thus a self-signed certificate doesn't add any extra security.

        It's very difficult to provide an explanation. Think about car locks, right? Car locks aren't secure; I can put a brick through your window and steal your shit. People mostly don't, so the (straw man) argument is that there's increased security, and that self-signed SSL certs would likewise increase security.

        But if you consider the differences here you should notice that someon

        • Re: (Score:3, Insightful)

          by vux984 ( 928602 )

          What's difficult is if I'm technically capable of eavesdropping, I'm technically capable of MitM and thus a self-signed certificate doesn't add any extra security.

          So have the browser treat it as being unsigned. Don't do anything special. Don't put up a big green lock. Don't make a fuss. Even if it's not really MORE secure, it's certainly not LESS secure, so Firefox at WORST should treat it exactly the same as plain HTTP.

      • by KermodeBear ( 738243 ) on Tuesday October 07, 2008 @10:32PM (#25295507) Homepage

        I dunno. I just click "Okay" until the windows go away and I can see the website.

      • Re: (Score:3, Insightful)

        by The Moof ( 859402 )
        Trying to get average computer users to understand "Encrypted" vs. "Authenticated" would be the biggest problem.
  • Opportunistic encryption was the original goal of the FreeS/WAN [freeswan.org] project. It was not realised, and the eventual forks (OpenSwan and strongSwan) are now aimed more at running IPSEC tunnels.

    • Re:OE is a nice idea (Score:4, Informative)

      by whoever57 ( 658626 ) on Tuesday October 07, 2008 @08:41PM (#25294671) Journal

      Opportunistic encryption was the original goal of the FreeS/WAN [freeswan.org] project. It was not realised,

      That depends on your definition of "not realized". Before the FreeS/WAN project was abandoned, opportunistic encryption had been implemented and was in use. Adoption was probably quite small, but it existed.

  • surveillance (Score:5, Insightful)

    by TheSHAD0W ( 258774 ) on Tuesday October 07, 2008 @08:08PM (#25294411) Homepage

    The video starts out saying that increased encryption is needed thanks in part to warrantless government surveillance. It then goes on to describe a system that assumes no MITM attacks can exist. The fact is, however, that governments are entirely capable of performing MITM attacks, as can telecommunications companies; and if it becomes popular we may see more techniques that allow individuals to perform MITM attacks. While this algorithm has significant merit, care needs to be taken to avoid a false sense of security.

    • Re: (Score:3, Insightful)

      by syzler ( 748241 )
      ... care needs to be taken to avoid a false sense of security.

      Which is why the video states that SSL/TLS should be the only user visible transport security and their third goal is to have no visual indications and no alternative URL schemes.
    • Re:surveillance (Score:5, Insightful)

      by Free the Cowards ( 1280296 ) on Tuesday October 07, 2008 @08:25PM (#25294555)

      It does not "assume no MITM attacks can exist". It deliberately does not protect against them. This is not the same thing, as one is a position of ignorance whereas the other is an intentional choice not to defend against that threat.

      In practical terms, MITM is considerably harder than simply listening in. Wide-scale surveillance such as what caused the big recent flap with FISA and the NSA simply can't perform MITM attacks. Protecting against pure eavesdropping while remaining open to MITM attacks is useful, it's just not a 100% solution. As long as it doesn't sell itself as one (and I see no indication that it is) then there's absolutely no problem with that.

  • NOT GOOGLE (Score:5, Informative)

    by Zutroi_Zatatakowsky ( 513851 ) on Tuesday October 07, 2008 @08:16PM (#25294475) Homepage Journal

    This is not GOOGLE's Obfuscated TCP, this is a small one-man project HOSTED on Google Code.

    That guy's gonna get tons of traffic for what's maybe a good idea but not endorsed or supported by Google.

  • by Culture20 ( 968837 ) on Tuesday October 07, 2008 @08:18PM (#25294485)

    By providing a less secure, but computationally and administratively cheaper, method of encryption, we might be able to increase the depressingly small fraction of encrypted traffic on the Internet.

    If the encryption is computationally cheaper, then the decryption is computationally cheaper. I'd rather people know that what they're sending over the 'net can be sniffed than have them think that because example.com uses Rot13 encryption their traffic is private.
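    The Rot13 jab is hyperbole, but it illustrates the worry about trivially reversible encodings: in Python the whole "attack" is one standard-library call, and the cipher is its own inverse.

```python
import codecs

# Rot13 is its own inverse, so any eavesdropper "decrypts" it with the
# exact same call the sender used -- obfuscation, not privacy.
scrambled = codecs.encode("my password is hunter2", "rot_13")
print(scrambled)                           # zl cnffjbeq vf uhagre2
print(codecs.decode(scrambled, "rot_13"))  # my password is hunter2
```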

    • Re: (Score:2, Informative)

      by mrbene ( 1380531 )

      If the encryption is computationally cheaper, then the decryption is computationally cheaper. I'd rather people know that what they're sending over the 'net can be sniffed than have them think that because example.com uses Rot13 encryption their traffic is private.

      A few key points:

      • Obfuscation != Encryption
      • Cost to Encode (encrypt/compress/obfuscate) does not directly relate to the cost to decode. The relationship differs per algorithm used.
      • Cost to de-obfuscate without proper keys can be significantly more than cost to de-obfuscate with proper keys.
  • by mikenap ( 1258526 ) on Tuesday October 07, 2008 @08:26PM (#25294565)

    So, basically we have the same concept as SSL, except instead of trusting the CA signature on the certificate, we trust DNS.

    Forging a CA signature on a certificate would be a BIG DEAL.
    Forging a DNS entry, especially with ISP cooperation (read: government snooping), is DEAD SIMPLE.

    So we replace real security with, well, a CPU hog that's only a smidge better than running everything in the clear. It only keeps out the MOST casual, lazy, and uninterested snooper.

    • Re: (Score:3, Insightful)

      by Kjella ( 173770 )

      Forging a CA signature on a certificate would be a BIG DEAL.
      Forging a DNS entry, especially with ISP cooperation(read government snooping), is DEAD SIMPLE.

      True, if it required you to forge a real CA's signature. The whole point of self-signed certs is that there is no CA - you're not impersonating anyone other than the website, and it has zero effect on anything else. I can make such a certificate up in a terminal in the time it takes me to type it. I don't know if it would be a bigger deal legally, but technically it is equally dead simple. And if it were legally a big deal, I'm sure they could get some retroactive immunity for it.

    • by mzs ( 595629 )

      Also you need either A) a CNAME (rejig your whole site) or B) a hacked DNS resolver (HA! I bet only an eighth of ISP DNS servers even handle more than the common records correctly as is).

      Basically what is outlined is if you own example.com, you make a web site http://www.example.com/ [example.com] and then have:

      www.example.com. 30 IN CNAME 0000000abcdef.abcdef.example.com.
      0000000abcdef.abcdef.example.com. 30 IN A 1.2.3.4

      The hex encodes a key and a TCP port. It would be trivial, as you said, for the ISP to deliver instead:

      www.examp
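To make the advert idea above concrete, here is a toy decoder for such a record. The layout assumed here (first four hex digits a TCP port, the rest a public key) is made up for illustration; the real ObsTCP advert format differs.

```python
# Toy decoder for an advert record like the one above. This layout --
# first four hex digits a TCP port, the rest a public key -- is made up
# for illustration; the real ObsTCP advert format is different.
def decode_advert(cname):
    label = cname.split(".")[0]        # hex blob in the first DNS label
    port = int(label[:4], 16)          # 2 bytes: TCP port
    key = bytes.fromhex(label[4:])     # remainder: raw public-key bytes
    return port, key

port, key = decode_advert("01bbaabbccdd.abcdef.example.com.")
print(port, key.hex())                 # 443 aabbccdd
```

Nothing authenticates the record itself, which is exactly the objection: whoever answers the DNS query controls the key.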

  • by this great guy ( 922511 ) on Tuesday October 07, 2008 @09:00PM (#25294819)

    I read the technical details [google.com] and they talk about an advert being encoded in the CNAME, to distribute a curve25519 key and a port number. But they could have done something much simpler using technology that already exists: encode the 160-bit SHA-1 fingerprint of an X.509 certificate and a port number in the CNAME (only 32 chars needed in base32 for the fingerprint). Then connect to this port using HTTPS and simply verify that the certificate matches the fingerprint! Advantages:

    • This technique works using standard TLS/SSL technology; no need to reinvent a poor man's TLS protocol like they did with Salsa20/8, Curve25519, Poly1305, etc.
    • It is just as secure as their "Obfuscated TCP" (both techniques rely on the DNS records not having been tampered with).
    • The SHA-1 fingerprint being encoded in the CNAME allows the browser to verify its validity without prompting the end user with scary dialog boxes (and it also works with self-signed certs).
    • And as a bonus, the fact that a standard HTTPS server is running allows end users who really want true security to explicitly connect to the HTTPS URL by themselves (without relying on the CNAME trick). Doing this would make the browser verify the validity of the cert in the normal way (scary dialog boxes... or not, if the cert's CA is trusted).
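A rough sketch of that alternative. The layout (a 2-byte port followed by the 20-byte SHA-1 digest) and the function names are my own illustration, not a published format; note the combined payload actually needs 36 base32 characters rather than 32, still comfortably under DNS's 63-byte label limit.

```python
import base64
import hashlib

# Sketch of the fingerprint-in-CNAME alternative. The layout (2-byte port
# followed by the 20-byte SHA-1 digest) and the function names are my own
# illustration, not a published format.
def make_label(der_cert, port):
    payload = port.to_bytes(2, "big") + hashlib.sha1(der_cert).digest()
    # 22 bytes -> 36 base32 chars once padding is stripped; DNS labels are
    # case-insensitive, so base32 (unlike base64) survives the trip.
    return base64.b32encode(payload).decode("ascii").rstrip("=").lower()

def verify(der_cert, port, label):
    return make_label(der_cert, port) == label

label = make_label(b"fake-der-cert", 443)
print(len(label))                              # 36
print(verify(b"fake-der-cert", 443, label))    # True
print(verify(b"some-other-cert", 443, label))  # False
```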
  • Ust-jay se-uay ode-cay.
  • by embsysdev ( 719482 ) on Tuesday October 07, 2008 @09:18PM (#25294937)

    Seriously, not trying to troll, but wasn't this problem solved more completely by IPSEC back in 2000? Why hasn't IPSEC been adopted more? Why should this solution fare any better?

    • by Anonymous Coward on Tuesday October 07, 2008 @11:25PM (#25295857)

      IPSec prevents its own widespread adoption in two ways:

      1) IPSec is very much about authenticating who you are talking to on the other end. If two nodes connect for the first time, with no previous knowledge, they have no way to authenticate that the other is who it says it is. To IPSec this is a failure - you'll get no SAs, and most implementations will drop traffic that would've required that SA.

      2) The classic key exchange issue:
        a) You can authenticate a session using certs, but now you have the same problems as signed SSL certs, except that every host participating needs to have one and know about all the other nodes' CAs.
        b) You can instead opt to use a pre-shared key, but now you have to pre-share the key. This is fine when you are looking to secure specific traffic to a specific node.

      For uses that aren't affected by these downsides, IPSec is a hugely popular technology. VPNs between a branch and central office as well as remote access for roaming users are very popular. Of course in these cases you can very meaningfully authenticate the other end and the key exchange isn't a problem.

    • by Frogbert ( 589961 ) <frogbertNO@SPAMgmail.com> on Wednesday October 08, 2008 @12:12AM (#25296131)

      Ever set up IPSEC? Yeah, that's why.

  • by FilterMapReduce ( 1296509 ) on Tuesday October 07, 2008 @09:24PM (#25294977)

    we might be able to increase the depressingly small fraction of encrypted traffic on the Internet.

    I agree that this would indeed be a good thing for several reasons. An encrypted message in a medium where most everything is plaintext may attract the attention of attackers or, worse, be seen as "suspicious" by a government. (Certainly the U.S. and the PATRIOT Act spring to mind, but let's not forget the truly oppressive governments such as China's and any number of third-world dictatorships.) If online privacy via encryption comes to be a right that everyone gets used to enjoying—much like how almost all mail is sent in sealed envelopes, whether or not its contents are sensitive—then it will be that much harder, for technical and/or social reasons, for an authority to take away. If Obfuscated TCP is even a token step in that direction (and it seems to be a bit better than that), then it is probably a good thing overall.

    Someone earlier today on Slashdot was plugging Cory Doctorow's Little Brother, and I'm going to follow that example (you can read it for free!) [craphound.com], since part of it advances the same idea.

  • by PhilK ( 20847 ) on Tuesday October 07, 2008 @09:32PM (#25295029) Homepage

    It's interesting how the same idea gets "reinvented" over and over. Opportunistic encryption using advertised DH public numbers is just such a thing.

    ObsTCP is just a reinvention of SKIP.

    See here via the Wayback Machine since the concept is long dead and buried.
    http://web.archive.org/web/20021129230049/http://www.skip-vpn.org/ [archive.org]

  • by BlackSabbath ( 118110 ) on Tuesday October 07, 2008 @09:55PM (#25295217)

    "By providing a less secure, but computationally and administratively cheaper, method of encryption, we might be able to..." give people a false sense of security.

    Remember, weak encryption can be worse than none as Mary Queen of Scots found out at the cost of her life (see http://www.nikon.com/about/feelnikon/light/chap04/sec01.htm [nikon.com]).

  • This is equivalent, in a security sense, to SSL/TLS with self-signed certs. There's no protection against man-in-the-middle attacks, but there is some protection against passive eavesdroppers.

    Unfortunately, man-in-the-middle attacks are most likely in the same situation that allows easy passive eavesdropping - public WiFi access points.

    Also, they've chosen completely different cryptographic standards than SSL/TLS uses, with different key handling. Until many qualified people have gone over exactly ho

  • kdawson, you are a king (queen?) among idiots. And the submitter, agl42, is a bastard prince(ss).

    How did it escape both of you that Google Code != Google?

  • by Marrow ( 195242 ) on Tuesday October 07, 2008 @10:30PM (#25295479)

    If the objective is to prevent large-scale keyword sniffing, then you can obfuscate it with compression.

    The support is already built into the browsers.

    Yes, I know it's not encryption, but if everything were gzip'd, it would cost the listener more to decode it. Plus gzip'd data would not invite any added attention.
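A quick sanity check of the suggestion, showing that gzip does shrink and scramble the payload on the wire but is reversed by any listener with a single call:

```python
import gzip

# The suggestion above: gzip the traffic. It does scramble the bytes on
# the wire, but any listener reverses it with one call -- compression is
# obfuscation, not encryption.
plaintext = b"GET /secret-page HTTP/1.1\r\nHost: example.com\r\n\r\n" * 10
wire = gzip.compress(plaintext)

print(len(wire) < len(plaintext))          # True: it really compressed
print(gzip.decompress(wire) == plaintext)  # True: trivially reversed
```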

  • by Twillerror ( 536681 ) on Tuesday October 07, 2008 @10:57PM (#25295665) Homepage Journal

    Just make SSL certs cheaper and get rid of all the multiple-server licensing and crap.

    Have the damn thing run by a non-profit organization and cut the cost.
