
Delayed Password Disclosure

ET_Fleshy writes "Markus Jakobsson has an interesting article discussing a promising new security protocol called "Delayed Password Disclosure" that can validate a computer's authenticity before exchanging passwords/keys. While nothing is ever truly secure, this seems to show promise in protecting users from a wide variety of stealth attacks (pdf) used today, specifically man in the middle (pdf) attacks."
  • by SeanTobin ( 138474 ) * <byrdhuntr@hot[ ]l.com ['mai' in gap]> on Tuesday February 22, 2005 @04:54PM (#11749371)
    Forgive me for not reading my latest issue of Cryptographer Weekly, but how on earth is this any different from RSA fingerprints? It looks like the "envelope" and "carbon paper" are just elements of a pre-shared key anyway.

    If you know the fingerprint of the host you are connecting to, you are more or less immune from man-in-the-middle attacks. If you have never communicated with the host before, nothing is going to stop a man-in-the-middle - especially if you have to magically share locations of "carbon paper" without the man-in-the-middle knowing about it.
    • I almost had a joke...but no.
    • It's different because it's less secure than a public key signing system. Oh wait, that's not good.

      It's different because it uses a carbon paper template. I suppose that to authenticate an e-mail, you'd put the thing on your computer screen. What if it's not the right size? Oops, that's not good either. False negative authentication makes it impossible to trust. That's not good either.

      Maybe this is yet another invention just waiting to be patented, since only unoriginal inventions can get a patent these d
    • by GreyPoopon ( 411036 ) <gpoopon@gm[ ].com ['ail' in gap]> on Tuesday February 22, 2005 @05:54PM (#11749968)
      If you know the fingerprint of the host you are connecting to, you are more or less immune from man-in-the-middle attacks. If you have never communicated with the host before, nothing is going to stop a man-in-the-middle - especially if you have to magically share locations of "carbon paper" without the man-in-the-middle knowing about it.

      It actually provides a technique for verifying the authenticity of a host with whom your computer has never communicated. The host, presumably, knows your password (or a salted-hash representation). The host either obtained this via connection with another computer at some time in the past, or by some information that you provided when signing up for whatever the service is (think bank). The host uses what it knows about your password to send you specially encoded information that, in combination with what *you* also know about your password, can be used to verify that at the very least you aren't giving your password to a system that doesn't already have that information. You can also think of this method as a decent way to validate RSA fingerprints by a system that hasn't already been seeded with pre-shared keys.
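
      To make that concrete, here is a minimal sketch (Python, invented names, NOT the protocol from TFA) of a server proving knowledge of a password-derived verifier before the client ever sends the password:

      import hashlib, hmac, os

      def stored_verifier(password: str, salt: bytes) -> bytes:
          # What the bank keeps on file: a salted hash, not the password itself.
          return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

      def server_proof(verifier: bytes, client_nonce: bytes) -> bytes:
          # The server demonstrates knowledge of the verifier by keying a MAC with it.
          return hmac.new(verifier, client_nonce, hashlib.sha256).digest()

      def client_checks_server(password: str, salt: bytes, client_nonce: bytes,
                               proof_from_server: bytes) -> bool:
          # The client recomputes the same MAC from the password it is about to use
          # and only proceeds if the server's answer matches.
          expected = server_proof(stored_verifier(password, salt), client_nonce)
          return hmac.compare_digest(expected, proof_from_server)

      salt = b"per-user-salt"              # assumed to have been set at enrollment
      nonce = os.urandom(16)
      proof = server_proof(stored_verifier("hunter2", salt), nonce)  # bank side
      print(client_checks_server("hunter2", salt, nonce, proof))     # True

      Note that this naive version still lets a sniffer who captures the nonce and the proof grind passwords offline, which is exactly the weakness people point out further down.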

      • Ah, so it's no different than Kerberos? (Disclaimer: I haven't RTFA.)
      • And why wouldn't I just encrypt my traffic with the password both the server and I know, thus eliminating man-in-the-middle attacks and establishing trust, without ever exchanging keys?
      • I'm suspicious of this just because they lump so many different scenarios into one category.

        #1. Zombies

        #2. Man in the middle

        #3. Traffic analysis

        #4. "email-cluster-bomb attack"

        #5. "incorrectly update routing tables"

        And so forth. Of what possible use would authentication be with a bunch of zombies? If a zombie is an example of a "stealth" attack, then what would be an example of a non-stealth attack?

        Anyway, if Alice and Bob (might as well use the common ID's) both have access to the password, why not s
      • mmm.... salted hash.
      • Use SRP, B-SPEKE, or any of the other hundreds of variations of secure password authentication that have been invented.
      • First of all, if you can securely communicate with the server at some point (which is required for this to work - the bank needs your hash already) then you can simply use one-time pads with rotating keys from that point forth... much more secure. Second of all, I don't know if his analogy to envelopes was just vague or if it really did accurately describe the protocol. If it did describe it accurately, then what exactly stops me from standing in between Alice and her bank? Alice hands me her signed enve
      • SRP [stanford.edu] would do this (although normally with SRP you don't want the host to ever have your password, for an initial contact this could work - then you change your password securely to something else that the host doesn't know). With SRP, if the host can validate your password, that validates the host as well. Since the real host doesn't get any information about the password when you authenticate, neither does a hostile system that hijacks the authentication.
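
        For the curious, here is a condensed sketch of the SRP idea in Python. It is simplified (toy group, ad-hoc hashing, not RFC 5054 exact) but shows the property described above: the server stores only a verifier, yet a fake server that lacks it cannot produce a valid proof.

        import hashlib, os

        N = 2**127 - 1    # toy prime; a real deployment uses a large safe-prime group
        g = 3

        def H(*args) -> int:
            data = b"".join(str(a).encode() for a in args)
            return int.from_bytes(hashlib.sha256(data).digest(), "big")

        # --- enrollment: client computes a verifier, server stores (salt, v) only ---
        password = "correct horse"
        salt = os.urandom(16).hex()
        x = H(salt, password)
        v = pow(g, x, N)

        # --- login ---
        a = int.from_bytes(os.urandom(16), "big")        # client's ephemeral secret
        A = pow(g, a, N)
        b = int.from_bytes(os.urandom(16), "big")        # server's ephemeral secret
        k = H(N, g)
        B = (k * v + pow(g, b, N)) % N
        u = H(A, B)

        S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)  # needs the password
        S_server = pow(A * pow(v, u, N) % N, b, N)                # needs only the verifier

        K_client, K_server = H(S_client), H(S_server)
        M1 = H(A, B, K_client)             # client proves knowledge first...
        assert M1 == H(A, B, K_server)     # ...server checks it
        M2 = H(A, M1, K_server)            # ...then the server proves itself back
        assert M2 == H(A, M1, K_client)    # a fake server without v fails here
        print("mutual authentication succeeded")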

  • Bigger problem is... (Score:5, Interesting)

    by fembots ( 753724 ) on Tuesday February 22, 2005 @04:55PM (#11749394) Homepage
    There are enough people who will unsuspectingly give away their plain-text passwords over the phone or the internet.

    My bank (and probably many others) will block an account after three consecutive failed authentication attempts, so any guesswork is going to be hard for the bad guys.
    • My bank (and probably many others) will block an account after three consecutive failed authentication attempts

      This is a big hole for denial of service. Try purposely logging into the bank CEO's account with a bad password, and see how quickly the policy is changed.

      • Not if they block an IP rather than a login name, which is the smart thing to do (and the way it's been implemented where I've seen it).

      • And just how would you guess/know anyone's usernames, especially without also knowing their passwords?

        Personally, my bank usernames look like a chunk taken out of some top-secret military encoded spy message--pretty much like the password that goes with it... I think it's a good practice to obfuscate usernames as much as passwords. It's about as likely that a stream of spaceborne gamma rays would trigger my account as it is that an actual person or computer would.
        • Actually, it is a bad idea, as customers are more likely to save those names in a file and copy & paste them if they are totally unreadable.

          • I thought the parent was referring to the username being encrypted? Just like the password.

            I mean, if his password looked like some huge chunk of encoded text... how on earth would anybody remember it?? I presume his bank does this client side, then transports it via https?

            This way the bank's users only have to remember stuff that is possible for Joe Bloggs to remember... but a level of heightened security is still achieved.
            • Actually, I write my user names and passwords down. Then I put them in my depleted uranium and lead lined concrete and hardened steel reinforced vault with biometric and timed locks. Then I kick the pair of radioactive lava spewing mutant doberman pinschers that guard the door to wake them up.

              It's hell trying to figure out what my balance is.
        • Personally, my bank usernames look like a chunk taken out of some top-secret military encoded spy message--pretty much like the password that goes with it

          So what happens when you want someone to send you money? I thought part of the definition of "username" was that it is the public token for others who want to interact with you to identify you.

    • You gotta love the banks that utilize a person's social, followed by a four-digit PIN, and unlimited tries.
  • by aendeuryu ( 844048 ) on Tuesday February 22, 2005 @04:56PM (#11749400)
    It'd be better if the font weren't so small, though...
  • by TechyImmigrant ( 175943 ) * on Tuesday February 22, 2005 @04:57PM (#11749415) Homepage Journal
    Mutual authentication is nothing new. There exist many mutual authentication schemes that are resistant to man in the middle attacks and also ensure liveness of the exchanges.

    The one described here looks to be a simple shared-secret method. In many situations, certificate-based methods are used in order to avoid the need to securely distribute a shared secret ahead of time.

    For a shared secret based mutual auth, why not do the normal thing and pass random numbers and their hashes back and forth, mixed in with the challenge-response sequences needed to establish an authenticated identity, a shared session secret and liveness? Read various EAP drafts or 802.11i or recent 802.16e drafts for real world examples of how to do this. The details necessarily change with the context.

    These methods have the benefit of lots of analysis by the crypto community. This delayed password disclosure scheme doesn't seem to have the same benefit.
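
    Something like this toy sketch (Python, illustrating the nonce-and-keyed-hash pattern only, not any particular EAP method):

    import hashlib, hmac, os

    shared_secret = b"pre-shared secret known to Alice and Bob"

    def prf(key: bytes, *parts: bytes) -> bytes:
        return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

    # 1. Alice -> Bob: a fresh nonce (gives Alice liveness)
    nonce_a = os.urandom(16)
    # 2. Bob -> Alice: his own nonce plus a MAC binding both nonces and his role
    nonce_b = os.urandom(16)
    bob_resp = prf(shared_secret, b"bob", nonce_a, nonce_b)
    # 3. Alice verifies Bob, then answers with a MAC over the nonces in her role
    assert hmac.compare_digest(bob_resp, prf(shared_secret, b"bob", nonce_a, nonce_b))
    alice_resp = prf(shared_secret, b"alice", nonce_a, nonce_b)
    # 4. Bob verifies Alice the same way, then both derive a fresh session key
    assert hmac.compare_digest(alice_resp, prf(shared_secret, b"alice", nonce_a, nonce_b))
    session_key = prf(shared_secret, b"session", nonce_a, nonce_b)
    print("mutual auth complete, session key:", session_key.hex())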
  • Sharing keys? (Score:5, Insightful)

    by nizo ( 81281 ) * on Tuesday February 22, 2005 @04:57PM (#11749418) Homepage Journal
    Thus spake the article:
    Note that use of encryption software, such as SSH, does not address this problem, since the attacker simply can replace the public keys of the two parties with public keys for which it knows the secret keys. This results in the two parties sharing keys with the attacker, as opposed to with each other; as a consequence, the attacker will be able to read (and even modify) all traffic before re-encrypting it and forwarding it.

    And this is why you always share public keys via some other secure means (USB drive, CD, floppy), at least in an ideal world. The article talks about this in regard to someone transmitting data to their bank; however, if I am not mistaken, SSL (not mentioned in the article) already takes care of this kind of attack. Somehow I doubt any joe user is using SSH to authenticate with their bank :-)

    • Re:Sharing keys? (Score:5, Interesting)

      by slavemowgli ( 585321 ) * on Tuesday February 22, 2005 @05:04PM (#11749484) Homepage
      SSL (ideally) gives you the ability to do that, at least. I had one professor (giving network engineering / security classes) who said that at times, he actually called banks etc. whose websites he'd used and asked them to confirm the SSL certificate fingerprints etc. It always confused the hell out of them, but it worked. :)
      • Bank employees know what SSL is?
      • The only problem with this is that it only works if you remove all the CAs from your web-browser and only import certificates for the sites you visit.

        Otherwise, unless you check that fingerprint every time you visit the bank, you won't know whether the page has been replaced by another site that has a Verisign-signed SSL cert. In theory Verisign doesn't just hand those out to anybody, but if you trusted them you wouldn't need to verify all those fingerprints in the first place...
  • by Mr. Capris ( 839522 ) <tobeycapris@nOSpAm.gmail.com> on Tuesday February 22, 2005 @04:58PM (#11749427)
    Me, I hate PDFs... so here are HTML versions, courtesy of Google: man in the middle attack [google.com]
    stealth attacks [google.com]
  • I wonder if the type of people that came up with 'blink' are now writing new crypto protocols.

    I think I'll just withdraw my deposit in gold bricks and sleep on it.
  • by Sheetrock ( 152993 ) on Tuesday February 22, 2005 @05:01PM (#11749456) Homepage Journal
    The only part I can't figure out is how they're going to send the carbon paper and envelopes across the Internet. I can't find the protocol for that.
  • Breakdown (Score:3, Interesting)

    by MasTRE ( 588396 ) on Tuesday February 22, 2005 @05:03PM (#11749474)
    This basically verifies that the party you are conversing with knows your password, or something about it (e.g. has a salted hash of your password), _before_ you input your password. One could argue that this is more secure than (poorly-implemented) channel security via PKI, as a man-in-the-middle would not have access to the account hash table unless the target system was compromised.

    Interesting, but there are probably a million such things you can do to further tighten security.
  • by smug_lisp_weenie ( 824771 ) * <cbarski.4503440@bloglines.com> on Tuesday February 22, 2005 @05:05PM (#11749493) Homepage
    I'm no cryptography expert, but the secret positions of the carbon paper need to go into "an envelope only Alice can open". Nowhere in this essay is it explained how this "envelope" is created technologically or how the recipient can interact with it, making the analogy pretty useless (unless I'm missing something). Also, it says that SSH doesn't help with man-in-the-middle attacks, but a third-party signing agency solves that problem, from what I understand. This "envelope" sounds suspiciously like how quantum cryptography works. Is this just an explanation of "quantum cryptography" without mentioning "quantum cryptography"? I'm confused...
    • by Sheetrock ( 152993 ) on Tuesday February 22, 2005 @05:23PM (#11749679) Homepage Journal
      I have not seen the implementation, so I am only speculating.

      I believe that, in this case, Alice could generate the contents of said envelope with her public key, then send both the envelope and the key to the remote host. That host would respond with its positions, encrypt those with Alice's public key as well, and return the whole bunch to Alice who then decrypts everything with her private key.

      There's something missing in my speculation -- why does Alice need to send anything but her public key?

      • The key to the whole thing is the carbon paper only shows the numbers written over the password.

        Yes, this does sound similar to quantum encryption; however, it is possible with non-quantum technology. "An envelope only Alice can open" is, as the above said, encrypted with Alice's public key. However, no one is looking at how the carbon paper works in the computer world.

        Actually, if you think of, say, an array of as many longs as the password has bits, and for each password bit you know to look at the two Most
  • by TechyImmigrant ( 175943 ) * on Tuesday February 22, 2005 @05:08PM (#11749527) Homepage Journal
    What the world does not need is another generalized mutual authentication method. These are used to place a veneer of security on a generally insecure thing.

    E.g., credit card transactions over the internet. These are protected by SSL/TLS. This is somewhat removed from the credit card transaction itself, protecting the link rather than the transaction. So you log onto VendorX's web site and use certs with SSL/TLS to protect the link. You feel comforted by the little lock icon in the corner of your screen and proceed to hand VendorX all the details needed to drain arbitrary amounts of money from your credit card.

    Instead... protect the transaction directly, with something like a secure credit card transaction protocol. VendorX doesn't need your credit card details; he needs your money. The security protocols should run between you and the vendor to establish a transaction and the vendor's identity, between you and your credit card company to authorize a payment against the transaction to VendorX, and between the credit card company and VendorX to transfer the payment.

    VendorX gets the money, not a blank, signed cheque.

    Repeat exercise for all activities you need to secure, applying appropriate measures for the situation. Leave SSL/TLS for securing the link, not the application.
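
    A very rough sketch of that three-leg flow (Python, with HMACs standing in for real signatures and every name invented for illustration):

    import hashlib, hmac, json, os

    def sign(key: bytes, obj: dict) -> str:
        blob = json.dumps(obj, sort_keys=True).encode()
        return hmac.new(key, blob, hashlib.sha256).hexdigest()

    customer_issuer_key = os.urandom(32)   # shared between the customer and her issuer
    issuer_vendor_key = os.urandom(32)     # shared between the issuer and VendorX

    # 1. Customer and VendorX agree on a transaction (over their own channel).
    txn = {"vendor": "VendorX", "amount": "20.00", "currency": "USD",
           "txn_id": os.urandom(8).hex()}

    # 2. Customer -> issuer: authorize exactly this transaction.
    #    No card details ever go to the vendor.
    auth_request = {"txn": txn, "account": "000001"}
    auth_request["customer_sig"] = sign(customer_issuer_key, auth_request["txn"])

    # 3. Issuer verifies the customer's authorization, then issues a payment
    #    voucher the vendor can check and redeem.
    assert auth_request["customer_sig"] == sign(customer_issuer_key, txn)
    voucher = {"txn": txn, "status": "approved"}
    voucher["issuer_sig"] = sign(issuer_vendor_key, voucher["txn"])

    # 4. VendorX verifies the voucher and ships the goods; it got the money,
    #    not a blank signed cheque.
    assert voucher["issuer_sig"] == sign(issuer_vendor_key, txn)
    print("payment of", txn["amount"], "authorized for", txn["vendor"])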
    • by Wesley Felter ( 138342 ) <wesley@felter.org> on Tuesday February 22, 2005 @05:24PM (#11749692) Homepage
      Instead.. Protect the transaction directly, with something like a secure credit card transaction protocol.

      That was called SET. It failed because it was expensive and credit card fraud is already pretty low.
    • by ajs ( 35943 )
      Of course, what you suggest could be done without having to introduce special transaction elements between the customer and credit card vendor. You can simply use encryption here.

      encrypt(Kvendor, "Authorize $20 to [vendor] from account 000001 at bank foo")
      encrypt(Kvendor, encrypt(Kbankfoo, sign(Kself, "Authorize $20 to [vendor] from account 000001 at bank foo")))

      SSL can easily serve all of these purposes. Of course, you would not send exactly the same message to both (this makes a known-plaintext attack p

      • I assume that then the vendor would send:

        encrypt(Kbankfoo, sign(Kself, "Authorize $20 to [vendor] from account 000001 at bank foo"))

        to the bank...

        But how does the vendor know that you didn't send him:
        encrypt(Kbankfoo, sign(Kself, "Authorize $2 to [vendor] from account 000001 at bank foo"))

        ?

        And why would you not just send:

        encrypt(Kvendor, sign(Kself, "Authorize $20 to [vendor] from account 000001 at bank foo"))

        And let the vendor send :
        encrypt(Kbankfoo, sign(Kself, "Authorize $20 to [vendor] from accou
        • But how does the vendor know that you didn't send him: encrypt(Kbankfoo, sign(Kself, "Authorize $2 to [vendor] from account 000001 at bank foo"))

          Cryptographic cut-and-choose protocols.

          Basically, you send 100 sealed (encrypted), but unsigned, money orders to the vendor. The vendor picks 99 of them and says "Open these". You do (you provide the decryption keys), he sees that they're all for $200, so he has a pretty good assurance that the 100th is also for $200. You sign the 100th.

          You're sort of sign
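
          A toy sketch of that cut-and-choose idea (Python, illustration only):

          import hashlib, os, random

          def commit(amount: str, nonce: bytes) -> str:
              return hashlib.sha256(amount.encode() + nonce).hexdigest()

          n = 100
          amount = "$200"
          nonces = [os.urandom(16) for _ in range(n)]
          commitments = [commit(amount, r) for r in nonces]   # sent to the vendor

          keep = random.randrange(n)                          # vendor's choice
          opened = [(i, amount, nonces[i]) for i in range(n) if i != keep]

          # Vendor checks the 99 opened envelopes really commit to the stated amount.
          for i, amt, r in opened:
              assert commitments[i] == commit(amt, r), "cheating detected"

          # If none were faked, a cheater had at most a 1-in-n chance of slipping a
          # single bogus envelope past the vendor, so the unopened one is accepted.
          print(f"vendor accepts commitment #{keep} as a {amount} money order")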

          • There's no need to do that, and it presents an opportunity for abuse (gives you a chance of defrauding the vendor, though a small one).

            This is "authorization", not "payment". The vendor will still submit a normal credit card transaction with the amount, and if the authroization does not match the amount, then the normal exception handling processes involved in insufficient funds transpire. None of that has to be new technology, and there's no reason to throw out working code.

            The point is that without users
        • You could do it either way, but the vendor is going to get confirmation of the transaction from the bank. If the bank comes back and says, "OK, self told me to give you $2," then the vendor is going to report that you had insufficient funds and fail the transaction.

          The reason to hide the information you're sending to the bank is that you might include other details in that request that the vendor should not see. If not, then it does not matter which path you choose.
  • I don't get it (Score:3, Interesting)

    by lampajoo ( 841845 ) on Tuesday February 22, 2005 @05:09PM (#11749534)
    Could someone explain to me how you implement carbon paper, "magic envelopes" and invisible ink inside of a computer? Seriously...

    Also, it seems like you could come up with an algorithm to make password guesses based upon the numbers that were returned...trying different values that add up to zero. Or would this take too long?
    • Re:I don't get it (Score:3, Interesting)

      by 0123456 ( 636235 )
      "Could someone explain to me how you implement carbon paper, "magic envelopes" and invisible ink inside of a computer?"

      It's a metaphor. As far as I can see, the bank would calculate a matrix of numbers and send that to Alice, who would use the bit-pattern from her password to find the correct numbers to use for the key.

      However, as has been pointed out, if the bank knows her password, it can simply send her the session key encrypted with her password, which will be impossible for the man-in-the-middle to c
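
      One plausible reading of that, as a toy sketch (this is my guess at the mechanism, not the paper's specification):

      import hashlib, secrets

      def password_bits(password: str, n: int):
          digest = hashlib.sha256(password.encode()).digest()
          return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n)]

      n = 32
      # The bank's "carbon paper": a fresh matrix of random numbers, two per row.
      matrix = [[secrets.randbelow(2**32) for _ in range(2)] for _ in range(n)]

      def derive_key(password: str, matrix):
          # Each password bit picks one entry of the corresponding row.
          picks = [row[bit] for row, bit in zip(matrix, password_bits(password, n))]
          return hashlib.sha256("".join(map(str, picks)).encode()).hexdigest()

      # Both sides compute the same key, but only if they know the password bits.
      print(derive_key("hunter2", matrix) == derive_key("hunter2", matrix))  # True
      print(derive_key("hunter2", matrix) == derive_key("letmein", matrix))  # False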
      • lol, yeah I got that it was a metaphor. So Alice sending the magic envelope with carbon paper in it to the bank was pointless? Why did he include all that junk about the envelope and the invisible ink then?
    • Re:I don't get it (Score:2, Insightful)

      by marcosdumay ( 620877 )
      Worse yet, as I understood it, the guy is trusting his magic envelopes to block the man-in-the-middle attacks. There is no other place where he verifies that there is nobody on the line (simply put, the man in the middle can receive the sent message, retransmit it, receive the password and retransmit it, as men in the middle do).
  • by Anonymous Coward on Tuesday February 22, 2005 @05:09PM (#11749535)
    X.509 Certificates have been known for ages. There's nothing to see here. Please move along.
  • Huge text (Score:1, Redundant)

    by hab136 ( 30884 )
    Holy crap that is some seriously large text!
  • The article describes a (new?) challenge-response authentication algorithm.
    • Has anybody bothered getting the actual paper from these people instead of the cutesy descriptions of envelopes, red/green ink, and carbon paper? The authors go out of their way to write a cutesy layperson's description of their work, but they've obfuscated the real math inside envelopes full of carbon paper (assuming that some of their readers are old enough to remember using carbon paper), instead of just giving us the math.

      So nobody technical can tell if they've really done anything new or interesting.

  • by dmiller ( 581 ) <djm@mindro[ ]rg ['t.o' in gap]> on Tuesday February 22, 2005 @05:19PM (#11749638) Homepage
    This is a little like the interlock protocol [ecn.ab.ca], without the public-key cryptography. But this instance has the serious disadvantage that the server side must know the user's unencrypted password (or equivalent) to play the game. That is a very bad thing - it has been empirically demonstrated that users will reuse their passwords, so any authentication database that keeps them in the clear is a high-value target for attackers.

    BTW, you are quite safe from MITM attacks when using SSH if you use ssh protocol 2 and public key authentication. The public key signature checks are bound to the results of the Diffie-Hellman key exchange that occurs at the start of the protocol. In the case of a MITM, these DH results will be different for the client->MITM and the MITM->server legs, so the real server will refuse to accept the signature that the client presented to the MITM and the authentication will fail.
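
    A rough illustration of that binding (Python, needs the third-party 'cryptography' package, and greatly simplified compared to the real SSH exchange):

    import hashlib, os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    user_key = Ed25519PrivateKey.generate()

    # Each leg runs its own Diffie-Hellman, so the two legs end up with
    # different shared secrets and therefore different session identifiers.
    session_id_client_to_mitm = hashlib.sha256(os.urandom(32)).digest()
    session_id_mitm_to_server = hashlib.sha256(os.urandom(32)).digest()

    # The client signs the session identifier from *its* key exchange;
    # this is what the MITM captures and tries to replay.
    signature = user_key.sign(session_id_client_to_mitm)

    try:
        # The real server verifies against the session identifier *it* negotiated.
        user_key.public_key().verify(signature, session_id_mitm_to_server)
        print("authentication accepted")
    except InvalidSignature:
        print("signature does not match this session -- replay fails, MITM detected")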
  • by Anonymous Coward
    This proposed scheme does nothing to prevent a man-in-the-middle attack. For example, if a person is trying to log in to a server, they would wait for the server to prove it knows their password before actually sending the password. But the man-in-the-middle attacker could obviously just start a login attempt at the real server to get the appropriate hash of the user's password. But since anyone can get this hash, where is the security?
  • by TechyImmigrant ( 175943 ) * on Tuesday February 22, 2005 @05:22PM (#11749671) Homepage Journal
    Mutual authentication sounds safe and warm. Alice knows Bob is at the other end; Bob knows Alice is at the other end.

    However, this is the situation after you have performed the mutual authentication, not before. In all protocols I have seen, this takes place in some order. In order for Alice to authenticate Bob's identity and the other way around, with both exchanges bound together (thus differentiating it from bilateral authentication), either Alice or Bob has to first reveal their identity so it can be authenticated. This includes the proposed scheme.

    This raises the question "Who goes first?" Usually the protocol forces this issue and leaves one side or the other in the disadvantageous position of identifying themselves first. This is analogous to the gatekeeper shouting "Halt! Who goes there?" to someone trying to enter. The person trying to enter is forced to go first and reveal themselves.

    I may not want to reveal my identity to anyone, especially when it comes to say, wandering around in public with a wireless device. All sorts of tracking mechanisms become possible.

    What we want is a "who goes first" protocol so I can enforce my own policy on revealing my identity. If someone wants to sell to me, they had better go first. If I'm trying to get through a door, the building owner can reasonably expect me to go first. There are plenty of situations where a network may want to reveal its identity only to people who are allowed to know it, and no one else.

    We already have the algorithms, but the protocols are stuck in the mud and prevent us from moving forward with security that offers more than what SSL gives us.
    • If I'm going through a door, to follow your actual logic, I expect the building to identify itself first.

      I identify myself by passing through said door.

      Wireless transponders on buildings, etc. should identify themselves before I decide to respond or not.

      An SSL handshake should begin with the server proving who it is before asking me to prove who I am, etc.
  • by StikyPad ( 445176 ) on Tuesday February 22, 2005 @05:23PM (#11749684) Homepage
    By then, it may be too late, as in the meantime, the attacker may collect and even modify information that was not intended for him.

    Damnit, Bones I, can't figure out how to, place commas in, my, sentences I know they, should go somewhere I'm. Just not sure where.
  • by Gollum ( 35049 ) on Tuesday February 22, 2005 @05:34PM (#11749777)
    ... solved this problem more than 6 years ago. And it does not require the password to be stored in clear text by the server. (Although, "with a little thought", according to the article, neither does this scheme. BAH! Proof is left as an exercise for the reader.)

    Stick with something that has been rigorously reviewed, and proven over a period of time. And something that can be explained simply, in terms of the actual technology, rather than resorting to pathetic analogies that do not explain anything!

    SRP [stanford.edu]
    • And yet for all its potential benefits, SRP was crippled by patent licensing that made people reluctant to implement it. What a shame.
    • Pity Wu went and broke it by whacking a minefield of patents around it. Now hardly anyone wants to touch it and those that are interested are scared [cmu.edu] off by the other [ietf.org] sharks [ietf.org] who are circling that pond. So another potentially interesting protocol becomes unusable for 20+ years because of simple greed.
  • The problem here is that a web page is not just data but also program (i.e. JavaScript).

    Alice could log in to the fake bank and not realize that instead of doing the magic password trick, she's sending her password in plain text. Why? Because at the moment, the password encryption is (putting SSL aside) implemented by JavaScript!

    To be safe, a key encryption algorithm would need established software running it (in this case, the web browser).

    This means:

    a) having a W3C-approved algorithm implemented in browsers, or
    b) having downloaded specific software from the bank (i.e. a bankOnline browser(TM) or something).
  • by Effugas ( 2378 ) * on Tuesday February 22, 2005 @05:53PM (#11749955) Homepage
    So I actually got this sent to me this morning, accompanied by some nice snarkiness about "known plaintext handouts".

    http://www.eurekalert.org/pub_releases/2005-02/aaft-ncs021405.php

    Hmm. It's basically Kerberos, except totally broken.

    So we don't actually know how this protocol works, but the description at the above link is vastly more coherent. (Anything with "magic envelope" and "this is a metaphor" really shouldn't be taken as a protocol specification.)

    ===
    CUSTOMER: Bank, I will send you some information that is encrypted. You can only decrypt it if you know my password. If you don't know the password, you could of course try all possible passwords (although that is a lot of work!), but you would never know from my message if you picked the right one. Once you have decrypted the message, I want you to send it to me. If it is correctly decrypted, I will know that you know my password already. Once I know that you know my password, I will send it to you so that you can verify that I also know it. Of course, if I am lying about my identity and don't know the password in the first place, then I will not learn anything about the password from your message, so it is safe in both directions.
    ===

    It's also wildly exploitable. Here's how:

    First of all, password brute forcing? A lot of work? Only if there's no way to execute an offline attack, i.e. one where you can run attempts as fast as your own computer can calculate them. What we need is an offline attack -- something that lets us try as many attempts as possible. The most important thing is verifiability -- we need to know when we've guessed the actual password.

    Can we possibly verify our guess? Well, Alice sends the bank some random data, which is dutifully returned to Eve. Eve sniffs this traffic, and now has a very simple task:

    Guess all possible passwords the bank could have used to decrypt the data. When the content from Alice, decrypted with the guess, equals what came back from the Bank, Eve has found the password.
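
    In code, Eve's loop is embarrassingly small. This is a toy demonstration with a strawman cipher of my own, since the real scheme is unspecified:

    import hashlib, os
    from itertools import cycle

    def toy_decrypt(password: str, data: bytes) -> bytes:
        # Strawman stand-in for "decrypt with the password": XOR with a
        # password-derived keystream. Symmetric, so it also "encrypts".
        keystream = cycle(hashlib.sha256(password.encode()).digest())
        return bytes(b ^ k for b, k in zip(data, keystream))

    # What Eve sniffed off the wire:
    alice_blob = os.urandom(24)                      # the random data Alice sent
    bank_reply = toy_decrypt("hunter2", alice_blob)  # the bank's echoed answer

    for guess in ["password", "letmein", "hunter2", "qwerty"]:
        if toy_decrypt(guess, alice_blob) == bank_reply:
            print("Eve recovered the password:", guess)
            break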

    But then there's Eve's friend Mallory, who thinks Eve isn't ambitious enough and wants to steal anyone's password at the bank, not just Alice's. Suppose Bob has angered her somehow. Mallory can't sniff Bob's traffic, but then, she doesn't actually need to. Mallory can simply blindly provide some arbitrary data to the bank. It's garbage going out, but even garbage will decrypt into something. Unless the bank specifically has users provide some known plaintext in the outgoing data, it's going to "decrypt" that noise, using the password, into more noise.

    Once again, outgoing data + bank password = incoming data. Mallory gets to do offline attacks too -- but against any user she wants.

    Of course, the bank *could* put some sort of verifier in the message that Alice sends to it. But then Eve has an even easier time guessing passwords, since she just tries random passwords until the verifier is unveiled. No need to sniff the traffic back from the Bank (which is actually significant -- it means Mallory could firewall off the bank and still successfully participate in the auth protocol, with no way for the bank to find out.)

    Anyway, long story short, broken. Really, really broken.

    --Dan
    • Hmm. It's basically Kerberos, except totally broken.

      My thoughts exactly - I was hoping to see more details on how it differs from Kerberos in a better way, but there's nothing (well, except "patent pending", which is hardly an improvement)

      (Anything with "magic envelope" and "this is a metaphor" really shouldn't be taken as a protocol specification.)

      Indeed. Maybe it was taken from the patent application? (can't tell, as there's nothing on uspto.gov so far)
    • Yeah, too bad the protocol in that link has absolutely nothing to do with the actual protocol, as described in the article.

      Which do you think is more likely, that some security researchers at IU are not at least as smart as you are, or that the description you got from that website, written by a probably clueless reporter, is just plain inaccurate?

      Jeremy
      • I've worked with a fair number of reporters; they're not coming up with something that complicated on their own. Hell, you're lucky if you can get like two sentences correctly quoted :) Only time you get that much technical writing in a row is if it's straight from the interview, and generally copied word for word from email.

        Now, you're right. I could have taken that metaphor seriously. I could have gone ahead and pointed out, gee, they're basically describing a protocol in which the server XOR's arbit
  • I'm a novice ssh user, and I'm a little unclear on man-in-the-middle attacks and whether this carbon-copy technique will benefit me. For example, my debian box got a new dhcp address after being powered off from a power outage; I ssh'd into it and PuTTY alerted me that it had a new signature (I assumed because of the new IP address). Will this technique benefit me by creating a carbon copy in addition to looking at the signature before I log in? It's probably my own fault because I never created and shared my own keys; I r
  • by elronxenu ( 117773 ) on Tuesday February 22, 2005 @06:28PM (#11750324) Homepage
    I don't know much about crypto but this paper strikes me as both original and insightful - the insightful parts are not original, and the original parts are not insightful.

    First of all, we already have protection in protocols such as SSH and SSL against man-in-the-middle attacks. Thus, the paper's whole reason for existence disappears.

    Secondly, the security of this "masking" technique depends upon the randomness of the numbers chosen by the server (and, by implication, any man-in-the-middle). I could send a packet containing all zeroes and it would be guaranteed to sum to zero after applying any mask at all. How does the receiver judge whether the numbers passed are sufficiently random?
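
    A tiny illustration of the all-zeros problem, as I read the scheme (toy integers, no modular arithmetic):

    import secrets

    mask_positions = [3, 7, 11, 20, 31]      # secret, derived from the password

    def passes_check(response, positions):
        return sum(response[i] for i in positions) == 0

    honest_response = [secrets.randbelow(100) for _ in range(32)]
    # Force the honest response to satisfy the check on the selected positions.
    honest_response[mask_positions[-1]] = -sum(honest_response[i] for i in mask_positions[:-1])

    all_zeros = [0] * 32                     # an attacker who knows nothing at all

    print(passes_check(honest_response, mask_positions))  # True
    print(passes_check(all_zeros, mask_positions))        # True -- the flaw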

    • Actually the masking thing probably lessens security a fair bit. It would be possible to iterate through just one of these masking packets to find potential passwords. If the password were English text you could apply letter-frequency analysis to quickly find candidates - words which "could" be the password because they fit the random data supplied.

      Then if you had another masking packet, you can check all your first words against the second masking packet and eliminate 99% of the incorrect words.

      Basically these mas

  • Before I RTFA'd, I thought this had links to the MS UndeadTAPI "PASSPORT for the beyond", i.e. a triggered email discloses your passwords in order to circumvent your family having to sue for disclosure. Next up in UndeadTAPI is auto-destruct pr0n (in XML).
  • I guess his Mom never told him it was rude to shout, or to use MS-Word for editing HTML.
  • by misleb ( 129952 ) on Tuesday February 22, 2005 @07:22PM (#11750875)
    Can anyone give an idea how often things like man-in-the-middle attacks actually happen? I know it is possible, but it seems quite unlikely that anyone would go through the trouble when there are so many easily hacked things out there, whether it is known exploits or just unencrypted links. The only way I can see it happening is if you were a target for a particular reason such as corporate espionage.

    Has anyone here at slashdot actually been the victim of a hack as sophisticated as man-in-the-middle on an otherwise encrypted link? I'm curious.

    -matthew
  • I see one obvious and major flaw in this approach. A 'man in the middle' can easily respond to the client (Alice) with all 0's. Regardless of the 'carbon paper' mask that is used, all 0's always add up to 0 (which will make the imposter appear valid).

    Alice seems pretty gullible.

    ps. I developed a little encryption scheme to be used between a hardware firewall and client a little while back (the hardware firebox is called a 'WazerBox'). It has since been altered but this paper [wazer.net] describes the basics.
  • IPSec does this just fine. Any other system that involves host fingerprints or (better yet) both client and server certificates will achieve this.


    Of course, you could always use S/Key or OPIE and have what is essentially a one-time password system.

  • That web page is a rambling, disorganized mess--the author should take a writing class.
  • If you read the other papers this professor has written, a lot of his stuff is about being anonymous. Look at his e-commerce/e-cash [indiana.edu] stuff and you will see. I get the impression that he has more in mind for this than just banks and their customers. I think he is looking for ideas on how you can trust someone you have never "met".

"I'm a mean green mother from outer space" -- Audrey II, The Little Shop of Horrors

Working...