
Kaminsky DNS Bug Claimed Fixed By 1-Character Patch

An anonymous reader writes "According to a thread on the bind-users mailing list, there is nothing inherent in the DNS protocol that would cause the massive vulnerability discussed at length here and elsewhere. As it turns out, it appears to be a simple off-by-one error in BIND, which favors new NS records over cached ones (even if the cached TTL is not yet expired). The patch changes this in favor of still-valid cached records, removing the attacker's ability to successfully poison the cache outside the small window of opportunity afforded by an expiring TTL, which is the way things used to be before the Kaminsky debacle. Source port randomization is nice, but removing the root cause of the attack's effectiveness is better."
Update: 08/29 20:11 GMT by KD: Dan Kaminsky sent this note: "What Gabriel suggests is interesting and was considered, but a) doesn't work and b) creates fatal reliability issues. I've responded in a post here."
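
If that description is accurate, the behavioral change amounts to roughly the following (a minimal sketch in C with made-up names; the actual proposed patch is reportedly a one-character change inside BIND, not this code):

#include <stddef.h>
#include <time.h>

struct ns_rrset {
    time_t expires;   /* absolute expiry computed from the record's TTL */
    /* ... the NS names themselves ... */
};

/* Only let an NS RRset arriving in a response replace a cached one once
   the cached copy's TTL has actually run out. */
int accept_new_ns(const struct ns_rrset *cached, time_t now)
{
    if (cached == NULL)
        return 1;                   /* nothing cached: take the new data */
    return now >= cached->expires;  /* otherwise wait out the old TTL */
}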
  • by neonux ( 1000992 ) on Friday August 29, 2008 @08:17AM (#24792727) Homepage

    If this is indeed not a protocol flaw, how come the same vulnerability is present on other DNS servers as well?

    Do they all use the same code from BIND for this particular 'feature'?

    • by larry bagina ( 561269 ) on Friday August 29, 2008 @08:25AM (#24792805) Journal
      There is a small window of time when a malicious record could be cached by ANY DNS server (port randomization makes guessing the correct port to hit much harder). BIND (and only BIND) has/had a huge fucking bug that opened that window of time.
      • But what if I really want to change a record that hasn't expired yet? Will the fix force me to live with the old values until they expire?

        • Re: (Score:2, Insightful)

          by tsalmark ( 1265778 )
          I don't think hacking every DNS server has ever been the solution of choice. Maybe update your record and serial number, then reload the authoritative server if needed. And the ones you don't control, well wait.
          • by mi ( 197448 )

            And the ones you don't control, well wait.

            This seems like a functionality loss to me: if I've made a mistake, I can't correct it until the TTL expires, and I could before the "bug" was fixed...

        • by MikeFM ( 12491 )

          So don't keep your TTL so high? On the other hand, we changed from a T1 to a fiber line the other day and all our IP addresses changed. It's been a nightmare trying to get them changed properly. I had them preemptively set to a low TTL for a few days to give things time to clear from cache, but we're still getting weird fluctuations. Some servers are still showing the old addresses more than a week later. Some are alternating between the new and old address. Some have just decided to give addresses that are

      • by OriginalArlen ( 726444 ) on Friday August 29, 2008 @02:21PM (#24798323)
    • Re: (Score:3, Interesting)

      by Anonymous Coward

      BIND is effectively the reference implementation, so probably, or they made the same mistake at any rate. That's not surprising; this is a very subtle bug that requires knowledge of the Kaminsky attack to recognise. It's worth pointing out, however, that djbdns had source-port randomisation from the start as a defensive measure, and thus remained very resistant to this attack.
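
      Roughly what source-port randomisation looks like on the sending side (a sketch only, with made-up names; not djbdns's or BIND's actual code, and a real resolver would use a seeded CSPRNG rather than rand()):

      #include <arpa/inet.h>
      #include <errno.h>
      #include <netinet/in.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/socket.h>
      #include <unistd.h>

      /* Bind an outgoing UDP query socket to a randomly chosen source port,
         so an off-path attacker has to guess the port as well as the
         transaction ID. */
      int open_query_socket(void)
      {
          int fd = socket(AF_INET, SOCK_DGRAM, 0);
          if (fd < 0)
              return -1;
          for (int tries = 0; tries < 16; tries++) {
              struct sockaddr_in sa;
              memset(&sa, 0, sizeof sa);
              sa.sin_family = AF_INET;
              sa.sin_addr.s_addr = htonl(INADDR_ANY);
              sa.sin_port = htons(1024 + (rand() % (65536 - 1024)));
              if (bind(fd, (struct sockaddr *)&sa, sizeof sa) == 0)
                  return fd;            /* queries now leave from that port */
              if (errno != EADDRINUSE)
                  break;                /* real error: give up */
          }
          close(fd);
          return -1;
      }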

    • by gclef ( 96311 ) on Friday August 29, 2008 @08:39AM (#24792951)

      No, this solution is basically breaking the DNS functionality that Kaminsky exploited. By design, the referral records were supposed to overwrite the cache (which some organizations do use). This patch breaks that.

      • by B'Trey ( 111263 ) on Friday August 29, 2008 @09:22AM (#24793367)

        That seems accurate to me. After all, what happens when a DNS record gets updated? With the new behavior, you won't see the change until your cached record expires. That may be preferable to a gaping security hole which lets attackers poison your cache, but I don't think it's accurate to call the issue a bug in BIND. I believe BIND was working as intended to allow updated records to overwrite older ones.

        • by More Trouble ( 211162 ) on Friday August 29, 2008 @09:52AM (#24793805)

          After all, what happens when a DNS record gets updated? With the new behavior, you won't see the change until your cached record expires.

          You don't see that update until the TTL expires. That's why there's a TTL. If you're planning to make a change, lower the TTL well in advance to allow the new TTL to propagate.

          • Re: (Score:2, Interesting)

            by Anonymous Coward

            +1 Insightful

            This is what the DNS books I've read say happens. When I first started playing with DNS I was always surprised and could never explain why my updated records became active before the old record's TTL expired. Sounds like a bug that's needed fixing for a long time now.

          • by mxs ( 42717 )

            You don't see that update until the TTL expires. That's why there's a TTL. If you're planning to make a change, lower the TTL well in advance to allow the new TTL to propagate.

            Impeccable logic.

            What if the change is unplanned? Say, an outage-prompted one?

            • These are the TTLs of the name servers for a domain. One hopes there's more than one. Further, one hopes they are widely separated. This is totally DNS "standard operating procedure". We're not talking about a highly coherent transaction processing system, here. Availability is the goal. Achieved admirably, given the age and design history of DNS...

        • ... what happens when a DNS record gets updated? With the new behavior, you won't see the change until your cached record expires. ... I believe BIND was working as intended to allow updated records to overwrite older ones.

          That's what I thought too, at first. It seems right to replace cached records with newer ones. And perhaps that's why the code is written as it is.

          But then I got to thinking: if the server has a cached record already, it wouldn't have asked for an update. So why is the new information

    • by mrsbrisby ( 60242 ) on Friday August 29, 2008 @09:51AM (#24793785) Homepage

      how come the same vulnerability is present on other DNS servers as well?

      It isn't. djbdns [cr.yp.to] for example, is not affected. I don't think maradns is affected either.

      Do they all use the same code from BIND for this particular 'feature'?

      Very likely.

      BIND has a very permissive license; most other DNS servers exist to facilitate lock-in with a particular vendor's stack, or to push some enhanced feature set, so they'd be considered foolish if they didn't copy BIND's source code where they could.

      If this is indeed not a protocol flaw,

      Well, I'm not sure it is unfair to call this a protocol flaw. Maybe a design flaw.

      BIND has resisted port randomization because "the RFC said so"- never mind that they wrote the RFC, and that no clients bother checking. Because it stopped spoofing attacks ten years ago, and it stops them today, most DNS servers- including those derived from BIND- do this.

      BIND also uses these very complicated credibility rules for determining if it can override existing cache-knowledge. This can presumably save one or two queries per dot, but surely it would be safer to only cache answers to questions that were asked. That is, by the way, what djbdns does.

      Most DNS spoofing attacks could also be solved by solving blind spoofing attacks in general. There's a little reluctance to do so, because it would make things like DNSSEC largely obsolete for their intended audience. As a result, we see a lot of chest-thumping and foot-stomping temper tantrums. You can tell when you're about to get one because they start by saying "If we had just switched to DNSSEC by now, we wouldn't be having this problem."

      Of course, since BGP peers now route-filter everywhere on the internet (they didn't use to!), mandatory source filtering is a completely possible and realistic way to stop this and other similar problems...
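
      Coming back to the "only cache answers to questions that were asked" point above, the idea boils down to something like this (a rough sketch with hypothetical structures, not any server's real internals):

      #include <strings.h>

      struct dns_question { const char *name; int qtype; int qclass; };
      struct dns_rr       { const char *name; int rrtype; int rrclass; };

      /* Refuse to cache records smuggled into the authority/additional
         sections that don't match the question that was actually asked. */
      int cacheable(const struct dns_question *q, const struct dns_rr *rr)
      {
          return strcasecmp(q->name, rr->name) == 0
              && q->qclass == rr->rrclass
              && (q->qtype == rr->rrtype || rr->rrtype == 5 /* CNAME */);
      }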

      • Re: (Score:3, Informative)

        No, you are mistaken; djbdns, MaraDNS (and all other conformant DNS servers, including the patched BIND) are vulnerable [milw0rm.org] to the "ten hour attack", i.e., the same attack run for ten hours rather than ten seconds. It just takes a bit longer to work because it has to hit the right port number out of the 65K range.
        • I'm sorry, that link indicates that there's another bug in BIND. It doesn't say djbdns and MaraDNS are vulnerable to this attack; merely that source port randomization is not enough to withstand a 1 Gbps sustained attack.

          Please point to a link that says that attack works on djbdns and MaraDNS. I cannot find anything that supports your statement.

          I haven't looked at MaraDNS, which is why I said I don't know about it.

          I don't see how that attack would work against djbdns because djbdns doesn't accept answers

      • by adri ( 173121 )

        I'm not sure what world you're in but BGP peers do not route-filter everywhere.

        • I'm not sure what world you're in but BGP peers do not route-filter everywhere.

          Prove it.

          Publish a route for 207.68.160.190/32 to any AS besides AS8075. Your publishing must be visible from AS21863. I will monitor and log ALL BGP announcements I receive for 207.68.160.190/32 for the next two weeks.

          If you can do this, I would love to know what your ISP is.

          Every ISP I've dealt with route-filters their peers to prevent you from doing that. It would be a small thing for them to source filter packets as well, and

    • Re: (Score:3, Informative)

      by wkcole ( 644783 )

      If this is indeed not a protocol flaw, how come the same vulnerability is present on other DNS servers as well?

      Do they all use the same code from BIND for this particular 'feature'?

      No.

      The /. description of that thread is inaccurate, and the behavior of BIND in breaking trustworthiness ties (which are set up by RFC 2181) in favor of apparently newer records is not a bug, but rather a behavior which has been operationally useful and normal for most of the history of DNS. If you look closely at Dan Kaminsky's discussion of how he came to recognize the vulnerability, it becomes clear that he was using that normal behavior and put together all of the pieces of the attack from the fact that

  • by Anonymous Coward on Friday August 29, 2008 @08:20AM (#24792745)

    Ok! Ok! I must have, I must have put a decimal point in the wrong place
    or something. Shit. I always do that. I always mess up some mundane
    detail.

    • Re: (Score:2, Funny)

      by oodaloop ( 1229816 )
      This is not a MUNDANE DETAIL, Anonymous Coward!
    • Re: (Score:1, Redundant)

      by dzfoo ( 772245 )

      Fine. Now, chill out, here's my fix:

      /*
      * Rev. #1138 - 08/29/2008
      * DESC: Need to add one to not brake the intarnetS!
      */
      int idx; // idx = CURRENT_INDEX + 1;
      idx = CURRENT_INDEX + 2; // do nnot braek!!!!1one
      return _cache[idx];

      And don't do it again!

      -dZ.

  • This is not the first time a huge security vulnerability was fixed by changing a single character!

    From what I remember, the SSL vulnerability we saw a while ago was caused by a single excess comment mark (well, maybe two if it was a double forward slash

    • This is not the first time a huge security vulnerability was fixed by changing a single character!

      Yeah. I once wrote a web application and in one of the auth checks, I put a '==' where I wanted a '!='. Fortunately in that case one of the testers caught it and it never actually went to production, but sometimes we all make silly mistakes.
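
      The kind of one-character slip being described looks roughly like this (hypothetical code, not the actual application):

      #include <string.h>

      enum { ALLOW, DENY };

      /* '==' where '!=' was intended inverts the test: valid users get
         rejected and everyone else sails through. */
      int check_auth(const char *token, const char *expected)
      {
          if (strcmp(token, expected) == 0)   /* bug: should have been != 0 */
              return DENY;
          return ALLOW;
      }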

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        And if you used unit tests like a real developer you would have caught that simple error.

        • by Tridus ( 79566 )

          Of course, because software that uses unit tests has never, ever, had a bug in it.

          • Of course it has, that doesn't mean that unit testing doesn't reduce the odds of you having a bug - especially one like that, where the test would almost certainly have flagged it up.

            Your argument strikes me as being not unlike saying that locking your door doesn't stop someone breaking the front window, so why not just leave it open.

            • by Tridus ( 79566 )

              And AC's argument was... what, exactly? "A bug got caught by the testers, therefore you don't use unit tests and are not a real developer!"

              If someone is going to start with a completely inane statement, they're going to draw inane replies.

              • by WNight ( 23683 )

                No, it is because that bug is one that unit testing would have caught. And if he was writing specs and implementing those, he'd have caught that error within seconds of making it.

    • Re: (Score:3, Interesting)

      by jellomizer ( 103300 )

      There are a lot of bugs fixed by changing 1 character... It is a very common occurrence. Either you comment out a feature that isn't needed but is causing a problem, or change a default variable or a constant to a different value.
      E.g. original code (just making it up on the fly) of a possible security hole bug:

      char x[9];
      // x is populated by a char* variable
      for (register int i=0; i <= 9; i++) {
      //doing some stuff on x[i]
      }

      Now anyone with any C experience will realize that we have the possib

      • Re: (Score:3, Informative)

        by sqlrob ( 173498 )

        better change:

        Remove the = (use < instead of <=) *AND* don't use a hardcoded number; change to sizeof
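
        Something along these lines is presumably meant (a sketch, not the poster's exact fix):

        #include <stdio.h>

        int main(void)
        {
            char x[9] = "12345678";              /* 8 characters plus the NUL */

            /* '<' instead of '<=', and sizeof instead of a hard-coded 9, so
               the loop can never run past the end of the array */
            for (size_t i = 0; i < sizeof x; i++)
                printf("x[%zu] = %c\n", i, x[i] ? x[i] : '.');

            return 0;
        }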

        • ...change to sizeof...

          Which will return the size of a char * (pointer to character) on your system (typically 4 bytes), _not_ the length of the array. There is no way in C to get the length of an array after it's been allocated. Arrays are 'stupid' chunks of memory, not objects with properties.

          • Re: (Score:3, Informative)

            by locofungus ( 179280 )

            ..change to sizeof...

            Which will return the size of a char * (pointer to character) on your system (typically 4 bytes), _not_ the length of the array. There is no way in C to get the length of an array after it's been allocated. Arrays are 'stupid' chunks of memory, not objects with properties.

            Huh?


            char x[9];
            printf("%d\n", (int)sizeof x);

            will print 9 exactly as required.

            There are a handful of cases where arrays do not decay to pointers. This is one of them.

            Tim.

          • by sqlrob ( 173498 )

            I've used that construct (and in many cases sizeof(x)/sizeof(x[0]) for getting count of elements that aren't byte sized ) for more than a decade over many compilers without issues. You need to relearn C/C++.

            • Actually, I'd just as soon forget it. C was a very thin layer over assembly language, and C++ is an absolute abomination containing the worst of procedural and object-oriented languages (new and malloc in the same language? Oh my!) What I was describing happens after you've used your variable as a parameter to a function. As soon as you're no longer in the same block as the declaration, you lose the original size data. That's a fairly dangerous thing, IMHO.
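
              Both halves of this argument can be seen in a few lines (illustrative only):

              #include <stdio.h>

              void f(char x[9])   /* as a parameter, "char x[9]" is really "char *x" */
              {
                  printf("inside f: %zu\n", sizeof x);  /* size of a pointer, e.g. 8 */
              }

              int main(void)
              {
                  char x[9];
                  printf("in scope:  %zu\n", sizeof x); /* 9: no decay in the declaring scope */
                  f(x);
                  return 0;
              }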
    • It's not the first time that any huge defects have been caused by a single character. Quoting Code Complete [wikipedia.org], which in turn was referencing an article from the early 80's (Kill that Code, Gerald Weinberg): "...three of the most expensive software errors of all time - costing $1.8 billion, $900 million and $245 million - involved the change of a single character in a previously correct program." So on the one hand, it's easy to sort of snicker that they were so close to having a correct implementation, but just m
  • by beakerMeep ( 716990 ) on Friday August 29, 2008 @08:22AM (#24792781)
    (and I think I speak for everyone), this is how I feel about it:

    !
  • It does not fix the case where the attacker first tries to poison aaaaaa.example.com, aaaaab.example.com, ... , fc4dss.example.com until he succeeds with the glue-record being the real evil. In that case there is no previous cache-entry to rely on.

    • by logru ( 909550 )
      I would assume that is not a huge issue as af9ad.example.com is not a host that people would go to or anything would point to for that matter.
      • It is indeed an issue: the injected record is trusted because it originates from example.com, but the evil bits are in the glue record, which goes ahead and hijacks the www.example.com record. Without really knowing BIND, I assume the patch does not work in that case.

        • Re:No, it doesn't! (Score:4, Informative)

          by quantumplacet ( 1195335 ) on Friday August 29, 2008 @09:45AM (#24793685)

          yes, the whole point of this patch is to fix this problem. previously, if i successfully passed a bad record for safdsaus.example.com i could send glue records for www.example.com that would overwrite your cached record for www.example.com no matter what. with this patch i can only pass bad glue records if the ttl on your cached www.example.com record has expired. this gives an attacker a very narrow window during which they could mount this type of attack, likely making it not worth the effort.

        • I thought that bit had to be set if present - NO?

          The bit field is laid out as follows:

          0
          +-+
          |E|
          +-+

          Terminology

          The keywords MUST, MUST NOT, REQUIRE

      • It is an issue in the case of <img src="http://af9ad.example.com/whatever"> and related things that can load without active user intervention...

  • I call bullshit (Score:4, Insightful)

    by Anonymous Coward on Friday August 29, 2008 @08:28AM (#24792835)

    Updating a cache with new data when the source data changes before the cached copy expires is a bug?

    The "root cause" is being able to fake being the correct source of the data being overwritten, NOT the ability to refresh a cached copy.

    And AFAICT, the ability to falsify data sources remains a FUNDAMENTAL flaw in DNS.

  • Allegedly... (Score:4, Interesting)

    by drmofe ( 523606 ) on Friday August 29, 2008 @08:29AM (#24792853)
    ....Paul Vixie is no longer allowed to commit code to BIND. Can this vulnerability be traced to code that he DID write originally?
  • by js_sebastian ( 946118 ) on Friday August 29, 2008 @08:30AM (#24792857)

    From one of the mails of the guy who made this proposal:

    What's the downside to my patch? I guess we are now holding an authoritative server to the promise not to change the NS record for the duration of the TTL, which is kinda what the TTL is for in the first place :)

    I wonder if this is an issue. Otherwise it seems Kaminsky may really have missed the point.

    • Re: (Score:3, Insightful)

      by Ed Avis ( 5917 )

      It does sound like an issue. Suppose an authoritative server responds to a query with a TTL of five minutes. That means it must not change the record during the next five minutes. After one minute the domain owner makes some change. Okay, there will be a lag of four minutes before it fully takes effect. Fine. But what if a second request is received a minute after the change? The authoritative server has to know that it has a change queued up to take effect in three minutes' time, and serve a reply w

      • by Anonymous Coward on Friday August 29, 2008 @09:15AM (#24793289)

        That's not how caches work. There is no guarantee that the authoritative server won't give out different responses until the TTL expires. The TTL just means that the resolver may cache the value for that duration. If the value changes during that time, the effect is just like when the server does DNS round-robin load balancing: This resolver uses a different value than other resolvers. Whether that is a problem depends on the validity of the resource, not on a server side decision to stick with an answer or to change it before the old value's TTL. When you change DNS records, you always keep the old resource up until you see only a low amount of requests to the old resource. There are way too many caches which ignore the server-defined TTL and use their own minimum TTLs.

        • Re: (Score:3, Informative)

          by berashith ( 222128 )

          thank you!
          A TTL is not a promise to never change the record. A true authoritative source can change and push new information. A TTL is an amount of time that a cached record can live before the holder of the cache needs to check back for new information, which is usually not changed.

          • Re: (Score:3, Informative)

            by mrsbrisby ( 60242 )

            No, a true authority cannot push new information.

            They would have to know all of the caches in order to push the changes to them, and since caches can cache for caches, it's unrealistic that a normal site could know this, and unlikely that a specially designed site would.

            The cache should not cache answers to questions it didn't ask, and that includes new authorities for the domain.

            • ok, yes, somewhat. I wasn't as clear as I should have been. All caches are not known, as you said, and DNS isn't a push system. But there are cases of things like stealth masters that do keep track of all of their slaves, and these can tell the slaves to come look for new information. Not allowing updates to the slaves because of TTLs would create an unneeded time gap in propagation.

              • Re: (Score:3, Informative)

                by mrsbrisby ( 60242 )

                But there are cases of things like stealth masters that do keep track of all of their slaves, and these can tell the slaves to come look for new information. Not allowing updates to the slaves because of TTLs would create an unneeded time gap in propagation.

                That's a terrible reason to allow such a large security hole.

                You should have to list all of your ignore-ttl-from hosts, and source-filter communication with those hosts, before being allowed to do this.

                That said, you could also use some other communica

    • by photon317 ( 208409 ) on Friday August 29, 2008 @09:41AM (#24793643)

      But that's not what the TTL is for in the first place. The TTL was not intended to mean "I will hold this record for this duration, ignoring any other updates in the meantime". It was meant to mean, "I will not under any circumstances remember this record any longer than this duration". The difference has practical implications for DNS operations.
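
      In other words (a sketch with made-up names, not any resolver's real code), the TTL only bounds how long a cached entry may keep being used:

      #include <time.h>

      /* A cache may keep serving the entry only while it is younger than the
         TTL; nothing here forbids the authoritative data from changing sooner. */
      int cache_entry_usable(time_t inserted_at, unsigned ttl, time_t now)
      {
          return now < inserted_at + (time_t)ttl;
      }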

    • Re: (Score:3, Informative)

      by QuietLagoon ( 813062 )
      I guess we are now holding an authoritative server to the promise not to change the NS record for the duration of the TTL, which is kinda what the TTL is for in the first place :)
      .

      TTL does specify the Time To Live for a cached record before it is no longer considered to be valid.

      TTL does not specify the length of time that changes are not allowed.

  • by Anonymous Coward on Friday August 29, 2008 @08:30AM (#24792861)

    It's 570MB.

  • by ccguy ( 1116865 ) * on Friday August 29, 2008 @08:34AM (#24792907) Homepage
    I'm so bored that I actually read the post in the mailing list and all the replies in the thread.

    Just to be at the same time informative and to the point, the 7 replies so far have been about as positive as the ones to this patch [iu.edu] on the linux kernel mailing list a few years ago.
    • Re: (Score:2, Interesting)

      by Nigel Stepp ( 446 )

      Ha! I feel like that is the same guy who wrote a text editor that runs in ring 0 or something and halts multitasking.

      Anyone remember that guy? There was a huge usenet fight about it on some linux newsgroup in the 90s.

      Anyway, he had exactly the same reasoning style.

    • by alx5000 ( 896642 )

      Just to be at the same time informative and to the point, the 7 replies so far have been about as positive as the ones to this patch [iu.edu] on the linux kernel mailing list a few years ago.

      OMG, is that guy for real?? I mean, I still haven't read through all of the replies, but... trying to un-UNIX Linux? Either he is one of the biggest morons to ever roam the earth, or he deserves a special place in the Trolls hall of fame...

      • by ccguy ( 1116865 ) *

        OMG, is that guy for real?? I mean, I still haven't read through all of the replies, but... trying to un-UNIX Linux? Either he is one of the biggest morons to ever roam the earth, or he deserves a special place in the Trolls hall of fame...

        Don't know, but after getting the old Al Viro treatment he hid under a rock, and I hear he's still there.

    • I love this post in the thread: http://www.ussg.iu.edu/hypermail/linux/kernel/0104.3/0107.html [iu.edu] that says "And UNIX on a phone is pure overkill." We sure have come a long way in this regard since 2001 :-)

    • This guy was obviously trying to write the beginning stages of Ubuntu. Thankfully he settled on forking Debian.
  • I'm not an expert so please ignore (or better... explain) this if I'm way off, but Google advises a TTL of 1 week when using their GoogleApps mail server. So... what if I really really want to change my MX after 2 days? :-/

  • by hanshotfirst ( 851936 ) on Friday August 29, 2008 @08:39AM (#24792947)
    (Source unknown)

    A manufacturer had a problem with one of the older machines on their line. It shut down the line and held up production, costing many thousands of dollars in lost production. Since it was older equipment it was hard to find someone knowledgeable in repairing the machine, and nobody on-site knew what the problem could be. They found a technician with knowledge of the machine and hired him to come in and fix it.

    When the technician arrived on site he listened to the client's description of the problem, examined the machine, opened a panel, and turned a single screw. He restarted the machine and it was back to full function. The line was up and running and the manufacturer was happy.
    A week later the manufacturer received a bill for services: $1000. They called the technician and demanded an explanation - after all, they reasoned, he had only turned one screw to fix the problem. He agreed to re-bill, this time with itemized charges. The next bill contained two lines.

    Turning the screw... $1
    Knowing which screw to turn... $999
    • This is a derivative (or descendant) of a story that I read about a small town in Vermont. They had their own power generation facility for the town and it went on the fritz, plunging the small hamlet into darkness. The only person who knew anything about the machinery had long since retired, but the townspeople were desperate, so they gave the old guy a call. He came out and took a look at the equipment. He then took a small hammer from his old toolbox and gently tapped on a certain point of the aged
  • Is this going to break dynamic DNS services like redirectme.net?

  • NOT a fix... (Score:5, Insightful)

    by nweaver ( 113078 ) on Friday August 29, 2008 @09:04AM (#24793141) Homepage

    This is NOT a fix to the root problem of the Kaminsky vulnerability.

    The root problem is the cases where authority/additional/unasked-answers are accepted, and there are plenty of variants this "patch" does not affect. E.g.:

    Answer:
    whatever.foo.com CNAME www.foo.com
    www.foo.com A 66.6.66.6
    Authority:
    (usual goop).

    If www.foo.com is not yet cached (and often even if it is), this will set it as a Kaminsky variant.

    • Re:NOT a fix... (Score:4, Informative)

      by blueg3 ( 192743 ) on Friday August 29, 2008 @09:16AM (#24793313)

      In cases where www.foo.com is not cached, DNS resolvers are vulnerable to the much more trivial attack of simply forging the answer www.foo.com IN A 66.6.66.6. Of course, they have to hope to guess the proper transaction ID in the first query, because if they fail, the proper answer will be cached.

      Poisoning an uncached name is fairly easy and doesn't require Kaminsky's trick. Kaminsky's trick relies on caching the answers to questions you didn't ask, rather than not caching them or using the cached answer over the uncached answer. I think you called this the "elephant in the room" at Usenix Security, even. :-)

      • by nweaver ( 113078 )

        Well, Kaminsky doesn't save packets over a normal race.

        Its point is "Race until win" rather than "race once per TTL".

        Thus in a normal race for www.foo.com, you can run the race once per TTL, and then have to wait if you fail.

        With the Kaminsky race, you can keep running it until you succeed.

        This is why Kaminsky attacks are all about glue policy.

        • by blueg3 ( 192743 )

          Exactly -- and this patch causes you to prefer cached data over data supplied in glue records, yes?

  • Meh. (Score:4, Funny)

    by Rob T Firefly ( 844560 ) on Friday August 29, 2008 @09:14AM (#24793271) Homepage Journal
    Ever since seeing this [wikipedia.org] I don't trust that one character, Patch.
  • by certain death ( 947081 ) on Friday August 29, 2008 @09:42AM (#24793659)
    They stopped random UDP port use, and now use a static pool of UDP ports for queries. Note that they have come out with a P2 release that addresses a completely different issue that the first patch caused. I was able to essentially cause a DoS on a BIND server that was patched with P1 by sending more than 10,000 queries to the system. It ran out of usable UDP ports and puked. The same issue exists in the Windows patch, and especially on Windows 2003 SBS. There was way more than one line of code, let alone a single character, changed.
  • I didn't have time to read everyone's post here, but the one thing I think everyone is forgetting or overlooking is why the cache was created/used in the first place. 1st: could you imagine the amount of extra traffic on the internet and core devices if every lookup had to traverse the net? It was designed because there is no need to change your DNS RRs constantly; they should be static, or failing that only need to be updated once in a while. 2nd: what happens if my DNS servers go down (yes assuming both of them
  • Kaminsky's rebuttal (Score:5, Informative)

    by buelba ( 701300 ) on Friday August 29, 2008 @11:58AM (#24795919)
    Kaminsky has an interesting rebuttal here [doxpara.com].
  • i know of forms of poison that do not involve the authority section at all.

    i know of servers with no BIND code inside that were poisoned by kaminsky.

    i know of valid configuration changes that depend on NS RRset replacement.

    is this a troll of some kind? as slashdot lead articles go, this one shows unusually high disinformation and ignorance.

  • A detailed explanation of the Kaminsky exploit can be found here: http://thefrozenfire.com/data/dnspoisoning.html [thefrozenfire.com]

  • Without even bothering to RTFA, I'm going to hazard that the patch was a '!' right before a '=' ;)

  • This is Dan Kaminsky, finder of the bug.

    No, this isn't a reasonable fix. Nice try, but no. See http://www.doxpara.com/?p=1234 for details.

    (Incide

    • by PRMan ( 959735 )

      Really?

      You basically said:

      It doesn't stop all attacks.

      No, but it's better than nothing.

      Google Analytics and Facebook currently have unreasonably low TTL values that this wouldn't help.

      Implement the fix anyway. If they don't want to get pwned, they can raise their TTL. And without the patch, they get pwned even easier.

      Google and Microsoft have unreasonably high 4 day TTLs and if something happened they could be down for 4 days!

      If they get pwned by not having the patch, they could be down for half a day eve

      • Really?

        You basically said:

        It doesn't stop all attacks.

        No, but it's better than nothing.

        I think maybe what he's saying is that the fix isn't good enough. It's not very elegant as it breaks long-standing functionality in DNS, *and* it doesn't fully address the issue. Perhaps the gist of what he's saying is "let's think on this one a little harder before committing lame fixes and thereby shooting ourselves in the collective foot... again". (Of course, feel free to correct me if I'm wrong, Dan!)

  • I'd just like to say that during all of this the one man that deserves the community's recognition is DJB. I've been following this on and off since the exploit erupted and throughout all of it the one thing that has been missing is significant, heartfelt praise for DJB. He's often maligned by the open source community for releasing his code to the public domain but the fact remains the guy produces and ships kick ass code. Qmail and Tinydns absolutely rock and I think it's a great shame the man doesn't ge

  • Over 10 years ago it was realised that eventually random transaction ids would not be good enough. That someone would come up with a novel way to attack the DNS.

    DNSSEC was the counter measure that was designed to beat this attack scenario as well as lots of other threats. It still is the only real solution to this problem.

    It defeats both on path and off path attacks.

    It is an enabler of other security measures.

    e.g. SMTP security depends, in part, on getting valid answers out of the DNS. You need t
