Security Networking The Internet

Massive, Coordinated Patch To the DNS Released

tkrabec alerts us to a CERT advisory announcing a massive, multi-vendor DNS patch released today. Early this year, researcher Dan Kaminsky discovered a basic flaw in the DNS that could allow attackers to easily compromise any name server; it also affects clients. Kaminsky has been working in secret with a large group of vendors on a coordinated patch. Eighty-one vendors are listed in the CERT advisory (DOC). Here is the executive overview (PDF) of the CERT advisory — text reproduced at the link above. There's a podcast interview with Dan Kaminsky too. His site has a DNS checker tool on the top page. "The issue is extremely serious, and all name servers should be patched as soon as possible. Updates are also being released for a variety of other platforms since this is a problem with the DNS protocol itself, not a specific implementation. The good news is this is a really strange situation where the fix does not [immediately] reveal the vulnerability and reverse engineering isn't directly possible."
This discussion has been archived. No new comments can be posted.


  • Oh cool! (Score:5, Funny)

    by RockMFR ( 1022315 ) on Tuesday July 08, 2008 @02:18PM (#24104409)
    http://www.doxpara.com/ [doxpara.com]

    Your name server, at 65.24.7.3, appears vulnerable to DNS Cache Poisoning.

    Sweet!
  • by suso ( 153703 ) * on Tuesday July 08, 2008 @02:19PM (#24104425) Journal

    Here, everyone, install this patch on your Unix/Linux DNS servers that was conceived on the Microsoft campus.

    While, if true, one should fix it promptly, one should also be careful to verify that it is true.

    • Re: (Score:2, Informative)

      by Nos. ( 179609 )
      Except your Unix/Linux server is probably using BIND, and ISC has released a patch (and lots more information): http://www.isc.org/index.pl?/sw/bind/bind-security.php [isc.org]
    • by brouski ( 827510 )

      Are you seriously suggesting the possibility that this is a plot by Microsoft to somehow break or degrade Unix/Linux servers?

  • by simul ( 113898 ) * <slashdot@documentroot.com> on Tuesday July 08, 2008 @02:20PM (#24104459) Homepage

    I used to run a DNS hosting company. Fortunately, this error only affects caching resolvers, since it is yet another example of cache poisoning. There have been (and continue to be) hundreds of cache poisoning exploits over the years. This one is fairly technical and would require significant expertise to execute quickly enough (i.e., before everyone patches up) to cause harm. I don't know about you, but if someone started flooding my servers with thousands of response records in hopes of guessing a transaction ID, my iptables config would block them in a heartbeat.

    this is not the kind of security problem that should cause people's hearts to skip a beat. your average malware worm is much worse.

    dan has written an article on a javascript attack that can compromise a home router [google.com].... that's probably far worse - in terms of real damage (ie: bot creation, personal data stolen)

    in sum... run yum update.... then don't worry about it.

    • by morgan_greywolf ( 835522 ) * on Tuesday July 08, 2008 @02:28PM (#24104559) Homepage Journal

      dan has written an article on a javascript attack that can compromise a home router.... that's probably far worse - in terms of real damage (ie: bot creation, personal data stolen)

      And that's precisely why the first thing I do on a home router is to disable the caching nameserver and install DJBDNS on a Linux box instead. :)

    • if someone started flooding my servers with thousands of response records in hopes of guessing a transaction ID, my iptables config would block them in a heartbeat.

      Would you be kind enough to publish your iptables config that does that? Such a set of rules seems like it would be very useful.

    • by Effugas ( 2378 ) * on Tuesday July 08, 2008 @08:19PM (#24109311) Homepage

      [This is Dan Kaminsky]

      No, this attack is much worse than my home router exploits (which, admittedly, aren't getting fixed anytime soon). While it is indeed nice to have compromised firmware living somewhere on your LAN, being able to generically attack everyone using a given ISP is a much more valuable proposition -- especially when I don't need to worry about the pesky paranoid people changing their router passwords, or even using a router I haven't built a script to attack.

      I'm being very circumspect about implications. August 6th will be an interesting day.

      It's funny you mention the iptables auto-block. There's been a known attack here for years -- spoof the root servers attacking you, and...yeah.

      That being said, we agree on the ultimate fix...run yum update, and be done.

  • The good news is this is a really strange situation where the fix does not immediate[ly] reveal the vulnerability and reverse engineering isn't directly possible.

    So, uhh, why the secrecy in planning the fixes?

  • DJBDNS not affected. (Score:5, Informative)

    by morgan_greywolf ( 835522 ) * on Tuesday July 08, 2008 @02:22PM (#24104471) Homepage Journal

    Note that DJBDNS (and its derivatives) is not affected, since it uses randomized source ports when resolving.
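
    To illustrate what source port randomization means in practice, here is a minimal Python sketch (not DJBDNS code) that sends a single DNS query from a randomly chosen source port; the resolver address and query name are placeholders.

import secrets
import socket
import struct

def build_query(name):
    """Build a minimal DNS A-record query with a random 16-bit transaction ID."""
    txid = secrets.randbits(16)
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD set, one question
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Bind to a fresh random unprivileged port instead of reusing one fixed port.
sock.bind(("0.0.0.0", secrets.randbelow(64512) + 1024))
sock.sendto(build_query("example.com"), ("192.0.2.53", 53))  # placeholder resolver IP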

  • by COMON$ ( 806135 ) * on Tuesday July 08, 2008 @02:23PM (#24104485) Journal
    FTA The good news is this is a really strange situation where the fix does not immediate[ly] reveal the vulnerability and reverse engineering isn't directly possible.

    FTA Update: Dan just released a "DNS Checker" on his site Doxpara.com to see if you are vulnerable to the issue.

    in other news

    Sooooooo, I'm supposed to run a random file on my network to check an unknown DNS issue... this just reminds me all too much of those "download our program to fix all your antispyware issues" alerts.

    And finally the obligatory profit usage:

    1. Find a vulnerability

    2. Don't tell anyone what said vulnerability is.

    3. Release malware in the form of a "Patch" to "Fix" the issue exploiting thousands of servers.

    4. ???

    5. PROFIT!

    • by StreetStealth ( 980200 ) on Tuesday July 08, 2008 @02:28PM (#24104581) Journal

      Still, it's not exactly like you clicked a banner with a lame attempt at a bouncing, fake window telling you your DNS software was in immediate need of a fix and that this combination patch and shopping buddy would fix it.

    • Release malware in the form of a "Patch" to "Fix" the issue exploiting thousands of servers.

      Well, you have to trust your vendor at some point right? Trust enables us to run yum or apt-get without having to read every line of source code for each upgraded package. I suppose having an open-source vendor is an advantage if you don't trust your supplier. But if you don't trust them why are you using them?

      The fact that so many are doing this at once might be a clue that it's real.

      • by COMON$ ( 806135 ) *
        sure, but it only works with steps 1, 2, and the elusive 3, geesh. At least you usually get told what the patch is for.
    • Re:Sinisterness (Score:5, Informative)

      by Effugas ( 2378 ) * on Tuesday July 08, 2008 @08:23PM (#24109407) Homepage

      [This is Dan Kaminsky]

      Er, you know you use DNS to retrieve web pages, right?

      So I just watch how you retrieve my web page, and synthesize content based on the Port/TXID patterns you request my page with.

      No code. Just script. And then I tell you whether you need to install a patch from your own vendor. It's not too complicated.

  • "An attacker with the ability to conduct a successful cache poisoning attack can cause a nameserver's clients to contact the incorrect, and possibly malicious, hosts for particular services. Consequently, web traffic, email, and other important network data can be redirected to systems under the attacker's control." Too bad the fix doesn't 'Cure' the problem. It only makes it more difficult. "ISC is providing patches for BIND 9.3, 9.4 and 9.5" - Thank the Internet gods.
  • This is utterly serious! And only a matter of time before attackers compromise DNS on servers and/or clients.

    The good news is this is a really strange situation where the fix does not immediate[ly] reveal the vulnerability and reverse engineering isn't directly possible.

    And wow! Great news! There's a very critical flaw over the entire Internet name-to-IP infrastructure. But don't bother, it will take time before the bad guys find what we fixed...

  • Reverse Engineering? (Score:5, Informative)

    by ergo98 ( 9391 ) on Tuesday July 08, 2008 @02:24PM (#24104525) Homepage Journal

    From the summary-

    The good news is this is a really strange situation where the fix does not immediate[ly] reveal the vulnerability and reverse engineering isn't directly possible

    His DNS tester is submitting a DNS check that it knows will be relayed, and then monitoring if the upstream check (it is intentionally doing lookups against a DNS server it controls) consistently uses the same source port. If it does, hypothetically an attacker could send "response" packets in concert with the original request, poisoning the cache.

    I would guess that the patch makes the DNS server randomize the nonce when relaying DNS requests.

    I know nothing about this, but that's my super-l33t-hacker assumption from looking at it for 10 seconds.
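
    As a rough illustration of the kind of inference described above (a guess at the mechanism, not Kaminsky's actual checker), a resolver whose upstream queries keep arriving from the same source port would be flagged like this:

def looks_vulnerable(observed_ports, min_distinct=5):
    """Heuristic: a patched resolver should use a different source port per query."""
    return len(set(observed_ports)) < min_distinct

# Made-up example data: source ports seen on six test lookups.
print(looks_vulnerable([32768, 32768, 32768, 32768, 32768, 32768]))  # True: fixed port
print(looks_vulnerable([49211, 50873, 61022, 53990, 58417, 62105]))  # False: randomized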

    • Re: (Score:3, Funny)

      by HTH NE1 ( 675604 )

      When an absolute statement is modified with an adverb, the statement is not generally true. Examples:

      • "does not immediate[ly] reveal"
      • "isn't directly possible"
      • "the statement is not generally true"
  • Finally...! (Score:5, Funny)

    by JackassJedi ( 1263412 ) on Tuesday July 08, 2008 @02:36PM (#24104687)
    I'm (sort of) a native German speaker; in German, "DNA" is abbreviated "DNS" ("Desoxyribonukleinsäure", with "-säure" being "acid").
    Needless to say, my first impression of the headline was way more futuristic than what is actually there.
  • by molo ( 94384 ) on Tuesday July 08, 2008 @02:36PM (#24104691) Journal

    Here is the CERT advisory in a readable format.

    http://www.kb.cert.org/vuls/id/800113 [cert.org]

    BTW, did they hold this for a Microsoft patch Tuesday?

    -molo

  • Nature of the attack (Score:5, Informative)

    by Animats ( 122034 ) on Tuesday July 08, 2008 @02:41PM (#24104739) Homepage

    It's reasonably obvious from the CERT advisory how an attack would work. The CERT advisory tells us that the vulnerable systems are ones where the 16-bit DNS transaction ID and the 16-bit port number for a transaction are not randomly chosen. The CERT advisory also tells us that the attacker must be able to spoof IP addresses, that is, they must not be behind some ISP with egress filtering. CERT also tells us that it's a DNS poisoning attack.

    So it looks like a form of this attack documented in 2003 [net-security.org] at "Cache Poisoning using DNS Transaction ID Prediction". Back in 2003, it took a large number of packets to make this attack work, and even then it wasn't reliable. But there may be a more cost-effective attack strategy if you know how the DNS server assigns transaction numbers and ports.

    The fundamental problem comes from 1) the fact that source IP addresses can be forged, and 2) the fact that the DNS transaction ID, at 16 bits, is far too short to serve as a useful random key. Any key with security implications should be at least 64 bits and be generated by a crypto-grade random number generator.
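
    A minimal sketch of the distinction being drawn here, using Python's CSPRNG-backed secrets module (the specific sizes and the counter example are illustrative, not any particular resolver's code):

import secrets

txid = secrets.randbits(16)                 # per-query transaction ID, crypto-grade
src_port = secrets.randbelow(64512) + 1024  # per-query source port in 1024-65535
key64 = secrets.randbits(64)                # the >=64-bit key the comment argues for

# What a vulnerable resolver effectively did instead (predictable to an attacker):
#   txid = previous_txid + 1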

  • Good going Dan, this should add spice to the proceedings this year.
  • by molo ( 94384 ) on Tuesday July 08, 2008 @02:47PM (#24104811) Journal

    Debian released 3 advisories:

    bind9:
    http://www.debian.org/security/2008/dsa-1603 [debian.org]

    bind8:
    http://www.debian.org/security/2008/dsa-1604 [debian.org]

    glibc:
    http://www.debian.org/security/2008/dsa-1605 [debian.org]

    Bind9 now randomizes source ports, which can require firewall rule changes.

    Bind8 is now considered deprecated and the advisory recommends upgrading to bind9. There is no patch for bind8.

    The glibc stub resolver is also vulnerable, and there is no patch yet. The recommended workaround is to install bind9 as a caching resolver and point /etc/resolv.conf at localhost (a small check for this is sketched after this comment).

    In short, this is a big mess.

    -molo
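
    A small verification sketch for the glibc workaround mentioned above (my own illustration, not from the Debian advisory): it simply confirms that every nameserver line in /etc/resolv.conf points at a loopback address.

LOOPBACK = {"127.0.0.1", "::1"}

def stub_uses_localhost(path="/etc/resolv.conf"):
    """Return True if all configured nameservers are loopback addresses."""
    with open(path) as f:
        servers = [line.split()[1] for line in f
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]
    return bool(servers) and all(s in LOOPBACK for s in servers)

if __name__ == "__main__":
    print("workaround applied" if stub_uses_localhost() else "stub resolver still exposed")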

  • The real solution... (Score:5, Interesting)

    by Ethanol ( 176321 ) on Tuesday July 08, 2008 @02:49PM (#24104843)

    ...is to sign the root and deploy DNSSEC.

    Unfortunately that's politically non-expedient. But now that this vulnerability is out there, maybe the political will can at last materialize.

    The second-best solution is to deploy DNSSEC using DNSSEC Lookaside Validation [rfc-archive.org] (which means you get trust anchors from some other known site, not from the root zone). And that's available now.

    The worst thing about DNSSEC is it's too damn complicated at present; there needs to be the equivalent of "one-click" zone signing. ISC (and others) are working on getting us closer to that.

    The third-best solution is what's been done today. We just made it a lot harder to exploit the vulnerability--typically about 16000 times harder, depending on your configuration. There's a difference between "harder" and "impossible" though.
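
    Back-of-the-envelope arithmetic for where a multiplier of that order comes from (my own illustrative numbers; the real factor depends on how many source ports the resolver, the OS, and any NAT or firewall in front of it actually allow):

TXIDS = 2 ** 16  # 65,536 possible transaction IDs; previously the only unknown

for usable_ports in (16_000, 64_512):  # constrained vs. wide ephemeral port range
    before = TXIDS                     # attacker guesses TXID only (fixed source port)
    after = TXIDS * usable_ports       # attacker must guess TXID and source port
    print("%d ports: search space grows %dx" % (usable_ports, after // before))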

    • by mpeg4codec ( 581587 ) on Tuesday July 08, 2008 @04:28PM (#24106363) Homepage

      ...is to sign the root and deploy DNSSEC.

      Unfortunately that's politically non-expedient. But now that this vulnerability is out there, maybe the political will can at last materialize.

      The political will has been shifting a lot lately. I've spoken directly to the gentleman in charge of managing the root zone, and he says that technically speaking it would be an overnight change. All the DNSKEYs and RRSIGs have been generated; he's waiting for the OK from above, which he says appears more likely with each passing day.

      The second-best solution is to deploy DNSSEC using DNSSEC Lookaside Validation (which means you get trust anchors from some other known site, not from the root zone). And that's available now.

      The largest DLV repository that validates that the DNSKEYs belong to who they say they belong to (think Verisign-style verification), is run by isc.org. At this writing, this zone has a grand total of twenty five DLV records. Not exactly what I would call useful from a security standpoint, although it is a start.

      I'm a part of a DNSSEC monitoring project (called SecSpider [ucla.edu]). We have a set of pollers distributed around the world from which we collect data about the current deployment. In conjunction with this, when we are able to collect an identical DNSKEY RRset, we generate DLV records and serve them from one of our delegations. For details on how to use it, check out our blog [ucla.edu]. This serves the same purpose as ISC's repo, but the data is collected in an orthogonal manner. We currently have DLV records for over 12000 zones, although we haven't directly verified the identity of any of them.

      The worst thing about DNSSEC is it's too damn complicated at present; there needs to be the equivalent of "one-click" zone signing. ISC (and others) are working on getting us closer to that.

      This I can't disagree with. DNSSEC is over-engineered by academic crypto people. In fact, DNS in general is somewhat over-engineered, but at least it was successfully rolled out. ISC's efforts are valiant, and hopefully with a larger roll-out their tools will become the de facto standard.

      The third-best solution is what's been done today. We just made it a lot harder to exploit the vulnerability--typically about 16000 times harder, depending on your configuration. There's a difference between "harder" and "impossible" though.

      Yes, the difference is that impossible isn't possible. You can't stop a determined hacker, not even with the best technology (think of social engineering attacks). Security is like an onion: as soon as you pull away one layer there are a dozen more to get in your way.

      • by Ethanol ( 176321 ) on Tuesday July 08, 2008 @05:40PM (#24107515)

        The largest DLV repository that validates that the DNSKEYs belong to who they say they belong to (think Verisign-style verification), is run by isc.org.

        (My employer, BTW.)

        I'm a part of a DNSSEC monitoring project (called SecSpider [ucla.edu]). [...] This serves the same purpose as ISC's repo, but the data is collected in an orthogonal manner. We currently have DLV records for over 12000 zones, although we haven't directly verified the identity of any of them.

        That's an intriguing idea, but it doesn't really serve the same purpose as ISC's DLV until you do verify identity. (Would UCLA's lawyers be comfortable with someone relying on your DLV record repository for, say, banking transactions?)

  • Most of the companies in the list are "Status: unknown".....
  • by zero1101 ( 444838 ) on Tuesday July 08, 2008 @03:02PM (#24105065) Homepage

    This is from the advisory.

    Filter traffic at network perimeters
    Because the ability to spoof IP addresses is necessary to conduct these attacks, administrators should take care to filter spoofed addresses at the network perimeter. IETF Request for Comments (RFC) documents RFC 2827, RFC 3704, and RFC 3013 describe best current practices (BCPs) for implementing this defense. It is important to understand your network's configuration and service requirements before deciding what changes are appropriate.

    So...is this REALLY that serious? Is anyone NOT already doing this? I'm incredibly skeptical of big, sensational security alerts like this.

  • by Mr.Ned ( 79679 ) on Tuesday July 08, 2008 @03:02PM (#24105067)

    ... because djb recognized the vulnerability. it's even documented as such: http://cr.yp.to/djbdns/dns_random.html [cr.yp.to]

    • by Grendel Drago ( 41496 ) on Wednesday July 09, 2008 @08:02AM (#24115577) Homepage

      From this posting [doxpara.com]: "DJB was right. All those years ago, Dan J. Bernstein was right: Source Port Randomization should be standard on every name server in production use."

      But I'm sure his acting like a jerk still means that nobody should ever take his criticisms of software design seriously. Heck, the BIND folks didn't, and it's not like people are going to stop using BIND.

  • I'd have thought that, by its very nature, reverse engineering is never really directly possible. Sorta why it has to be reverse engineered.
  • The Death of BIND (Score:5, Interesting)

    by Sevn ( 12012 ) on Tuesday July 08, 2008 @03:43PM (#24105655) Homepage Journal

    I help admin one of the larger DNS systems (90,000+ zones), and our initial testing of the patched BIND showed it having half the performance of prior versions. That prompted us to very quickly replace all BIND caching servers with something else. We had already replaced authoritative services with something else because of BIND's lackluster performance. 3+ hours to load zones on reboot is, quite frankly, ridiculous. We really had no choice. Microsoft said they were going to open their mouths on a certain date, and we had a massive time crunch. We can't be the only company that simply had to ditch BIND. And I can't say I'm sorry to see it go. I'm sure Mister Vixie is a great guy, but his domain name service is, and always has been, complete garbage.

    • Re:The Death of BIND (Score:5, Informative)

      by Ethanol ( 176321 ) on Tuesday July 08, 2008 @04:40PM (#24106555)

      How in the world did you manage to get hold of the patches, test them, and deploy a competing product on a 90,000+ zone installation in the two hours between the patch's public release and your post? That's... really fast work.

      Out of curiosity, what version of BIND were you running prior to the change, and on what OS/hardware?

      It is true--and we acknowledged in the release announcements--that the initial security patches (9.3.5-P1, 9.4.2-P1, 9.5.0-P1) cause a significant performance hit on heavily-loaded systems.

      There are further code optimizations that get performance roughly back to baseline, but we felt they were too extensive to release without putting them through a beta cycle.

      Two beta releases, with the enhanced performance code, were published at the same time as the patches: BIND 9.5.1b1 [isc.org] and BIND 9.4.3b2 [isc.org]; you can grab them now (um, for values of "now" that include "very soon"; one of our 10G fiber links picked an unfortunate moment to fail).

      The remaining beta, BIND 9.3.6b1, will be released in a few days, because five releases at one time was already enough to juggle.

      • Re:The Death of BIND (Score:5, Interesting)

        by Sevn ( 12012 ) on Tuesday July 08, 2008 @04:54PM (#24106805) Homepage Journal

        We've known about it for a while. Certain providers were contacted about it a while ago. Any other information is confidential; as I said, not my call. We were seeing QPS start out at 5,000ish then drop to 3,000ish during our testing. With the 30ish days we had to react, the path of least resistance was replacement. The only version we were given to play with was 9.5.0rc1, which was three weeks ago. Understand that all this was driven by Microsoft saying they were going to spill the beans on a certain date. So your "now" wasn't good enough to meet our deadline. I'm not a huge fan of replacing production services that are "working fine", and BIND was performing adequately for us before we got the word on this vulnerability from one of our vendors. At this point we are "BINDless", though, and the mountains we had to move will probably not be moving back.

        • Re:The Death of BIND (Score:5, Interesting)

          by Ethanol ( 176321 ) on Tuesday July 08, 2008 @06:01PM (#24107813)

          Thank you for your reply.

          The only providers who should have received the patches earlier than today were a small group of our support customers who've contracted for advance notice of security issues. They were all told that this was a preliminary patch only, and to watch for betas with better performance--and that the patches were highly confidential and covered by nondisclosure agreements.

          Your installation profile doesn't seem to match any of theirs, and in any case I hope they would have let us know before they eliminated BIND from their networks. If you are not one of our support customers, then I'm very concerned that you had the patch in your hands as early as you say you did. Partly because it means you only got a partial picture of the situation, and partly because it means someone violated our trust--and it's important that we know who, so we can emphasize to them that this is not a joke.

          Can you please tell me where it was that you got the patch you tested?

          • Re:The Death of BIND (Score:5, Interesting)

            by Sevn ( 12012 ) on Tuesday July 08, 2008 @06:22PM (#24108109) Homepage Journal

            I am one of your support customers. Thing is, I'm not comfortable saying much else because we were told the 10th was the magic day, and it leaked 2 days early. To be clear, the patched BIND worked the way it's supposed to, and I'm sure it's going to work fine for most customers. With the news that you have patched versions that address the issues with heavily taxed servers, probably almost all of them. We jumped the gun because that's what we do. : ) And I'm sorry I was critical on BIND. It is still the industry standard, and the original daemon that made it possible to get rid of enormous host files. There's a degree of comfort in running *the* DNS daemon, and we were doing it even though my organization is decidedly anti opensource. That speaks volumes.

  • by spir0 ( 319821 ) on Tuesday July 08, 2008 @04:34PM (#24106465) Homepage Journal

    from http://www.kb.cert.org/vuls/id/800113 [cert.org]: "The DNS protocol specification includes a transaction ID field of 16 bits. If the specification is correctly implemented and the transaction ID is randomly selected with a strong random number generator, an attacker will require, on average, 32,768 attempts to successfully predict the ID."

    Just put the real seed back into the code.

    obrant: and who the frak releases advisories in DOC format in the 21st century?

  • by X.25 ( 255792 ) on Tuesday July 08, 2008 @11:58PM (#24111939)

    1. DNS (well, UDP protocols in general) problems have been known for ages. This is nothing new; it just seems new because so much drama has been created. There is a reason why certain counter-measures have already been implemented in DNS software. Never mind that no one is using them, because it requires effort.

    2. So much focus has been put on "phishing". I'd like someone to explain to me how phishers are going to forge certificates and get sensitive info. Sure, I'll get a bogus IP for the website I want to visit, but unless phishers manage to create a valid certificate for gmail.com (for example), I'll get a nice warning box. Which is the same shit as what happens now when you go to a phishing website. Those who click "Ok" on every prompt will still get fucked; those who check errors will still not be tricked. Nothing changes.

    3. Security became a joke when advisories like "Man in the middle attack allows attackers to steal Myspace passwords" started showing up on first pages of various news outlets.
