Massive, Coordinated Patch To the DNS Released

tkrabec alerts us to a CERT advisory announcing a massive, multi-vendor DNS patch released today. Early this year, researcher Dan Kaminsky discovered a basic flaw in the DNS that could allow attackers to easily compromise any name server; it also affects clients. Kaminsky has been working in secret with a large group of vendors on a coordinated patch. Eighty-one vendors are listed in the CERT advisory (DOC). Here is the executive overview (PDF) of the CERT advisory (text reproduced at the link above). There's also a podcast interview with Dan Kaminsky, and his site has a DNS checker tool on the top page. "The issue is extremely serious, and all name servers should be patched as soon as possible. Updates are also being released for a variety of other platforms since this is a problem with the DNS protocol itself, not a specific implementation. The good news is this is a really strange situation where the fix does not [immediately] reveal the vulnerability and reverse engineering isn't directly possible."
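
If you just want a quick sense of whether the resolver you use randomizes its source ports, one well-known option is DNS-OARC's port test; below is a minimal Python sketch using the dnspython library. The service name (porttest.dns-oarc.net) and its continued availability are assumptions here, and this is not the checker from Kaminsky's site.

    # Minimal sketch: ask DNS-OARC's port-randomization test what it sees.
    # Assumes the dnspython package; porttest.dns-oarc.net is assumed to be
    # reachable and still offering its TXT-based report.
    import dns.resolver

    def check_port_randomization():
        # The TXT answer rates the source-port behaviour of whichever
        # resolver actually performed the lookup (e.g. "... is GOOD: ...").
        answers = dns.resolver.query('porttest.dns-oarc.net', 'TXT')
        for rr in answers:
            print(rr.to_text())

    if __name__ == '__main__':
        check_port_randomization()
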
  • by iXiXi ( 659985 ) on Tuesday July 08, 2008 @03:24PM (#24104505)
    "An attacker with the ability to conduct a successful cache poisoning attack can cause a nameserver's clients to contact the incorrect, and possibly malicious, hosts for particular services. Consequently, web traffic, email, and other important network data can be redirected to systems under the attacker's control." Too bad the fix doesn't 'Cure' the problem. It only makes it more difficult. "ISC is providing patches for BIND 9.3, 9.4 and 9.5" - Thank the Internet gods.
  • by morgan_greywolf ( 835522 ) * on Tuesday July 08, 2008 @03:28PM (#24104559) Homepage Journal

    Dan has written an article on a JavaScript attack that can compromise a home router... that's probably far worse in terms of real damage (i.e., bot creation, personal data stolen)

    And that's precisely why the first thing I do on a home router is to disable the caching nameserver and install DJBDNS on a Linux box instead. :)

  • by swb ( 14022 ) on Tuesday July 08, 2008 @03:48PM (#24104831)

    Attention all DJB software fans, here's another chance to champion the superiority of DJB's software. Don't forget to include positive commentary on the licensing and patch status.

    Thanks!

  • The real solution... (Score:5, Interesting)

    by Ethanol ( 176321 ) on Tuesday July 08, 2008 @03:49PM (#24104843)

    ...is to sign the root and deploy DNSSEC.

    Unfortunately that's politically non-expedient. But now that this vulnerability is out there, maybe the political will can at last materialize.

    The second-best solution is to deploy DNSSEC using DNSSEC Lookaside Validation [rfc-archive.org] (which means you get trust anchors from some other known site, not from the root zone). And that's available now.

    The worst thing about DNSSEC is it's too damn complicated at present; there needs to be the equivalent of "one-click" zone signing. ISC (and others) are working on getting us closer to that.

    The third-best solution is what's been done today. We just made it a lot harder to exploit the vulnerability--typically about 16000 times harder, depending on your configuration. There's a difference between "harder" and "impossible" though.
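
    A back-of-the-envelope sketch of where a figure like "16000 times harder" comes from, with the usable port count treated purely as an assumption (the real entropy depends on the OS's ephemeral port range and the resolver's configuration):

        # Illustrative arithmetic only; USABLE_PORTS is an assumed value.
        TXID_SPACE = 2 ** 16      # 16-bit DNS transaction ID: 65,536 values
        USABLE_PORTS = 16000      # assumed count of randomized source ports

        space_txid_only = TXID_SPACE                    # pre-patch: match the ID
        space_txid_and_port = TXID_SPACE * USABLE_PORTS # post-patch: ID and port

        print("search space (txid only):   %d" % space_txid_only)
        print("search space (txid + port): %d" % space_txid_and_port)
        print("difficulty multiplier:      %dx" % (space_txid_and_port // space_txid_only))
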

  • by PCM2 ( 4486 ) on Tuesday July 08, 2008 @03:58PM (#24105009) Homepage
    Am I right in guessing that the Web page containing the payload would have to be coded with the default password AND the default IP address of your router's admin interface? I'm sure I'm in the minority, but I generally set my subnet to a 10.x.x.x address block when I first configure my home router.
  • by jeremypv ( 455256 ) on Tuesday July 08, 2008 @04:18PM (#24105283) Homepage Journal

    >Did anyone else notice that today is Tuesday?

    http://www.microsoft.com/technet/security/bulletin/ms08-037.mspx

  • The Death of BIND (Score:5, Interesting)

    by Sevn ( 12012 ) on Tuesday July 08, 2008 @04:43PM (#24105655) Homepage Journal

    I help admin one of the larger DNS systems (90,000+ zones), and our initial testing of the patched BIND showed it having half the performance of prior versions. That prompted us to very quickly replace all BIND caching servers with something else. We had already replaced authoritative services with something else because of BIND's lackluster performance; 3+ hours to load zones on reboot is, quite frankly, ridiculous. We really had no choice. Microsoft said they were going to open their mouths on a certain date, and we had a massive time crunch. We can't be the only company that simply had to ditch BIND. And I can't say I'm sorry to see it go. I'm sure Mr. Vixie is a great guy, but his domain name service is, and always has been, complete garbage.

  • Re:The Death of BIND (Score:3, Interesting)

    by Sevn ( 12012 ) on Tuesday July 08, 2008 @05:04PM (#24105969) Homepage Journal

    Oh, and despite the Ron Paulesque nature of the DJB fanbase, I'd still recommend the djbdns suite as the best free solution. I can think of a little ISP in Iowa that I set up with djbdns that has to be happy they don't have to do a thing right now.

  • by netwiz ( 33291 ) on Tuesday July 08, 2008 @05:11PM (#24106085) Homepage

    Do I trust it? I don't know. Tell me the facts. The sheer quantity of Internet shenanigans going on of late makes me suspicious. This sounds like they're patching for a remote root exploit, but a protocol issue won't do that. DNS poisoning? What is it, then?

    They're making us patch everything, and aren't telling us what it does. These are my systems, and you're going to tell me precisely what's going on before any of your code gets to run.

  • by afidel ( 530433 ) on Tuesday July 08, 2008 @05:14PM (#24106155)
    Yeah, really. I figure half the reason Windows gets such a bad rap is that many of the people implementing it don't know how to read a crash dump. When we implemented our Citrix farm two years ago we ran into a couple of BSODs, which I was able to trace back to some obscure KB articles through the crash dumps, and obtained the private hotfixes from MS. Since that initial month we haven't had a single server crash across the entire farm. I didn't even really need to read the assembler portion of the crash dumps, just the function entry points, to figure out what was going on. Also, I would have to work MUCH harder if I couldn't script: applying a change to several (or all) of over 160 servers would take quite a while by hand!
  • by mpeg4codec ( 581587 ) on Tuesday July 08, 2008 @05:28PM (#24106363) Homepage

    ...is to sign the root and deploy DNSSEC.

    Unfortunately that's politically non-expedient. But now that this vulnerability is out there, maybe the political will can at last materialize.

    The political will has been shifting a lot lately. I've spoken directly to the gentleman in charge of managing the root zone, and he says that, technically speaking, it would be an overnight change. All the DNSKEYs and RRSIGs have been generated; he's waiting for the OK from above, which he says appears to be more likely with each passing day.

    The second-best solution is to deploy DNSSEC using DNSSEC Lookaside Validation (which means you get trust anchors from some other known site, not from the root zone). And that's available now.

    The largest DLV repository that validates that the DNSKEYs belong to who they say they belong to (think Verisign-style verification) is run by isc.org. At this writing, that zone has a grand total of twenty-five DLV records. Not exactly what I would call useful from a security standpoint, although it is a start.

    I'm a part of a DNSSEC monitoring project (called SecSpider [ucla.edu]). We have a set of pollers distributed around the world from which we collect data about the current deployment. In conjunction with this, when we are able to collect an identical DNSKEY RRset, we generate DLV records and serve them from one of our delegations. For details on how to use it, check out our blog [ucla.edu]. This serves the same purpose as ISC's repo, but the data is collected in an orthogonal manner. We currently have DLV records for over 12000 zones, although we haven't directly verified the identity of any of them.

    The worst thing about DNSSEC is it's too damn complicated at present; there needs to be the equivalent of "one-click" zone signing. ISC (and others) are working on getting us closer to that.

    This I can't disagree with. DNSSEC is over-engineered by academic crypto people. In fact, DNS in general is somewhat over-engineered, but at least it was successfully rolled out. ISC's efforts are valiant, and hopefully with a larger roll-out their tools will become the de facto standard.

    The third-best solution is what's been done today. We just made it a lot harder to exploit the vulnerability--typically about 16000 times harder, depending on your configuration. There's a difference between "harder" and "impossible" though.

    Yes, the difference is that impossible isn't possible. You can't stop a determined hacker, not even with the best technology (think of social engineering attacks). Security is like an onion: as soon as you pull away one layer there are a dozen more to get in your way.
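
    As a concrete illustration of what a lookaside check involves, here is a minimal Python sketch using the dnspython library; the zone name is hypothetical, dlv.isc.org is the ISC repository mentioned above, and no signature validation is performed:

        # Sketch: does a zone publish DNSKEYs, and does a DLV record for it
        # exist under a lookaside domain? Per RFC 4431 naming, the DLV RRset
        # for example.com under dlv.isc.org lives at example.com.dlv.isc.org.
        # Assumes the dnspython package; signatures are NOT validated here.
        import dns.resolver

        def has_rrset(name, rdtype):
            try:
                dns.resolver.query(name, rdtype)
                return True
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                return False

        zone = 'example.com'         # hypothetical zone of interest
        lookaside = 'dlv.isc.org'    # ISC's DLV repository

        print('DNSKEY published:', has_rrset(zone, 'DNSKEY'))
        print('DLV record found:', has_rrset('%s.%s' % (zone, lookaside), 'DLV'))
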

  • Re:The Death of BIND (Score:5, Interesting)

    by Sevn ( 12012 ) on Tuesday July 08, 2008 @05:54PM (#24106805) Homepage Journal

    We've known about it for a while. Certain providers were contacted about it a while ago. Any other information is confidential; as I said, not my call. We were seeing QPS start out at 5,000ish then drop to 3,000ish during our testing. With the 30ish days we had to react, the path of least resistance was replacement. The only version we were given to play with was 9.5.0rc1, which was three weeks ago. Understand that all this was driven by Microsoft saying they were going to spill the beans on a certain date. So your "now" wasn't good enough to meet our deadline. I'm not a huge fan of replacing production services that are "working fine", and BIND was performing adequately for us before we got the word on this vulnerability from one of our vendors. At this point we are "BINDless", though, and the mountains we had to move will probably not be moving back.

  • Re:The Death of BIND (Score:5, Interesting)

    by Ethanol ( 176321 ) on Tuesday July 08, 2008 @07:01PM (#24107813)

    Thank you for your reply.

    The only providers who should have received the patches earlier than today were a small group of our support customers who've contracted for advance notice of security issues. They were all told that this was a preliminary patch only, and to watch for betas with better performance--and that the patches were highly confidential and covered by nondisclosure agreements.

    Your installation profile doesn't seem to match any of theirs, and in any case I hope they would have let us know before they eliminated BIND from their networks. If you are not one of our support customers, then I'm very concerned that you had the patch in your hands as early as you say you did. Partly because it means you only got a partial picture of the situation, and partly because it means someone violated our trust--and it's important that we know who, so we can emphasize to them that this is not a joke.

    Can you please tell me where it was that you got the patch you tested?

  • Re:The Death of BIND (Score:5, Interesting)

    by Sevn ( 12012 ) on Tuesday July 08, 2008 @07:22PM (#24108109) Homepage Journal

    I am one of your support customers. Thing is, I'm not comfortable saying much else because we were told the 10th was the magic day, and it leaked 2 days early. To be clear, the patched BIND worked the way it's supposed to, and I'm sure it's going to work fine for most customers. With the news that you have patched versions that address the issues with heavily taxed servers, probably almost all of them. We jumped the gun because that's what we do. : ) And I'm sorry I was critical of BIND. It is still the industry standard, and the original daemon that made it possible to get rid of enormous host files. There's a degree of comfort in running *the* DNS daemon, and we were doing it even though my organization is decidedly anti-open-source. That speaks volumes.

  • by cduffy ( 652 ) <charles+slashdot@dyfis.net> on Tuesday July 08, 2008 @07:31PM (#24108225)

    I should have specified senior sysadmins, to be sure -- there's no reason to be that picky about everyone on staff. OTOH, the whole point of having senior people is to have someone who can dig into and eventually solve any issue that comes up -- and that means understanding what's under the abstractions. Addressing your comparison, software isn't as reliable as hardware is these days; maybe in ten years this won't be the case, but right now having someone available who can dig into the operating system (or your app server, or any layer in between) is damned useful.

  • Re:Oh cool! (Score:1, Interesting)

    by Anonymous Coward on Tuesday July 08, 2008 @07:51PM (#24108421)

    Well, strangely enough... maybe its DNS entry has been compromised.

    I'm accessing doxpara.com (66.240.226.139) from two separate computers (work and home), and it's giving me two quite different pages.

    When I access it from work I see a full blog-style page including the checker, but when I access it from home I get a plain-text-only version of just the DNS Checker text and a standard HTML button, 'Check My DNS'. The rest of the website does not appear at all.

    I've done a tracert to doxpara.com from both computers, and from home I get 'Request timed out' on every hop except the last one (hop 22). The tracert from work gives normal hop information on all hops.

    Very very odd!

  • by quazee ( 816569 ) on Tuesday July 08, 2008 @07:52PM (#24108433)

    The exploit is trivial to figure out - if a caching DNS server has recursion enabled, and also sends the outgoing DNS queries to the authoritative servers over a fixed (or predictable) UDP port, you can just send forged UDP responses together with your recursive DNS query.
    The bogus response will be cached and will affect other users of the DNS server.

    The attacker also needs to guess the transaction ID (a 16-bit value), but they can send multiple bogus UDP responses with different transaction IDs.
    Also, vulnerable implementations may generate transaction IDs in a predictable way, so the attacker can obtain the current state of the PRNG by issuing a recursive DNS query for a zone under the attacker's control.

    Such an attack cannot be performed from a typical home broadband connection, as most ISPs will not route packets originating from IP addresses not allocated by the ISP.
    The attacker needs to control the routing/egress filtering within his AS (e.g., an enterprise-level Internet service).
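
    To make the moving parts concrete, here is a sketch (using the scapy library) of the fields a spoofed answer would have to match; the addresses, transaction ID, and port below are all made-up values, and the packet is only built in memory, never sent:

        # Illustration only: build (but do not send) a forged DNS response to
        # show which fields the attacker must guess or know in advance.
        # Assumes the scapy package; all values below are hypothetical.
        from scapy.all import IP, UDP, DNS, DNSQR, DNSRR

        resolver_ip = '192.0.2.1'    # victim caching resolver (hypothetical)
        auth_server = '192.0.2.53'   # authoritative server being impersonated
        guessed_txid = 0x1234        # must match the resolver's 16-bit query ID
        guessed_port = 33333         # post-patch, must also match a random source port

        forged = (IP(src=auth_server, dst=resolver_ip) /
                  UDP(sport=53, dport=guessed_port) /
                  DNS(id=guessed_txid, qr=1, aa=1,
                      qd=DNSQR(qname='www.example.com'),
                      an=DNSRR(rrname='www.example.com', type='A',
                               ttl=86400, rdata='198.51.100.66')))

        print(forged.summary())
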

  • Re:The Death of BIND (Score:4, Interesting)

    by Ethanol ( 176321 ) on Tuesday July 08, 2008 @08:48PM (#24109011)

    Ah. Well, that's a relief, then, of sorts. :)

    Thanks for the kind words about BIND and I'm sorry it didn't meet your needs this time. Please encourage your management to contact ISC and tell us about your choices and your experiences.

  • by Effugas ( 2378 ) * on Tuesday July 08, 2008 @09:19PM (#24109311) Homepage

    [This is Dan Kaminsky]

    No, this attack is much worse than my home router exploits (which, admittedly, aren't getting fixed anytime soon). While it is indeed nice to have compromised firmware living somewhere on your LAN, being able to generically attack everyone using a given ISP is a much more valuable proposition -- especially when I don't need to worry about the pesky paranoid people changing their router passwords, or even using a router I haven't built a script to attack.

    I'm being very circumspect about implications. August 6th will be an interesting day.

    It's funny you mention the iptables auto-block. There's been a known attack here for years -- spoof the root servers attacking you, and...yeah.

    That being said, we agree on the ultimate fix...run yum update, and be done.
