
Attack Code Published For DNS Vulnerability 205

Posted by samzenpus
from the protect-ya-neck dept.
get_Rootin writes "That didn't take long. ZDNet is reporting that HD Moore has released exploit code for Dan Kaminsky's DNS cache poisoning vulnerability into the point-and-click Metasploit attack tool. From the article: 'This exploit caches a single malicious host entry into the target nameserver. By causing the target nameserver to query for random hostnames at the target domain, the attacker can spoof a response to the target server including an answer for the query, an authority server record, and an additional record for that server, causing target nameserver to insert the additional record into the cache.' Here's our previous Slashdot coverage."
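For readers wondering what "an answer for the query, an authority server record, and an additional record" look like on the wire, here is a rough Python sketch of the layout of such a forged response. All names and addresses below are invented, and the script only builds bytes; it sends nothing:

```python
import struct

A, NS, IN = 1, 2, 1  # DNS type codes for A and NS records, class IN

def encode_name(name):
    """DNS wire encoding of a name: length-prefixed labels, null-terminated."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def rr(name, rtype, ttl, rdata):
    """One resource record: NAME, TYPE, CLASS, TTL, RDLENGTH, RDATA."""
    return encode_name(name) + struct.pack("!HHIH", rtype, IN, ttl, len(rdata)) + rdata

def forged_response(txid, random_qname, ns_name, attacker_ip):
    # Header: flags 0x8400 = QR (response) + AA (authoritative);
    # one question and one record in each of the three response sections.
    header = struct.pack("!HHHHHH", txid, 0x8400, 1, 1, 1, 1)
    question = encode_name(random_qname) + struct.pack("!HH", A, IN)
    # 1. An answer for the throwaway random hostname that was queried.
    answer = rr(random_qname, A, 60, bytes([127, 0, 0, 1]))
    # 2. An authority record delegating the whole target domain to a nameserver...
    authority = rr("example.com", NS, 86400, encode_name(ns_name))
    # 3. ...and the additional (glue) record pointing that nameserver at the
    # attacker's address -- the record a vulnerable resolver inserts into cache.
    additional = rr(ns_name, A, 86400, bytes(attacker_ip))
    return header + question + answer + authority + additional

pkt = forged_response(0x1234, "xq3f9.example.com", "ns.example.com", (10, 0, 0, 66))
```

The attacker races the real authority: if a packet like this arrives first with a matching transaction ID (and, post-patch, a matching source port), the bogus glue wins.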
This discussion has been archived. No new comments can be posted.


  • Here we go... (Score:4, Interesting)

    by LostCluster (625375) * on Wednesday July 23, 2008 @06:41PM (#24312697)

    This has to be the worst time ever to be a web surfer. How long until we see the major networks broadcasting the legit IP quads of sites we want to reach?

    • Re:Here we go... (Score:5, Informative)

      by Carnildo (712617) on Wednesday July 23, 2008 @07:12PM (#24312999) Homepage Journal

      This has to be the worst time ever to be a web surfer. How long until we see the major networks broadcasting the legit IP quads of sites we want to reach?

      There's nothing new about this. DNS cache poisoning attacks have been found before, and the internet hasn't melted down yet. If you're paranoid, run your own caching resolver.

      • Re:Here we go... (Score:5, Interesting)

        by Martin Blank (154261) on Wednesday July 23, 2008 @07:28PM (#24313131) Journal

        You may still not be safe. If someone can fire off an XSS attack through your browser, it could do enough lookups to make you vulnerable. Combine this with a periodic run to a controlled server to grab your source port for guessing (presuming that you have not patched), and you may have a problem.

        Granted, it's unlikely that you would explicitly be targeted, and things like NoScript help defend against it, but there are still possible gaps. In fact, there are several tens of millions of systems that will remain vulnerable for some time to come; I haven't seen many SOHO router firmware fixes released so far, and a lot of people point to their routers for their DNS.

        • Re: (Score:3, Informative)

          by afidel (530433)
          Heck, MOST of the corporate firewalls don't do the right thing, so even if your clients and DNS server are patched you may STILL be vulnerable! Unless your firewall does transparent port passthrough (i.e. NAT but not PAT), you are vulnerable. For most firewalls this means you have to put a caching resolver in the DMZ and point internal servers and/or clients to it to be fully protected. Oh, and don't forget things like anti-spam appliances; most are pointed directly out to the internet for DNS but not all are o
          • Re:Here we go... (Score:5, Informative)

            by Martin Blank (154261) on Wednesday July 23, 2008 @11:01PM (#24314637) Journal

            Where I work, we run the servers through a proxy firewall with a DNS proxy service, and the DNS service on the firewall has been patched for this vulnerability. For traffic run through it, it doesn't preserve source port from the DNS servers, and from a quick glance, the source ports on requests seem to be randomized, so I think from that perspective, we may well be safer even for unpatched servers. However, our setup seems to be the exception, and we may have a couple of other networks (physically and logically separated from the primary) that do not have the benefits of this arrangement.

        • Re: (Score:3, Funny)

          by ILuvRamen (1026668)
          And also a solar storm can knock out the entire internet and power grid. And at any time we can be hit by a gamma ray burst or a black hole from the LHC can suck us all up. Yeah, internet security is never going to be 100%, DUH! Is it really even worth mentioning?
      • Your own caching resolver will submarine you for a while, but eventually it has to come up for air and trust some other DNS resolver to see if the info hasn't changed.
        • Re: (Score:3, Informative)

          by Anonymous Coward

          You can run your own recursive resolver which only talks to authoritative DNS servers. You can configure it to use random source ports if you want to make this attack much more difficult. Then an attacker would have to send you billions of spoofed packets to poison your DNS. That seems a little excessive for exploiting just one user. You could make it even more difficult by rate limiting your resolver (you're its only user after all).
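The "billions of spoofed packets" figure is easy to sanity-check: an off-path attacker has to guess a 16-bit transaction ID, and, once source ports are randomized, roughly a 16-bit source port as well. Back-of-the-envelope, in Python:

```python
# Rough spoofing-difficulty arithmetic (illustrative only; it ignores
# details like reserved port ranges and multiple in-flight queries).
TXID_SPACE = 2 ** 16   # 16-bit DNS transaction ID
PORT_SPACE = 2 ** 16   # roughly 64K usable UDP source ports

fixed_port = TXID_SPACE                # fixed source port: ~65,536 combinations
random_port = TXID_SPACE * PORT_SPACE  # randomized port: ~4.29 billion combinations

print(fixed_port)    # 65536
print(random_port)   # 4294967296
```

So port randomization alone multiplies the attacker's work by about 65,000, and rate limiting the resolver stretches the attack out even further.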

    • Re: (Score:3, Insightful)

      by Darkness404 (1287218)

      This has to be the worst time ever to be a web surfer.

      Ummm... No. Today I can easily surf the 'net with just about every ad blocked, have Flash blocked when I want, but re-enable it for, say, YouTube, all at the click of a mouse. I can use an OS and browser that is free and open source. I can surf 100% anonymously easily. I can download every video game I played as a child in less than an hour. And I can hear just about any song I would ever want to hear in less than a minute.

      Sure, some things suck today, BT throttling, the ISP's "No-Usenet" crusade

    • by MadMidnightBomber (894759) on Thursday July 24, 2008 @02:36AM (#24315577)
      Can someone please send me the HOSTS file for the Internet?

      kthxbye

      • Re: (Score:3, Interesting)

        You made me laugh, but as with all humour there is a grain of truth within.

        Curiously I spent some time yesterday attempting to estimate the number of zones currently known to DNS. Perhaps there is a better approach ( one that, say, inquires against DNS ) but by using Teh Googler to search for site:.${TLD} I came up with these order-of-magnitude results:

        • .com 7,980,000,000
        • .org 1,950,000,000
        • .net 2,140,000,000
        • .info 195,000,000

        These numbers just seem insane. Can anyone advise?

        • Re: (Score:3, Informative)

          by LostCluster (625375) *
          That's a count of URLs, not domains within each TLD. For example, site:cnn.com accounts for 3,540,000 of your .com results.
  • Google (Score:5, Funny)

    by bdasd5 (1257940) on Wednesday July 23, 2008 @06:41PM (#24312699)
    And here I am, thinking I was on Google.
  • by Aussenseiter (1241842) on Wednesday July 23, 2008 @06:41PM (#24312701)
    And lo, all unpatched websites were rendered unto Goatse.
    • by Bryansix (761547) on Wednesday July 23, 2008 @06:50PM (#24312797) Homepage
      It doesn't work that way. Local DNS servers are either run by a corporation or by your ISP. Either one could be hacked now. So it's not whether the website is patched; it's whether the DNS server your computer is using is patched.
      • by rs79 (71822) <hostmaster@open-rsc.org> on Wednesday July 23, 2008 @07:45PM (#24313241) Homepage

        Um... even if you run your own caching server, if your ISP runs a "transparent" web proxy it will do its own dns. You may in fact run DJB which is immune from this bug, but if your ISP runs an unpatched dns server you'll still be scrod despite running your own caching server.

        Slick huh?

        They need to take the dns lookup out of the web proxies.

        • by blincoln (592401) on Wednesday July 23, 2008 @08:27PM (#24313565) Homepage Journal

          They need to take the dns lookup out of the web proxies.

          The problem with doing that would be that it would then be impossible (at least using current DNS software, as far as I know) to allow clients on an internal network to have limited internet access without allowing them to perform DNS tunneling (and thereby upgrade their internet access to "full").

          Once someone (anyone?) releases a DNS package that allows firewall-style rules (e.g. "client on this range of IPs may only resolve subdomains of the following domains...", "clients may only look up X distinct subdomains each of Y domains every Z hours" then the picture would probably change.
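As far as I know no DNS package ships exactly those firewall-style rules, but the matching logic for the first kind of rule is simple enough to sketch. A hypothetical example (the networks and domain names are invented):

```python
import ipaddress

# Hypothetical rule table: clients in a given range may only resolve
# names at or under the listed domains.
RULES = [
    (ipaddress.ip_network("10.1.0.0/16"), ("example.com", "corp.example.net")),
]

def in_domain(qname, domain):
    """True if qname equals domain or is a subdomain of it."""
    qname, domain = qname.rstrip(".").lower(), domain.rstrip(".").lower()
    return qname == domain or qname.endswith("." + domain)

def lookup_allowed(client_ip, qname):
    addr = ipaddress.ip_address(client_ip)
    for network, domains in RULES:
        if addr in network:
            return any(in_domain(qname, d) for d in domains)
    return False  # default-deny clients with no matching rule

print(lookup_allowed("10.1.2.3", "www.example.com"))     # True
print(lookup_allowed("10.1.2.3", "tunnel.example.org"))  # False
```

The "X distinct subdomains per Y domains every Z hours" rule would additionally need per-client counters with timestamps, which is where it stops being a one-screen sketch.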

          • by DragonHawk (21256) on Wednesday July 23, 2008 @08:46PM (#24313727) Homepage Journal

            Once someone (anyone?) releases a DNS package that allows firewall-style rules (e.g. "client on this range of IPs may only resolve subdomains of the following domains..."

            I think you might be able to do that with the "views" feature of ISC BIND v9 named, although I've never tried. I know you can define ACLs for clients and control how they see the DNS using the ACL. You should be able to define forwarding zones for the domains you want to work, and blackhole everything else. I think.

            http://www.isc.org/sw/bind/arm93/Bv9ARM.ch06.html#view_statement_grammar [isc.org]

          • by rs79 (71822)

            "The problem with doing that would be that it would then be impossible (at least using current DNS software, as far as I know) to allow clients on an internal network to have limited internet access without allowing them to perform DNS tunneling "

            You've lost me totally. I'm not talking about explicit web proxies, but the "transparent" ones that ISPs use.

            I can connect with ssh and ftp to free.tibet, but not via port 80 ("web") service. It's all in the wrist action of (the screw you I'm doing my own dns looku

    • by LostCluster (625375) * on Wednesday July 23, 2008 @06:51PM (#24312803)

      unpatched websites

      Have you been following this story? It's not sites that need the patch, it's DNS servers. Site owners are powerless if the ISPs fail to protect their domain name from an entry leading to the spoof site's IP address.

  • %> /usr/bin/traceroute fruity.stuff

    traceroute to fruity.stuff (1.2.3.4), 30 hops max, 42 byte packets
    evil bit detected. re-routing ...

  • I know (Score:4, Funny)

    by Daimanta (1140543) on Wednesday July 23, 2008 @06:52PM (#24312819) Journal

    I exploited this and let a huge cache of people visit my site (127.0.0.1) instead of the site they wanted to go to. It was kickass.

  • The interesting thing: DNS glue (additional) poisoning WAS known, just not widely. E.g., the SECOND hit for "dns glue poison" in Google gets http://lists.oarci.net/pipermail/dns-operations/2006-May/000537.html [oarci.net].

    Quoting Emin Gun Sirer:
    Incidentally, the client should be wary of trusting glue records unconditionally, as they are non-authoritative. A well-known cache poisoning attack works by tricking clients to believe glue records for all time and for all queries. Glue should be trusted for only the lookup

    • Re: (Score:3, Funny)

      That's not the attack. Try again.
      • Re: (Score:3, Insightful)

        by szap (201293)

        No, but it's a "feature" that makes the attack possible. Turn it off, or make it stricter, and the attack falls apart.

    • by Anonymous Coward on Wednesday July 23, 2008 @07:51PM (#24313277)

      Congratulations, you confused the mods. Bailiwick checking was added to all DNS resolvers in response to glue poisoning and made cache poisoning through spoofed glue records very difficult. The current problem is that the typical filter rules are insufficient for stopping a glue poisoning attack which appears to come from the authoritative server: Kaminsky found a way around the glue poisoning countermeasure. This means that a very dangerous kind of attack which was thought to be defeated is now possible again.

      • Re: (Score:3, Insightful)

        by nweaver (113078)

        Read the quotation again...

        Emin Gun Sirer: "Glue should be trusted for only the lookup in question [emphasis added] for only the duration of that lookup."

        This says "No Bailiwick checking at all": glue (additional) records should NEVER be cached. Period.

    • The problem is that glue records are often used to pass the addresses of nameservers required to resolve the domain in question. If that glue record can be passed back with a false address to the nameserver, the entire domain can now be controlled. If you can pull this off with a TLD, then the attack becomes much more serious. It appears at first glance that in addition to TTL restrictions (com has a TTL of two days), bailiwick limitations may limit these kinds of attacks (com, for example, is served off

      • by blueg3 (192743) on Wednesday July 23, 2008 @08:21PM (#24313515)

        It only works because the DNS server caches the result of the glue record, against the recommendation of the above writer.

        The glue record is necessary if, say, you need to provide the address of a nameserver when you provide the name of the authoritative nameserver for a query. You should use that glue record for that query only.

        What happens is that an attacker queries lbixds.google.com (or some other nonexistent domain) and then sends the server he issued that request to a response to that query that also has a glue record giving a false address for ns.google.com. If the DNS server only used that false address for resolving lbixds.google.com, cached lbixds.google.com, and left it at that, then lbixds.google.com would be the only entry the attacker could poison -- basically useless. However, the DNS server caches the glue record giving the address for ns.google.com, too.

        • by g0at (135364)

          ... and then sends the server he issued that request to a response to that query that also has a glue record...

          I don't understand what you mean there; I'm presuming you meant to say "...and the server issues a response to that query that also has a glue record..." In any case, I don't understand why a properly-designed resolver would pay any heed to such a reply. If it's asking the google.com authority to resolve "lbixds", and it receives an answer, why would it also expect (and cache) an unsolicited answer for the domain "ns" (in your example)?

          -b

          • Re: (Score:3, Insightful)

            by Martin Blank (154261)

            Because that's how DNS generally treats requests that fall within the same domain (known as bailiwick protection). The question that you ask has been asked numerous times, and there's certainly good reason to review the logic behind Additional Resource Record handling, but tinkering with DNS is a very tricky thing. A proposed solution may fix the problem, but break other things on a much wider scale.

          • by blueg3 (192743) on Wednesday July 23, 2008 @11:13PM (#24314677)

            So, first part. An attacker is trying to poison a DNS cache. Generally, he'd be interested in poisoning a DNS server that's a caching server for a group of people, like one run by a regional ISP. An efficient way of getting a poisoned record into its cache is to issue a request to that server, and then immediately send a forged response to the server. So, for example, I issue my local nameserver a request for abcd.google.com. It doesn't have this cached (you don't say!), so it starts trying to resolve it. I quickly send it a forged response for abcd.google.com, and it believes me. Transaction IDs mean there's only a slim chance that it'll believe me, but it's still a chance, and I can issue a ton of requests to different fake addresses.

            The answer to the second part is tricky. Basically, say I want to resolve mail.google.com. I have nothing about google.com in my cache. So I contact the nameserver for .com. It isn't authoritative for the google.com domain, but it knows who is, and it tells me so. (Say that it's ns.google.com.) Knowing ns.google.com is the nameserver for that domain is useless without its IP address, so it tacks on a glue record that gives me the address of ns.google.com. Now I can contact ns.google.com to ask it the IP of mail.google.com.

            Originally, these records were just accepted. This is a huge security hole: I could request bob.domainiown.com, send a legitimate response (I control domainiown.com), and tack on a record telling them where ns.google.com is, even though I'm not authoritative for that. Now, such a record can only be attached to a request that is in the same domain, so I need to ask for bob.google.com to attach an ns.google.com record, which requires me to forge a response.

            There are a number of situations where these auxiliary records are necessary, so they can't just be ignored. However, they shouldn't be cached -- they should be used only for the one request that generates them.
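That last policy, using a glue record for the in-flight lookup but never writing it into the shared cache, can be sketched in a few lines of toy Python (grossly simplified; a real resolver tracks far more state than this):

```python
# Toy illustration of "use glue for this lookup only, never cache it".
cache = {}  # name -> address, holding only answers we explicitly asked for

def handle_response(pending_qname, response):
    """response is a dict like {'answer': (name, addr), 'glue': (name, addr)}."""
    name, addr = response["answer"]
    if name == pending_qname:   # cache only the answer to our own question
        cache[name] = addr
    # The glue is handed back to the caller for THIS lookup alone --
    # deliberately never written into the shared cache.
    return response.get("glue")

glue = handle_response(
    "lbixds.google.com",
    {"answer": ("lbixds.google.com", "203.0.113.9"),
     "glue": ("ns.google.com", "198.51.100.1")},
)
print("ns.google.com" in cache)  # False: the glue was used but not cached
```

Under this policy the attacker's forged glue can at worst mis-resolve the one throwaway name he queried, which is exactly the "basically useless" outcome.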

  • by neokushan (932374) on Wednesday July 23, 2008 @07:20PM (#24313061)

    There's a tool on the site below that apparently checks if the DNS you're currently using is vulnerable to such an attack. I checked my work DNS and my home DNS - both were fine. Apparently OpenDNS is secure as well, so there's probably nothing to worry about.

    http://www.doxpara.com/ [doxpara.com]

    • Re: (Score:2, Funny)

      by PRMan (959735)

      This link is in French. I'd rather read scripts. At least they're in Geek.

  • Use djbdns! (Score:2, Interesting)

    Even though it is not as popular as BIND, djbdns doesn't have this vulnerability. Remember, Dan J. Bernstein raised this issue back in 2002, and Dan Kaminsky and Paul Vixie looked into it and found these vulnerabilities.

  • Hey all, I see lots of comments re: OpenDNS as a good solution if your ISP sucks (as mine does) and has not patched.

    But I can't trust my DNS to correctly resolve OpenDNS.com or whatever.

    Anyone got dotted quads for me?

    • Re: (Score:3, Funny)

      Unfortunately it.slashdot.org has already been poisoned; you actually posted that request to an elaborate mock-up of the real slashdot, and the replies are coming from l33t hackers who are supplying you with false DNS servers which currently appear to work correctly.

      You'd best disconnect from the internet and burn your computer. It's the only way to be sure.

  • DNS Cache poisons you.

    Sorry, I had to.

  • by duplicate-nickname (87112) on Wednesday July 23, 2008 @09:26PM (#24314033) Homepage

    I used one of the tests below and found that my ISP's DNS servers were vulnerable. Now I am using the OpenDNS [opendns.com] servers on all of my clients instead:

    208.67.222.222
    208.67.220.220

    Their servers are not vulnerable, and you can create an account to enable things like antiphishing at the DNS level (a much better idea than a browser plug-in).

    If you find that your ISP's routers are vulnerable, your best bet is switch to OpenDNS...or just run your own caching server.

  • by bizitch (546406) on Wednesday July 23, 2008 @09:26PM (#24314039) Homepage

    In case anyone is dumb enough to use a Microsoft DNS server as an authoritative internet DNS server -

    MS has released two lovely patches -

    KB951746 and KB951748

    The problem with this fix is that it turns the DNS.EXE daemon into a UDP socket grubbing whore.

    After the patch, the DNS.EXE daemon grabs no less than 2500 freaking UDP sockets.

    This wreaks havoc on anything that - you know - needs UDP sockets on the same server.

    So far Zonealarm, Blackberry BES and Sphericall VOIP software all break with this "patch"

    Stay tuned for more fun to come ...

    • It sounds more like Zonealarm, BES and Sphericall are broken. Why would they try to listen on a UDP port that is in use? There are 65,000+ ports available, so why are they running into conflicts when only 2500 are in use? If the port is not in use, why are they not validating the data they are receiving through UDP?

      Not to mention that similar conflicts are starting to show up on patched BIND servers that are running other services which rely on UDP.

      • Re: (Score:3, Informative)

        by Phroggy (441)

        ZoneAlarm breaks in the sense that it thinks Microsoft's new DNS resolver is behaving like malware and should therefore not be trusted. ZoneAlarm has a ridiculous little slider with three security levels marked "High", "Medium" and "Low"; if you set it to "High" (as recommended), you can't resolve DNS.

        ZoneAlarm has released a patch to work around the problem. You can set your security setting to "Medium" while you download the update.

    • Sorry, but I'm pretty certain that this is needed. It needs to use random UDP ports for each reply. If it's just 2500 ports, that's bad. It should use around 64K ports.

      One chosen at random, for each reply it is sending.

      Or is it something I don't understand in the problem you're describing? Or is it you who doesn't understand the problem?

  • by paulbd (118132) on Wednesday July 23, 2008 @09:28PM (#24314045) Homepage
    so, there are a lot of us in the following position, no doubt: we run a router (linksys, whatever) that gets DNS from our ISP. let's assume that the ISP is patched. our local machines use the router for DNS. do we need to patch the router? are its DNS request services even accessible to the external network? can it be compromised in the same way that the ISP DNS could be? i have been wondering this ever since news of this problem broke, and i have still not seen a clear answer.
    • From my understanding, if you are using a DNS proxy on your router (which most SOHO routers seem to do now), then you might be vulnerable. I checked my 2wire (which has no option to turn off DNS proxy for DHCP clients) and they have not updated the firmware in forever. :/

      See my post below about switching to OpenDNS instead.

      • Re: (Score:3, Informative)

        by profplump (309017)

        Generally the proxy on a SOHO router runs as a forward-only cache (or even just a simple proxy) to your ISP's DNS. As such it's really your ISP's DNS that is or isn't vulnerable, because you aren't ever going to see records from anyone else, nor will anyone else know you're asking for them.

        The test listed above -- http://entropy.dns-oarc.net/test/ [dns-oarc.net] -- will let you know what the rest of the world sees as your DNS source address, and whether or not that source is sufficiently randomized.
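The property that test measures boils down to "do repeated queries leave from varied source ports?". A toy offline version of the check (the port lists are invented examples, and note this crude heuristic cannot flag sequential-but-distinct ports, which are also predictable):

```python
def looks_randomized(ports, min_distinct_ratio=0.9):
    """Crude heuristic: a well-randomized resolver should almost never
    reuse a source port within a short burst of queries."""
    if not ports:
        return False
    return len(set(ports)) / len(ports) >= min_distinct_ratio

fixed_port = [53] * 20                        # classic unpatched behavior
random_ish = [61234, 12999, 40211, 58888,
              7021, 33190, 49997, 25510]      # what a patched resolver looks like

print(looks_randomized(fixed_port))  # False
print(looks_randomized(random_ish))  # True
```

A real test like the dns-oarc one also scores the spread of the observed ports (standard deviation, bits of entropy), not just how many are distinct.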

      • Re: (Score:3, Informative)

        by Cato (8296)

        Mod parent up! Home broadband/WiFi routers may well be vulnerable unless you've specifically checked.

        Unless you've checked the internals of your home router and whether it's using the wrong sort of DNS proxy/cache, I recommend *everyone* with a home router switch their client computers to using OpenDNS, so it's Windows/Mac/Linux directly requesting DNS services from OpenDNS. (If you have DHCP for your clients at least you only need to change the router, but any laptops should also explicitly use OpenDNS

    • by blueg3 (192743) on Wednesday July 23, 2008 @11:19PM (#24314699)

      Most of these routers don't run caching, recursive resolvers -- they just forward the request to your ISP's DNS server. As such, they are immune.

  • NAT routers (Score:4, Informative)

    by smash (1351) on Wednesday July 23, 2008 @10:59PM (#24314627) Homepage Journal
    Be aware that if you patch your DNS server, and it sits behind a NAT that forwards requests, it's possible that you are still vulnerable. I would suggest using one of the available tools (e.g. on www.doxpara.com) to check your DNS, and if required/possible update your NAT firewall as well.

    Simply patching your DNS server may not be enough.

"Buy land. They've stopped making it." -- Mark Twain
