
Attack Code Published For DNS Vulnerability 205

get_Rootin writes "That didn't take long. ZDNet is reporting that HD Moore has released exploit code for Dan Kaminsky's DNS cache poisoning vulnerability into the point-and-click Metasploit attack tool. From the article: 'This exploit caches a single malicious host entry into the target nameserver. By causing the target nameserver to query for random hostnames at the target domain, the attacker can spoof a response to the target server including an answer for the query, an authority server record, and an additional record for that server, causing the target nameserver to insert the additional record into the cache.' Here's our previous Slashdot coverage."
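The mechanics described in the summary come down to winning a race: the forged response must arrive before the real one and match the resolver's outstanding query. A minimal stdlib-Python sketch of just the fixed DNS header (RFC 1035) shows the fields at stake; the record bodies and names are omitted, and this is an illustration of the packet layout, not the Metasploit module:

```python
import struct

def build_dns_header(txid, qr, ancount, nscount, arcount, qdcount=1):
    """Pack the fixed 12-byte DNS header (RFC 1035, section 4.1.1).
    flags: a typical recursive response (QR+RD+RA) or a plain query (RD)."""
    flags = 0x8180 if qr else 0x0100
    return struct.pack("!HHHHHH", txid, flags, qdcount, ancount, nscount, arcount)

# The spoofed response described above carries one answer, one authority
# (NS) record, and one additional (glue) record:
forged = build_dns_header(txid=0x1A2B, qr=True, ancount=1, nscount=1, arcount=1)

# To be accepted, the forged packet's 16-bit transaction ID (and the UDP
# destination port) must match the resolver's in-flight query -- that is
# the race the exploit retries by triggering queries for random hostnames.
txid, flags, qd, an, ns, ar = struct.unpack("!HHHHHH", forged)
```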
  • by Anonymous Coward on Wednesday July 23, 2008 @08:01PM (#24312905)
    My understanding is that it's DNS resolvers, not servers, which need to be patched.

    Now, if your windoze box doesn't do its own resolution but forwards all requests to a caching resolver (e.g. at your ISP), then it's your ISP's resolver that needs to be patched. Or if they run DJB's dnscache, it's not vulnerable, because DJB recognised the problem years ago.

  • Re:Here we go... (Score:3, Insightful)

    by Darkness404 ( 1287218 ) on Wednesday July 23, 2008 @08:13PM (#24313005)

    This has to be the worst time ever to be a web surfer.

    Ummm... No. Today I can easily surf the 'net with just about every ad blocked, have Flash blocked when I want it blocked but re-enable it for, say, YouTube, all at the click of a mouse. I can use an OS and browser that are free and open source. I can surf 100% anonymously, easily. I can download every video game I played as a child in less than an hour. And I can hear just about any song I would ever want to hear in less than a minute.

    Sure, some things suck today (BitTorrent throttling, the ISPs' "no Usenet" crusades), but all in all it is a better time than the very early 2000s or the late '90s.

  • Um... even if you run your own caching server, if your ISP runs a "transparent" web proxy, it will do its own DNS. You may in fact run DJB's dnscache, which is immune to this bug, but if your ISP runs an unpatched DNS server you'll still be scrod despite running your own caching server.

    Slick huh?

    They need to take the dns lookup out of the web proxies.

  • by Anonymous Coward on Wednesday July 23, 2008 @08:51PM (#24313277)

    Congratulations, you confused the mods. Bailiwick checking was added to all DNS resolvers in response to glue poisoning and made cache poisoning through spoofed glue records very difficult. The current problem is that the typical filter rules are insufficient for stopping a glue poisoning attack which appears to come from the authoritative server: Kaminsky found a way around the glue poisoning countermeasure. This means that a very dangerous kind of attack which was thought to be defeated is now possible again.
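    The bailiwick check the parent describes can be sketched in a few lines of Python. This is an illustrative stand-in, not how BIND or any real resolver implements it: a glue record is accepted only if its owner name is the apex of, or falls underneath, the zone the resolver is actually asking about.

```python
def in_bailiwick(record_name: str, zone: str) -> bool:
    """True if record_name is the zone apex or sits underneath it.
    Compares whole labels case-insensitively, so "evilexample.com"
    is NOT within the bailiwick of "example.com"."""
    record = record_name.lower().rstrip(".").split(".")
    apex = zone.lower().rstrip(".").split(".")
    return record[-len(apex):] == apex

def accept_glue(glue_name: str, zone: str) -> bool:
    # A bailiwick-checking resolver caches glue only when its owner
    # name lies inside the zone this query delegation concerns.
    return in_bailiwick(glue_name, zone)
```

    Kaminsky's trick doesn't break this check; it supplies glue that passes it (it names the target domain's own servers), which is why the filter alone no longer suffices.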

  • Re:Here we go... (Score:5, Insightful)

    by Anonymous Coward on Wednesday July 23, 2008 @09:07PM (#24313395)

    Yes, there was. Before there was bailiwick filtering, spoofing was even easier. Back in the day, DNS servers would even accept "responses" with bogus data out of the blue. We've come a long way, and we won't stop here. A patch of bad weather is ahead, but the sky is not falling.

  • by szap ( 201293 ) on Wednesday July 23, 2008 @09:10PM (#24313419)

    No, but it's a "feature" that makes the attack possible. Turn it off, or make it stricter, and the attack falls apart.

  • by nweaver ( 113078 ) on Wednesday July 23, 2008 @09:17PM (#24313495) Homepage

    Read the quotation again...

    Emin Gun Sirer: "Glue should be trusted for only the lookup in question [emphasis added], for only the duration of that lookup."

    That is stronger than bailiwick checking: glue (additional) records should NEVER be cached. Period.
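    The policy in that quotation can be sketched with a toy resolver cache, purely as an illustration of the idea (the names and the fetch callback are made up): glue lives only in a scratch structure scoped to one lookup and never enters the shared cache.

```python
class Resolver:
    """Toy cache illustrating the quoted policy: glue (additional)
    records never enter the shared cache; they live in a scratch dict
    that is discarded when the lookup finishes."""

    def __init__(self):
        self.shared_cache = {}  # survives across lookups

    def resolve(self, name, fetch):
        lookup_glue = {}  # glue scoped to this single lookup
        answer, glue = fetch(name)
        lookup_glue.update(glue)          # usable while chasing referrals...
        self.shared_cache[name] = answer  # ...but only the answer is kept
        return answer

def fake_fetch(name):
    # Pretend response: an answer plus a glue record an attacker slipped in.
    return "192.0.2.1", {"ns1.attacker.example": "203.0.113.66"}

r = Resolver()
addr = r.resolve("www.example.com", fake_fetch)
```

    The attacker's glue is visible during the lookup but can never poison a later, unrelated query.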

  • Re:Here we go... (Score:4, Insightful)

    by Anonymous Coward on Wednesday July 23, 2008 @09:22PM (#24313529)
    This attack vector has been around for /years/. Just look at the list of affected systems. Some friends and I stumbled on this a few years ago (yes, including the fact that you can insert yourself as an authoritative nameserver for the domain), but we figured it was so obvious that it didn't need to be announced; besides, phishing wasn't really as popular back then. But now that the cat is out of the bag, as it were, you definitely want to patch your machines if they haven't been already. This is mostly dangerous to people who use the nameservers of large ISPs (which, admittedly, is a large portion of the internet userbase).

    I guess this is just a wake-up call: if you find such a large flaw in network systems that could affect millions, if not billions, of users, you should try to get the word out and get the products fixed beforehand.
  • by blincoln ( 592401 ) on Wednesday July 23, 2008 @09:27PM (#24313565) Homepage Journal

    They need to take the dns lookup out of the web proxies.

    The problem with doing that is that it would then be impossible (at least with current DNS software, as far as I know) to give clients on an internal network limited internet access without also allowing them to perform DNS tunneling (and thereby upgrade their internet access to "full").

    Once someone (anyone?) releases a DNS package that allows firewall-style rules (e.g. "clients on this range of IPs may only resolve subdomains of the following domains...", "clients may only look up X distinct subdomains each of Y domains every Z hours"), the picture would probably change.
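    The first kind of rule the parent wishes for is easy to sketch with the stdlib `ipaddress` module. This is a hypothetical toy, not an existing DNS package; the rate-limit style of rule ("X distinct subdomains per Y hours") is omitted for brevity.

```python
import ipaddress

class DnsAcl:
    """Toy firewall-style resolver policy: a client network may only
    resolve names at or under its allowed domain suffixes."""

    def __init__(self, rules):
        # rules: iterable of (cidr_string, [allowed domain suffixes])
        self.rules = [(ipaddress.ip_network(c), tuple(d)) for c, d in rules]

    def allowed(self, client_ip, qname):
        ip = ipaddress.ip_address(client_ip)
        name = qname.lower().rstrip(".")
        for net, domains in self.rules:
            if ip in net:
                return any(name == d or name.endswith("." + d) for d in domains)
        return False  # default deny, like a firewall

acl = DnsAcl([("10.0.0.0/24", ["example.com", "pool.ntp.org"])])
```

    With per-client allow-lists like this, a client that can only resolve a handful of domains can no longer tunnel arbitrary data through lookups of attacker-controlled names.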

  • by DragonHawk ( 21256 ) on Wednesday July 23, 2008 @09:52PM (#24313771) Homepage Journal

    Oh noes, the world is going to crash down around us! Just saying, why overreact?

    A problem you ignore will have full impact. A problem you prepare for and take counter-measures against is prevented from having a serious impact. That's the whole point.

    We spent great effort fixing Y2K bugs and thus prevented them from causing serious damage. Therefore, you conclude, we should not have fixed the Y2K bugs.

    I guess, since seat belts have saved lives, we should not wear them.

    Get it now? :-)

  • Re:Use djbdns! (Score:1, Insightful)

    by Anonymous Coward on Wednesday July 23, 2008 @09:54PM (#24313787)

    Please understand that there is a downside to massively randomizing your source ports -- you use a lot more of them! File descriptor consumption (and eventual exhaustion) is an issue that a lot of folks are running into with the patched versions of BIND. People running djbdns no doubt ran into similar problems years ago and had to deal with them too.

    So, before you jump all over ISC about dropping the ball, understand that a capacity/security tradeoff was made based on an assumption that it was difficult to exploit this known weakness. Kaminsky discovered a not-so-difficult way to exploit it, and that's why the landscape changed.
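    The file-descriptor cost is easy to see concretely: each in-flight query on a randomized source port is a separately bound UDP socket, i.e. one held descriptor. A small stdlib sketch (binding to loopback with port 0, so the OS picks the ephemeral ports; no packets are sent):

```python
import socket

def open_query_sockets(n):
    """Each outstanding query with a randomized source port holds one
    bound UDP socket -- one file descriptor -- until its reply arrives.
    That is where the fd-exhaustion pressure on patched servers comes from."""
    socks = []
    for _ in range(n):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("127.0.0.1", 0))  # port 0: the OS picks a free ephemeral port
        socks.append(s)
    return socks

socks = open_query_sockets(50)
ports = [s.getsockname()[1] for s in socks]  # all distinct while held open
for s in socks:
    s.close()
```

    Fifty concurrent queries means fifty sockets and fifty distinct ports held at once; a busy resolver multiplies that by orders of magnitude, which is the capacity side of the tradeoff described above.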

    If DNSSEC were implemented, this would all be a non-issue. After the immediate crisis subsides, expect a lot of recriminations and finger-pointing with respect to why DNSSEC is still undeployed after all this time.

  • by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Wednesday July 23, 2008 @11:12PM (#24314331)

    Yes, DJB "recognized" the problem by lobotomizing DNS, and he refuses to consider what would solve the problem once and for all: DNSSEC. Right...

  • by Martin Blank ( 154261 ) on Thursday July 24, 2008 @12:11AM (#24314669) Homepage Journal

    Because that's how DNS generally treats requests that fall within the same domain (known as bailiwick protection). The question that you ask has been asked numerous times, and there's certainly good reason to review the logic behind Additional Resource Record handling, but tinkering with DNS is a very tricky thing. A proposed solution may fix the problem, but break other things on a much wider scale.

  • Re:Use djbdns! (Score:3, Insightful)

    by PlusFiveTroll ( 754249 ) on Thursday July 24, 2008 @03:41AM (#24315595) Homepage

    Cryptography is not magic.

    While I don't expect the AC to read this, this article lays out why we are not going to see DNSSEC for some time: http://www.internetnews.com/security/article.php/3758566/Is+DNSSEC+the+Answer+to+Internet+Security.htm [internetnews.com]

    http://en.wikipedia.org/wiki/DNSSEC [wikipedia.org] is also surprisingly good, with a lot of easy-to-read information, and includes why the current DNSSEC specs may open up more security risks.

    Example 1: "Note that someone could deliberately or inadvertently cause a degradation of service by sending a large number of queries for uncached RRs, for example, traversing the NSEC RR chain for a large TLD."

    Example 2: "DNSSEC forces the exposure of information that by normal DNS best practice is kept private." NSEC3, drafted in March 2008, may correct this.

    Also, most people don't realize that DNSSEC is not an end-to-end security mechanism. It only protects DNS data between an authoritative name server and a caching name server. Currently, no operating system resolver libraries that I know of verify that the caching server is providing legitimate results for DNSSEC-protected domains. Until your OS or applications provide DNSSEC support, running your own DNSSEC-enabled cache is currently the only way to protect your DNSSEC queries from being forged.
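    The chain-of-trust idea behind that validation can be sketched with a toy model. To keep it self-contained this uses HMAC as a stand-in for real public-key signatures, and the keys and names are invented; actual DNSSEC uses DNSKEY, DS, and RRSIG records (RFC 4033-4035). The point is structural: whichever box runs `validate` with the trust anchor is the box whose answers you can trust, and a stub that skips this step is trusting its upstream cache blindly.

```python
import hmac
import hashlib

def sign(key: bytes, data: bytes) -> bytes:
    # Stand-in for a public-key signature; real DNSSEC uses RRSIGs.
    return hmac.new(key, data, hashlib.sha256).digest()

# Toy chain of trust: the root key vouches for the TLD key, which
# vouches for the zone key, which signs the record itself.
root_key = b"root-trust-anchor"
tld_key = b"com-zone-key"
zone_key = b"example-zone-key"

chain = [
    (tld_key,  sign(root_key, tld_key)),   # DS-like: root vouches for .com
    (zone_key, sign(tld_key, zone_key)),   # .com vouches for example.com
]
record = b"www.example.com A 192.0.2.1"
rrsig = sign(zone_key, record)

def validate(chain, record, rrsig, trust_anchor):
    """Walk from the configured trust anchor down the chain, then check
    the record's signature. A validating resolver (or stub) must do this
    itself; accepting an upstream cache's word for it is exactly the
    gap described above."""
    key = trust_anchor
    for child_key, child_sig in chain:
        if not hmac.compare_digest(sign(key, child_key), child_sig):
            return False
        key = child_key
    return hmac.compare_digest(sign(key, record), rrsig)
```

    A record tampered with anywhere between the signer and the validator fails the final check, which is why a forged cache entry for a signed zone would not validate.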

  • by geminidomino ( 614729 ) * on Thursday July 24, 2008 @12:01PM (#24319853) Journal

    Doesn't OpenDNS fail to properly return NXDOMAIN results, or is that another service? (I've not slept in >24 hours, so I could be mistaken.)
