Security | The Internet

New DoS Vulnerability In All Versions of BIND 9

Icemaann writes "ISC is reporting that a new, remotely exploitable vulnerability has been found in all versions of BIND 9. A specially crafted dynamic update packet will make BIND die with an assertion error. There is an exploit in the wild and there are no access control workarounds. Red Hat claims that the exploit does not affect BIND servers that do not allow dynamic updates, but the ISC post refutes that. This is a high-priority vulnerability and DNS operators will want to upgrade BIND to the latest patch level."
This discussion has been archived. No new comments can be posted.

  • Interesting (Score:2, Interesting)

    This is very interesting. I'm sure the people behind BIND will scramble to get things sorted out ASAP, but I wonder how long it will take other vendors (Apple, I'm looking at you!) to release a patch.

    I do have to wonder about exploits like this that seem incredibly serious at first, yet nothing much comes of them and they don't get exploited to the extent you might expect they would - this one reminds me of l0pht's famous claim that they could bring down the internet in 30 minutes. If this vul

    • Re: (Score:2, Informative)

      by d3matt ( 864260 )
      so... any BIND server would be down for a bit... anyone with a caching name server would still be able to surf.
    • by rs79 ( 71822 )

      " This is very interesting. I'm sure the people behind BIND will scramble to get things sorted out ASAP, but I wonder how long it will take other vendors (Apple, I'm looking at you!) to release a patch. "

      I'd be less concerned about that than I would be about how long it will take for people to do something about this on their nameservers. IMO the best update to BIND is DJBDNS but that's just me.

      Either way, there are FIVE HUNDRED THOUSAND nameservers out there. Some of them still run BIND 4.7.

  • Use Unbound or NSD (Score:5, Informative)

    by nwmcsween ( 1589419 ) on Tuesday July 28, 2009 @09:13PM (#28861589)
    I don't want to bash BIND, but it has had a fair number of security issues (well, a lot). Try Unbound or NSD instead: http://unbound.nlnetlabs.nl/ [nlnetlabs.nl] http://www.nlnetlabs.nl/projects/nsd/ [nlnetlabs.nl]
    • by medlefsen ( 995255 ) on Tuesday July 28, 2009 @09:54PM (#28861789)
      or djbdns. We use it where I work, and other than a slight adjustment to djb-land it has been wonderful. I know people appreciate how powerful BIND is, and maybe some people need that. I suspect, though, that most people just need their DNS servers to serve their DNS records, or to provide a caching DNS server for local lookups, and for that BIND seems bloated and insecure.
    • Re: (Score:3, Interesting)

      by abigor ( 540274 )

      PowerDns for the win. Plus it reads legacy BIND zone files.

  • Well.. (Score:2, Funny)

    Well DNS operators do appear to be in a bit of a bind don't they?

  • by mcrbids ( 148650 ) on Tuesday July 28, 2009 @09:17PM (#28861609) Journal

    There was once a day when a notice like this would kick off a flurry of migration plans, compiler scripting, compiling, and restarting servers in the dead of night (and bonuses to match!).

    But now?

    # yum -y update && shutdown -r now

    Sometimes I pine for the 'good old days'. A little. (ok, hardly at all)

    • You seem to be just taking all changes and rebooting. I do that all the time on my ubuntu laptops but I wouldn't manage my servers that way.

      Having said that, patching in NetBSD will require a compilation at my end. It would be nice if I could just update a package. The infrastructure is right there for it...
      • by secolactico ( 519805 ) on Tuesday July 28, 2009 @11:44PM (#28862369) Journal

        You seem to be just taking all changes and rebooting. I do that all the time on my ubuntu laptops but I wouldn't manage my servers that way.

        More so because some package managers (such as CentOS) tend to replace customized init.d files with the stock ones (renaming the ones you had). This is not really a big deal, but it sometimes breaks some services.

        • Re: (Score:3, Insightful)

          by FireFury03 ( 653718 )

          More so because some package managers (such as CentOS) tend to replace customized init.d files with the stock ones (renaming the ones you had). This is not really a big deal, but it sometimes breaks some services.

          If you are modifying packaged files that aren't marked as %config in the RPM spec then you're doing it wrong. 99% of the time you don't need to modify those files anyway, the other 1% of the time you really should be building a custom package and adding it to yum's exclude list.
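          For reference, the %config markup looks like this in a spec file (the file path here is illustrative):

```
%files
%config(noreplace) /etc/named.conf
```

          With %config(noreplace), rpm leaves your edited copy in place on upgrade and installs the packaged version alongside it as a .rpmnew file, instead of renaming yours away.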

    • I'm just hoping that CentOS pushes out the update before 10:00 PM MST today.

      Why?

      So I'll get my daily e-mail status update, telling me to do just that: run yum, and then restart (just bind) -- as opposed to seeing it tomorrow.

      As a footnote, it is generally a good thing to subscribe to whichever vendor's security-announce list that you use. It is really nice getting e-mail notifications of security-related package updates. CentOS has one, right here: http://lists.centos.org/mailman/listinfo/centos-announce [centos.org]

    • by lordkuri ( 514498 ) on Tuesday July 28, 2009 @09:23PM (#28861629)

      Why in the holy hell would you reboot a server to put a new install of BIND into service?

      • Because modern-day admins don't know how to restart a service?

        Oh, wait, these are fellow Linux "admins" we're talking about...
        • The strange thing is that he used shutdown -r now instead of this newfangled reboot the kids like to type. If you know what shutdown does, you should know when to not use it.

        • It gets restarted automatically. Check system.log.

        • by mcrbids ( 148650 )

          Because modern-day admins don't know how to restart a service?

          Oooh! Oooh! I think I can get this one! Either of these should work:

          # service named restart;
          # /etc/rc.d/init.d/named restart;

          But... if you have a properly designed network, why the **** wouldn't you reboot your name server? Given that there are minimally TWO of them registered for your domain name, that the DNS protocol is designed to seamlessly fail over in the event of a failure, rebooting the name server will have no discernible effect for any

          • Sure, you should have four authoritative nameservers. There is still no excuse for bouncing an entire box when a simple service restart is completely sufficient. Do you honestly issue a host restart every time you want [insert DNS daemon here] to kick over? If you do, you're completely retarded.

            Assuming you have failover for other services running on your network (as you probably should if you're working in an organization that gives two rips about service availability), do you restart entire servers eac
          • Because modern-day admins don't know how to restart a service?

            Oooh! Oooh! I think I can get this one! Either of these should work:

            # service named restart;
            # /etc/rc.d/init.d/named restart;

            Properly designed packages do "service foo condrestart" on upgrade anyway, so most of the time you don't need to manually restart anything.

            But... if you have a properly designed network, why the **** wouldn't you reboot your name server? Given that there are minimally TWO of them registered for your domain name, that the DNS protocol is designed to seamlessly fail over in the event of a failure, rebooting the name server will have no discernible effect for any end user,

            I'm afraid you're wrong. If one of your DNS servers disappears, stuff will continue to work *slowly*. If you have 2 NS records then each server will get 50% of the requests. That means that 50% will go to the dead server and have to wait for a timeout before trying the working one.

            There are other reasons for not rebooting - restarting bind takes approximately 2 second

      • by ZeekWatson ( 188017 ) on Tuesday July 28, 2009 @10:50PM (#28862089)

        If you're running a serious server you should always do a reboot test after installing any software. I've been burned many times by someone doing a "harmless" installation only to find out 6 months later a critical library was upgraded with an incompatible one (a recent example is expat 2.0) and the server doesn't boot like it should.

        Always reboot! Even with the super slow bios you get in servers nowadays it should only take 2 minutes to be back up and running.

        • So...with linux, you should always reboot upon applying any sort of application update. I weep for the future of our computing race.
          • Re: (Score:3, Interesting)

            by Vancorps ( 746090 )
            Why? Your DNS servers are clustered and load balanced, right? rrright? Those of us who need our infrastructure up don't think twice about rebooting, even during the day! A golden age we live in indeed, when I can just take a server out of the load balancer rotation, apply updates, perform a reboot test, and then put it back into rotation, repeating the steps for all servers in the cluster.
          • by dbIII ( 701233 )
            It's what nights, weekends or redundant systems are for. It's not as if you make major changes every week to things that might muck up the startup sequence.
        • by MrMr ( 219533 )
          I wouldn't consider a simple parser that replaces a critical library a 'harmless' installation.
          Please tell which repository managed to mess up that badly, so I can steer away from it in the future.
        • Re: (Score:2, Insightful)

          by sago007 ( 857444 )

          If you're running a serious server you should always do a reboot test after installing any software.

          You should obviously wait until outside working hours, in case it actually breaks something.

          If you apply an update over ssh you should test that you can create a new ssh connection before you disconnect the first one.
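          That check is easy to script; a minimal sketch (the function name and the overridable command argument are assumptions for illustration and testing, not from any standard tool):

```shell
verify_new_login() {
    # $1 = user@host you just patched over ssh.
    # $2 (optional) replaces the real ssh binary so the sketch can be
    # exercised without a network; it is an assumption for testing only.
    sshcmd=${2:-ssh}
    # Open a *new* session and run a no-op before dropping the old one.
    if $sshcmd -o BatchMode=yes -o ConnectTimeout=5 "$1" true; then
        echo "new login OK - safe to disconnect"
    else
        echo "new login FAILED - keep this session open" >&2
        return 1
    fi
}
```

          If the check fails, the original session is still open to roll the update back with.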

      • Re: (Score:3, Insightful)

        Because you may have a stack of other pending updates, particularly kernels, and this has been the first "gotta switch" update in quite some time for those core servers? Also because without the occasional reboot under scheduled maintenance, it's hard to be sure your machines will come up in a disaster. (I've had some gross screwups in init scripts and kernels cause that.)

    • And hope to hell you've got some sort of LOM for when your server doesn't come back up.
    • This isn't Windows...

      # yum -y update named\* && service named restart

      (Not sure if yum [or apt] would restart named and NOT willing to take the chance.)

    • # yum -y update && shutdown -r now

      and pray to FSM that it comes back up.

  • by Yvan256 ( 722131 ) on Tuesday July 28, 2009 @09:24PM (#28861631) Homepage Journal

    Good thing I'm using FreeDOS!

  • by bogaboga ( 793279 ) on Tuesday July 28, 2009 @09:25PM (#28861637)

    According to this document [ripe.net], BIND 9 has issues including being monolithic, having a "Bad Process Model", and being "Hard to Administer" and "Hard to Hack". That's not a good reputation to have.

    To some extent, these issues apply to everything Linux save for the last point. I am waiting for the time these points will not apply to Linux and its associated software.

    I must say that understanding BIND's configuration file was not that easy for me at first but after trying several times, I can say I am almost an expert. Things can be made simpler though. A text based interactive system could be of a lot of help. Tools like Webmin come in handy too though they require that a system be running initially.

    • by MrMr ( 219533 )
      BIND is not a typical Linux application. It was developed at Berkeley and shipped with BSD Unix, and later also with Windows.
      Not a very clever bit of trolling.
  • by Olmy's Jart ( 156233 ) on Tuesday July 28, 2009 @09:28PM (#28861653)

    From the advisory: "Receipt of a specially-crafted dynamic update message to a zone for which the server is the master may cause BIND 9 servers to exit. Testing indicates that the attack packet has to be formulated against a zone for which that machine is a master. Launching the attack against slave zones does not trigger the assert."...

    So an obvious workaround is to only expose your slave DNS servers and to not expose your master server to the Internet. That's part of "best common practices" isn't it? You have one master and multiple slaves and you protect that master. Come on, this is pretty simple stuff. Just simple secure DNS practices should mitigate this. Yeah, if you haven't done it that way to begin with, you've got a mess on your hands converting and it's easier to patch. But patch AND fix your configuration.
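    A hedged sketch of that split in named.conf terms (the zone name and addresses are illustrative; one block goes on each server):

```
// On the hidden master (internal address, not reachable from the Internet):
zone "example.com" {
    type master;
    file "zones/example.com";
    allow-transfer { 192.0.2.53; };   // the public slave
    also-notify { 192.0.2.53; };
};

// On the public-facing slave:
zone "example.com" {
    type slave;
    masters { 10.0.0.5; };            // the firewalled hidden master
    file "slaves/example.com";
};
```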

    • by Jurily ( 900488 )

      That's part of "best common practices" isn't it?

      Two posts up there is someone mentioning a reboot to solve this. Best practices seem like rocket science around here...

    Hmm, both of my public servers are 'masters', because the zones are synced via rsync over SSH from an internal server which actually has the master copy of the zones. However, as far as bind is concerned, the public-facing ones are masters.

      I could potentially trick it into thinking it's a slave zone but seems too fiddly/risky, so I'll just wait for it to be patched. Nagios will tell me if they stop working, anyway.

        Perhaps you should rethink that mistake and create a real "master" and make them "slaves". The system was designed this way for a reason. It baffles me why people do things this way.

        • by raddan ( 519638 ) * on Tuesday July 28, 2009 @11:01PM (#28862151)
          Because lots of people don't want intruders being able to affect the actual zone data in case an outward-facing DNS server gets compromised. Using SSH to transfer zone data is much easier and more secure than BIND's own zone transfer mechanisms (e.g., you can automate and schedule them), and you don't have to worry about zone transfers through firewalls. Troubleshooting all the weird crap that can happen between different DNS daemons all supposedly doing regular AXFRs is a real pain in the ass. SSH makes life easier.

          If having a DNS machine on the Internet that thinks it is a master really is a mistake, well then, BIND9 is a piece of shit. This is the most straightforward thing a DNS daemon should be asked to do.

          Nowhere in BIND's manual does it say people have to use BIND in a master/slave setup.
          • Copying zone files over ssh means you then have to rndc reload/reconfig every time you change a single A record.

            With a "normal" hidden master + slaves setup, at least you can send Notifies which will cause the slaves to query the master and update the zone without a reload. Also, this is the only sane way to provide secondary DNS for a trusted third party.

            If you have a lot of zones, it can take a while to reload bind. If you only have a handful of zones, and you don't do secondary DNS, I'm sure reloading

            • by kju ( 327 ) *

              There is no need to reload all zones. You can easily detect which zonefiles have changed since the last reload and do "rndc reload <zone>".
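              A rough sketch of that, assuming one zone per file with the file named after the zone (the paths and the overridable rndc argument are assumptions for illustration and testing):

```shell
reload_changed_zones() {
    # $1 = directory of zone files, $2 = timestamp file from the last run,
    # $3 (optional) replaces rndc so the sketch can be tested offline.
    zonedir=$1; stamp=$2; rndc=${3:-rndc}
    for f in "$zonedir"/*; do
        [ -f "$f" ] || continue
        # Reload a zone only if its file changed since the last run.
        if [ ! -e "$stamp" ] || [ "$f" -nt "$stamp" ]; then
            $rndc reload "$(basename "$f")"
        fi
    done
    touch "$stamp"
}
```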

            • Re: (Score:3, Informative)

              As kju responded, you can reload particular zones if you want. The logs seem to suggest that bind itself only reloads the zones whose files have changed (i.e. the mtime is newer than the last time the zone was loaded). I only get messages about it loading every zone if I actually restart bind (stop and start); telling it to reload, I only get messages about zones that have actually changed.

              I haven't noticed any performance hit from doing a simple reload, but I only have 120 zones.

              If we were supplying secon

          • Re: (Score:3, Informative)

            by Fastolfe ( 1470 )

            So I'm responding not because I disagree with your conclusions, but I disagree with the logic you're using to justify them:

            Because lots of people don't want intruders being able to affect the actual zone data in case an outward-facing DNS server gets compromised. ...
            If having a DNS machine on the Internet that thinks it is a master really is a mistake, when then, BIND9 is a piece of shit. This is the most straightforward thing a DNS daemon should be asked to do.

            You start off with a reasonable statement (th

        • I do it that way mostly because I didn't previously consider "type master" to be a potential vulnerability (they don't have dynamic DNS or anything fancy enabled). Maybe it is time I looked into djbdns, now that it's no longer a pain in the ass to install.

          As for not using the built-in zone transfer method, that's partly because I don't particularly like it, but mostly because I don't see any reason to allow access to our internal hosts from our DMZ unless absolutely necessary -- and this is not a case where

          • by Fastolfe ( 1470 )

            DNS queries are not encrypted, so if you believe the contents of your DNS zones should be secret, you'd better hope nobody queries them. You may be interested in TSIG, which can authenticate your secondaries to your master. If you'd prefer to store and manage your zone files "offline", pushing them out to one or more masters through SSH or something might be the right thing to do, but if you already have an internal master, and need to update some public-facing slaves/shadow masters, there's no reason to

            • Well the SSH is only used a convenient transport mechanism, with a nice side effect of some kind of authentication that the host it thinks its transferring the data to really is that host. But all the transfers happen through our internal network anyway, so it's not really important. The reason it's preferred is because the internal server connects to the DMZ server, rather than the other way around. We try to avoid letting DMZ hosts contact internal servers whenever possible, under the assumption that the

    • This [rfxn.com] might help too.

    • by sa3 ( 628661 )

      You may hide your master DNS servers but your slaves are probably still master for "localhost".

  • by syousef ( 465911 ) on Tuesday July 28, 2009 @09:30PM (#28861669) Journal

    ...to Windows! DOS is just so 80's and 90's it's not funny.

    (Suggested mod: +1 funny)

  • djb (Score:5, Funny)

    by dickens ( 31040 ) on Tuesday July 28, 2009 @09:40PM (#28861721) Homepage

    Somewhere I think djb [cr.yp.to] is managing to both smile and raise his eyebrows simultaneously.

  • This is a reason why I want to be able to do LDAP based zone updates.

  • by Bilbo ( 7015 ) on Tuesday July 28, 2009 @10:02PM (#28861831) Homepage
    It's unlikely that, if you're running a DNS server inside your private network, someone on the outside is going to be able to hit it. But then, like all other vulnerabilities, you combine this one with a couple of other attacks (such as a non-privileged login), and all of a sudden you've got something really dangerous. :-(
    • Re: (Score:3, Insightful)

      by Olmy's Jart ( 156233 )

      A server behind a firewall does not imply a server on a private network. You can have firewalls in front of a DMZ on a public address providing services. Firewalls are used for much more than merely "private networks". Those are two orthogonal issues.

      OTOH... A master on a private network providing zone feeds to slaves on various other networks (firewalled or not) on public addresses would be a very good idea.

      • Please remember that most "private" networks aren't. They have laptop or VPN access to potentially compromised hosts, which may insert attacks from behind your typical firewalls. I've had considerable difficulty explaining this to management who have, effectively, been lied to for years by their own staff who refuse to accept responsibility for the existing insecure mess, and who are uninterested in the unglamorous and unpopular work of fixing it.

  • OMG... (Score:5, Interesting)

    by Garion911 ( 10618 ) on Tuesday July 28, 2009 @10:56PM (#28862119) Homepage

    I reported a bug *very* similar to this back in October, and only now it's coming to light? WTF? I submitted it back in January and it was rejected. Ah well. Here's my page on it: http://garion.tzo.com/resume/page2/bind.html [tzo.com]

  • by kju ( 327 ) * on Wednesday July 29, 2009 @12:02AM (#28862459)

    For a quick "fix":

    iptables -A INPUT -p udp --dport 53 -m u32 --u32 '30>>27&0xF=5' -j DROP

    This will block all DNS UPDATE requests (the u32 match pulls the 4-bit opcode out of the DNS header at offset 30; opcode 5 is UPDATE).

    • That blocks all updates, including legitimate ones. If you're running a server that needs to process non-malicious updates, your best bet is to run a hidden-master/public-slave combination of servers (the attack doesn't work on slave zones).
  • Does anyone know if CentOS 4 will have an update for BIND to ver 9.4.3-P3, 9.5.1-P3 or 9.6.1-P1?
  • Poor coding (Score:3, Interesting)

    by julesh ( 229690 ) on Wednesday July 29, 2009 @03:34AM (#28863463)

    Why on earth is BIND shipping with assertions that cause the entire server to exit when they fail? They should just cause processing of the current request to exit.

  • Time to let go of that ancient rule that ports under 1024 are root-only.

  • And this is why asserts should *never* go into production builds of any project. It's fine to have asserts in your debug build, but ALWAYS deal with the unexpected case immediately after your assert (which should be compiled out in release mode).

    If you have no way of throwing an error and handling it gracefully back up your call stack (no, you don't always need exceptions for this), then you've done a shit job!

  • ... about a security flaw in MS DOS nowadays?
