Massive, Coordinated Patch To the DNS Released 315
tkrabec alerts us to a CERT advisory announcing a massive, multi-vendor DNS patch released today. Early this year, researcher Dan Kaminsky discovered a basic flaw in the DNS that could allow attackers to easily compromise any name server; it also affects clients. Kaminsky has been working in secret with a large group of vendors on a coordinated patch. Eighty-one vendors are listed in the CERT advisory (DOC). Here is the executive overview (PDF) of the CERT advisory — text reproduced at the link above. There's a podcast interview with Dan Kaminsky too. His site has a DNS checker tool on the top page. "The issue is extremely serious, and all name servers should be patched as soon as possible. Updates are also being released for a variety of other platforms, since this is a problem with the DNS protocol itself, not a specific implementation. The good news is this is a really strange situation where the fix does not [immediately] reveal the vulnerability and reverse engineering isn't directly possible."
I would say that this is a pretty serious issue... (Score:2, Interesting)
Re:not that big of a problem (Score:5, Interesting)
And that's precisely why the first thing I do on a home router is to disable the caching nameserver and install DJBDNS on a Linux box instead. :)
Let the DJBing begin! (Score:4, Interesting)
Attention all DJB software fans, here's another chance to champion the superiority of DJB's software. Don't forget to include positive commentary on the licensing and patch status.
Thanks!
The real solution... (Score:5, Interesting)
...is to sign the root and deploy DNSSEC.
Unfortunately that's politically non-expedient. But now that this vulnerability is out there, maybe the political will can at last materialize.
The second-best solution is to deploy DNSSEC using DNSSEC Lookaside Validation [rfc-archive.org] (which means you get trust anchors from some other known site, not from the root zone). And that's available now.
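For the curious, the resolver-side setup for lookaside validation is small. Here's a hypothetical named.conf fragment (a sketch only, not from ISC's documentation; it assumes the dnssec-lookaside directive in recent BIND 9, and the registry's actual trust-anchor key material is deliberately elided, since you'd fetch that from dlv.isc.org yourself):

```
// Hypothetical named.conf fragment: validate via a lookaside
// registry instead of waiting for a signed root.
options {
    dnssec-enable yes;
    dnssec-validation yes;
    // For any name lacking a configured trust anchor, also check
    // the lookaside registry at dlv.isc.org.
    dnssec-lookaside "." trust-anchor dlv.isc.org.;
};

// You must also install the registry's own DNSKEY as a trusted key
// (key data elided -- obtain it out of band from the registry):
// trusted-keys { dlv.isc.org. 257 3 5 "..."; };
```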
The worst thing about DNSSEC is it's too damn complicated at present; there needs to be the equivalent of "one-click" zone signing. ISC (and others) are working on getting us closer to that.
The third-best solution is what's been done today. We just made it a lot harder to exploit the vulnerability--typically about 16000 times harder, depending on your configuration. There's a difference between "harder" and "impossible" though.
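To put rough numbers on "16000 times harder": before the patch, an off-path attacker only had to guess the 16-bit transaction ID; after source-port randomization, the (TXID, port) pair has to match. A back-of-the-envelope sketch (the port range below is an assumption for illustration; real resolvers and OSes vary, which is why the multiplier "depends on your configuration"):

```python
# Back-of-the-envelope: how much harder source-port randomization
# makes blind DNS cache poisoning.

TXID_SPACE = 2 ** 16  # 16-bit transaction ID: 65,536 values


def search_space(port_range):
    """Number of (TXID, source port) pairs a blind attacker must cover."""
    lo, hi = port_range
    return TXID_SPACE * (hi - lo + 1)


# Unpatched resolver querying from one fixed, known port:
fixed = search_space((53, 53))
# Patched resolver randomizing over, say, the 49152-65535 ephemeral range:
randomized = search_space((49152, 65535))

print(fixed)                  # 65,536 pairs
print(randomized // fixed)    # 16,384 -- roughly the "about 16000" figure
```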
Re:not that big of a problem (Score:3, Interesting)
Re:More independent verification needed (Score:2, Interesting)
>Did anyone else notice that today is Tuesday?
http://www.microsoft.com/technet/security/bulletin/ms08-037.mspx
The Death of BIND (Score:5, Interesting)
I help admin one of the larger DNS systems (90,000+ zones), and our initial testing of the patched BIND showed it having half the performance of prior versions. That prompted us to very quickly replace all BIND caching servers with something else. We had already replaced authoritative services with something else because of BIND's lackluster performance. 3+ hours to load zones on reboot is, quite frankly, ridiculous. We really had no choice. Microsoft said they were going to open their mouths on a certain date, and we had a massive time crunch. We can't be the only company that simply had to ditch BIND. And I can't say I'm sorry to see it go. I'm sure mister Vixie is a great guy, but his domain name service is, and always has been, complete garbage.
Re:The Death of BIND (Score:3, Interesting)
Oh, and despite the Ron Paulesque nature of the DJB fanbase, I'd still recommend the djbdns suite as the best free solution. I can think of a little ISP in Iowa that I set up with djbdns that has to be happy they don't have to do a thing right now.
Re:So give a layman explanation (Score:3, Interesting)
Do I trust it? I don't know. Tell me the facts. The sheer quantity of internet shenanigans going on of late makes me suspicious. This sounds like they're patching for a remote root exploit, but a protocol issue won't do that. DNS poisoning? What is it, then?
They're making us patch everything, and aren't telling us what it does. These are my systems, and you're going to tell me precisely what's going on before any of your code gets to run.
Re:More independent verification needed (Score:3, Interesting)
Re:The real solution... (Score:5, Interesting)
The political will has been shifting a lot lately. I've spoken directly to the gentleman in charge of managing the root zone, and he says that technically speaking it would be an overnight change. All the DNSKEYs and RRSIGs have been generated; he's waiting for the OK from above, which he says appears to be more likely with each passing day.
The largest DLV repository that validates that the DNSKEYs belong to who they say they belong to (think Verisign-style verification) is run by isc.org. At this writing, this zone has a grand total of twenty-five DLV records. Not exactly what I would call useful from a security standpoint, although it is a start.
I'm a part of a DNSSEC monitoring project (called SecSpider [ucla.edu]). We have a set of pollers distributed around the world from which we collect data about the current deployment. In conjunction with this, when we are able to collect an identical DNSKEY RRset, we generate DLV records and serve them from one of our delegations. For details on how to use it, check out our blog [ucla.edu]. This serves the same purpose as ISC's repo, but the data is collected in an orthogonal manner. We currently have DLV records for over 12000 zones, although we haven't directly verified the identity of any of them.
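For anyone wondering how a lookaside query actually maps names: a validator with no trust anchor covering a zone instead asks for a DLV record for that zone's name appended under the registry's suffix (RFC 4431 defines the DLV record). A rough sketch of just the name mapping (the helper below is illustrative, not code from any real validator; the registry names are the ones mentioned above):

```python
# Illustrative sketch of the DNSSEC Lookaside Validation name mapping:
# to validate "example.com" without a covering trust anchor, the
# resolver queries <zone>.<registry> for a DLV record instead.

def dlv_lookup_name(zone: str, registry: str) -> str:
    """Append the lookaside registry's suffix to the zone being validated."""
    return zone.rstrip(".") + "." + registry.rstrip(".") + "."


print(dlv_lookup_name("example.com", "dlv.isc.org"))
# -> example.com.dlv.isc.org.  (queried with QTYPE=DLV)
```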
This I can't disagree with. DNSSEC is over-engineered by academic crypto people. In fact, DNS in general is somewhat over-engineered, but at least it was successfully rolled out. ISC's efforts are valiant, and hopefully with a larger roll-out their tools will become the de facto standard.
Yes, the difference is that impossible isn't possible. You can't stop a determined hacker, not even with the best technology (think of social engineering attacks). Security is like an onion: as soon as you pull away one layer there are a dozen more to get in your way.
Re:The Death of BIND (Score:5, Interesting)
We've known about it for a while. Certain providers were contacted about it a while ago. Any other information is confidential, as I said, not my call. We were seeing QPS start out at 5,000ish then drop to 3,000ish during our testing. With the 30ish days we had to react, the path of least resistance was replacement. The only version we were given to play with was 9.5.0rc1, which was three weeks ago. Understand that all this was driven by Microsoft saying they were going to spill the beans on a certain date. So your "now" wasn't good enough to meet our deadline. I'm not a huge fan of replacing production services that are "working fine", and BIND was performing adequately for us before we got word of this vulnerability from one of our vendors. At this point, we are "BINDless" though, and the mountains we had to move will probably not be moving back.
Re:The Death of BIND (Score:5, Interesting)
Thank you for your reply.
The only providers who should have received the patches earlier than today were a small group of our support customers who've contracted for advance notice of security issues. They were all told that this was a preliminary patch only, and to watch for betas with better performance--and that the patches were highly confidential and covered by nondisclosure agreements.
Your installation profile doesn't seem to match any of theirs, and in any case I hope they would have let us know before they eliminated BIND from their networks. If you are not one of our support customers, then I'm very concerned that you had the patch in your hands as early as you say you did. Partly because it means you only got a partial picture of the situation, and partly because it means someone violated our trust--and it's important that we know who, so we can emphasize to them that this is not a joke.
Can you please tell me where it was that you got the patch you tested?
Re:The Death of BIND (Score:5, Interesting)
I am one of your support customers. Thing is, I'm not comfortable saying much else, because we were told the 10th was the magic day, and it leaked 2 days early. To be clear, the patched BIND worked the way it's supposed to, and I'm sure it's going to work fine for most customers. With the news that you have patched versions that address the issues with heavily taxed servers, probably almost all of them. We jumped the gun because that's what we do. : ) And I'm sorry I was critical of BIND. It is still the industry standard, and the original daemon that made it possible to get rid of enormous host files. There's a degree of comfort in running *the* DNS daemon, and we were doing it even though my organization is decidedly anti-open-source. That speaks volumes.
Re:More independent verification needed (Score:3, Interesting)
I should have specified senior sysadmins, to be sure -- there's no reason to be that picky about everyone on staff. OTOH, the whole point of having senior people is to have someone who can dig into and eventually solve any issue that comes up -- and that means understanding what's under the abstractions. Addressing your comparison, software isn't as reliable as hardware is these days; maybe in ten years this won't be the case, but right now having someone available who can dig into the operating system (or your app server, or any layer in between) is damned useful.
Re:Oh cool! (Score:1, Interesting)
Well, strangely enough... maybe its DNS entry has been compromised.
I'm accessing doxpara.com (66.240.226.139) from two separate computers (work and home), and it's giving me two quite different pages.
When I access it from work, I see a full blog-style page including the checker, but when I access it from home, I get a plain, text-only version of just the DNS Checker text and a standard HTML button, 'Check My DNS'. The rest of the website does not appear at all.
I've done a tracert to doxpara.com from both computers, and from home I get 'Request timed out' on every hop except the last one - 22. The tracert from work gives normal hop information on all hops.
Very very odd!
Re:My first response is to call Bullshit (Score:5, Interesting)
The exploit is trivial to figure out: if a caching DNS server has recursion enabled and sends its outgoing queries to the authoritative servers from a fixed (or predictable) UDP port, you can just send forged UDP responses alongside your own recursive DNS query.
The bogus response will be cached and will affect other users of the DNS server.
The attacker also needs to guess the transaction ID (a 16-bit value), but they can send multiple bogus UDP responses with different transaction IDs.
Worse, vulnerable implementations may generate transaction IDs in a predictable way, so the attacker can recover the current state of the PRNG by issuing a recursive DNS query for a zone under the attacker's control.
Such an attack cannot be performed from a typical home broadband connection, as most ISPs will not route packets originating from IP addresses not allocated by the ISP.
The attacker needs to be in control over the routing/egress filtering within his AS (e.g. an enterprise-level Internet service).
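To see why that 16-bit transaction ID is the weak link, here's a minimal sketch of the 12-byte DNS header layout from RFC 1035 (illustration of the packet format only, not exploit code; the flag values and record counts below are assumptions for a generic response):

```python
# Pack/unpack the 12-byte DNS header (RFC 1035, section 4.1.1) to show
# why the transaction ID is weak: it's a single 16-bit field, so an
# off-path attacker who knows the source port can cover the entire
# space with ~65,536 forged responses.
import struct


def pack_header(txid, flags=0x8180, qd=1, an=1, ns=0, ar=0):
    """Build a DNS header; 0x8180 = standard response, recursion available."""
    return struct.pack("!6H", txid, flags, qd, an, ns, ar)


def txid_of(packet: bytes) -> int:
    """The transaction ID is simply the first two bytes of the datagram."""
    return struct.unpack("!H", packet[:2])[0]


# A forger just iterates candidate IDs; one of the 65,536 will match
# whatever ID the resolver chose (0xBEEF here, as an example).
candidates = (pack_header(t) for t in range(2 ** 16))
assert any(txid_of(p) == 0xBEEF for p in candidates)
```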
Re:The Death of BIND (Score:4, Interesting)
Ah. Well, that's a relief, then, of sorts. :)
Thanks for the kind words about BIND and I'm sorry it didn't meet your needs this time. Please encourage your management to contact ISC and tell us about your choices and your experiences.
Re:not that big of a problem (Score:5, Interesting)
[This is Dan Kaminsky]
No, this attack is much worse than my home router exploits (which, admittedly, aren't getting fixed anytime soon). While it is indeed nice to have compromised firmware living somewhere on your LAN, being able to generically attack everyone using a given ISP is a much more valuable proposition -- especially when I don't need to worry about the pesky paranoid people changing their router passwords, or even using a router I haven't built a script to attack.
I'm being very circumspect about implications. August 6th will be an interesting day.
It's funny you mention the iptables auto-block. There's been a known attack here for years -- spoof the root servers attacking you, and...yeah.
That being said, we agree on the ultimate fix...run yum update, and be done.