
Experts Tell Feds To Sign the DNS Root ASAP 147

Posted by kdawson
from the digital-john-hancock dept.
alphadogg sends along news that the US National Telecommunications and Information Administration has gotten plenty of feedback on its call for comments on securing the root zone using DNSSEC. The comment period closed yesterday, and more than 30 network and security experts urged the NTIA to implement DNSSEC stat. There were a couple of dissenting voices and a couple of trolls.
  • by geekmux (1040042) on Tuesday November 25, 2008 @02:15PM (#25889749)

    (Satan unpacking his sno-cone machine)

    "'Bout damn time I got to use this thing..."

  • by jonaskoelker (922170) <jonaskoelker&gnu,org> on Tuesday November 25, 2008 @02:21PM (#25889843) Homepage

    Is DNSSEC ready for prime time?

    Last I checked (admittedly more than a year ago), they were still working on a good way of refreshing the key; there were also other problems with DNSSEC that made it not quite ready for prime time.

    Does anyone know if the people involved have all said "Yep, it's done now, go use it"?

    It'd suck to be in the IPv4 situation: there's this thing we want to migrate to as soon as everyone else does as well.

    It's easy to say "let's try out some shit and drop it if it doesn't work" when very few people grow dependent on your work; when the whole world does so, it's a bit more difficult.

    • by WiglyWorm (1139035) on Tuesday November 25, 2008 @02:39PM (#25890093) Homepage
      Well, the U.S. owns the internet, right? We should just pass a law for IPv6.
      • Re: (Score:2, Insightful)

        Huh? Was that post tongue in cheek, and the mods are just crazy, or am I missing something?
        • My post was very tongue in cheek. Not sure why I'm +5 informative.... +funny, maybe...
        • by neoform (551705)

          The US owns the network within their borders.

          Every country owns their own portion of the internet.

          Saying that the US owns the internet is like claiming the US owns Earth; the US controls the DNS servers, much the way the US has the most power in the world.. but that doesn't change that they only control the part of the internet that's located on US soil.

      • Actually, I think that would work, if those not converting are punished.

        I think the rest of the world will follow suit. There are enough interesting pages on US-based servers that not offering IPv6 transit is a business non-starter.

        Would it be a good idea? "I'm from the government and I'm here to help you." I'm not sure what the outcome would be, and outcomes are ultimately what we should judge governmental actions by.

    • by arotenbe (1203922) on Tuesday November 25, 2008 @03:08PM (#25890517) Journal

      It's easy to say "let's try out some shit and drop it if it doesn't work" when very few people grow dependent on your work; when the whole world does so, it's a bit more difficult.

      In fact, that was what got us into this mess in the first place. We can't replace any part of the internet without breaking everything, so we just keep tacking on new standards and quick-fix patches. Someone needs to redesign the whole thing with a generalized, expandable security model. But then we would have two internets...

      "I think the problem here may be more of a question of getting rid of the bad internets and keeping the good internets."

    • Re: (Score:3, Informative)

      by Cyberax (705495)

      NSEC3 (http://tools.ietf.org/html/rfc5155) solves most of the initial DNSSEC problems. But it's not yet supported by production versions of major DNS servers.

      • by afidel (530433) on Tuesday November 25, 2008 @04:22PM (#25891661)
        That RFC makes my head hurt. After a few readings I can usually grok most RFCs, but that one is particularly dense with acronyms and references to other DNSSEC concepts not included in the RFC. Also, I don't see any provision for multiple signers; my ideal system has each of the ROOT servers having their own key and each zone being signed with each of the keys from the ROOTS they trust. That way if some government or corporation does something you disagree with, you can choose to revoke their key as either a signer or a receiver.
    • Re: (Score:3, Interesting)

      by rs79 (71822)

      "Is DNSSEC ready for prime time?"

      Nope.

      I note with relish that Vint Cerf and Joe Baptista, who couldn't be further apart on DNS, agree that something other than DNSSEC should be used. This is probably the only thing they agree on. And they're quite right.

  • by nweaver (113078) on Tuesday November 25, 2008 @02:25PM (#25889895) Homepage

    With a conventional PKI for your SSL certificates, Verisign or the other CA gets a cut for EVERY server.

    With DNSSEC, the "CA" only gets a cut per domain. Thus DNSSEC can be used to offer key distribution with far less cost, once the root and the TLDs start signing records.

    (Not an original argument, but I agree with it.)

    • Congratulations! You've just explained why DNSSEC will never be implemented on the root servers.

    • by TheLink (130905) on Tuesday November 25, 2008 @02:55PM (#25890305) Journal
      Uh it's just a way for CAs to make money _twice_ (or more times).

      You'll still need CAs.

      How does DNSSEC stop the browser from giving Joe User a warning box that the https cert is not signed by a recognized CA?

      That's the only real reason why you pay CAs to sign your certs - to stop Joe User from being bothered by it.

      That CA signing bullshit is little to do with security. Because the last I checked:

      1) Nobody really goes through all the CAs bundled with their browser and says: "Yes, I trust this CA; no, I don't, so I'll delete this". There are tons. Do you know who they are and how trustworthy they really are? Do you really care? No, all you care about is that you don't get that warning.
      2) Verisign has proven that they voluntarily do dubious stuff and have even misissued Microsoft certs (go look under Untrusted Publishers in IE's list of certs ;) ), and yet people _will_ leave the Verisign root certs in - because all you care about is that you don't get that warning.
      3) Do browser makers actually remove CAs who don't comply to some standard? Do they even have some meaningful standard in terms of security?
      4) AFAIK browsers don't warn you if a valid cert changes to a different valid cert (even if it is signed by a different CA).

      As you can see, they're not really safer than self-signed certs. To me, browsers should do what SSH does and warn you if the cert has changed (whether it's self-signed or CA-signed).

      In that light, forgive me if I'm not convinced that DNSSEC is really going to make things more secure :).

      It'll just be more of the same. One more way for Verisign and gang to make money for making people feel safe.
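The SSH-style behavior suggested above (pin a host's key on first contact, warn if it later changes) can be sketched in a few lines. This is a minimal trust-on-first-use illustration with an in-memory store and made-up host names and certificate bytes, not how any real browser persists pins:

```python
import hashlib

# Trust-on-first-use pin store: host -> certificate fingerprint.
pins = {}

def check_cert(host, cert_der):
    """Pin the cert fingerprint on first contact; warn if it later changes."""
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in pins:
        pins[host] = fp
        return "first-use: pinned"
    if pins[host] != fp:
        # Warn even if the new cert is CA-signed, just as SSH does for host keys.
        return "WARNING: certificate changed"
    return "ok"

print(check_cert("example.com", b"cert-v1"))  # first-use: pinned
print(check_cert("example.com", b"cert-v1"))  # ok
print(check_cert("example.com", b"cert-v2"))  # WARNING: certificate changed
```

Whether the cert is self-signed or CA-signed never enters into it; only a change from the previously seen key triggers the warning.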
      • by jonaskoelker (922170) <jonaskoelker&gnu,org> on Tuesday November 25, 2008 @04:33PM (#25891827) Homepage

        You'll still need CAs.

        How does DNSSEC stop the browser from giving Joe User a warning box that the https cert is not signed by a recognized CA?

        That's the only real reason why you pay CAs to sign your certs - to stop Joe User from being bothered by it.

        You don't need the CAs, once applications are rewritten to grab keys from the DNS instead.

        Using DNS as a PKI means that my DNS provider is now my CA. If I grab jonaskoelker.free-dns.com and I start out with only a trusted root key, I can learn free-dns's key and trust them. I can then securely send them my key, which they sign for free, along with my signed records.

        Then, when you go to jonas.free-dns.com with a modified firefox, that firefox will trust the DNS key for jonas.free-dns.com as an SSL key for jonas.free-dns.com as well, and you'll trust that the guy whose server you're talking to is the same guy as the one who got the name in the first place.

        With a changed Firefox, you won't need a CA.

        Now, changing how "we" (meaning our browsers) decide whether to trust a site may not be easy, but it can be done.

        If your DNS parent is com, all I can say is "Meet your new CA, same as the old CA" ;)
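The delegation chain described above (the root key vouches for the TLD key, the TLD key vouches for the zone key) can be sketched as follows. Real DNSSEC uses public-key signatures carried in DNSKEY/RRSIG records; the HMAC and the key values here are stand-ins purely for illustration:

```python
import hmac, hashlib

def sign(key, data):
    # Stand-in for a real RRSIG; DNSSEC uses asymmetric signatures instead.
    return hmac.new(key, data, hashlib.sha256).digest()

root_key = b"root-secret"            # the one key a validator trusts a priori
tld_key = b"com-secret"
zone_key = b"free-dns.com-secret"

# Each parent publishes a signature over its child's key material.
chain = [
    (tld_key, sign(root_key, tld_key)),    # root vouches for .com
    (zone_key, sign(tld_key, zone_key)),   # .com vouches for the zone
]

def validate(trusted, chain):
    """Walk the chain from the trusted root down, one delegation at a time."""
    for child_key, sig in chain:
        if not hmac.compare_digest(sign(trusted, child_key), sig):
            return False
        trusted = child_key                # descend one level
    return True

print(validate(root_key, chain))           # True
print(validate(b"attacker-root", chain))   # False
```

The point of the sketch is that trust flows strictly downward from a single pre-configured root key, which is exactly why the parent's "meet your new CA" quip applies.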

        • DNS wasn't designed to be a CA and shouldn't be a CA. This is a technical problem with a technical solution and using DNSSEC on the TLDs in place of regular certs is not it.

        • by TheLink (130905)
          "You don't need the CAs, once applications are rewritten to grab keys from the DNS instead."

          1) Why would they rewrite browsers to do that? They're not even rewriting browsers to allow the option of making things more secure.

          2) Could you trust Verisign to sign the real free-dns.com key AND only the real free-dns.com key? http://www.microsoft.com/technet/security/bulletin/MS01-017.mspx
          http://slashdot.org/article.pl?no_d2=1&sid=08/01/08/1920215
          http://yro.slashdot.org/article.pl?sid=04/02/26/235256&tid=
      • Interesting. I just checked... Firefox 3 has 58 CAs installed. Didn't know that.
    • by ThreeGigs (239452)

      With a conventional PKI for your SSL certificates, Verisign or the other CA gets a cut for EVERY server.

      With DNSSEC, the "CA" only gets a cut per domain. Thus DNSSEC can be used to offer key distribution with far less cost, once the root and the TLDs start signing records

      OK, I give up. What's EVERY an acronym for, again?

  • DNS (Score:5, Funny)

    by Gizzmonic (412910) on Tuesday November 25, 2008 @02:27PM (#25889935) Homepage Journal

    Are you troubled by DNS cache poisoning...well don't worry!

    I wrote a song about it!

    Your domain will be safe,
    You'll be well on your way
    With DNS-SEC security!

    Signing is a breeze,
    Bring hackers to their knees
    With DNS-SEC security!

    I know you're grown attached to old
    Ways of doing things
    But when you update BIND
    Your heart will race to sing!

    DNS-SEC implementation
    Put the spammers on permanent vacation
    DNS-SEC implementation
    I hear it's got great documentation!

    Bind me, baby!

    (GUITAR SOLO)

  • not so fast (Score:5, Interesting)

    by ejtttje (673126) on Tuesday November 25, 2008 @02:29PM (#25889955) Homepage
    I wouldn't be so quick to brush aside dissension on this issue. This comment in particular:
    http://www.ntia.doc.gov/DNS/comments/comment034.pdf [doc.gov]
    seemed well thought out, and at the end suggests several other workarounds with fewer issues. Namely, switch to using TCP instead of UDP so there's a handshake involved instead of blindly accepting incoming datagrams. It's not that the bug shouldn't be addressed, but maybe DNSSEC is the wrong answer.
    • Re: (Score:3, Interesting)

      by Intron (870560)

      Unfortunately, the comment is wrong. The Kaminsky bug is not cache poisoning by fraudulent UDP packets (which is a concern); it is using glue records to provide a false NS address. Example:

      You visit a website which pulls an image from subdomain.malicious.example.com. To get that, you need to know its nameserver. So you ask malicious.example.com who tells you that the nameserver is ns.citibank.com and oh, BTW that address is 666.666.666 (glue record). Now your cache has a phony address for ns.citibank.com

      • You visit a website which pulls an image from subdomain.malicious.example.com. To get that, you need to know its nameserver. So you ask malicious.example.com who tells you that the nameserver is ns.citibank.com and oh, BTW that address is 666.666.666 (glue record).

        And you throw away the glue record 'cos ns.citibank.com is not inside malicious.example.com.

        Bailiwick, right?

        • by Phroggy (441)

          The Kaminsky exploit involves getting a reply to a request for subdomain.malicious.example.com that includes a glue record for www.example.com, which passes the bailiwick test.
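The bailiwick test being discussed is roughly a suffix check on domain labels: a resolver accepts glue only if its name falls at or under the zone the answering server is authoritative for. A minimal sketch (hypothetical names; real resolvers apply this to parsed DNS messages):

```python
def in_bailiwick(record_name, zone):
    """Accept a glue record only if its name is at or below the zone."""
    record = record_name.lower().rstrip(".").split(".")
    apex = zone.lower().rstrip(".").split(".")
    return record[-len(apex):] == apex

# Glue for a name under example.com passes the test, which is why Kaminsky's
# forged glue for www.example.com gets through...
print(in_bailiwick("www.example.com", "example.com"))   # True
# ...while out-of-bailiwick glue, as in the ns.citibank.com example above,
# is discarded.
print(in_bailiwick("ns.citibank.com", "example.com"))   # False
```

Note that the comparison is on whole labels, so a name like badexample.com does not sneak past a check against example.com.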

      • by afidel (530433)
        That was brought up when the flaw was released and the reason it doesn't work is that the glue records were a workaround for another DNS flaw (which I can't remember at the moment).
      • uhh... no, but thanks for playing.

      • Re: (Score:3, Interesting)

        by Wowlapalooza (1339989)

        That was the Kashpureff attack, not the Kaminsky attack. Your understanding of DNS cache poisoning attacks is unfortunately about a decade out of date. All major resolver implementations now do "bailiwick checking" and aren't fooled by crude, cheap tricks like the one you describe.

        The Kaminsky attack does use forged packets, which then poison the cache with bogus NS records in ways that are not blocked by bailiwick checking. These bogus NS records then "redirect" future queries of names under the same delegation point.

        • I was at a talk some time ago where people who run several European root domains were discussing the issue, and all seemed to agree that TCP is not an option because it would greatly increase their load.

          That said, I don't know how DNSSEC would be any more light on the CPU... I don't know the details but I assume you would have to sign every DNS packet that you send...
    • seemed well thought out

      It does, although I have some additions and disagreements.

      They characterize the spoofability of DNS replies as a flaw in UDP. I think that's incorrect. UDP isn't marketed as a data integrity protocol, it's marketed as a transport protocol. That job it does fine. TCP is the same thing: a transport protocol.

      A blind attacker against UDP has to guess a source port and a transaction ID. A blind attacker against TCP has to guess an initial sequence number as well. If you use SYN cookies (http://cr.yp.to/syn
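The search space being described works out as follows (illustrative figures: a 16-bit DNS transaction ID, roughly 16 bits of source-port randomization, and TCP's 32-bit initial sequence number on top):

```python
TXID_BITS = 16       # DNS transaction ID
PORT_BITS = 16       # assumes full source-port randomization
TCP_SEQ_BITS = 32    # TCP initial sequence number

# A blind UDP spoofer must match transaction ID and source port together.
udp_space = 2 ** (TXID_BITS + PORT_BITS)
# A blind TCP spoofer must additionally hit the initial sequence number.
tcp_space = udp_space * 2 ** TCP_SEQ_BITS

print(f"blind UDP spoof: 1 in {udp_space}")  # 1 in 4294967296
print(f"blind TCP spoof: 1 in {tcp_space}")
```

Pre-Kaminsky-patch resolvers that used a fixed source port had only the 16-bit transaction ID to hide behind, which is what made flooding attacks practical.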

    • by leto (8058)

      You want a 3-way handshake per DNS lookup? Are you crazy? Do you even know how many DNS lookups your browser creates on average?

      You'd be looking at a 10 second delay for a webpage like Slashdot easily.

      • by ejtttje (673126)
        Hate to break it to you, but it sounds to me like the encryption and key lookups from a lot of these other solutions are going to have a lot more overhead than the TCP handshake.

        Besides, you have to do a TCP handshake for every HTTP connection, which is a concern (hence the "keep-alive" option), but it's hardly a 10 second delay. Especially since you'll be querying your ISP's nameserver, so it'll be low-latency, not some distant server. And DNS entries are cached anyway, so it's not nearly as bad as yo
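The latency disagreement above can be put in rough numbers. Assuming a nearby ISP resolver at 20 ms round-trip and ten sequential uncached lookups for one page (both figures invented for illustration), the handshake costs one extra round trip per query:

```python
RTT_MS = 20      # assumed round trip to the ISP's resolver
LOOKUPS = 10     # assumed uncached lookups per page, done sequentially

udp_total = LOOKUPS * RTT_MS        # one round trip per query
tcp_total = LOOKUPS * 2 * RTT_MS    # SYN/SYN-ACK adds one extra round trip

print(f"UDP: {udp_total} ms, TCP: {tcp_total} ms")  # UDP: 200 ms, TCP: 400 ms
```

Tenths of a second, not ten seconds; and caching and parallel queries shrink it further, which is the point being made here, while the resolver-side load concern from the earlier comment remains a separate question.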
    • TCP? Are you insane? You will bloat the DNS system tremendously, and it will then become susceptible to the sockstress attack performed on TCP stacks, which exploits the way TCP is supposed to work.

    • by tamyrlin (51)

      I just skimmed through this document and found something which was hideously wrong: (Oh dear, someone is wrong on the Internet!)

      "1024-bit encryption can be easily broken by script kiddies via bot nets or large
      organizations and governments having access to similar technology (computer
      resources). DNSSEC provides the world with a false sense of security."

      As far as I know there is zero evidence that script kiddies are capable of breaking 1024-bit RSA, regardless of how many computers on the Internet they have.

  • by NinthAgendaDotCom (1401899) on Tuesday November 25, 2008 @02:31PM (#25889973) Homepage
    It's funny how a regulated DNS still has so many security problems. I wonder if a distributed, non-governmental DNS that used a web of trust / trust ratings would work better for domain resolution.
    • Yeah, lets do that.
      I for one welcome our soon-to-be DNS bombing, v1agr@ providing overlords.

    • by Fastolfe (1470)

      I can't imagine a way that this would work that would be anything but a total disaster. Since there would (presumably) be no central authority, you have no way of knowing that http://example.com/ [example.com] is the same http://example.com/ [example.com] that someone else is looking at. How would you share links? How would a bank advertise its URL? How would domain registrations work? How would SSL certificate registrations be vetted? If you try to distribute the SSL function as well, now you have no idea if https://example.com

      • by rs79 (71822)

        "A need exists for a set of (reasonably) persistent, unique, meaningful identifiers for services on the Internet, and in order to ensure this, you need a central registry."

        Rubbish. As Bernstein pointed out a decade ago you could publish a cryptographically signed root zone via usenet.

        You'd probably want some tool to check consistency and some tool to let you pick what TLDs you want to support.

        So you don't actually need a central registry. You might say you need a single DNS root zone, but you'd be wrong the

    • If you know how to do it, do it. Even if you aren't comfortable with network programming, if you can specify a distributed DNS system that works, people will implement it for you. But it's awfully hard to argue that something no one has managed to implement is a better solution to a problem that already has a popular solution.

  • by PhysicsPhil (880677) on Tuesday November 25, 2008 @02:32PM (#25889993)
    For those of us who trust that this is something that matters but aren't nerdy enough to understand: what is the problem that the experts were being consulted about?
    • by Anonymous Coward on Tuesday November 25, 2008 @03:05PM (#25890455)

      It's about the DNS poisoning attacks from a few months ago. DNSSEC works properly when the top servers can vouch for the next server down the tree, but this only works if the top servers are secured with a well-known public key.

      The issue is that the Federal bureau in charge of the root servers felt it had to go through the same bureaucratic process of getting consent, comments and so on and so forth that all federal regulations have to go through, by law. This takes a while, and a lot of people think they should have just done it.

      John Roth

    • Re: (Score:2, Insightful)

      by supradave (623574)

      The problem is that DNSSEC is a manually intensive proposition. Keys have to be rolled daily and those keys have to be generated on a machine that is not connected to a network, i.e. sneaker net. The problem stems from current OS implementations that allow you to have access to all the memory. If I could compromise your signing keys, I could sign your zone with my keys and probably get away with further damage as people would inherently trust DNS. The issue is automation. Since you cannot, on Linux or

      • Why would it be any more difficult than running an automated CA? It's basically the same problem, and automated CAs manage to issue certificates in real time without too much trouble.

  • by Burz (138833) on Tuesday November 25, 2008 @02:52PM (#25890243) Journal

    ...over ubiquitous use of SSL?

    Almost all of the extra overhead for crypto and/or signing is in processing the initial public key. So DNSSEC seems to make our systems work about as hard, without the benefit of encrypted data.

    OTOH, having an Internet trend set in with most servers switching to SSL (i.e. HTTPS, etc) keeps the government (and corps providing its "security" snooping services) from profiling people based on their everyday choices of art, books, and ways of socializing. It takes ISPs out of the loop as far as acting as surrogate cops snooping on peoples' data.

    If I wanted to further a police surveillance state, I would try to set a trend with DNSSEC instead of a different public key scheme that provides encryption along with verification for the same price... especially if the tools to implement the latter were already on everyone's system waiting to be fully used.

    • by amorsen (7485)

      With secure DNS, key distribution for e.g. IPSEC or TLS becomes easier.

      • by Burz (138833)

        With secure DNS, key distribution for e.g. IPSEC or TLS becomes easier.

        Whereas with existing schemes like HTTPS, the client simply caches the acquired symmetric keys as needed. And non-browser applications could poll the default browser on a system in order to use its CA-based verification; that would allow such apps to distribute their own keys safely. (That is, if you're programming in a framework that doesn't already have PKI functionality.)

        I don't believe that whatever ease is gained in key distribution outweighs the technical problems and risk of abuse that DNSSEC carries

        • by amorsen (7485)

          Whereas with existing schemes like HTTPS, the client simply caches the acquired symmetric keys as needed.

          The way it gets the public key of the site today is ridiculously insecure. It trusts a bunch of organizations, several of which have proven to be completely untrustworthy.

          You can use self-signed keys, but then the security is basically non-existent. There is no GPG-like system for the web.

          It all seems very specious to me, replacing an established address verification system with a less functional one.

          If you turn off DNSSEC in your resolver, nothing has changed. I don't see how it can be less functional then.

          • The way it gets the public key of the site today is ridiculously insecure. It trusts a bunch of organizations, several of which have proven to be completely untrustworthy.

            I'm pretty sure that the same organizations would be in the chain for DNSSEC.

            • by amorsen (7485)

              I'm pretty sure that the same organizations would be in the chain for DNSSEC.

              True, but at least the security is only as bad as that of one particular company. With regular TLS the security is as bad as that of the worst company.
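The distinction amorsen is drawing can be made concrete: in the browser CA model, any trusted CA can issue a certificate for any site, so one bad CA breaks everything; in a hierarchical model, only the parties on your own delegation path matter. A toy sketch with invented CA names:

```python
# Hypothetical trust store: CA name -> is it actually trustworthy?
cas = {"CarefulCA": True, "SloppyCA": False}

# Browser model: an attacker wins if ANY CA in the trust store is bad,
# because any of them can sign a cert for any site.
browser_model_broken = any(not ok for ok in cas.values())

# Hierarchical model: only the chain above your own zone matters.
delegation_path = ["CarefulCA"]                  # e.g. root -> com -> your zone
hierarchy_broken = any(not cas[ca] for ca in delegation_path)

print(browser_model_broken, hierarchy_broken)    # True False
```

In other words, the browser model degrades to the worst CA in the store, while the hierarchical model degrades only to the worst party on one particular path.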

          • by Sloppy (14984)

            There is no GPG-like system for the web.

            There could be [gnu.org], if we'd just put it into the browsers.

    • by xrayspx (13127)
      Because changing DNS to TCP globally would cause a lot of networks to grind to a halt. I believe DNSSEC allows you to keep things UDP and fast.
      • by Burz (138833)

        Because changing DNS to TCP globally would cause a lot of networks to grind to a halt. I believe DNSSEC allows you to keep things UDP and fast.

        I don't mean DNS over TCP. I'm talking about protocols like HTTPS making attacks on regular DNS futile.

        • It doesn't make those attacks futile. You can detect them, sure, but if you're getting bogus information from your DNS server, that's still a denial of service (because you can't get the real address of the site).

          Plus all that an adversary would need to do is watch the DNS requests as they come in to find out where people are going.

          • by Burz (138833)

            It doesn't make those attacks futile. You can detect them, sure, but if you're getting bogus information from your DNS server, that's still a denial of service (because you can't get the real address of the site).

            The same DOS issue applies to DNSSEC. It is not magic and cannot overcome determined interference... it can only prevent you from using falsified data as if it were genuine.

            Plus all that an adversary would need to do is watch the DNS requests as they come in to find out where people are going.

            Again no different with DNSSEC, since it does not encrypt anything... it only signs/verifies. Here is a nice overview with diagram. [com.com]

            Neither technology was intended to provide anonymity for the 'who' of the connection, but SSL does hide the 'what' of our data. And though SSL was not meant for anonymity, it is the basis for anonymity in oni

        • by xrayspx (13127)
          SSL is TCP only, DNSSEC is kind of like UDP-SSL for DNS. IIRC there is a proposal for TLS over UDP which would accomplish a similar thing, but I think the specific answer of DNSSEC accounts for all of this.
          • by Burz (138833)

            The PKI part of SSL can be used to verify addresses, and to exchange symmetric keys that can be used with any TCP or non-TCP stream.

    • OTOH, having an Internet trend set in with most servers switching to SSL (i.e. HTTPS, etc) keeps the government (and corps providing its "security" snooping services) from profiling people based on their everyday choices of art, books, and ways of socializing. It takes ISPs out of the loop as far as acting as surrogate cops snooping on peoples' data.

      If only you could mod higher than +5...
      Everything on the Internet SHOULD be encrypted. I really, really wish that I could encrypt every piece of data I send and receive regardless of its content. The only current solutions for constant encryption are things like Tor, which uses proxies, still has a point of failure (the hop from the proxy itself to the destination), and can be LAGGY as hell...

      • You cannot do this with TCP/IP. The destination of where your packet is going has to be visible, whether this is the address of a proxy that will later forward your packet, the address of an IPSec gateway that will forward your packet, or the ultimate destination for your packet. Otherwise it will never get there. Now, you can encrypt the payloads (see IPSec), but you can't encrypt the destination address.

        • by Burz (138833)

          Now, you can encrypt the payloads (see IPSec), but you can't encrypt the destination address.

          That is what onion routing is for. [eff.org] I suggest you read up on the Tor project, where the destination address is indeed encrypted and can't be traced back to the client.

          But anonymity isn't required in order for encryption over conventional links to add a great deal of privacy.

    • Re: (Score:3, Informative)

      by MasterOfMagic (151058)

      Because SSL and DNSSEC solve two different problems. Unless you're doing DNS-over-SSL, which means running DNS in TCP mode.

      • by Burz (138833) on Wednesday November 26, 2008 @01:44AM (#25896551) Journal

        Because SSL and DNSSEC solve two different problems. Unless you're doing DNS-over-SSL, which means running DNS in TCP mode.

        I don't think so. A primary motivation for PKI-backed SSL was to protect against any misdirection, whether at the domain-name or IP address level.

        DNS over TCP isn't being suggested here. Normal DNS with a PKI-using protocol like HTTPS is what provides the protection I'm talking about. It's the scheme you and I already use whenever we make a purchase or do online banking.

        In the case of HTTPS, interference with either DNS resolution or IP routing will cause the connection to stop with a warning. In the case of DNSSEC, interference will generate an error message that most server and client software does not understand.

        With SSL/HTTPS/etc. the address is verified outside the DNS protocol. But it is still verified. Moving that verification into DNS doesn't really help unless you prefer to see most internet traffic remain unencrypted.

  • by Sloppy (14984) on Tuesday November 25, 2008 @03:18PM (#25890667) Homepage Journal

    I love beating this dead horse: OpenPGP is the one scheme that got authentication right, and DNS is Yet Another great example of where OpenPGP should be used instead of the obsolete X.509.

    Why would I trust the feds as an introducer? We already know that they do attempt MitMs sometimes, and there's already a history of DNS abuses ordered by presumably well-intentioned courts. But even if this organization had a good reputation, it's just plain dumb to put all your eggs in one basket. There should be provisions for multiple certifiers of an identity, so that users decide who is trustworthy and who isn't.

    If the feds are going to sign, I hope they use an OpenPGP [ietf.org] signature (which apparently the spec allows!), but I somehow doubt they would want to lend any legitimacy to a scheme that actually lets people authenticate identities, instead of the one intended to create monopolies and single points of failure.

    I have no problem with the feds helping out on this, but we shouldn't completely trust them, and we have the technology so that we don't have to. PRZ gave it to us a couple decades ago.

    • by Chandon Seldon (43083) on Tuesday November 25, 2008 @05:17PM (#25892457) Homepage

      This is a case where you're right, everyone who has thought about it agrees that you're right, and that's still not the design decision that's going to be made.

      The issue here is a disagreement on goals. You want to make it so that someone who goes to the necessary effort can be secure against an arbitrary attacker. Others want to make it so that someone who goes to no effort will be secure from one-step technical attacks by poorly funded attackers. People who are interested in the second case, which includes all major application developers including Mozilla, dismiss the proof of your point ("what about malicious CAs") as being out of scope.

      The only solution to this problem that I can see is to try to provide real security and decentralized infrastructure in as many cases as possible. Why don't we have a Mozilla plugin that uses OpenPGP for SSL with a revolutionary UI that makes it practically useful? Why don't we have distributed DNS? Once we have proof of concept and working code, it'll be much easier to argue that we should be doing these things correctly.

      • by Sloppy (14984)

        that's still not the design decision that's going to be made.

        I'm not so idealistic as to disagree, but..

        People who are interested in the second case, which includes all major application developers including Mozilla, dismiss the proof of your point ("what about malicious CAs") as being out of scope

        ..the solution for the first case can also achieve the goals of the second. If they want to include a trusted-by-default OpenPGP public key with Firefox, they could.

        I don't think they'll listen, but I think

        • If they want to include a trusted-by-default OpenPGP public key with Firefox, they could.

          How would that help? They'd have to use that OpenPGP certificate to sign site certificates, and thus either become a CA themselves or create a new class of OpenPGP CAs out of the certificates that they did sign.

      • by rs79 (71822)

        " This is a case where you're right, everyone who has thought about it agrees that you're right, and that's still not the design decision that's going to be made."

        "The Internet is about consensus, not truth. Never confuse the two. - Brian Reid (who, funded BIND's development)

  • why only one CA (Score:5, Interesting)

    by bugs2squash (1132591) on Tuesday November 25, 2008 @03:38PM (#25890963)
    I don't see why any nameserver (especially the root nameservers) could not carry signatures from multiple CAs. Maybe that's not DNSSEC (I can't be bothered to read the RFCs!) but it's certainly a technical possibility.

    Also, I think any device looking up any DNS record can choose to ignore the signatures if it wants to anyway (most will).

    So I fail to see what all the conspiracy issues are surrounding the signature of the root name servers. It seems a far cry from implementing a system to roll DNSSEC out to every nameserver, and if a better solution comes along later, or DNSSEC gets better, the new ideas can probably get bolted on.
    • That is how Frankenstein's monster got his head, it was bolted on later as an afterthought. And man was it an ugly hack!

    • by leto (8058)

      Indeed, you can ignore what you want. You can create your own "secure entry point" that overrides a parental DNSKEY if you want to (think China removing .tw entries). Anyone who controls a resolver can do this. It's a one-line configuration change.

      The root key is not Sauron's Ring
