Upgrades

Running BIND 4 or 8? Upgrade!

Posted by jamie
from the someone-was-bound-to-find-these dept.
The Dev was the first of several zillion to point out that security holes were found in BIND. The detailed table of known vulnerabilities will help clarify (and it has tarball links too), but the short version is, if you're running BIND 4 or BIND 8, set aside some time today to upgrade to 4.9.8 or 8.2.3 (not beta, betas of 8.2.3 are vulnerable). And now's a good time to reconsider version 9, too. SecurityFocus warns that the last time a BIND hole of this magnitude was found, it was followed by a "cyber-crime wave." Exploits for these holes were successfully created by COVERT Labs, but nobody seems to know whether they're in the wild yet. Obviously, they soon will be. Post your questions and answers about upgrading below.
This discussion has been archived. No new comments can be posted.

  • sprintf(buf, "...%80s...", badstr)
    will happily pull more than 80 characters from badstr. You really want to use snprintf, though it isn't supported identically on various platforms -- check the manpage.

    Read this if you want to learn considerably more about safe usage of strncpy/cat/etc:

    http://www.usenix.org/events/usenix99/full_papers/millert/millert_html/ [usenix.org]

  • The problem is that pure theory classes are frequently a little too abstract for people to really grasp, and people quickly forget the lesson. When you back up your theory with a bit of practice, i.e. writing some C code in an insecure fashion and then breaking it to show how easy it is, then you have something that the students will remember.

    The biggest problem with security problems is that they don't show up during ANY part of the standard software development cycle (your testers generally don't have the source code to try and exploit the code with, and certainly don't have the expertise to do so anyway), so they go unnoticed for years until someone on the outside finds the hole and exploits it.
  • I don't mean this as a troll, but it seems that BIND has more security vulnerabilities than any other piece of software.

    I'd say that dubious distinction falls to wu-ftpd, but BIND is a close second.

    Anyway, BIND 9 is a complete rewrite.
    --
  • According to the mailing lists, OpenBSD's implementation of BIND4 is immune; the sprintf()s responsible for the overflows were changed to snprintf()s by the development team in 1997.

    SoupIsGood Food
  • I'm of the opinion that no course should be teaching printf, writeln, or any of that. They should teach the concept and let you apply it to the language of your choosing.

    Sure, they should mention what buffer overruns are. But they shouldn't be teaching you how to use a particular tool - but how that class of tools work in general.

    Unless of course it is a C/C++ course :)
  • Piece of cake to switch - a $TTL in one file, and a line in another file to quiet a warning, and up on 9.1.

    Much MUCH easier than 4->8
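    For reference, the $TTL change mentioned above is a single directive at the top of each zone file, which newer BIND versions warn about or require. A minimal zone-header sketch, with illustrative names and values:

```
$TTL 86400          ; default TTL for the zone; required by BIND 8.2+/9
@   IN  SOA ns1.example.com. hostmaster.example.com. (
            2001012901  ; serial
            3600        ; refresh
            900         ; retry
            604800      ; expire
            86400 )     ; negative-caching minimum
```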
  • Whatever software you have - there's a hole in it. Somewhere. Somehow. It's just code that hasn't been beaten on enough to find it. While some software may be 'better' out of the box, it may not necessarily be completely secure.
  • Ultimately, those languages will be written in a language like C, where there isn't bounds checking. What then? If an exploit is discovered in the compiler-generated code, then _every_ program written with that compiler is vulnerable. This way, you at least don't have that sort of large-scale exploits happening.
  • You're still missing the point. Even if your compiler was written in your "really cool language", it still compiles down to assembly. And assembly has no bounds checking. Therefore, if you screwed up your implementation of bounds-checking in the assembler conversion, then you would screw everyone who used your solution. Just because the language does bounds-checking doesn't mean it will code the bounds-checking assembly for you when you recompile.
  • the question is: why is all this open source software like bind, sendmail, ftpd and such so full of bugs to begin with?

    Because all software is buggy crap to begin with.

    Programmers of open source software are no different than programmers of closed source -- both code to their level of skill and pride. The only difference is pay, and money does nothing to make someone write better code. It's not like a programmer gets paid more for writing more elegant, secure code. Nope, it only has to work not too long after their scheduled release date.

    The fact is that all code of sufficient size and complexity will have bugs in it. I leave it to the reader to decide whether they want the buggy programs they depend upon to be open or closed.
  • It doesn't matter how large or complex the code is nor how elegant and securely it's written if the underlying architecture & methodology principles suck.

    A very good point. There are fundamental protocol flaws that can render code vulnerable even if there are no buffer overflows or other standard bugs.

    However, looking at the list of vulnerabilities for BIND, they appear to almost exclusively be of the buffer overflow and 'improper handling' vein, which falls into the category of buggy code, not bad underlying design.

    Then again, your idea would apply if the code was written without concern for preventing things like buffer exploits.

    maybe we should take a pointer from the *BSD camp; they fix how the code functions and then they evaluate why the code does something in that manner so design flaws can be addressed.

    Who's "we"? BSD uses bind just like Linux does.

    But I agree, if you mean specifically OpenBSD and their thorough audit process. It reminds me of the processor industry, when years are spent validating a design.

    Then again, processors ship with bugs in them as well. You can never be assured that you are 100% bug free in any sufficiently complex (ie not provably correct) design. It's worse with software than with hardware, because in software there are more uncontrollable variables.

    BIND 9.x is on the right track. They've completely rewritten nearly all aspects of the underlying architecture to address the design problems inherent in BIND 4 & 8.

    Which CERT advisories refer to underlying architectural flaws?

    Not to say a re-write is bad... I think developers are too afraid of starting over. Especially in the open source world, where release schedules are not a concern, but code quality is.
  • I recommend reading Scott Wunsch [wunsch.org]'s excellent Chroot-BIND HOWTO [linuxdoc.org] for instructions on setting up BIND in a jailed root. I sleep better at night (really) thanks to this how-to.

    -Waldo
  • When I upgraded last night, I got an error explaining that I had to be running kernel 2.3.99 or newer. I didn't desire to patch the kernel on this particular machine, so I ended up upgrading to the newest 8.x. YMMV, but that was the result on this particular RH6.0 Intel box.

    -Waldo
  • One INCIDENTS post suggests that there is an exploit in the wild.

    So upgrade.
  • This was already on MSNBC and ZDNN, so all the black-hats already know.
  • There's a nice, if short, checklist at http://www.openbsd.org/porting.html#security [openbsd.org]
  • You presume that people here consider security to be important. What is the saying that someone keeps quoting? "Those who would exchange freedom for a little security deserve neither".

    Let's look at the track record of BIND.
    1) An exploit every few months (followed by apologies like, "well, BIND has been out so long, it has to be secure NOW").
    2) New BIND, where the authors seem to indicate that security was not part of the design criteria.

    But you see, djbdns has the wrong license. It's not GPL. And people would rather be rooted than run non-GPL software. Especially if running it would mean that one had to admit that there is actually non-GPL software that is (Oh nooo) *better* than the GPL alternative.

    If you want to see the same attitude for another piece of "software", check out any discussion on Sendmail (same arguments, same security holes).
  • I've been using djbdns for almost a year now (while it was still called dnscache).

    Note that djbdns is a suite of dns utilities that together give the same functionality as BIND.

    dnscache *only* does caching (great if you are on a dialup. Because, do you really need a full-blown dns server if you only want to do caching?).

    tinydns *only* serves dns requests (no caching).

    If you want a dns server, you only need those two. They run with their own userids, chroot'ed into their own directories owned by them.

    AND, it's a snap to set up (took me half a day to figure out everything).

  • Did you read his 'license'? He has limits on distribution the same way GPL limits the distribution.

    GPL limits the distribution, in the sense that if you distribute it, you have to give the source code. AND YES THIS IS A LIMITATION.

    Bernstein's license is that you can't distribute it while changing the author's (his) original wish on how the software should work. That means you can't arbitrarily change the code, or the location of where the software is installed, and distribute it and still call it qmail/djbdns.

    You can distribute binaries, AS LONG AS IT INSTALLS EXACTLY LIKE IT WOULD IF THE USER COMPILED AND DID A MAKE INSTALL FROM PRISTINE SOURCES.

    Heck, like the GPL, if you don't like it, you can always negotiate with the author to change the license terms.

    If you want to talk about true freedom, talk about the BSD license.
  • So why don't you just turn on the telnet service or download the free SSHD for NT/2000 [criadvantage.com]? It's really not that difficult...

    I still can't understand how in this day and age someone can waste their time complaining and not be able to figure this stuff out.


    Cheers,

  • by kaisyain (15013)
    djbdns requires separate machines for almost everything.

    Granted I'm not a DNS wizard but I don't think this is the case. In the worst case you could say that djbdns requires separate IP addresses for everything. Except that really isn't the case anymore, as I understand it.

    For all of the complaints about the Outlook/Exchange monoculture and its susceptibility to exploits that you see on slashdot, I'd really expect more people to be using things like djbdns and fixing the holes in it rather than complaining. I'd rather patch djbdns to add minimal functionality than patch BIND to fix major security problems.

    Granted Bernstein isn't the most affable character in the world, but I don't pick my software based on the personality of the people who write it.
  • ask yourself if it is as widely deployed and as widely scrutinized as bind

    However, it is misleading to suggest that that is the only, or even the most important, criterion. Quantity of scrutiny has nothing to do with quality of scrutiny...as many open source software projects find out. Having millions of naive users who never look at the source code does you very little good from a security standpoint. Having ten knowledgeable people audit the source code does a tremendous amount of good. Also, djbdns has a little more than 10,000 lines of code. BIND has well over 120,000. It is much easier to verify simple software than complex software. That, combined with the relative track records of the authors of djbdns and BIND, makes the comparison much more difficult than simply looking at how widely deployed something is.
  • why don't we all just use OpenBSD?

    it's quite a nice OS..


  • bind is mirrored in australia at:

    PlanetMirror:

    ftp://ftp.planetmirror.com/pub/bind/src/8.2.3/

    AARNet:

    ftp://mirror.aarnet.edu.au/pub/bind/src/8.2.3/

    please try to use one of them before hitting
    the ISC server.

    -jason
  • Anyone notice how this CERT advisory comes out only a few days after Microsoft had its DNS borked? Coincidence? I think not ;-)

  • "It appears to me to be a straight shoot-out between C and Java, unless you can give us some of the "plenty of alternatives to C" (preferably ones with comprehensive libraries). Can someone who has worked on implementing a JVM indicate the performance of a machine with nameserver (along with httpd, ftd, etc) all written in Java?"

    I didn't start about Java :) But now that we're discussing it: it would require more memory and, depending on the way the data is stored, it should perform about as well as the C version provided a run-time optimizing JVM such as HotSpot is used. I wouldn't bet my life on the performance being as fast as C, but I don't think performance would be seriously slower. But then there's only one way to find out. The memory usage is an issue, however, and I wouldn't go as far as to recommend Java for the job. Probably C++ with a good library, and preferably with a garbage collector or some other form of controlled memory allocation, would take care of such things as buffer overruns and memory leaks. If you must use a procedural language there's always Pascal, Modula and derivatives. They're all capable of making system calls, so libraries are not an issue, plus they probably come with some libraries of their own. But then why bother with obsolete paradigms at all? I know this is an issue for some, but object-oriented programming has been around for thirty years, good quality compilers and tools have been around for a long time, and performance has ceased to be an issue in most situations.

    My whole point is that the technology exists today to prevent this kind of situation. There's no excuse for these kinds of bugs anymore.

    I strongly oppose your suggestion that you can make programmers work harder and code better (if you know how, you're going to be rich). It hasn't happened in the past and I guarantee you it won't happen in the future. It's the technology that's fundamentally flawed and not the programmer.
  • The ultimate test for any language is to write the compiler in its own language. Of course you'll need a separate language for writing the first versions of the compiler. Once that's in place you can write a compiler in the language it is supposed to compile. Doing so is usually a good test for your language. So ultimately you don't need C even for writing compilers.

    C should only be used in those parts of a system that are really critical (critical as in: a profiler shows that we need more performance here, and there's no way to get it in the language we're currently using). Using C when it's not needed costs you in terms of lines of code, development time, bugs (direct correlation with lines of code) and maintenance cost (same correlation again). Some might argue against LOC, but the other things have been shown to be true in very extensive and convincing case studies (I could look up some references if you'd like to have them).
  • Cut the crap, one of the most important tools on the internet broke down because of a memory leak.

    Of course it is possible to create good programs if you don't make any errors, duh. The problem is that humans do make errors. And since C provides little or no protection against these errors it is unsafe.

    As long as we use C for implementing these kinds of things, there will be memory leaks. Of course C is a very performance-efficient language; however, things like this make it unsuitable for security-critical apps, because you can never be 100% sure it doesn't have memory leaks.
  • >can you suggest another language that scores higher than C for such a low-level application?

    C++, with some proper string class, would probably have prevented the problem. I suspect a Java implementation would perform acceptably too. However, there are several procedural languages that would be suitable as well. There's plenty of alternatives to C. C may have been a nice language for this kind of program in the seventies, but it's 2001 now. C had technical flaws, many of which were addressed in languages that came after it.

    BTW, I disagree that this is a low-level application. Device drivers are low-level applications. You typically find them at the bottom layer of the OSI model. Bind would fall into the application layer (almost at the top).

    Then, you hammer down the fact that it is possible to create safe programs in C. But then my simple question is: why the hell do we have all these security leaks? Bind isn't an isolated incident, it's just the latest leak to be found. Probably a solution will be provided in the form of a patch. However, this patch won't fix the fundamental problem, it will just fix the symptom, and in the future more bugs will be found.
  • Debian users running the stable (Potato) distribution can find a safe version in Debian's security archive. If it's not there already, the following line should be in /etc/apt/sources.list:
    deb http://security.debian.org/ stable/updates main contrib non-free

  • Umm, the responsible people already read bugtraq this morning and patched their servers.
  • Bind8 is in the ports. Bind4 is in the base system. There's a reason. If you'd paid any attention to the misc mailing list, where the question comes up with monotonous regularity, you'd know why: the team doesn't trust (and wouldn't audit) bind8 because it's a hideous mess.

    As to timely updates, there was a patch for bind4 yesterday, even though it looks like the buffer overruns were defanged back in 1997 in a general sweep for sprintf()s.
  • It all depends if you have machines/IPs to spare. djbdns requires separate machines for almost everything. If you want your load balancing DNS server, run this; resolver, run this; master/root server, run this.

    Ok if you are working from scratch. But more tricky if you want a replacement for an existing set up.
  • by mpe (36238)
    Microsoft's mistake was to put all their servers on one subnet, and to allow a change to be performed on a mission-critical router without proper approval, as far as I can work out.

    Though the router was only "mission-critical" because of the DNS servers being misconfigured.
    Microsoft is hardly unique in not complying with rfc 2182 though...
  • M$ uses their own DNS software. Hopefully because of their recent DNS borking on their own software/systems they won't try to convince people their DNS software is superior because /their/ DNS isn't vulnerable to the BIND holes.

    Their "own" version of DNS could easily be an old version of BIND hacked to work with a Windows GUI, however...
    Anyway RFC 2182 is software agnostic.
  • by mpe (36238)
    Well, Microsoft (despite what it's trying to become) is hardly a mission critical systems retailer, nor a networking hardware vendor. Cisco is widely known to be the manufacturer of some of the best communications gear around.
    If Cisco's network were to go down, that would say a lot more about their products than if the same thing happened to MS.


    More to the point whoever set up Cisco's nameservers appears to understand the basics and know what they are doing. Something which is self evidently not the case with Microsoft.
  • Well, making a chroot jail is not really different for any kind of daemon:
    • Figure out which files and libraries the daemon needs; that is at least libc and /etc/passwd and most likely some more.
    • Make a rooted environment at e.g. /var/named/chroot with the directories, libraries, files and data. If the daemon calls other programs, copy them too.
    • TRIM everything down to (nearly) nothing. No other entries in the passwd file than root, bin and the like, and (doh!) * in the password fields!
    • Start the daemon like chroot /var/named/chroot /sbin/named ... and the daemon will believe that the world's root is /var/named/chroot.
    You can run any daemon like this: apache, sendmail, finger and whatever.

  • The vulnerabilities / exploit list is long! And while 9.1.0 doesn't have any known exploits according to this list, I think this should be an eye-opener to people when it comes to security. Like Microsoft likes pointing out, you are unsafe with *ANY* OS if you don't stay up to date with the patches. I'm not "pro MS" or anything, but there's a lot of rhetoric on Slashdot about how Microsoft OS's are safe. The idea a lot of people get is that Linux is automatically completely safe. This is, of course, not the case. Unless you know what's going on and what has been hacked, you're leaving your system wide open.

    For those who feel safe and comfortable with their home box, especially those hooked up to DSL or cable, I strongly recommend checking out that list. It's scary and it's only bind! To keep the balance, the fix list for Win2K SP1 is even longer... and scarier..

    I run a box at home that is connected to the net 24/7 on a dynamic IP without an easy-to-guess hostname and I get about 10 probes a day.. FTP, ping, SSH, telnet, http.. you name it.. I assume most boxes get the same amount.. If you have an open door, it WILL be exploited!
  • Uh, that should have been:

    "but there's a lot of rhetoric on Slashdot about how Microsoft OS's are *UN*safe."
  • I know.. If there's more than one probe from one IP, I always check out who it is. My ISP does probes too, for instance every time I send an email I get a probe.. but that's different.
  • I doubt djbdns has received the attention that BIND has. If djbdns was used on every server instead of BIND, there'd probably be problems found with it too.

    DJB is willing to bet [cr.yp.to] that there won't be and even though djbdns is not in wide use, his other project, Qmail, which carries a similar guarantee is widespread even in high-profile high-risk locations like Hotmail. No security related bug has ever been found AFAIK.

    Regards,
    Xenna (who bets his servers on it)
  • Except that Microsoft's [microsoft.com] DNS is now being provided by Akamai [akamai.com] on (apparently) Linux 2.1 servers. See this story [theregister.co.uk] in The Register.
  • Coincidence... I think so.

    M$ uses their own DNS software. Hopefully because of their recent DNS borking on their own software/systems they won't try to convince people their DNS software is superior because /their/ DNS isn't vulnerable to the BIND holes.

    But they probably will anyways... Oh well.

  • It is obviously hard enough to do things safely by hand that people do not do it: that's really all that matters. Obviously it is *possible* to write safe code in a non-bounds-checked language, but it is hard enough that people generally don't, so we have buffer-overflow vulnerabilities in critical code every few months.

    I'm not really interested in an argument that it's possible to write bounds checking code by hand -- obviously it is (and I'm sure you do!) -- but equally obviously, many, possibly most, people do not.

    I can see two fixes to this problem:

    • Educate people to write better code. So far there hasn't been much progress here: possibly there has been negative progress.
    • Start writing critical software in languages which check array bounds both at compile time where possible -- which can eliminate runtime overhead -- and at runtime where needed, and handle out-of-bounds accesses gracefully.

    These vulnerabilities cost huge amounts every time they happen, not just in terms of security breaches but in all the hidden cost of time spent upgrading systems. How many DNS servers are there running vulnerable versions of BIND right now? How long will it take to fix them, assuming they get fixed? This is really a lot of money...

    I kind of wish education could solve this problem, but I'm cynical, so I place more faith in systems which prevent it happening.

  • Is the Secure DNS server that's part of the FreeS/WAN project ready to go? If so, does it have any of these vulnerabilities? -jcr
  • ... distros like RedHat (which I use) run everything under the sun when you first install.

    Which is truly annoying.

    A quick way to give yourself some protection is to configure ipchains first thing to block all inbound everything except responses to things (like TCP sessions) originating inside. Then selectively expose anything you want to be reachable from outside. This limits the (initial) vulnerabilities to the servers you expose and the TCP/IP stack itself.

    Even if a server like BIND is running they can't exploit it unless they can get a message to it.
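    A sketch of that default-deny policy in the 2.2-kernel ipchains syntax; the interface and exposed port are illustrative, and the commands require root:

```
ipchains -P input DENY                                # default: drop all inbound
ipchains -A input -i eth0 -p tcp ! -y -j ACCEPT       # allow replies to outbound TCP (non-SYN)
ipchains -A input -i eth0 -p tcp -d 0/0 22 -j ACCEPT  # selectively expose SSH
```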

    (Of course once they get through a hole in one of the things you DO expose they can open up any others they want. Then all bets are off.)

    When installing on a new machine you might want to go out onto the net and get any security tools and patches you might need, roll them onto a floppy, then pull your network connections and reinstall from scratch (reformatting the disk), just in case some kiddie got to the box while the initial wide-open install was running.

    Of course you don't want it running open on a home network, either, since it could be used to sniff and attack other machines while it's open. But if you have any other machines you can write that floppy on one of 'em and run both the install and door-locking while the machine is connected to nothing but the power grid and sneakernet. B-)
  • It's partially the language C that causes these problems, because C has no bounds checking on its arrays, which can lead to bad situations with buffer overruns and such.

    That's because C is an "enough rope" language. Others do some checking, but it costs execution speed, and they still can't block all the holes. C does JUST what you tell it, without wasting cycles on trying to save you from yourself (and giving you a false sense of security). It's up to you to tell it to do whatever checking you want done.

    The problem isn't really the language. It's the standard library, which contains some input routines with buffer overflows built in. The biggest culprit is gets(). It was a mistake to put it there, and the manual page now warns you not to use it and what to use to replace it (fgets()). But now it's there, and a bunch of stuff will break if it goes away.

    (Of course anything that will break is already broken. So you might want to cut it out of your own library and see what won't link. B-) )
  • "And so it begins."
    "There is a hole in your BIND."
    "What do you want?"


  • I upgraded to BIND 9 and had problems right off the bat, to say nothing of the fact it's 10X the size of BIND8. DJBDNS is one Slick package. It rawks. Very, very elegant. http://cr.yp.to/djbdns.html [cr.yp.to]

    BIND 9 is supposed to have been written by a "team of professionals". From where? Microsoft? Guys that were "let go" because they wrote code too buggy and bloated for M$? DJBDNS shows once again one guy with a major clue beats a "team of professionals" every time.

    Thanks for getting us this far, Vix and Co., but you can sit down now.

  • At work I've got a Windows NT Server I slapped together from parts left over from a workstation that was too decrepit for use as a workstation. It's got a number of handicaps working against it, including:
    • 486 DX/2 - 50 Mhz processor
    • Only 32 MB of DRAM
    • BIOS patch drivers running in real mode
    • Runs a Telnet server, DNS, and web server
    • Goofy BIOS/Video card combination that dies after a warm boot
    This would rule it out as a candidate for real use, right? Wrong! It NEVER dies, (it can't, won't reboot except for a power cycle). I take it down for the odd service pack, otherwise it's always there.

    It's currently at 42 days; it was past 150 when I took things down because I tweaked IP addresses for our network. (Yeah... NT needs to be rebooted to work right... it's not perfect).

    The point is that NT is stable, you just have to treat it like a server instead of a workstation.

    --Mike--

    djbdns requires separate machines for almost everything.

    You don't know what you're talking about. The latest djbdns has load balancing built into tinydns, the iterating resolver. Dnscache, tinydns, and axfrdns can all run on the same machine, e.g., to replicate the usual BIND installation. And please explain to me how software can "rot." Oh yes, there's a new release of qmail in the works, you got that wrong, too. Qmail is doing fine, are you a shill for ISC?

    The bottom line is that if you are running BIND you're more vulnerable than with djbdns. Everyone runs bind and sendmail for the same reason that windows is installed on so many desktops, it's the default install.

  • Most security leaks are a direct consequence of using languages like C. People claim it is possible to program safely in C, however, incidents like this prove them wrong.

    What a strange statement. It is perfectly safe to program in C as long as you are paying attention. In my experience, the security leaks occur through a) oversight of the programmer (probably about 3am), b) code contributed by an amateur who lacks formal training and thus wouldn't know the basics we do, or c) rush jobs that were only meant for test purposes but then got incorporated into final code.

    The first can be checked for by code review, which is where Open Source is supposed to excel. The second tends to occur where people have never studied CompSci, yet have dabbled in Javascript and hence consider themselves a programmer (ok, slight exaggeration). The only solution to this is to use software where the team has a good reputation. The last is poor software engineering. Harangue the author(s) to go back and do a proper job.

    Personally I think C is an excellent language for writing core OS apps in. Fast, flexible and efficient. Java is a good server-side language for application server development but I wouldn't write my core server apps in it (not fast or lean enough). What alternative language would you suggest?

    Phillip.
  • If you were running Bind 8 in the chroot jail as documented on the ISC web site, do this:

    Before you run ./configure, do a "export CFLAGS=--static"

    Then ./configure --enable-chroot.

    make

    Then go in and copy the binaries to your chroot jail.

    Then go make sure your chrooted /var/run can be written as the user that named runs as.

    Then go edit your zone files and add "$TTL 84000" to the top of each one.

    Then start named as you did previously.

  • Seems like for a while there they were reporting a hole a week in Bind and Sendmail. Haven't heard much about sendmail in a while (Haven't cared, either, switched to Postfix ages ago.) Bind shows no sign of letting up though. You'd think after a certain point, they'd say "Good GOD! This code SUCKS! Let's redesign and rewrite it!"
  • The closest thing out there currently is Dan Bernstein's DJBDNS [cr.yp.to].

    This comes in two parts- 'tinydns', which only handles serving authoritative data, and 'dnscache' which only handles providing caching DNS services.

    Installation is somewhat complex, but the software works like a charm once you get past that.

  • Relying on /. for security news/instructions is probably the stupidest thing one can do!

    Never trust anyone who tells you not to trust people.

  • Bind: Bug Infested Network Daemon

    The folks who wrote and/or maintain bind had the best of intentions. Bind code filled the need when Arpanet/Internet sites were copying around large host files. I don't wish to denigrate / attack those who helped create and maintain bind, but one cannot ignore the fact that bind is one of the larger infrastructure vulnerabilities we face today. The track record of bind v8 and previous versions casts doubt on the wisdom of trusting bind v9.

    Bind's track record clearly shows it for what it is: a bug-infested and deeply flawed chunk of code that has lasted way past its prime. Bind is to name service as sendmail is to EMail.

    Bind has and very likely continues to suffer from:

    • Buffer overruns
    • %n bugs
    • Denial-of-service attacks
    • Cache poisoning
    • Man-in-the-middle attacks
    • root exploits
    • protocol exploits
    • etc., etc., etc.

    But all is not lost on the name service front. A few alternatives to bind exist now, and several more efforts are in the works. Time and experience will show which efforts succeed.

    For those who cannot become a bind-free site now or in the near future, there are some things you can do to minimize the damage bind code can cause. Consider the following ideas. These ideas are not for everyone, and this list is by no means exhaustive. You might want to:

    • run named on separate hosts (do not put other services on your named server machines)
    • run named in a chrooted environment
    • dedicate a separate file system for named
    • if your OS allows it, mount that separate file system with nosetuid, nodev, etc...
    • run named with ``-u dns'' or better yet ...
    • never run named as root: use a small, well-designed program to listen on port 53 and forward connections to named -or- change your kernel to allow the dns user to use port 53 (on Linux this is a simple change to the inet_bind() function in net/ipv4/af_inet.c)
    • where possible in applications, avoid doing name service lookups; for example log IP addresses instead of hostnames
    • do not run named on your firewall(s)
    • put a firewall(s) between your named host(s) and machines you care about
    • use different named servers for different needs - consider running separate services for:
      • your external authoritative name server (configure to ONLY answer queries for your external domains, no glue, no recursion)
      • your internal / intranet name service needs
      • your production services (accessible by only your production servers, not the Internet or your Intranet)

    If you treat bind with caution, you will be more likely to survive intact until a bind-free solution with a good track record presents itself.

  • here is the short list
    • Big DNS packets compiled in, no patch; yes, some of us must relay to aol.com. The answer that "oversized dns packets are illegal, don't use them" doesn't mean shit to 10,000 users who are trying to mail dear old grandma.
    • The big-todo patch needs to be a compile-time option; simply put, some people have large mail servers and need the extra room.
    • tcpserver should be included in the qmail source. It's necessary for the smtp client (inetd sucks and is a waste of time here) and should probably be used by pop3.
    • .fastforward and maildrop are 2 other packages that should be included with qmail.
    • The ability to deliver messages to the same domain over a single SMTP connection (qmail opens up separate connections for each message relayed)
    • A good IMAP client

      Qmail works great if you're a programmer, or if you have LOTS of time. Some people do not. Qmail works out of the box for 90% of what we do. The other ten percent could be made easier if some of the extremely common "add-ons" were merged into the source.

  • djbdns requires separate machines for almost everything

    From an ISP point of view, you really want to do this. Servers which customers use to lookup names should not be the servers which you use to store customer zone files. This ensures that when domains get redelegated away from your nameservers, that your own customers always see the correct (i.e. as delegated) zone contents.

    Qmail appears abandoned.

    What a pity. I use qmail in several places and it really works well. But I won't stop using it even if it is abandoned because I have the source, and ICHI (I Can Hack It).

  • Because the OpenBSD team just goes through the code and kills anything that looks like it could possibly be an overflow. They change lots of code that MIGHT be a security risk; they can't report 12 thousand maybes.
  • How, despite the thousands of eyes that look at it every day, did these problems not reveal themselves earlier?

    Because only very few of those eyes are looking at the code. Most of them are just looking at a list of programs running on their system with BIND in it. They never bother (nor have time) to look at the actual code.

  • ... are bound to happen.

    (Sorry, bad pun, couldn't resist :-) )


    --
    Fuck Censorship.
  • Actually, bind, sendmail and wu-ftpd have had a really bad history of awful bugs. The subject of this message, "WuFTPD: Providing *remote* root since at least 1994 [securityfocus.com]", really sums it up pretty well. As mentioned on the Cert page, BIND has had TWELVE Cert Advisories [cert.org], and this makes 13. They even named the 11th one "Continuing Compromises of DNS servers", though I suppose that's just the infamous NXT bug.
  • The upgrade from BIND 4.x to 8.x was very painful; they changed nearly everything about the config file format.

    Does anyone here know about what (if any) compatibility issues there are going from 8.2.x (installed on most machines today) to 9.1 ?? Did they change stuff in the config file format, again?

  • ...slashdot's DNS hasn't been compromised and someone is forging the ENTIRE site and ALL the posts!!.. :)
    Cheers...
    --
    "No se rinde el gallo rojo, sólo cuando ya está muerto."
  • Ok, just to jump into the fray (sorry if someone else has asked this question, but it's late where I am), does anyone know how to chroot bind 9? I looked at the docs, looked on the web and have asked on the mailing list. No one seems to know. I currently run bind chrooted (I know it's possible to break out, but every little bit helps) and would like to do the same with bind 9. To anyone on the bind development team reading this, or anyone who develops internet-service software (ftp, http, whatever): documentation that details how end users can at least add an additional layer of protection when, not if, bugs and exploits are discovered would be GREATLY appreciated. Don't get me wrong, I applaud your efforts, but sometimes finding information, even when you think you know what you're doing, can be kinda frustrating. 8*). Also, anyone have problems upgrading to v9? I am especially interested in anyone who is doing dynamic dns with it. Last one to upgrade is a rotten egg! 8*)

    SealBeater
  • The CERT/CC has recently learned of four vulnerabilities spanning multiple versions of the Internet Software.......

    You just have to wonder what "recently" means. 90 days? Time to cancel the LAN party and have an update party.


    ________

  • I guarantee you that Akamai will patch far faster than microsoft did their own DNS servers.

    Except that Microsoft were running their own Microsoft-based DNS servers, and were thus not affected by these latest announcements.

    Microsoft's mistake was to put all their servers on one subnet and to allow a change to a mission-critical router without proper approval, as far as I can work out.

    The interesting thing is that their marketing machine managed to hush this up so well: if it had been Cisco, they would have been toast.


  • [sigh] I note that you've been moderated down as Flamebait. Apparently, someone is moderating based on emotion, not rational discussion, again.

    And so have I, despite the fact that my point was rational, intelligent, on topic and clearly posted.

    Yup, we've got some wonderfully intelligent moderators these days.

    It's okay. I'll just go back to the home page, hit refresh until I get moderator access (it'll only take two or three times), and then I'll fix the stupid moderation going on (in other discussions, of course).

    Read the moderator guidelines, you cheese-eating dweebs.


  • At work I've got a Windows NT Server I slapped together from parts left over from a workstation that was too decrepit for use as a workstation. It's got a number of handicaps working against it, including:

    486 DX/2 - 50 Mhz processor

    Only 32 MB of DRAM

    BIOS patch drivers running in real mode

    Runs a Telnet server, DNS, and web server

    Goofy BIOS/Video card combination that dies after a warm boot

    Windows NT 4.0 Server

    That *is* impressive with all those handicaps. Oops, especially with that last one. [grin]

    Yeah, I've got a friend who has a small ISP up in Maine, and he was running the whole damned thing off 486s and Windows NT Server. Except for the uptime-limiting Windows-esque reboots, it was stable.

    Then, when Microsoft came a-knockin' to do a software audit, they screwed him.

    I'd propose that, in a business situation, you really have to keep away from the pirated software, and the overheads involved in making sure that you have that license handy for the copy of Windows NT Server on that machine may negate the savings of using an OS that you just had kicking around (and therefore didn't have to purchase again).

    Even with that stability, I'll stick with my Linux. Aside from BIND (!), it's secure and stable. I'm running DNS, web (Apache), SMTP (sendmail), POP, telnet, Windows file and printer sharing (SAMBA), DHCP, NAT gateway to my LAN, and PPPoE to connect to my DSL provider. And the damned thing (a Pentium 100) still spends most of its CPU cycles on SETI@Home.

    This would rule it out as a candidate for real use, right? Wrong! It NEVER dies (it can't; it won't reboot except for a power cycle). I take it down for the odd service pack; otherwise it's always there.

    [grin] Yeah, I know. I hate those. That's the kind of computer that you simply can't throw out, even though it's of marginal usefulness.

    I've got a great 486 motherboard. Sure, it's only VLB, but it's a 486DX4-100 with a load of cache RAM soldered to the board. It's stable, it's fast (for a 486!), and I have a VESA video, IO and network card for it. And it's narrow.

    And despite all those good things, it's also got a really annoying problem: the CMOS memory doesn't stay. So, I tried connecting an external battery. No better. I tried desoldering the CMOS battery from the board and replacing it, figuring that the external battery connections were bad. Still didn't work. Something is obviously fried. So, every time I have a power failure (not very often), I have to manually intervene, tell it the size of the hard disk attached to it, etc. Pain in the ass, but it's too good a board otherwise. You know what I mean - I've got a nice Socket-7 board kicking around; it's clockable to 233MHz and will take an MMX processor. It's got PCI slots, integrated I/O, much nicer board by specs than that 486. And yet, for anything mission critical, I'd still take that 486 any day.

    That Socket-7 board feels like it's got static damage, but I'll be damned if I know how. I bought it new, unsealed the factory box, and have always used a wrist strap, static baggies and a good anti-stat workmat underneath it. The 486, on the other hand, came from a crappy clone builder, where you know they carried it across a carpet on a dry winter day.

    It's currently at 42 days; it was past 150 when I took things down because I tweaked IP addresses for our network. (Yeah... NT needs to be rebooted to work right... it's not perfect.)

    No, but that *is* impressive.

    My all-time uptime record for Windows is 66 days, on Windows 95B. Of course, that's only possible when you're running Windows 95 under laboratory conditions, and only then with the 49.7-day timer-rollover crash bug fixed.

    The point is that NT is stable, you just have to treat it like a server instead of a workstation.

    Servers don't waste resources on GUIs.


  • But do you really think linux losers spend their time trying to find buffer overflows in software? Nah, they spend their time downloading exploits written by others, writing WinAMP skins (or whatever it is called on linux), and playing quake.

    I like what the Linux losers seem to do best. They write stuff. Stuff that lets me do kewl things that impress my boss and save my IT budgets for grander things.

    Like really blowing away the MCSE idiots at the office by setting up and running a domain server, web server with caching proxy, mail server, SAMBA printer server, DHCP server and NAT firewall - with an uptime that blows away the best that they've done so far with Windows 2000 - for the 17 user LAN in a division of a Fortune 500 company - for under $200.

    Fine, our website only gets about 50-60 distinct hits/day. But, the server processes about 300 e-mails a day, including large AutoCAD DXF attachments. The printer attached to it is always running. And we've saturated our T1 a few times now, through the server's NAT.

    Yup. <$200. Old but tough-as-nails Compaq Pentium 100 with 48 megs of mismatched SIMMs kicking around - free. 4.3 gig Maxtor IDE hard disk drive - left over from an upgrade. Operating system and ISP-on-a-disk - Red Hat 6.2, free download, $0.50 blank CD-R, ~$0.12 for bandwidth. Couple of el-cheapo PCI network cards with gold "MADE IN TAIWAN, R.O.C." stickers on them? ~$30. Time to set it up? A few hours of my time, ~$150.

    Stats? Check 'em out yourself. I've cut out lines that I didn't deem necessary to judging the performance of this server.

    [lwade@www /]$ cat /proc/cpuinfo
    processor : 0
    vendor_id : GenuineIntel
    model name : Pentium 75 - 200
    cpu MHz : 99.717487
    bogomips : 39.73
    [lwade@www /]$ top

    1:37pm up 75 days, 19:29, 1 user, load average: 1.04, 1.05, 1.01
    52 processes: 50 sleeping, 2 running, 0 zombie, 0 stopped
    CPU states: 1.1% user, 2.1% system, 96.6% nice, 0.0% idle
    Mem: 46848K av, 45524K used, 1324K free, 6212K shrd, 1588K buff
    Swap: 153176K av, 15632K used, 137544K free 19232K cached

    The nice CPU usage there is represented entirely by SETI@Home's UNIX/Linux client. If not for that, the little old Compaq wouldn't have much to do with most of its CPU cycles.

    I think that the people who contribute to, and are the most ardent advocates of an operating system with that capability, can't possibly be accurately described as losers.

    When you can do that with Windows (any version), with that kind of uptime, on a Pentium 100, lemme know.


  • Most security leaks are a direct consequence of using languages like C. People claim it is possible to program safely in C, however, incidents like this prove them wrong.

    [sigh] I note that you've been moderated down as Flamebait. Apparently, someone is moderating based on emotion, not rational discussion, again.

    Years ago, I used to be a very fluent assembly language programmer. I haven't done it in years, and I kind of lost interest in programming when I saw that the higher-level languages were taking over.

    For anything that has to be rock-solid-stable and predictable, like core operating system components and security, relying on higher-level programming where your code is being mangled by a compiler and linked to potentially faulty libraries, scares the hell out of me.

    Look at Windows 9x as a perfect example of why this is a problem. You install a new application. It swaps all the DLLs for its own versions. Because the DLLs are changed, anything which had a dependency on those DLLs will be affected.

    What will happen?

    Well, to quote Ren Hoek from the legendary History Eraser Button episode, "Maybe something bad, maybe something good. We just don't know."

    Eudora has caused a fatal exception error in CTL3D.DLL

    For security, the vulnerabilities are even more subtle, and I believe that they're unavoidable.

    The only way to ensure that you have complete control over what is actually running is to write it all yourself. Assembled from mnemonics, not compiled from high-level code. All your own subroutines, written in your hand, not packaged libraries and other cop-outs.

    High level programming languages are great for community college programming students. But I think the 'Net would be a lot more secure if we kept them out of our operating system core components.

    And yes, writing in lower level languages can take a very long time. And, during development, some of the crashes are absolutely spectacular. But if you think about how much a bug that crashes an operating system like Windows 2000 costs to productivity worldwide - especially in an economy where every hiccup of a webserver slams NASDAQ into the guardrail like a Honda Civic being edged off the road by a Plymouth TrailDuster - spending a little more time to avoid the ambiguity of compilers and linked libraries is well worthwhile.

  • Cool logo! Anyway, it's in the wild. This is known for two reasons:
    1. I knew about this about a day before the /. post, and so have many other folks. Manual exploits are obviously out, and script kiddies are bound to follow within another 24 hours.
    2. It's posted on /. - EVERYBODY knows!
    One way or another you should upgrade because any security risk that is preventable is too much of a risk...

    The problem with capped Karma is it only goes down...
  • by sulli (195030)
    Even the SF Chronicle [sfgate.com] did a story before /. posted the damn article! (But there was much less useful info in the SF Gate article, other than the old bugaboo "Can bring down web sites! And whole sections of the Internet!!")
  • All the better. If the advisories were released before a fix was available, the whole damned net would fall to its knees under the hordes of script kiddies.
  • Let's suppose your fairy godmother appears and offers to use her magic to make your system safe and secure.

    As part of the way the magic works, in order to remove all buffer overflows and memory leaks and the like, it will cause all your programs to use twice as much cpu horsepower.

    Would you take her up on the offer? Is it worth sacrificing some horsepower for security and safety?

    You can program completely safely in assembly language -- heck, even directly in binary using a hex editor. It's just not productive to do so. Even C, high-level by comparison, does much of the bookkeeping for you. Similarly, using even higher-level languages to achieve type safety, bounds checking, automatic memory management, etc. is just an extension of getting the computer to automate more of the tedious bookkeeping of programming. Isn't it worth it? For *most* applications (esp. bind), is the efficiency of C *so* important?

    Not trying to start a flamewar. Just some thoughtless remarks to piss off people who hate high level languages.
  • Because all those eyeballs are busy studying the p0rn instead of the open source.
  • You'd think after a certain point, they'd say "Good GOD! This code SUCKS! Let's redesign and rewrite it!"

    That's exactly why they rewrote BIND 9 from scratch.

  • The scary thing is that I first heard about this yesterday on the cnn.com webpage! (Okay, so I could have heard about it first on Bugtraq if I had been religiously reading it daily, but I hadn't.)

    Fortunately I can ssh into my server at home, so I had it upgraded within an hour.

    Another scary thing is the CERT graph [cert.org] showing the exploit reports for the NXT bug. I definitely don't want to have an un-upgraded BIND in the peak of that curve.

  • by Chris Burke (6130) on Tuesday January 30, 2001 @02:31AM (#471391) Homepage
    All software has bugs. OK. BIND has a track record of having security-related bugs.

    Or rather, a track record of having known security-related bugs, because it is so widely used and hence so widely scrutinized. Whatever it is that you think has fewer bugs because of fewer known security issues, ask yourself if it is as widely deployed and as widely scrutinized as bind.

    Maybe we should be more forgiving to Microsoft security issues then?

    As long as the patch is released in a timely fashion (which means a day or two tops), and they don't attempt to cover up the "issue", then yes we should be. Unfortunately, neither of these things describes Microsoft behavior in most cases.

  • by Carl (12719) on Monday January 29, 2001 @11:30PM (#471392) Homepage
    Add the following line to your /etc/apt/sources.list file:

    deb http://security.debian.org/ potato/updates main

    Then do a:
    apt-get update
    followed by a:
    apt-get upgrade

    DONE.
  • by msaavedra (29918) on Monday January 29, 2001 @11:23PM (#471393)
    I don't mean this as a troll, but it seems that BIND has more security vulnerabilities than any other piece of software. I know someone brings this up on every DNS related post, but I think more people should try djbdns [cr.yp.to], with which I have been very impressed since I started using it about six months ago. I have heard that BIND 9 is supposed to be an improvement, but with BIND's history of security problems I'm not sure if I would trust even this new improved version. I think it is better to go with software that has already demonstrated its good security, like djbdns has.
    ---------------------------
    "The people. Could you patent the sun?"
  • by bconway (63464) on Tuesday January 30, 2001 @04:42AM (#471394) Homepage
    First, stay away from Bind 9. It has yet to incorporate all the features of version 8, and is still in its infancy. There are many security holes that have been found in it, and I suspect many that have not. You'd be best to stick with 8.2.3.

    Second, and more importantly, DO NOT RUN A NAMESERVER AS ROOT. There are -u and -g flags when starting named that let you set which user the nameserver will run as, much the same way that IRC servers are run as unprivileged users. Then if the server is compromised, you've only lost an account and not the whole system, assuming no one can hit you with a local exploit.
  • by dimator (71399) on Monday January 29, 2001 @11:22PM (#471395) Homepage Journal
    How many of you think this story got posted just to use that cool icon [slashdot.org]?


    --
  • by biglig2 (89374) on Tuesday January 30, 2001 @02:55AM (#471396) Homepage Journal
    If you can't remember if you're running BIND or not you probably shouldn't ;-)
  • by cluge (114877) on Tuesday January 30, 2001 @04:34AM (#471397) Homepage
    From what I understand and have read, BIND 9 is a total rewrite, supposedly with security in mind. No code was used from BIND 8 or BIND 4. BIND 8 still had a great deal of code from BIND 4, which itself was written VERY VERY long ago in a "programmers' drunken orgy" of coding.

    BSD users are still screwed if they downloaded the source and compiled it themselves. The changes to BSD's BIND 4 are only for those people who used OpenBSD's implementation of BIND 4.

    There are several alternatives, and having used them all, we had to switch back to bind because of interoperability problems or performance issues. Some solutions are...

  • by cluge (114877) on Tuesday January 30, 2001 @04:16AM (#471398) Homepage
    It all depends on whether you have machines/IPs to spare. djbdns requires separate machines for almost everything: if you want a load-balancing DNS server, run this; a resolver, run this; a master/root server, run this. While I use a good deal of Bernstein software, and generally really like it, djbdns wasn't up to snuff. The other thing I'm worried about is that the software will be left to rot.

    Qmail appears abandoned. Many people are making patches, but what a pain in the ass: get the source, then apply the 3 patches you need and hope they work together. Qmail is a great program, BUT if the author isn't going to keep improving it, then he should turn it loose to those who are.

  • by marvinglenn (195135) on Monday January 29, 2001 @11:14PM (#471399)
    As a partially informed/ignorant Linux user, I went to see if I was running "bind"...

    It's probably worth mentioning that the program "named" (as seen in the service control activity panel of LinuxConf) is "bind".
  • by Tracy Reed (3563) <treed@ultrav i o l e t . org> on Monday January 29, 2001 @11:20PM (#471400) Homepage
    I switched to djbdns a few months ago because I just KNEW something like this would happen. Now I am glad I did! Bind is such a clusterf*ck. :(

    http://cr.yp.to/djbdns.html [cr.yp.to]
  • by demi (17616) on Tuesday January 30, 2001 @05:55AM (#471401) Homepage Journal
    I was running bind 8 in a chroot jail, and when I built bind 9 it barfed a little, but all I really needed to do was make the /var/run under the chroot directory world writable. Bind 9 also complained about not having a $TTL directive in my zone files. Once I fixed those things, I was up and running without having to change named.conf.

    I found the following things helpful:

    named -g -u <user> -t <chroot_dir>

    This runs named in the foreground without writing to log files and lets you see what's going on with it for troubleshooting. I also used ktrace to good effect: use truss on Solaris, strace on Linux and ktrace on the BSDs, and you'll see what named is trying to do (in particular, which files it's trying to open).

    I'm running OpenBSD and (now) BIND 9.1
  • by bluehell (20672) on Monday January 29, 2001 @11:22PM (#471402)
    Get the not-yet-announced RPMs of bind-8.2.3 from the updates section of Red Hat's FTP server [redhat.com] or the mirrors [redhat.com]. They go back even to Red Hat Linux 5.2.
  • by karot (26201) on Tuesday January 30, 2001 @03:40AM (#471403)
    This is not true. OpenBSD have of course merged the required fixes already, and they can be found at:

    OpenBSD 2.8 http://www.openbsd.org/errata.html [openbsd.org]
    OpenBSD 2.7 http://www.openbsd.org/errata27.html [openbsd.org]

    The rebuild and install is trivial.

    --
  • by ivarch (92123) on Tuesday January 30, 2001 @03:34AM (#471404) Homepage
    Couple of things changed, nothing drastic. I changed over to 9.1.0 this morning and basically had to delete 1 line (about fetch-glue) and put in another (auth-nxdomain yes|no). That was just 2 changes in /etc/named.conf, for something like 337 zones on a primary server. Not painful at all. :-)

    It's all in the docs/misc/migrating file in the 9.1.0 tarball...

  • by doctor_oktagon (157579) on Tuesday January 30, 2001 @12:28AM (#471405)
    Well they're on /. , i don't think they can be "in the wild" much more than they are now.

    Because this announcement is on slashdot does NOT imply there are exploits available in the wild for these security holes.

    An exploit "in the wild" implies it is generally available to any script k1d that wants to download it, and as yet there are no "known" attack exploits available on the popular crack download sites.

    This does not mean there are no exploits available. A very skilled cracker (or hacker doing it on a theoretical basis) may already have worked out what code he can get past the BIND signature parser buffer overflow, and thus what he can get the CPU to run.

    I hasten to add, though, that because of the way BIND parses its input to this buffer, the attacker cannot actually run arbitrary code, but only code containing characters which can get through the parsing routine.

    Excellent description at The Register [theregister.co.uk].

  • by h2odragon (6908) on Tuesday January 30, 2001 @12:32AM (#471406) Homepage
    I can report scans of port 53 with "interesting" payloads seen as early as 2am GMT.

    The BIND 4 hole(s) is/are going to be a BITCH to exploit -- certainly not impossible, but hard enough that it won't be surprising if an exploit never sees wide distribution. Quoth the original advisory [pgp.com]:

    "In order to trigger this overflow, an attacker needs to get BIND to cache an NS record with a very large length. Furthermore, the attacker needs to cache a record for the resolution of the NS record that contains one of the problem conditions for the logging. This is achievable by sending a query to a recursive name server, asking it to resolve a large name that is under the authority of a malicious name server. The malicious name server then needs to refer the request to another name server also with a large name, and provide an additional record giving an invalid address for that name server.

    The limitations placed upon the character set allowed in domain names makes the construction of a viable return address difficult. However, there is a potential for an attacker to make the name server return into memory that the attacker has forced the name server to allocate. In this case, vulnerability is contingent upon the location of the heap and the amount of memory available, as well as whether or not the operating system has a policy of lazy swap page allocation as opposed to an eager reservation policy. COVERT has verified that it is possible to exploit named running under Linux by growing the heap to sizes that far exceed that amount of memory and swap available. This was performed by utilizing specific patterns of memory allocation that maximize untouched memory."


  • by Barbarian (9467) on Tuesday January 30, 2001 @12:24AM (#471407)
    I doubt djbdns has received the attention that BIND has. If djbdns was used on every server instead of BIND, there'd probably be problems found with it too.

  • by nchip (28683) on Monday January 29, 2001 @11:59PM (#471408) Homepage
    Assuming that your dns server hasn't been compromised!

    When making security updates, first verify that the debs really are the ones announced on:

    http://lists.debian.org/debian-security-announce-01/

    A mailing list you should be subscribed to, if you run public services with debian. Relying on /. for security news/instructions is probably the stupidest thing one can do!
  • Most security holes come down to two things. One is allowing unvalidated input from untrusted users to be passed to any sort of general-purpose command interpreter. This was a prime source of holes in early CGI scripts. For example, if you ask a user for an email address and then use the mail utility to send mail to it, and the user types me@mydomain.com; cat 'hax0r::0:0:lee7 hax0rs ownz you sux0rs:/:/bin/sh' >> /etc/passwd then you've just lost your machine.

    The other is accepting unchecked amounts of input from untrusted users. Remember that C (unlike, for example, Pascal, Java or LISP) does no bounds checking, so you have to implement bounds checking yourself.

    If you do the equivalent of:

    char buffer[BUFFLEN];
    int i = 0;

    while (!feof(stdin))
    {
        buffer[i++] = getchar(); /* no bounds check: runs off the end of buffer */
    }
    buffer[i] = '\0';

    That's going to lead to a buffer overrun which someone can exploit. If you do the equivalent of:

    char buffer[BUFFLEN];
    int i = 0;
    int maxinput = BUFFLEN - 1;

    while (!feof(stdin) && i < maxinput)
    {
        buffer[i++] = getchar();
    }
    buffer[i] = '\0';

    Then you're reasonably safe. But to be safer still, don't use C to write daemons which take input from untrusted third parties, and don't run daemons as root - give each its own separate role account.

  • by ASCIIMan (47627) on Monday January 29, 2001 @11:19PM (#471410)
    One Ring to rule them all,
    One Ring to find them,
    One Ring to bring them all
    and in the darkness BIND them.

    Hmmm... Interesting.

  • by DrWiggy (143807) on Tuesday January 30, 2001 @03:58AM (#471411)
    Does anybody out there have links to some good reference material on this?

    Sure. There is a mailing list over at SecurityFocus called SECPROG that discusses secure programming practices. The idea is to produce a white paper that describes how to write secure code. The draft can be seen here [securityfocus.com] and is probably the definitive how-to in existence at the moment.

    Hope that helps.

Passwords are implemented as a result of insecurity.
