Bug

Wu-ftpd Remote Root Hole 515

Ademar writes: "A remotely exploitable vulnerability was found in wu-ftpd, which is distributed in all major distros. CERT has a (private) list to coordinate this kind of disclosure so vendors can release updates together, but Red Hat broke the schedule and released its advisory first. You can see the full advisory from SecurityFocus on Bugtraq, but here is a quote: 'This vulnerability was initially scheduled for public release on December 3, 2001. Red Hat pre-emptively released an advisory on November 27, 2001. As a result, other vendors may not yet have fixes available.'" CNET has a story about this too.
This discussion has been archived. No new comments can be posted.


  • Re:Nice. (Score:2, Insightful)

    by Wells2k ( 107114 ) on Wednesday November 28, 2001 @08:58PM (#2628002)
    Perhaps, but think of it another way: Red Hat is trying to protect its own customers by producing and releasing a fix as soon as possible. The fact that other distributors are falling behind on this mark is truly not Red Hat's fault.


    You don't see Microsoft doing this, do you? :)

  • What's ethical? (Score:3, Insightful)

    by L-Wave ( 515413 ) on Wednesday November 28, 2001 @09:00PM (#2628016)
    This raises the question of ethics: is it more ethical to keep quiet about a hole in software that people run and store important data on until it's fixed, or to tell the public, in which case the people affected become "more" vulnerable?

    Personally, I would rather be told of the hole and advised to turn off the daemon than keep running the daemon without knowing about the hole..... some people think ignorance is bliss..... not me. =)
  • Well, I'll bash MS, and I'll bash the GNU and Linux guys for the same thing. Why was this not released SOONER?

    The people who would really use the exploit already know about it in their cracker circles, so why are we limiting the public in this knowledge? Just tell us and we'll shut down the FTPs or temporarily switch the access to a different daemon while you write a patch for it.

    Again, this is security by obscurity, and shame on the OSS community for trying to hide it!
  • by tim_maroney ( 239442 ) on Wednesday November 28, 2001 @09:08PM (#2628075) Homepage
    The attacker must ensure that a maliciously constructed malloc header containing the target address and its replacement value is in the right location in the uninitialized part of the heap. The attacker must also place shellcode in server process memory.

    Color me stupid, but that doesn't sound too feasible for a remote hack. How would you muck with the malloc heap this way? DoS, maybe, but unless there's something I'm missing, not too great for root access. Let me know if I've missed something.

    Tim
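
The advisory text quoted above describes the classic malloc-unlink trick: if an attacker can plant a fake chunk header in heap memory that the allocator later frees or coalesces, the unlink step becomes a write of an attacker-chosen value to an attacker-chosen address (typically a function-pointer slot, pointed at shellcode already placed in process memory). A minimal sketch of the idea, using simplified, hypothetical field names rather than the real glibc structures or the actual wu-ftpd exploit:

        /* Sketch only: how a corrupted doubly-linked free list turns into
         * an arbitrary write.  Field names are illustrative. */
        struct fake_chunk {
            unsigned long prev_size;
            unsigned long size;
            struct fake_chunk *fd;   /* forward pointer in the free list */
            struct fake_chunk *bk;   /* back pointer in the free list    */
        };

        /* What the allocator's unlink step effectively does when it takes
         * a chunk off the free list: */
        void unlink_sketch(struct fake_chunk *p)
        {
            p->fd->bk = p->bk;       /* write #1 */
            p->bk->fd = p->fd;       /* write #2 */
        }

        /* If p->fd and p->bk are attacker-supplied (fd pointing just before
         * a function-pointer slot, bk pointing at the shellcode), write #1
         * stores the shellcode address into that slot, and the next call
         * through it runs attacker code as the daemon's user (root, if the
         * daemon has not dropped privileges). */

Getting the fake header and the shellcode into exactly the right places in the daemon's heap is the fiddly part the advisory alludes to, but it is feasible when the server builds attacker-supplied strings on that same heap, which is why this class of bug gets rated as remote root rather than just DoS.
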
  • by Faré ( 6486 ) on Wednesday November 28, 2001 @09:08PM (#2628076) Homepage
    Using the C language to implement anything but the lowest-level layers of a system is plain incompetence, all the more so when security is involved. The criterion is simple: if there is ANY use of dynamic allocation, you should use a safe language like OCaml, Common Lisp, Mercury, Perl, Python, etc. [Of course, C may be used when *implementing* the dynamic allocation.]
  • by Cato the Elder ( 520133 ) on Wednesday November 28, 2001 @09:12PM (#2628098) Homepage
    I haven't.

    It's not like only Red Hat distro users can now get a safe version of wu-ftpd--it's just that not everyone (necessarily) has the packages ready for all their configurations.

    If you have 6 boxes, better start checking versions and installing newer ones. Sure, it sucks, but it's better than being surprised when your servers are "owned".
  • by child_of_mercy ( 168861 ) <johnboy AT the-riotact DOT com> on Wednesday November 28, 2001 @09:13PM (#2628103) Homepage
    To paraphrase Keynes:

    "When the facts change, I change my mind. What do you do?"


    Seriously?
  • by Pinball Wizard ( 161942 ) on Wednesday November 28, 2001 @09:25PM (#2628148) Homepage Journal
    Now wait a minute. Here on /., MS gets slammed because they want Bugtraq and whoever else to wait before they publicize a security hole until a fix can reasonably be made.

    Now you guys are criticizing Red Hat for releasing information too quickly?!

    Make up your minds. Either it is a Good Thing to release this sort of information to the public or it is not. IMO, if CERT is withholding information from the public, that just gives a wily cracker that much extra lead time to perform exploits. Whereas if the info were just released in the first place, at least people could turn their FTP servers off, or switch to something like pure-ftpd, which has yet to be cracked.


    I agree with Red Hat on this one. They did people a favor by releasing the information.

  • No surprises here (Score:5, Insightful)

    by Broccolist ( 52333 ) on Wednesday November 28, 2001 @09:26PM (#2628155)
    Wu-FTPd has had a long history of security holes. It's practically the BIND of FTP servers.

    I looked through the source of Wu-FTPd some time ago, when I was interested in adding support for an encrypted form of FTP proposed in a recent RFC (the protocol never caught on). What I found scared me. Most of the server is one humungous 8000-line C source file which appears to do pretty much everything.

    Having quite a bit of experience with the FTP protocol, I expected to immediately understand what was going on, but at first glance, this code baffled me. It's full of pointer arithmetic and chains of if-statements performing mysterious, undecipherable operations on fixed-length arrays. It's not divided into clear levels of abstraction and I had difficulty telling what most functions were supposed to do, let alone what they actually did.

    Anyway, I immediately gave up any thought of adding any new features to this godawful mess. Considering all the weird cruft that goes on in that code, it's no surprise to me that people are constantly finding new security holes in it. There are other featureful FTP servers out there; it's hard to see why distributions continue to include a bug-ridden program like Wu-FTPd by default.

  • by orkysoft ( 93727 ) <orkysoft@m y r e a l b ox.com> on Wednesday November 28, 2001 @09:31PM (#2628181) Journal
    The fact is, the blackhats have known about this vulnerability for some time, so fix or no fix, you need to be aware of it, so you can disable your ftpd if you think the risk is too high.

    Not disclosing this ASAP will only give you a false sense of security, and will keep you from making your own risk assessment.

    Hell, why do you think Microsoft wants to limit disclosure? To empower the sysops? ;-)
  • by noahm ( 4459 ) on Wednesday November 28, 2001 @09:35PM (#2628196) Homepage Journal
    There have been a number of posts here claiming that the Linux vendors are being hypocritical by claiming to support full disclosure while maintaining a private list for coordinating announcements of security issues like this one. However, they are missing the point. This list is not against full disclosure in any way. It is simply a way for vendors to coordinate their fixes before the exploit is widely published. At no point are the vendors discouraging the vulnerability's publication. They are merely delaying the announcement so they can coordinate the availability of their updates.

    The closed source vendors who are against full disclosure would prefer that the vulnerability is never announced, which would (according to them) allow them to take their time and roll the update into their next service pack release or whatever.

    And to the people who suspect some kind of nastiness on Red Hat's part for their early announcement, the individual at Red Hat who claims personal responsibility has already apologized on the private list, and has admitted to erring. The private list has existed for a long time and has worked very well in the past, allowing several vendors to all release fixes at once to a previously unknown vulnerability. It would have worked fine again in this case, except for the mistake by Red Hat.

    noah

  • by kimihia ( 84738 ) on Wednesday November 28, 2001 @09:35PM (#2628198) Homepage

    Well then close the service off. An unusable service is better than a r00ted server.

    It is good to know that it could potentially be rooted. Being ignorant of security holes does not make it secure - no matter what Scott Culp may tell you.

  • by Atilla ( 64444 ) on Wednesday November 28, 2001 @09:38PM (#2628206) Homepage
    Any decent OS, whether it is Linux, *BSD, BeOS, Windows, or whatever, can be made secure if you actually take the time to set it up properly.

    I know it's tempting for all the [insert your OS of choice] zealots to wave their flags when another OS becomes known to have a security exploit. But for fuck's sake, just because wu has a hole in it doesn't mean that the entire OS is scrap.

    oh by the way -

    SNORT [snort.org] is a NIDS (network intrusion detection system) that can help you detect and prevent a good number of network attacks. IIRC, it has some Windows plugins too.

    DEMARC [demarc.org] is a web-based console for SNORT, plus a pretty good host/service monitor.
  • by x-empt ( 127761 ) on Wednesday November 28, 2001 @09:42PM (#2628221) Homepage
    Now wait a minute. Here on /., MS gets slammed because they want Bugtraq and whoever else to wait before they publicize a security hole until a fix can reasonably be made.

    Microsoft is bashed because they take so long to release a fix that they know will work. RedHat releases a FIX immediately when they know it works.

    Which company would you rather have a support/maintenance contract with? Yeah, I thought so.

    CERT had knowledge of the bug, a patch available, and quality assurance done on that patch... yet they still asked for a delay in publicizing the bug. Why? The question should not be about Red Hat, who acted responsibly, but about why CERT is causing holdups that give people in the underground communities more time.

    Hmm... I wonder if the FBI, NSA, or CIA is on the list for "early notifications".... FBI intelligence probably uses them.
  • by Ambassador Kosh ( 18352 ) on Wednesday November 28, 2001 @09:43PM (#2628227)
    This absolutely should have been released as soon as it was found. And shame on CERT, Red Hat, the wu-ftpd authors, etc. for trying to hide it. This is inexcusable, because the crackers and kiddies have probably had it for several weeks now. If you find a security flaw, report it immediately and publicly so that those who could be affected can turn off the service if that is what it takes.

    However, considering wu-ftpd's quality record, if you are running it on your box you don't care about security already, so it could be worse. Overall, people should probably use ProFTPD or maybe even Zope for their FTP server.

    On a plus note, there are wu-ftpd packages in incoming for sid, and those might have the fixes people need for Debian boxes. If you are running wu-ftpd and refuse to use something else, try those.
  • HTTP vs. FTP (Score:5, Insightful)

    by rcw-home ( 122017 ) on Wednesday November 28, 2001 @10:02PM (#2628333)
    HTTP can't really offer all that FTP does in terms of file transport.

    HTTP really is all that.

    HTTP/1.1 supports, among other things, file resuming via a standardized header (Range:) and pipelining (whereas FTP's control port+data port means n+1 TCP connections). HTTP can give you a file compressed the way you want it - and in the language you asked for - without filename hacks. HTTP's If-Modified-Since: header makes it more cacheable. In addition, most HTTP server implementations are more flexible - they can authenticate against things other than the local account database, and there is a widely implemented standard for HTTP over SSL - HTTPS. CGI is also more pervasive and useful than SITE EXEC.

    Let FTP die the death it has so long deserved.
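
For what it's worth, the "resume" capability usually cited as an FTP advantage is a one-header affair in HTTP/1.1. A minimal sketch in C of a ranged request over a plain socket (host, path, and port are placeholders; minimal error handling; it assumes the server honours Range:):

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <netdb.h>
        #include <sys/socket.h>

        /* Resume a download from byte `offset` with an HTTP/1.1 Range
         * request; dumps the raw response (headers + body) to stdout. */
        int http_resume(const char *host, const char *path, long offset)
        {
            struct addrinfo hints = {0}, *res;
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo(host, "80", &hints, &res) != 0)
                return -1;

            int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
                freeaddrinfo(res);
                return -1;
            }
            freeaddrinfo(res);

            char req[512];
            snprintf(req, sizeof req,
                     "GET %s HTTP/1.1\r\n"
                     "Host: %s\r\n"
                     "Range: bytes=%ld-\r\n"   /* resume from `offset` */
                     "Connection: close\r\n\r\n",
                     path, host, offset);
            write(fd, req, strlen(req));

            char buf[4096];
            ssize_t n;
            while ((n = read(fd, buf, sizeof buf)) > 0)
                fwrite(buf, 1, (size_t)n, stdout);

            close(fd);
            return 0;   /* a cooperating server answers 206 Partial Content */
        }

One TCP connection carries both the request and the data, and the same connection could carry further pipelined requests; FTP needs the control connection plus a fresh data connection (and REST + RETR) to do the same thing.
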

  • by The Pim ( 140414 ) on Wednesday November 28, 2001 @10:12PM (#2628384)
    When the facts change, I change my mind.

    The facts did not change a whit. This is just another in a long train of gaping holes in critical software, which you must have been aware of. Either you never thought to ask yourself, "What if this bug affected a service that I rely upon?" (in which case you were intellectually lazy), or you failed to appreciate the impact it would have (in which case you erred in judgement). It happens, I know, but don't make excuses.

  • by psamuels ( 64397 ) on Wednesday November 28, 2001 @10:20PM (#2628415) Homepage
    I simply question the view that HTTP is a simple (and better) replacement for FTP.

    For uploads, FTP is still probably better, if only because nobody seems to use the HTTP PUT command.

    For downloads, though ...

    • both require a new connection for each dir listing or file transfer - except HTTP/1.1 which can reuse a connection. HTTP wins.
    • FTP requires an additional TCP connection for the control info. More setup and teardown cost. HTTP wins.
    • Many sites are already running an HTTP server, so using that for file transfer means one less daemon. Mitigated by running ftpd out of inetd, which most people do, but still ... HTTP wins.
    • HTTP can use auth methods other than plaintext, and can easily have different sets of auth'd users in different directories (without using Unix permissions, which can occasionally get clunky, or you can use Unix permissions if you prefer). FTP only has user/password/account auth, and nobody uses the "account" part anyway. HTTP wins.

    What are the advantages to FTP for downloads (especially anonymous, but also authenticated)? I honestly can't think of any ATM.

  • by Anonymous Coward on Wednesday November 28, 2001 @10:37PM (#2628497)
    The facts didn't change; your circumstances did.

    You ran software yesterday and were happy with full disclosure;
    You run software today and aren't happy with full disclosure.

    What changed? The fact that you're now personally affected. "Mummy, it hurts. Make it go away"
  • by debolaz ( 526572 ) on Wednesday November 28, 2001 @10:54PM (#2628591) Homepage
    Anyone using wu-ftpd has only themselves to blame if anything happens to their servers. This application [wu-ftpd.org] has a bug history that makes Microsoft look like what OpenBSD [openbsd.org] claims to be. There are many free and secure [netbsd.org] and certainly more extensible [proftpd.org] options available, so why distros still stick with wu is beyond my understanding.
  • by reflective recursion ( 462464 ) on Wednesday November 28, 2001 @11:06PM (#2628637)
    Do tell me the other forms of security.

    I hear this all the time. "Security through obscurity is bad!" What other forms _are_ there? Passwords and encryption _are_ the same as obscurity. People using this "security through obscurity is bad" argument seem to have another agenda: tearing down IP laws and promoting freedom of information. While IP may be bad, it is a very separate issue.

    How do people claim security through obscurity is a bad thing? Why is it bad? How else does security work? There is physical security or there is abstract/obscure (i.e. encryption) security. What else?

    There is also insecurity through ignorance, which seems like a disease in the networked world. It really doesn't matter much if you post the memo on the admin/end-user's forehead if they don't bother to read it. This seems to be the case more often than script kiddies finding out before knowledgeable admins. After all, where do script kiddies get their info? Same place admins do: Bugtraq. By the time those damn elusive script kiddies on IRC exploit a few holes in nasa.gov, I'm sure at least one knowledgeable admin has posted a report to Bugtraq. In case you didn't pick up the sarcasm, most script kiddies travel in herds and usually attack obvious "high-risk" sites. If someone knows something before Bugtraq does, I'm sure you have very little to worry about. The exploiter is probably a knowledgeable cracker and probably has specific targets. If you happen to be a target, I wish you well, but I don't think any amount of Bugtraq info will keep someone determined to get into your system out (hint: there is a whole world of social exploitation that is damn near impossible to detect or even be aware of).
  • Because the people who discovered it didn't want it released before the patches were out.

    Patches, schmatches... the only people who DON'T know about the bug are the sysadmins with the servers! All of the script kiddies and crackers knew even before the guys at Wu-FTP did! Waiting for a patch and not telling people about this major security hole is just inviting crackers to hack in and root the server!

    Who says the OSS vendors had anything to do with the waiting? If software vendors want some notice on holes, then it's only right that if the discoverer of the hole wants to wait for patches, the software vendors should respect that.

    Again, it's better to disable the FTP server or change the daemon while waiting for a patch, than letting a server sit WIDE OPEN for somebody to rip it apart! This whole situation is simple logic.

  • by jspaleta ( 136955 ) on Wednesday November 28, 2001 @11:54PM (#2628901) Homepage
    Cox, from Red Hat, is on the record in the CNET article as saying this was a "big mistake" and that Red Hat didn't mean to force the other vendors into a tough situation....

    Has Red Hat done this kinda thing before? I don't think so.... Mistakes happen. One mistake like this I can forgive... especially when a company takes the blame right up front and admits the mistake at the first opportunity. If it happens again, then I'll start to question Red Hat's sense of vendor fair play... and I'm sure the security vendor list will evaluate Red Hat's commitment to the list rules as well. If Red Hat does this again, more likely than not they will stop getting the private vendor security announcements... and Red Hat will be the one scrambling to update applications in the future.

    But I really don't know whether a large-scale security announcement should have been held off until a patch was tested... I'd rather know as soon as the vendors know, so I can turn off the servers while they work on a patch. I don't want vulnerable servers humming along without knowing they're vulnerable, if I can help it. So in some ways I'm actually grateful that Red Hat mistakenly broke ranks.... now I can turn off any wu-ftpd servers and safely wait for a patch from the distros.

    -jef
  • by Garc ( 133564 ) <jcg5@po.[ ]u.edu ['cwr' in gap]> on Thursday November 29, 2001 @12:16AM (#2628997)

    Hmm, when I think of "Security through Obscurity", I tend to think of it in a different way than thought of above. I think of it as keeping the method used to encrypt/secure/hide something secret, thinking that because the method is secret it is secure.

    For example, say I develop a new top secret encryption scheme, called Rot-13. I tell no one of how it works. Since I am not a professional cryptographer, the chances are my algorithm is not cryptographically sound. So it is only secure as long as its method is secret. Once the secret is out, its security is gone. This is security through obscurity.

    An example of the opposite would be RSA. The algorithm is well known, therefore with peer review, it is thought of as secure. Even though I know how RSA works, I'm still unlikely to be able to crack it if used properly.

    regards,
    garc
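
To make the example concrete, here is the entire "top secret" cipher written out; the point is that it has no key at all, so the only thing protecting the data is that nobody knows Rot-13 was used (a small illustrative sketch, not anyone's production code):

        #include <ctype.h>

        /* Rotate letters by 13 places.  Applying it twice returns the
         * original text, and there is no key to recover. */
        char rot13(char c)
        {
            if (isupper((unsigned char)c))
                return 'A' + (c - 'A' + 13) % 26;
            if (islower((unsigned char)c))
                return 'a' + (c - 'a' + 13) % 26;
            return c;
        }

Contrast that with RSA or any keyed cipher: the algorithm can be published and peer-reviewed because the secrecy lives entirely in the key, which is exactly the distinction drawn above.
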

  • by Tom7 ( 102298 ) on Thursday November 29, 2001 @12:19AM (#2629013) Homepage Journal

    I know that we sometimes live with legacy code; fair enough. But I claim that it is entirely inappropriate to write security-critical internet daemons in C!

    There are lots of people here claiming that this is caused by sloppy or inexperienced programmers. I think that this is bullshit. Are the authors of wu_ftpd bad programmers? BIND? IIS? perl? telnetd? quake 3 arena? sshd? All of these have had remote overflow (or related) exploits. There are hundreds more... Have you personally ever written a multi-thousand-line network daemon that you know is buffer overflow free? How do you know?

    Here is what I say: C the language makes it easy to make the kind of mistake that leads to a remotely exploitable buffer overflow. It is almost as if the language is designed to enable this behavior. According to CERT and others, buffer overflows (and related format-string vulnerabilities, also endemic to C) are the most common source of security holes in UNIX applications (On win32, they are second only to Outlook attachments).

    There are only two reasons I can imagine that people would reasonably use C:

    Low-level Hardware Access - Fair enough. There are not really any good alternatives now. However, network applications do not need to do low-level hardware access at all.

    Raw Speed - Though I believe that other languages are very near to C in performance (http://www.bagley.org/~doug/shootout/craps.shtml) , conventional wisdom says that if you want ultimate speed, use C. However, network applications are not typically CPU-bound, they are network bound. ESPECIALLY FOR THE HOME USER, with a 1.5ghz PC and 5 users per day, this argument is totally silly. Outside the enterprise (where hopefully people can custom tune their software and have people devoted to keeping it secure), there's no reason to need C's speed in a network daemon.

    IN A NETWORK APP, SECURITY (SAFETY) IS CRITICAL. That means that all network apps should be written in a language with machine-checked safety. This might mean Java for people who need it to feel like C. (Note that there are several good native code compilers for Java, and it has reasonable network support.) In these kinds of languages, buffer overflows and format string vulnerabilities are automatically impossible. Personally, I prefer a more efficient language with stronger safety guarantees: SML. (Ocaml [inria.fr] might suit the Slashdot audience better.)

    In fact, at the time of the last wu_ftpd remote root exploit, I decided that it was time for me to rewrite my FTP daemon in SML. It took me only one weekend to get it working, by myself. It does not support every feature of FTP (especially obsolete things and dubious "features" like SITE EXEC), though it supports plenty for, say, the average Linux desktop user.

    Writing code in a modern, high-level language has other benefits too: it is only about 3000 lines, including library code that I wrote to implement MD5 passwords and various other things that I plan to use in other daemons (the core FTP server is only 850 lines). Compare this to wu_ftpd (8000+ lines) and the PAM MD5 password implementation (200 lines).

    Most importantly, by using a safe language I know that I have a 100% buffer-overflow-free daemon. Thus, I can spend more time looking over the code for more subtle security problems, such as possibilities for denial-of-service attacks. (I didn't do much of this, actually, though it is not vulnerable to the ls globbing attack, SITE EXEC, or the PAM authentication bugs that have been in other FTP servers.)

    If you think this sounds good, you can get my FTP server here [sourceforge.net] and an ML compiler here [sourcelight.com] . (It is just a proof of concept, so don't get too excited!) But what I would rather you do is just listen to my advice, and demand better from your software manufacturer! Linux distributions that want to be secure should be rewriting this kind of software in some modern safe language. It is easy to do, and the results are worthwhile.
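
The mistake being described is depressingly small in source form. A sketch of the pattern behind most of these advisories, next to a bounded version (illustrative code, not anything taken from wu-ftpd):

        #include <stdio.h>
        #include <string.h>

        void handle_command_unsafe(const char *arg)
        {
            char path[256];
            strcpy(path, arg);      /* no length check: an argument longer
                                       than 255 bytes overwrites adjacent
                                       stack or heap memory */
            printf(path);           /* and a format-string hole on top */
        }

        void handle_command_safer(const char *arg)
        {
            char path[256];
            snprintf(path, sizeof path, "%s", arg);  /* truncates instead
                                                        of overflowing */
            printf("%s\n", path);                    /* fixed format string */
        }

In a bounds-checked language the first function simply cannot be written: an over-long argument raises an error instead of overwriting whatever happens to sit next to the buffer, which is the substance of the argument above.
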

  • by LinuxHam ( 52232 ) on Thursday November 29, 2001 @01:16AM (#2629165) Homepage Journal
    Yes, or you could replace both of them with webdav.

    I just spent a week playing with WebDAV, investigating it as a possible solution for a customer looking for secure Internet file access. Anyone please correct me if my findings are incorrect.

    For the uninitiated, WebDAV is the protocol name for the "web folders" feature of IE 5.5 and up. I ran it as an Apache module. It was incredibly easy to set up. HOWEVER... under WinNT, you can only copy files to and from the web folder, not open or edit them directly. With Win2k and up you can open and edit files directly in the web folder without needing to transfer them to your local PC first, which is much nicer.

    The reason I wouldn't recommend it for my customer is that AFAICT the reads and writes on the server side are done with the user and group that the web server runs as. While it does indeed support ACL's, the ACL's are just for the web server protecting the file space in general, and do not maintain the uid/gid of the web-authenticated user down to the file level. It would be sufficient for providing a "common" drive for all the authorized users with no file-level ACL's. You would need to create a new VirtualHost for each file area that needs its own ACL (think home directories).

    Imagine 100 users. That would require 100 VirtualHost blocks with independent htaccess files, and at the filesystem level, every file and every directory would still be owned by the web server! Not exactly a suitable solution for a client to implement his own in-house version of WebDrive.

    In addition, I repeatedly experienced "this operation could not be completed due to an unexpected error" in Windows NT when trying to traverse certain directories of MP3's. If it doesn't work for me in certain situations, it would be disastrous for the customer looking for a "highly available" solution. More like "barely available". I can't architect a solution around something like that.

    Having said that, I would love to see a major web archive like ibiblio.org set this up for easy file browsing and access. That would also give the WebDAV team an enormous amount of feedback from a single site, and hopefully iron out more of the issues that keep this unsuitable as an enterprise-class solution.
  • say what? (Score:2, Insightful)

    by Eric Gibson ( 166760 ) on Thursday November 29, 2001 @01:25AM (#2629200) Homepage
    What's another form of security? Security based on sound techniques, where one can disclose the nature of the mechanism and it is still secure.

    Example:

    1) Company A encrypts a key file in their software using DES because "that's good enough", and they rely on the fact that no one knows they use DES as a means of security. "They can't even brute-force it, they don't know what encryption mechanism we are using. DES is good enough!"

    An attacker then does a few simple tests, say disassembles the binary that is used as a tool to encrypt the file, and figures out that they are using DES. They then produce an encrypted file whose plaintext they know and proceed to brute-force the key. This was a mistaken notion of security through obscurity.

    2) Company B encrypts a file for their software, but they use RC6, and then encrypt that file with Twofish. Or heck, use a totally different security mechanism where this file doesn't even need to be encrypted because it's inherently secure. Then they disclose how they do it so that the mechanism has peer review, to make sure that security can be improved in the future.

    Aside from these obvious points, your other arguments are totally bogus from a security standpoint. A company or organization can easily prevent simple "social engineering attacks" by using security procedures in their company. If they are good procedures, they could even disclose them to the public. Your argument that "well, if someone wants to, they can get into your system anyway" is absolutely not security-conscious. I don't see what any of this has to do with intellectual property either; it's just logical.

    I guarantee you that if SecurityFocus found out about this, it was because some hacker group on IRC has been using it for weeks, trading it amongst themselves for whatever, and someone leaked it to them. For them to "hide it until the vendors can make a fix" just extended the time that these criminals could use it, and left my system at risk. I agree with Red Hat. I'm sure the fix is a one- or two-liner, and I wanted it as soon as they found out about it, not when they were ready to tell me.
  • by Animats ( 122034 ) on Thursday November 29, 2001 @01:34AM (#2629227) Homepage
    There's no excuse for running the entire FTP daemon as root. It should start out as "nobody", and upgrade its privileges using a minimal privileged login program. The security checking shouldn't be in the FTP daemon at all.
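
The conventional halfway measure is to do the few operations that genuinely need root (binding port 21, reading the system password database) early, and then drop privileges before touching any attacker-supplied data. A sketch of the drop itself (error handling trimmed; looking up the "nobody" uid/gid is left out):

        #include <sys/types.h>
        #include <unistd.h>
        #include <grp.h>

        /* Permanently drop from root to an unprivileged uid/gid.
         * Order matters: supplementary groups, then gid, then uid,
         * then verify, otherwise the drop can silently fail. */
        int drop_privileges(uid_t uid, gid_t gid)
        {
            if (setgroups(1, &gid) != 0) return -1;  /* clear extra groups */
            if (setgid(gid) != 0)        return -1;
            if (setuid(uid) != 0)        return -1;
            if (setuid(0) == 0)          return -1;  /* must be irreversible */
            return 0;
        }

The comment above is asking for something stronger still: keep the privileged steps in a separate, minimal program so the large protocol-parsing daemon never holds root in the first place.
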
  • by victim ( 30647 ) on Thursday November 29, 2001 @01:53AM (#2629288)
    It is a good point. Poorly made, but there is a good point hiding in there. I see the article has attracted 6 flame replies and a -1 troll before I read it.

    I have not made an ontology, but it seems to me that nearly all exploits of the past few years have been (in decreasing order of prevalence)
    • data buffer overflow
    • string overflow
    • filename .. abuse
    A language with safe memory management will eliminate the first two. The third needs a more robust set of filename functions.

    It's not impossible, or even hard, to avoid these sins in C programming. But it also isn't impossible, or even hard, to screw up and commit these sins.

    Programmers make mistakes. That is why it is called programming instead of typing. Choosing a language that minimizes the security impact of mistakes makes a lot of sense.

    Don't forget about other criteria. You may need the speed that can be had with well written C code. Usually you won't.

    I look at my servers. They are all the slowest rackmount machines I could buy from Gateway when I bought them; 800MHz PIII is typical. (They are plural because they have different security policies, not because of load.) They handle things like mail, http, samba, cvs, ldap, the usual suspects for a 100-engineer software firm. They rarely go beyond 5% CPU utilization. I would gladly sacrifice my surplus CPU cycles for slower, safer services. When they do go beyond 5% it is almost always for a very specific function like the rsync algorithms or blowing backup data over to another box. Make the hot spots of those functions fast, and spend a lot of time making them secure. Probably not more than 400 lines of code between them. Let the rest be written in a safe language.
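
The third item on that list is easy to state and easy to get wrong. A sketch of the kind of check an FTP daemon needs before touching a client-supplied path (illustrative only; real servers also have to worry about symlinks, encodings, and the chroot boundary):

        #include <string.h>

        /* Reject client paths that try to escape the served tree.
         * Returns 1 if the path looks safe, 0 otherwise. */
        int path_is_safe(const char *path)
        {
            if (path[0] == '/')
                return 0;                        /* no absolute paths   */
            for (const char *p = path; *p; ) {
                const char *seg = p;
                while (*p && *p != '/') p++;     /* find end of segment */
                if (p - seg == 2 && seg[0] == '.' && seg[1] == '.')
                    return 0;                    /* no ".." components  */
                while (*p == '/') p++;           /* skip separator(s)   */
            }
            return 1;
        }

Checks like this get fooled often enough that the call above for "a more robust set of filename functions" (a small, audited library used everywhere paths are handled) is the right long-term answer.
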
  • by 5KVGhost ( 208137 ) on Thursday November 29, 2001 @02:29AM (#2629433)
    If you define all non-physical security measures as being "obscure" then I suppose your argument is correct, but I think most people would find that definition inadequate.

    Passwords and encryption, to use your examples, are exactly the opposite of security through obscurity. Both are pro-active measures taken by responsible parties to reduce the risk of intrusion. The "obscure" counterparts to these measures would be using obvious or default passwords, enabling anonymous access, and failing to enable encrypted communications, all while hoping that no one notices.

    Any security measure can be poorly implemented, whether it's physical, electronic, or purely psychological. To know the difference between a good implementation and a bad one, it's necessary to know the vulnerabilities inherent in your methods and equipment. That lock on the door may seem like the perfect choice until you discover that it's easily picked. Withholding that information from the owners and potential owners of the faulty lock doesn't protect them from anything; it just gives a false sense of security.

    The really Bad Guys don't need to consult Bugtraq to discover vulnerabilities. I'm sure some of them do, but if open discussion of the bugs is prohibited, then there are plenty of alternate sources for that sort of information. And while individuals may prefer to attack only high-profile sites, there's nothing guaranteeing that; and once an exploit is sufficiently automated, there's no way to know where it'll turn up next.
  • by coolgeek ( 140561 ) on Thursday November 29, 2001 @04:20AM (#2629685) Homepage
    The alarming thing this time is the Linux guys adopting the Microsoft methodology for patching leaks: sit on their asses while boxes get rooted, and release the patch when they "agree" to do it. Don't tell anyone who might just code a patch into the source themselves. Can't have that, can we?
  • by chrysalis ( 50680 ) on Thursday November 29, 2001 @05:41AM (#2629830) Homepage
    To protect against unknown exploits, there are kernel patches like LIDS [lids.org]. With LIDS, you can restrict any program to accessing only certain files. For instance, you can restrict BIND to reading only its configuration files, and nothing else. So even if an exploit is found, your system will be safe.

    It works amazingly well, and for almost everything on your system.

    But does it apply to SSH and FTP? Probably not. When you give FTP access to customers so that they can upload web pages, the FTP server needs read/write access to everything in /home. So it means that if an exploit is found, even with a properly configured LIDS barrier, the attacker can change the content of any customer file. And that's really dangerous. And LIDS can hardly avoid this.
