Security

BitchX 1.0c19 IRC Client Backdoored 338

JRAC writes "A recent Bugtraq submission has indicated that the popular IRC client, BitchX, contains a backdoor. So far, only certain 1.0c19 files, downloaded from ftp.bitchx.com are reported to contain the malicious code. The BitchX developers have been notified, so hopefully a fix will be issued soon. Looks like irssi wasn't the only one ;)"
  • by XaXXon ( 202882 ) <xaxxon.gmail@com> on Tuesday July 02, 2002 @09:50AM (#3806977) Homepage
    If BitchX was some sort of closed-source product, how long might this have taken to show up? Many eyes lock down all backdoors.

    Anti-GPL people (read: Microsoft and their lackeys) may try to take this as a weakness in OSS, but I look at it as a strength. If one of their developers gets something like this into one of their products (either on his/her own or with the blessing of the company), the world may never know. With OSS, it's out in the open for everyone to see/fix.
  • Re:How long... (Score:3, Insightful)

    by Anonymous Coward on Tuesday July 02, 2002 @09:57AM (#3807027)
    About 5 seconds into install, when the closed-source firewall running on the closed-source OS catches the closed-source IRC client trying to create the reverse telnet connection.
  • Re:The name.... (Score:3, Insightful)

    by dalassa ( 204012 ) on Tuesday July 02, 2002 @09:57AM (#3807028) Journal
    Because most companies have marketing people to hit them on the head and say no, this is not appropriate.
  • by splorf ( 569185 ) on Tuesday July 02, 2002 @09:58AM (#3807032)
    I'm sorry but this is one thing Microsoft and/or Netscape did right. The practice of including detached PGP signatures on download sites is useless--they have to be manually verified, and hardly anyone bothers.

    GNU/Linux downloads should be in signed archives like Netscape JAR files. JAR files are basically ZIP archives with a signature file stored inside the .zip in a standard place. When you unpack the archive, the unpacker checks the signature the same way a browser checks an SSL web site.

    JAR files use a certificate chain ending in a certificate authority (usually a commercial one) but maybe the signed-download scheme could be signed against a certificate on the official developer's website. Of course that wouldn't be unspoofable, but it would be as secure as the current scheme of having a PGP public key on the developer website and signing against that. The main benefit is the checking would happen automatically, so it would be much harder to put crap into downloads. If someone makes a modified version, they would have to sign it themselves (with a signature pointing back to their own website) or else the unpacker would print a message saying the code was unsigned and the user should check it carefully before using it.
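
    The JAR-style scheme this comment describes can be sketched in Python. The manifest path and line format below are invented for illustration (a real JAR stores per-file digests in META-INF/MANIFEST.MF and additionally signs the manifest itself; this sketch covers only the digest-checking half):

```python
import base64
import hashlib
import zipfile

# Hypothetical manifest: one "path<TAB>base64-sha1" line per archived file,
# stored inside the archive at a well-known location.
MANIFEST_PATH = "META/DIGESTS"

def verify_archive(zip_path):
    """Check every file in the archive against the embedded digest list.

    A real JAR-style scheme would also verify a signature over the
    manifest; without that, this only detects accidental corruption
    or tampering by someone who forgot to regenerate the digests.
    """
    with zipfile.ZipFile(zip_path) as zf:
        manifest = zf.read(MANIFEST_PATH).decode()
        expected = dict(line.split("\t") for line in manifest.splitlines() if line)
        for name in zf.namelist():
            if name == MANIFEST_PATH or name.endswith("/"):
                continue
            digest = base64.b64encode(hashlib.sha1(zf.read(name)).digest()).decode()
            if expected.get(name) != digest:
                return False  # tampered, truncated, or missing from manifest
    return True
```

    The point of the comment is that this check runs automatically at unpack time, so the user never has to remember to verify anything by hand.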

  • by Pave Low ( 566880 ) on Tuesday July 02, 2002 @09:58AM (#3807033) Journal
    Interesting how there's a fairly serious bug in slashcode that was exploited yesterday but they don't publicize that. At least they fixed it quickly, but if you guys like to point out other people's bugs, how about shining the light on yourself once in a while? I'm sure other slashcode sites would have liked to have known about it.
  • by toupsie ( 88295 ) on Tuesday July 02, 2002 @09:59AM (#3807042) Homepage
    If BitchX was some sort of closed-source product, how long might this have taken to show up? Many eyes lock down all backdoors.

    Not to burst your bubble, but if BitchX was closed source, I doubt a third party would have access to the source code to inject the trojaned backdoor, modify the FTP server and set up a bizarre distribution method (has anyone figured this out yet?). Granted many eyes helped find this problem, but in a closed source world, this wouldn't happen unless you had a disgruntled employee or a really stupid project manager. If BitchX were a commercial, closed source product, the exploit would most likely be a buffer overflow, not a blatant backdoor.

    Disclaimer: I use a closed source IRC product called Ircle [ircle.com].

  • Re:It's Odd (Score:3, Insightful)

    by mindstrm ( 20013 ) on Tuesday July 02, 2002 @10:09AM (#3807086)
    Well, perhaps they wanted to spread it to dumb home users but not to anyone more professional. Perhaps they wanted to go longer without being caught.

    Perhaps it's actually a DNS issue, and it's directing some people to a dummy server.
  • GnuPG (Score:2, Insightful)

    by giminy ( 94188 ) on Tuesday July 02, 2002 @10:28AM (#3807194) Homepage Journal
    If more people used GnuPG [gnupg.org] and checked the signatures on their software, we wouldn't have to worry as much about backdoored software (assuming, of course, that you trust the original author. And if you don't, then you shouldn't be using their software now should you?). One of these days someone is going to do something like this with something major, like the kernel, and it's going to affect a lot of people. So start checking now!
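
    The check this comment advocates is a single gpg invocation; a minimal wrapper might look like the sketch below. It assumes gpg is installed and that the author's public key has already been imported and verified out-of-band (fetched from the developer's site, checked against a fingerprint, etc.):

```python
import subprocess

def gpg_verify_argv(sig_path, data_path):
    # Command line for verifying a detached signature; gpg exits 0 on success.
    return ["gpg", "--verify", sig_path, data_path]

def verify_download(sig_path, data_path):
    """Return True iff gpg accepts the detached signature over the file."""
    result = subprocess.run(gpg_verify_argv(sig_path, data_path),
                            capture_output=True)
    return result.returncode == 0
```

    The commenter's caveat still applies: a valid signature only proves the tarball is the one the keyholder signed, so trusting it reduces to trusting the author and the key you imported.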
  • by torinth ( 216077 ) on Tuesday July 02, 2002 @10:32AM (#3807212) Homepage
    If BitchX was some sort of closed-source product, how long might this have taken to show up? Many eyes lock down all backdoors.

    Anti-GPL people (read: Microsoft and their lackeys) may try to take this as a weakness in OSS, but I look at it as a strength. If one of their developers gets something like this into one of their products (either on his/her own or with the blessing of the company), the world may never know. With OSS, it's out in the open for everyone to see/fix.


    Please. It's open for everyone who has nothing better to do than read slashdot or bugtraq, maybe. What much of OSS needs but doesn't have is strict maintainers, who know what contributions are made to the product and know what they'll do before they're let in. Fortunately, some of the bigger projects have this (Linux kernel, *BSD, Mozilla), but a lot of OSS today is about people being too lazy or incompetent to double-check some 15-year-old hax0r's crappy-ass contribution until it's too late.

    The other thing OSS needs to enforce a little better is something along the lines of code signing. From what I can tell, it looks like somebody hijacked the bitchx FTP domain on some routes and is returning trojaned copies to the downloaders who are going through it. This is a weakness of OSS. It's much easier for me to grab a piece of Open Source software, drop some malicious code in it, and redistribute it from a hijacked domain than it is for me to do so with something I don't have the source to. Granted, it's still possible, if I inject code into the compiled version, but it's a hell of a lot easier to do it with source.

    The simplest move is to use MD5s for major releases and have some 3rd-party location to verify them. Freshmeat? Sourceforge? This, at least, could add some security, and would add a central point for people to watch out for hijacking...

    Get your head out of the damned OSS-as-a-religion sand and look at what needs to be done to make it viable to people who don't fuck around reading about the next idiot to shoot himself into space in a backyard rocket.

    Meh. Enough ranting, for now.

    -Andrew
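
    The MD5 suggestion above amounts to hashing the downloaded file and comparing against the digest published at the third-party site. A minimal sketch (the filename is only an example):

```python
import hashlib

def md5_hex(path, chunk_size=1 << 16):
    """Hex MD5 of a file, read in chunks so large tarballs don't fill RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path, published_digest):
    # Compare against the digest listed at the trusted third-party location.
    return md5_hex(path) == published_digest.strip().lower()
```

    This only helps if the digest is fetched from somewhere the attacker doesn't control; a checksum served from the same hijacked FTP site as the tarball proves nothing.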
  • Re:Backdoor. (Score:3, Insightful)

    by kmellis ( 442405 ) <kmellis@io.com> on Tuesday July 02, 2002 @10:33AM (#3807223) Homepage
    This is the real security threat for everyone, particularly anyone with sensitive data.

    Viruses and worms have been mostly merely malicious. Same with cracking. And the malice involved is not very great. But what if people get serious about stealing data?

    A few years ago I had an epiphany one night, and waltzed into a network security company the next day.

    "Look", sez me, "Inbound connections and activity are, in the long run, not going to be the real threat. The real threat is trojaned applications that mine for data and somehow send it offsite. You need to be monitoring outbound activity for appropriateness. For example, eventually you're going to see corporate espionage where someone writes an attractive and actually useful little app, then social engineers a targeted person within an organization to download it and compromise security. This is just an example of the general problem."

    They were actually pretty impressed, but the company's strategy was deliberately to avoid concerning itself with viruses or worms (more specifically, they wanted to stay only on the servers, monitoring network activity in a sophisticated manner). But it seemed to me that this was a natural extension of their product and technology. And they thought I was a pretty bright guy, but they didn't know what to do with me. Well, anyway. The irony is that they were only a year or so later bought by one of the big antivirus firms, mostly just to acquire their technology.

    In this particular case, the BitchX irc app, it looks like an outside source injected some backdoor code into the application, and hacked the ftp server to distribute it in a selective manner, presumably to help lower the risk of detection. A lot of effort for not that great of a payoff, really. Here, as is often the case, it's mostly about proving how clever you are.

    But we're starting to see rudimentary examples of what I was warning about with spyware and other apps that make outbound connections that are in some sense illicit. Firewalls monitoring outbound connections can only be so successful given that they're always going to let some through. I know that some of the client based firewalling/monitoring software looks at connections on a per application basis. That's a start.

    Personally, my inclination is that we need a networking monitor that operates like a virus scanner -- on the client, in the background -- that checks against a secured database of allowed application-to-outbound-connection mappings, with exceptions or new applications referred (ideally) to a security admin. This way we don't have to use a brute-force approach that simply locks down all allowed applications and outbound connections in a non-specific, usability-destroying way.

    But whatever the solution, I have little doubt that this will be a growing problem which will make a transition from script-kiddie nuisance cracking to something much more sophisticated. Although I could be wrong.
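
    The allow-list monitor this comment proposes reduces to a lookup table plus an escalation path for anything not on it. A toy sketch (application names and destinations here are invented; a real tool would keep the table in the secured database the comment describes and hook actual socket activity):

```python
# Hypothetical allow-list: application name -> permitted (host, port) pairs.
ALLOWED = {
    "irc-client": {("irc.example.net", 6667)},
    "browser": {("www.example.com", 80), ("www.example.com", 443)},
}

def check_outbound(app, host, port, on_violation=None):
    """Return True if the connection is on the allow-list.

    Unknown applications and unlisted destinations are referred to the
    violation callback (e.g. an alert to a security admin) rather than
    being silently permitted.
    """
    if (host, port) in ALLOWED.get(app, set()):
        return True
    if on_violation is not None:
        on_violation(app, host, port)
    return False
```

    The design choice matches the comment's point: per-application destination mappings avoid the "block everything" approach that destroys usability.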

  • Re:Who's this? (Score:3, Insightful)

    by Neil Watson ( 60859 ) on Tuesday July 02, 2002 @10:34AM (#3807224) Homepage
    I disagree. That would be equivalent to saying you are responsible for your house being burgled. Not having (adequate) security makes one a likely target. It does not, however, make you responsible.

    I see your point. Still, would you say the same for all the Windows users that did not patch their IIS code when Code Red hit?

    Anyone who has a box attached to the internet has a responsibility to others. They have to be held accountable for something. It is true that nothing is crack proof and you can't expect people to have perfect security. However, they have to take reasonable steps to protect themselves and others. But what are reasonable steps? Who can judge?

    If someone breaks into a house and steals a handgun that was not locked up securely, and then uses it to commit armed robbery, should the home owner be responsible for the robbery? Of course not. However, the home owner should be responsible for improperly storing his handgun. This is the kind of responsibility I'd like to see. Did someone take reasonable steps to secure their server?

    As for the IP in question at the beginning of this thread. At this time, I don't know any details so I'm not casting any blame.

    Not sure, but on my non-OSS operating system I run firewalls and intrusion detection software to help me catch spyware and other things accessing ports I am not aware of. Since I'm not the only one who does this, I would think the backdoor would be found. You don't have to see the source code to find holes if you can see the holes.

    Frankly, I am quite tired of this common belief that thousands of eyes are constantly scanning OSS looking for problems to fix. In the 9 or so years I have been using Linux and GNU software, I have never looked for such things. Maybe that is because I am a developer and spend enough time with my own code. Even when I first started with Linux, and things like CD-ROM drives and NICs required patching and compiling, I was content with the code I was downloading. Hobbyists tended not to screw other hobbyists (unless money is changing hands), and I tend to still believe that. I really doubt there are that many people who police code. If you are working on something and notice a problem, then you submit a patch, but the belief in a huge and constant code review going on is a false one as far as I am concerned.

    With the popularity of Linux and free software, however, and the perceived threat to some commercial software, it might be wise for OSS project leaders to be extra careful about new code that slips in. I have believed for a while that sooner or later we will see companies like Microsoft or Sun let slip some patented code into a free software project just so they can come back later and shut it down with a lawsuit. Face it, these companies are getting hurt. A project like Mono has the potential to hurt .NET and, if successful, hurt Java. I would not have thought that someone would slip a backdoor into a project, however.

    Anyway, I don't think you can look at OSS or a closed source project and say one is more "secure" than the other. I think it really comes down to how it is managed and the quality of the people who are contributing. You might also want to consider the type of application.

    As far as IRC goes, this is a community where you are judged by how "bad-ass" your kick scripts are and your "l33t h4xx0r" skills. I'd be cautious of any IRC tool I used for that matter.
  • Re:Who's this? (Score:2, Insightful)

    by jallen02 ( 124384 ) on Tuesday July 02, 2002 @11:04AM (#3807418) Homepage Journal
    Well, what if your house was a known fire hazard, stacked with tinder in the middle of a summer drought?

    That is how I would see the house if it were an operating system with unpatched vulnerabilities in it.

    Are you responsible if it burns down your neighborhood?

    No answers here.. just an interesting question to mull over.

    Jeremy
  • Re:Who's this? (Score:3, Insightful)

    by Sloppy ( 14984 ) on Tuesday July 02, 2002 @11:39AM (#3807650) Homepage Journal
    I disagree. That would be equivalent to saying you are responsible for your house being burgled. Not having (adequate) security makes one a likely target. It does not, however, make you responsible.
    But your house isn't likely to be used as a weapon against the next victim. I think a much better analogy is that you are partly responsible if your gun is stolen. If you own a gun, you need to take special care and not just leave it around where any idiot or child can take it. The same goes for a computer that is hooked up to the Internet.
  • Re:The name.... (Score:1, Insightful)

    by Anonymous Coward on Tuesday July 02, 2002 @11:41AM (#3807667)
    At which point it will no longer be useful to anyone but idiots like you.
  • by Anonymous Coward on Tuesday July 02, 2002 @11:43AM (#3807682)
    they have to be manually verified, and hardly anyone bothers
    Guess what: I bother, and everyone I know bothers. Is "hardly anyone bothers" a fancy way of saying "I don't bother"?
  • by Animats ( 122034 ) on Tuesday July 02, 2002 @12:37PM (#3808048) Homepage
    IRC clients are a good place to start on security, because they need very limited access on the client machine. So put the client in a FreeBSD jail. All it needs to talk to is its window and the net, and maybe a few specific files.

    Jailing a browser is tougher, but an IRC client should be easy. Somebody who's into IRC and security should do this as a demo.
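
    For reference, the classic jail(8) invocation this comment has in mind takes a chroot path, a hostname, an IP, and the command to confine. The paths and address below are made up, and actually running this requires root and a populated jail root; the sketch only builds the command line:

```python
def jail_argv(root, hostname, ip, command):
    """argv for FreeBSD's jail(8) in its classic form:
    jail <path> <hostname> <ip-number> <command> [args...].
    All arguments here are illustrative placeholders.
    """
    return ["jail", root, hostname, ip] + list(command)

# Example: confine an IRC client to /jails/irc with one IP alias.
argv = jail_argv("/jails/irc", "irc-jail", "192.0.2.10",
                 ["/usr/local/bin/BitchX"])
```

    Inside the jail the client sees only its own filesystem tree and the single IP, which is the "window, the net, and a few specific files" footprint the comment describes.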

  • by Junta ( 36770 ) on Tuesday July 02, 2002 @01:01PM (#3808245)
    Actually, I would say both are equally 'tough' to jail. Access to the network is pretty much the same: both tend to use particular, specific ports, but circumstances can require just about anything. Though IRC clients tend to deviate less from the standard ports than web browsers do, they still deviate.

    As far as filesystem access goes, neither *truly* requires write access to the disk, or read access to more than a few config files. I know browsers tend to use the disk as a cache, and you want to download with your browser as well, but the same goes for IRC: a large portion of users exchange files through the IRC client with the intent that the transferred file not be transient. For those who want non-transient downloads (and the ability to save configuration, which both sorts of clients are equally likely to require), chroot is as far as I would go.

    Strictly speaking, all network applications have similar issues. While it may appear easy to pinpoint the required operations of a piece of software, there are always enough deviations that it's not 100% possible to tighten it all down. The only place where you can really predict what a network application needs to do and access, and jail it based on those predictions, is on the server end, where you have the most control over how the network is used. Clients having to interoperate with oddball server configurations, and users who want to use the software in different ways, will always make the jailing you describe less feasible.

    Of course, most any app could run fine in a chrooted environment if you have the disk space for the requisite libraries, and that by itself greatly reduces (but doesn't eliminate) threats to data outside the chroot jail.
  • by TheAwfulTruth ( 325623 ) on Tuesday July 02, 2002 @01:14PM (#3808339) Homepage
    Probably only one in 10,000 people running Apache could have found OR fixed that last root exploit on their own machine. So for the other 9,999 people, open source doesn't matter at all.

    What the hell do you think source is, anyway? Have YOU ever looked at it? As if any person can just "look" at it and go "Oh, here it is, I'll just fix it here. There, done."

    Apache had to fix that bug. And it wasn't in a day either; it took nearly a week. Other people hacked at it. They DIDN'T FIX IT, but SAID they did and tried distributing a broken patch. HOORAY OPEN SOURCE!

    We had to wait for the vendor to patch. Just like closed source. Code is generally FAR too complicated for anyone not familiar with it to just start hacking away at a "fix". Especially a "Security fix", which would require full regression testing to make sure the product still works as advertised and that the fix actually worked.

  • Look kids... (Score:4, Insightful)

    by ice-man_efnet ( 589707 ) on Tuesday July 02, 2002 @01:22PM (#3808407)
    The developers of BitchX did *NOT* put malicious code in the source. For one thing, there were two versions of the 1.0c19 source running around. It also seems that the security on *.bitchx.org was never even compromised. The problem lies somewhere with a 'man-in-the-middle' changing some DNS aliases somehow. This is why some people were able to download the real version that was actually released, and some people got the 'hacked' copy.

    Also, even though the box doesn't appear to be compromised, it could happen. I hope one of you kids out there is the first one attacked when a new apache or ssh bug is found. You can never be completely secure, especially when you are running anonymous servers for people to download programs.

    kthx.

    ice-man@efnet.

  • Re:How long... (Score:1, Insightful)

    by Anonymous Coward on Tuesday July 02, 2002 @02:34PM (#3809125)
    How do you know the outbound connection isn't just a smoke screen? The point is you've already executed untrusted code, any number of things could have been done to your system without your knowledge. Stopping a single vector is not a solution and gives a false sense of security.

    Of course, you wouldn't know anything about this, would you?
  • by flacco ( 324089 ) on Tuesday July 02, 2002 @05:45PM (#3810711)
    Oh yeah, gang rape is fucking hilarious until you're faced with the prospect of spending a few nights in jail.
  • by dave-fu ( 86011 ) on Tuesday July 02, 2002 @06:30PM (#3811014) Homepage Journal
    > Very few sites are running Slash from CVS,
    as the CVS tree is a pre-alpha version. We have not yet even
    stamped it with a development release number (which will be 2.3.0
    as soon as we feel it is stable enough for bleeding-edge users).


    In spite of the fact that you haven't "stamped" the version with a release number, you had gone ahead and deployed a version of the software which was open to XSS flaws and was, in fact, visibly exploited by them. You then pretended it never happened. No "whoops, we screwed up, here's what we did wrong so the rest of you can avoid our pitfalls" on the front page of the site that was exploited, no note on slashcode.com that people who have deployed the same version you deployed are open to exploitation as well.

    > Sites running CVS should stay as current as possible at all times,
    of course. The courageous admins of those sites should probably
    hang out on the IRC channel given on the slashcode.com homepage
    (#slash on irc.openprojects.net).


    This doesn't reflect reality. Many people pull down a CVS snapshot and run with it, but it's nice to know that you think admins should spend what little free time they've got idling in IRC just in case another bug that you don't feel like publicizing gets exploited.
    Now that I think about it, doesn't that sound a whole lot like "security through obscurity"?
