Security

Kraken Infiltration Revives "Friendly Worm" Debate

Anonymous Stallion writes "Two security researchers from TippingPoint (sponsor of the recent CanSecWest hacking contest) were able to infiltrate the Kraken botnet, which surpasses its predecessors in size. The researchers have published a pair of blog entries: Owning Kraken Zombies and Kraken Botnet Infiltration. They dissect the botnet and go so far as to suggest that they could cleanse it by sending an update to infected hosts. However, they stopped short of doing so. This raises the old moral dilemma about a hypothetical 'friendly worm' that issues software fixes (except that the researchers' vector is a server that can be turned off, not an autonomous worm that can't be recalled once released). What do you think — is it better to allow the botnet to continue unabated, or perhaps to risk crashing a computer controlling a heart monitor somewhere?"
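
For readers wondering how a "cleansing update" could even be delivered without releasing a worm: the researchers' leverage came from standing in for the botnet's command-and-control server, so infected machines phoned home to a host they controlled. The sketch below illustrates only that general idea; the port, message framing, and "NOOP" command are invented for illustration and are not the actual Kraken protocol (the linked TippingPoint posts describe the real details).

import socketserver

# Hypothetical sinkhole standing in for a botnet's command server. The
# port, framing, and "NOOP" command are invented for illustration; this
# is NOT the real Kraken protocol.
class FakeCommandHandler(socketserver.BaseRequestHandler):
    def handle(self):
        checkin = self.request.recv(1024)             # bot announces itself
        print("check-in from", self.client_address, checkin[:64])
        # A "friendly" operator could reply with an uninstall or update
        # command here; the researchers stopped short of that. Answering
        # with a do-nothing command keeps the bot idle without modifying
        # the infected host.
        self.request.sendall(b"NOOP\n")

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 8053), FakeCommandHandler) as server:
        server.serve_forever()

Note the property the summary hinges on: unlike a self-replicating worm, a sinkhole like this can simply be switched off if something goes wrong.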
  • by seramar ( 655396 ) on Tuesday April 29, 2008 @09:06AM (#23236916) Homepage
    I have two things to add, one in response to your comment about the monitoring stations and the other on this topic in general, but they tie together:

    1. If a hospital is running a machine that is vulnerable to any worm, including a friendly worm, then I question its entire network/security structure in the first place, and it is only a matter of time until the monitoring station goes down anyway.

    2. Friendly worms? Definitely. I am a technician/manager at a small shop and see people whose machines are constantly bombarded with malware of all kinds. While it would hurt our bottom line to see friendly worms in the wild dismantling these botnets, it would no doubt save a lot of people a lot of trouble. The folks who get infected generally don't know what they're doing and don't care to learn; they're worried about using their computer to perform a certain task, not about understanding the ins and outs of how it functions. If a few people are affected by some "friendly fire", so be it; they were already infected in the first place.
  • I did this once... (Score:3, Interesting)

    by el_flynn ( 1279 ) on Tuesday April 29, 2008 @09:11AM (#23236960) Homepage
    ...and nearly paid for it.

    We were on the verge of fall break, and someone on campus had discovered a 'catch-all' email address that was aliased to _all_ the university email addresses. So some dickwad started sending a weird email saying something like "Hey joe, where are you?", which everyone got, and everyone replied "Hey, I'm not joe -- who are you?", which was then sent to everyone else.

    The thing basically kept feeding back on itself and was threatening to get out of hand. Literally hundreds of emails started popping up. Of course, this was waaay back then, before the days of spam, so it was 'abnormal', 'weird' and annoying all at once. Since it was a Friday evening, and knowing that at the rate it was going everyone's inbox would be flooded when they returned from the week-long holidays, I thought, perhaps naively, that I'd put a stop to it.

    I attached a large binary file to an email and sent it to that catch-all address, hoping that it would jam up the works enough that the network admins would notice.

    Notice they did, and eventually I got called up to see the ombudsman, who promptly said he was considering kicking me off campus.

    So yeah, one can have good intentions -- like what I did -- but the means to achieve that end may not be acceptable to everyone, even though it did get the job done.

    My 2 cents anyway.
  • by rtb61 ( 674572 ) on Tuesday April 29, 2008 @10:58AM (#23238150) Homepage
    These people really are crazy, especially when you consider the warranty/EULA that accompanies the Windows OS, a warranty that basically stipulates that it is wildly unsafe for that kind of use.

    Hence, if a software failure results in a death, the full liability falls back on the hospital and the staff responsible for that software purchase, and on their criminally negligent willingness to use software that is clearly unfit for the purpose, based upon the warranty/EULA supplied with it.

    It is only a matter of time before some hospital CIO finds themselves facing a possible prison sentence for criminally negligent manslaughter.

  • by martyb ( 196687 ) on Tuesday April 29, 2008 @11:57AM (#23239254)

    For those who are advocating that an anti-bot be released (or whatever you want to call it) so as to disable this pest, I have a question for you: how is someone going to be able to tell the difference between these:

    1.) A user who creates and releases an anti-bot, but through an error (design, programming, whatever) inadvertently causes "harm" to the system.

    2.) A user who creates and releases an anti-bot that appears to try to block the worm, but is in fact designed to cause "harm" to the system.

    Recall that the Morris worm [wikipedia.org] was not intended to bring down the internet:

    According to its creator, the Morris worm was not written to cause damage, but to gauge the size of the Internet. An unintended consequence of the code, however, caused it to be more damaging: a computer could be infected multiple times and each additional process would slow the machine down, eventually to the point of being unusable.
    AND

    The critical error that transformed the worm from a potentially harmless intellectual exercise into a virulent denial of service attack was in the spreading mechanism. The worm could have determined whether or not to invade a new computer by asking if there was already a copy running. But just doing this would have made it trivially easy to kill; everyone could just run a process that would answer "yes" when asked if there was already a copy, and the worm would stay away. The defense against this was inspired by Michael Rabin's mantra, "Randomization." To compensate for this possibility, Morris directed the worm to copy itself even if the response is "yes", 1 out of 7 times [3]. This level of replication proved excessive and the worm spread rapidly, infecting some computers multiple times. Rabin remarked when he heard of the mistake, that he "should have tried it on a simulator first."

    See also A Tour of the Worm [std.com] for a more detailed account of how it unfolded. (A rough sketch of the 1-in-7 decision quoted above follows this comment.)

    The intention may have been good, but the implementation had an unintended consequence that led to a major disruption of the internet. I remember full well the confusion at the time as the details unfolded. I was working at a major computer manufacturer that dropped its connection to the net to protect itself. Ultimately, none of our systems were hit (wrong OS), but the sheer volume of packets on the net led, effectively, to a DDOS'ing of the uninfected systems, too.

    So, in a nutshell, how can one objectively tell the difference between an attempt to kill the worm that causes problems, and an attempt to cause problems that looks like it is trying to kill the worm? In a non-static environment. With our limited ability to write bullet-proof, error-free code. Besides, someone else could capture and re-purpose the good code to cause more problems.
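
To make the 1-in-7 decision quoted in the comment above concrete, here is a minimal sketch of that logic and of why it snowballs. The function and variable names are invented; Morris's worm was written in C, and this only illustrates the behaviour the quote describes.

import random

def should_infect(already_running: bool) -> bool:
    # Honour an "already infected" answer most of the time, but ignore it
    # 1 time in 7 so a fake "yes" responder cannot immunize a machine.
    # This mirrors the decision described in the quote; it is not Morris's
    # actual code.
    if not already_running:
        return True
    return random.randrange(7) == 0   # the 1-in-7 override

# Why it snowballs: every probe of an already-infected host still has a
# 1-in-7 chance of starting another copy, so busy hosts pile up processes.
copies = 1
for _ in range(100):                      # 100 probes against one host
    if should_infect(already_running=True):
        copies += 1
print(copies, "copies after 100 probes")  # roughly 15 on average

That is exactly the failure mode the quote describes: each extra copy slows the machine further until it becomes unusable.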

Say "twenty-three-skiddoo" to logout.

Working...