
Analysis of the Witty Worm

DavidMoore writes "The Cooperative Association for Internet Data Analysis (CAIDA) and the University of California, San Diego Computer Science Department have an analysis of the recent Witty worm. Among other things, Witty was started in an organized manner with an order of magnitude more ground-zero hosts than any previous Internet worm."

Comments:
  • buggy code (Score:5, Interesting)

    by neoThoth ( 125081 ) on Thursday March 25, 2004 @11:58PM (#8676617) Homepage
    The end of the worm seems to have bytes suggesting a flaw in the original worm code.
    I'm still collecting data points on the infected hosts by analyzing the worm's victims that contact my IP.
    • Re:buggy code (Score:4, Interesting)

      by rritterson ( 588983 ) * on Friday March 26, 2004 @12:26AM (#8676818)
      "The end of the worm seems to have bytes suggesting a flaw in the original worm code."

      Would you mind elaborating on that assertion? I'm curious.
    • by Himring ( 646324 ) on Friday March 26, 2004 @12:38AM (#8676892) Homepage Journal
      There's a bug, in the worm, ... in the bottom of the sea....
    • Destructive (Score:4, Interesting)

      by Anonymous Coward on Friday March 26, 2004 @12:39AM (#8676899)
      Interesting: one could have had the feeling that it was 'stupid' for these worms to destroy their hosts so rapidly. Why not wait for a few hours or days and then do it in a synchronized manner?

      In fact, the overall number of hosts that could be infected was low (~12,000): there was no need for waiting.

      It seems that those who launched it had a very good knowledge of what they were doing.

      Definitely interesting.
      • Re:Destructive (Score:5, Interesting)

        by buttahead ( 266220 ) <tscanlan@so[ ]th.org ['sai' in gap]> on Friday March 26, 2004 @02:26AM (#8677414) Homepage
        there was no need for waiting.

        I'd go a step further and say that immediate damage to the system was mandatory. Waiting in this case would have detracted from the destructiveness of this worm. Since it was attacking firewalled, and probably anti-virus-enabled, machines, waiting would have meant the destruction was nullified.

        It seems that those who launched it had a very good knowledge of what they were doing.

        Sounds like someone from marketing has decided to write worms. They thought about the market of hosts they were trying to infect. A good reason for infecting this set of hosts would have been to stifle the security software vendors. In order to avoid this situation in the future, a person should invest in a new model of protection. Seems to be a perfect opening for a new market.
        • Re:Destructive (Score:4, Insightful)

          by SpaceLifeForm ( 228190 ) on Friday March 26, 2004 @06:47AM (#8678340)
          Hmmm, and what would this new model of protection entail? Something like Cisco proposed?

          From the analysis:

          When users participating in the best security practice that can be reasonably expected get infected with a virulent and damaging worm, we need to reconsider the notion that end user behavior can solve or even effectively mitigate the malicious software problem and turn our attention toward both preventing software vulnerabilities in the first place and developing large-scale, robust and reliable infrastructure that can mitigate current security problems without relying on end user intervention.

          Folks, we don't need any more infrastructure to prevent worms. We don't need any more infrastructure to control what you can and can't do on the Internet.

          It's not the Internet that causes the problems, it's the insecure machines that are vulnerable.

      • Re:Destructive (Score:3, Interesting)

        It acted very much like Ebola, which is an interesting comparison. Ebola is massively virulent, but its onset and effects are so quick that it tends to "burn itself out" before infecting a large number of people. This virus did the same.

        It would be interesting to see what percentage of the population that COULD have been affected actually was. Maybe the writer concluded that, in hitting people with this specific vulnerability, they would have tapped the bulk of their targets in the first 24 hours or so, leaving no n
    • That is by design (Score:5, Informative)

      by isaac_akira ( 88220 ) on Friday March 26, 2004 @01:34AM (#8677187)
      From the article text:

      "The worm payload of 637 bytes is padded with data from system memory to fill this random size..."

      So you are seeing some random garbage that was in memory on the victim's machine while the worm was being sent out. That helps it avoid detection, as it makes the worm harder to profile.
  • by Anonymous Coward on Thursday March 25, 2004 @11:59PM (#8676621)
    Conclusion:

    The Witty worm incorporates a number of dangerous characteristics. It is the first widely spreading Internet worm to actively damage infected machines. It was started from a large set of machines simultaneously, indicating the use of a hit list or a large number of compromised machines. Witty demonstrated that any minimally deployed piece of software with a remotely exploitable bug can be a vector for wide-scale compromise of host machines without any action on the part of a victim. The practical implications of this are staggering; with minimal skill, a malevolent individual could break into thousands of machines and use them for almost any purpose with little evidence of the perpetrator left on most of the compromised hosts.

    While many of these Witty features are novel in a high-profile worm, the same virulence combined with greater potential for host damage has been a feature of bot networks (botnets) for years. Any vulnerability or backdoor that can be exploited by a worm can also be exploited by a vastly stealthier botnet. While all of the worms seen thus far have carried a single payload, bot functionality can be easily changed over time. Thus while worms are a serious threat to Internet users, the capabilities and stealth of botnets make them a more sinister menace. The line separating worms from bot software is already blurry; over time we can expect to see increasing stealth and flexibility in Internet worms.

    Witty was the first widespread Internet worm to attack a security product. While technically the use of a buffer overflow exploit is commonplace, the fact that all victims were compromised via their firewall software the day after a vulnerability in that software was publicized indicates that the security model in which end-users apply patches to plug security holes is not viable.

    It is both impractical and unwise to expect every individual with a computer connected to the Internet to be a security expert. Yet the current mechanism for dealing with security holes expects an end user to constantly monitor security alert websites to learn about security flaws and then to immediately download and install patches. The installation of patches is often difficult, involving a series of complex steps that must be applied in precise order.

    The patch model for Internet security has failed spectacularly. To remedy this, there have been a number of suggestions for ways to try to shoehorn end users into becoming security experts, including making them financially liable for the consequences of their computers being hijacked by malware or miscreants. Notwithstanding the fundamental inequities involved in encouraging people sign on to the Internet with a single click, and then requiring them to fix flaws in software marketed to them as secure with technical skills they do not possess, many users do choose to protect themselves at their own expense by purchasing antivirus and firewall software. Making this choice is the gold-standard for end user behavior -- they recognize both that security is important and that they do not possess the skills necessary to effect it themselves. When users participating in the best security practice that can be reasonably expected get infected with a virulent and damaging worm, we need to reconsider the notion that end user behavior can solve or even effectively mitigate the malicious software problem and turn our attention toward both preventing software vulnerabilities in the first place and developing large-scale, robust and reliable infrastructure that can mitigate current security problems without relying on end user intervention.
  • by Ralph JH Nader ( 765522 ) on Thursday March 25, 2004 @11:59PM (#8676622) Journal
    You can find more information here [lurhq.com].
  • Comment removed based on user account deletion
  • by seaswahoo ( 765528 ) on Friday March 26, 2004 @12:03AM (#8676659)
    In contrast, the Witty worm infected a population of hosts that were proactive about security -- they were running firewall software.

    This makes me feel a bit safer, since we used to run Windows-based boxen directly on the Internet but now they all hide behind a Linksys NAT Router and firewall.

    From what I've learned, the general rule is NEVER to put a Windows machine directly on an unsecured network. Unfortunately, the machine I'm typing on here at the University of Virginia is directly connected and yes, it runs Windows. I turned on the Internet Connection Firewall...but this kind of worm vulnerability makes me nervous. Today, someone attacks ISS's security software; tomorrow, someone takes out Microsoft's ICF.

    Similarly, end users may also be unaware that perceived slowness of their computer or Internet connection is caused by a worm, and they may reboot their computers in the hope that that will fix the problem.

    I find this problem with spyware and adware too. I recently cleaned out the computer of a family friend that was very slow and would no longer connect to the Internet. Removed a huge gob of spyware with Ad-Aware and Bazooka, and BAM! we were back online.

    Goes to show you. I'm thinking that Microsoft's security model in Windows may need to be revised, considering in XP Home at least, all users run as Administrator (root) and system services have way too many privileges.

    Makes me glad I replaced my aging NT file server with Linux/Samba.
    • So the worm infects people who are behind firewalls, and you're happy because that's what you're doing?
    • The article stated that a good number of requests came from behind NAT firewalls. Many devices like the Linksys allow you to DMZ a host, which would end up being an attack vector behind your firewall. Also, many people turn on port forwarding, which, done incorrectly, is an attack vector.
    • by Tin Foil Hat ( 705308 ) on Friday March 26, 2004 @11:38AM (#8680527)
      There is no reason on Earth that this worm couldn't have attacked Linux boxen. If this worm had been tailored to attack the recent OpenSSH vulnerability the day after it came out, many of us would have been owned immediately. How many of us have an open ssh port through our NAT devices and firewalls? The scary thing about this worm is that the authors have demonstrated an ability to attack new vulnerabilities in third-party software very quickly. In the case of the OpenSSH vulnerability (a root exploit), that would have meant that very many of us Linux users would have been affected before we could do anything about it.
  • Heh (Score:2, Funny)

    by Anonymous Coward
    [ Insert witty comment here. ]
  • by ObviousGuy ( 578567 ) <ObviousGuy@hotmail.com> on Friday March 26, 2004 @12:04AM (#8676671) Homepage Journal
    They state that the most important thing is to force users into a security mindset, and that this is near impossible. They also point out that even security-aware users may be at risk, because they can be infected before a patch for the firewall/AV software is even available.

    This leads to the conclusion that firewall/AV software should be included as part of the baseline system, whether with the operating system or as an additional package at system build time. Also it leads to the conclusion that user-assisted updates are useless and only automatic updates can effectively patch fast enough to block worms of this sort.

    This is one of the most depressing stories about the state of the Internet that I've read in a while.
    • They state that the most important thing is to force users into a security mindset and this is near impossible.

      Did we read the same article?

      The patch model for Internet security has failed spectacularly . . . Notwithstanding the fundamental inequities involved in encouraging people sign on to the Internet with a single click, and then requiring them to fix flaws in software marketed to them as secure with technical skills they do not possess, many users do choose to protect themselves at their own e
    • This leads to the conclusion that firewall/AV software should be included as part of the baseline system, whether with the operating system or as an additional package at system build time.

      Yep -- but how would that have helped here? The thing wasn't a virus spread by email -- the first thing to see the packets would have been the firewall, which is what keeled over.

      Also it leads to the conclusion that user-assisted updates are useless and only automatic updates can effectively patch fast enough to blo

    • by crimethinker ( 721591 ) on Friday March 26, 2004 @12:34AM (#8676877)
      This leads to the conclusion that firewall/AV software should be included as part of the baseline system

      That's a very good suggestion, except that in this case, the firewall software was the vulnerable component. No BlackICE, no Witty worm.

      I'm deeply troubled by this; we piss and moan about how the average windoze luser doesn't have a firewall or AV software, and then this pops up.

      Much as I would like to, I can't blame this on Microsoft. It's just sloppy programming, the sort of practice that M$ has made prevalent. There, I blamed M$ after all. Still, changing the permission model of Windoze wouldn't have helped this; BlackICE is exactly the sort of software that needs access to the network protocol stacks; it's supposed to be one of the trusted portions of the system, as compared to all those VBScript viruses that run as admin/root, but shouldn't.

      If I were designing a new CPU, I would think about including some hard-core stack protection. A no-execute bit in the MMU is a very good start, but still not bullet-proof. I'm thinking of something (with OS assistance) to disallow all access beyond the link pointer for the current function call. Every CALL sets a new boundary, and every RET pops back to the last boundary. Try to write past the boundary, and you get a machine exception. Much finer granularity than the 4K pages most 32-bit MMUs provide.

      -paul
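
      A rough software analogue of that boundary idea is the stack canary that some compilers can already insert automatically (GCC's -fstack-protector, for example). The sketch below hand-rolls the concept purely to illustrate it; it is not the hardware scheme proposed above, and real compilers control frame layout themselves, so a manually placed canary is not guaranteed to sit next to the buffer.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Illustration only: a sentinel ("canary") value is placed beside the buffer
// and checked before the function returns. An overrun that clobbers the
// canary aborts the program instead of returning through a corrupted frame.
static const unsigned int CANARY = 0xDEADBEEF;

void copy_input(const char* input) {
    volatile unsigned int canary = CANARY;  // may or may not be adjacent to buffer
    char buffer[16];

    std::strcpy(buffer, input);             // the classic unchecked copy

    if (canary != CANARY) {                 // overrun detected...
        std::fprintf(stderr, "stack smashing detected, aborting\n");
        std::abort();                       // ...refuse to continue
    }
    std::printf("copied: %s\n", buffer);
}

int main(int argc, char** argv) {
    copy_input(argc > 1 ? argv[1] : "short and harmless");
    return 0;
}
```

      Compilers that do this automatically also pick a random canary value at program start, which is what makes the check hard for an attacker to bypass.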

  • by IANAL(BIAILS) ( 726712 ) on Friday March 26, 2004 @12:05AM (#8676672) Homepage Journal
    The patch model for Internet security has failed spectacularly. To remedy this, there have been a number of suggestions for ways to try to shoehorn end users into becoming security experts, including making them financially liable for the consequences of their computers being hijacked by malware or miscreants.
    While I agree that the success of most Internet worms does indicate that the patching model is no good, come on now - there is no way that end users should be held financially liable for their computers. No matter how good an idea it might sound at first, such a concept just isn't workable.
    • by ryanjensen ( 741218 ) on Friday March 26, 2004 @12:14AM (#8676738) Homepage Journal
      A driver is responsible for the upkeep of his vehicle if his negligence causes an accident ... a property owner is responsible for its upkeep if someone is injured on his property. I don't think it's a very large leap to be able to consider a computer owner liable for its upkeep if it is used in an attack, and I don't think many in this country would object either.

      The concept would be at least as workable, in the courts, as any liability legislation is currently.

      • by jmv ( 93421 ) on Friday March 26, 2004 @12:56AM (#8677003) Homepage
        Are you willing to bet a large amount of money (or jail time) that your computer will *never* be compromised? What if a worm hits before a patch is available? If you compare it to cars, you'd have to say that you're responsible for what happens to your car even if it's been sabotaged.
        • If your car has been sabotaged, and you *know about its resultant defect*, you should be held liable. However, I think you are correct in saying that an owner should not be found negligent for unknowingly operating a sabotaged car.

          But I think your comparison is incorrect. I meant to liken the non-application of patches by computer users to the car owner who doesn't perform routine preventative maintenance on his vehicle. If a car owner doesn't replace his brakes for 45,000 miles after they first start s

      • by Flower ( 31351 ) on Friday March 26, 2004 @01:18AM (#8677111) Homepage
        A driver is responsible for the upkeep of his car but there is an assumption that the car is safe to drive to begin with when I buy it from the dealership. If it's the case that the car isn't safe there is usually a recall where I can take it in to the dealer for free and get the problem fixed. If there isn't a recall and the car isn't safe and I do have an accident then I can sue the manufacturer for selling me a defective product.

        When cars begin to become unsafe there are a variety of noticeable warning signs that I need to maintain my vehicle. The oil light will go on, the brakes will grind, sundry odors emit from the hood, the tires begin to look flat... It doesn't even have to get that far. Some dealerships will send you mail reminding you that you might need an oil change. Of course their reason for doing this is to make some cash, but it is a reminder to maintain your car, and once at the garage things like rotating tires or what-not can also come up.

        To make this short [too late], there are a variety of mechanisms in place to let the driver know he needs to maintain his vehicle that simply aren't present, or currently applicable, for a PC owner. From where I'm sitting there seems to be a great deal of wiggle room when applying the standards you propose.

      • by MyHair ( 589485 ) on Friday March 26, 2004 @01:33AM (#8677184) Journal
        A driver is responsible for the upkeep of his vehicle if his negligence causes an accident ... a property owner is responsible for its upkeep if someone is injured on his property. I don't think it's a very large leap to be able to consider a computer owner liable for its upkeep if it is used in an attack, and I don't think many in this country would object either.

        Your analogy fails on many levels, but I'm too tired to point them all out. Here's a biggie: Automobiles are highly engineered and legally regulated devices; there are safety standards to be met before you can put one on the road, and there are legal limits to how the end user can modify them. PCs and especially software don't have that kind of pre-consumer engineering.

        Another one: the roadways are public works. The internet as we use it is a collection of private agreements to communicate between points. Why don't the intermediate points share liability for passing on the attacking packets? Hell, the operators of the intermediate points are generally trained for their equipment and pay people to monitor traffic and health. (This is making a point; actually I don't want my ISP or any of their providers policing my internet connection.)
      • Yes, yes it is a large leap to any conclusion of that kind. To follow the car analogy, if someone were to steal my car and ram it into a crowded restaurant, I would not be held responsible even had I left the door open and the engine running. That is exactly what is happening with trojaned computers. It is the attackers that should be held responsible, not the poor sap whose computer got hijacked.

    • by gordyf ( 23004 ) on Friday March 26, 2004 @12:19AM (#8676770)
      That was not their conclusion. If you continued the quote, you'd see that they said much the same thing as you.

      When users participating in the best security practice that can be reasonably expected get infected with a virulent and damaging worm, we need to reconsider the notion that end user behavior can solve or even effectively mitigate the malicious software problem and turn our attention toward both preventing software vulnerabilities in the first place and developing large-scale, robust and reliable infrastructure that can mitigate current security problems without relying on end user intervention.
    • Sounds like Russ is on a rampage again. Russ Cooper (Doctor, as it were) has a paper on this topic where a 'fine' would be levied on users who were unwitting victims of computer viruses and worms.
      Example: a user opens an attachment that looses a worm on the Internet; they are fined. When I read this I immediately dismissed Russ from my list of intelligent people.
      He has a site somewhere (can't find it at the moment) where he was calling for comments on his "Internet Penalty Plan".
      According to this plan an in
  • by Anonymous Coward
    Interesting. An article at zdnet [zdnet.com] suggests that Witty was in fact a prototype, and could be the first example of cyber-terrorism. The combination of
    a) the destructive payload,
    b) the time from disclosure to deployment, and
    c) the large number of ground-zero hosts
    suggests capabilities far beyond those of an autistic 17 year old in his parent's basement. Could this be the start of the Internet-based Al Qaeda action that anti-terrorism experts have so long stated was coming?
    • by Anonymous Coward
      My god he's right! This is the start of the Al Qaeda internet terrorism initiative that non-ratings-concerned-non-sensationalist Fox News and MSNBC warned us about! Emmanuel Goldstein is their leader and he will be issuing a communique to the Ministry of Truth shortly. Everyone should PANIC!

      Immediately put on your gas-masks and have your anthrax treatments ready! But, do not disconnect your machine from the network. Continue buying and supporting the economy. If you don't, THE TERRORISTS WIN.
      • You know what this means -- it's up to us Lunix nerds to save civilization! Just like Frodo and Sam!

        You guys go ahead. I'll catch up with you as soon as my 'emerge -u kde' finishes.

  • by neoThoth ( 125081 ) on Friday March 26, 2004 @12:06AM (#8676679) Homepage
    The time to worm creation on this one was almost a little TOO quick. It would almost suggest that the author of the worm had inside knowledge. It's not entirely outside the realm of reason that the vulnerability leaked from ISS before the announcement was made.
    • by Yakman ( 22964 ) on Friday March 26, 2004 @12:09AM (#8676701) Homepage Journal
      It could also be that whoever wrote this worm found the vulnerability independently and had been writing code to exploit it; when he saw the security advisory go up, he released it ASAP, before people had a chance to patch their boxes. If the vulnerability hadn't been announced, the worm might have been released later with a different payload.
    • by InfiniteWisdom ( 530090 ) on Friday March 26, 2004 @12:12AM (#8676725) Homepage
      I guess the writer had written the payload in advance and waited for an appropriate vulnerability to show up to use as a vector. Generating exploits isn't rocket science... in fact there are automated tools out there that will generate exploits for common holes like buffer/stack overflows.

      There is also the chance that the author discovered the bug either himself or through "black hat" groups before the advisory was put out.
  • by citking ( 551907 ) * <jay&citking,net> on Friday March 26, 2004 @12:10AM (#8676707) Homepage
    On Friday March 19, 2004 at approximately 8:45pm PST, an Internet worm began to spread, targeting a buffer overflow vulnerability in several Internet Security Systems (ISS) products, including ISS RealSecure Network, RealSecure Server Sensor, Proventia, RealSecure Desktop, and BlackICE. Emphasis mine.

    Man, I am so used to seeing IIS in a security vulnerability I had to give it a second glance. I guess people shouldn't use those letters in software abbreviations anymore. It's becoming bad luck!

    Seriously, worms like this that damage computers are very un-cool. As a freelancer I got to see this on only a few machines, and with gratuitous use of the recovery console, fixmbr, and (alas) one format-and-reinstall, I was able to fix them all.

    While doing this onsite at a realty company I asked what they used as a firewall. Seeing blank stares from them all wasn't the highlight of the day. Not having a hardware firewall handy it was quite fun to race against the vermin as I downloaded patches off of the net on a virgin XP install! I actually thought I heard giggling echoing from the DSL modem as the DL percentage ticked higher slowly but surely....

    • two things (Score:3, Insightful)

      by Daltorak ( 122403 )
      1) Internet Information Services' track record has improved dramatically in the last couple of years... the last security patch for it was in May of last year, and the one before that was in 2002.

      2) Why didn't you enable XP's firewall before connecting to the Internet? That's a pretty effective way of preventing your machine from getting infected while collecting the various updates.

    • ...as I downloaded patches off of the net on a virgin XP install

      Windows Update is nice for keeping up to date with all the patches for windows as they are released. But using it to patch a series of machines doing fresh installs is silly.

      At the height of the Blaster worm, I had to reinstall Windows for a friend of mine. I connected to the net in order to update Windows XP, and her machine was reinfected within five minutes of connecting: before the machine could be patched.

      I learned my lesson. Here [windows-help.net]

    • XP has a built-in firewall you could have enabled BEFORE you connected the ethernet cable

      more than enough protection during your race to download patches.
      frankly, enough protection for conscientious users ALL the time
  • by flopsy mopsalon ( 635863 ) on Friday March 26, 2004 @12:11AM (#8676712)
    Another day, another virulent internet worm utilizing an unaccounted-for "buffer overflow" to propagate itself throughout the internet. Users suffer and system administrators grind their teeth to clean out their networks.

    By now I am sure it has been noticed that the "buffer overflow" is a very common "exploit" used by these internet worms to infect machine after machine. One simple way to address this problem would be to replace these vulnerable "buffers" with something that will not overflow, perhaps something spongy and highly absorbent. Isn't anyone working on a solution along these lines? You never seem to hear about any progress being made. Honestly, sometimes it seems like no one in the technology industry has any common sense.
    • ZDNet UK [zdnet.co.uk] had a preview of Windows XP SP2 recently (see link) that included discussion of the pack's implementation of software-based overflow protection. It also mentions that 64-bit processors include this protection in hardware (NX or "no eXecute"). So, there is a little progress being made.
  • Net Telescope (Score:3, Interesting)

    by mmca ( 180858 ) on Friday March 26, 2004 @12:14AM (#8676734) Homepage

    Network Telescope

    The UCSD Network Telescope consists of a large piece of globally announced IPv4 address space. The telescope contains almost no legitimate hosts, so inbound traffic to nonexistent machines is always anomalous in some way. Because the network telescope contains approximately 1/256th of all IPv4 addresses, we receive roughly one out of every 256 packets sent by an Internet worm with an unbiased random number generator. Because we are uniquely situated to receive traffic from every worm-infected host, we provide a global view of the spread of Internet worms.


    They have 1/256th of all the IPv4 space?!?
    That's a lot of IPs that could be freed up for other purposes: 1/256th of the 2^32 IPv4 addresses is a /8, roughly 16.8 million addresses.

    It's great that they are doing this, and it is an interesting project. But I've been hearing about the lack of IPs for the last 5 years, and this one group has 1/256th of them.

    ------------
    www.ComicSmash.com [comicsmash.com]
  • by SmallFurryCreature ( 593017 ) on Friday March 26, 2004 @12:14AM (#8676740) Journal
    'Cause Linux and BSD sure ain't safe against this. Buffer overflows ain't nothing new, and this analysis shows there is no security in being a small target.

    Might be time to make a security model that stops a firewall application from writing to the hard disk or deleting files. Why should it, after all? Or limiting just how many emails a user can send; how often do you send thousands in a minute?

    Perhaps even a delete mechanism that doesn't allow destruction of data without a password.

    Paranoid? 12,000 machines just went Poof in half an hour with this virus, if the story tells it right. Doesn't exactly cheer me.

    • Though you're right in the respect that a stock distro of Linux or *BSD is just about as secure as Windows (perhaps a bit more), there is simply more you CAN do to secure Linux, versus Windows, in which almost all security has to be installed separately.

      You can massively limit the damage done by a worm in Linux simply by running all processes that leave a port open in a chroot jail, or by doing so as a less privileged user. This is one of the many simple solutions available, while in Windows, it's not so
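
      As a rough sketch of the chroot-plus-privilege-drop approach mentioned above, here is what a network daemon can do at startup using plain POSIX calls. The jail directory "/var/empty" and the user "nobody" are illustrative choices, not requirements.

```cpp
#include <cstdio>
#include <unistd.h>
#include <pwd.h>

// Confine the process to an empty directory and give up root, so that a
// later compromise can no longer touch files outside the jail or act as root.
bool drop_into_jail(const char* jail_dir, const char* unpriv_user) {
    struct passwd* pw = getpwnam(unpriv_user);           // look up uid/gid
    if (!pw) { std::perror("getpwnam"); return false; }

    if (chroot(jail_dir) != 0) { std::perror("chroot"); return false; }
    if (chdir("/") != 0)       { std::perror("chdir");  return false; }

    // Drop the group first, then the user; once setuid() succeeds we can
    // no longer change groups, so the order matters.
    if (setgid(pw->pw_gid) != 0) { std::perror("setgid"); return false; }
    if (setuid(pw->pw_uid) != 0) { std::perror("setuid"); return false; }
    return true;
}

int main() {
    if (!drop_into_jail("/var/empty", "nobody")) return 1;
    // ... open sockets and handle traffic here, without root and without
    // filesystem access outside the (empty) jail ...
    return 0;
}
```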

  • Holy CRAP (Score:5, Insightful)

    by Saint Aardvark ( 159009 ) * on Friday March 26, 2004 @12:15AM (#8676746) Homepage Journal
    Jesus Christ, if you read that and weren't frightened, you're dead inside.

    The highest packet rate they saw was more than 23,000 packets per second, sustained for at least one hour. The worm came out one day after eEye announced the vulnerability. It just went ahead and started erasing the hard drive, rather than grepping for passwords or credit card numbers. And this thing targeted and 0wned people who cared about the security of their computer!

    If you've read nothing else, check out the conclusion:

    It is both impractical and unwise to expect every individual with a computer connected to the Internet to be a security expert. Yet the current mechanism for dealing with security holes expects an end user to constantly monitor security alert websites to learn about security flaws and then to immediately download and install patches. The installation of patches is often difficult, involving a series of complex steps that must be applied in precise order.

    The patch model for Internet security has failed spectacularly. To remedy this, there have been a number of suggestions for ways to try to shoehorn end users into becoming security experts, including making them financially liable for the consequences of their computers being hijacked by malware or miscreants. Notwithstanding the fundamental inequities involved in encouraging people sign on to the Internet with a single click, and then requiring them to fix flaws in software marketed to them as secure with technical skills they do not possess, many users do choose to protect themselves at their own expense by purchasing antivirus and firewall software. Making this choice is the gold-standard for end user behavior -- they recognize both that security is important and that they do not possess the skills necessary to effect it themselves. When users participating in the best security practice that can be reasonably expected get infected with a virulent and damaging worm, we need to reconsider the notion that end user behavior can solve or even effectively mitigate the malicious software problem and turn our attention toward both preventing software vulnerabilities in the first place and developing large-scale, robust and reliable infrastructure that can mitigate current security problems without relying on end user intervention.

    I was thinking the other day about all the precautions you need to go through with a Windows box just to get a new install up-to-date; I was smug, and thinking that a Windows box without a firewall was like a person without a skin: no protection from infection, no way of stopping the most basic of attacks.

    And now reading this I feel that smugness just draining in a really hideous way. I use Linux and FreeBSD...what of it? I realize there is still a big difference between Unix and Microsoft, between a local and a remote exploit, between an ordinary user account and root. But I'm no longer convinced those differences are enough: there are a thousand programs available on my machines, and all that stands between me and 0wnership is a programming error and someone who decides that, you know what, seven thousand hosts is worth it.

    Nothing more to say at this point...I'm still staring uneasily at the blinking cable modem lights, wondering when it'll be my turn.

    • I care about security of my computer. But most windows host-based firewalls aren't focused on security, just creating the illusion of it.

      Suggestion - back up often.

      BTW a windows 95 box with the windows kernel update and MS Client turned off is pretty safe from network attacks - it has zero listening services. Only issue is if user runs malicious code.

      Stuff that erases the harddisk may be less to worry about than the more sneaky stuff that doesn't.

      Maybe this worm was written to discourage people from usi
    • Re:Holy CRAP (Score:5, Insightful)

      by astrashe ( 7452 ) on Friday March 26, 2004 @01:04AM (#8677043) Journal
      I don't know. This is scary, in a sense. But there's a lot of risk in the world, and you just have to live with it. If my computer gets wiped off, it's not the end of the world.

      I know that everyone isn't in a position to say that -- some people are running banks, or whatever. But most people can say it.

      We drive cars, even though cars crash and people die in them. Another person can crash into you even if you're doing everything right, and you'll die. We live and work in buildings, even though we know that there are fires every day in large cities. Sometimes people die in fires. You lock your doors, and you make a good faith effort to keep the bad guys out, but if someone really wanted to get in, they could.

      You just have to deal with uncertainty in life.

      Your computers are never going to be completely safe. The sun will come up tomorrow anyway.

      As a practical matter, people who take reasonable precautions *usually* come off pretty well with computers. They can hold on to their data and keep it out of other people's hands. There's no guarantee that will always be the case, but it's been true until now.

    • Adapt (Score:3, Insightful)

      by gad_zuki! ( 70830 )
      Instead of worrying about things we can't change (1-day/0-day exploits), let's focus on things we can change.

      Here are some hypotheticals and not-so hypotheticals.

      Are there any products that will ghost my drive onto another drive inaccessible to the OS by ordinary means every day?

      How can we teach people and developers the wonders of encryption so their credit card numbers and passwords can't be stolen?

      What will it take for hardware and OS makers to find a solution to most/all buffer overflows?

      Why are non-
  • by benna ( 614220 ) <mimenarrator@g m a i l .com> on Friday March 26, 2004 @12:16AM (#8676751) Journal
    This is the best-named worm I've ever seen. When I first read headlines about it they said things like "witty worm attacks firewall." It took me a while to realize that was the name of the worm and not a judgement by the reporter (no, I didn't read the articles).
  • KneeJerking (Score:5, Interesting)

    by minusthink ( 218231 ) on Friday March 26, 2004 @12:19AM (#8676769)
    Since I deal more with our internal software/services (as opposed to dealing with the customers) I don't really have to fix anything other than wipe a machine or two. However, for me, the worst part of this is the kneejerking that occurs right afterward.

    Now that this worm hit, management is crying for more security without really thinking it through. Now all staff machines need to be behind hardware firewalls. ALL machines. Linux, Solaris (95% of our boxes), Windows. Not such a big deal, except they bought us cheapo Netgear cable/DSL firewalls that I'm convinced will do nothing more than ipf/iptables to stop a determined cracker. These Netgear firewalls stop me from mounting NFS from anything; they have no trusted-hosts options. In fact, I can only port forward from everywhere, so in a sense it is lowering my security.

    Does anyone else experience reactionary steps like this from the PHBs?

    (Thanks for reading my rant :)
  • analysis of the witty worm has revealed that it is wittier than most posts on slashdot
  • by gmuslera ( 3436 ) on Friday March 26, 2004 @12:41AM (#8676904) Homepage Journal
    .. this analysis shows the impact on the Internet as a whole of a worm that wasn't targeting Microsoft software, wasn't very widespread, in fact targeted security/firewall software, and whose patch/advisory was only a day old.

    Under those conditions, if a similar flaw were found in, e.g., iptables, ssh, bind, apache or postfix, it could have a similar impact, whether the OS is Linux, FreeBSD, MacOSX or whatever you consider "safe" and widely enough used.

    Of course, if the same happened to really popular software (clients are more popular than servers, we know the effect of Outlook worms, and even default-installed servers like IIS, or maybe even Win XP SP2's bundled firewall), the effect would be much worse, but no OS connected to the Internet is safe against this. Maybe release policies will change, putting the "when it's ready" release date over the "when the marketing people say" one, in light of how widespread this kind of thing is.

  • A niche Warhol worm (Score:4, Interesting)

    by theCat ( 36907 ) on Friday March 26, 2004 @12:48AM (#8676943) Journal
    We tend to think of the M$ monopoly, and the consequent homogeneous pool of hosts, as being the reason for the rapid spread of worms. Actually, the monopoly means that most viruses will be targeted at that platform because it is the obvious target, but a virus well targeted even at a niche platform like ISS can take off, because the Internet itself is now almost completely transparent.

    What this suggests is that the combination of 1) commonly available bandwidth and 2) CPU speed is now more than sufficient for a virus to find almost all of the hosts it needs, wherever they are on the Internet. When a few early, fast hosts can spew 11,000,000 pps to random IP addresses, it doesn't take long to find what one is looking for.

    No doubt this is part of the reason for the observation that when 2% of Windows sysadmins fail to patch for a known vuln, then the next worm to come along and exploit that vuln has a field day. 2% of a really big number is in turn a lot of hosts, millions of Windows hosts for example.

    And a million of anything, be it Mac OSX or NetScreen or Checkpoint or BeOS or OS/2 or Amiga or anything, is fair game when a smartly written virus can get them all.

    I guess I'll have to go back and review my Mac for system updates.
  • by LostCluster ( 625375 ) * on Friday March 26, 2004 @12:50AM (#8676956)
    What's most disturbing to me is that this worm appeared on about 200+ distinct hosts at such a rate of speed that it could not have done so that fast using its main random-scanning method. There clearly was some plan to pre-seed the worm into at least that many places before the worm started to spread on its own.

    I doubt whoever programmed this worm had legit access to that many well-distributed computers... so it appears that some earlier hack occurred before this worm was released, which effectively took about 12 hours off of the reaction-time clock before the white hats even realized what was hitting them. Are we about to see a rash of compound attacks where one worm has a second worm baked in?
  • by wintermute42 ( 710554 ) on Friday March 26, 2004 @12:52AM (#8676976) Homepage

    I'm a long time UNIX/Linux hacker (I first programmed on UNIX on a VAX). I've written a lot of C/C++ code. But long ago I used Pascal and more recently I've been using Java more.

    Both Pascal and Java do range checking. That is, they check the bounds of arrays (buffers) when they are accessed. This means that about half of the security exploits (including the one targeted at BlackICE) would not exist if our software base were implemented in languages with bounds checking.

    The original reason that bounds checking was not implemented in C was that the early compilers were very basic (little in the way of optimization) and bounds checking overhead slows execution. Bounds checking overhead can be reduced through optimization, but Ritchie's original C compiler only did simple optimization.

    Another problem is that in C, pointers and arrays are more or less interchangeable, so bounds checking becomes difficult or impossible to enforce in all cases (C provides way too much pointer flexibility when it comes to enforcing bounds checking).

    If we were to add up the cost of all of the buffer overflow security attacks it must run in the billions. So the "power" of the C programming model has extracted a pretty high price. This puts an interesting retrospective slant on Brian Kernighan's 1981 article Why Pascal is Not My Favorite Programming Language [lysator.liu.se].

    I have to confess that I would not go back to using Pascal. But native compiled Java, with Java's bounds checks, would be far safer than C++. And it would result in software that is more robust against security attacks.

    Yes, we can all learn to use fgets, strncpy and other safer library routines. But this only makes our code safer; it does not provide complete protection against buffer overflow attacks. So perhaps it is time to reconsider the programming languages we are using. Perhaps unrestricted pointers and no bounds checking have become too costly.
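
    As a quick illustration of the difference (a sketch, not code from the article): the C-style copy below is only as safe as the size the programmer passes in, while a checked container turns the same out-of-range access into an exception instead of silent memory corruption.

```cpp
#include <cstring>
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    // C-style: strncpy stops at the size we give it, but nothing verifies
    // that the size is right, and it does not guarantee NUL termination.
    char raw[16];
    std::strncpy(raw, "only as safe as the size argument", sizeof(raw) - 1);
    raw[sizeof(raw) - 1] = '\0';

    // Bounds-checked: the same mistake becomes a catchable exception.
    std::vector<char> checked(16);
    try {
        checked.at(64) = 'x';                     // index past the end
    } catch (const std::out_of_range& e) {
        std::cerr << "caught: " << e.what() << '\n';
    }
    return 0;
}
```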

    • I think bounds checking should be a compile-time option. One of the reasons I switched to C++, actually, was the ability to wrap [] (via templates) to automatically get bounds checking without relying on the compiler to do it for me. The overhead of bounds checking is not negligible for numerical work, so while this is a boon for debugging, it's nice to be able to turn it off for optimization once the code is "working", especially as we're not all writing daemon code (i.e. if I'm mucking about doing linear algeb
    • by Minna Kirai ( 624281 ) on Friday March 26, 2004 @03:10AM (#8677572)
      But native compiled Java, with Java's bounds checks, would be far safer than C++.

      Or how about native compiled C++, with bounds checks?

      There's nothing about C++ that means you can't have bounds checking! The specification allows for undefined behavior when an array is accessed incorrectly. The compiler author can decide for himself what that undefined response could be. It might be an invalid access (like most current compilers do), but there's no reason it couldn't hit a boundary-check and abort the program.

      Assorted add-in libraries to C++ compilers do this. They're not very popular, of course. But if programmers cared about safe insurance against memory overruns, they could achieve it without switching languages.
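
      A stripped-down sketch of that wrap-[] idea, with the check removable at compile time (the class and macro names here are made up for illustration):

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Minimal checked-array wrapper: operator[] validates the index unless
// CHECKS_OFF is defined, so the cost can be compiled out for release builds.
template <typename T, std::size_t N>
class CheckedArray {
public:
    T& operator[](std::size_t i) {
#ifndef CHECKS_OFF
        if (i >= N) {
            std::fprintf(stderr, "index %zu out of range (size %zu)\n", i, N);
            std::abort();   // fail loudly instead of corrupting memory
        }
#endif
        return data_[i];
    }
private:
    T data_[N];
};

int main() {
    CheckedArray<int, 8> a;
    for (std::size_t i = 0; i < 8; ++i) a[i] = static_cast<int>(i);
    std::printf("a[3] = %d\n", a[3]);
    // a[42] = 0;  // with checks on, this aborts instead of scribbling on memory
    return 0;
}
```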
  • by lone_marauder ( 642787 ) on Friday March 26, 2004 @12:56AM (#8676996)
    Witty spread through a population almost an order of magnitude smaller than that of previous worms, demonstrating the viability of worms as an automated mechanism to rapidly compromise machines on the Internet, even in niches without a software monopoly.

    How many Linux, BSD, and Mac machines were infected?
    • by MyHair ( 589485 ) on Friday March 26, 2004 @01:56AM (#8677303) Journal
      How many Linux, BSD, and Mac machines were infected?

      Don't pretend that those haven't had remote root exploits before. (Well, not sure about Mac.) This incident seems to demonstrate that a destructive worm can be deployed in short order and rapidly spread even when the target population is in a tiny minority of internet hosts.

      That prompted me to insert a bridging Linux firewall, and makes me want to learn to tighten it up even further. (Blocking 1-1024 now plus ports like 3128 & MSSQL; I want to block all unwanted incoming connections but am as yet unsure about Freenet, Kazaa Lite, BitTorrent and Quake3 inbound needs.)

      (BTW, used LEAF uClib Bering for the bridging firewall. Axed the Shorewall and htb.init and put my own scripts in, though, due to issues with htb.init.)
  • by Animats ( 122034 ) on Friday March 26, 2004 @12:57AM (#8677004) Homepage
    Virus writers are now developing a tactical doctrine. This suggests that future viruses will be more effective, not for technical reasons, but because the attacks will be organized more like military attacks. We now see virus writers getting inside the OODA cycle of the defenders. This is consistent with modern military tactical doctrine. Read MCDP-1, Warfighting [usmc.mil]. This short Marine Corps publication tells you how to think about war and how to win it. This revolutionized USMC doctrine, which previously focused on heroically advancing no matter what the opposition.

    A key point of modern tactical doctrine is to act faster than the opposition can react. Special operations types talk about the "period of vulnerability", which begins when the defender notices an attack and ends when the attacker achieves relative superiority. Most attacks fail during the period of vulnerability. So modern tactical doctrine says that it's worth huge amounts of effort and money to cut that time down. This is why special ops people rehearse and train to a level that seems unreasonable. It's not to make them good, although it does. It's to make them fast, so they get through those first seconds and minutes at the beginning of an attack before the defenders can react.

    That's exactly what we saw with this worm. The attack was launched in a way that rendered the usual strategies of anti-virus companies ineffective. Anti-virus companies, (and Microsoft), have known response and patching cycle times. The creators of this worm got inside that cycle time, by building both a fast-propagating worm and by starting it from multiple points.

    Military doctrine gives us some insights on what to expect next. This worm involved a campaign, a series of battles fought to achieve a goal. One attack acquired machines to be used as bases in a later attack. That's standard doctrine. Other relevant military concepts include mutual support, feints, and diversions. We are starting to see worms and viruses that support each other, so that if one is removed, another attack lets it back in. We may see feints and diversions, where a big noisy attack is launched to divert attention from something more subtle.

    Another doctrinal concept is that of combined arms. So far, virus writers generally haven't utilized other hacking techniques, like dumpster diving, social engineering, or wiretapping. That may change.

    We may well see an attack that wipes out most of the Internet-connected Windows machines in the world in a single day.

  • Security defined (Score:5, Interesting)

    by mcrbids ( 148650 ) on Friday March 26, 2004 @01:04AM (#8677041) Journal
    I think we all have to come to terms with the fact that our current state of Computer Science is not up to the task of dealing with the Internet as it is becoming.

    Linux/BSD has a somewhat better security record than MSFT, but even after all the auditing effort put out by the guys over at BSD/OpenSSH, there have *still* been a number of security vulnerabilities recently!

    The problem is not being viewed in the proper light. Something like a buffer overflow should not result in a compromisable host! Something like a misquoted SQL statement should not result in an SQL injection vulnerability!

    Applications and programming environments need to be structured and developed with the understanding that people make mistakes and there needs to be allowance for that.

    You can't expect a group of programmers to maintain 50,000, 500,000, or 5,000,000 lines of code without there being mistakes in there.

    It just cannot be done.

    So languages, programming techniques, and infrastructure need to be developed that truly prevent the "bug == severe security risk" situation.

    Really, as much as we all laud their security record, Microsoft is in a good position to trounce the OSS crowd if they can come up with a software language and security system that allows for programming mistakes.

    The answer is NOT to make sure you input validate *everything* - although input validation is always a good thing.

    The answer is to develop a system where common programming mistakes do not result in a security issue.

    Get used to it. People are people. They make mistakes. We either cease being human, or develop a system that makes allowances for our humanity.

    Can we do it?
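
    On the SQL-injection point in particular, bound parameters are one existing example of a mechanism that takes the mistake out of the programmer's hands. Below is a sketch using SQLite's C API; the "users" table and "name" column are invented for illustration, and any database library with prepared statements works the same way.

```cpp
#include <cstdio>
#include <sqlite3.h>

// With a bound parameter, the user's input is passed as data and never
// spliced into the SQL text, so a stray quote cannot become an injection.
bool find_user(sqlite3* db, const char* username) {
    const char* sql = "SELECT id FROM users WHERE name = ?;";
    sqlite3_stmt* stmt = nullptr;

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) {
        std::fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
        return false;
    }
    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);

    bool found = (sqlite3_step(stmt) == SQLITE_ROW);
    sqlite3_finalize(stmt);
    return found;
}
```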
  • by rice_burners_suck ( 243660 ) on Friday March 26, 2004 @01:08AM (#8677065)
    Cooperative Association for Internet Data Analysis (CAIDA)

    In other news, the Action League department of the Cooperative Association for Internet Data Analysis (AL CAIDA) today announced new threats of technological terrorist attacks. Among other things, they threatened to use illegally acquired funds to purchase the Microsoft Windows source code, insert viruses directly into the operating system, and release them to the unsuspecting world. The most frightening of their threats was to implement a technology called Windows Scripting Host, which would execute malicious code upon reception in an email inbox. Such a technology would allow viruses to spread faster than with earlier diskette-based methods.

    Oh, wait... That's already been done for them. Back to the black hat drawing board with these computer crime organizations.

  • by calebb ( 685461 ) * on Friday March 26, 2004 @03:39AM (#8677659) Homepage Journal
    In light of this worm, I wonder if Microsoft is going to make any changes to the new Windows XP SP2 firewall? (i.e., a self-monitoring 'heuristic' [wikipedia.org] process that watches for 'exploited-process-like behavior.')
  • About a week ago, we had a vulnerability announced in OpenSSL [slashdot.org]. I imagine most of us patched pretty quickly. But the Witty worm appeared within twenty-four hours of the announcement [caida.org] of the vulnerability it attacked, and it infected 95% of vulnerable machines within 45 minutes [caida.org].

    Yes, it's funny that it was a Windows firewall that was attacked. Yes, it's especially funny that it was an expensive Windows firewall that was attacked. Laugh.

    But also think.

    This could just as easily have been us. From my root logs, I see I patched my servers for the OpenSSL vulnerability on Sunday the 21st, which was four days after it had been announced [us-cert.gov]. If the Witty worm had attacked OpenSSL, it would have got me. I suspect it would get most of us.

    Linux (or BSD, or whatever) is not immune to this sort of attack. On the contrary, we're just as vulnerable as anyone else. Those of us who administer public-facing servers have got to learn to be still more cautious, and still more proactive about fixing holes as they are announced.

    • by Phragmen-Lindelof ( 246056 ) on Friday March 26, 2004 @05:05AM (#8678003)
      How is a DoS attack anything like overwriting a hard drive? This is FUD.
      From US Cert [us-cert.gov]:
      II. Impact
      An unauthenticated, remote attacker could cause a denial of service in any application or system that uses a vulnerable OpenSSL SSL/TLS library.
      • He isn't saying this specific vulnerability was the one that could have done it. He's saying that if a vulnerability did come along that could enable someone to do it, he would not have patched until it was too late.

        I wouldn't have either possibly, the point being you have to be sure that people can't get to your boxes like that. Either by patching or having layers of abstractions to stop it from happening. Most likely both.

        It's more of a hypothetical at this point, but saying "it will never ha
  • by Alex Belits ( 437 ) * on Friday March 26, 2004 @06:29AM (#8678285) Homepage
    ...anything that is called a "firewall":

    1. Should NOT contain any attack analysis. The only attack that any security software not in the hands of a security researcher has a legitimate reason to "analyze" is an attack that has already succeeded, where the user is recovering from the destruction caused by it. Announcing "prevented" attacks or modifying the host's response to "suspicious" data is at best a useless toy and at worst a target for a real attack (though most often it's somewhere in the middle: a nuisance that reduces reliability). Keep it simple, stupid!

    2. Should be separated from the host that it protects by at least a virtual machine and (better) be on a separate device. Then the worst that can happen in the case of a firewall compromise is that the firewall will stop performing its functions. Running a "firewall" on the "firewalled" host is the equivalent of a person hiring himself as his own bodyguard.

    3. If running on the "protected" host, it should be passive, and merely prevent other software running on that host from receiving packets from the Internet, even if that software listens on ports that, the author believes, should not be open. Still, calling this a "firewall" stretches the definition way too far.

    The original meaning of a firewall is a wall in a building that prevents fire from spreading when the building is already on fire; the firewall acts as a barrier to its spread. It does not make a building non-flammable, and its design expects the building to contain flammable material, yet it prevents damage from spreading. A network firewall does something pretty close to this: it expects vulnerable hosts to be on either side of it, and merely reduces the probability of a successful attack from the "external" to the "internal" network, yet being relatively simple, it is itself difficult or impossible to attack. Having a "firewall" full of "flammable" bells and whistles, sitting in the middle of a system that it assumes to be vulnerable, is a very, very wrong kind of design.
  • by LuckyStarr ( 12445 ) on Friday March 26, 2004 @07:01AM (#8678389)
    Granted, many hosts run the same OS (Linux, Windows, whatever) and the same binaries. Even if you compile the source from scratch, the resulting binary is likely to be identical to other binaries on other machines.

    This leads to a situation where malicious code can rely on things like stack position and such, enabling it to insert its code at a predictable location.

    Idea:

    Is it possible to modify the compiler or binary format to gather some unique information from the host it is running on and modify the binary in a way that it behaves in a unique way on this machine?
    For example, in a way so that malicious code cannot predict the position where it can insert itself, resulting in a crash rather than a compromise of the machine.

    Pros:

    - All malicious code would be obsolete if it doesn't know the "secret" of the machine and the method it uses to "scramble" its binaries and/or its memory.
    - All remote/local exploits in any form would be converted to a DoS, which I think is not as dangerous as a compromise.

    Cons:

    - Would presumably make debugging of programs even worse than it is now.
    - Insert "You stupid *%@&, you don't understand" here.

    Please reply, as I feel that I may have missed something important.

    --
    LuckyStarr
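
    What is described above is close to what has become known as address space layout randomization (ASLR), plus per-build binary diversity. A quick way to see the effect on a system that randomizes layout is to print a few addresses and run the same binary twice (a sketch; whether the values actually change depends on the OS and on how the binary was built):

```cpp
#include <cstdio>

static void marker() {}

int main() {
    int  on_stack = 0;
    int* on_heap  = new int(0);

    // Under address space layout randomization these values differ from run
    // to run, so injected code cannot rely on hard-coded addresses; without
    // randomization they come out the same every time.
    std::printf("stack variable: %p\n", static_cast<void*>(&on_stack));
    std::printf("heap object:    %p\n", static_cast<void*>(on_heap));
    std::printf("function:       %p\n", reinterpret_cast<void*>(&marker));

    delete on_heap;
    return 0;
}
```

    The debugging downside mentioned in the "Cons" list is real: addresses in crash reports from randomized processes are harder to compare across runs.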
  • by leereyno ( 32197 ) on Friday March 26, 2004 @10:01AM (#8679531) Homepage Journal
    I spent most of yesterday rebuilding my Windows 2000 system at work. I did a raw copy of my windows partitions to a second drive using dd under Linux before I started the rebuild so I was able to preserve much of my data, but far from all of it. My outlook .pst file is the most painful loss so far, and who knows what else I'll find damaged beyond repair before I'm done.

    Once upon a time I would be furious about this. Nowadays I've come to expect it. It seems we live in a world where sociopaths are given free rein to harm others without penalty or consequence. Worms like this are concrete proof of the existence of genuine evil. What kind of a person would create something for the sole purpose of ruining other people's computers? Other people who they don't know and who have never done anything to hurt them? I'll tell you what kind, the kind I'd kill in a cold second. I hope and pray that they find the people behind this, and that they are in a place where our law enforcement can get at them. The best thing would be just to take them out someplace and shoot them, but short of that a nice long prison sentence will suit me just fine.

    This worm has convinced me of the need to increase the steps we take in fighting people like this. The model where we work to protect our systems just doesn't work. Locking your door and windows and pulling the shades may keep an intruder out of your house most of the time, but it doesn't eliminate that intruder. It is far better to trap and kill a rabid animal than it is to simply put up barbed wire around your house. It is time that the would-be victims of these crackers went on the offensive. You wouldn't just stand there if someone was trying to beat you up. You'd fight back and if possible make sure your attacker hurt badly enough that they wouldn't be attacking anyone else anytime soon.

    Crackers are not a computer problem, they are a people problem. If computers didn't exist they would find some other way to be destructive and malicious. Crackers are no more a computer problem than carjackers are a problem with your car. The only difference is that carjackers run the risk of getting shot by their would-be victims and/or being sent to prison. Crackers essentially operate with impunity. The only way the cracker problem is going to be effectively handled is to make that change.

    If I ever find out who is behind this worm and I'm in a position to do something about it... heaven help them because it will take an act of God to save them from me.

    Lee
