When Will The Next Slammer Strike? 419

scubacuda writes "Business Week has an article on how the Slammer worm demonstrates just 'how vulnerable the Internet remains': MS's own DBs were affected, telephone/ATM/etc. services were knocked out, and if the worm had struck only 48 hours later (blocking investors' trading, 911 calls, and banking services), there could have been a 'virtual Net shutdown.' Vincent Weafer, director of the computer-security outfit Symantec's Anti-Virus Response Center (SARC), says that the likelihood that a Slammer-style worm will hit at a more vulnerable moment is high."
This discussion has been archived. No new comments can be posted.


  • by ContemporaryInsanity ( 583611 ) on Sunday February 02, 2003 @03:20PM (#5211050)
    The same MS that didn't apply their *own* patches ?!?

    Hmmm...
    • The same unpatched Microsoft networks, even as Howard Schmitt was so recently quoted dismissing as irresponsible those who failed to apply the 6-month-old patch?

    • Maybe they know the patches have more holes than they fix...


      Any mission-critical app simply shouldn't be on an MS system. They don't do what they say they do (Outlook 2000 can't even get sync over e-mail right given a dedicated in-house POP3 server) and charge you for tech support when you want to figure out how to work around their fucked-up code.

    • by BigBlockMopar ( 191202 ) on Sunday February 02, 2003 @04:00PM (#5211275) Homepage

      The same MS that didn't apply their *own* patches ?!?

      The problem that I have is, even though I don't run any Microsoft software, their incompetence keeps on screwing me around and costing me productivity.

      I get hundreds of e-mail viruses per day, owing partly to incompetent users, but also partly to incompetent Outlook programmers.

      At the height of Code Red, I was getting hundreds of hits per day to my webserver.

      That last worm effectively shut down portions of the Internet.

      Now, here's the problem. If I'm driving down the road, and a Hyundai's brakes fail and cause it to run a red light and plow into the side of me, it'll piss me off, but it's a quirk, and shit happens.

      If, every couple of months, a Hyundai's brakes fail and I get hit, pretty soon, I'll start to get very pissed off, not just with the idiots who drive Hyundais, but also with Hyundai itself.

      This has gotten to be utterly ridiculous. We have to find some way of holding Microsoft accountable for their fucking ineptitude.

      • by ejaw5 ( 570071 ) on Sunday February 02, 2003 @04:51PM (#5211536)
        That's a great analogy... I'll add this, though:
        Investigations from the NTSB and all will force Hyundai to recall all their affected cars and fix the brake problem. Don't expect such actions against Microsoft.
      • They are the ones that *propagate* this crap. This includes most any other 'known' virus/worm/trojan.

        While I agree Microsoft's track record is not good, no one is perfect.

        Especially in this case, as there WAS a fix... just no one bothered to apply it. So you can't blame the messenger this time. (And yes, they should have applied their own patch, and that failure IS unacceptable, but again many, many people didn't either, and are equally to blame for the massive troubles.)

        Yes there are *plenty* of other times you can blame Microsoft, but then again, you can *blame* other organizations ( OSS too ) as well for missing a hole out of potentially millions of lines of code.

        Just be realistic, bashing one company isn't going to help any. ( and no I'm not a Microsoft fan, I'm just smart enough to see who is to blame. )

        ( oh, and I'm not saying don't crucify the writers of such things. They should all be strung up, right beside the spammers )

        • Isn't it more important that MS SQL Server shouldn't be exposed to the Internet directly in the first place? There are no public SQL servers that I can think of, and no reason for them besides maybe some open testing and compatibility labs. Port filtering isn't a panacea, but it's the second line of defence (after egress filtering by everybody). There's no reason an SSH port forward or a VPN can't be used, hell, even a GRE tunnel with no encryption, instead of having the port open on the Internet. The same goes for many other packages; the number of open MySQL ports I've seen is disgusting.
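          The tunnelling suggested above can be sketched with a plain SSH local port forward; the hostnames, user, and ports below are placeholder assumptions, not anything from the thread:

```shell
# Forward local port 1433 (MS SQL Server's TCP port) through an SSH
# bastion to a database host that is never exposed to the Internet.
# All names here are illustrative placeholders.
ssh -N -L 1433:db.internal.example:1433 admin@bastion.example

# Clients then point at localhost:1433 and their traffic rides the
# encrypted tunnel instead of hitting an open port on the public net.
```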
        • by Fulcrum of Evil ( 560260 ) on Sunday February 02, 2003 @06:52PM (#5212129)

          Especially in this case, as there WAS a fix... just no one bothered to apply it.

          It's been mentioned before, but it bears repeating: some subsequent security patches remove the fix.

          Further, Microsoft has a track record of releasing security patches that break or touch unrelated stuff, roll back other fixes, give Bill admin rights on your computer, or just plain hose your box. Because of this (and the volume of patches), keeping up with security on MS boxes is not a task to be taken lightly. You test and test and schedule downtime, and it still bites you. This is the root of this particular thornbush.

        • Please shut up. If you make a product easy to set up and administer, don't be surprised when incompetents, or people who aren't dedicated IT dorks, end up responsible for it.

          The problem is poor design. If you design easy to use software, it should be easy to use safely.
    • by Bedouin X ( 254404 ) on Sunday February 02, 2003 @07:10PM (#5212198) Homepage
      They (MS) know better than anyone that applying an SQL Server hotfix is a royal pain in the ass. They just modified the initial Slammer vulnerability patch so that it has an installer. Before that you had to stop the server, back up the files, copy the new files manually into their respective directories, and then run a couple of queries in Query Analyzer.

      This, and MS's reputation for having to patch patches (sometimes 2 or 3 times), is why people don't jump at the chance to apply one of those damn things. It took this incident for them to make installing a simple SQL Server hotfix less than a 25-minute job.

      I also downloaded SP3 4 times, and every time I tried to run setup, I got a "setupsql.exe can not be found" error. I STILL don't have SP3 on my SQL server, but it's firewalled anyway so I'm not totally naked.
  • oh, wait, that's a different effect.
  • Next strike (Score:2, Interesting)

    by Blackbox42 ( 188299 )
    It seems to be every 3 months, or every change of season. I'm betting on an IIS bug in March.
  • by zerosignal ( 222614 ) on Sunday February 02, 2003 @03:22PM (#5211066) Homepage Journal
    ...why ATMs were affected? I've seen this mentioned in a few articles but I didn't think banks would use the Internet to connect ATMs on their systems.
    • by Anonymous Coward on Sunday February 02, 2003 @03:24PM (#5211076)

      ATMs are not connected to the internet, but to the bank's private network, which, yes, runs over TCP/IP. So a computer that got infected and had access to the internal network would be enough to crash those reachable ATMs.

      Brett Glass : http://www.brettglass.com

    • by MoTec ( 23112 ) on Sunday February 02, 2003 @03:24PM (#5211080)
      Many ATMs use a phone line to connect to the network to run the transaction so if the phone lines are down so is the ATM. Some use leased lines or other communication technologies but a POTS line does the job and is often cheapest.
    • Maybe those ATM's are running Microsoft's SQL Server in the backend? Seriously, I've seen pics of ATM's that got the BSOD.
      • Seriously, I've seen pics of ATM's that got the BSOD.

        It's possible, I guess, that MSSQL would be in the backend, though Oracle is more likely; and the ATMs with BSODs have got to be the touchscreen GUI, IMO.

    • Could someone also explain why releasing the same virus on a weekday would have blocked access to 911?

      Sounds a lot like unfounded scaremongering by people who should know a lot better to me. 911 not only runs on a separate network (telephone != internet), but is just as busy on a Saturday (if not more so) as on weekdays.

      In fact, sounds like the Mitnick fiasco, where any knowledge tangentially-related to the 911 system was assumed to have the power to prevent emergency calls from getting through.

      How can journalists make such claims without losing their jobs?

      • by Blkdeath ( 530393 ) on Sunday February 02, 2003 @04:23PM (#5211401) Homepage
        Sounds a lot like unfounded scaremongering by people who should know a lot better to me. 911 not only runs on a separate network (telephone != internet),

        Actually, 911 service runs on the PSTN, as does a very large portion of the Internet. The two (Internet and PSTN) are very inter-twined, as are the vast majority of corporate (including bank) networks.

        Remember, it was us geeks who convinced the suits that the Internet was the way to travel in the 21st century. Now it's our job to support that claim by providing them with a more reliable Internet.

    • by DJayC ( 595440 ) on Sunday February 02, 2003 @03:50PM (#5211233)
      It is unclear in the article if they mean ATM as in bank ATM's, or ATM as in asynchronous transfer mode networks. I'm sure the author doesn't even know in which context ATM is used.

      Just a thought *shrugs*
      • by JediTrainer ( 314273 ) on Sunday February 02, 2003 @05:26PM (#5211701)
        Yes. ATMs as in bank ATMs. Cash machines.

        I don't know about most people, but the outage affected customers of CIBC Bank in Canada, who couldn't withdraw their cash from many machines throughout Ontario (the news said Toronto only, but it affected some of my family and friends in other areas too).

        Being a customer of a different bank (TD Canada Trust), I was not affected.
    • by LostCluster ( 625375 ) on Sunday February 02, 2003 @04:06PM (#5211302)
      Just because something isn't technically on the Internet, doesn't mean it is on a completely walled-off pipe.

      Many stand-alone ATM structures use a satellite connection from Hughes Network Systems to securely connect to their company's network. But those are the same Hughes Network Systems birds that power the DirecWay and DirecPC consumer services. So, if for some reason there was a sudden surge in Internet traffic (such as a worm randomly trying to infect IP addresses without caring whether or not there is a machine capable of being infected on the other end), the ATM might not be able to get enough satellite time to complete a transaction without timing out, resulting in a "lost my connection" message on the ATM.

      Think of it as a VPN tunnel over a network that is used partly for Internet, and partly for other things... if the Internet goes crazy, it affects those other things too.
    • by ergo98 ( 9391 ) on Sunday February 02, 2003 @04:08PM (#5211311) Homepage Journal
      My presumption is that they were running ATM VPN traffic over standard IP connections (basically like running an ADSL line to the site). This would affect anyone who is running a system critical service over the shared internet.

      Having said that, if they were affected then it demonstrates really poor planning: Any critical service should have QoS guarantees by their provider (which should have peer QoS guarantees, and so on), so if the ATM requires a minimum of x bandwidth, then the provider will guarantee that all other traffic will be throttled to accommodate it, building more bandwidth (fibre, etc) if they cannot accommodate all of their QoS guarantees at once. It most certainly seems ridiculous to even ponder things like 911 going down because of something like this.

      Let me put it another way: Many telcos share the same data lines for both voice traffic (long distance calls, etc), and Internet IP traffic: Internet traffic cannot take up so much bandwidth that it impedes the voice data, as the telco will always throttle it accordingly to ensure that voice always gets through with 100% throughput. These same sorts of guarantees hold true (or should hold true) for all other system critical type services, and it is brutal irresponsibility to do anything else. When some kid with a ping program can take down your system then it points out a pretty big flaw.
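      The throttling guarantee described above amounts to strict-priority allocation: serve the guaranteed class first, give best-effort traffic only the leftovers. A toy sketch (all capacities and demands are made-up illustrative numbers, not anything from the thread):

```python
# Hedged sketch of a strict-priority bandwidth guarantee: the QoS-guaranteed
# class (voice, 911 trunks, ...) is served first, and best-effort Internet
# traffic only ever gets the leftover capacity. Numbers are illustrative.

def allocate(capacity, guaranteed_demand, best_effort_demand):
    """Return (guaranteed_served, best_effort_served) under strict priority."""
    guaranteed = min(guaranteed_demand, capacity)
    best_effort = min(best_effort_demand, capacity - guaranteed)
    return guaranteed, best_effort

# Quiet day: both classes fit on a 100-unit link.
print(allocate(100, 30, 50))
# Worm outbreak: best-effort demand explodes, but voice is untouched.
print(allocate(100, 30, 5000))
```

      Real QoS machinery (DiffServ, MPLS traffic engineering) is far more nuanced, but the invariant the poster describes, that voice always gets through at 100% while Internet traffic is squeezed, is exactly this min() ordering.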
      • That's not quite true. The PSTN has a limited capacity, and those limits assume that not everybody will pick up the phone all at once. On 9/11/01, in parts of the country far away from Washington and NYC, there was no major failure of any local telephone equipment, yet there were many calls that could not be completed because there was a higher volume of phone calls than the system could handle.

        If an infected computer is on a dial-on-demand modem setup, the worm will spew non-stop Internet traffic, and the router will respond by firing up the Internet connection and tying up the phone line. If overall phone usage goes up a noticeable amount, that congestion could make 911 a "can't get there from here" problem.

        But wait, 911 is supposed to be a priority call that can kick other, less-important calls off the system to clear the way. So most communities have nothing to worry about here... then again, if we were in a perfect world, worms wouldn't be a problem at all.
    • Most banks don't, but Bank of America does. Washington Mutual also. Do a google search. Bank smart.
    • by Anonymous Coward on Sunday February 02, 2003 @04:28PM (#5211429)
      My assumption was that they were talking about ATM (Asynchronous Transfer Mode). Many ATM networks were significantly hurt by this because routers and switches that utilize SVCs kept building and rebuilding circuits.

      The whole point of this problem can be simplified to bad code and bad base installs. I keep hearing people say it's not MS's problem. I work with a wide variety of products in the networking (L2 & L3+ WAN) and systems world. Any one of the vendors that I deal with would lose serious market share if their products were found to be vulnerable to something like this and they simply patched it but didn't change the base install to be "secure".

      Let's start by taking an example of a comparable product -- PostgreSQL. We all know that a recent patch to this product fixed a possible remote exploit. Certainly the bug shouldn't have been there, and it was something that should be patched. However, the point is that the PostgreSQL base install doesn't even allow remote connections. In fact, the config file tells you that even without remote connections allowed, it's still probably a liberal configuration that should be locked down more.

      I'll buy that MS has a large market share and that occasionally something will get through the normal protections; however, the base installs should be locked down. Why aren't they? It's a question that is very simple to answer.

      MS sold the Internet community a grand story. In this story, running a server is a simple task that anyone can do. For this story to be believed, they have to have the base install do everything out of the box, without any special configuration which might require a real administrator, DBA, network design specialist, etc. If the products were actually locked down like they should be (like most of the competing products are), MS would have a bigger job in support calls, because 80% of the non-administrators who work with MS platforms would be ill-equipped to handle the proper configuration of the server to get it to work.

      I have a product that I use on linux that was written with this kind of security in mind. The config file is riddled with lines like: die "you didn't go through your config file!". If you don't completely configure the product, it keeps dying on startup. This is how products should be released--locked down and set to die if the configuration is not explicitly setup by the admin with them being aware of the dangers to each option they set back on.

      I also hear a lot of people complaining that people didn't install the patches, so again I go back to the point of the base install. If the product's base install were locked down, far fewer databases would have been open even if they were unpatched. Seriously, let's be reasonable: why should an SQL server open ports by default to anything except maybe 127.0.0.1? Many databases now only need one or two subnets open anyway, since their database interaction goes through an application server (often a web server) which serves as the db client for the users, and quite a few databases on lower-end systems (where most of the sysadmins who don't know how to lock things down are) reside on the same box as the app services.
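      A minimal sketch of the "closed by default" posture this comment asks for. The port numbers are the real SQL Server ones (1434/udp is the resolution service Slammer hit), but the firewall rules and file locations are illustrative assumptions, not a vendor-documented procedure:

```shell
# Drop SQL Server's ports on every interface except loopback, so only
# local processes can reach the database. Adapt to the actual host.
iptables -A INPUT -p udp --dport 1434 ! -i lo -j DROP   # resolution service
iptables -A INPUT -p tcp --dport 1433 ! -i lo -j DROP   # query port

# The PostgreSQL equivalent the comment praises is a config default,
# e.g. in postgresql.conf:
#   listen_addresses = 'localhost'
```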
  • by Anonymous Coward
    Then when they leave things unpatched and it happens again, you can yell, RTFM! STFU, Newb!
  • If they ever catch the guy that did this, I'm sure the news will give us all the "let's throw him in the Slammer" puns we can stomach.
  • I think we ought to make virus-protection code public and government funded.

    I know way too many people who can't afford 50 bucks on a virus scanner or decent firewall software in College, and I saw Nimda infections up until the end of last year.

    If people could get this type of thing for free, with the money ultimately ensuring the safety of the net at large, I think it should be done.
    • by damiam ( 409504 ) on Sunday February 02, 2003 @03:35PM (#5211138)
      I think we ought to make virus-protection code public

      It [kernel.org] is [freebsd.org].

      who can't afford 50 bucks on a virus scanner or decent firewall software

      Then don't pay [com.com] 50 bucks.

      I saw Nimda infections up until the end of last year

      Norton and McAfee both provided freely available Nimda removal tools. Besides, if you can afford IIS, you can afford a virus scanner.

      • I run a FreeBSD server for serving Windows users through Samba, and occasionally an infected Windows box drops malicious emails and exes all over my shared filesystem. You Unix zealots seem to brag about BSD not being as susceptible. Need I remind you of Slapper [f-secure.com], which only infected Linux/Apache machines, although the same vulnerability existed on any system running Apache. What we (or at least, I) need is a Unix-based virus scanner that can prevent the spread of viruses for all platforms.
    • by matth ( 22742 ) on Sunday February 02, 2003 @03:36PM (#5211140) Homepage
      It *is* free http://www.grisoft.com (AVG)
    • by Scarblac ( 122480 ) <slashdot@gerlich.nl> on Sunday February 02, 2003 @03:38PM (#5211156) Homepage

      I think we ought to make virus-protection code public and government funded.

      That doesn't help with new viruses, like the one this story is about.

      The problem is with patching. People don't install the available security patches. This problem had been known about for half a year.

      And some people refuse to install Microsoft's newer service packs, because of the changed license on them, which has some pretty gross clauses in it. I think that's almost criminal behavior by MS - "yes, we fixed the fatal bug in the software we licensed to you, but to get the patch you have to agree to some new random clauses - say, give us full access to your computer".

      On the other hand, if they had that full access, I think that at least their service packs would be installed, and these attacks wouldn't be so successful.

      But I'll just stick with Linux, myself :-)

      • That doesn't help with new viruses, like the one this story is about.
        Newer versions of Norton AntiVirus contain heuristics to detect virus-like behavior. But I don't know if an AV would have helped against Slammer; since it never even touched the disk, there were no files to scan. Can AV programs scan RAM for potential worms?
        • Yes (Score:3, Informative)

          by Anonymous Coward
          , very well, thank you.

          And not only that, nonprofits and edu can get the server version of Norton Anti-Virus for FREE from techsoup.com.

          So it's doubly stupid that any college got hit.
      • And some people refuse to install Microsoft's newer service packs, because of the changed license on them, which has some pretty gross clauses in it.

        There's also the problem that a "service pack" might alter things you didn't want to change in the process of fixing any bugs.
    • by MadocGwyn ( 620886 ) on Sunday February 02, 2003 @03:39PM (#5211164)
      There are some companies that offer free services.

      http://housecall.trendmicro.com

      Free Java-based scanner; works well. I've used it many times when I'm out fixing someone's computer and they don't have a decent scanner.

    • My school bought a licence of McAfee that allowed everyone attending classes to have a legit copy.

      I think you're right that college students should be given virus and firewall software for free, but I think it should be the responsibility of the network they connect to the Internet through, most likely the school they attend. Perhaps ISPs should be picking up a bit of the bill for "Internet-only virus" scanners.
    • Yes, but once people hear about "government software", the most likely reaction will be the tinfoil hat style response. Granted, the source will be public, but will Joe Undergrad or Jane TA trust the government enough to have government software on their machine while they are out protesting against the possibly imminent Iraqi war?

      People don't like the government to butt into their lives (unless it directly benefits them). Unless the project was funded by the government but in the hands of another body, I don't see it going anywhere.

      -Matt
    • Most virus products couldn't have stopped Slammer. It never wrote to disk; stopping it would have required something different again.

      I think some more thought about how we build and patch software needs to happen.

      Virus scanners are a crutch.
    • I'm a PC tech at my college, and for the last few years we've purchased a site license for Norton Antivirus. Students are EXPLICITLY told their first day here that they need to go to Computer/Network Service's website and download the virus scanner, AND keep it up to date. (We had some problems with the download a little while ago, but it's since been repaired and highly advertised.)

      So EVERYONE has access to a program that installs easily, is FREELY downloadable, and requires only minimal maintenance (update your damn definitions once in awhile.) And yet, we still have Nimda and Klez flying around. Probably right now, there are Nimda infections running around on our network.

      People can be so incredibly dense when it comes to this stuff. We even have a virus scanner sitting on the mailserver, and STILL this shit abounds.

      And Klez still manages to find my email address once in awhile in some poor dope's addressbook, sending it around the world. Fabulous. School networks are a foul, foul microcosm that provide fertile breeding grounds for this shit.

      The biggest problem is, you can't MAKE people take basic security precautions. Some poor stupid college freshman who can't download a goddamned virus scanner sends out a fresh batch of Nimda every day. Should there be action taken against him?

      I'd love to see this stuff government-mandated. I really would. But I just don't know how possible it is in today's climate. I'd be overjoyed to see some semblance of security restriction imposed upon companies like Microsoft, who wave a patch around saying "Our ass is covered! We didn't do it!" when 1) they didn't patch their OWN systems and 2) the patch breaks everything else.

      But will it HAPPEN? Does government have the understanding of technological matters to make this happen without impinging more on our freedoms than they already do? I'm not feeling too reassured right now.
  • This is nothing yet (Score:5, Interesting)

    by Scarblac ( 122480 ) <slashdot@gerlich.nl> on Sunday February 02, 2003 @03:26PM (#5211091) Homepage

    The scariest thing is actually that this kind of damage is being done by a worm that doesn't actually do anything except spread itself (as far as I know, anyway).

    Damage would be much worse if these things started cleaning hard drives after the action (yeah yeah, backups - just like all your databases always have the latest patches, right?)

    • by travail_jgd ( 80602 ) on Sunday February 02, 2003 @03:49PM (#5211226)
      Damage would be much worse if these things started cleaning hard drives after the action (yeah yeah, backups - just like all your databases always have the latest patches, right?)

      I would think that damage would be worse if the worm just sat quietly for a few weeks (or even months), slowly corrupting data in the database. At that point, backups may not be usable; at some point either the last backup media has been recycled, or new entries to the database would be too expensive to re-enter.

      A "stealth" worm, whose primary focus is remaining undetected rather than consuming huge amounts of resources would be a lot more devastating than an obvious one.
      • There are probably many such stealth worms crawling around right now. We just don't notice them because they're, well, stealth worms. Loud worms probably end up helping us by rubbing our noses into vulnerabilities that are being exploited far more malevolently by other worms.

        (On the other hand, writing a stealth worm is probably harder than it looks. Some sites carefully scrutinize their network traffic, and it only takes one of them to spot you. But would they tell anyone else?)

    • The scariest thing is actually that this kind of damage is being done by a worm that doesn't actually do anything except spread itself (as far as I know, anyway).

      On the contrary, in addition to spreading itself, it launches spoofed keepalive packets to SQL Servers which then bounce around between the servers indefinitely.

      That's how it managed to have such an impact on the Internet.
    • In other words, the worm iterations are like tribbles...
  • by Anonymous Coward
    Vincent Weafer, director of the computer-security outfit Symantec's Anti-Virus Response Center (SARC), says that the likelihood that a Slammer-style worm will hit at a more vulnerable moment is high.

    Wow, even SARC's director thinks a worm attack is likely? If someone that unbiased thinks so, I'd better upgrade my antivirus software now!

    I'm glad there's a "Post Anonymously" option--I only wish the "Post Posthumously" option were still there.
  • by Anonymous Coward on Sunday February 02, 2003 @03:29PM (#5211104)
    There are too many lazy admins out there, so people should counter the bad worms with good worms. Yep, it's not all that ethical, but it's got to be better than crossing your fingers.

  • by DJ Rubbie ( 621940 ) on Sunday February 02, 2003 @03:29PM (#5211110) Homepage Journal
    If people at least patched their systems, things like this should never happen; but Microsoft should have made it secure in the first place to prevent this from happening. Face it: if someone can create a worm that somehow causes every connected host/computer to send out 300-odd bytes to a random port at a random IP every millisecond or so, the net itself will be full of noise.

    Or you can just physically locate all the major routers/backbones of the net and somehow disable them, physically... yeah, you, get up and demonstrate how vulnerable the net is!
    • by dolson ( 634094 )
      You do realize that you're talking about Microsoft, right? The same company that released a web browser that would execute code so insecurely that it could wipe entire hard disks - A FRICKIN' WEB BROWSER!
  • by ksheka ( 189669 ) on Sunday February 02, 2003 @03:29PM (#5211111)
    When is the next Microsoft product being released?
  • by aaronhurd ( 630047 ) <slashdot&aaronhurd,com> on Sunday February 02, 2003 @03:29PM (#5211112) Homepage

    In my opinion, there are two ways that people will react to the problem of exploits in computer software:

    In the short term, I expect that the most recent attack will provide a huge sales boost to pre-packaged "security solutions" like firewalls, virus protection, etc. and will probably be used as an extra card that the government can play when arguing for implementing a comprehensive Internet monitoring system. Of course, both of these things are unfortunate, as neither one promotes security and the latter gives the government way too much power . . .

    Long term, the best protection against exploits in computer software is a shift in attitude about where software companies should place their priorities. At present, it is more lucrative for companies to push a piece of software out the door and sell upgrades than to spend extra time developing secure software. Only a strong fiscal mandate from corporate customers will change the way software companies do business . . . and I hope that mandate comes soon.

    • by GlenRaphael ( 8539 ) on Sunday February 02, 2003 @06:11PM (#5211954) Homepage
      In the short term, I expect that the most recent attack will provide a huge sales boost to pre-packaged "security solutions" like firewalls, virus protection, etc.

      Also, companies with hundreds or thousands of machines to administer will probably start buying large-scale third-party automated patch deployment systems. A system like Everguard [dvpm.com] or Patchlink [patchlink.com] or Bigfix [bigfix.com] will let you know where there are unpatched vulnerabilities on your network, help you patch them, and check that they've been patched.

      Most of these systems are cross-platform and at least one uses a linux-based [linuxjournal.com] server.

  • For instance, due to the sheer volume of overflow traffic, some outfits running Linux-based systems in the same data centers as Slammer-infected machines also lost access to their non-Microsoft systems,

    This is like stating that the folks at a ballgame who bought popcorn, instead of the hotdogs everyone got food poisoning from, were affected as well due to restroom crowding. Sheesh.

  • Monocultures (Score:2, Insightful)

    It's just the problem of monocultures! Nothing less and nothing more...
  • It isn't the Internet that is vulnerable, it is Microsoft products which are vulnerable. Those products in turn affect other systems due to the sheer number of computers running MS products. Start holding MS accountable for the bugs in their products and everyone benefits.
  • by Istealmymusic ( 573079 ) on Sunday February 02, 2003 @03:39PM (#5211160) Homepage Journal
    This was posted on BugTraq:
    From: "Nicholas Weaver"

    Date: Fri, 31 Jan 2003 6:09 PM
    To: bugtraq@securityfocus.com
    Subject: The Spread of the Sapphire/Slammer SQL Worm
    We have completed our preliminary analysis of the spread of the Sapphire/Slammer SQL worm. This worm required roughly 10 minutes to spread worldwide making it by far the fastest worm to date. In the early stages the worm was doubling in size every 8.5 seconds. At its peak, achieved approximately 3 minutes after it was released, Sapphire scanned the net at over 55 million IP addresses per second. It infected at least 75,000 victims and probably considerably more.

    This remarkable speed, nearly two orders of magnitude faster than Code Red, was the result of a bandwidth-limited scanner. Since Sapphire didn't need to wait for responses, each copy could scan at the maximum rate that the processor and network bandwidth could support.

    There were also two noteworthy bugs in the pseudo-random number generator which complicated our analysis and limited our ability to estimate the total infection but did not slow the spread of the worm.

    The full analysis is available at

    David Moore, CAIDA & UCSD CSE
    Vern Paxson, ICIR & LBNL
    Stefan Savage, UCSD CSE
    Colleen Shannon, CAIDA
    Stuart Staniford, Silicon Defense
    Nicholas Weaver, Silicon Defense and UC Berkeley EECS

    A must read for anyone who wants to know about this worm. Its impact was huge: 90% infection of all vulnerable hosts in 10 minutes. Even some E911 systems were knocked out. Internet routers at large were saturated, with 120ms latency. Nearly two orders of magnitude faster than Code Red. All this with a simple PRNG scanning algorithm.
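The figures quoted in the analysis above (an 8.5-second early doubling time, roughly 75,000 vulnerable hosts, saturation about 3 minutes in) can be sanity-checked with a toy logistic-growth model. This is only a sketch under those assumed figures, not part of the CAIDA analysis:

```python
# Toy logistic model of Sapphire/Slammer's spread, using the numbers
# quoted in the analysis above: ~8.5 s early doubling time and
# ~75,000 vulnerable hosts. An illustration, not a measurement.
import math

DOUBLING_TIME = 8.5              # seconds, early-phase doubling time
N = 75_000                       # approximate vulnerable population
r = math.log(2) / DOUBLING_TIME  # equivalent exponential growth rate

def infected(t, i0=1):
    """Logistic growth: exponential at first, saturating at N hosts."""
    return N / (1 + (N / i0 - 1) * math.exp(-r * t))

for t in (60, 120, 180, 600):    # 1, 2, 3 and 10 minutes
    print(f"t={t:4d}s  infected ~ {infected(t):,.0f}")
```

With these parameters the curve saturates within a few minutes, which is consistent with the reported 3-minute peak.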
  • by bkontr ( 624500 ) on Sunday February 02, 2003 @03:40PM (#5211172) Homepage Journal
    MS products are too buggy for the Internet. Even when MS comes out with patches, sysadmins are extremely reluctant to apply them (even at Microsoft) for fear that the patch will cause more problems (i.e. BSOD) than it fixes. Remember, Microsoft got hit by Slammer hard because it didn't install its own patches. Was Microsoft waiting for customers to beta test their software before they even tried it themselves? Plus, MS SQL Server is not the only MS product that Slammer can infect... when are people going to hold Microsoft accountable for its lack of security and general poor coding?
  • Scary stuff, kids (Score:5, Interesting)

    by Saint Aardvark ( 159009 ) on Sunday February 02, 2003 @03:43PM (#5211185) Homepage Journal
    Posted to Bugtraq yesterday was a quick summary of a study of the Slammer worm and its effects. Quote:

    This worm required roughly 10 minutes to spread worldwide, making it by far the fastest worm to date. In the early stages the worm was doubling in size every 8.5 seconds. At its peak, achieved approximately 3 minutes after it was released, Sapphire scanned the net at over 55 million IP addresses per second. It infected at least 75,000 victims and probably considerably more.

    I read that and my jaw just dropped.

    This worm, from what I've read (these aren't my conclusions; I'm not that smart), did two very interesting things. The first is that it used a single UDP packet to spread: no waiting around for the three-way TCP handshake, no hanging waiting for a reply, just send and move on to the next one. From what I understand, that's pretty new. Second, it caused most of its damage not by trashing filesystems or anything like that, but just by spewing *huge* amounts of traffic.

    The first is interesting because as a tactic, it'll almost certainly be copied. The second is interesting because it probably won't be copied.

    Well worth your time; it's fascinating -- and frightening -- reading. Get it here:

    http://www.caida.org/analysis/security/sapphire [caida.org]
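The fire-and-forget UDP tactic described above can be illustrated harmlessly: a UDP sender needs no handshake and never waits for a reply, so it is limited only by CPU and outbound bandwidth. The port and payload below are placeholders for illustration, not anything from the worm:

```python
# Fire-and-forget UDP: one sendto() per target, no connection setup,
# no reply expected. Address and payload are placeholders only.
import socket

def spray(targets, payload=b"example-payload"):
    """Send one UDP datagram to each target without waiting for replies."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    for addr in targets:
        s.sendto(payload, addr)   # no handshake, no ACK, no blocking wait
        sent += 1
    s.close()
    return sent

# Harmless demo: one datagram to a hypothetical local port.
print(spray([("127.0.0.1", 9999)]))
```

Contrast this with TCP, where each probe would block through connection setup (or a timeout) before moving on; that difference is what made the scan rate bandwidth-limited rather than latency-limited.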

  • I'm curious... (Score:3, Interesting)

    by GreatOgre ( 75402 ) on Sunday February 02, 2003 @03:47PM (#5211215)
    If we were to begin attacking either Iraq or North Korea, what amount of damage could they do by launching worms like this towards the US? Furthermore, what are the chances that they are busy looking for more exploits like this? After all, the US government does use a lot of M$ software.

    Just my two cents though.
  • Now or Never (Score:3, Interesting)

    by PsiFireWhite ( 640596 ) on Sunday February 02, 2003 @03:48PM (#5211217) Homepage
    Give it about two weeks and everyone will forget what happened. Seems as though every time there is a net problem that affects 90% of the population it's big news and "a must-fix problem." But we still have virii. Nothing has changed. So unless something is proposed in about 14 days, the masses will forget about it and it will lose the panicky fervor that disturbing the masses unleashes.
  • Likelihoods (Score:5, Insightful)

    by Neophytus ( 642863 ) on Sunday February 02, 2003 @03:49PM (#5211227)
    Likelihood there will be another one: very high
    Likelihood that it will affect a Microsoft product: pretty high
    Likelihood that it will exploit a flaw that was fixed the summer before: almost certain

    As far as I'm concerned, those with low-maintenance co-located servers should pay more attention to security bulletins, so that when a major patch does come out they can apply it; then, when something does hit their several-year-old computer, it won't be thrashed to death by modern worms.
  • The net is pretty flexible, these worms are a part of a cycle of security.

    I am certain that there is a proportional relationship between the size of a worm's impact and the time until the next big virus/worm outbreak. Basically, after a worm strikes, people suddenly become a lot more security conscious, but this wears off after about 6 months (which is why we get roughly 1 or 2 of these events a year).

    I also can't help thinking that a massive attack capable of bringing about a "virtual net shutdown" (something that hasn't really happened yet) would cause so much trouble that security would become such a focus that measures would be taken to ensure that worms can't flourish on the net (mandatory use of firewalls? OSes that update themselves?).
  • by Xacid ( 560407 ) on Sunday February 02, 2003 @04:00PM (#5211278) Journal
    When pogs become the next big thing. Duh.
  • ... that it only spread and did no further damage was not the worst that could have happened. And I think this is only the beginning.

    If efforts continue to go into discovering who did it, instead of solving the real problem (widespread common vulnerabilities in zillions of interconnected computers), the next worm will probably properly be called a "Warhol worm".

    You need only one capable worm writer who gets pissed off or tired of life or whatever, and losses could be huge.
  • by damieng ( 230610 ) on Sunday February 02, 2003 @04:04PM (#5211293) Homepage Journal
    Any sysadmin that leaves a port open to a database server is a bloody idiot regardless of which software you are using.

    If you really must allow remote SQL access, do it over a VPN; that's what they're there for.

    If on the other hand you are providing data for your web site then either lockdown the db software to only accept connections on localhost or even better just don't allow it through your router/firewall.

    It's about time these security-alert companies included some sort of rating: "a sysadmin of the following competence level would have prevented this from being an issue".
    • by alispguru ( 72689 ) <bob,bane&me,com> on Sunday February 02, 2003 @05:47PM (#5211816) Journal
      ... because they had installed other MS software that installed MSDE 2000 (Microsoft SQL Server Desktop Engine) behind their back. There have been similar issues in the past with IIS being installed for "personal web sharing" and being up and running by default. These were in software packages for end-users (note the "Desktop" in the name above), which shouldn't require major admin competence to install and keep patched.

      To be fair, Linux distros used to install Apache and leave it up and running by default. I don't know of any that do that any more - Linux distro developers learn from their mistakes.
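The "only accept connections on localhost" advice above can be sketched in a few lines: binding a service to 127.0.0.1 makes it unreachable from other machines regardless of firewall state. The port number here is arbitrary:

```python
# Binding a listener to the loopback interface means remote hosts
# cannot reach it at all, with or without a firewall in front.
import socket

def bind_local_only(port):
    """Listen on 127.0.0.1 only; remote machines cannot connect."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))   # not "0.0.0.0", which exposes every interface
    s.listen(5)
    return s

srv = bind_local_only(15434)
print(srv.getsockname()[0])   # the address actually bound: 127.0.0.1
srv.close()
```

A database that only serves a local web front end never needs more than this; anything wider belongs behind a VPN or an explicit firewall rule.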
  • by jhines ( 82154 ) <john@jhines.org> on Sunday February 02, 2003 @04:06PM (#5211301) Homepage
    What is an acceptable update frequency for something like a database (in this case)?

    Of course patches should be made available asap, but I'm talking about more routine items.

    IMHO, a comprehensive service pack, one that rolls up all fixes to a certain date and is tested, is best. This needs to be done semi-annually or annually for current products.

    But what are some other views on how often you should take a working system and update it?
  • by DrSkwid ( 118965 ) on Sunday February 02, 2003 @04:13PM (#5211337) Journal
    1. Put eggs in Microsoft basket
    2. ????
    3. Loss

  • Never thinking I'd be one to say this, but this appears to be a kinda weak way to pick on Microsoft today. Now, don't get me wrong... I LOVE trashing Microsoft. It brings the worst Linux and Windows fanboys out to raise pure hell defending their favorite OS and/or decision in what to run on their hard-earned hardware. You get to read tons of emotion-filled posts with little to no fact checking, then read the replies from clueful people that tear those posts apart. This story just feels kinda cheap is all. Like beating a stable full of dead horses. It also only serves to whip up the fanboys and make them that much more zealous in their defense of their pet OSes, and increasingly silly in their replies. If the goal is enlightenment for the masses, we are missing the mark.
  • Hrm (Score:2, Insightful)

    by Isbiten ( 597220 )
    Who's to blame? MS, for making patches that sometimes make things worse, so that most sysadmins wait a while before installing them?

    Or is it all those sysadmins who didn't install the patch because of annoying reboots and problems with the new patch?
  • Regulation (Score:5, Insightful)

    by kahei ( 466208 ) on Sunday February 02, 2003 @04:22PM (#5211395) Homepage

    Thing is, we're dealing with an industry (the IT industry) that does not have the safety regulations and standards common in older sectors. There is no standard saying what steps must be taken to prevent your own systems damaging others, and no regulatory body to enforce compliance. Worms like this are creating pressure to bring IT into line with the more, hm, predictable business areas.

    Over time, IT, like other industries, will move toward public safety standards such as we see in transport, manufacturing, finance, and all those *boring* businesses. It's a necessary part of the evolution of this industry from backrooms to ubiquity, I guess.

    In 20 years time we'll probably see the government fining companies that don't patch their servers to a certain standard, just like we see airports and tire makers being fined now.

    This just reinforces what I've been thinking for a while now... time to move away from IT itself and into IT law/management/business...
  • Film at 11! (Score:3, Insightful)

    by kisrael ( 134664 ) on Sunday February 02, 2003 @04:23PM (#5211399) Homepage
    Death of the Internet! Film at 11!

    For all the publicity it gets, and tons of anecdotes that slammer really threw some places for a loop, it does seem that the system is pretty robust.

    But OFFLINE BACKUPS seem to be more and more of a must. Slammer didn't have much of a payload, but something like this could, and any system you're responsible for had better have plans...
  • by mr_exit ( 216086 ) on Sunday February 02, 2003 @04:26PM (#5211423) Homepage
    I thought the whole reason worm writers release their creations on the weekend is so they have the best chance to spread before sysadmins wake up and realise what is happening.

    If it WAS let out during business hours, would it have gotten so far? Would it have caused much damage at all?

    • I thought the whole reason worm writers release their creations on the weekend is so they have the best chance to spread before sysadmins wake up and realise what is happening.

      Actually, the worm "armed" its attack before it "struck". It infected a large number of machines silently, without much noise, and at the given time, it opened up the fire hoses on the Net..

      I haven't heard much mention of this anywhere, but if you graph the attacks (if you had a properly configured Snort, for example) you can see the attack curve rise to its maximum in just under 20 minutes.

  • by caluml ( 551744 ) <slashdot@spamgoe ... minus herbivore> on Sunday February 02, 2003 @04:36PM (#5211460) Homepage
    "Banking services, which encrypt their data traffic over the public Internet, might have ground to a halt."

    Sheesh. If you use VPNs over the internet, you're getting WAN connectivity and 95+% reliability on the cheap. But it's a trade off.
  • by vanyel ( 28049 ) on Sunday February 02, 2003 @04:41PM (#5211478) Journal
    ...the Slammer worm demonstrates just 'how vulnerable the Internet remains'

    No, it demonstrates just how vulnerable a number of sites on the Internet that ought to know better are. "The Internet" stayed running just fine, though it maybe slowed down a bit in places. I certainly didn't notice any reduction in spam.

  • patches and rips (Score:5, Interesting)

    by urbazewski ( 554143 ) on Sunday February 02, 2003 @04:41PM (#5211479) Homepage Journal
    Okay, this is a bit offtopic, but I've been scanning the comments on various stories about the Slammer virus and have noticed that, according to many many posters, security patches can introduce new bugs in the software that cause it to behave erratically.

    My offtopic question is: why doesn't this happen with Linux ? (or does it happen with Linux?)

    I don't use Linux and I'm not a bona fide geek (I've never had 'root' access, which seems to be one of the key requirements --- that may change now that I use Mac OS X), and I've always wondered why using fixes, new functions, patches, whatever, written by numerous different people hasn't turned Linux or other open source software into a non-functioning morass of code. I read Eric Raymond's The Cathedral & the Bazaar [oreilly.com] but I didn't really feel like he answered the question, other than referring to the gospel of Linus: "with enough eyes, any bug is shallow."

    Isn't an operating system more complicated (or at least more fundamental) than an application? Why doesn't (or how often does) fixing one bug in Linux create two new ones?

    blog-O-rama [annmariabell.com]

    • why doesn't this happen with Linux ? (or does it happen with Linux?)

      Like other posters said, this does happen with Linux, but not as much. There are reasons why.

      Many good Open Source projects will usually separate their releases into two branches: stable and experimental. For example, in the Linux kernel, if the second number is even (x.2.x or x.4.x), then it is a "stable" release. If the second number is odd (x.3.x or x.5.x), then it is an experimental release.

      Most of the time new features are only put in the experimental release. There are features officially classified as experimental in the stable release, but you can only use them (or even see them) if you check the "prompt for development or incomplete drivers" option. There have been mishaps where a feature was added in the middle of a stable release and caused problems. One such example is the changes to the virtual memory system in about 2.4.4.

      Another reason this doesn't happen as often is many of the serious open source programmers do everything they can to prevent/fix bugs and are paranoid about security. Microsoft doesn't seem to care. When I run win98, there are always system crashes, settings being changed when I don't want them to, unstable programs (which are supposedly being made by professional companies) making other programs/the whole system unstable.

      In Linux, these problems are virtually nonexistent. I haven't seen many programs which will bring Linux down, and most of those don't crash the kernel. A buggy SVGAlib[1] program will either screw up the video or screw up the keyboard and disable virtual console switching[2]. XFree86 doesn't have this problem. Most buggy programs in X don't seem to affect it at all--there are problems such as X crashing with huge font sizes, but the main system keeps running fine; I just have to restart X. A misconfigured X may screw up the display, but most of the time I can use Ctrl-Alt-Backspace to kill X, the display restores, and I fix the problem. Also, when Ctrl-Alt-Delete still works, it will properly shut down the system--unlike Windows.

      Linux/open source has problems, but Microsoft has many more. In my twenty-some years of using computers, I haven't seen anyone produce software as crappy as Microsoft's--except for script kiddies and the low end of shareware programmers.

      I've always wondered why using fixes, new functions, patches, whatever, written by numerous different people hasn't turned Linux or other open source into a non-functioning morass of code

      They do have project leaders and others who verify the patches. Open source projects don't accept just any old patch--there is a process of reviewing and testing submitted patches. This also varies from project to project. Some maintainers will just slap in anything, but the maintainers of very good and stable projects will try to understand what the patch is doing before even testing it out. It is a very long and arduous process to get a patch for a new feature into something like the Linux kernel. There are plenty of such patches floating around. For example, Openwall Linux [openwall.com] is a kernel patch that adds security features. From the sound of it, it may never get into the official kernel...

      Isn't an operating system more complicated (or at least more fundamental) than an application?

      An OS is the most fundamental part of the software. Any bug in the OS will often cause major problems everywhere. As to an OS being more complicated, it depends on the system and what you choose to define as the OS. Some people consider only the kernel/core part as the OS, and others include "essential" libraries--the definition of essential can vary greatly. Still some others include basic utility programs part of the OS.

      Why doesn't (or how often) does fixing one bug in Linux create two new ones?

      Any change in a project can cause a new bug, but as I said, they review and test the patches, so this doesn't happen as much as you seem to think it would. The problem with Microsoft bug fixes is they don't seem to test their changes very well, and they often bundle new (and possibly unwanted) features/modifications with these fixes. These features/mods may have bugs or cause other problems. The high-end open source projects shy away from this practice. That is why they have a different branch marked experimental (or unstable) -- people who want to test (or use) the bleeding edge features can do so without affecting the stable branch.

      Footnotes:

      [1] SVGAlib is a library which allows a program to draw graphics on the screen with a virtual console. This library is dangerous because it requires the program to run as root (often suid root, which means any user will have root access with the program until the program drops privileges). The framebuffer is slightly safer because it is a kernel driver and you don't have to run it as root. Both of these can easily leave the video card in a messed up state if the program doesn't use them properly.

      [2] The virtual console is a part of the Linux kernel which handles the video display. In Linux there are multiple of these virtual consoles, and one can switch between them freely using the Alt key plus the arrow/function keys. Alt+F1 will switch to virtual console #1, Alt+F2 to #2, and so on. A problem arises if a program sets raw keyboard mode (as many SVGAlib/framebuffer programs do), as this prevents the kernel from recognizing an Alt+function key as a request to change consoles.

    • by StormReaver ( 59959 ) on Sunday February 02, 2003 @09:37PM (#5212745)
      There are several reasons why Linux is not so adversely affected by security patches:

      1) Linux the kernel is distinctly independent from the applications that it runs and from the vast majority of device drivers that it hosts. This is most likely the single most important factor. For example, fixing Apache does not require tampering with the kernel, which in turn does not require tampering with the web browser, which in turn does not require tampering with the task manager, which in turn does not require tampering with the database server. With Windows, changing one area touches every single other part of the entire system, including some very large applications (because they are integrated with the kernel).

      2) Security releases are fast, furious, and focused. Only the affected pieces are replaced. When OpenSSL was compromised by Slapper, only OpenSSL was fixed. The fix didn't have to touch a hundred completely unrelated areas as happens when your entire kit and caboodle (Windows) is tied together by spaghetti clusters. The fixes are released immediately after the vulnerability is discovered, and the full scope of the fix is detailed (parts are not hidden, as is the case with Windows). And the fixes, if anything was missed the first time, continue until the problem is eradicated.

      3) Full disclosure. The vulnerability is fully disclosed to the user base ASAP, and details provided to allow us to confirm the vulnerability. Since the vulnerable parts of the system are separate and distinct, fixing the individual parts can occur on a continuous basis. That is, not every affected component has to be fixed before other fixed pieces can be distributed.

      Not being a security type person, these are only things I can think of off the top of my head based on my own limited experience.
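The even/odd kernel version convention mentioned in the thread above can be written as a one-line check. This is just a sketch of the rule as described; it applies to the 2.x-era Linux numbering being discussed, not to every versioning scheme:

```python
def is_stable(version):
    """2.x-era Linux convention: an even second number means the stable branch."""
    minor = int(version.split(".")[1])
    return minor % 2 == 0

# Examples from the era under discussion.
for v in ("2.4.20", "2.5.59"):
    print(v, "stable" if is_stable(v) else "experimental")
```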
  • by pophop ( 604583 ) on Sunday February 02, 2003 @04:53PM (#5211549)
    1. The worm was strictly based on UDP 1434 transfer.
    I find it very difficult to believe major corporation firewalls would allow UDP 1434 inside from the Internet. Some, maybe - but few.
    So: I rule out direct penetration from the Internet for most corporate environments.

    2. The worm was memory resident only; a reboot cleared it.
    Most user PCs would be rendered useless by the worm; CPU and local network saturation would do that. So I doubt that people got infected and THEN VPN'ed into work. They would reboot, clear the worm, possibly get re-infected - but I doubt they would be able to bring an already-infected machine into work via VPN.

    Note: If split tunneling was allowed then it is quite possible for an already-connected home PC to act as a vector into a company - my guess is that this is NOT common.
    So: I rule out employee remote access as a primary vector.

    3. This leaves me with back-end connectivity across private "trusted" comm channels (i.e. Frame).
    I know this was a vector in at least one case - and the circumstances (misconfigured ACLs that were overly generous in what UDP traffic they allowed from "trusted" business partners) are something that I suspect is very common in large organizations.

    The speed with which this thing moved (see: http://isc.sans.org/port1434start.gif) and the actual vectors I saw make me very suspicious that the large organizations of the world are massively linked by misconfigured routers/firewalls that allow way too much UDP traffic to flow between trusted partners - effectively a "fuse" linking the world's computing infrastructures.

    That's it. Wacky and overly speculative perhaps, but I would be interested in getting some anonymous feedback about the successful attack vectors other people saw in the propagation of the worm - particularly from people in large organizations that have large "private" comm networks.
  • by supabeast! ( 84658 ) on Sunday February 02, 2003 @04:53PM (#5211550)
    If corporations are really interested in protecting themselves, they should stop slashing IT budgets and downsizing engineers. Security goes downhill fast when the techies are too busy to keep servers patched, and nobody is watching for idiots sticking database servers outside the corporate firewall.

    Every company with an internet-enabled IT infrastructure needs to have a dedicated sysadmin AND a dedicated security admin. If a company can't afford two full-time geeks to keep things secure, then they need to outsource server hosting to a secure facility.
  • by jsimon12 ( 207119 ) on Sunday February 02, 2003 @04:56PM (#5211562) Homepage
    The bigger question is why isn't Microsoft being held responsible? DSC was held responsible when one of its faulty switches brought down the East Coast's telephone lines; Ford/Firestone were held responsible for their faulty tires and vehicles. Sure, they have statements that they aren't responsible in their EULA, but come on, doctors get sued even though people sign waivers. We need to put blame where blame belongs, and that is on the company that originated this faulty and shoddy product.
    • by Chester K ( 145560 ) on Sunday February 02, 2003 @05:35PM (#5211744) Homepage
      We need to put blame where blame belongs, and that is the company that orginated this faulty and shoddy product

      I disagree completely: holding Microsoft responsible would set a chilling precedent that would effectively squelch software development, because all software has bugs.

      Would you contribute to Open Source projects if you knew that any bug you write, no matter how obscure and unintentional, might become a liability to you? Would getting your name in the changelog of the kernel be worth putting your financial future at risk?

      Oh, and it doesn't matter who discovers the bug. Even if it's discovered before it's exploited and you issue a patch for it (as Microsoft did in this case, I might add), you think the software author should still be held liable? Even though you did your part and fixed the bug? Isn't it the sysadmin's fault at that point?
      • You must not read /. often. Open source is always specifically excepted from all liability on the grounds that it.. well, uh, you know, freedom of thought and all that stuff.

        But seriously, you're absolutely correct that the surest way to kill the tech industry is to promote endless litigation and ambulance chasing instead of trying to build real solutions to the security problems (on all platforms) and punish the vandals.
      • I don't think it would set a disturbing precedent; lawsuits are about MONEY, plain and simple. Lawyers don't file lawsuits unless they can get money (for the most part; sure, occasionally something is filed on principle, but it is a rarity). A class action against an Open Source project wouldn't garner much more than maybe a couple thousand, if even that, which is by no means worth a lawyer's time. Microsoft on the other hand......BILLIONS........

      • There is a difference, though. When people hand money over to Microsoft in exchange for a product, that is not only an economic transaction, that is a legal transaction, as well.

        A lot of states require, for example, a minimum amount of time for a customer to be able to return defective merchandise. When the company sells you a product, the company is agreeing to several legal responsibilities.

        When I give you a gift, I am not held legally responsible for that gift (unless the gift is illegal or stolen in the first place).

        With OSS software, there is no exchange of money with the author, so there is a lot less legal groundwork to work with.

        Places like RedHat, though, would be in a difficult situation, since they are selling a product.

        Your point about fixing the bug is an interesting one. Suppose Ford had discovered that there was a problem with the interaction between their tires and their vehicles, and then announced that they would replace the tires in a minor PR release somewhere. Suppose they required you to drive the vehicle to its originating factory (most likely Louisville, KY for Explorers) to be replaced.

        I think the government could argue that Ford did not do the appropriate thing to rectify a known problem.

        I am not too familiar with the MS SQL fix, but apparently it was not only difficult to install, but it was also broken by a later patch. That moves some of the responsibility from the sysadmin back onto Microsoft at that point, I would think.

        So in the end, I think it would be best to hold companies accountable for mistakes they knowingly should have fixed, and made those fixes easy to work with (within reason).

        (And, for factual clarification - most later simulations of the Ford/Firestone tire incidents lead to the conclusion that while the tires blew out more often than normal, and that the Explorer, like almost any SUV, tends to roll over more often than a car, most of the incidents were probably a result of driver error in correcting for a blown tire. Most drivers apparently slammed on the brakes and jerked the steering wheel, which will cause an SUV to roll even without a blown tire.)
  • by mangu ( 126918 ) on Sunday February 02, 2003 @05:09PM (#5211622)
    Wait until mid-century, when nanotech is used everywhere, and hardware viruses and worms start appearing. Let's just hope that, by then, micro$oft will have been swept into the dustbin of history and nanotech will be open source...
  • How fast... (Score:5, Interesting)

    by sean23007 ( 143364 ) on Sunday February 02, 2003 @06:22PM (#5212008) Homepage Journal
    Boy, how fast would everyone drop MS once and for all if this worm had been written to corrupt filesystems and/or destroy data? As it is, everyone will just try to patch their systems and whine a little bit, but at the end of the day they will still write out a check to Microsoft. Eventually, along will come a worm that will cripple Microsoft's ability to sell products any longer: when it becomes clear that using MS software is practically a guarantee that your data is vulnerable and could even be destroyed, Windows is finished; Microsoft is finished.
  • by GC ( 19160 ) on Sunday February 02, 2003 @07:28PM (#5212268)
    Just how difficult is it to come up with some code that goes about finding vulnerable machines, makes them invulnerable, and tries to spend a modest amount of its time finding more vulnerable machines?

    Bring on the white-hat worms that actually fix problems, rather than cause them.

    Sure - ethics must be a problem, but there must be some slightly-un-ethical white hats out there ready to give this a go?
