
How One Company Survived a Ransomware Attack Without Paying the Ransom (esecurityplanet.com)

Slashdot reader storagedude writes: The first signs of the ransomware attack at data storage vendor Spectra Logic were reports from a number of IT staffers about little things going wrong at the beginning of the day. Matters worsened quickly, and signs of a breach became apparent. Screens then started displaying a ransom demand, which said files had been encrypted by the NetWalker ransomware. The ransom was $3.6 million, to be paid in bitcoin within five days.

Tony Mendoza, Senior Director of Enterprise Business Solutions at Spectra Logic, laid out the details of the attack at the annual Fujifilm Recording Media USA Conference in San Diego late last month, as reported by eSecurity Planet.

"We unplugged systems, as the virus was spreading faster than we could investigate," Mendoza told conference attendees. "As we didn't have a comprehensive cybersecurity plan in place, the attack brought the entire business to its knees."

To make matters worse, the backup server had also been wiped out. But with the help of recovery specialist Ankura, uncorrupted snapshots and [offline] tape backups got the company back online in days, although full recovery took a month.

"We were able to restore everything and paid nothing," said Mendoza. "Other than a few files, all data was recovered."

The attack, which started from a successful phishing attempt, "took us almost a month to fully recover and get over the ransomware pain," said Mendoza.

  • by Kokuyo ( 549451 ) on Sunday July 17, 2022 @09:46AM (#62709664) Journal

    They were lucky to scrounge up some off-site backup... so in essence, they didn't fail badly enough to be screwed.

    News at eleven? I mean seriously... If you have a backup that is not writable by any old Windows server, the scenario I'd expect is basic functionality in 24 to 48 hours depending on company size.

    Anything else and you failed so hard you should never get to be a CEO again.

    • True. What's hilarious is that it was a "data storage vendor". Even for really small clients I set up at least one remote backup and one local nightly backup to a network disk that is mounted only during the backup window. These things are useful in far more than just ransomware situations.

      You cannot depend absolutely on your data and then not take care of it. I'm sure they don't leave boxes full of money lying around.

      • by raymorris ( 2726007 ) on Sunday July 17, 2022 @11:35AM (#62709868) Journal

        That's great. Even better would be pull backups.
        Consider: if the (compromised) server can mount the backup disk and push backups to it, it can also push destruction. Bad guys are well aware of this and WILL do so. They will mount and destroy backups.

        Safer is to have the backup system pull, using a read-only account.
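        Something like this is the idea (a minimal sketch, not anyone's actual setup; the host name, paths, SSH key location and the read-only "backup-ro" account are all made up for illustration):

        #!/usr/bin/env python3
        # Sketch of a *pull* backup: the backup host initiates the copy over SSH
        # using a dedicated read-only account on the production server, so a
        # compromised production box has no credentials or mount that can reach
        # the backup store. All names and paths below are assumptions.
        import datetime
        import subprocess

        SOURCE = "backup-ro@fileserver.example.com:/srv/data/"  # read-only account on the source
        DEST_ROOT = "/backups/fileserver"                        # exists only on the backup host

        def pull_backup() -> None:
            # One dated directory per run; rotation and pruning are out of scope here.
            dest = f"{DEST_ROOT}/{datetime.date.today().isoformat()}"
            subprocess.run(
                ["rsync", "-a",
                 "-e", "ssh -i /root/.ssh/backup_pull_key",
                 SOURCE, dest],
                check=True,
            )

        if __name__ == "__main__":
            pull_backup()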

        • by saloomy ( 2817221 ) on Sunday July 17, 2022 @11:57AM (#62709904)
          We use a SAN for everything, with snapshots and remote replication. On the far side, tape gets the data off site and in archive mode, so it can't just be deleted. Oh, and most importantly, the disk system and the backup system have nothing to do with the Windows environment. Windows systems do not even know they are being backed up, just that there is a volume shadow copy being created on a schedule that coincides with the backup. Completely different management domains and networks. The only link is the backup system's read-only block-level access to the volumes which house the company data. When ransomware attacks happen, we figure out what is compromised, restore the entire solution to a snapshot from before the payload executed, and clean it. Works like a charm. Breaches will happen; it is just a matter of making sure you have a good, recent copy of your data that is easily restorable and inaccessible from the systems it is protecting.
          • Don't rely on snapshots. Modern ransomware has built-in API calls for common SAN systems (NetApp, EMC, HPE, etc.) to delete snapshots. The really intelligent ones lurk, collecting passwords, until they get access to the SANs, then start by blowing away all snapshots on local AND remote systems. I've seen it firsthand (I've been in storage support for 20+ years). Once all the snapshots are gone, it starts the encryption. By the time you see the first message, the damage is done. The key part of what yo
            • You cannot ping the SAN management VLANs or the backup system on either side from the Windows environment VLANs. They are completely separate for us. Out-of-band management of data from systems is key. Plus, we can always get to tape (protects against rogue admins).
      • True. What's hilarious is that it was a "data storage vendor".

        It gets even better:
        "The company (Spectra Logic) became the first to automate Advanced Intelligent Tape (AIT) magnetic tape in a robotic autoloader (tape library),[5] and was also the first tape-library vendor to implement the iSCSI networking protocol in its products.[6] Spectra Logic produced another first when it released[when?] a tape library with integrated hardware-based data encryption.[7]" Spectra Logic [wikipedia.org]
        Sounds like the company doesn't eat its own dog food. Another point: why in the world would anyone wh

    • Well, 48 hours is ambitious. You cannot trust any of your systems, so your best bet is to do a clean install. Maybe, for some few systems, you have full image snapshots, but probably not for everything. So reinstall, reconfigure, restore data, carefully test...

      Maybe you get critical stuff back in a day. I'm thinking an intense week for everything else.

      • by nuckfuts ( 690967 ) on Sunday July 17, 2022 @01:09PM (#62710046)

        Well, 48 hours is ambitious. You cannot trust any of your systems, so your best bet is to do a clean install. Maybe, for some few systems, you have full image snapshots, but probably not for everything. So reinstall, reconfigure, restore data, carefully test...

        Maybe you get critical stuff back in a day. I'm thinking an intense week for everything else.

        Absolutely. It's easy for people on /. to make judgements from the sidelines, but having been involved in several ransomware recovery efforts I can assert it's not as simple as clicking "restore" in your backup software. For one thing, there is the sheer volume of data in many companies today. You may back up your servers every night in an hour or two, but those are typically differential backups. Full backups may take a long time to restore remotely. Also, if your backups are not "bare metal" backups, you need time to prepare the systems before restoring, by doing a clean reinstall and configuration of the OS. You may also need to do forensics to determine the exact vulnerability that was exploited, otherwise you risk restoring everything only to get hit again a few days later. (I have, regrettably, seen this happen.)

        These are just a few thoughts off the top of my head. Companies spend years setting up their computer systems. You don't realize how many details are involved until you are forced to suddenly rebuild everything. Don't just assume it's a trivial procedure, even with good backups.

    • by gweihir ( 88907 )

      the scenario I'd expect is basic functionality in 24 to 48 hours depending on company size.

      You are doing it wrong. The maximum acceptable recovery time has absolutely nothing to do with company size. What you actually do is a BIA (Business Impact Analysis) that shows the effects and damage of different recovery times. Things like the criticality of your business to others play a major role here. "Queuing" effects play a role (customer communications piling up, for example). Legal requirements and deadlines play a role. For example, a health insurance company may only ha

    • Luckily, it's Spectra Logic; their business is making tape libraries. One would hope they had some tape backups.

    • What you do with this is take it as a cautionary tale, and make sure your own backups are good. And to make sure they are good, you have to actually try to restore them. Every time.

    • This is an ad for Ankura, then.
  • (1) Make regular, frequent, isolated backups. (2) Make regular, frequent, isolated backups. (3) Make regular, frequent, isolated backups. Anything after that is to make recovery faster & easier.
    • by Opportunist ( 166417 ) on Sunday July 17, 2022 @10:03AM (#62709696)

      Also, test the effin' backups regularly. Do a clean restore onto another machine and see if it actually works. Nothing's worse than realizing what you backed up is the .lnk files pointing to the files you actually needed.

      But hey, the backup really went lightning fast!
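      Something along these lines catches that kind of failure (a rough sketch; the paths are placeholders and it assumes a test restore has already been written to a scratch directory):

      #!/usr/bin/env python3
      # Spot-check a test restore by comparing file hashes against the live copies.
      # Mismatches are expected for files that changed after the backup ran; the
      # point is to catch restores that contain only shortcuts or empty files.
      import hashlib
      from pathlib import Path

      ORIGINAL = Path("/srv/data")           # live data (assumed path)
      RESTORED = Path("/restore-test/data")  # where the test restore landed (assumed path)

      def digest(path: Path) -> str:
          h = hashlib.sha256()
          with path.open("rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def verify(sample_limit: int = 500) -> bool:
          ok, checked = True, 0
          for orig in ORIGINAL.rglob("*"):
              if not orig.is_file():
                  continue
              restored = RESTORED / orig.relative_to(ORIGINAL)
              if not restored.is_file() or digest(orig) != digest(restored):
                  print(f"MISMATCH: {orig}")
                  ok = False
              checked += 1
              if checked >= sample_limit:
                  break
          return ok

      if __name__ == "__main__":
          print("restore looks OK" if verify() else "restore FAILED")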

      • 3-2-1 rule. At least 3 copies, at least 2 forms of media, at least 1 off site. All of it needs to be tested periodically.

        A remote online backup solution can tick one box on each layer.

        D2D2T systems, disk to disk to tape, are quite cheap, common, and easy for any competent IT department to deploy. If a company has enough money for IT folks, then they can afford it.
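        For what it's worth, the rule itself is trivial to encode as a sanity check against whatever inventory you keep of your backup copies (the entries below are made up for the example):

        # Toy 3-2-1 check: at least 3 copies, on at least 2 media types, at least 1 off site.
        INVENTORY = [
            {"name": "primary disk backup", "media": "disk", "offsite": False},
            {"name": "D2D2T tape set",      "media": "tape", "offsite": False},
            {"name": "vaulted tape set",    "media": "tape", "offsite": True},
        ]

        def meets_3_2_1(copies) -> bool:
            return (len(copies) >= 3
                    and len({c["media"] for c in copies}) >= 2
                    and any(c["offsite"] for c in copies))

        print("3-2-1 satisfied" if meets_3_2_1(INVENTORY) else "3-2-1 NOT satisfied")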

    • by stooo ( 2202012 ) on Sunday July 17, 2022 @10:33AM (#62709734) Homepage

      4th rule is to not use MS Windows.

      • by gweihir ( 88907 )

        4th rule is to not use MS Windows.

        Indeed. And MS Office. And MS Outlook. Unfortunately many people have painted themselves into a corner in that regard. The MS crap makes it far, far too easy to make a tiny mistake with dire consequences. Professional engineering looks different.

    • by gweihir ( 88907 )

      (1) Make regular, frequent, isolated backups. (2) Make regular, frequent, isolated backups. (3) Make regular, frequent, isolated backups. Anything after that is to make recovery faster & easier.

      And fail.
      1. Make regular protected backups (worm, offline, whatever).
      2. Test the backups regularly.
      3. Test restore procedures regularly in realistic scenarios so you are sure you can restore.
      4. Assign responsibilities and establish a crisis organization.
      5. Establish rules on how to deal with these (and other) types of incidents. For example, a decision on whether to even consider paying or not has to be taken before you are under a lot of time pressure and possibly not thinking too clearly.

      You missed most of the

      • I bet you're usually surrounded by people, hanging on your every word at parties.
      • We test our backups, both the local ones and offsite, all the time. Not really "on purpose", but at least once a month someone from the Applications team or our devs needs something restored that has already been moved offsite. Every week, sometimes 2-3 times a week, we have to pull items off onsite backup.
        • by gweihir ( 88907 )

          That sounds completely fine to me, and I see no need for additional tests of the file backup as long as this situation persists. Now make sure you can also restore complete systems and you are all set.

  • by SpzToid ( 869795 ) on Sunday July 17, 2022 @10:52AM (#62709766)
    It is satisfying to read an I.T. success story against ransomware, with all the details of how it was done. More like this, please.
    • by splutty ( 43475 )

      There are thousands of these stories. Almost none of them make it into the news, though, for obvious reasons.

      Any company that has things properly set up will be able to isolate these attacks quickly, and it'll be just another disaster recovery scenario.

    • This is not a success story. Getting lucky that the attacker didn't (maybe?) extract data before ransoming is not success. You pay far more from personal data being leaked than you do from your systems being unavailable. It is pure luck, and an inexperienced attacker, that it wasn't much worse for them. "We were able to restore everything and paid nothing" is not a defense.
  • Planned to fail. (Score:4, Interesting)

    by Gravis Zero ( 934156 ) on Sunday July 17, 2022 @11:12AM (#62709804)

    "As we didn't have a comprehensive cybersecurity plan in place, the attack brought the entire business to its knees."

    A failure to plan is a plan to fail.

    "We were able to restore everything and paid nothing," ...

    "took us almost a month to fully recover and get over the ransomware pain," said Mendoza.

    When it comes to larger corporations, the expense of shutting down for a week (to restore from backups) is often higher than the ransom itself, which is why the ransom is often paid even when there are backups.

    lesson to be learned:
    If you aren't prepared to perform a site-wide restoration in a 24 hour window then you aren't prepared for a ransomware attack.

    • by gweihir ( 88907 )

      A failure to plan is a plan to fail.

      It is. And at this point this threat has been known long enough that anybody not prepared goes into "gross negligence" territory.

      If you aren't prepared to perform a site-wide restoration in a 24 hour window then you aren't prepared for a ransomware attack.

      That really depends on the BIA and risk management. For many businesses an RTO of, say, 3 days is entirely fine. No need to overdo things. Instead use established risk management approaches and yes, get consulting if you are not confident you really understand your situation.

  • Uuuh. This should literally be *every* company. If you have a company of any value at all, you *should* have offsite (and offline) backups, and should have a cloning mechanism to backups that can never be directly accessed.

    This is... Data security 101.

    They failed in some of that, but succeeded in the most important part.

    If your company does not have this, then.. I don't know what to tell you, other than you really really don't want to work there.

  • Don't we all agree that certain mission-critical file areas should never be encrypted?

    When's a genius going to come up with a system code that provides a "NOCRYPT" attribute?

    I have to think of everything.

    • by gweihir ( 88907 )

      I have a better solution! Simply use a "NOSTORE" attribute. Files that are not stored cannot be encrypted. That will show the criminals!

  • Have WORM or offline backups, have tested BCM/DR. Also have reasonable sensors (simple for ransomware) and be prepared to isolate from the Internet and shut things down. Two of our customers had to do it last year, both were offline less than 3 days (completely non-critical in their line of business). Of course, they are both regulated, so I made sure in 2020 that they knew what they could face and hence they were prepared.

    My take is that anybody who "has" to pay ransom in 2022 screwed up massively and ig

    • I would like a tape library vendor to have a way to physically flip the write-protect tab on an LTO tape, but *only* from read/write to read-only. Then when my tape reclamation kicks in, I can move the tapes to the mail slot and have a human in person move them back to read/write. That way I get pseudo-WORM tape for a lot less money.

      • by gweihir ( 88907 )

        As the "official" WORM tapes are just software-WORM, maybe you should complain to the vendor. Or you can just do the WORM side on the backup server. A no-login sftp-drop quasi-WORM server is not hard to set up or configure with Linux or one of the xBSDs.

  • It amazes me that some companies aren't prepared for hardware failures

  • What about the second ransom, the one they ask you for not publishing the data?
  • by rufey ( 683902 ) on Sunday July 17, 2022 @05:59PM (#62710618)

    Several lives ago we used Spectra Logic tape libraries. So when I started reading the summary, I assumed they would simply recover from backups (be it snapshots, nearline backups, or tape backups) and go on with business, since backup hardware is one of their business specialties.

    I have never known any company that has fully tested their disaster recovery by having to rebuild their live systems. Usually it's a DR site somewhere, and the recovery will get most things, but there are always little things here and there that fail.

    The fact they were able to recover their data and basically rebuild their network in a month's time and only have a few files that were lost is a disaster recovery that went right.

    Disclaimer: I've been a part of backups and DR planning for nearly 30 years.

  • by martinX ( 672498 ) on Sunday July 17, 2022 @06:52PM (#62710730)

    "We were able to restore everything and paid nothing,"

    That'll show 'em!
    with the help of recovery specialist Ankura..
    "took us almost a month to fully recover and get over the ransomware pain,"

    I don't think they paid 'nothing'.

  • I've done that twice. The first time, we lost 4 files (from 100M); the second time, 0. It took about 6 hours to get back up and running each time, and about 100 GB was encrypted each time. We didn't even need to resort to backups. You can architect your infrastructure to be cryptolocker-resilient. It's not that hard, but you need to do things the right way.
  • ... of the databank on the planet Scarif?
