Security IT Technology

Company Shuts Down Because of Ransomware, Leaves 300 Without Jobs Just Before Holidays (zdnet.com) 135

An Arkansas-based telemarketing firm sent home more than 300 employees and told them to find new jobs after IT recovery efforts didn't go according to plan following a ransomware incident at the start of October 2019. From a report: Employees of Sherwood-based telemarketing firm The Heritage Company were notified of the decision just days before Christmas, via a letter from the company's CEO. Speaking with local media, employees said they had no idea the company had even suffered a ransomware attack, and the layoffs caught many off guard. "Unfortunately, approximately two months ago our Heritage servers were attacked by malicious software that basically 'held us hostage for ransom' and we were forced to pay the crooks to get the 'key' just to get our systems back up and running," wrote Sandra Franecke, the company's CEO, in the letter sent to employees. She goes on to say that data recovery efforts, initially estimated at one week, did not go according to plan, and the company failed to recover full service by Christmas. Franecke said the company lost "hundreds of thousands of dollars" because of the incident and has been forced to "restructure different areas in the company." As a result of the botched recovery process, the company's leadership decided to suspend all services, leaving more than 300 employees without jobs.
  • Telemarketing? (Score:5, Insightful)

    by nospam007 ( 722110 ) * on Friday January 03, 2020 @01:45PM (#59583020)

    "An Arkansas-based telemarketing firm sent home more than 300 employees"

    For once the right firm was targeted instead of hospitals etc.

    • Re:Telemarketing? (Score:5, Insightful)

      by LenKagetsu ( 6196102 ) on Friday January 03, 2020 @02:36PM (#59583208)
      Agreed, they did some good for once. More criminals need to pull the Pretty Boy Floyd approach and destroy companies the general public doesn't like. If a bunch of debt collectors had their entire records locked up and lost, nobody on this planet would shed a tear for them.
      • Patent Trolls next! Please!
      • Re: (Score:3, Insightful)

        If a bunch of debt collectors had their entire records locked up and lost, nobody on this planet would shed a tear for them.

        Because people who borrow money or incur a debt shouldn't have to take personal responsibility and pay back the money. Everything should be free.
        • Re: (Score:3, Insightful)

          I think you don't understand what debt collectors do.
          • In a nutshell: When somebody buys a new TV that they can't afford, or injure you while driving intoxicated and refuse to pay you for the damages awarded in the civil lawsuit, a debt collector will take care of it.

          • by bjwest ( 14070 )

            Because people who borrow money or incur a debt shouldn't have to take personal responsibility and pay back the money. Everything should be free.

            I think he knows exactly what a debt collector does. I believe it's you who doesn't, if you think it's something other than getting people to pay their debts, even if not to the original creditor.

        • Debt-collector, you keep using that word, I don't think it means what you think it means.

      • Agreed, they did some good for once. More criminals need to pull the Pretty Boy Floyd approach and destroy companies the general public doesn't like. If a bunch of debt collectors had their entire records locked up and lost, nobody on this planet would shed a tear for them.

        Except TFA is about a telemarketing company that small nonprofits outsource fundraising to. Not sure what debt collectors have to do with this thread.

    • by thomn8r ( 635504 )
      The fact that they just threw up their hands and shut the doors is damning evidence of their actual value.
      • Sounds to me more like the Cyberassholes took the money and ran.

        Otherwise, it says they paid the ransom, so why did they have to shut down after that? I think the above happened, then they found their Shareware "Back-U-Up" app or whatever (gasp) DIDN'T WORK as advertised!

        That's my guess.

        • Sounds to me more like the Cyberassholes took the money and ran.

          Otherwise, it says they paid the ransom, so why did they have to shut down after that? I think the above happened, then they found their Shareware "Back-U-Up" app or whatever (gasp) DIDN'T WORK as advertised!

          That's my guess.

          I agree that it smells like what you've posted.

          IT didn't have best practices implemented in the first place, and when the shit hit the fan, the goddam fan broke.

  • In this particular case I am super glad to hear it! More telemarketers should be targeted. If the bastards must do their lame ransomware attacks, at least attack telemarketers for the greater good and not hospitals.

  • by kackle ( 910159 ) on Friday January 03, 2020 @01:51PM (#59583044)
    "It's a Christmas miracle!"
  • No backups? Or was it a time bomb that took 2 weeks before triggering, so the clean backups rolled away?

    • by raymorris ( 2726007 ) on Friday January 03, 2020 @01:59PM (#59583074) Journal

      A lot of companies have a system where servers PUSH their data to a backup device, meaning the server can write to the backup storage, often a file share. Ransomware encrypts any file shares it can reach, so it gets the backups too.

      To be safe from both ransomware and other threats, backups need to be PULL: the backup system reads from the main system, rather than allowing the main system to write to the backup.

      Also, older backups should be taken off-line so that even if your backup systems are damaged, you have offline media with recovery data - even if that offline copy isn't super fresh.
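
      To make the PULL approach concrete, here is a minimal sketch of what the backup host itself might run (hostnames, paths, and the account name are hypothetical; it assumes rsync and SSH key access are already set up):

          #!/usr/bin/env python3
          # Minimal pull-backup sketch: the backup host initiates the copy and
          # only READS from the production server. Production never mounts or
          # writes to the backup storage, so ransomware on production cannot
          # overwrite old backups.
          import datetime
          import pathlib
          import subprocess

          SOURCE = "backupuser@prod-server:/srv/data/"  # hypothetical host/path
          DEST_ROOT = pathlib.Path("/backups/prod-server")

          def pull_backup():
              dest = DEST_ROOT / datetime.date.today().isoformat()
              dest.mkdir(parents=True, exist_ok=True)
              # rsync runs here, on the backup host, and pulls over SSH.
              subprocess.run(["rsync", "-a", SOURCE, str(dest)], check=True)

          if __name__ == "__main__":
              pull_backup()

      Run it from cron on the backup host; the production server needs no credentials for, and no route to, the backup storage.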

      • by Major Blud ( 789630 ) on Friday January 03, 2020 @02:10PM (#59583114) Homepage

        Also, older backups should be taken off-line so that even if your backup systems are damaged

        Add to this that you should regularly test that you can restore from backup.

        • by cusco ( 717999 )

          No kidding. I needed to restore a file one time and found that Backup Exec had been reporting successful backups (with verification, mind you) for months while just creating a blank tape. Fortunately the user found an earlier copy somewhere before I had to go tell him, but it caused me to start testing my backups monthly. The Backup Exec license finally expired soon afterward, and I never had to deal with their shitty program and their shittier support staff again.

          • by raymorris ( 2726007 ) on Friday January 03, 2020 @03:05PM (#59583344) Journal

            That's very, very common. Among small businesses that think they have backups, over half aren't actually working and restorable.

            Very often, the backups had stopped working 6-12 months ago and nobody knew.

            You must have a calendar entry to test your backups regularly.
            Quarterly is probably reasonable in many cases. Of course the proper testing schedule depends on your situation.
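
            A restore test doesn't have to be elaborate. Here is a minimal sketch of the idea (file paths are hypothetical), which would catch the "reported success, blank tape" failure mode described above:

                #!/usr/bin/env python3
                # Minimal restore-test sketch: restore one known file from the
                # latest backup and compare checksums against the live copy.
                import hashlib
                import pathlib
                import sys

                LIVE = pathlib.Path("/srv/data/invoices/sample.csv")  # hypothetical
                RESTORED = pathlib.Path("/backups/prod-server/latest/invoices/sample.csv")

                def sha256(path):
                    return hashlib.sha256(path.read_bytes()).hexdigest()

                def main():
                    if not RESTORED.exists():
                        print("FAIL: restored copy missing -- backup may be blank")
                        return 1
                    if sha256(LIVE) != sha256(RESTORED):
                        # A mismatch can be legitimate if the file changed since
                        # the backup ran; test a file known not to change.
                        print("WARN: checksums differ -- verify manually")
                        return 1
                    print("OK: backup contains the file and it matches")
                    return 0

                if __name__ == "__main__":
                    sys.exit(main())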

            • That's very, very common. Among small businesses that think they have backups, over half aren't actually working and restorable.

              Very often, the backups had stopped working 6-12 months ago and nobody knew.

              You must have a calendar entry to test your backups regularly.
              Quarterly is probably reasonable in many cases. Of course the proper testing schedule depends on your situation.

              Absolutely agree.

              Every Wednesday, I deleted a file on each server and restored from backup.

              I got burned early in my career. I would come in every morning and look at the screens. "Backup Successful."

              Those were bogus and I learned a hard lesson.

          • by malkavian ( 9512 )

            Had that happen long ago too. Once with Backup Exec on Windows, and once with ArcServe on Novell 5. The first one was where I discovered the horror of searching through the longer-term backups to find one that worked, and explaining why some data was lost.
            The second was when I came into a post with responsibility, and I'd learned from that first incident that the first thing to do in any job I'm taking over is to check the backups.
            Turns out there was an NLM on the server that conflicted with the ArcServe one, so we could either run the application it was built for, or it could be backed up, but not both (it still reported successes). Essentially, that server had never been backed up in its lifetime.

            • by cusco ( 717999 )

              I worked with a couple of different backup products, but the only one that I got to work consistently and to actually alert me to errors or failures was the built-in NTBackup. Eventually I found some cheap program that put an easy to use interface with a bunch of otherwise-undocumented switches on NTBackup and was a happy camper for the rest of the time that was one of my duties.

              • I worked with a couple of different backup products, but the only one that I got to work consistently and to actually alert me to errors or failures was the built-in NTBackup. Eventually I found some cheap program that put an easy to use interface with a bunch of otherwise-undocumented switches on NTBackup and was a happy camper for the rest of the time that was one of my duties.

                I liked NTBackup. It was simple and a real workhorse.

                I'm running Windows 10, backing up with the Windows 7 Backup tool instead of File History. The Windows 7 backup is essentially NTBackup with a GUI.

            • Had that happen long ago too. Once with Backup Exec on Windows, and once with ArcServe on Novell 5. The first one was where I discovered the horror of searching through the longer-term backups to find one that worked, and explaining why some data was lost.
              The second was when I came into a post with responsibility, and I'd learned from that first incident that the first thing to do in any job I'm taking over is to check the backups.
              Turns out there was an NLM on the server that conflicted with the ArcServe one, so we could either run the application it was built for, or it could be backed up, but not both (it still reported successes). Essentially, that server had never been backed up in its lifetime. I got a project underway to replace it pretty sharpish, as it was a pig to back up (we needed windows where the users, being 24x7, worked in 'downtime procedures' each evening to get the backup done, and then backfilled when the system came back).

              Good memories. I remember NLM conflicts and ArcServe on Novell 3.1.

              We switched to Windows servers so I never jocked a newer version of Novell.

              Still, the basic practices are still the same. Backup, disconnect, store offsite, rotate, test ...

              I lost a lot of sleep worrying about getting a successful backup.

              I'd go in when I got an email that the backup didn't happen, even on weekends. I'd stay, watching that goddam pot of water until it boiled.

        • Also, older backups should be taken off-line so that even if your backup systems are damaged

          Add to this that you should regularly test that you can restore from backup.

          Preach it.

          The object of backup is not to store -- it's to restore.

        • by fred911 ( 83970 )

          'Add to this that you should regularly test that you can restore from backup.'

          More on topic: ensure that you have sufficient resources to decrypt an encrypted volume. I assume this is where they failed. Having the tools but insufficient skill to use them renders a key useless.

      • Seems like this problem could have been prevented by a $5000 tape drive. Of course, they'd lose all the data between the present and the latest known-clean backup, but that's better than nothing.

      • by thomn8r ( 635504 )

        A lot of companies have a system where servers PUSH their data to a backup device, meaning the server can write to the backup storage, often a file share.

        3-2-1

        3 copies
        2 different media
        1 offsite
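
        As a rough sketch of how the second and third copies might be made once the primary backup lands on local disk (all paths and the offsite host are hypothetical placeholders):

            #!/usr/bin/env python3
            # 3-2-1 sketch: copy 1 already exists on local disk; copy 2 goes
            # to a second medium (external drive), copy 3 to an offsite host.
            import subprocess

            PRIMARY = "/backups/prod-server/latest/"  # copy 1: local disk
            TARGETS = [
                "/mnt/usb-backup/prod-server/",       # copy 2: second medium
                "offsite-host:/vault/prod-server/",   # copy 3: offsite
            ]

            for dest in TARGETS:
                subprocess.run(["rsync", "-a", PRIMARY, dest], check=True)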

        • A lot of companies have a system where servers PUSH their data to a backup device, meaning the server can write to the backup storage, often a file share.

          3-2-1

          3 copies

          2 different media

          1 offsite

          Good plan.

          I carried the tapes (later external hard drives) home with me and rotated them on a 7-day cycle.

      • PUSH vs PULL doesn't solve anything by itself; it depends on how the ransomware is triggered and what it locks. The ransomware can easily exist in your PULLed backup: the ransomware has encrypted your data but still allows the backup software to operate, and now your backup instance of the data is encrypted too.

        Not keeping *versioned* backups is the problem. If your backup is always only "latest", then you are screwed if you need something that isn't in the latest backup but was in the previous backup.
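
        A minimal sketch of file-level versioning using rsync's --link-dest, where unchanged files are hard links into the previous snapshot, so every dated snapshot survives even if a later run copies in encrypted data (paths are hypothetical):

            #!/usr/bin/env python3
            # Versioned-backup sketch: each run creates a new dated snapshot.
            # Unchanged files are hard-linked to the previous snapshot, so old
            # versions are retained without storing full copies every time.
            import datetime
            import pathlib
            import subprocess

            SOURCE = "backupuser@prod-server:/srv/data/"  # hypothetical
            ROOT = pathlib.Path("/backups/prod-server")

            def snapshot():
                ROOT.mkdir(parents=True, exist_ok=True)
                today = ROOT / datetime.date.today().isoformat()
                older = sorted(p for p in ROOT.iterdir()
                               if p.is_dir() and p != today)
                cmd = ["rsync", "-a"]
                if older:
                    cmd += ["--link-dest", str(older[-1])]  # newest prior snapshot
                subprocess.run(cmd + [SOURCE, str(today)], check=True)

            if __name__ == "__main__":
                snapshot()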

        • That's a good description of the issue, AFAICT. I back up my laptops etc. to a removable USB drive that is physically disconnected most of the time, which limits the chance of an infection spreading to it.

          I use Great Norton's Ghost to back up the systems. I keep at least two images of each system, taken about 6 months apart. Same idea: if the most current backup has been infected, the one from 6 months or a year ago will probably be OK.

          This scheme has been working for me for

          • OSX has this very nicely solved by Time Machine. While it's a pain to set up for networks with multiple machines, all backups are incremental deltas of only the files that have changed, and a robust snapshot-retention policy is provided out of the box. I wouldn't use Ghost (or similar full-system imaging tools) for this, but would recommend software packages (Carbonite, etc.) that have similar backup and retention-policy capabilities built in. Unlike Ghost, they also provide tools for the end users to self-serve.

            • That's great for OSX but I'm a Linux/Windows person, so that's not an option for me.

              Also, Ghost (I use Enterprise 8.0; I know it's old, but it works great) does have a "Ghost Explorer" program that lets you browse backup sessions and restore individual files, at least on the Windows platform. No such tool exists for Linux, so that's image-only.

          • by Mal-2 ( 675116 )

            I use EaseUS Todo Backup for the same purpose. It will make a bootable mini-OS image for restoring those backups, too. It's nagware, but I use it so infrequently that it's not really a problem. Even WinRAR is more annoying, because it does its thing multiple times a day.

        • For complete protection, it needs to be both versioned (rotated, multiple ages) AND be pull.

          Versioned push is no good because the ransomware is going to ruin ALL the copies on the file share, not just the latest copy, if it has write access.

          Assuming that something bad can happen to BOTH your main server and your remote backup, such as hackers getting into both, your only recourse is offline backup. A lot of people simply won't do offline daily, but taking a copy offline monthly or quarterly can make it mer

          • You clearly don't know how backup software works. If the backup client supports versioning, then the ransomware has no way of manipulating old versions to corrupt them.

            Your assumption about push vs pull is naively misinformed.

            • I actually wrote perhaps the most advanced backup system on the planet. If the (rooted) client can write to the filesystem where the backups are stored, it DOESN'T MATTER what the backup software thinks about which file is which version. ALL the files will be GONE.

              If you want to push your "backups" to a writeable file share and you think that that backup client is going to prevent the bad guys from encrypting EVERYTHING on that file share, go ahead. You've been warned, it's your funeral, buddy.

      • by vux984 ( 928602 )

        You are right, but I think the key is that backups must not be based on mounting file systems. Overemphasizing "push" vs "pull" is a red herring.

        A backup server sitting next to the main server, which mounts the main server's filesystem and pulls a backup from it, is just as vulnerable to ransomware (albeit via the backup server getting infected instead of the main server). The odds of this are lower, as many malware infections come in via regular users' browsing habits and their email and th

        • > Consider the possibility that your backup server/system gets breached. Consider what an attacker can do... they could potentially, for example, push bare metal restores to all your servers to a point in time 3 years ago, and then ransomware the backup servers.

          The backup servers have READ access to the main servers for daily backups, not write access. That's the point of pull instead of push: no system has (unattended) write access to any other system. Restores involve an admin using their credentials.

          • by vux984 ( 928602 )

            "The backup servers have READ access to the main servers for daily backups, not write access."

            Yes, that's a good policy; I'm not familiar with many solutions like that, though. That's not to say they don't exist, but all the solutions I've seen that are managed via cloud consoles have the cloud able to push restores back to the host without any additional authentication beyond access to the dashboard. That is fine against run-of-the-mill infections and normal failures, but if the da

            • > all the solutions that I've seen that are managed via cloud consoles all have the cloud able to push restores back to the host without any additional authentication

              The setup instructions for such systems will either have an agent, or ask you to set up an account with a password or key that has read/write access. I suspect that most agentless systems would still work for backups if you instead set up a read-only account. Of course, restores from the console directly to the affected system wouldn't work.
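
              A minimal sketch of that read-only pull idea over SFTP with a dedicated account (host, account, and paths are hypothetical; it assumes the backup-ro account has been restricted server-side to read-only access, and uses the third-party paramiko library):

                  #!/usr/bin/env python3
                  # Read-only pull sketch: the backup host logs in with an account
                  # that can only read, so this job holds no write access to prod.
                  import paramiko  # third-party SSH/SFTP library

                  client = paramiko.SSHClient()
                  # In a real deployment, verify host keys instead of auto-adding.
                  client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
                  client.connect("prod-server", username="backup-ro",
                                 key_filename="/root/.ssh/backup_ro")
                  sftp = client.open_sftp()
                  sftp.get("/srv/data/db.dump", "/backups/prod-server/db.dump")
                  sftp.close()
                  client.close()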

      • A lot of companies have a system where servers PUSH their data to a backup device, meaning the server can write to the backup storage, often a file share. Ransomware encrypts any file shares it can reach, so it gets the backups too.

        To be safe from both ransomware and other threats, backups need to be PULL: the backup system reads from the main system, rather than allowing the main system to write to the backup.

        Also, older backups should be taken off-line so that even if your backup systems are damaged, you have offline media with recovery data - even if that offline copy isn't super fresh.

        What I do is map a drive and do a backup.

        Then I disconnect that drive.

        Then I unplug that drive and replace it, in a 5-day rotation -- stored offsite.

        • Sounds great. Sometimes people don't realize they have been rooted until a couple weeks later. You might want to hold onto the 5-day-old backup one day per week. Then once per month, reuse three of the four weekly backups, keeping one.

          What Clonebox did was keep a copy from yesterday, from 2 days ago, from last week, and from last month.

          The two-day-old backup was saved only every third day. So that "2 days ago" drive (drive 1) would be reused on Jan 1 and Jan 2, then shifted over into the "keep" stack every three days, becoming drive 2. Every third "drive 2" became a "drive 3", and every third "drive 3" became a "drive 4". In that way, with only five drives, we had last night, yesterday, last week, and last month.

          • Sounds great. Sometimes people don't realize they have been rooted until a couple weeks later. You might want to hold onto the 5-day-old backup one day per week. Then once per month, reuse three of the four weekly backups, keeping one.

            What Clonebox did was keep a copy from yesterday, from 2 days ago, from last week, and from last month.

            The two-day-old backup was saved only every third day. So that "2 days ago" drive (drive 1) would be reused on Jan 1 and Jan 2, then shifted over into the "keep" stack every three days, becoming drive 2. Every third "drive 2" became a "drive 3", and every third "drive 3" became a "drive 4". In that way, with only five drives, we had last night, yesterday, last week, and last month.

            Sounds like a great plan.
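
            To automate the same idea instead of shuffling drives by hand, here is a minimal retention sketch (snapshot directories are assumed to be named by ISO date under a hypothetical root; the keep counts are adjustable):

                #!/usr/bin/env python3
                # Grandfather-father-son retention sketch: keep the last 2 daily
                # snapshots, the newest snapshot of each of the last 4 weeks, and
                # the newest of each of the last 3 months; mark the rest for reuse.
                import datetime
                import pathlib

                ROOT = pathlib.Path("/backups/prod-server")  # hypothetical

                def keep_set(dates):
                    dates = sorted(dates, reverse=True)  # newest first
                    keep = set(dates[:2])                # dailies
                    weeks, months = {}, {}
                    for d in dates:
                        iso = d.isocalendar()
                        weeks.setdefault((iso[0], iso[1]), d)    # newest per week
                        months.setdefault((d.year, d.month), d)  # newest per month
                    keep |= set(sorted(weeks.values(), reverse=True)[:4])
                    keep |= set(sorted(months.values(), reverse=True)[:3])
                    return keep

                snaps = {datetime.date.fromisoformat(p.name): p
                         for p in ROOT.iterdir() if p.is_dir()}
                kept = keep_set(snaps)
                for day, path in sorted(snaps.items()):
                    print(("KEEP " if day in kept else "REUSE") + " " + path.name)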

    • No backups? Or was it a time bomb that took 2 weeks before triggering, so the clean backups rolled away?

      Probably 'No Backups'. In my experience, time-bomb ransomware is the sort of thing that slows you down, as long as the data lives on the drives in an unencrypted state. Veeam, Altaro, Backup Exec, and others allow file-based recovery. Restoring VMs may end up with the resultant VMs being encrypted thereafter, so it might take a few attempts to realize the pattern and decide to do file-based restores onto freshly built VMs. Even under those circumstances, it should be possible to get core applications and data back to a usable state within a week.

      The fact that it took two months, and they still weren't successful, leads me to believe that the timeline went like this: After realizing they had no backups (or, that their backups were returning a 'success' notification, but with no restoration tests to verify), they decided to pony up. Ransom guys gave them a decryption key, but they needed a second decryption key, because the files were double encrypted (I've had this happen). That was a second ransom, and the higher-ups weren't willing to pay the second ransom, so some data remains encrypted, and they decided to close up shop instead.

      I hate telemarketing as much as the next guy, but most of the telemarketers I know are people who aren't in the best position financially and took the job out of desperation. Nobody makes a killing telemarketing - most don't even earn a living. So, for them to lose their jobs just before Christmas, I feel for them, regardless of how I feel about the vocation itself.

      • No, the more likely scenario is that the company was struggling and this was a convenient excuse to shut it down, "restructure" with 300 fewer employees, and come back later.

      • by nytmare ( 572906 )

        "He makes money at it" is not an excuse for engaging in unethical or criminal activity. It disturbs me how many people in this country share your philosophy.

        Cold-call telemarketing is not a legitimate profession, no matter whether you're a high-level exec or a low-level phone monkey. The good news is that now these people can go get ethical jobs, so it's win-win.

  • This seems pretty strange and doesn't pass the "smell test." I suspect there are details they are leaving out. Anyone else have a hunch there is more going on than a ransomware attack?

    Are they hiding a scam they perpetrated? Is this a coverup of some sort?
    • The synopsis says they paid the crooks for the "keys", but they still weren't able to get back up and running?
    • I suspect there are details they are leaving out

      This caught my eye:

      Speaking with local media, employees said they had no idea the company had even suffered a ransomware attack

      Did these employees use computers? If so, what sort of ransomware attack was it that allowed them to still use their data? Did the management team just chalk it up to their "systems being offline"?

      • by Woeful Countenance ( 1160487 ) on Friday January 03, 2020 @02:17PM (#59583146)

        Indeed, the timing is confusing: "approximately two months ago our Heritage servers were attacked" ... "we were forced to pay the crooks to get the 'key'" ... "data recovery efforts, initially estimated at one week, have not gone according to plan and the company had failed to recover full service by Christmas." So what have their employees been doing all this time? If the data-recovery efforts took more than a week and were unsuccessful, how did the business continue operating? Maybe the data recovery did more harm than good, or maybe their efficiency was reduced enough to make the business unprofitable.

        • Just a hypothetical:

          Perhaps the key issue was making sure that the payments to the company management/owners were sufficiently hidden from bankruptcy proceedings before shutting down, screwing over the creditors, and opening up again under a new name?

      • by cusco ( 717999 )

        I'd be more interested in the opinion of their IT staff than their executives, but generally they're not allowed to talk to reporters. As much as anything, I wonder if the "ransom" was paid to one of the executives' offshore bank accounts.

    • This seems pretty strange and doesn't pass the "smell test." I suspect there are details they are leaving out. Anyone else have a hunch there is more going on than a ransomware attack?

      Most telemarketing firms are just shell companies, which can be folded and bankrupted very quickly in case of legal action against the company. One week later, the owners are back in business running a new company with a different name.

      Quite possibly some sort of legal proceedings were taking place, and the owners are using t

      • by Fringe ( 6096 )

        Most telemarketing firms are just shell companies, which can be folded and bankrupted very quickly in case of legal action against the company. One week later, the owners are back in business running a new company with a different name.

        I hate when slimy internet trolls do this, but "cite, please." Because you, sir, are, at least in my experience, a pathetic posturing liar. I don't believe you have any knowledge whereof you speak.

        I don't like or respect them, but I've worked with, for, and around telemarketing companies. Any LLC can be re-formed quickly, but that's not the same as a shell company. They still have to rent the office, bust their balls to get the contracts, hire the staff, get the H.R. cover, insurance, licenses. Most have

  • The real truth (Score:5, Insightful)

    by oldgraybeard ( 2939809 ) on Friday January 03, 2020 @01:57PM (#59583066)
    Due to the shortsightedness of our CEO, CTO, and executive management, our IT systems were not properly built and secured. We judged the risk acceptable and diverted funds to more pressing company needs.

    Since it will cost more than we are willing to spend to repair things, we are shutting down parts of our business.
    Big takeaway: everyone involved in making the decisions will be cashed out. The employees? Well, they are out of luck. We gambled with their futures and they lost.

    Just my 2 cents ;)
    • by JcMorin ( 930466 )
      They should have just paid the ransom and secured their servers. They tried to be superheroes and failed!
  • On The Brink (Score:4, Insightful)

    by Thelasko ( 1196535 ) on Friday January 03, 2020 @01:59PM (#59583072) Journal
    Sounds like the company was mismanaged and on the brink of bankruptcy in the first place. The ransomware was just the proverbial final shove that knocked over the Coke machine. [youtube.com]
    • One of our competitors trademarked the term "hypothesis". From now on, we will call them "boneheaded ideas"

      Humor reflects reality ;)
    • by DarkOx ( 621550 )

      Well, what is a "telemarketing firm" anyway? Sounds like a couple of servers, some AP/AR folks, a few IT guys, a SIP trunk, and lots of desk phones and cheap PCs, capped off by middle management and an overpaid exec team in a rented building.

      They probably don't keep much more cash on hand than is required to make payroll; the rest they pay out as quickly as possible as bonuses to upper management/owners.

      It's not really "mismanaged" from the owner/management perspective. It is a business with few assets that gener

      • Step 1) If YES, make it so; if NO, just walk away. There is no Step 2.

        Step 3) PROFIT!!!

        • by DarkOx ( 621550 )

          Well, that is the point though. They have probably already reached PROFIT; they are not going to foul that up by trying to bail out a sinking ship. They paid the ransom (so they say), probably because that appeared to meet my "fast and cheap" criteria. When that did not work, they were left with "walk away."

          That does not really bother them. They lost some chump change, a couple hundred grand, which was probably the company's cash reserves. In the meantime, the owners have all made their money. Now they simply move on to the ne

  • by DarkOx ( 621550 ) on Friday January 03, 2020 @02:01PM (#59583080) Journal

    "Most of us are convinced that they're not going to reopen. I'm pretty sure they're just buying time because they know as soon as they're not going to reopen we're going to have to get a settlement and I think they just don't want us to take them to court," the employee told KATV.

    I wonder what this person expects. I think Arkansas is an at-will state, so I don't quite follow why they think they are getting a settlement of any kind for being laid off. They were not fired, so wrongful dismissal is going to be a tough argument when the company can point to a very clear and direct reason for the layoffs. I also kind of doubt the typical non-management employee has any kind of severance package specified in their employment contract.

  • Their website [theheritagecompany.com] is still up and functional...
  • Well, yes. Hacking/cracking costs money -- money that is used to employ people -- which is why it needs to be a real crime with serious prison time.

    • The Computer Fraud and Abuse Act needs to be gutted. Most of what it describes are civil matters, and as such should face civil penalties; yet for the most minor things it attaches hefty sentencing, and for less-minor things it attaches... hefty sentencing, and it hasn't stopped anything.

      Actions which threaten life and limb or breach national security (implied to threaten life and limb) should be criminal.

  • telemarketing firm

    The world's smallest violin is playing.

  • by Tough Love ( 215404 ) on Friday January 03, 2020 @03:10PM (#59583374)

    OK, Windows claims another victim. However, that victim was stupid enough to rely on Windows for critical business infrastructure, so what we are talking about here is pure Darwinism. Too bad about those employees; I'm sure Bill Gates is weeping for them.

  • . . . to better people.

  • Telemarketing company huh? What a shame.
  • One bit. Company failed, came up with a good excuse.
