Monroe College Hit With Ransomware, $2 Million Demanded (bleepingcomputer.com)
A ransomware attack on New York City's Monroe College has shut down the college's computer systems at campuses located in Manhattan, New Rochelle, and St. Lucia. The attackers are seeking 170 bitcoins, or approximately $2 million, to decrypt the college's entire network. Bleeping Computer reports: According to the Daily News, Monroe College was hacked on Wednesday at 6:45 AM and ransomware was installed throughout the college's network. It is not known at this time what ransomware was installed on the system, but it is likely to be Ryuk, IEncrypt, or Sodinokibi, which are known to target enterprise networks. The college has not indicated at this time whether it will pay the ransom or restore from backups while gradually bringing its network back online. "The good news is that the college was founded in 1933, so we know how to teach and educate without these tools," Monroe College spokesperson Jackie Ruegger told the Daily News. "Right now we are finding workarounds for our students taking online classes so they have their assignments."
Re: (Score:3)
Or the backups are on the network, too, or in a cloud somewhere, which could be the case as backups are generally there to fend off crashes or fires.
Ransomware is a new kettle of fish.
Re: (Score:2)
Prevented? How?
It takes a lot more than just deploying WSUS and LAPS; you actually need to understand what you're doing, both designing and managing the network securely and keeping on top of configuration. Do you know exactly how this attack happened?
Re: (Score:2)
Linux has a lot less legacy cruft, it is usually the legacy crap which is targeted by exploits.
Linux is overall a lot less complex, many of the attacks against windows exploit this complexity.
Good security practices are much easier to implement when the system is simpler and you have a better understanding of it.
Linux is overall an even bigger target than windows, but it's also a far more diverse target. There are billions of embedded devices out there running linux, and many of those devices are extremely
More incompetence (Score:5, Interesting)
Ransomware wouldn't be a concern if they had IT administrators who knew how to take care of their data.
Seriously, how many more highly public examples are we going to need before these places figure that out?
Re: (Score:2, Insightful)
"That happens to other orgs, It'll never happen to us" --every administrator everywhere
Re: (Score:1)
Not really. Where do you think unwanted babies and car crashes come from?
Re: (Score:2)
Most are down to human error, but some are down to mechanical failure, animals, environmental factors etc.
Re: (Score:3)
Of course there's no way to 100% protect against ransomware. You'll note I never said there was.
What I said was that ransomware wouldn't be a concern if folks knew how to take care of their data. Let's say you're an admin and you have fully implemented and tested a backup strategy. You routinely test your restores and the historical backups are airgapped.
Ransomware then gets in and cripples your network. Instead of paying the demands, all you'd have to do is take everything down, reload it from your air
Re:More incompetence (Score:4, Interesting)
What you're describing is good practice and excellent mitigation, but it's not a panacea. I had a friend who worked at a company that did exactly what you described. They had a ransomware attack and duly reinstalled and restored using their planned, tested backup strategy, all the while making snide remarks about what it would be like for organisations that didn't have competent IT people like themselves. All was well until the logic bomb in their data went off a week later. Still no problem: they restored from their older air-gapped backup... and discovered that the logic bomb had sat quietly in their system until it was on all the backups, and that there was more than one of them. By this point the downtime and data loss were starting to add up. I think they paid in the end.
Now better OS/software design, good access controls, logs, policies, etc. can certainly help, but all you're really doing is minimising your risk - not eliminating it. And if the attacker is someone who at some stage had trusted access, like a former (or current) IT person, contractor, high level manager, etc., then all bets are off.
Re: (Score:3)
Agreed; you can never eliminate risk. Moreover, what you are describing sounds like a targeted attack, which by definition is both harder to mitigate and less common. There are a ton of security principles you can implement to minimize risk, but more specifically related to backups: there are things you can do to your backups that can help expose these issues the moment they begin. Think: honeypots.
I've written some custom solutions for myself that have done just this. Caught some malware corrupting the user's prof
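To make the honeypot idea above concrete, here is a minimal sketch of a canary-file watcher, assuming you plant a few decoy files that no legitimate process should ever touch; the paths and the alert action are placeholders, not any particular product:

```python
import hashlib
import time
from pathlib import Path

# Hypothetical canary locations; real deployments would scatter these
# among ordinary user data so ransomware trips over them early.
CANARIES = [Path("/srv/shares/finance/do_not_open.xlsx"),
            Path("/srv/shares/hr/payroll_old.docx")]

def fingerprint(path: Path) -> str:
    """Return a SHA-256 digest of the file's current contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def watch(interval: int = 60) -> None:
    baseline = {p: fingerprint(p) for p in CANARIES}
    while True:
        time.sleep(interval)
        for p, digest in baseline.items():
            try:
                changed = fingerprint(p) != digest
            except OSError:
                changed = True  # deleted or renamed counts as tampering too
            if changed:
                # Placeholder response: page an admin, isolate the host,
                # and pause backup rotation so clean copies are preserved.
                print(f"ALERT: canary {p} was modified or removed")

if __name__ == "__main__":
    watch()
```

The useful property is that a canary firing tells you roughly when the corruption started, which is exactly what you need to know when deciding how far back to restore.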
Re:More incompetence (Score:4, Interesting)
"Let's say you're an admin and you have fully implemented and tested a backup strategy. You routinely test your restores and the historical backups are airgapped."
Let's say that admin is you, because you have all that, right? And you're not 'fucking incompetent', right? How stale is your air-gapped backup? How routinely do you test the restores? How do you validate the restore to ensure nothing is missing or corrupt?
"Ransomware then gets in and cripples your network. Instead of paying the demands, all you'd have to do is take everything down, reload it from your air gapped backups, bring everything back online ( after identifying the security issue and addressing it, of course ). If everyone did that, how prevalent do you think ransomware would be?"
Equally prevalent.
Ok... Tell me, what exactly is your defense against someone maliciously modifying the backup scripts and rulesets on one of your critical servers so that they are less effective than you think? A few file/folder exclusions and you aren't backing up some critical data that you should be, or perhaps they modify the encryption keys. Then they wait 2 weeks, or 4 weeks, to hit you with the ransomware; perhaps they even time it with your vacation. (Let's say they've breached your mail server with admin rights... so they can do all sorts of nasty stuff -- intercept password resets to breach various other online accounts, disable 2FA, set up additional admins, and see when you are taking some time off...)
Now, even if the backups from 15 or 30 days ago are pristine, do you trust them? I wouldn't. Maybe the hackers have been inside even longer. I'm not doing a full system restore from those backup images and handing them access back. I'll recover the raw data... that's it, but I'm rebuilding the systems.
So now you've got to pave your network, rebuild from scratch, and restore the last good data you've got, which from testing is, say, 2 weeks old. How much will it cost to recreate the missing 2 weeks? 4, 5, 6 figures? All the customer orders you received for the last 2 weeks: gone. All the output from the engineering team: gone. The latest revisions to all the architects' drawings... they were supposed to be delivered in two days... gone. All the research and drafts of motions to file that legal was working on, pertinent to active litigation -- the last 2 weeks is gone.
Maybe even you, Mr. Not-Fucking-Incompetent, will pay the ransom to get the data fresh from yesterday, because it's FAR cheaper than recreating it. Especially if you've got insurance.
We're not talking about some little ransomware trojan horse that got in and started running amok on the network; that's easy to deal with by comparison. How do you secure your backups from an actual breach executed by pros?
"On the large scale ransomware is only effective because most admins are fucking morons. It wouldn't be worth the time/risk otherwise."
Sure, there are lots of script kiddies out there spamming little ransomware droppers and other malware, but there's plenty of money in it for pros. And... what risk, exactly?
Re: (Score:2)
You should realize that if you thought of all that in two minutes, I've already thought of it and modified my backup routine to accommodate it. Where the backup sets are small enough, I hash the files and validate the hashes on restore. Where that's too cumbersome, I do spot checks. I have automated restore tests in an isolated system, as well as manual tests once a month. Backups are taken offline anywhere from 1 day to 1 week after creation.
But that's just the backup system. Your post ranged more int
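For anyone wondering what the hash-and-validate step above might look like in practice, here is a rough sketch that writes a digest manifest next to a backup set and re-checks it after a test restore; it assumes plain files on disk rather than any particular backup product:

```python
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Digest a file in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(backup_root: Path, manifest: Path) -> None:
    """Record a digest for every file in the backup set."""
    entries = {str(p.relative_to(backup_root)): sha256(p)
               for p in backup_root.rglob("*") if p.is_file()}
    manifest.write_text(json.dumps(entries, indent=2))

def verify_restore(restore_root: Path, manifest: Path) -> list[str]:
    """Return files that are missing or whose contents changed."""
    expected = json.loads(manifest.read_text())
    return [rel for rel, digest in expected.items()
            if not (restore_root / rel).is_file()
            or sha256(restore_root / rel) != digest]
```

The manifest only proves anything if it travels with the air-gapped copy; an attacker who can rewrite the backups on the live network can rewrite a manifest stored next to them just as easily.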
Re: (Score:3)
Having the backups pulled can be dangerous in itself: what if the backup server gets compromised? Now the attacker has the ability to pull any data from your servers they wish, and this approach also requires a service running and listening with sufficient privileges to access all the data (i.e., running as root and listening on the network).
Pushing backups can also be hardened, if implemented in such a way that you can only send data and cannot trigger the modification of existing data. The target server can handle different
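A minimal sketch of that "send-only" push model, assuming clients may add new backup blobs but may never overwrite or delete what is already on the backup host; the paths and naming scheme are made up for illustration:

```python
import os
from datetime import datetime, timezone
from pathlib import Path

BACKUP_ROOT = Path("/backup/incoming")  # illustrative location on the backup host

def store_backup(client: str, name: str, data: bytes) -> Path:
    """Accept a new backup blob; refuse to touch anything that already exists."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = BACKUP_ROOT / client / stamp / Path(name).name
    dest.parent.mkdir(parents=True, exist_ok=True)
    # O_EXCL makes the create fail instead of overwriting, so a compromised
    # client can only add history, never rewrite or erase it.
    fd = os.open(dest, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o440)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return dest
```

Rotation and deletion then happen on the backup server, on its own schedule, under credentials the pushed-from hosts never hold.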
Re: (Score:2)
"What you are describing is a targeted attack which is far less likely"
No. I'm just describing a 'manned' attack of opportunity, vs purely automated malware/virus/script.
I've seen a couple now. The businesses were not "targeted"; they were breached via random malware droppers through untargeted phishing/ads/whatever, but once a beachhead was made it was manned from there to do the most damage possible.
"You don't have super admins that have full access to all systems, and you have teams perform security audits on other teams' servers. You can get crazy with this stuff"
What you are describing is only possible in large enterprises. What do you suggest for the SME segment, where IT teams are small, augmented by contractors and MSPs, or entirely outsou
Re: (Score:2)
Nothing is perfect, but a good strategy can make attacks far more difficult.
You have file integrity monitoring for the backup scripts and other core configuration files.
Have central log collection going to systems which are managed entirely separately, to minimise the risk both will be compromised at once.
Take the same approach with backup, you can push backup data to separately managed backup servers, but the rotation of old backup data is not triggerable by the servers being backed up so data cannot be wi
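The central-logging point can also start very small; for example, Python's standard syslog handler pointed at a separately managed log host (the hostname below is made up), so an intruder on the originating box cannot quietly rewrite the record of what happened:

```python
import logging
import logging.handlers

# Hypothetical separately managed log host; plain UDP syslog for brevity.
handler = logging.handlers.SysLogHandler(address=("loghost.example.internal", 514))

log = logging.getLogger("backup")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("backup run started")      # events leave the box as they happen
log.info("backup run completed ok")
```

The same idea applies to the file integrity monitoring mentioned above: keep the known-good state somewhere the monitored host can read but not write.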
Re: (Score:2)
"Nothing is perfect, but a good strategy can make attacks far more difficult."
I'm most interested in solutions that fit the SME world. It's easy to come up with layers on layers of overlapping independent teams for companies with data centers' worth of IT infrastructure.
A retail chain I work with, for example, has 50 locations and 6 servers. There simply is no business case to have 'separate teams' and 'separate systems' in place to manage that. Adding a central logging server... well, great, but what if THAT goes down? Now you need a redundant one... and that needs to be backed up. You end up
Re: (Score:2)
"Instead of paying the demands, all you'd have to do is take everything down, reload it from your air gapped backups, bring everything back online"
I helped recover from ransomware on a large network before. It infected the systems' UEFI firmware. The motherboard manufacturer had an online update system but then disabled it, and some hackers got hold of the IP address. The network became infected the first time a UEFI needed an update from the "official" servers.
During recovery, every time we cleaned a system and put it on the network, it would get reinfected by the other systems. It took a month to recover. Everything had to be taken physically off the network and individually re-imaged.
Is two million dollars worth recovering in a few hours instead of a few weeks? Maybe.
Yeah, I know it's an ego boost to shit on specialists, particularly security specialists. However, remember that their careers are on the line and they do this every day. A few sentences on Slashdot from a non-specialist isn't going to have any insightful advice.
Is it your position that they should have paid the ransom then kept the compromised systems in place?
I would hope that's not what you're suggesting because it's fucking stupid. One way or another you would have ended up wiping and reloading the systems ( and it sounds like the systems had direct access to the internet, which is yet another fucking stupid mistake ), so your example doesn't exactly mean anything.
Re: (Score:2)
I never said they shouldn't have internet access, you made that up all by yourself.
Let's see if you can figure out what I did say instead that you got confused about. Bonus points if you can tell me why that's different from what you thought I said.
Re:More incompetence (Score:5, Insightful)
Re: (Score:2)
Idiocy is multi-cultural; skin color or gender isn't a determining characteristic.
Re: (Score:2)
It isn’t just a function of budget and competence. You have stupid things like computer labs that need access to campus resources, a multitude of cloud services, and individual departments that often get “managed service” from a central IT entity, along with self-managed systems that are often a necessary evil.
The department level disaster planning must be like herding cats, and the IT department likely focuses much of their energy on the resources that generate revenue for them.
Ultimately
Re: (Score:2)
...and nothing is happening to make it easier.
The starting point is to not use an operating system that automatically runs and spreads arbitrary programs with administrator-level access just because someone clicked on a file in a fucking email!
People never seem to learn. The first major step in prevention is to ban Windows from the network, and put Linux in its place. Actually, the first major step is to stop using proprietary software for important operations. Then ban Windows.
Re: (Score:2)
Good luck with that.
On a secure network you can impose certain restrictions that grossly limit productivity, but you can’t pull that off on a general purpose network.
Re: (Score:2)
The problem is absolutely that it's a general purpose computing device, a highly complex piece of equipment which requires specialist skills to operate. People with those specialist skills are always going to be a tiny minority, and those skills are often mutually exclusive with other skills you may want people to have.
So you've basically two choices:
1, ensure you only hire people sufficiently skilled to operate the complex equipment - hire a network security specialist as your receptionist, paying them a sa
Re: (Score:2)
These infestations are happening due to people with little/no technical skills being given a complex tool designed for highly technical people to use.
Very few infestations are happening on chromebooks or ios devices, because these systems are better designed for the typical user.
Why does the receptionist even need a system which allows her to download and run random executable code? She has absolutely no legitimate business requirement to do that.
Re: (Score:2)
If they weren't using bitcoin, the criminals would use something else, like Western Union or prepaid cards.
Most criminal transactions take place using cash, should we ban that too?
Once you start paying the Danegeld (Score:2)
This is what happens when folks start to pay the Danegeld. It suddenly becomes a good investment for the criminal groups and 100x more saps get hit with the ransomware down the road.
Re: (Score:2)
"It suddenly becomes a good investment for the criminal groups and 100x more saps get hit with the ransomware down the road."
This isn't all bad. Think of it as "outsourcing security testing".
Trump making America weak (Score:1)
Why hasn't Trump attacked the cities these bitcoin crooks live in?
seeking 170 bitcoins or approximately $2 million (Score:2)
Cheese and rice.... As a longtime Bitcoin doubter, I wish I could properly kick myself in the ass for not investing $10K in the Ponzi scheme when /. first started reporting on it.
Re: (Score:1)
We'd all be billionaires if we hadn't trusted our instinct that Bitcoin is a scam and just started mining when the first story hit /.
But we'd also be billionaires if our parents had bought us Apple stock instead of Macs in 1984.
The point is ... hindsight is 20/20. Don't bother kicking yourself. You've missed tons of opportunities to become a billionaire, and you'll miss a lot more.
Ransom-vs-Ransom (Score:2)
Monroe is a NYC for profit paper mill (Score:1)
This is a for-profit college in NYC that preys on minority and foreign exchange students and provides them with worthless "degrees". A real old-fashioned paper mill.
Is everything connected to the internet untenable (Score:2)
Some things are simply undefendable long term. A wall-less city on an open plain is one, and any system connected to the internet is another. There are too many unknown points to defend against. There are too many people that have to be counted on to never make a single mistake.
I'm beginning to think that the idea of a network where anyt
Comment removed (Score:3)
Re: (Score:3)
Seriously.