Why Sharing Ransomware Code For Educational Purposes Is Asking For Trouble (betanews.com) 67
Mark Wilson writes: Trend Micro may still be smarting from the revelation that there was a serious vulnerability in its Password Manager tool, but today the security company warns of the dangers of sharing ransomware source code. The company says that those who discover vulnerabilities need to think carefully about sharing details of their findings with the wider public as there is great potential for this information to be misused, even if it is released for educational purposes. It says that 'even with the best intentions, improper disclosure of sensitive information can lead to complicated, and sometimes even troublesome scenarios'. The warning may seem like an exercise in stating the bleeding obvious, but it does serve as an important reminder of how the vulnerability disclosure process should work.
Re: (Score:3)
There's a big difference between a private company urging people to think about the potential damage that could come from releasing information improperly and the government passing a law that makes it illegal to release information.
Re: (Score:2)
Not really. The company just pays off the government to do its dirty work. Why else would it be made illegal? The government is performing a service.
Re: (Score:1)
Merely responding to your post.... Maybe you should take your own advice there?
Re: (Score:2)
I'm not the one who was making paranoid, delusional claims that the government made something illegal when absolutely nothing in the summary or article could possibly lead to that conclusion. The person who did that was you. Perhaps you shouldn't be so quick to dismiss my advice.
Re: (Score:1)
:-) You win the internet... But just for shits and giggles, reread the post of yours I initially responded to.
For your convenience [slashdot.org]
I won't take it personally, it's just a thing, you know?
TNX
Re: (Score:2)
Perhaps you should re-read the posts.
The article is about TrendMicro urging people to consider the damage that could come from releasing security vulnerability information improperly, not about the government making it illegal.
The AC I originally responded to posted about prior restraint (censorship imposed on expression before the expression takes place), but no censorship is being imposed here. I pointed that out in my post.
You responded that the company just pays the government off to make it illegal a
Re: (Score:1)
:-) You just couldn't accept your prize quietly, eh?
Corporate greed and stupidity is the only problem (Score:5, Insightful)
Most people who find vulnerabilities want to tell the manufacturer. But after a long history of being ignored or even threatened, many have resorted to giving the corporations responsible a fixed, short time to fix things, because otherwise nothing happens. Giving them more time just makes them drag their feet, because fixing vulnerabilities costs money. Those complaining here are at the very root of the problem. I should also point out that this corporate fuck-up has been going on for a few decades now.
Re:Corporate greed and stupidity is the only probl (Score:4, Insightful)
Most people who find vulnerabilities want to tell the manufacturer. But after a long history of being ignored or even threatened, many have resorted to giving the corporations responsible a fixed, short time to fix things, because otherwise nothing happens. Giving them more time just makes them drag their feet, because fixing vulnerabilities costs money. Those complaining here are at the very root of the problem. I should also point out that this corporate fuck-up has been going on for a few decades now.
You're confusing the goal with the process.
More secure software is the goal.
If a temporary process of punishing a product's users by spreading details on how to hurt them is deemed necessary in order for a company to "start treating security seriously", then that's an argument one might make.
If a company is (arguably) already treating security reasonably seriously, then spreading details on how to hurt their customers does not achieve anything. It just spreads misery.
"For educational use" is as ludicrous and beside the point as "for backup purposes only" was for Hotline servers 15 years ago. If the company has or is in the process of acting reasonably fast, actually spreading the details (as opposed to threatening to spread the details) on how to hack someone just makes you a d-bag whose name will be cursed alongside that of the script kiddie who uses your info to hack someone.
Re:Corporate greed and stupidity is the only probl (Score:5, Interesting)
If a company is (arguably) already treating security reasonably seriously, then spreading details on how to hurt their customers does not achieve anything.
That kind of assumes there aren't malicious people already exploiting the bug.
Sometimes it's better to let people know so they can defend themselves: either by closing a port, changing a configuration, turning off a service, fixing the bugs themselves and recompiling, or switching to another software system.
Of course, corporations don't like the last two options, but being able to recompile is a very real benefit of open source software.
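To make the first of those defensive options concrete: a minimal sketch in Python (the port list is hypothetical, not from any advisory) of checking which common service ports a host is accepting connections on, so you know what to turn off or firewall:

    import socket

    # Hypothetical list of commonly exposed service ports to audit.
    PORTS = (21, 23, 80, 443, 3389)

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex() returns 0 if something is listening.
            listening = s.connect_ex(("127.0.0.1", port)) == 0
            print(f"port {port}: {'open' if listening else 'closed'}")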
Re: (Score:2)
If a company is (arguably) already treating security reasonably seriously, then spreading details on how to hurt their customers does not achieve anything.
That kind of assumes there aren't malicious people already exploiting the bug.
Sometimes it's better to let people know so they can defend themselves: either by closing a port, changing a configuration, turning off a service, fixing the bugs themselves and recompiling, or switching to another software system.
Bullshit. Spreading details on how to protect yourself is not the same as providing an exploit. In some cases, the exploit is so trivial to deduce from the mitigation that there's no real way to avoid revealing it -- in most cases, however, it's not.
End users won't be recompiling firmware in their car, and in many or most cases of security bugs, the exploit *IS* the start of widespread use.
* Step 1: Someone announces a bug
* Step 2: Vendor/discussion/patch cycle/analysis begins
* Step 3: Some asshat releases an expl
Re: (Score:2)
Step 4: Now my boxes are actually getting exploited, and they mostly weren't before.
You hope.
Re: (Score:2)
Step 4: Now my boxes are actually getting exploited, and they mostly weren't before.
You hope.
That's orthogonal. (It's also, in many cases, verifiable for web-based exploits. That's what logs are for.)
"Here's a string to look for, and a mitigation strategy until you can patch" or "disable Bluetooth in your car adapter" is still not the same as "here's a script to hack in".
Re:Corporate greed and stupidity is the only probl (Score:5, Interesting)
No, selling stuff is the goal of many places that among other things care very little or not at all about security. Your bit about "If a company is (arguably) already treating security reasonably seriously" is very much the exception instead of the rule. I've reported gaping security holes that were left open for years and they were not taken seriously because nobody on the outside had been caught exploiting them - and that was on a cash handling system FFS!
I don't condone those making the bugs public but I can see why they do it. Reporting a serious security problem to some places can both land the reporter in deep shit and still result in nothing being done to fix the actual problem. Management in such places sees taking action against the reporter as the complete solution to the problem. Their reaction to an open farm gate would be to shoot each cow on the way out instead of shutting the gate.
Re: (Score:2)
I agree with what you said but I'd like to add to this part.
No, selling stuff is the goal of many places that among other things care very little or not at all about security
The people in charge of technology are responsible for selling the idea of security to the ownership. For those who have not been able to, you need to do the following (this is very basic and can be added to depending on the size of the company); a rough sketch of the idea follows the list.
1. Identify all security concerns, rate them by severity, estimate their probability, draw up a solution, and attach a cost to it
2. For each security concern list the potential damage. Some of the damages may not have a ha
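A minimal sketch of what steps 1 and 2 boil down to, as a risk register you can show the ownership (all names and figures below are made up for illustration):

    # Each entry: severity rating, yearly probability, damage if it
    # happens, and what fixing it would cost. Figures are invented.
    risks = [
        {"issue": "unpatched web server", "severity": "high",
         "p_per_year": 0.30, "damage": 500_000, "fix_cost": 20_000},
        {"issue": "shared admin password", "severity": "medium",
         "p_per_year": 0.10, "damage": 150_000, "fix_cost": 2_000},
    ]

    for r in risks:
        r["expected_loss"] = r["p_per_year"] * r["damage"]

    # Rank worst-first; anything whose expected loss exceeds its fix
    # cost is an easy sell to the ownership.
    for r in sorted(risks, key=lambda r: r["expected_loss"], reverse=True):
        print(f"{r['issue']}: expected loss ${r['expected_loss']:,.0f} "
              f"vs fix cost ${r['fix_cost']:,}")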
Re: (Score:1)
You're confusing the goal with the process.
More secure software is the goal.
If a temporary process of punishing a product's users by spreading details on how to hurt them is deemed necessary in order for a company to "start treating security seriously", then that's an argument one might make.
If a company is (arguably) already treating security reasonably seriously, then spreading details on how to hurt their customers does not achieve anything. It just spreads misery.
I see it as an investment.
I don't want companies that release buggy code to be able to silently cover it up. I want them to get burned enough to change the way they work with software.
If they keep releasing exploitable code, the damage should be maximized to drive as many customers as possible away from them.
Eventually that will lead to commercial software development maturing.
A few companies might be run out of business in the process but in the end we will be better off. With some luck flash will be killed in t
Re: (Score:2)
You're confusing the goal with the process.
More secure software is the goal.
If a temporary process of punishing a product's users by spreading details on how to hurt them is deemed necessary in order for a company to "start treating security seriously", then that's an argument one might make.
If a company is (arguably) already treating security reasonably seriously, then spreading details on how to hurt their customers does not achieve anything. It just spreads misery.
I want customers to get burned enough to change the way companies work with software.
If I keep releasing exploits, the damage should be maximized to drive as many customers as possible away from them.
Eventually that will lead to commercial software development maturing.
A few companies might be run out of business in the process but in the end the Giant Leap Forward will make us better off. With some luck some software I don't like will be killed in the process.
Hopefully they will also learn that storing sensitive data like the customer's credit card information is a bad design choice. Enough harm needs to be done to innocent third parties who patronize companies I dislike to teach my political enemies or some shit not to store vital information like that.
FTFY.
Re: (Score:1)
This stuff doesn't apply to just software; business has always been that way. In fact, humans have always been that way. People don't want their weaknesses exposed, because other people _will_ take advantage of them. A business that covers it up is one thing; a business that doesn't know about it, and someone outright goes and tells everyone, "Hey, do this to totally screw everyone in the country over financially and all sorts of shit happens," is another.
I'd rather a business cover shit up, than have peop
Re:Corporate greed and stupidity is the only probl (Score:4, Insightful)
>Businesses don't just sit on their ass and let defects sit around, and security holes they know of wide open. They fix them. If, and when, they can.
Is this the case, though? We've seen Microsoft, Google, and others take the approach of "I won't fix it until it's discovered" or worse, "I won't fix that at all." (See many /. stories for examples.) Or say something closely related like "well then you better upgrade to Windows 10" which on its face seems reasonable but ....
Re: (Score:3)
"Yea but we're talking about Ransomware aren't we?"
Yes: we are talking here about ransomware AAAAND... the standard corporate director who can't tell a computer from a microwave: "We need to share knowledge 'how malware works', but sharing 'sample code' is not needed for that.".
Yes sure: I'm a corporate exec, I don't have to know the petty details, do I? Just show me a pretty powerpoint.
We actually don't WANT better ransomware (Score:2)
You've made a cogent, though slightly misguided argument for publicizing information about software that should be improved. This article, however, is about RANSOMWARE - software written by the bad guys, who use it to do bad things. We don't WANT better ransomware. We don't want to show the bad guys how to be more effective bad guys.
When you discover a flaw in a family of ransomware which allows you to retrieve the keys and decrypt the files which are held hostage, there is an argument to be made for rel
Re:We actually don't WANT better ransomware (Score:5, Informative)
Well, it seems to me that two things are likely true:
1) Making malware code public helps malware programmers (current and aspiring) write better malware programs.
2) Making malware code public helps anti-malware programmers (current and aspiring) write better anti-malware programs.
Who benefits more? I honestly don't know. However, my bias is towards openness over secrecy, and I think it needs to be demonstrated that the risks of making malware code public outweigh any potential benefits.
Re:We actually don't WANT better ransomware (Score:4, Interesting)
1) Making malware code public helps malware programmers (current and aspiring) write better malware programs.
This request is specific to ransomware, not generic malware. Anyone with poor ethics can deploy either, but ransomware has the potential to make an irreversible impact on victims. Yes, malware can reformat a drive and wipe data, but ransomware provides greater motivation to attackers because of the potential for direct profit.
2) Making malware code public helps anti-malware programmers (current and aspiring) write better anti-malware programs.
Anti-malware code is a specialized field, and there are fewer than 50 companies that have much market share. Entry into this field is a high bar, requiring the trust of many people. Even then, many of the products are of poor quality, and/or have their own unethical behavior. An aspiring anti-malware author will have much greater difficulty breaking into the field than an ordinary app developer. There isn't much of a market for specialized anti-ransomware.
Who benefits more? I honestly don't know. However, my bias is towards openness over secrecy, and I think it needs to be demonstrated that the risks of making malware code public outweigh any potential benefits.
Publishing the ransomware code creates very specific risks. If perfectly executed, ransomware results in absolute hijacking of the user's data. But as we know from legions of flawed security software, writing perfect code and implementing cryptographic algorithms perfectly is very difficult. Recent ransomware made the news because it was imperfect, allowing investigators to recover the encrypted data for all clients without paying the extortionists. The fear is that publishing the ransomware code will give a working example of properly executed encryption that researchers can't break.
You also have to consider how anti-malware code typically works. Much of it is still signature based, meaning that a working copy of the code can simply be tweaked or recompiled to evade signature detection, and the recompiled code will remain effective. Source code won't help the anti-malware authors much.
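To illustrate the signature point, a minimal sketch of hash-based detection (the file name and "known bad" digest below are hypothetical) -- any recompile or single-byte tweak produces a completely different digest, so the match quietly fails:

    import hashlib

    # Hypothetical database of known-bad SHA-256 digests.
    KNOWN_BAD = {"2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae"}

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Flip one byte in suspect.bin (or recompile it) and the digest
    # changes completely, so this signature check misses.
    if sha256_of("suspect.bin") in KNOWN_BAD:
        print("signature match")
    else:
        print("no signature hit")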
So overall, publishing the code will greatly benefit the attackers, and will be of only marginal benefit to anti-malware authors. It is hoped that anyone in possession of ransomware source code already understands these points, and will not feel compelled to release the code for "noble purposes", as there would be virtually no nobility in the gesture.
If you are still interested in how ransomware works, I would recommend "Malicious Cryptography: Exposing Cryptovirology", by Drs. Young and Yung (Wiley, 2004.) This book was one of the first scholarly works on ransomware. You don't need the source code to learn about it.
Re: (Score:3)
Anti-malware code is a specialized field, and there are fewer than 50 companies that have much market share. Entry into this field is a high bar, requiring the trust of many people. Even then, many of the products are of poor quality, and/or have their own unethical behavior. An aspiring anti-malware author will have much greater difficulty breaking into the field than an ordinary app developer. There isn't much of a market for specialized anti-ransomware.
This is a request from one of those companies. It is in that company's interest to keep out as many competitors as it can. I am not saying they are wrong, just saying they have a conflict of interest.
As for trust, I have no real basis to trust these companies any more than any other. Any software that runs on my machine may potentially compromise it. Name one sector of software companies that has more than 50 companies with much of a market share? Maybe games? Even still, it's not unheard of for games to instal
Re: (Score:2)
My point was only that publishing this code isn't likely to benefit anyone, even those who have an interest. "Legitimate" anti-virus companies aren't likely to need it, because they generally deal with the binary code anyway. If there are a few such companies that could benefit from it, the code could be made available to them via special arrangement instead of a public publishing process. It certainly doesn't have to be an exclusive deal; if you think Symantec, ESET, Kaspersky, and Trend Micro are all l
Re: (Score:2)
I've made the case forever that knowing about bad things doesn't make you a bad person, and this can be applied to just about any field, not merely malware.
Researching serial killers, for instance, doesn't mean you're going to turn into one. It may just be that you're fascinated by the psychology of what makes a person go bad. Sure, it may inspire a copycat or two, but the price you pay for freedom is the chance that bad things might happen.
All I know is there will be more serial killers regardless of whe
Re: (Score:2)
You also withhold knowledge from other security researchers who could build upon the work in the case of other malware. The only ones being helped by withholding this info are the malware writers.
Re: (Score:2)
This is about ransomware, but not about finding faults in it; it is about not releasing the source code of that ransomware, so other people can't make better versions of it. Just like you don't want to release detailed instructions on how to make a nuclear bomb.
On one hand I can see the point, but understanding how ransomware works may also be useful in protecting yourself against it.
As someone who creates that protection, not needed (Score:2)
> On one hand I can see the point, but understanding how ransomware works may also be useful in protecting yourself against it.
I see both sides too. As someone who authors protective software for a living (when I'm not farting around on Slashdot), I do enjoy understanding how it works. Mostly I want to understand how the OTHER malware that exploits the system and gains access with which to run the ransomware works. To do that, to write the software which protects you, I don't need the source code of the
Re: (Score:2)
After thinking about this, what ransomware does (I think -- I have not written any) is basically encrypt your data, and there is already open source code to do that. The biggest problems are how to get the money without being traced (being located in some third-world country would help), and which files not to encrypt: if you make the computer unusable, there would be no way to demand your money if the computer simply did not start up. In fact, why would you need to encrypt it at all? Just replace it with ran
M$ Ransom Sucks! (Score:2, Funny)
Open source ransomware is much better than the proprietary shit.
Re: (Score:2)
I LOL'ed.
(Maybe you should heed your own advice, killjoy.)
are you kidding me?! (Score:5, Interesting)
Martin Roesler, Trend Micro Senior Director for Threat Research says...
We need to share knowledge that creates understanding about potential damage, but not the ability to create it. We need to share knowledge about 'how exploits work', but not 'how to make use of them'. We need to share knowledge 'how malware works', but sharing 'sample code' is not needed for that.
I wouldn't consider him a reliable source considering he allowed them to write a password manager in JavaScript.
Re: are you kidding me?! (Score:3)
Don't blame Javascript or Node ... read (or listen to Gibson narrate) the email exchange between Google and TM on their vulnerabilities - it's enough to make you want to never trust a word out of TM for the rest of time. It's astonishing how bad the product was and even more astonishing how they handled it. No, worse.
Re: (Score:2)
This is the second story in as many days... (Score:4, Interesting)
This is the second story in as many days arguing for limitation of disclosure for an indeterminate period. The first was the story lauding GM for doing the same, when it made its list of the types of disclosure for which it will not go after you legally.
You have to put a clock on these things; the only thing a company executive cares about is keeping the board happy, and the only thing that the board cares about is fiduciary responsibility to the stockholders, including themselves and the company executives.
This is what we incentivize with how we have built these systems to operate. And it incentivizes behaviours which are not in a customer's/consumer's best interest, in most cases.
If someone had come up with the GM ignition problem as a potential disclosure, and then given them a three-month clock to public disclosure, it would have been handled through a rather immediate recall. Instead, it was handled by accepting the lawsuit payouts as a "cost of doing business": the company determined that the highest actuarial benefit was to simply eat those costs while imposing gag orders, rather than taking the more expensive option of fixing all the ignitions in all the vehicles. It was less expensive, overall, for the company to let some people die so that it could make a marginally higher profit.
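To spell that incentive out with made-up numbers (the thread cites no actual figures), the decision reduces to comparing two costs:

    # All figures hypothetical, purely to illustrate the incentive.
    recall_cost = 30_000_000          # fix every ignition
    expected_lawsuits = 200           # suits expected without a recall
    avg_payout = 100_000              # average settlement per suit

    litigation_cost = expected_lawsuits * avg_payout   # 20,000,000
    choice = "recall" if recall_cost < litigation_cost else "eat the lawsuits"
    print(choice)   # -> "eat the lawsuits"

A disclosure clock changes the calculus by adding reputational and regulatory costs to the "eat the lawsuits" branch.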
While I doubt that many software vulnerability disclosures will result in deaths, the same principle holds true. Both GM and Trend Micro would like a priori restrictions -- one through veiled threat of legal action, with a bounty carrot, and one through guilt shaming for those who disclose.
Responsible disclosure is really the only ethical -- and moral -- option.
Put a clock on it. Always.
Re: (Score:2)
With open source software, you can release a patch with the disclosure. Yet another reason to favor open source over closed.
I use responsible disclosure for open source (Score:5, Interesting)
Many of the open source projects I'm involved in use a responsible disclosure model. It has worked very well, getting most users patched a few days or a few hours BEFORE the bad guys knew how to exploit it, rather than soon AFTER they got exploited. I'll use as two examples issues I found in Wordpress and PowerDNS (used by Wikipedia and other large sites).
I found an issue with Wordpress and opened a security ticket describing the issue and my proposal for a solution. As a security ticket, it was initially visible only to the security team. Over the next 24 hours or so, it was discussed and consensus was developed regarding the right solution. Over the next 24 hours, it was tested and (quietly) pushed to the repository. On the third day, everyone who had Wordpress set to automatically update got the fix, and admins with many, many Wordpress users, such as Wordpress.com, were notified. So maybe 80%-90% of the Wordpress users had the update on day three. On day 4, the information became public - 24 hours AFTER the updates had already happened.
Note it took a couple of days between the time the patch was ready and the time most users were protected. Had we released the patch and the information together, that would have been a day or two that the bad guys could have infected servers with persistent malware.
PowerDNS was similar, except distros needed time to compile and package the fixed version. So the issue was discussed privately, and the fix tested. Had the vulnerability been public, someone would probably have used it to take down Wikipedia, so Wikipedia was notified of the fix along with a few other very large sites. While Wikipedia was patching, Redhat, Debian, and the other distros were preparing updated packages for their users. This was roughly day three. On the morning of day #4, Debian mailed their users to let them know that a security fix was available, with information about the vulnerability - AFTER the update was already available from Debian's servers, which was a day or two after the source patch was privately distributed to the appropriate people.
Something else happened that day too. About an hour after the Debian security alert email went out, I had a job interview. When I told the interviewer I worked mostly with Red Hat systems, he seemed disappointed. The conversation continued:
"We use Debian. Do you know anything about Debian?", he asked.
I replied "did you see that Debian security alert about an hour ago?"
"Yeah, this one right here?" he said as he opened the email.
Looking at the first line of the email, he saw it said "Ray Morris discovered a vulnerability ..." :)
Suddenly he seemed less concerned about my knowledge of Debian.
Re: (Score:2)
"We use Debian. Do you know anything about Debian?", he asked. ..." :)
I replied "did you see that Debian security alert about an hour ago?"
"Yeah, this one right here?" he said as he opened the email.
Looking at the first line of the email, he saw it said "Ray Morris discovered a vulnerability
Suddenly he seemed less concerned about my knowledge of Debian.
That is awesome. You are awesome.
Re: (Score:2)
[...]Suddenly he seemed less concerned about my knowledge of Debian. :)
Slashdot really needs a "like" system, separate from the moderation system. It wouldn't impact your ability to moderate, and it'd allow you to like something that was posted as a response or in a thread in which you posted... :^)
Re: (Score:2)
Slashdot really needs a "like" system, ...
Perhaps it could influence the system allowing much "liked" users to have a (slightly) greater chance of getting mod points?
I am sure it could be abused someway though.
I would hope it would function as a moderator encouragement, actually, although on an article by article basis, rather than on a user basis. Getting mod points tends to make me hesitant to post in an article commentary, unless I'm certain I don't want to moderate in it. I wouldn't want it to replace "karma" in this regard.
I guess to be effective, it would also need a "dislike", so that the influence couldn't be gamed, even though it's an influence through a human with mod points, rather than directly.
Looks like TrendMicro's playing the blame game (Score:1)
Question on cryptolockers (Score:2)
I have a simple question about cryptolockers, if any specialist could give some insights, that would be great.
Suppose you generate a big file of pure white noise (several GB), give it a video extension, and store it carefully on both your hard drive and a USB key kept in your safe. When you get the cryptolocker, you then have both the original file and its encrypted version. Wouldn't that be sufficient to recover the encryption key? How big would the file need to be in order to allow breaking the key? How m
Re: (Score:3)
A cryptosystem that allows inferring the secret key (necessary for encryption/decryption) from plaintext+ciphertext with less-than-brute-force effort is considered broken. I'm guessing that successful cryptolockers use non-broken encryption. So no, having a plain and encrypted version of the same file is not enough to undo cryptolocker damage.
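To make that concrete, a minimal sketch (Python, using the third-party "cryptography" package) showing that even with a known plaintext/ciphertext pair, the only generic attack on AES is trying keys one at a time. The keyspace is artificially shrunk to 16 unknown bits here so the loop finishes -- which is the whole point, since the real 128-bit space never would:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    known = os.urandom(14)      # 14 key bytes the attacker "knows" (toy setup)
    secret = os.urandom(2)      # only 16 unknown bits -- the artificial part
    key = known + secret

    plaintext = b"A" * 16       # one known AES block
    ct = Cipher(algorithms.AES(key), modes.ECB()).encryptor().update(plaintext)

    # Brute force over the 2**16 possible tails; with a full 128-bit
    # unknown key this loop would need ~2**128 iterations.
    for guess in range(2**16):
        candidate = known + guess.to_bytes(2, "big")
        trial = Cipher(algorithms.AES(candidate), modes.ECB()).encryptor().update(plaintext)
        if trial == ct:
            print("recovered the 2 secret bytes:", candidate[-2:].hex())
            break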
Usually symmetric ciphers use a block size between 128 and 256 bits; the number of different blocks you can compose from those (and which your file would need to cont
Re: (Score:1)
I agree with the parent, but just to illuminate for the benefit of the grandparent. What we're talking about here is what cryptographers call a "chosen plaintext" attack, which means that we have both a plaintext of our choice (the original whitenoise file) and a ciphertext (the encrypted version of the same file). The technique of attempting to recover encryption keys by studying how differences between plaintexts propagate into differences between the corresponding ciphertexts is called "differential cryptanalysis". However, because this
What if? (Score:2)