Security Education

Why Sharing Ransomware Code For Educational Purposes Is Asking For Trouble (betanews.com) 67

Mark Wilson writes: Trend Micro may still be smarting from the revelation that there was a serious vulnerability in its Password Manager tool, but today the security company warns of the dangers of sharing ransomware source code. The company says that those who discover vulnerabilities need to think carefully about sharing details of their findings with the wider public as there is great potential for this information to be misused, even if it is released for educational purposes. It says that 'even with the best intentions, improper disclosure of sensitive information can lead to complicated, and sometimes even troublesome scenarios'. The warning may seem like an exercise in stating the bleeding obvious, but it does serve as an important reminder of how the vulnerability disclosure process should work.
This discussion has been archived. No new comments can be posted.

  • by gweihir ( 88907 ) on Wednesday January 13, 2016 @07:16PM (#51297261)

    Most people who find vulnerabilities want to tell the manufacturer. But after a long history of being ignored or even threatened, many have resorted to giving the corporations responsible a fixed, short time to fix things, because otherwise nothing happens. Giving them more time just makes them drag their feet, because fixing vulnerabilities costs money. Those complaining here are at the very root of the problem. I should also point out that this corporate fuck-up has been going on for a few decades now.

    • by Etcetera ( 14711 ) on Wednesday January 13, 2016 @07:23PM (#51297279) Homepage

      Most people who find vulnerabilities want to tell the manufacturer. But after a long history of being ignored or even threatened, many have resorted to giving the corporations responsible a fixed, short time to fix things, because otherwise nothing happens. Giving them more time just makes them drag their feet, because fixing vulnerabilities costs money. Those complaining here are at the very root of the problem. I should also point out that this corporate fuck-up has been going on for a few decades now.

      You're confusing the goal with the process.

      More secure software is the goal.

      If a temporary process of punishing a product's users by spreading details on how to hurt them is deemed necessary in order for a company to "start treating security seriously", then that's an argument one might make.

      If a company is (arguably) already treating security reasonably seriously, then spreading details on how to hurt their customers does not achieve anything. It just spreads misery.

      "For educational use" is as ludicrous and beside the point as "for backup purposes only" was for Hotline servers 15 years ago. If the company has or is in the process of acting reasonably fast, actually spreading the details (as opposed to threatening to spread the details) on how to hack someone just makes you a d-bag whose name will be cursed alongside that of the script kiddie who uses your info to hack someone.

      • by phantomfive ( 622387 ) on Wednesday January 13, 2016 @09:47PM (#51297925) Journal

        If a company is (arguably) already treating security reasonably seriously, then spreading details on how to hurt their customers does not achieve anything.

        That kind of assumes there aren't malicious people already exploiting the bug.
        Sometimes it's better to let people know so they can defend themselves: either by closing a port, changing a configuration, turning off a service, fixing the bugs themselves and recompiling, or switching to another software system.

        Of course, corporations don't like the last two options, but being able to recompile is a very real benefit of open source software.
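
        To act on that advice you first need to know what your machine actually exposes. Here is a minimal Python sketch of the "find out what to close or turn off" step; the host and port range are placeholders, not anything from this thread:

            import socket

            # Hypothetical target and port range -- adjust for your own host.
            HOST = "127.0.0.1"
            PORTS = range(1, 1025)

            def open_ports(host, ports, timeout=0.2):
                """Return the ports that accept a TCP connection."""
                found = []
                for port in ports:
                    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                        s.settimeout(timeout)
                        if s.connect_ex((host, port)) == 0:  # 0 means it connected
                            found.append(port)
                return found

            if __name__ == "__main__":
                for port in open_ports(HOST, PORTS):
                    print(f"port {port} is open -- is the service behind it needed?")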

        • by Etcetera ( 14711 )

          If a company is (arguably) already treating security reasonably seriously, then spreading details on how to hurt their customers does not achieve anything.

          That kind of assumes there aren't malicious people already exploiting the bug.

          Sometimes it's better to let people know so they can defend themselves: either by closing a port, changing a configuration, turning off a service, fixing the bugs themselves and recompiling, or switching to another software system.

          Bullshit. Spreading details on how to protect yourself is not the same as providing an exploit. In some cases, an exploit is trivial enough to deduce from the mitigation that there's no real way to avoid it -- in most cases, however, it's not.

          End users won't be recompiling firmware in their car, and in many or most cases of security bugs, the exploit *IS* the start of widespread use.
          * Step 1: Someone announces a bug
          * Step 2: Vendor/discussion/patch cycle/analysis begins
          * Step 3: Some asshat releases an exploit

          • Step 4: Now my boxes are actually getting exploited, and they mostly weren't before.

            You hope.

            • by Etcetera ( 14711 )

              Step 4: Now my boxes are actually getting exploited, and they mostly weren't before.

              You hope.

              That's orthogonal. (It's also, in many cases, verifiable for web-based exploits. That's what logs are for.)

              "Here's a string to look for, and a mitigation strategy until you can patch" or "disable Bluetooth in your car adapter" is still not the same as "here's a script to hack in".

      • by dbIII ( 701233 ) on Wednesday January 13, 2016 @09:52PM (#51297941)

        More secure software is the goal

        No, selling stuff is the goal of many places that, among other things, care very little or not at all about security. Your bit about "If a company is (arguably) already treating security reasonably seriously" is very much the exception rather than the rule. I've reported gaping security holes that were left open for years, and they were not taken seriously because nobody on the outside had been caught exploiting them - and that was on a cash handling system, FFS!
        I don't condone those making the bugs public, but I can see why they do it. Reporting a serious security problem to some places can both land the reporter in deep shit and still result in nothing being done to fix the actual problem. Management in such places sees taking action against the reporter as the complete solution to the problem. Their reaction to an open farm gate would be to shoot each cow on the way out instead of shutting the gate.

        • I agree with what you said but I'd like to add to this part.

          No, selling stuff is the goal of many places that among other things care very little or not at all about security

          The people in charge of technology are responsible for selling the idea of security to the ownership. For those who have not been able to, you need to do something like the following (this is very basic and can be added to depending on the size of the company); a toy version is sketched after the list.

          1. Identify all security concerns, rate them by severity, point out probability, draw up a solution and attach a cost to it
          2. For each security concern list the potential damage. Some of the damages may not have a ha
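
          As a toy illustration of step 1's "rate by severity, probability, and cost" idea, here is a minimal Python sketch; every concern, score, and dollar figure is an invented example, not something from this thread:

              # Rank hypothetical concerns by expected exposure.
              concerns = [
                  # (name, severity 1-5, probability 0-1, cost to fix in $)
                  ("unpatched VPN appliance", 5, 0.30, 12_000),
                  ("shared admin password",   4, 0.50,    500),
                  ("no database backups",     3, 0.10,  4_000),
              ]

              for name, severity, prob, fix_cost in sorted(
                      concerns, key=lambda c: c[1] * c[2], reverse=True):
                  print(f"{name}: risk score {severity * prob:.2f}, "
                        f"fix cost ${fix_cost:,}")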

      • by Anonymous Coward

        You're confusing the goal with the process.

        More secure software is the goal.

        If a temporary process of punishing a product's users by spreading details on how to hurt them is deemed necessary in order for a company to "start treating security seriously", then that's an argument one might make.

        If a company is (arguably) already treating security reasonably seriously, then spreading details on how to hurt their customers does not achieve anything. It just spreads misery.

        I see it as an investment.
        I don't want companies that release buggy code to be able to silently cover it up. I want them to get burned enough to change the way they work with software.
        If they keep shipping exploitable code, the damage should be maximized to drive as many customers as possible away from them.
        Eventually that will lead to commercial software development maturing.
        A few companies might be run out of business in the process, but in the end we will be better off. With some luck Flash will be killed in the process.

        • by Etcetera ( 14711 )

          You're confusing the goal with the process.

          More secure software is the goal.

          If a temporary process of punishing a product's users by spreading details on how to hurt them is deemed necessary in order for a company to "start treating security seriously", then that's an argument one might make.

          If a company is (arguably) already treating security reasonably seriously, then spreading details on how to hurt their customers does not achieve anything. It just spreads misery.

          I want customers to get burned enough to change the way companies work with software.
          If I keep releasing exploits, the damage should be maximized to drive as many customers as possible away from them.
          Eventually that will lead to commercial software development maturing.
          A few companies might be run out of business in the process, but in the end the Giant Leap Forward will make us better off. With some luck some software I don't like will be killed in the process.
          Hopefully they will also learn that storing sensitive data like customers' credit card information is a bad design choice. Enough harm needs to be done to innocent third parties who patronize companies I dislike to teach my political enemies or some shit to not store vital information like that.

          FTFY.

    • by Anonymous Coward

      This stuff doesn't apply to just software; business has always been that way. In fact, humans have always been that way. People don't want their weaknesses exposed, because other people _will_ take advantage of them. A business that covers it up is one thing; a business that doesn't know about it, and someone outright goes and tells everyone, "Hey, do this to totally screw everyone in the country over financially and all sorts of shit happens," is another.

      I'd rather a business cover shit up, than have peop

      • by chipschap ( 1444407 ) on Wednesday January 13, 2016 @08:45PM (#51297657)

        >Businesses don't just sit on their ass and let defects sit around, and security holes they know of wide open. They fix them. If, and when, they can.

        Is this the case, though? We've seen Microsoft, Google, and others take the approach of "I won't fix it until it's discovered" or worse, "I won't fix that at all." (See many /. stories for examples.) Or say something closely related like "well then you better upgrade to Windows 10" which on its face seems reasonable but ....

  • by Anonymous Coward

    Open source ransomware is much better than the proprietary shit.

  • are you kidding me?! (Score:5, Interesting)

    by Gravis Zero ( 934156 ) on Wednesday January 13, 2016 @07:45PM (#51297383)

    Martin Roesler, Trend Micro Senior Director for Threat Research says...

    We need to share knowledge that creates understanding about potential damage, but not the ability to create it. We need to share knowledge about 'how exploits work', but not 'how to make use of them'. We need to share knowledge of 'how malware works', but sharing 'sample code' is not needed for that.

    I wouldn't consider him a reliable source, considering he allowed them to write a password manager in JavaScript.

  • by tlambert ( 566799 ) on Wednesday January 13, 2016 @08:46PM (#51297659)

    This is the second story in as many days arguing for limitation of disclosure for an indeterminate period. The first was the story lauding GM for doing the same, when it made its list of the types of disclosure for which it will not go after you legally.

    You have to put a clock on these things; the only thing a company executive cares about is keeping the board happy, and the only thing that the board cares about is fiduciary responsibility to the stockholders, including themselves and the company executives.

    This is what we incentivize with how we have built these systems to operate. And it incentivizes behaviours which are not in customers'/consumers' best interests, in most cases.

    If someone had come up with the GM ignition problem as a potential disclosure and given them a three-month clock to public disclosure, it would have been handled through a rather immediate recall. Instead, GM accepted the lawsuit payouts as a "cost of doing business", having determined that the highest actuarial benefit was to simply eat those costs while imposing gag orders, rather than taking the more expensive option of fixing all the ignitions in all the vehicles. It was less expensive, overall, for the company to let some people die so that it could make a marginally higher profit.
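
    That actuarial logic is just an expected-value comparison. With entirely invented numbers (nothing below comes from the actual GM case), the calculation a company might run looks like this in Python:

        # All figures are hypothetical, chosen only to show the shape of the math.
        vehicles        = 2_000_000
        recall_per_unit = 50          # $ to fix one ignition
        expected_suits  = 300         # lawsuits the actuaries predict
        payout_per_suit = 250_000     # $ average settlement

        recall_cost = vehicles * recall_per_unit        # $100,000,000
        payout_cost = expected_suits * payout_per_suit  # $75,000,000

        print(f"recall: ${recall_cost:,}  payouts: ${payout_cost:,}")
        print("cheaper to", "settle" if payout_cost < recall_cost else "recall")

    A disclosure clock changes those inputs: once the defect is public, the expected number of suits (and the reputational damage) jumps, which is exactly why the clock forces the recall.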

    While I doubt that many software vulnerability disclosures will result in deaths, the same principle holds true. Both GM and Trend Micro would like a priori restrictions -- one through a veiled threat of legal action, with a bounty carrot, and the other through guilt-shaming of those who disclose.

    Responsible disclosure is really the only ethical -- and moral -- option.

    Put a clock on it. Always.

    • It's also worth pointing out that "responsible disclosure" time is only a problem with closed source software.
      With open source software, you can release a patch with the disclosure. Yet another reason to favor open source over closed.
      • by raymorris ( 2726007 ) on Wednesday January 13, 2016 @10:20PM (#51298063) Journal

        Many of the open source projects I'm involved in use a responsible disclosure model. It has worked very well, getting most users patched a few days or a few hours BEFORE the bad guys knew how to exploit an issue, rather than soon AFTER they got exploited. I'll use as two examples issues I found in WordPress and PowerDNS (used by Wikipedia and other large sites).

        I found an issue with WordPress and opened a security ticket describing the issue and my proposed solution. As a security ticket, it was initially visible only to the security team. Over the next 24 hours or so, it was discussed and consensus was developed regarding the right solution. Over the following 24 hours, it was tested and (quietly) pushed to the repository. On the third day, everyone who had WordPress set to automatically update got the fix, and admins with many, many WordPress users, such as WordPress.com, were notified. So maybe 80%-90% of WordPress users had the update on day three. On day four, the information became public - 24 hours AFTER the updates had already happened.

        Note it took a couple of days between the time the patch was ready and the time most users were protected. Had we released the patch and the information together, that would have given the bad guys a day or two in which they could have infected servers with persistent malware.

        PowerDNS was similar, except distros needed time to compile and package the fixed version. So the issue was discussed privately, and the fix tested. Had the vulnerability been public, someone would probably have used it to take down Wikipedia, so Wikipedia was notified of the fix along with a few other very large sites. While Wikipedia was patching, Red Hat, Debian, and the other distros were preparing updated packages for their users. This was roughly day three. On the morning of day four, Debian mailed their users to let them know that a security fix was available, with information about the vulnerability - AFTER the update was already available from Debian's servers, which was a day or two after the source patch was privately distributed to the appropriate people.

        Something else happened that day too. About an hour after the Debian security alert email went out, I had a job interview. When I told the interviewer I worked mostly with Red Hat systems, he seemed disappointed. The conversation continued:

        "We use Debian. Do you know anything about Debian?", he asked.
        I replied "did you see that Debian security alert about an hour ago?"
        "Yeah, this one right here?" he said as he opened the email.
        Looking at the first line of the email, he saw it said "Ray Morris discovered a vulnerability ..."
        Suddenly he seemed less concerned about my knowledge of Debian. :)

        • "We use Debian. Do you know anything about Debian?", he asked.
          I replied "did you see that Debian security alert about an hour ago?"
          "Yeah, this one right here?" he said as he opened the email.
          Looking at the first line of the email, he saw it said "Ray Morris discovered a vulnerability ..."
          Suddenly he seemed less concerned about my knowledge of Debian. :)

          That is awesome. You are awesome.

        • [...]Suddenly he seemed less concerned about my knowledge of Debian. :)

          Slashdot really needs a "like" system, separate from the moderation system. It wouldn't impact your ability to moderate, and it'd allow you to like something that was posted as a response or in a thread in which you posted... :^)

  • The way I see it, by shipping shitty software (and this was a HUGELY embarrassing mistake; I'm amazed that, as a security company, they didn't have dynamic or static analysis tools to detect user input being passed directly to ShellExecute) they screwed over their users and left them open to being extorted by ransomware. On top of screwing over their users, they were really irresponsible about getting it fixed quickly. AND, even on top of that, they have the guts to play the blame game and point a finger
  • I have a simple question about cryptolockers, if any specialist could give some insights, that would be great.

    Suppose you generate a big file of pure white noise (several GB), give it a video extension, and store it carefully on both your hard drive and a USB key kept in your safe. When you get hit by the cryptolocker, you then have both the original file and its encrypted version. Wouldn't that be sufficient to recover the encryption key? How big would the file need to be in order to allow breaking the key? How m

    • A cryptosystem that allows inferring the secret key (necessary for encryption/decryption) from plaintext+ciphertext with less-than-brute-force effort is considered broken. I'm guessing that successful cryptolockers use non-broken encryption. So no, having a plain and encrypted version of the same file is not enough to undo cryptolocker damage.
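
      To make the scale concrete: with a modern cipher, a matching plaintext/ciphertext pair only lets you test key guesses, and the keyspace is what kills you. A minimal Python sketch using the third-party cryptography package (the key below is artificially restricted to 16 unknown bits purely so the loop finishes; a real key has 128):

          import os, time
          from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

          plaintext = os.urandom(64)  # stands in for the white-noise file

          # Artificially weak key: 14 known zero bytes + 2 unknown bytes = 2**16 keys.
          secret = b"\x00" * 14 + os.urandom(2)
          nonce = os.urandom(16)

          def encrypt(key, data):
              enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
              return enc.update(data) + enc.finalize()

          ciphertext = encrypt(secret, plaintext)

          # Even WITH a matching plaintext/ciphertext pair, all you can do is
          # brute-force the key and check each guess against the known pair.
          start = time.time()
          for i in range(2**16):
              guess = b"\x00" * 14 + i.to_bytes(2, "big")
              if encrypt(guess, plaintext) == ciphertext:
                  print(f"found key after {i + 1} tries in {time.time() - start:.1f}s")
                  break
          # With all 128 bits unknown, the same loop needs ~2**128 iterations.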

      Usually symmetric ciphers use a block size between 128 and 256 bits; the number of different blocks you can compose from those (and which your file would need to cont

      • by Anonymous Coward

        I agree with the parent, but just to illuminate things for the benefit of the grandparent: what we're talking about here is what cryptographers call a "chosen plaintext" attack, which means that we have both a plaintext of our choice (the original white-noise file) and a ciphertext (the encrypted version of the same file). The process of attempting to recover encryption keys by comparing differences between known plaintexts and corresponding ciphertexts is called "differential cryptanalysis". However, because this
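
        The caveat matters most when the ransomware author rolled their own cipher. If files were "encrypted" with something as weak as a repeating XOR keystream (a deliberately broken toy scheme, used here only for illustration), the white-noise trick works exactly as the grandparent hoped; a Python sketch with a made-up 32-byte keystream:

            import os
            from itertools import cycle

            def xor(data, keystream):
                # Toy "cipher": XOR with a repeating keystream -- known broken.
                return bytes(b ^ k for b, k in zip(data, cycle(keystream)))

            keystream   = os.urandom(32)               # the attacker's key
            known_plain = os.urandom(1024)             # your white-noise decoy
            other_file  = b"important document " * 40  # a file you care about

            enc_known = xor(known_plain, keystream)
            enc_other = xor(other_file, keystream)

            # Known plaintext + ciphertext leaks the keystream directly...
            recovered = bytes(p ^ c for p, c in zip(known_plain, enc_known))[:32]
            assert recovered == keystream
            # ...which decrypts everything else encrypted the same way.
            assert xor(enc_other, recovered) == other_file
            print("broken cipher: key recovered from one known file")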

  • A lot of good points here in favor of limited or managed disclosure. And I'm not sure I disagree. But what would happen if all information on vulnerabilities and malware were instantly and fully disclosed? Yes, there are some specific cases one could cherry-pick and say something bad would happen. But what would the overall effect be? Would software makers be more concerned about making good software if they knew any and all vulnerabilities would be immediately and fully revealed to the public? And what
