Do Proof-of-Concept Exploits Do More Harm Than Good? (threatpost.com) 37
secwatcher writes:
When it comes to the release of proof-of-concept (PoC) exploits, most security experts agree that the positives outweigh the negatives, according to a recent and informal Threatpost poll.
In fact, almost 60 percent of the 230 security pundits polled thought it was a "good idea" to publish PoC code for zero days. About 38 percent of respondents, meanwhile, argued it wasn't.
Dr. Richard Gold, head of security engineering at Digital Shadows, told Threatpost that PoC code makes it easier for security teams to do penetration testing: "Rather than having to rely on vendor notifications or software version number comparisons, a PoC allows the direct verification of whether a particular system is exploitable," he said. "This ability to independently verify an issue allows organizations to better understand their exposure and make more informed decisions about remediation." Meanwhile, 85 percent of respondents said that the release of PoC code acts as an "effective motivator" to push companies to patch, 79 percent said that the disclosure of a PoC exploit has been "instrumental" in preventing an attack, and 85 percent said that a PoC code release is acceptable if a vendor won't fix a bug in a timely manner...
On the flip side of the argument, many argue that the release of the Citrix PoC exploits was a bad idea. They say attacks attempting to exploit the vulnerability skyrocketed as bad actors rushed in before systems were patched... Matt Thaxton, senior consultant at Crypsis Group, thinks that the "ultimate function of a PoC is to lower the bar for others to begin making use of the exploit... In many cases, PoCs are put out largely for the notoriety/fame of the publisher and for the developer to 'flex' their abilities...."
The PoC exploit timeline also raises important questions about patch management for companies dealing with the fallout of publicly released code. Some, like Thaxton, say that PoC exploit advocates fail to recognize the complexity of patching large environments: "I believe the release of PoC code functions more like an implied threat to anyone that doesn't patch: 'You'd better patch... or else,'" he said. "This kind of threat would likely be unacceptable outside of the infosec world. This is even more obvious when PoCs are released before or alongside a patch for the vulnerability."
And Joseph Carson, chief security scientist at Thycotic, adds: "Let's be realistic, once a zero-day is known, it is only a matter of time before nation states and cybercriminals are abusing them."
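A rough Python sketch of the difference Gold describes above, contrasting version-number guessing with direct verification; the version list, endpoint, probe bytes, and success marker below are all invented for illustration, not taken from any real advisory:

    import socket

    VULNERABLE_VERSIONS = {"2.4.1", "2.4.2"}  # hypothetical affected releases

    def version_check(banner: str) -> bool:
        # Indirect: infer exposure from a self-reported version string.
        # Misses backported fixes and custom builds entirely.
        return banner.strip() in VULNERABLE_VERSIONS

    def direct_check(host: str, port: int) -> bool:
        # Direct: send a benign probe derived from a PoC and observe the
        # response; this answers "is this specific system exploitable?"
        probe = b"GET /vulnerable-endpoint HTTP/1.0\r\n\r\n"  # illustrative only
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(probe)
            reply = s.recv(4096)
        return b"200 OK" in reply  # a patched server would refuse the request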
Always keeps coming back ... no cover ups! (Score:5, Insightful)
We keep coming back to this. If security researchers don't release a PoC, the vendors say "this is a theoretical vulnerability." If the researchers don't set a deadline, then software vendors put off fixes forever. If security researchers give vendors their contact information, they get sued.
"Responsible disclosure" has its place with clearly trustworthy vendors - like the ones that offer and actually pay bug bounties and guarantee that results can be published. For the rest, the default has to be anonymous release of PoCs and security report with at most minimal warning to vendors. It's going to be pretty rare that a security researcher actually is the first to discover a vulnerability since hackers have more motivation and money in the game. The damage this causes is less than you'd think, but the pressure it puts on vendors to behave reasonably is invaluable.
Re: (Score:3)
Stop posting.
Re: (Score:2)
Stop trolling.
Re: (Score:2)
Never started. And since when were you a creimer bot?
Re: (Score:2)
You're participating in harassing somebody for having a disability. It is disgusting. That doesn't make me a "bot," it makes you a nazi.
Stop.
Re: (Score:2)
He's harassing us for free ad clicks.
Re: (Score:2)
It's unreasonable to expect vendors to fix it instantly, and it's unreasonable to expect all clients to patch instantly. In my opinion you should therefore give it to them in private and set two deadlines: on the first one you publicly disclose the vulnerability, so the vendor should have a patch ready before that. On the second you publicly disclose the PoC, making exploits easy, so the customers should have patched before that. Yeah, some crooks probably have it already, but you don't need to give everyone a p
Re: Always keeps coming back ... no cover ups! (Score:2)
Many PoCs are withheld, e.g. BlueKeep and a lot of the CPU issues. There are two problems with that. First, it makes it really hard to know whether you have successfully patched a system if the issue is obscure and you have no tests. Second, most of the time any programmer can build a PoC by inspecting the patch and what it changed. So you get into a situation where the customers can't test but the hackers can.
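A minimal sketch of the patch-inspection workflow the parent describes, assuming you have the pre- and post-patch source trees on disk; the file paths here are hypothetical:

    import difflib
    from pathlib import Path

    # Hypothetical paths: the same file from the pre- and post-patch releases.
    old = Path("vendor-1.0/src/auth.c").read_text().splitlines()
    new = Path("vendor-1.1/src/auth.c").read_text().splitlines()

    # The changed hunks point straight at the vulnerable logic, which is
    # usually enough for a motivated programmer to reconstruct a PoC.
    for line in difflib.unified_diff(old, new, fromfile="1.0", tofile="1.1", lineterm=""):
        print(line)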
Re: (Score:2)
+1. guruevi is right and Kjella is wrong. PoCs help vendors and customers far more than they help attackers.
Re: (Score:2)
If corporations don't want to be pwn3d, they shouldn't have written the exploitable code to begin with.
Ah, the "just don't make mistakes" strategy. Yeah, that tends to fail spectacularly. Much better to assume that the code will contain vulnerabilities, and design defense in depth plus a good patching strategy and a generous bug bounty.
Re: (Score:2)
Most companies don't have any strategy for avoiding security mistakes. If you interview their developers, a lot of them won't even know what an XSS exploit is, let alone common strategies for avoiding them. These companies are negligent.
Sure, developers should be trained and take care. But that will only reduce vulnerabilities, not eliminate them.
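For anyone in the parent's audience wondering what an XSS exploit actually looks like, a minimal Python sketch of the mistake and the standard fix; the rendering helpers are invented for illustration:

    import html

    def render_comment_unsafe(user_input: str) -> str:
        # Vulnerable: attacker-controlled text lands in the page verbatim.
        return "<p>" + user_input + "</p>"

    def render_comment_safe(user_input: str) -> str:
        # Fixed: escape HTML metacharacters before embedding.
        return "<p>" + html.escape(user_input) + "</p>"

    payload = "<script>alert(1)</script>"
    print(render_comment_unsafe(payload))  # script would run in every visitor's browser
    print(render_comment_safe(payload))    # rendered as inert text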
Re: (Score:2)
A good chunk of them can be eliminated.
"A good chunk of them can be eliminated", means "that will only reduce vulnerabilities, not eliminate them."
Your team doesn't still have SQL injections, do they?
My team doesn't have SQL injections. My team doesn't use SQL. We work at the system, kernel and firmware layers. And we do a lot of work to prevent buffer overflows, integer overflows, TOCTOU errors and other race conditions. We also put a lot of effort into preventing side channel attacks, and to minimize the effectiveness of hardware attacks. We primarily use a safe, pointer-free subset of C++, u
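Since SQL injection keeps coming up here as the canonical eliminable bug class, a minimal sqlite3 sketch of the difference between the vulnerable and fixed patterns; the table, data, and payload are made up:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    name = "x' OR '1'='1"  # classic injection payload

    # Vulnerable: string interpolation lets the payload rewrite the query,
    # so it matches every row despite the bogus name.
    print(conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall())

    # Fixed: a parameterized query treats the payload as plain data.
    print(conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall())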
Re: (Score:2)
Usually when someone says, "it's impossible to get rid of all security vulns" it's an excuse to write crappy code. Entirely new classes of vulnerability are rare.
Not in my life... especially with side channels, we get a clever new vulnerability class every other year or so. :-)
Re: (Score:2)
Usually when someone says, "it's impossible to get rid of all security vulns" it's an excuse to write crappy code. Entirely new classes of vulnerability are rare.
Also, it's not really possible to eliminate all instances of known vulnerability classes either. I said "Even if you were perfect and could prevent...", figuring it was clear that I was implying that because you're human you can't. But just in case that wasn't clear: You're human. You can't.
Re: Always keeps coming back ... no cover ups! (Score:2)
Exactly. The code IS ALREADY insecure! It WILL already be exploited. You must always assume that people DO have an exploit, whether it was published or not. Withholding it only enables slacking off, and endangers people with head-in-the-sand pseudo-security ignorance.
Re: (Score:3)
We keep coming back to this. If the security researchers don't release PoC, the vendors say "this is a theoretical vulnerability".
That's not the only issue, and there are vendors who take security seriously for whom this definitely isn't a problem. But even for them, a PoC helps them verify that they've fixed the issue. Even better, it's often not too difficult to turn a PoC into a test that can be added to the standard test suite for the product, to ensure that the vulnerability doesn't accidentally get reintroduced. I work on Android, and for Android there's another reason such PoC-based tests are valuable: Google can
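A minimal sketch of the PoC-to-regression-test idea described above, assuming the PoC has already been wrapped in a callable; the function name, target, and fixture are hypothetical:

    import unittest

    def poc_triggers_vuln(target) -> bool:
        # Stand-in for the published PoC, wrapped as a callable that
        # returns True if the exploit still works against `target`.
        return False  # placeholder: the real body would drive the PoC

    class VulnRegressionTest(unittest.TestCase):
        def test_vulnerability_stays_fixed(self):
            target = "http://localhost:8080"  # hypothetical local test instance
            # If this assertion ever fails, the fix has regressed.
            self.assertFalse(poc_triggers_vuln(target))

    if __name__ == "__main__":
        unittest.main()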
Downside isn't much of a risk (Score:1)
PoCs are a result of vendor denial (Score:5, Insightful)
How often have we heard a vendor dismiss a reported vulnerability with "It's strictly theoretical, there's no evidence it can actually be exploited in practice"? This is especially true of vulnerabilities like the Meltdown and Spectre families, where fixes or mitigations are either extremely difficult or impose high performance or other penalties. PoC code removes the vendor's ability to deny or dismiss the vulnerability as impossible in practice, and the fact that PoC code needs to be released before vendors will act points to the problem being vendor attitudes, not the publication of PoC code.
Re:PoCs are a result of vendor denial (Score:5, Informative)
We tried it that way already (and not even with mere escrow, researchers actively sent the PoC code along with the details to the vendors). The answer we got was the vendors ignoring the submission until it was reported publicly, then dismissing it and/or actually suing the researchers in an attempt to get them to shut up about vulnerabilities. The vendors created this situation; I'll believe we can trust them after I've seen them change their ways and stop trying to dismiss or minimize the severity of reported vulnerabilities.
"Fool me once, shame on you. Fool me twice, shame on me."
Re: (Score:2)
So, as an extension to the responsible disclosure system, how about a PoC escrow? Have a responsible third party such as Mitre or Google Project Zero hold the PoC code privately with researcher attribution until the mitigating patch has been available for a couple weeks or a public exploit becomes known.
What would be the point of escrowing the PoC? Just send it to the vendor along with the vuln description. Then give them a reasonable amount of time to fix the bug before the vuln description and PoC are published together. This is what Google Project Zero does. It seems to be the best of a set of less-than-ideal options.
PoC||GTFO (Score:5, Insightful)
Re: PoC||GTFO (Score:3)
On the other hand, many companies and projects won't even take a bug seriously if you don't write a PoC.
Several years ago I found a bug in Samba that could cause a DoS (connections opened without any traffic could each leave a process hanging for a really long time).
I wrote a really quick and dirty report and told them how to see it, but I didn't have time to build a complete PoC that crashed a server. They left the bug report open, agreeing with my result. It took another 2 years for someone else to find
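For what it's worth, the idle-connection behavior described above can be demonstrated with a few lines of socket code; a sketch, with host, port, and connection count invented, and only to be pointed at a server you own:

    import socket
    import time

    HOST, PORT, COUNT = "127.0.0.1", 445, 200  # a test box you own, never production

    # Open many connections and send nothing. On a server with the bug the
    # parent describes, each idle connection pins a process for a long time.
    conns = [socket.create_connection((HOST, PORT), timeout=10) for _ in range(COUNT)]
    time.sleep(600)  # hold the sockets open while you watch the process table
    for c in conns:
        c.close()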
Re: (Score:2)
That's kind of a mixed bag: keeping it open meant having a published zero-day for anyone who bothered looking.
Your security through obscurity does the harm (Score:2)
You should *know* that keeping an exploit secret will mean nothing more than only the bad guys having the exploit now.
Your fallacy lies in the delusion that somebody running unsafe code would still be secure as long as the hole remains obscure.
When in reality, the whole point of publishing the exploit is to make everybody realize the harsh reality: They are NOT secure anymore, they could be cracked at any time, maybe already are, and should stop that code right freakin' NOW!!
Your suggestion enables them to keep
from experience of releasing a 0day.... (Score:1)
PoC code is critical (Score:2)
Where a language barrier or choice of vernacular can confuse an issue, the PoC code always makes it clear. This helps not only the people pen-testing the relevant system but also the developers, who can understand the nature of the problem and head off creating a similar problem in new code.
But of course! (Score:1)