

Curl Battles Wave of AI-Generated False Vulnerability Reports (arstechnica.com)
The curl open source project is fighting against a flood of AI-generated false security reports. Daniel Stenberg, curl's original author and lead developer, declared on LinkedIn that they are "effectively being DDoSed" by these submissions.
"We still have not seen a single valid security report done with AI help," Stenberg wrote. This week alone, four AI-generated vulnerability reports arrived seeking reputation or bounties, Ars Technica writes. One particularly frustrating May 4 report claiming "stream dependency cycles in the HTTP/3 protocol stack" pushed Stenberg "over the limit." The submission referenced non-existent functions and failed to apply to current versions.
Some AI reports are comically obvious. One accidentally included its prompt instruction: "and make it sound alarming." Stenberg has asked HackerOne, which manages vulnerability reporting, for "more tools to strike down this behavior." He plans to ban reporters whose submissions are deemed "AI slop."
Captcha (Score:1)
Isn't this a solved problem?
Re:Captcha (Score:4, Insightful)
The information is submitted by a human, who didn't use their brain to generate it, but rather directed an LLM to produce it.
LLMs can be useful in situations like this, but it requires some things that are not being utilized here, because these people aren't acting in good faith; rather, they're posers trying to use LLMs to level up past their current status of "fucking nobody".
Re:Captcha (Score:5, Interesting)
They're trying to get their hands on the bug bounty. As several people suggested in discussions elsewhere on this, they should consider requiring a small fee or deposit for entries to be eligible for the bug bounty; that might dissuade most of the chancers. Otherwise it's like spam: if it's basically free to make a submission, the miscreants only increase their chances of "winning" by generating more entries.
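The spam comparison can be put in back-of-the-envelope terms. The numbers below are entirely made up for illustration; the point is only that with free entries the expected profit of mass submission scales with volume, while even a small fee can push it negative:

```python
# Spam economics sketch (all figures are hypothetical, not curl's actual
# bounty amounts or acceptance rates).

def expected_profit(n_submissions, p_payout, bounty, fee_per_entry):
    """Expected profit of firing off n low-effort AI-generated reports."""
    return n_submissions * (p_payout * bounty - fee_per_entry)

# 1000 auto-generated reports, each with a 0.1% chance of slipping
# through triage to a $500 bounty:
free_entries = expected_profit(1000, 0.001, 500.0, 0.0)  # positive: spam pays
with_deposit = expected_profit(1000, 0.001, 500.0, 5.0)  # negative: spam costs
```

Under these toy assumptions the chancer nets $500 in expectation when entries are free, but loses money once a $5 deposit is required, while a genuine reporter with one solid finding barely notices the fee.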
Re: (Score:2)
Re: (Score:2)
Not with the AIs of today.
We need something that's better than the captchas that we have seen so far.
In-person bug reporting. (Score:2)
Re: (Score:2)
Re: (Score:3)
Isn't this a solved problem?
Yes... solved with AI [cheq.ai].
Re: (Score:2)
Captchas were quite thoroughly beaten well over a decade ago. (I've pointed this out here before, so I'm not going to repeat all the citations.) That was well before the current flurry of activity in the ML/AI space, which will only serve to pound the nails into the coffin harder.
Anybody putting a captcha on their web site in 2025 is advertising that they have no idea how to actually defend it and that they haven't been paying attention to developments in infosec for a long time.
Re: (Score:1)
There is considerable overlap between the apparent intelligence of the average user and that of software mimicking their responses to bypass restrictions. The financial motivation to continually improve this mimicry makes the problem really hard to solve; at best you can only hope to keep up.
Sue bad-faith actors (Score:4, Interesting)
You have to be careful though: there is obvious bad-faith reporting, and there is reporting that you can't prove is bad faith. If you start penalizing those who "might" be bad-faith actors, you will discourage less-experienced/less-expert good-faith actors who just happen to be sloppy or outright wrong.
Adding a checklist where the submitter swears to tell the truth might help prove bad faith. Such a checklist would include things like:
* Did you use AI? If so, what models and prompts did you use? What additional work did you do after reviewing the information provided by the AI?
* Have you submitted any reports about this product in the last year? Which ones?
* What version of the product did you test, and what platform did you test it on (include version numbers where applicable)?
* Include your name, country of residence, and contact information for you or your legal representative.
It's not that there are any right or wrong answers, but being caught lying would be strong evidence of bad faith.
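To make the idea concrete, the checklist could be captured as a structured submission form that triage tooling can screen automatically. This is a hypothetical sketch; the field names are illustrative and don't correspond to any real HackerOne schema:

```python
from dataclasses import dataclass, field

@dataclass
class BountySubmission:
    # Identity and contact (checklist item: name, country, contact info)
    reporter_name: str
    country: str
    contact: str
    # What was actually tested (checklist item: version and platform)
    tested_version: str
    tested_platform: str
    # AI disclosure (checklist item: models, prompts, post-AI verification)
    used_ai: bool
    ai_details: str = ""
    # Prior reports in the last year (checklist item)
    prior_reports: list = field(default_factory=list)

    def missing_fields(self):
        """Return checklist items left blank, so triage can flag them."""
        missing = []
        if self.used_ai and not self.ai_details:
            missing.append("ai_details")
        for name in ("reporter_name", "country", "contact",
                     "tested_version", "tested_platform"):
            if not getattr(self, name):
                missing.append(name)
        return missing
```

A blank `ai_details` on a report that admits AI use, or sworn answers later shown to be false, is exactly the kind of checkable evidence of bad faith the comment describes.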
Re:Sue bad-faith actors (Score:2)
The thing about CAPTCHAs is that they worked better when they didn't look for perfection. CAPTCHAs (prior to AI) would detect non-humanness if your mouse moved too rapidly or in perfectly straight lines. So regarding your checklist: an AI would remember it submitted a report on Dec. 28th, 2023, and therefore not last year. There are lots of situations where someone can't recall whether they did something because at the time they didn't consider it a big deal. For example, if someone submits bug reports all the time, they won't recall whether they already filed one for a particular product.
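The straight-line heuristic mentioned above can be sketched in a few lines. This is an illustrative toy, with an arbitrary tolerance, not any real CAPTCHA vendor's code: it flags a mouse path whose points barely deviate from the straight line between its endpoints.

```python
# Toy behavioral check: a perfectly linear mouse path looks robotic;
# human paths wobble. Tolerance is an arbitrary, made-up threshold.

def max_deviation(points):
    """Largest perpendicular distance of any point from the line
    joining the first and last points of the path."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length
               for x, y in points)

def looks_robotic(points, tolerance=1.5):
    return max_deviation(points) < tolerance

straight = [(i, 2 * i) for i in range(20)]                 # perfectly linear
wobbly = [(i, 2 * i + (i % 3) * 4) for i in range(20)]     # human-ish jitter
```

The commenter's point is that once the mimicry adds realistic jitter, heuristics like this stop working, which is why the pre-AI generation of CAPTCHAs fell.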
Re: (Score:2)
If the scammers are in other countries, how would you go after them?
robot wars (Score:2)
Best to fight automation with automation. These need a response from Lenny.
https://www.lennytroll.com/ [lennytroll.com]
Much as I hate the word... (Score:2)
Everything is trending towards enshittification. Nothing is exempt. Certainly can't have a bounty program for a well-intentioned product... the second a dollar attaches, the descent begins.
The only reasonable response would be to start beating people with sticks.
curl has had issues though (Score:1)
Issues that have caused problems in Transmission and other apps. It's a shame that AI is fucking everything up. It's time for real ID for internet communication, I guess.
How can we ... (Score:1)
It's possible to imagine.. (Score:3)
..a future AI that can actually find tricky bugs and security problems
Unfortunately, today, scumballs gotta scumball
It will get a lot worse before it gets better
Re: (Score:3)
The problem, as you mention, is that they're not foolproof at it. They're something that can assist with it.
However, they can be used not as an assistant, but as the whole shebang- and scumballs gotta scumball.
What's the solution? I don't know. But it's a problem.
I've heard suggestions of requiring a small payment for each submission, to be returned if the submission is judged to be in good faith, or if it pans out.
Re: (Score:2)
South Park (Score:2)
Make it lame, and gay.
Nicely shows how well AI code analysis works (Score:5, Interesting)
As soon as we leave toy examples behind, the answer apparently is "not at all" and "100% hallucination"...
The Singularity Is Not Near (Score:2, Troll)
Twenty or thirty years ago we started anticipating the AI singularity.
Today we see that instead we're going to get an AI crapularity.