AI Security

Curl Battles Wave of AI-Generated False Vulnerability Reports (arstechnica.com)

The curl open source project is fighting against a flood of AI-generated false security reports. Daniel Stenberg, curl's original author and lead developer, declared on LinkedIn that they are "effectively being DDoSed" by these submissions.

"We still have not seen a single valid security report done with AI help," Stenberg wrote. This week alone, four AI-generated vulnerability reports arrived seeking reputation or bounties, ArsTechnica writes. One particularly frustrating May 4 report claiming "stream dependency cycles in the HTTP/3 protocol stack" pushed Stenberg "over the limit." The submission referenced non-existent functions and failed to apply to current versions.

Some AI reports are comically obvious. One accidentally included its prompt instruction: "and make it sound alarming." Stenberg has asked HackerOne, which manages vulnerability reporting, for "more tools to strike down this behavior." He plans to ban reporters whose submissions are deemed "AI slop."


Comments Filter:
  • Isn't this a solved problem?

    • Re:Captcha (Score:5, Insightful)

      by DamnOregonian ( 963763 ) on Wednesday May 07, 2025 @02:30PM (#65359489)
      No.

      The information is submitted by a human who didn't use their brain to generate it, but rather directed an LLM to produce it.
      LLMs can be useful in situations like this, but that requires some things that are not being done here, because these people aren't acting in good faith. They're posers trying to use LLMs to level up past their current status of "fucking nobody."
    • by Z00L00K ( 682162 )

      Not with the AIs of today.

      We need something that's better than the captchas that we have seen so far.

      • The way we are going, that will be the only way to trust anything: by going back to the real world.
      • by N1AK ( 864906 )
        There isn't something. Pretty much anything we can think up to differentiate people from AI will always hit one of two limits: 1) the effort will be prohibitively high for a person to pass, and/or 2) the moment it becomes remotely widespread, AI will be trained to react in a way that is hard to distinguish using that test. At best, captcha-style tests add a nominal cost to automated systems like bots, which may be enough to stop it being financially worthwhile for low-value transactions. In the case of something like
    • Re:Captcha (Score:4, Informative)

      by Sebby ( 238625 ) on Wednesday May 07, 2025 @02:31PM (#65359497) Journal

      Isn't this a solved problem?

      Yes... solved with AI [cheq.ai].

    • No. Also: no, NO, and NO.

      Captchas were quite thoroughly beaten well over a decade ago. (I've pointed this out here before, so I'm not going to repeat all the citations.) That was well before the current flurry of activity in the ML/AI space, which will only serve to pound the nails into the coffin harder.

      Anybody putting a captcha on their web site in 2025 is advertising that they have no idea how to actually defend it and that they haven't been paying attention to developments in infosec for a long time.
    • by Anonymous Coward

      There is considerable overlap between the apparent intelligence of the average user and the apparent intelligence of software trying to mimic their responses to bypass restrictions. The financial motivation to continually improve this mimicry makes it really hard to solve; it seems you can only hope to keep up.

  • Sue bad-faith actors (Score:4, Interesting)

    by davidwr ( 791652 ) on Wednesday May 07, 2025 @02:30PM (#65359485) Homepage Journal

    You have to be careful, though: there is obvious bad-faith reporting, and there is reporting that you can't prove is bad-faith. If you start penalizing those who "might" be bad-faith actors, you will discourage less-experienced/less-expert good-faith actors who just happen to be sloppy or outright wrong.

    Adding a checklist where the submitter swears to tell the truth might help "prove bad faith." Such a checklist would include things like the following (sketched in code below):

    * Did you use AI? If so, what models and prompts did you use? What additional work did you do after reviewing the information provided by the AI?
    * Have you submitted any reports about this product in the last year? Which ones?
    * What version of the product did you test, and what platform did you test it on (include version numbers where applicable)?
    * Include your name, country of residence, and contact information for you or your legal representative.

    It's not that there are any right or wrong answers, but being caught lying would be strong evidence of bad faith.
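
    For illustration only, here is one way such a sworn checklist could be encoded as a structured submission form. Every field and function name here is hypothetical, not HackerOne's actual schema:

```python
# Hypothetical encoding of the parent's checklist as a submission form.
# Field names are invented for illustration; this is not any real API.
from dataclasses import dataclass, field

@dataclass
class VulnReport:
    reporter_name: str
    country_of_residence: str
    contact: str                      # reporter or legal representative
    product_version: str              # e.g. "curl 8.7.1"
    platform: str                     # OS/arch, with version numbers
    used_ai: bool
    ai_models_and_prompts: str = ""   # required if used_ai is True
    post_ai_verification: str = ""    # work done after reviewing AI output
    prior_reports_last_year: list[str] = field(default_factory=list)
    attests_truthful: bool = False    # the "swearing to tell the truth" box

def ready_for_triage(report: VulnReport) -> bool:
    """Reject obviously incomplete attestations before a human looks."""
    if not report.attests_truthful:
        return False
    if report.used_ai and not (report.ai_models_and_prompts
                               and report.post_ai_verification):
        return False
    return all([report.reporter_name, report.contact,
                report.product_version, report.platform])
```

    Being caught lying in any of these fields then becomes checkable evidence of bad faith, which is the point of the checklist.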

    • The thing about CAPTCHAs is that they worked better when they didn't look for perfection. CAPTCHAs (prior to AI) would detect non-humanness if your mouse moved too rapidly or in perfectly straight lines. So regarding your checklist: an AI would remember it submitted a report on Dec. 28th, 2023, and therefore answer "not in the last year." There are lots of situations where someone can't recall whether they did something, if at the time they didn't consider it a big deal. For example, if someone submits bug reports all the time, they won't recall if the
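
      A minimal sketch of the kind of behavioral check described above, assuming a pointer trace of (x, y) points plus timestamps; the thresholds and function name are invented for illustration, not any real CAPTCHA vendor's API:

```python
# Toy heuristic: flag a cursor trace as bot-like if it moves implausibly
# fast or in a near-perfect straight line. Thresholds are made up.
import math

def looks_automated(points, timestamps,
                    max_speed_px_per_s=5000.0,
                    min_wobble_ratio=1.02):
    """points: list of (x, y) tuples; timestamps: seconds, same length."""
    if len(points) < 3:
        return True  # too little movement to judge as human

    # Humans wobble: total path length noticeably exceeds the straight line.
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    direct = math.dist(points[0], points[-1])
    too_straight = direct > 0 and path / direct < min_wobble_ratio

    # Scripted cursors often teleport: average speed is implausibly high.
    elapsed = timestamps[-1] - timestamps[0]
    too_fast = elapsed <= 0 or path / elapsed > max_speed_px_per_s

    return too_straight or too_fast
```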

    • If the scammers are in other countries, how would you go after them?

    • by AmiMoJo ( 196126 )

      They aren't going to admit they used AI, or even give you enough info to easily sue them in many cases. Often they are not in the same legal jurisdiction anyway.

      They are just bottom feeders who tell an AI to look for issues in open source code and write bug reports, then spam them out. If they get extremely lucky, they might get a serious bug credit for their CV, or they might even get a bounty.

  • Best to fight automation with automation. These need a response from Lenny.

    https://www.lennytroll.com/ [lennytroll.com]

  • Everything is trending towards enshittification. Nothing is exempt. Certainly can't have a bounty program for a well-intentioned product... the second a dollar attaches, the descent begins.

    The only reasonable response would be to start beating people with sticks.

  • Issues that have caused problems in Transmission and other apps. It's a shame that AI is fucking everything up. It's time for real ID for internet communication, I guess.

  • ... submit vulnerability reports on ChatGPT and other LLMs?

  • by MpVpRb ( 1423381 ) on Wednesday May 07, 2025 @02:59PM (#65359609)

    ...a future AI that can actually find tricky bugs and security problems.
    Unfortunately, today, scumballs gotta scumball.
    It will get a lot worse before it gets better.

    • LLMs can and do assist in finding tricky bugs and security problems right now, today.

      The problem, as you mention, is that they're not foolproof at it. They're something that can assist with it. However, they can be used not as an assistant but as the whole shebang, and scumballs gotta scumball.

      What's the solution? I don't know. But it's a problem.
      I've heard suggestions of requiring a small payment for each submission, to be returned if the submission is judged to be in good faith or it pans out.
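
      As a rough sketch of that refundable-deposit idea (the amount, states, and names are all invented for illustration):

```python
# Toy model of a refundable submission deposit: escrow a small fee,
# refund it when triage judges the report to be in good faith.
from enum import Enum

class Verdict(Enum):
    PENDING = "pending"
    GOOD_FAITH = "good_faith"   # refunded, even if the bug doesn't pan out
    SLOP = "slop"               # deposit forfeited

DEPOSIT_CENTS = 500  # e.g. $5: noise for one real report, costly at spam scale

def settle(deposit_cents: int, verdict: Verdict) -> int:
    """Return the amount refunded to the reporter."""
    if verdict is Verdict.GOOD_FAITH:
        return deposit_cents
    if verdict is Verdict.SLOP:
        return 0
    raise ValueError("report is still in triage")
```

      The economics, not the code, do the work here: a few dollars is nothing for one legitimate report but adds up fast for anyone spraying hundreds of generated submissions.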
  • Make it lame, and gay.

  • by gweihir ( 88907 ) on Wednesday May 07, 2025 @04:38PM (#65359837)

    As soon as we leave toy examples behind, the answer apparently is "not at all" and "100% hallucination"...

  • by rknop ( 240417 ) on Wednesday May 07, 2025 @08:12PM (#65360243) Homepage

    Twenty or thirty years ago we started anticipating the AI singularity.

    Today we see that instead we're going to get an AI crapularity.
