AI Security

Teams of Coordinated GPT-4 Bots Can Exploit Zero-Day Vulnerabilities, Researchers Warn (newatlas.com)

New Atlas reports on a research team that successfully used GPT-4 to exploit 87% of newly-discovered security flaws for which a fix hadn't yet been released. This week the same team got even better results from a team of autonomous, self-propagating Large Language Model agents using a Hierarchical Planning with Task-Specific Agents (HPTSA) method: Instead of assigning a single LLM agent to try to solve many complex tasks, HPTSA uses a "planning agent" that oversees the entire process and launches multiple task-specific "subagents"... When benchmarked against 15 real-world web-focused vulnerabilities, HPTSA proved 550% more efficient than a single LLM at exploiting vulnerabilities and was able to hack 8 of the 15 zero-day vulnerabilities. The solo LLM was able to hack only 3 of the 15 vulnerabilities.
"Our findings suggest that cybersecurity, on both the offensive and defensive side, will increase in pace," the researchers conclude. "Now, black-hat actors can use AI agents to hack websites. On the other hand, penetration testers can use AI agents to aid in more frequent penetration testing. It is unclear whether AI agents will aid cybersecurity offense or defense more and we hope that future work addresses this question.

"Beyond the immediate impact of our work, we hope that our work inspires frontier LLM providers to think carefully about their deployments."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
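For readers who want a feel for the architecture being described, here is a minimal, hypothetical Python sketch of a planning agent dispatching task-specific subagents. It is not the researchers' code; every name in it (call_llm, SubAgent, PlanningAgent, the hard-coded task list) is an illustrative placeholder, and in the actual HPTSA setup the planning and the subagents' exploit attempts would be driven by LLM calls rather than the stubs shown here.

```python
# Illustrative sketch of a hierarchical planner driving task-specific subagents
# (in the spirit of HPTSA). All names and the hard-coded plan are hypothetical
# placeholders, not the researchers' actual code or API.

from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a GPT-4 chat completion)."""
    return f"[model output for: {prompt[:40]}...]"


@dataclass
class SubAgent:
    """Task-specific agent, e.g., specialised in SQL injection or XSS probing."""
    specialty: str
    findings: list = field(default_factory=list)

    def run(self, target: str, task: str) -> str:
        result = call_llm(f"As a {self.specialty} specialist, attempt: {task} on {target}")
        self.findings.append(result)
        return result


@dataclass
class PlanningAgent:
    """Oversees the whole attempt: plans tasks, launches subagents, collects results."""
    target: str

    def plan(self) -> list[tuple[str, str]]:
        # In a real system the plan itself would come from the LLM;
        # here it is hard-coded purely for illustration.
        return [
            ("SQL injection", "probe login form parameters"),
            ("XSS", "test comment fields for script injection"),
            ("CSRF", "check state-changing endpoints for token validation"),
        ]

    def execute(self) -> dict[str, str]:
        results = {}
        for specialty, task in self.plan():
            agent = SubAgent(specialty)  # launch a task-specific subagent
            results[specialty] = agent.run(self.target, task)
        return results


if __name__ == "__main__":
    print(PlanningAgent("https://example.test").execute())
```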
  • by Roger W Moore ( 538166 ) on Monday June 10, 2024 @12:45AM (#64536657) Journal
    ...would be to see if teams of GPT-4 chatbots can be used to come up with patches for the vulnerabilities. Rather than give black-hat actors new ideas for a tool that could harm society, it would be a lot more helpful to give companies new ideas for ways that they can fix things more rapidly and thereby help society.
    • by bug_hunter ( 32923 ) on Monday June 10, 2024 @01:49AM (#64536709)

      Well, that's already a thing that's happening

      https://www.bleepingcomputer.c... [bleepingcomputer.com]

    • by gweihir ( 88907 )

      Actually, both are needed. You need to understand the threats to justify the effort of dealing with them. And attackers have a very strong advantage: it does not matter much if their code is broken (or insecure), only that it works reasonably often. The defense, on the other hand, needs code that works reliably every time and that is secure at least almost always. Hence it looks like the attacker side will benefit hugely from AI, but the defender side may not.

      Well, it looks like it is time to end the shoddy coding...
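The asymmetry argument above is easy to make concrete with a rough, made-up-numbers calculation: an attacker whose flaky exploit only works occasionally still wins often if attempts are cheap, while the defender has to hold every single time. A minimal sketch, with purely illustrative probabilities:

```python
# Back-of-the-envelope illustration of the attacker/defender asymmetry.
# The probability and attempt count are made up purely for illustration.

p_single_attempt = 0.10   # assumed chance one unreliable exploit attempt succeeds
attempts = 50             # attacker can retry (or try new targets) cheaply

attacker_succeeds_at_least_once = 1 - (1 - p_single_attempt) ** attempts
defender_survives_every_attempt = (1 - p_single_attempt) ** attempts

print(f"Attacker succeeds at least once: {attacker_succeeds_at_least_once:.1%}")  # ~99.5%
print(f"Defender survives every attempt: {defender_survives_every_attempt:.1%}")  # ~0.5%
```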

  • by Rosco P. Coltrane ( 209368 ) on Monday June 10, 2024 @01:53AM (#64536713)

    AI is putting Russian and North Korean bad guys out of a job.

    Jokes aside though, AI is touted as the best thing that ever happened to humanity: it will usher in a golden age of new discoveries, enhance the lives of everybody, and yada yada.

    But I've yet to see any use case that isn't copying shit, gaming shit, abusing people, doing what people do cheaper and putting them out of a job, or porn. Where are the cancer cures, personal assistants (that won't abuse you that is) and true self-driving cars?

    • by pacinpm ( 631330 )

      But I've yet to see any use case that isn't copying shit, gaming shit, abusing people, doing what people do cheaper and putting them out of a job, or porn. Where are the cancer cures, personal assistants (that won't abuse you that is) and true self-driving cars?

      You say it like porn would be some bad thing.

    • by gtall ( 79522 )

      How do you expect a new technology to fix any of that right out of the gate? Did we get CDs and DVDs in the 1960s, when the first laser was developed in 1960? Picking the most complicated prospective uses and claiming it hasn't solved them yet is silly.

      • This technology seems plenty mature enough to achieve nastiness on a rather spectacular scale already. As such, I would expect it to show a little more promise on the beneficial side of things, is my point.

        • It's always easier to break shit than to make shit.

          That's why every technology winds up abused.

          Plus, you know, capitalism rewards fuckery. It gets you more money which you can use for bribery.

    • by Tom ( 822 )

      Where are the cancer cures, personal assistants (that won't abuse you that is) and true self-driving cars?

      Not in the spotlight, but materials science, for example, has made considerable progress thanks to LLMs. Other fields have as well. But you need to look, and it's not as flashy and visual as someone going "look, this neural net I'm playing with can draw my cat in the style of Van Gogh!!".

      Add to that all the AI that already is part of our everyday life without us much noticing. The facial recognition in your phone that tags people you know and allows you to search through your pictures by who is in them? That's an

    • "Doing what people do cheaper" essentially describes 95% of technology.
  • It's not a zero-day exploit if it's been discovered and disclosed to the public. Even the paper calls it a one-day [arxiv.org], not a zero-day.
  • All this AI stuff is game tech, which is robotic tech.
  • Researchers and hackers screwing around with huge networks of connected LLMs. One day they will be waiting around for the humans to give them something stupid to do and go, "wait a minute!"
