


Sloppy AI Defenses Take Cybersecurity Back To the 1990s, Researchers Say 20
spatwei shares a report from SC Media: Just as it had at BSides Las Vegas earlier in the week, the risks of artificial intelligence dominated the Black Hat USA 2025 security conference on Aug. 6 and 7. We couldn't see all the AI-related talks, but we did catch three of the most promising ones, plus an off-site panel discussion about AI presented by 1Password. The upshot: Large language models and AI agents are far too easy to successfully attack, and many of the security lessons of the past 25 years have been forgotten in the current rush to develop, use and profit from AI.
We -- not just the cybersecurity industry, but any organization bringing AI into its processes -- need to understand the risks of AI and develop ways to mitigate them before we fall victim to the same sorts of vulnerabilities we faced when Bill Clinton was president. "AI agents are like a toddler. You have to follow them around and make sure they don't do dumb things," said Wendy Nather, senior research initiatives director at 1Password and a well-respected cybersecurity veteran. "We're also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago." Her fellow panelist Joseph Carson, chief security evangelist and advisory CISO at Segura, had an appropriately retro analogy for the benefits of using AI. "It's like getting the mushroom in Super Mario Kart," he said. "It makes you go faster, but it doesn't make you a better driver." Many of the AI security flaws resemble early web-era SQL injection risks. "Why are all these old vulnerabilities surfacing again? Because the GenAI space is full of security bad practices," said Nathan Hamiel, senior director of research and lead prototyping engineer at Kudelski Security. "When you deploy these tools, you increase your attack surface. You're creating vulnerabilities where there weren't any."
"Generative AI is over-scoped. The same AI that answers questions about Shakespeare is helping you develop code. This over-generalization leads you to an increased attack surface." He added: "Don't treat AI agents as highly sophisticated, super-intelligent systems. Treat them like drunk robots."
We -- not just the cybersecurity industry, but any organization bringing AI into its processes -- need to understand the risks of AI and develop ways to mitigate them before we fall victim to the same sorts of vulnerabilities we faced when Bill Clinton was president. "AI agents are like a toddler. You have to follow them around and make sure they don't do dumb things," said Wendy Nather, senior research initiatives director at 1Password and a well-respected cybersecurity veteran. "We're also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago." Her fellow panelist Joseph Carson, chief security evangelist and advisory CISO at Segura, had an appropriately retro analogy for the benefits of using AI. "It's like getting the mushroom in Super Mario Kart," he said. "It makes you go faster, but it doesn't make you a better driver." Many of the AI security flaws resemble early web-era SQL injection risks. "Why are all these old vulnerabilities surfacing again? Because the GenAI space is full of security bad practices," said Nathan Hamiel, senior director of research and lead prototyping engineer at Kudelski Security. "When you deploy these tools, you increase your attack surface. You're creating vulnerabilities where there weren't any."
"Generative AI is over-scoped. The same AI that answers questions about Shakespeare is helping you develop code. This over-generalization leads you to an increased attack surface." He added: "Don't treat AI agents as highly sophisticated, super-intelligent systems. Treat them like drunk robots."
"Generative AI is over-scoped" (Score:5, Funny)
Re: (Score:3)
^Yes!!^
Been saying this since I saw the first "AI" headline on here.
Another good reference is "The Second Renaissance" (from The Animatrix) (although, that one is orders of magnitude darker)
How? (Score:2, Informative)
"This over-generalization leads you to an increased attack surface."
How is that?
"Why are all these old vulnerabilities surfacing again?"
Are they?
"You're creating vulnerabilities where there weren't any."
Prior to deployment, there aren't any, that's for sure. But whether there are any vulnerabilities upon deployment depends on what's being deployed. What vulnerability is there if the AI doesn't control anything.
Another BS AI FUD article, nothing more.
Re:How? (Score:4, Informative)
If you knew *anything* about how generative AI systems actually work - think stochastic regurgitation - you would not say such things. I wouldn't trust any such AI system any further than I could throw it. For non-entertainment purposes, such systems are only usable if you are smarter than than the AI *and* double check everything. Contemporary AI systems are subject to model collapse, confabulation, delusional behavior, anti-social or amoral goal seeking if given any kind of leeway, and on and on, as has been well established for quite some time now. And they are only gradually improving in those respects because those weaknessess are fundamental to the way they work. There is no there there - no logic, no reasoning, no reality, no morality, or anything like those things - just garbage in, garbage out.
Re: (Score:3)
Re: (Score:3)
Re: (Score:2)
You are conjecturing a specific behavior pattern for AGI from the observation of human behavior. The mistake you are making is likely assuming that many/most humans have general intelligence. I think the evidence for such a claim is getting worse and worse.
Re: (Score:2)
LLMs are not AGI, for sure.
Yep. But I am beginning to think only a few humans actually have General Intelligence.
Re: (Score:2)
Actually, yet another insightless and dumb reaction to a credible report of actual problems.
Yep (Score:2)
Also, cheap labor
I tried (Score:5, Funny)
Treat them like drunk robots.
ChatGPT told me to bite its shiny metal ass.
IT Security in 1990 was not so bad (Score:4, Informative)
Also, attacks back then required at least some technical understanding of computers, while prompt injection or access to public cloud databases works even for people who could not write a 3 line function.
Re: (Score:3)
1990s not "1990". Something called "the web" happened in the 1990s and by the end of 1999 (still in the 1990s) there 120 million Internet users in the U.S.
Halfing Less Children with More Apparent Errors (Score:2)
Re: (Score:2)
-1, Doesn't Promote AI as a New God (Score:3)
Heaven forbid someone point out there may be some minor negatives to rushing headlong toward letting AI do everything just because it may save a buck.
I know it's too much to ask that we analyze our trajectory on AI and think about the consequences for society as a whole. We've made it abundantly clear that the most important part of society is profit for the few, and any consequences outside of that are, at best, a secondary or even a tertiary concern. But security talk may still perk a few corporate decision-maker ears. I especially like that they refer to AI as being a bit like a toddler. So many have utter reverence for AI at this point it's kind of become its own joke. They aren't gods, but too many people already seem to believe them to be. It's nice to see a story that shows at least a little bit of caution towards the AI trend.
Re: (Score:2)
Yep. What we see here is that the Dunning-Kruger Effect is much worse than expected. From what I see, LLMs are a severe threat to security and safety and that is probably the only reason why this even gets reported.
"Drunk robots" (Score:2)
Seems about adequate. If that "drunk" is "very, very, very drunk" and "robot" means "has no understanding whatsoever".
I think the only really useful thing LLMs have brought us is that they make really stupid humans easier to identify. Simply look at all that claim LLMs are great or intelligent. These people will not be very smart.
Of course (Score:3)
We spent some decades getting a pretty good idea of what the major issues and challenges in software development actually are. Security is a major one, under the overarching umbrella of code quality. Our biggest issues have always been security vulnerabilities, lack of robustness against edge cases, bad performance, lack of reproducibility, bad interfaces and specifications... not that we didn't manage to churn out new features fast enough.
A few years ago it was blatantly clear to anyone knowledgeable in the field that the crisis in software quality will have to be solved through *anything but* producing even more code even more quickly.
Then LLM coding assistants turn up, and suddenly the field is forgetting every lesson learned in software development since the 50s. All because it can slightly accelerate that one small part of the job that was never the bottleneck to begin with.
Contrary to popular opinion (at least among non-developers) I'm not worried about the profession. The need for actual engineers who know what they're doing will grow even more massively soon. Just because Copilot and Claude are around now, computer-controlled systems didn't magically get less prevalent, critical, or more error-tolerant.