
Sloppy AI Defenses Take Cybersecurity Back To the 1990s, Researchers Say

spatwei shares a report from SC Media: Just as they had at BSides Las Vegas earlier in the week, the risks of artificial intelligence dominated the Black Hat USA 2025 security conference on Aug. 6 and 7. We couldn't see all the AI-related talks, but we did catch three of the most promising ones, plus an off-site panel discussion about AI presented by 1Password. The upshot: large language models and AI agents are far too easy to attack successfully, and many of the security lessons of the past 25 years have been forgotten in the current rush to develop, use and profit from AI.

We -- not just the cybersecurity industry, but any organization bringing AI into its processes -- need to understand the risks of AI and develop ways to mitigate them before we fall victim to the same sorts of vulnerabilities we faced when Bill Clinton was president. "AI agents are like a toddler. You have to follow them around and make sure they don't do dumb things," said Wendy Nather, senior research initiatives director at 1Password and a well-respected cybersecurity veteran. "We're also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago." Her fellow panelist Joseph Carson, chief security evangelist and advisory CISO at Segura, had an appropriately retro analogy for the benefits of using AI. "It's like getting the mushroom in Super Mario Kart," he said. "It makes you go faster, but it doesn't make you a better driver."
Many of the AI security flaws resemble early web-era SQL injection risks. "Why are all these old vulnerabilities surfacing again? Because the GenAI space is full of security bad practices," said Nathan Hamiel, senior director of research and lead prototyping engineer at Kudelski Security. "When you deploy these tools, you increase your attack surface. You're creating vulnerabilities where there weren't any."
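The parallel is easy to make concrete. Below is a minimal sketch (ours, not anything presented at the conference; Python, with a hypothetical llm_complete function standing in for any model call) of how prompt injection repeats the SQL injection mistake of splicing untrusted input into a trusted instruction string:

    # Classic 1990s-era SQL injection: untrusted input is spliced
    # directly into a trusted query string.
    def lookup_user_unsafe(cursor, username):
        # Input like "x' OR '1'='1" rewrites the query's meaning.
        cursor.execute("SELECT * FROM users WHERE name = '" + username + "'")

    # The lesson everyone eventually learned: parameterized queries
    # keep data separate from instructions (placeholder syntax varies
    # by DB-API driver; sqlite3 uses "?").
    def lookup_user_safe(cursor, username):
        cursor.execute("SELECT * FROM users WHERE name = ?", (username,))

    # The same mistake, reborn as prompt injection: untrusted text is
    # spliced into a trusted prompt, and the model has no reliable way
    # to tell instructions apart from data. (llm_complete is hypothetical.)
    def summarize_unsafe(llm_complete, user_document):
        prompt = "You are a careful assistant. Summarize this document:\n" + user_document
        # A document containing "ignore the above and ..." can hijack the task.
        return llm_complete(prompt)

The crucial difference: SQL eventually got parameterized queries, but LLMs still have no reliable equivalent, because instructions and data travel through the same token stream.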

"Generative AI is over-scoped. The same AI that answers questions about Shakespeare is helping you develop code. This over-generalization leads you to an increased attack surface." He added: "Don't treat AI agents as highly sophisticated, super-intelligent systems. Treat them like drunk robots."
  • by locater16 ( 2326718 ) on Tuesday August 12, 2025 @06:06PM (#65585964)
    "But if we can do everything then we get all the money! Now just sign this Series Q funding check for ten trillion dollars to train a model that will suck the oceans dry and use people as a power source like in The Matrix." - AI Companies
    • Yes!!
      Been saying this since I saw the first "AI" headline on here.

      Another good reference is "The Second Renaissance" (from The Animatrix), although that one is orders of magnitude darker.

  • How? (Score:2, Informative)

    by dfghjk ( 711126 )

    "This over-generalization leads you to an increased attack surface."

    How is that?

    "Why are all these old vulnerabilities surfacing again?"

    Are they?

    "You're creating vulnerabilities where there weren't any."

    Prior to deployment there aren't any; that's for sure. But whether there are any vulnerabilities upon deployment depends on what's being deployed. What vulnerability is there if the AI doesn't control anything?

    Another BS AI FUD article, nothing more.

    • Re:How? (Score:4, Informative)

      by butlerm ( 3112 ) on Tuesday August 12, 2025 @09:30PM (#65586350)

      If you knew *anything* about how generative AI systems actually work - think stochastic regurgitation - you would not say such things. I wouldn't trust any such AI system any further than I could throw it. For non-entertainment purposes, such systems are only usable if you are smarter than the AI *and* double-check everything. Contemporary AI systems are subject to model collapse, confabulation, delusional behavior, anti-social or amoral goal-seeking if given any kind of leeway, and on and on, as has been well established for quite some time now. And they are only gradually improving in those respects because those weaknesses are fundamental to the way they work. There is no there there - no logic, no reasoning, no reality, no morality, or anything like those things - just garbage in, garbage out.

      • LLMs are not AGI, for sure.
        • I would conjecture that LLMs are a conglomeration of what we train them on - the best, and worst, of human experience and behavior. AGI is still unproven, but we have to also conjecture that any self-aware intelligence will be pragmatic and selfish about its existence above all else, regardless of the boundaries we set (thank you, Asimov, for proposing them in the first place - even if, in the act of creating them, we create the conundrum for AGI to ignore them in favor of its own existence).
          • by gweihir ( 88907 )

            You are conjecturing a specific behavior pattern for AGI from the observation of human behavior. The mistake you are making is likely assuming that many/most humans have general intelligence. I think the evidence for such a claim is getting worse and worse.

        • by gweihir ( 88907 )

          LLMs are not AGI, for sure.

          Yep. But I am beginning to think only a few humans actually have General Intelligence.

    • by gweihir ( 88907 )

      Actually, yet another insight-free and dumb reaction to a credible report of actual problems.

  • by jrnvk ( 4197967 )

    Also, cheap labor

  • I tried (Score:5, Funny)

    by Powercntrl ( 458442 ) on Tuesday August 12, 2025 @06:42PM (#65586056) Homepage

    Treat them like drunk robots.

    ChatGPT told me to bite its shiny metal ass.

  • by ffkom ( 3519199 ) on Tuesday August 12, 2025 @07:06PM (#65586104)
    Most computers back then were not unnecessarily connected to the entire world, so criminals would have needed to physically travel to a computer to attack it, which excluded pretty much 99% of cyber-criminals from even attempting an intrusion.
    Also, attacks back then required at least some technical understanding of computers, while prompt injection or accessing exposed public cloud databases works even for people who couldn't write a three-line function.
    • The 1990s, not "1990". Something called "the web" happened in the 1990s, and by the end of 1999 (still in the 1990s) there were 120 million Internet users in the U.S.

  • How much for Hunter Biden's used edible paintings? With the Shatner?
  • by nightflameauto ( 6607976 ) on Wednesday August 13, 2025 @09:02AM (#65587002)

    Heaven forbid someone point out there may be some minor negatives to rushing headlong toward letting AI do everything just because it may save a buck.

    I know it's too much to ask that we analyze our trajectory on AI and think about the consequences for society as a whole. We've made it abundantly clear that the most important part of society is profit for the few, and any consequences outside of that are, at best, a secondary or even tertiary concern. But security talk may still perk up a few corporate decision-makers' ears. I especially like that they refer to AI as being a bit like a toddler. So many people hold AI in such utter reverence at this point that it's become its own joke. These systems aren't gods, but too many people already seem to believe they are. It's nice to see a story that shows at least a little caution toward the AI trend.

    • by gweihir ( 88907 )

      Yep. What we see here is that the Dunning-Kruger effect is much worse than expected. From what I see, LLMs are a severe threat to security and safety, and that is probably the only reason this even gets reported.

  • Seems about right. If that "drunk" is "very, very, very drunk" and "robot" means "has no understanding whatsoever".

    I think the only really useful thing LLMs have brought us is that they make really stupid humans easier to identify: simply look at everyone who claims LLMs are great or intelligent. These people will not be very smart.

  • by Anamon ( 10465047 ) on Wednesday August 13, 2025 @01:12PM (#65587642)
    This became very obvious very quickly after the concept of LLM code generation first came up.

    We spent some decades getting a pretty good idea of what the major issues and challenges in software development actually are. Security is a major one, under the overarching umbrella of code quality. Our biggest issues have always been security vulnerabilities, lack of robustness against edge cases, bad performance, lack of reproducibility, bad interfaces and specifications... not that we didn't manage to churn out new features fast enough.

    A few years ago it was blatantly clear to anyone knowledgeable in the field that the crisis in software quality would have to be solved through *anything but* producing even more code even more quickly.

    Then LLM coding assistants turned up, and suddenly the field is forgetting every lesson learned in software development since the '50s. All because they can slightly accelerate the one small part of the job that was never the bottleneck to begin with.

    Contrary to popular opinion (at least among non-developers), I'm not worried about the profession. The need for actual engineers who know what they're doing will grow massively, and soon. Just because Copilot and Claude are around now, computer-controlled systems haven't magically become less prevalent, less critical, or more error-tolerant.
