Google's AI Overviews Led Users Astray, Reports Say Some Phone Numbers Are Scams (androidcentral.com)

Google's AI Overviews has returned fraudulent customer service phone numbers in multiple reported incidents.

A Reddit user reported their friend received a fake number when searching "Swiggy [an Indian food delivery firm] customer care number," leading to attempted screen-sharing and money request scams. Facebook user Alex Rivlin encountered scammers after searching "royal caribbean customer service phone number 24 hours usa." The fraudulent representative requested credit card information before Rivlin detected the scam. Google said it is "aware" of the issue and has "taken action" against identified numbers. The company stated it is working to "improve results."


Comments Filter:
  • AI lies. That's why lawyers keep getting in trouble with judges.

    Trusting AI is no different than looking at a kid with chocolate smeared all over its face and asking it who ate the cake.

    You are going to get an answer, but there is no reason at all to believe it.

    • You are going to get an answer, but there is no reason at all to believe it.

      The cake is a lie. -AI

    • How come I just asked it for my bank's customer service number, and it was correct?

    • by Jeremi ( 14640 )

      Lying requires intention, and while it's certainly possible to program an AI to intentionally lie, that isn't what happened here.

      What happened here was that an AI's training data included fraudulent forum posts, it didn't recognize them as misinformation, and it repeated that misinformation in its summaries later on. A real human could easily make the same mistake when googling for a phone number to send to his boss, and nobody would accuse him of lying. Carelessness or naivete, maybe, but not lying.

      • by Rinnon ( 1474161 )
        I completely agree, but I'd go one step further and point out that, like lying, carelessness or naivete are also personifications that we are projecting onto these so-called AIs. Let's not give them the credit of treating them like a naughty child who can be taught to do better and just call them what they are: kinda buggy (or janky, if you like).
      • I wonder if we could teach an AI to recognize that scam the way a person who knew how to spot it would teach someone who didn't: it's context clues. Not just the information itself, but where it is, who is saying it, what else is on the site.

        Of course, I also don't understand enough about how these systems work to know why they would biff such a request when that request has a single correct answer, and that answer is going to be on the company's website, the same way any of us would look it up if a person asked them.

      • Lying requires intention, and while it's certainly possible to program an AI to intentionally lie, that isn't what happened here.

        This is accurate: it's not the AI that is lying. It's the people selling AI.

      • "AI" will give you something that looks like a phone number, without any concern for whether it is the actual phone number.

    • Was helping my neighbor work on his car today. Did not have a shop manual, so I asked Gemini for the torque value for a particular bolt on his year/model/etc. Gemini said 20 ft-lbs. Seemed low, so I did a text search in the browser. The AI Overview said 80 ft-lbs. Seemed high. Went in the house, did a basic search, found model owners' forums, found a shop diagram: 40 ft-lbs it is. We just did this on another friend's car back in the spring, so I had a pretty good idea what a proper value should be.

      It's safe to say
      • by spitzak ( 4019 )

        Much less dangerous, but I had an Nvidia driver update that failed (on Linux). A Google search for "Nvidia driver 450 and later don't work on Linux" came up with something like "it's a known fact that versions 450 and later sometimes fail, and here are some solutions...". I tried a few of the solutions and they did nothing. I then ran more accurate tests of which versions worked and changed the search to "Nvidia driver 435 and earlier work..." and it came up with "it's a known fact that versions after 435 sometimes fail..."

  • GMail/Google has been hosting Nigerian princes since long before LLMs.

  • I asked ChatGPT to turn a text file of locations into a shape file. It returned only a .shp file, which is wrong. I told it shape files have 4 files... so it made 4 files with those file extensions... all empty files. It was like talking to a pathological liar.
    • If you told me to do it, would you have to explain a bit more what a shape file is?

      • Sure, but at least you know you don't know what a shape file is.
        I've worked with map stuff before, so I am familiar with them. I'm usually looking to convert to GeoJSON because that's what works best with the tools I use, and sometimes it is shape files I am converting from.

        If you are wondering, it is a type of map data file originally invented for use by ArcView GIS.
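
        A quick aside on the mechanics, since the thread touches on it: a shapefile is really a bundle of sidecar files (.shp geometry, .shx index, .dbf attributes, usually .prj for the projection), which is why a lone .shp is useless. If GeoJSON is the actual target, as mentioned above, a few lines of Python will do it without any GIS tooling. This is only a minimal sketch: the input file name locations.csv and its name/lat/lon columns are assumptions, not anything from the thread.

            import csv
            import json

            # Read a plain-text list of locations (assumed columns: name, lat, lon)
            features = []
            with open("locations.csv", newline="") as f:
                for row in csv.DictReader(f):
                    features.append({
                        "type": "Feature",
                        "geometry": {
                            "type": "Point",
                            # GeoJSON wants [longitude, latitude], in that order
                            "coordinates": [float(row["lon"]), float(row["lat"])],
                        },
                        "properties": {"name": row["name"]},
                    })

            # Write everything out as a single GeoJSON FeatureCollection
            with open("locations.geojson", "w") as f:
                json.dump({"type": "FeatureCollection", "features": features}, f, indent=2)

        Going the other way, text to an actual shapefile, is where a library like GDAL/OGR or geopandas earns its keep, precisely because of that multi-file format.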

  • by sconeu ( 64226 ) on Tuesday August 19, 2025 @06:45PM (#65601070) Homepage Journal

    Slashdot's front page led users astray; reports duplicate stories are new.

    https://yro.slashdot.org/story... [slashdot.org]

    • by Pollux ( 102520 )

      msmash probably asked Google Gemini, "Has Slashdot posted this story yet?", and it hallucinated the wrong answer.

    • Slashdot's front page led users astray; reports duplicate stories are new.

      https://yro.slashdot.org/story... [slashdot.org]

      When TFS or the headline is misleading, you head to the comments to complain about it. Maybe a few people read TFA. Engagement goes up anyway.

      When the Google AI summary is wrong you close the browser tab because you got what you came for, as far as you know. Even if you knew it was wrong, how do you let others know?

      I don't want to in any way defend engagement-bait headlines, but at least their purpose on a place like this, or a call-in show, or whatever, is to get you to argue about it, not to send you on your way totally misinformed.

  • by jenningsthecat ( 1525947 ) on Tuesday August 19, 2025 @08:37PM (#65601246)

    I use uBlock in Firefox, and I've simply blocked the shitty AI summary crap on both Google and DDG. I never see it anymore. If I want AI I'll log into ChatGPT. In fact I did that recently, in order to find a YouTube video about which I didn't remember enough to find via a search engine.

    As far as I'm concerned, shoving unasked-for stuff in my face is the very best reason not to use it.
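
    For anyone wanting to replicate that setup: uBlock Origin's "My filters" pane takes per-site cosmetic rules of the form domain##selector, and hiding the AI answer boxes is just a matter of pointing one at the right container. The selectors below are placeholders, not the real ones; both sites change their markup often enough that you'd want to grab the current element with uBlock's picker and substitute it in.

        ! Hide AI answer boxes (placeholder selectors; use the element picker to find the live ones)
        google.com##.ai-overview-container
        duckduckgo.com##.duckassist-answer-box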

  • I hope there will be a court case in which Google gets slammed hard for having provided wrong information in its "AI summary".

    The world needs something like that, to set a precedent that forces AI companies to take responsibility for what their products produce.

  • Technology will move on, stuff will change, but there will always be scammers and suckers dumb enough to fall for the latest trick. Trusting a phone number from an AI based on the shit peeps post about cats is so dumb. If you're going to phone your bank, go to the bank's website for its customer service details. You could still be scammed (DNS takeover), but it's a lot harder.
