Google's AI Overviews Led Users Astray, Reports Say Some Phone Numbers Are Scams (androidcentral.com) 39

Google's AI Overviews has returned fraudulent customer service phone numbers in multiple reported incidents.

A Reddit user reported their friend received a fake number when searching "Swiggy [an Indian food delivery firm] customer care number," leading to attempted screen-sharing and money request scams. Facebook user Alex Rivlin encountered scammers after searching "royal caribbean customer service phone number 24 hours usa." The fraudulent representative requested credit card information before Rivlin detected the scam. Google said it is "aware" of the issue and has "taken action" against identified numbers. The company stated it is working to "improve results."

Comments Filter:
  • Do not trust AI (Score:5, Insightful)

    by gurps_npc ( 621217 ) on Tuesday August 19, 2025 @06:07PM (#65600998) Homepage

    AI lies. That's why lawyers keep getting in trouble with judges.

    Trusting AI is no different than looking at a kid with chocolate smeared all over its face and asking it who ate the cake.

    You are going to get an answer, but there is no reason at all to believe it.

    • by burtosis ( 1124179 ) on Tuesday August 19, 2025 @06:15PM (#65601002)

      You are going to get an answer, but there is no reason at all to believe it.

      The cake is a lie. -AI

      • Nah...I think the AI is more likely to tell you to assume the party position. And if the AI is named Siri, ArchieBunker will comply, and nomoreacs will attack anybody who doesn't.

      • Recently a man decided to reduce his intake of table salt – sodium chloride – for health reasons. ChatGPT suggested that he substitute sodium bromide for his sodium chloride. He figured they are both sodium compounds, so close enough, but he was wrong.
        In a case like this, bad advice was not the root of the problem.
        https://www.livescience.com/he... [livescience.com]

    • How come I just asked it for my bank's customer service number, and it was correct?

    • by Jeremi ( 14640 )

      Lying requires intention, and while it's certainly possible to program an AI to intentionally lie, that isn't what happened here.

      What happened here was that an AI's training data included fraudulent forum posts; it didn't understand that they contained misinformation, and it included that misinformation in its summaries later on. A real human could easily make the same mistake when googling for a phone number to send to his boss, and nobody would accuse him of lying. Carelessness or naivete, maybe, but not lying.

      • by Rinnon ( 1474161 )
        I completely agree, but I'd go one step further and point out that, like lying, carelessness or naivete are also personifications that we are projecting onto these so-called AI. Let's not give them the credit of treating them like a naughty child who can be taught to do better and just call them what they are: kinda buggy (or janky, if you like).
        • Funny how humans are kinda buggy, too. Just like you shouldn't trust AI, you shouldn't blindly follow what another human tells you. It's almost like you need to know how to filter good information from bad; whether the information comes from a computer or a human isn't really that different.
          • by Calydor ( 739835 )

            The thing is that an AI is the perfect liar. It has no tells. There's no tug at the corner of its mouth, no raised eyebrow, no expectant look in its eyes. Its lies look entirely the same as its truths because it doesn't know the difference. Without showing sources (e.g. getting the number from a random Reddit post that lived for an hour before being deleted) you can't tell whether the number it gives you will connect to where you're told it'll connect.

        • Let's not give them the credit of treating them like a naughty child who can be taught to do better and just call them what they are: kinda buggy (or janky, if you like).

          I would call LLMs "unfit for the advertised purpose" as they cannot be trusted to do any of the jobs people want them to do without supervision. Then I would go on to call selling them for the purpose of performing unsupervised tasks "fraud".

          The only thing they are good for is helping people who already know how to do a task do it faster, and even then they may not be helpful, not just for a given type of task but for a particular instance, and there's no way to know when they will fail spectacularly.

      • I wonder if we could teach an AI to recognize that scam the way a person who knows how to spot it would teach someone who doesn't: through context clues. Not just the information itself, but where it is, who is saying it, what else is on the site.

        Of course, I also don't understand enough about how these systems work to know why it would biff such a request when the request has a singular correct answer, and that answer is going to be on the company's website, the same place any of us would look it up if a person asked us.

        • The AI doesn't care, because that's not what it's designed to do. The "AI" only looks at how words are strung together in the vast data set it has been trained on, and then gives an answer based upon that. It doesn't "know" what's right or wrong, what's credible or not. It only knows that "Company A has this number" based upon what's inside the data set. So the problem here must be that the data set has been "infected" with false information. I don't know how that "infection" happened, but it could be malicious.
      • Lying requires intention, and while it's certainly possible to program an AI to intentionally lie, that isn't what happened here.

        This is accurate, it's not the AI that is lying. It's the people selling AI.

      • "AI" will give you something that looks like a phone number, without any concern for whether it is the actual phone number.

    • Was helping my neighbor work on his car today. Did not have a shop manual, so asked Gemini the torque value for a particular bolt on his year/model/etc. Gemini said 20 ft-lbs. Seemed low, so did a text search in the browser. AI overview said 80 ft-lbs. Seemed high. Went in the house, did a basic search, found model owners' forums, found the shop diagram, 40 ft-lbs it is. We just did this on another friend's car back in the spring, so I had a pretty good idea what a proper value should be.

      It's safe to say
      • by spitzak ( 4019 )

        Much less dangerous, but I had an Nvidia driver update that failed (on Linux). A Google search for "Nvidia driver 450 and later don't work on Linux" came up with something like "it's a known fact that versions 450 and later sometimes fail, and here are some solutions...". I tried a few of the solutions and they did nothing. I then did more accurate tests of the versions and changed the search to "Nvidia driver 435 and earlier work..." and it came up with "it's a known fact that versions after 435 sometimes fail...".

      • I used google lens the other day to find out what kind of feather I had found.
        I was told it was a python tail feather...
        go figure.

  • GMail/Google has been hosting Nigerian princes since long before LLMs.

  • I asked ChatGPT to turn a text file of locations into a shape file. It returned only a .shp file, which is wrong. I told it shape files have 4 files... so it made 4 files with those file extensions... all empty files. It was like talking to a pathological liar.
    • If you told me to do it, would you have to explain a bit more what a shape file is?

      • Sure, but at least you know you don't know what a shape file is.
        I've worked with map stuff before, so I am familiar with them. I'm usually looking to convert to GeoJSON because that's what works best with the tools I use, and sometimes it is shape files I am converting from.

        If you are wondering, it is a type of map data file originally invented for use by ArcView GIS.

    • Your prompt is imprecise, though. A "shape file" means exactly that: one .shp file. What you are describing is the shapefile format, which doesn't need four files but three, namely .shp, .shx and .dbf. It may not have changed anything, as ChatGPT isn't made for this purpose at all. It's like using a fork to chop down a tree. It's a common misconception that LLMs are somehow intelligent and made to solve these kinds of tasks, but they aren't.
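    • For what it's worth, the GeoJSON conversion the GP mentioned doesn't need an LLM at all. GeoJSON is plain JSON, so Python's standard library is enough. A minimal sketch, assuming a hypothetical CSV input with name/lat/lon columns (the column names are my assumption, not anything from the original post):

```python
import csv
import io
import json

def locations_to_geojson(csv_text):
    """Convert CSV rows with name, lat, lon columns into a GeoJSON FeatureCollection."""
    features = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                # GeoJSON coordinate order is [longitude, latitude]
                "coordinates": [float(row["lon"]), float(row["lat"])],
            },
            "properties": {"name": row["name"]},
        })
    return {"type": "FeatureCollection", "features": features}

# Hypothetical sample input
sample = "name,lat,lon\nOffice,12.9716,77.5946\n"
print(json.dumps(locations_to_geojson(sample), indent=2))
```

      Writing an actual .shp/.shx/.dbf set is a different story: those are binary formats, and you'd normally reach for a dedicated library such as pyshp or GDAL rather than hand-rolling them.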
  • by sconeu ( 64226 ) on Tuesday August 19, 2025 @06:45PM (#65601070) Homepage Journal

    Slashdot's front page led users astray; reports duplicate stories are new.

    https://yro.slashdot.org/story... [slashdot.org]

    • by Pollux ( 102520 )

      msmash probably asked Google Gemini, "Has Slashdot posted this story yet?", and it hallucinated the wrong answer.

    • Slashdot's front page led users astray; reports duplicate stories are new.

      https://yro.slashdot.org/story... [slashdot.org]

      When TFS or the headline are misleading, you head to the comments to complain about it. Maybe a few people read TFA. Engagement goes up anyways.

      When the Google AI summary is wrong you close the browser tab because you got what you came for, as far as you know. Even if you knew it was wrong, how do you let others know?

      I don't want to in any way defend engagement-bait headlines, but at least their purpose on a place like this, or a call-in show or whatever, is to argue about them, not to send you on your way totally misinformed.

  • by jenningsthecat ( 1525947 ) on Tuesday August 19, 2025 @08:37PM (#65601246)

    I use uBlock in Firefox, and I've simply blocked the shitty AI summary crap on both Google and DDG. I never see it anymore. If I want AI I'll log into ChatGPT. In fact I did that recently, in order to find a YouTube video about which I didn't remember enough to find via a search engine.

    As far as I'm concerned, shoving unasked-for stuff in my face is the very best reason not to use it.
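    A sketch of the kind of uBlock Origin cosmetic filter being described, using the real `domain##selector` filter syntax; the CSS selectors below are hypothetical placeholders, since Google and DDG change their markup frequently and the current ones have to be found with uBlock's element picker:

```
! Hide AI summary blocks (selectors are placeholders; verify with the element picker)
google.com##.ai-overview-container
duckduckgo.com##.ai-summary-module
```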

  • I hope that there will be a court case, in which Google gets slammed hard for having provided wrong information in their "AI summary".

    The world needs something like that, to set precedent to force AI companies to take responsibility for what their products produce.

  • Technology will move on, stuff will change, but there will always be scammers and suckers dumb enough to fall for the latest trick. Trusting a phone number from an AI based on the shit peeps post about cats is so dumb. If you're going to phone your bank, go to the bank's website for its customer service details. You could still be scammed (DNS takeover) but it's a lot harder.
    • Exactly this! It's like trusting your friend when they tell you about the latest MLM they just bought into, when they say that THIS MLM isn't a scam!
    • Normally I'm just as critical of people "stupid enough to fall for a scam". However, I do think this whole AI summary thing is different. People have been "trained" to (somewhat) trust search engines like Google: they know that webpages that come up in the results can contain "bad" information, but the narrative in most people's heads is that Google itself won't lie to them. Now we have a situation where people are asking GOOGLE "What is [X] bank's customer service number", and Google itself is handing them a scammer's number.

      • I suspect if you tried to take this sort of thing to the law courts, the likes of Google et al would probably try to use an argument similar to the one YouTube uses for content, and claim they can't be held responsible for the underlying data. In other words, an AI (read: LLM) is really just a fancy-pants text predictor, and as such the output is just an advanced form of search. The key point would be whether the average person could be expected to know this, or whether, as you say, they'd believe it was actually Google giving them the answer.
  • by gary s ( 5206985 ) on Wednesday August 20, 2025 @08:32AM (#65602060)
    Pretty much, if you're not in my contact list I don't want to talk to you. Even if you're a legit company, if I have no established business relationship with you, I don't want to talk with you. Not to mention all the true spam/scam calls I get. E.g., "I am from your internet company and have an offer for you. We can lower your rate, just give us some info" (that my internet company already has). First, if you don't even know the name of my internet company, I doubt you're from them. Second, how many internet providers call you and offer to save you money? The scam/spam calls I get come in waves: weeks of several calls a day, then months with nothing. I assume it's just someone selling data and someone acting on it. Never from a valid phone number; spoofed numbers, hacked numbers. The phone is for my use, and I don't really want to talk to you if I don't know you.
  • They are USELESSLY WRONG
