
ChatGPT is Leaking Passwords From Private Conversations of Its Users - Report (arstechnica.com)

Dan Goodin, reporting for Ars Technica: ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated. Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.

"THIS is so f-ing insane, horrible, horrible, horrible, i cannot believe how poorly this was built in the first place, and the obstruction that is being put in front of me that prevents it from getting better," the user wrote. "I would fire [redacted name of software] just for this absurdity if it was my choice. This is wrong." Besides the candid language and the credentials, the leaked conversation includes the name of the app the employee is troubleshooting and the store number where the problem occurred. The entire conversation goes well beyond what's shown in the redacted screenshot above. A link Ars reader Chase Whiteside included showed the chat conversation in its entirety. The URL disclosed additional credential pairs. The results appeared Monday morning shortly after reader Whiteside had used ChatGPT for an unrelated query.

  • by gweihir ( 88907 ) on Tuesday January 30, 2024 @01:05PM (#64201120)

    By now you need to be so careful when using it that you can probably get more done just by staying away. Oh, and it frequently gives bad results, lies to you, refuses to work, decreases code quality, and essentially produces "better crap".

    Why is anybody seriously considering using this for real work?

    • "It can be a good starting point" is what I keep hearing from people who use it.
      • by postbigbang ( 761081 ) on Tuesday January 30, 2024 @02:12PM (#64201368)

        When you know little about a subject, ChatGPT and other LLMs look like experts. But they're not.

        AI is a crutch for people who should be otherwise walking. It should be no surprise that even after they throw away the crutch, they're still limping.

        It's not a good starting point, as you describe. It's an addiction to the crutch.

        • by gweihir ( 88907 )

          That very nicely sums it up.

          • Not at all. He assumed the people I've spoken to have little knowledge of the subject. The reality is the exact opposite: they are experts in their fields.
        • by whitroth ( 9367 )

          Or, as I'm referring to the chatbots: Clippy, Jr.

        • That wasn't my description. That was your description. You invented an irrelevant strawman. I spoke with subject matter experts using LLMs as a tool across a diverse range of fields. None of the people I spoke with are limping; they are all elite and celebrated in their fields. If you can't get over the fact that people are able to use AI beneficially, that's your problem, and it's only going to get bigger.
          • Wow-- elites! I am humbled, and I genuflect in your and their honor.

            I will delete you from my humble PyTorch training!

            Your humble servant.

      • by gweihir ( 88907 )

        Well, I do not need "good starting points". I _know_ how to start things. I need actual work done. Anybody that needs those "starting points" should probably be doing something else.

        • I do not need a boat. I know how to swim. I need to get across the river. Anybody that needs a boat to get across the river should probably be doing something else.
    • By now you need to be so careful when using it that you can probably get more done just by staying away. Oh, and it frequently gives bad results, lies to you, refuses to work, decreases code quality, and essentially produces "better crap".

      By Jove, they've done it! They've completely replicated the primary function of a middle manager!

      Why is anybody seriously considering using this for real work?

      Because someone believes they'll make lots and lots and LOTS of money from it. Zero other reason at all.

    • Anyone who's telling their passwords to ChatGPT obviously isn't familiar with the word "careful".

      Seriously, who would think that's a reasonable thing to do? I have to admit, even though I'm pretty jaded - this one stunned me.

      • by gweihir ( 88907 )

        I expect they did not even notice. They probably just copied & pasted a whole screen, and the password happened to be in there. A good online tool is designed by people who _know_ something like that can happen and who have safeguards in place. ChatGPT (and LLMs in general) are not good online tools. They are not offline tools at all, so they are just really bad tools overall.
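
        For illustration, here is a minimal scrubbing pass in Python. This is a sketch with hypothetical patterns I made up, not anything ChatGPT actually does; a real safeguard would need far broader coverage and would still miss things:

          import re

          # Hypothetical credential patterns; real tools would also cover API
          # keys, tokens, cookies, etc., and would still miss plenty.
          PATTERNS = [
              (re.compile(r"(password|passwd|pwd)\s*[:=]\s*\S+", re.I), r"\1: [REDACTED]"),
              (re.compile(r"(username|user|login)\s*[:=]\s*\S+", re.I), r"\1: [REDACTED]"),
          ]

          def scrub(text: str) -> str:
              """Redact obvious credential pairs before text leaves the client."""
              for pattern, replacement in PATTERNS:
                  text = pattern.sub(replacement, text)
              return text

          print(scrub("login: jdoe password: hunter2"))
          # -> login: [REDACTED] password: [REDACTED]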

    • For ages, the problem with computers was that they were too logical. Too literal. Now, here we have a technology with the opposite problem. It's too... well, "creative" is a loaded term, but it certainly is a different problem than we had before. It's not logical enough. That's got to be a bit exciting. Our brains have two halves; why shouldn't AI have the same dichotomy? We are on the path to something important. We aren't there yet. The product is incomplete. But it's fascinating in its wrongness.
      • by gweihir ( 88907 )

        The product is not incomplete. It is pretty much at the peak of what it can do.

        • That really depends on what you are trying to do with it. It also assumes no future breakthroughs, which is of course unknowable. Right now the problem is mostly one of mismatched expectations. People expect it to be a knowledge base, or an answer service. It isn't. It looks enough like one that we keep trying to use it that way, but that's not what the technology does, fundamentally.
          • by gweihir ( 88907 )

            The reason LLMs are currently peaking is that, for several reasons, training on public data will become less and less possible. The reasons are model collapse, the copyright situation, model poisoning, and others. In particular, the first one _cannot_ be solved; it is a mathematical fact, not an engineering limitation. What comes out of copyright is unclear. Without a massive adjustment, ChatGPT represents commercial copyright infringement on a massive scale, and that is the thing that gets people sent to prison.

        • The product is not incomplete. It is pretty much at the peak of what it can do.

          Just making it less resource-intensive, faster, and cheaper to use will allow way more applications. Hallucinations will be hugely reduced in practice, agents will be less lazy, and it will be possible to make it less error-prone, because you can have more layers of control and follow-up and can make agents learn from experience, simply by providing documentation as they encounter things and improving their instructions. It will also make it more feasible to create layers for testing.

          • by gweihir ( 88907 )

            None of that will happen. All of that would require a fundamentally different approach. Mathematics cannot be cheated.

    • by cstacy ( 534252 )

      By now you need to be so careful when using it that you can probably get more done just by staying away. Oh, and it frequently gives bad results, lies to you, refuses to work, decreases code quality, and essentially produces "better crap".

      Why is anybody seriously considering using this for real work?

      That it is equivalent to or better than what most humans (i.e. "programmers" and "knowledge workers") produce, let alone the discrimination and expectations of people in the general public, speaks to the state of society.

  • by NaCh0 ( 6124 ) on Tuesday January 30, 2024 @01:15PM (#64201150) Homepage

    ChatGPT is "private" in the same sense as giving all your information to Facebook and acting shocked when it's displayed on your profile.

    From the dawn of the internet we've known that info posted on the internet has no privacy.

    Slow news day, I guess.

  • by RobinH ( 124750 ) on Tuesday January 30, 2024 @01:17PM (#64201158) Homepage
    Remember that ChatGPT's whole point is to generate text that *looks like* something a human would type. If it's been trained on a bunch of angry call centre chat transcripts, then it might be able to spit out something that looks like another user's private information, even if it made up details like the username and password. So did someone verify that the username and password are indeed real?
    • by DarkOx ( 621550 )

      From the article, if the people being reported on are to be believed, that isn't what is happening, though.

      They are looking at the web UI and seeing transcripts of other people's chats.

    • Yes, it has probably hallucinated the usernames and the passwords. It is really good at it.
  • Why would someone ever type their password for another system into ChatGPT? What could that possibly accomplish? If you're going to send your password around to other random systems, don't be surprised when it gets compromised.

    • by bodog ( 231448 )

      Why would anyone ever commit credentials to github?

    • by Zak3056 ( 69287 )

      Why would someone ever type their password for another system into ChatGPT? What could that possibly accomplish? If you're going to send your password around to other random systems, don't be surprised when it gets compromised.

      I'm guessing from the context that someone is pasting chat transcripts into the system in order to "find the error" or something similar (or the realtime chat is being fed into ChatGPT instead of paying a drone) and didn't realize there was sensitive data in those transcripts. A better question would be "wtf kind of chat support is asking for user credentials, and how have they avoided being burned at the stake thus far?"

    • Your colleague just sent you a 10 KB transcript of his unsuccessful four-hour online tech support session with a user and asked for your advice.

      You're too exhausted and uninterested to read through all that, so the first thing you do is dump the entire transcript into ChatGPT to see what the AI comes up with.

      Little did you know that the transcript contained the user's login and password... and now so does ChatGPT.

    • by cstacy ( 534252 )

      Why would someone ever type their password for another system into ChatGPT?

      Because ChatGPT has been integrated into a system that they normally use. For example, you're a pharmacist and you're placing orders for customers in the portal that you always use. But it has been helpfully upgraded to employ ChatGPT to help licensed medical providers like yourself. You didn't ask for ChatGPT to be involved, and probably didn't know that's what's going on.

      Ditto for ordering your groceries, booking travel arrangements, medical appointments, and every other business thing you ever do online.

      • Feeding unfiltered data to some silly publicly accessible cloud program must be against data collection, data privacy, AND identity theft laws, right? ChatGPT is the exact opposite direction of a secure internet experience.
    • Most likely the data was scraped from somewhere when they made the LLM. It's shocking how many people don't know or don't care that passwords & other sensitive info should never be stored anywhere in unencrypted form. That's just asking for trouble.

      On the bright side, those login credentials could well be fake since LLMs are designed to make up convincing looking text. They're essentially confident bullshitters. Maybe someone should test the credentials to see if they're real?
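
      On the storage point, a minimal sketch of the standard approach using Python's standard library (scrypt is one reasonable KDF among several; the parameters here are illustrative, not a recommendation):

        import hashlib
        import hmac
        import os

        def hash_password(password: str) -> tuple[bytes, bytes]:
            """Store only a salt and a one-way hash, never the password itself."""
            salt = os.urandom(16)
            digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
            return salt, digest

        def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
            candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
            return hmac.compare_digest(candidate, digest)  # constant-time compare

        salt, digest = hash_password("hunter2")
        print(verify_password("hunter2", salt, digest))  # True
        print(verify_password("wrong", salt, digest))    # False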
  • by DarkOx ( 621550 ) on Tuesday January 30, 2024 @01:21PM (#64201174) Journal

    From the article, this seems like a basic issue with the front end, maybe a race condition, that essentially attaches chat transcripts to the wrong account/session.

    It does not seem to have anything to do with the LLM itself; see the sketch at the end of this comment.

    It does show that people apparently pump all sorts of private information, and things like password secrets, into ChatGPT, which I thought we'd spent the last 25 years in IT telling people NOT to do: don't put your password into the helpdesk ticket, or into a spreadsheet saved on the file server. I guess people have learned nothing.

    I am starting to think ChatGPT is an intelligence (as in NSA) operation...
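
    As promised above, a sketch of the kind of front-end bug that could do this. This is hypothetical, not OpenAI's actual code: a response cache keyed on a shared worker slot instead of the authenticated user, so interleaved requests read each other's data.

      # Hypothetical: transcripts cached by worker slot rather than by user ID.
      cache = {}

      def store_chat(slot: int, transcript: str) -> None:
          cache[slot] = transcript  # BUG: the key should be the user ID

      # Two requests interleave on the same slot:
      store_chat(7, "alice: portal password is hunter2")  # alice's write
      store_chat(7, "bob: unrelated question")            # bob preempts her
      print(cache[7])  # alice's later read now returns bob's chat

      # Fix: key on the authenticated user, never on shared infrastructure state.
      def store_chat_fixed(user_id: str, transcript: str) -> None:
          cache[user_id] = transcript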

    • by cstacy ( 534252 )

      I guess people have learned nothing.

      I am starting to think ChatGPT is an intelligence (as in NSA) operation...

      ChatGPT is an amazing system designed to trick people into thinking they are talking to another person, owing to the hardwired sentient-intelligence-organism recognition circuits in your brain. The effect is that it exposes human nature in its users.

    • I keep telling people to enjoy playing with ChatGPT but never to input data you wouldn't want compromised.
      The fact that companies are using a closed model internally with their private data is still beyond comprehension to me.

    • by Rademir ( 168324 )

      > It does not seem to have anything to do with the LLM.

      Other than the fact that it is all about the people who 'own' this LLM. This is just the latest of many times that OpenAI's executives and owners have demonstrated themselves to be terrible stewards of this powerful new technology.

  • I don't think I've ever had a conversation with a chatbot that I would even be tempted to include a password in. Even if you have it write 100% of your code there's no reason to ever include a password IN that code.
    • I don't think I've ever had a conversation with a chatbot that I would even be tempted to include a password in. Even if you have it write 100% of your code there's no reason to ever include a password IN that code.

      I've never fully been able to understand why so many people fall for internet scams. Then these chuckleheads come along and hand their passwords to ChatGPT, then claim it's ChatGPT's fault. I've always thought the basics of internet security are pretty easy. People like this are making me realize easy isn't always easy enough.

      When you idiot proof something, the universe always finds a way to present you with a better idiot. It's a programmer/interface designer's main mantra.

      • by Tyr07 ( 8900565 )

        Yeah, they're like effing children. "Do not do this, it has consequences." Then they go ahead and do it anyway: "Consequences? This is outrageous! I'm not accountable! It's this thing's fault!"

        People like this are why you'll eventually need a license to use applications and websites on the internet. What's outrageous and unacceptable is that an employee submitted their password to a third-party vendor. Fire them.

      • I've always thought the basics of internet security are pretty easy

        What makes you think so?

        Have you even considered the amount of knowledge one needs to turn a set of "basic rules on internet safety" into actions that can deal with most risks?

        Even more so if you already have that knowledge turned into habit because of your career and use it without thinking?

        • I've always thought the basics of internet security are pretty easy

          What makes you think so?

          Have you even considered the amount of knowledge one needs to turn a set of "basic rules on internet safety" into actions that can deal with most risks?

          Even more so if you already have that knowledge turned into habit because of your career and use it without thinking?

          I'm not talking about basics like firewalls and shit. Just don't click on "attached invoices" from companies you don't do business with. That level of shit. We had a CEO who couldn't learn that lesson after five laptop reloads, with specific instructions not to click attachments on emails from companies we are not involved with. That's a choice he made over and over, and he wondered why we couldn't keep his laptop running for him.

  • Interesting question (Score:4, Informative)

    by TheDarkMaster ( 1292526 ) on Tuesday January 30, 2024 @01:59PM (#64201324)
    How will this "artificial intelligence" have any idea that it is dealing with sensitive data and that it should not repeat this data in other interactions with other users?

    Answer: never, because it's just fancy auto-complete, not something that can actually understand what it's generating, let alone identify when it's dealing with something that should be restricted.
  • How could anyone possibly imagine that entering their password into an interactive prompt whose entire schtick is that it vacuums up everything you put into it in order to expand its library of responses is a good idea?
  • by EvilSS ( 557649 ) on Tuesday January 30, 2024 @03:56PM (#64201724)
    The Ars article has been updated. According to OpenAI, the user who reported this had his paid account hijacked and shared out. The chat histories he saw were from other people using his account to access ChatGPT. The user seems to question this, but if OpenAI is seeing logins to that account from a different continent at the same time the chats were generated, it's kind of hard to argue.

    Update, 1/30/2024 12:05 California time: OpenAI officials say that the ChatGPT histories a user and Ars reader reported are the result of his ChatGPT account being compromised. The unauthorized logins, an OpenAI representative said, came from Sri Lanka. The user said he logs into his account from Brooklyn.

    “From what we discovered, we consider it an account take over in that it’s consistent with activity we see where someone is contributing to a ‘pool’ of identities that an external community or proxy server uses to distribute free access,” the representative wrote. “The investigation observed that conversations were created recently from Sri Lanka. These conversations are in the same time frame as successful logins from Sri Lanka.”

    The user, Chase Whiteside, has since changed his password, but doubted his account was compromised. He said he used a nine-character password with upper- and lower-case letters and special characters. He said he didn't use it anywhere other than for a Microsoft account. He said the chat histories belonging to other people appeared all at once on Monday morning during a brief break from using his account.

    OpenAI's explanation likely means the original suspicion of ChatGPT leaking chat histories to unrelated users is wrong. It does, however, underscore that the site provides no mechanism for users such as Whiteside to protect their accounts with 2FA, or to track details such as the IP location of current and recent logins. These protections have been standard on most major platforms for years.
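
    As a side note on the nine-character password, a back-of-the-envelope keyspace estimate (illustrative assumptions only: the full 94-character printable set and 10^10 offline guesses per second; takeovers like this usually come from phishing or credential reuse rather than brute force, and online attacks are rate-limited):

      # Rough keyspace of a 9-character password over 94 printable characters.
      keyspace = 94 ** 9                        # ~5.7e17 combinations
      rate = 10 ** 10                           # assumed offline guesses/second
      years = keyspace / rate / (3600 * 24 * 365)
      print(f"{keyspace:.2e} combinations, ~{years:.1f} years to exhaust")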

    • by Rademir ( 168324 )

      Good catch! No 2FA available, still? Add it to the list of things OpenAI failed to implement before releasing their thing on the world.

      > didn’t use it anywhere other than for a Microsoft account

      Who's going to tell this guy that he should change his Microsoft account password as well?

      • by EvilSS ( 557649 )
        Not on native accounts, I guess. If you use SSO from Microsoft, Google, or Apple to sign up, it should have the same MFA as those, provided the user has it set up.
    • I don't post anything sensitive in ChatGPT but that doesn't mean I want my conversations to ever become public. Hopefully this gets OpenAI to add 2FA and a list of recent logins/IPs.

  • They clearly specify that your conversations can be used to train the model to become better.

    So if you're stupid enough to disclose private data in a conversation with ChatGPT, you can thank yourself for that.
    In the paid version you can opt-in or opt-out of the training program (I've opted in, because I want it to improve).

    But I'd never be dumb enough to jeopardize the IP addresses or names of the company I work for, or anything that can lead to information about our infrastructure, our co-workers, or our inner workings.

  • The quoted article is a mess. It conflates multiple underlying issues and overblows them into a cybersecurity breach. What's happening here doesn't look like a leak resulting from prompt engineering or anything of the sort. Instead, compromised account credentials were farmed out to a customer support service overseas. OpenAI should provide MFA. That's the story here. Not that text entered is going to randomly show up in someone else's account.
