ChatGPT is Leaking Passwords From Private Conversations of Its Users - Report (arstechnica.com)
Dan Goodin, reporting for ArsTechnica: ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated. Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.
"THIS is so f-ing insane, horrible, horrible, horrible, i cannot believe how poorly this was built in the first place, and the obstruction that is being put in front of me that prevents it from getting better," the user wrote. "I would fire [redacted name of software] just for this absurdity if it was my choice. This is wrong." Besides the candid language and the credentials, the leaked conversation includes the name of the app the employee is troubleshooting and the store number where the problem occurred. The entire conversation goes well beyond what's shown in the redacted screenshot above. A link Ars reader Chase Whiteside included showed the chat conversation in its entirety. The URL disclosed additional credential pairs. The results appeared Monday morning shortly after reader Whiteside had used ChatGPT for an unrelated query.
Re: (Score:2)
I don't understand how a conversation with someone else's computer could possibly be considered private
Private doesn't mean secret from everyone but you. It means limited from public exposure or use. The mere fact that someone else's computer is involved does not, by itself, make it non-private.
especially if that someone-else never even promised it would be private.
This is the only part that is relevant, and it is also true of interactions with people. Will they keep the conversation private? Intentional or accidental security problems exist in both cases. None of this is to say you should/shouldn't intrinsically trust a company with your information. It's only
Re: "private" conversations (Score:1)
This crap tech becomes less usable every day (Score:4, Interesting)
By now you need to be so careful in using it that you can probably get more done by just staying away. Oh, and it frequently gives bad results, lies to you, refuses to work, decreases code quality, and essentially produces "better crap".
Why is anybody seriously considering using this for real work?
Re: (Score:2)
Re:This crap tech becomes less usable every day (Score:4, Insightful)
When you know little about a subject, ChatGPT and other LLMs look like experts. But they're not.
AI is a crutch for people who should be otherwise walking. It should be no surprise that even after they throw away the crutch, they're still limping.
It's not a good starting point, as you describe. It's an addiction to the crutch.
Re: (Score:2)
That very nicely sums it up.
Re: (Score:2)
Re: (Score:2)
Or as I'm referring to the chatbots: Clippy, Jr.
Re: (Score:2)
Re: (Score:2)
Wow-- elites! I am humbled, and I genuflect in your and their honor.
I will delete you from my humble PyTorch training!
Your humble servant.
Re: (Score:2)
Well, I do not need "good starting points". I _know_ how to start things. I need actual work done. Anybody that needs those "starting points" should probably be doing something else.
Re: (Score:2)
Re: (Score:2)
By now you need to be so careful in using it that you can probably get more done by just staying away. Oh, and it frequently gives bad results, lies to you, refuses to work, decreases code quality, and essentially produces "better crap".
By Jove, they've done it! They've completely replicated the primary function of a middle manager!
Why is anybody seriously considering using this for real work?
Because someone believes they'll make lots and lots and LOTS of money from it. Zero other reason at all.
Re: (Score:2)
Indeed. On both counts.
Re: (Score:2)
Anyone who's telling their passwords to ChatGPT obviously isn't familiar with the word "careful".
Seriously, who would think that's a reasonable thing to do? I have to admit, even though I'm pretty jaded - this one stunned me.
Re: (Score:2)
I expect they did not even notice. They probably just copied &amp; pasted a whole screen, and the password happened to be in there. A good online tool is designed by people that _know_ something like that can happen and have safeguards in place. ChatGPT (and LLMs in general) are not good online tools. They are not offline tools at all, so they are just really bad tools overall.
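One such safeguard is redacting anything credential-shaped before a pasted transcript ever leaves the client. A minimal sketch in Python; the patterns and the `redact` helper are illustrative assumptions, not taken from any real product:

```python
import re

# Illustrative patterns for credentials that often hide in pasted
# transcripts; a real tool would use a far more thorough ruleset.
PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\b\s*[:=]\s*\S+"),
    re.compile(r"(?i)\b(api[_-]?key|token|secret)\b\s*[:=]\s*\S+"),
]

def redact(text: str) -> str:
    """Replace anything that looks like a credential with a placeholder."""
    for pat in PATTERNS:
        text = pat.sub(lambda m: m.group(1) + ": [REDACTED]", text)
    return text
```

So `redact("user: bob, password = hunter2")` keeps the username but drops the secret. Crude, but it is the kind of safety net a support tool could run before forwarding anything to a third party.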
Re: This crap tech becomes less usable every day (Score:3)
Re: (Score:2)
The product is not incomplete. It is pretty much at the peak of what it can do.
Re: This crap tech becomes less usable every day (Score:2)
Re: (Score:2)
The reason LLMs are currently peaking is that, for several reasons, training with public data will become less and less possible. The reasons are model collapse, the copyright situation, model poisoning, and others. In particular, the first one _cannot_ be solved; it is a mathematical fact, not an engineering limitation. What comes out of the copyright situation is unclear. Without a massive adjustment, ChatGPT represents commercial copyright infringement on a massive scale, and that is the thing that gets people sent to
Re: This crap tech becomes less usable every day (Score:2)
Re: (Score:2)
Actually insurmountable. It is a mathematical property of LLMs. You need an entirely different technology to not have that problem.
Re: This crap tech becomes less usable every day (Score:2)
Re: (Score:1)
The product is not incomplete. It is pretty much at the peak of what it can do.
Just making it less resource-intensive, faster, and cheaper to use will allow many more applications. Hallucinations will be hugely reduced in practice, agents will be less lazy, and it will be possible to make it less error-prone, because you can have more layers of control and follow-up, and you can make agents learn from experience, simply because you're able to provide documentation as they encounter things, improving their instructions. It will also make it more feasible to create layers for testing
Re: (Score:2)
None of that will happen. All of that would require a fundamentally different approach. Mathematics cannot be cheated.
Re: (Score:2)
By now you need to be so careful in using it that you can probably get more done by just staying away. Oh, and it frequently gives bad results, lies to you, refuses to work, decreases code quality, and essentially produces "better crap".
Why is anybody seriously considering using this for real work?
That it is equivalent to or better than what most humans (i.e. "programmers" and "knowledge workers") produce, let alone the discernment and expectations of the general public, speaks to the state of society.
Private? (Score:3)
ChatGPT is "private" in the same sense of giving all your information to facebook and acting shocked when it's displayed on your profile.
From the dawn of the internet we've known that info posted on the internet has no privacy.
Slow news day, I guess.
Was the information verified valid? (Score:5, Interesting)
Re: (Score:2)
From the article, though, if the people being reported on are to be believed, that isn't what is happening.
They are looking at the web UI and seeing transcripts of other people's chats.
Re: (Score:2)
This same bug happened last year as well and they fixed it.
Re: (Score:1)
Why give it your password (Score:2)
Why would someone ever type their password for another system into ChatGPT? What could that possibly accomplish? If you're going to send your password around to other random systems, don't be surprised when it gets compromised.
Re: (Score:1)
Why would anyone ever commit credentials to github?
Re: (Score:3)
Why would someone ever type their password for another system into ChatGPT? What could that possibly accomplish. If you're going to send your password around to other random systems, don't be surprised when it get compromised.
I'm guessing from the context that someone is pasting chat transcripts into the system in order to "find the error" or something similar (or the realtime chat is being fed into ChatGPT instead of paying a drone) and didn't realize there was sensitive data in those transcripts. A better question would be "wtf kind of chat support is asking for user credentials, and how have they avoided being burned at the stake thus far?"
Re: Why give it your password (Score:2)
Your colleague just sent you a 10kb transcript of his unsuccessful four-hour online tech support session with a user and asked you for your advice.
You're too exhausted and uninterested to read through all that, so the first thing you do is dump the entire transcript into ChatGPT to see what the AI comes up with.
Little did you know that the transcript contained the user's login and password... and now so does ChatGPT.
Re: (Score:3)
Why would someone ever type their password for another system into ChatGPT?
Because ChatGPT has been integrated into a system that they normally use. For example, you're a pharmacist and you're placing orders for customers in the portal that you always use. But it has been helpfully upgraded to employ ChatGPT to help licensed medical providers like yourself. You didn't ask for ChatGPT to be involved, and probably didn't know that's what's going on.
Ditto for ordering your groceries, booking travel arrangements, medical appointments, and every other business thing you ever do online.
Re: (Score:2)
Re: Why give it your password (Score:2)
In places where they have such laws and they are meaningful? Yes.
That doesn't include the USA.
Re: (Score:2)
On the bright side, those login credentials could well be fake since LLMs are designed to make up convincing looking text. They're essentially confident bullshitters. Maybe someone should test the credentials to see if they're real?
It seems more like a basic web vuln (Score:5, Interesting)
From the article this seems like a basic issue with the front end, maybe a race condition, that essentially attaches chat transcripts to the wrong account/session.
It does not seem to have anything to do with the LLM.
It does show that people apparently pump all sorts of private information, password secrets included, into ChatGPT. We've spent the last 25 years in IT telling people NOT to do exactly that: don't put your password into the helpdesk ticket, or in a spreadsheet saved on the file server. I guess people have learned nothing.
I am starting to think Chat-GPT is an intelligence (as in NSA) operation...
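For what it's worth, the class of front-end bug described above (transcripts attached to the wrong account/session) usually comes down to caching or keying responses without the owning user. A toy Python sketch of the failure mode; this is entirely hypothetical and not OpenAI's actual code:

```python
# Hypothetical illustration of the bug class: a response cache keyed
# too coarsely, so one user's chat history gets served to another.
cache = {}

def load_history(user_id: str) -> str:
    """Stand-in for a database fetch of a user's chat history."""
    return f"chats of {user_id}"

def get_history_buggy(user_id: str, endpoint: str = "/history") -> str:
    # BUG: the cache key omits the user, so whoever warms the cache
    # first "wins" and every later caller sees that user's data.
    if endpoint not in cache:
        cache[endpoint] = load_history(user_id)
    return cache[endpoint]

def get_history_fixed(user_id: str, endpoint: str = "/history") -> str:
    key = (user_id, endpoint)  # cache key includes the session owner
    if key not in cache:
        cache[key] = load_history(user_id)
    return cache[key]
```

With the buggy version, `get_history_buggy("alice")` followed by `get_history_buggy("bob")` hands Bob Alice's chats, which is exactly the symptom the reader reported.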
Re: (Score:2)
I guess people have learned nothing.
I am starting to think Chat-GPT is an intelligence (as in NSA) operation...
ChatGPT is an amazing system designed to trick people into thinking they are talking to another person, owing to the hardwired sentient-intelligence-organism recognition circuits in your brain. The effect is that it exposes human nature in its users.
Re: (Score:1)
Re: (Score:2)
I keep telling people to enjoy playing with chatgpt but never input data you wouldn't want compromised.
The fact that companies are using a closed model internally with their private data is still beyond comprehension to me.
Re: (Score:2)
> It does not seem to have anything to do with the LLM.
Other than the fact that it is all about the people who 'own' this LLM. This is just the latest of many times that OpenAI's executives and owners have demonstrated themselves to be terrible stewards of this powerful new technology.
What kind of conversations are people having? (Score:2)
Re: (Score:2)
I don't think I've ever had a conversation with a chatbot that I would even be tempted to include a password in. Even if you have it write 100% of your code there's no reason to ever include a password IN that code.
I've never fully been able to understand why so many people fall for internet scams. Then these chuckleheads come along and hand their passwords to ChatGPT, then claim it's ChatGPT's fault. I've always thought the basics of internet security are pretty easy. People like this are making me realize easy isn't always easy enough.
When you idiot proof something, the universe always finds a way to present you with a better idiot. It's a programmer/interface designer's main mantra.
Re: (Score:2)
Yeah, they're like effing children. "Do not do this, it has consequences." Then they go ahead and do it anyway: "Consequences? This is outrageous! I'm not accountable! It's this thing's fault!"
People like this are why you'll need a license to use unauthorized applications and websites on the internet. What's outrageous and unacceptable is that an employee submitted their password to a 3rd-party vendor. Fire them.
Re: What kind of conversations are people having? (Score:2)
Who is going to fire them? They own the company.
Re: (Score:2)
Let them suffer the consequences, then. Either way, why the need to be so specific?
Re: (Score:2)
I've always thought the basics of internet security are pretty easy
What makes you think so?
Have you even considered the amount of knowledge one needs to turn a set of "basic rules on internet safety" into actions that can deal with most risks?
Even more so if you already have that knowledge turned into habit because of a career and apply it without thinking.
Re: (Score:2)
I've always thought the basics of internet security are pretty easy
What makes you think so?
Have you even considered the amount of knowledge one needs to turn a set of "basic rules on internet safety" into actions that can deal with most risks?
Even more so if you already have that knowledge turned into habit because of a career and apply it without thinking.
I'm not talking about basics like firewalls and shit. Just don't click on "attached invoices" from companies you don't do business with. That level of shit. We had a CEO who couldn't learn that lesson after five laptop reloads, with specific instructions not to click attachments on emails from companies we are not involved with. That's a choice he made over and over, and he wondered why we couldn't keep his laptop running for him.
Interesting question (Score:4, Informative)
Answer: Never, because it's just a fancy auto-complete, not something that can actually understand what it's generating, let alone identify when it's dealing with something that should be restricted.
"private"? (Score:2)
Article updated: Account was hacked and shared (Score:5, Informative)
Update, 1/30/2024 12:05 California time: OpenAI officials say that the ChatGPT histories a user and Ars reader reported are the result of his ChatGPT account being compromised. The unauthorized logins, an OpenAI representative said, came from Sri Lanka. The user said he logs into his account from Brooklyn.
“From what we discovered, we consider it an account take over in that it’s consistent with activity we see where someone is contributing to a ‘pool’ of identities that an external community or proxy server uses to distribute free access,” the representative wrote. “The investigation observed that conversations were created recently from Sri Lanka. These conversations are in the same time frame as successful logins from Sri Lanka.”
The user, Chase Whiteside, has since changed his password but doubted his account was compromised. He said he used a nine-character password with upper- and lower-case letters and special characters. He said he didn’t use it anywhere other than for a Microsoft account. He said the chat histories belonging to other people appeared all at once on Monday morning during a brief break from using his account.
OpenAI’s explanation likely means the original suspicion that ChatGPT was leaking chat histories to unrelated users is wrong. It does, however, underscore that the site provides no mechanism for users such as Whiteside to protect their accounts with 2FA, or to track details such as the IP location of current and recent logins. These protections have been standard on most major platforms for years.
Re: (Score:2)
Good catch! No 2FA available, still? Add it to the list of things OpenAI failed to implement before releasing their thing on the world.
> didn’t use it anywhere other than for a Microsoft account
Who's going to tell this guy that he should change his Microsoft account password as well?
Re: (Score:2)
Re: (Score:2)
I don't post anything sensitive in ChatGPT but that doesn't mean I want my conversations to ever become public. Hopefully this gets OpenAI to add 2FA and a list of recent logins/IPs.
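For reference, TOTP-style 2FA is small enough to sketch from the Python standard library alone, following RFC 4226 (HOTP) and RFC 6238 (TOTP). A minimal illustration, not a drop-in for what a production service would need:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 time-based variant: HOTP over the current 30-second window."""
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter)
```

The server stores the shared secret, the user's authenticator app computes the same `totp()` value, and login compares the two. A real deployment would add rate limiting and, per the parent's point, a recent-logins view.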
They never said they would be private (Score:2)
They clearly state that your conversations can be used to train the model to make it better.
So if you're stupid enough to disclose private data in a conversation with ChatGPT, you can thank yourself for that.
In the paid version you can opt-in or opt-out of the training program (I've opted in, because I want it to improve).
But I'd never be dumb enough to jeopardize my employer's IP addresses, names, or anything that can lead to information about our infrastructure, our co-workers, or inner workings.
Dumpster fire (Score:1)