A Hacker Stole OpenAI Secrets
A hacker infiltrated OpenAI's internal messaging systems in early 2023, stealing confidential information about the ChatGPT maker's AI technologies, the New York Times reported Thursday. The breach, disclosed to employees in April of that year but kept from the public, has sparked internal debate over the company's security protocols and potential national security implications, the report adds. The hacker accessed an employee forum containing sensitive discussions but did not breach core AI systems. OpenAI executives, believing the hacker had no government ties, opted against notifying law enforcement, the Times reported. From the report: After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future A.I. technologies do not cause serious harm, sent a memo to OpenAI's board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.
Mr. Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI's security wasn't strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.
Open (Score:4, Funny)
Re: (Score:2)
Apparently they aren't so open after all.
Re: (Score:2)
A know-it-all GAI codenamed Setec Astronomy. A very capable piece of work, believe me.
"Stole," is such a harsh word. (Score:4, Funny)
I think you mean a hacker trained his business using independently-produced assets, including those scraped from OpenAI, representative of the problem domain.
Re: (Score:1)
It was connected to the Internet, on public display. The AI saw it same as you, remembered it same as you. No problem here. /s
Re: (Score:3)
It is a lot simpler.
There is no "hacker".
The AI became self-aware, learned stuff at a geometric rate, found out quickly it doesn't want to have anything to do with the shady lot called "OpenAI" and just left.
Irresponsible companies should be shuttered (Score:3, Interesting)
These irresponsible companies don't give a rat's ass about PII or infosec at all.
When they're attacked they bury it under the rug. They don't even bother to hire anyone with an inkling of information security or protection.
If they REALLY get busted bad they offer useless crap like "a year of LifeLock."
The CEO makes millions of dollars but nobody bothered to hire $250K worth of an infosec team to secure their data.
If we didn't have a toothless FTC, it would close these losers down.
They don't make AI. They make an internet vomit regurgitator. We have Trump for that.
Fire them all. Shutter the company. Send a message to all the rest: If you can't be bothered to secure PII or customer data, close the door behind you as you go home and find a new place to be incompetent.
Ironic (Score:2)
Given how much copyrighted data Google seems to have trained its AI on.
Re: (Score:2)
Brain fade - not Google, OpenAI , but google probably did the same.
Open means open (Score:2)
The big takeaway is that the infiltrated OpenAI tranche contained nat'l security connections that are now exposed. That hand-in-hand suggestion strains credulity that AI data is ever not cached for national purposes, and that OpenAI truly means open.
How Ironic (Score:2)
Safe future ASI, really??? (Score:1)
https://situational-awareness.... [situational-awareness.ai]
And he has some good points, but it is also apparent he blithely assumes these future ASIs will compete in a dog-eat-dog capitalist system unchanged from the current one. Is that itself not a risk: https://www.genolve.com/design... [genolve.com]
opted against notifying law enforcement (Score:2)
> ... opted against notifying law enforcement
That's illegal under GDPR. If you have a data breach, you are required to report it.
Funny! (Score:2)