Anthropic's Mythos Model Is Being Accessed by Unauthorized Users (bloomberg.com) 28
Bloomberg reports that a small group of unauthorized users gained access to Anthropic's restricted Mythos model through a mix of contractor-linked access and online sleuthing. Anthropic says it is investigating and has no evidence the access extended beyond a third-party vendor environment or affected its own systems. From the report: The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said. The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. [...] To access Mythos, the group of users made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.
Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic's AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.
Re: (Score:2)
Because it's "too good" at finding vulnerabilities [slashdot.org], so big projects need head starts. Like Mozilla is claiming that it'll eventually be able to find all their defects. [mozilla.org]
Re: (Score:2)
Guaranteed Mozilla sweeps at least half of the discovered defects reports under the rug.
Or they give up on Moz and become another Chrome derivative because "it's too hard" yada yada
Re: (Score:2)
Guaranteed Mozilla sweeps at least half of the discovered defects reports under the rug.
Except that Mythos doesn't just list the vulnerabilities, but also gives you the fixes.
Re: (Score:3)
Fixes? Yeah, good luck with those...
Re: (Score:1)
Modded down for telling the truth, as usual. Still, I'm surprised Mozilla developers even use the web, let alone Slashdot.
No joke, the NSA (Score:2)
Aka the government agency which is #1 in violating domestic civil liberties. They have access. Even though Anthropic is currently on the do-not-purchase list by the feds. Also the UK "intelligence" agencies. And a few dozen other organizations.
This is either because (a) Anthropic doesn't have the datacenter capacity to roll this out to the world all at once, so they are only letting the chosen few access for now or (b) because they really and truly believe it is too dangerous for the world at large to have
People? (Score:2)
And do we know exactly who these people are that are able to use it?
People or other AI? :-)
great job (Score:5, Insightful)
Claim to make the best hacking software, then fail to secure access to it. What could go wrong?
Re: (Score:2)
Of course, somebody is watching their servers with something like LeechFTP (miss those days) and the moment Mythos shows up, it gets downloaded, and the way to access it is already spread across every site you can think of before they even notice.
So, even if they block that one avenue of access, now people will keep hammering at the door until they find a failure.
Hence, if you put it online, whether you're an individual posting on social media or a big company, it will be found and downloaded.
Re: (Score:2)
Well, looks like they cannot even secure their own stuff competently. Probably an indicator for the quality of their product ...
Re: (Score:2)
I guess Mythos can't figure out its own security.
Fun to watch (Score:4, Interesting)
Re: (Score:2)
Same here. It nicely puts the grand marketing claims into perspective.
Re: (Score:2)
In this specific case, the LLM security hype has been focused on the code-unit level, where context can be kept small. Which is apparent in Mozilla's claim that it can handle all the defects in the code.
But it teaches that age-old lesson about not losing track of the forest for the trees, even if you have a nifty branch scanner.
Perhaps there is a simple explanation (Score:2)
They just are not important enough to be on the allowed customers' list, or perhaps it is just too dangerous to let them use it.
Jesus Built My Hotrod (Score:3)
Mankind did burn the sky. Not to prevent AIs* from taking over but to give birth to them.
* actually unreliable LLMs, we all know the score here on
Re: Jesus Built My Hotrod (Score:2)
Hegseth was right (Score:2)
Anthropic is a national security supply-chain risk. If they can't even keep people out of their stuff, what business do they have handling classified material?
Re: (Score:1)
Hegseth was right
Really? When exactly was that, date and time?
Re: (Score:2)
Shhh! Nobody tell him about Google.
Obviously (Score:3)
Re: (Score:2)
Someone forgot to tell Mythos to make itself secure
So just lazy... (Score:2)
To access Mythos, the group of users made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.
Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic's AI models.
So Anthropic builds this super-powerful model that is too dangerous to let just anyone near, but doesn't bother putting any real fine-grained authorization controls on it. They just set access to be controlled by some kind of tech-preview group membership they use for other things as well, and hope none of those people go sniffing around and find it?
WTF, how much effort would it have been, in the context of a project like Mythos development, to create a few more IAM objects? These guys want the world to take t
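For anyone wondering what "a few more IAM objects" would even mean here, a minimal sketch of a per-model scope check follows. Everything in it is hypothetical and illustrative; none of the names reflect anything Anthropic actually uses. The point is just that gating each model behind its own explicit scope, instead of one shared tech-preview group, is a few lines of logic:

```python
# Hypothetical sketch: per-model authorization, assuming each access
# token carries an explicit set of scope strings. Illustrative only.

class AccessDenied(Exception):
    pass

def check_model_access(token_scopes, model_name):
    """Allow access only if the token carries a scope for this exact model."""
    required = f"models:{model_name}:invoke"
    if required not in token_scopes:
        raise AccessDenied(f"token lacks scope {required!r}")
    return True

# A contractor token scoped only to the models they're contracted to
# evaluate: guessing the endpoint of an unreleased model fails the check.
contractor_scopes = {"models:released-eval:invoke"}

check_model_access(contractor_scopes, "released-eval")  # permitted
try:
    check_model_access(contractor_scopes, "mythos")     # denied
except AccessDenied:
    pass
```

Under a scheme like this, knowing the URL format for other models buys you nothing; the educated guess described in the article only works because the shared group membership was the whole gate.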