Rabbit R1 AI Device Exposed by API Key Leak (404media.co) 15
Security researchers claim to have discovered exposed API keys in the code of Rabbit's R1 AI device, potentially allowing access to all user responses and company services. The group, known as Rabbitude, says they could send emails from internal Rabbit addresses to demonstrate the vulnerability. 404 Media adds: In a statement, Rabbit said, "Today we were made aware of an alleged data breach. Our security team immediately began investigating it. As of right now, we are not aware of any customer data being leaked or any compromise to our systems. If we learn of any other relevant information, we will provide an update once we have more details."
Crap company has crap IT (Score:2)
What else is new?
Re: (Score:3)
What else is new?
They revoked the keys so now they're bricked https://pivot-to-ai.com/2024/0... [pivot-to-ai.com]
Re: (Score:2)
I wonder how one recovers from something like that. If the mechanism a device uses for pulling firmware isn't compromised [1], having it switch to a different API key, or perhaps even generate its own key, send it up to be certified, and then download the cert, might be more secure. A solid, fail-safe firmware-fetching mechanism for a device that is designed to be always connected can mitigate API key leaks.
[1]: Of course, the signing key needs to be in an HSM; otherwise, if the key is compromised a
Re: (Score:2)
You are assuming they do this securely. That is probably not a valid assumption.
The problem here is that the API key is a secret key. Updates get verified with a public key (signature verification). Hence you need a second secret key that can be used to protect a new API key in transit (encryption). The update verification key cannot do that.
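To make that distinction concrete, here is a minimal Python sketch, assuming a PyNaCl-style API. It is not Rabbit's actual update path, and the key bytes are placeholders: a baked-in Ed25519 verify key can only check firmware signatures, while wrapping a replacement API key for transit needs a separate encryption key pair.

    # Minimal sketch (pip install pynacl); keys below are placeholders.
    from nacl.signing import VerifyKey
    from nacl.public import SealedBox, PublicKey
    from nacl.exceptions import BadSignatureError

    # Baked-in vendor verify key (placeholder bytes): it can only CHECK signatures.
    FIRMWARE_VERIFY_KEY = VerifyKey(b"\x00" * 32)

    def firmware_is_genuine(image: bytes, signature: bytes) -> bool:
        """Signature verification proves origin; it keeps nothing secret."""
        try:
            FIRMWARE_VERIFY_KEY.verify(image, signature)
            return True
        except BadSignatureError:
            return False

    def wrap_new_api_key(new_api_key: bytes, device_pubkey: PublicKey) -> bytes:
        """Confidentiality needs a *different* key pair: encrypt the replacement
        API key to a per-device Curve25519 public key before sending it down."""
        return SealedBox(device_pubkey).encrypt(new_api_key)

The point is simply that signature verification and confidentiality are separate jobs, which is why the update channel alone can't rotate a leaked API key.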
Re: (Score:2)
For accuracy's sake, the devices were down briefly but not bricked. From what I can see in posts online, they were back up within 1-2 hours.
I lifted that link from the Wikipedia article https://en.wikipedia.org/wiki/... [wikipedia.org] as I had no idea what it was. If there is newer information, notably neither the source article nor Wikipedia has been updated.
Re: (Score:2)
This must affect, well, the 10 people who bought the thing...
Thought API keys were a solved problem... (Score:2)
I thought API keys were a solved problem. If I had special devices that used a private API, as opposed to just a generic RESTful API for read-only stuff or API keys tied to the device's current user account, I'd be storing the device keys in some type of secure storage, be it something like a TPM or, as with my Raspberry Pi, a ZymKey. Even then, someone who knows how to decap a chip might be able to tease out the API key, so maybe this is something that needs to be done b
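For contrast, a rough sketch of the difference between what apparently shipped (shared keys baked into the code) and a per-device, hardware-backed secret. The secure-element accessor and file path here are hypothetical stand-ins for a TPM/ZymKey/secure-enclave API, not a real library.

    # Anti-pattern: a vendor-wide secret baked into every unit's firmware,
    # which is roughly what "exposed API keys in the code" implies.
    HARDCODED_API_KEY = "sk_live_EXAMPLE_DO_NOT_SHIP"  # placeholder, not a real key

    def unseal_from_secure_element(label: str) -> bytes:
        """Hypothetical stand-in for hardware-backed storage (TPM, ZymKey,
        secure enclave). A real implementation would call into that hardware;
        here it just reads a per-device file so the sketch runs."""
        with open(f"/var/lib/device-secrets/{label}", "rb") as f:  # hypothetical path
            return f.read()

    def get_api_key() -> bytes:
        # Preferred shape: a per-device secret fetched at runtime, so leaking
        # one unit's key doesn't expose every customer or internal service.
        return unseal_from_secure_element("device-api-key")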
AI (Score:2)
In social media, you were the product they sold. In AI, not only are you the product, you also make their product for them.
No leak. (Score:2)
we are not aware of any customer data being leaked
Thank goodness. That could have impacted tens of people.
Such a silly name. (Score:3)
Re: (Score:2)
This is less about sending email than about getting access to email accounts. One -can- send an email out like that, but it won't be signed; just as someone can make an app named after some other product, but without the correct signature and certificate it will at best get ignored, or at worst get actively blacklisted.
With DMARC/DKIM/SPF, spoofing email senders is a lot harder than it was back when you could telnet to an open port 25 and throw out a message.
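As a rough illustration (using the third-party dnspython package; example.com is just a placeholder domain), these are the kinds of DNS lookups a receiving server does before trusting a message's From domain:

    # Sketch of the SPF/DMARC policy lookups a receiving MTA performs.
    # Requires dnspython: pip install dnspython
    import dns.resolver

    def txt_records(name: str) -> list[str]:
        """Return all TXT strings published at a DNS name, or [] if none."""
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []
        return [b"".join(rdata.strings).decode() for rdata in answers]

    domain = "example.com"  # placeholder; a receiver checks the actual sender domain
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]
    print("SPF:  ", spf or "none published")
    print("DMARC:", dmarc or "none published")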