Security

Okta's Source Code Stolen After GitHub Repositories Hacked (bleepingcomputer.com) 45

Okta, a leading provider of authentication services and Identity and Access Management (IAM) solutions, says that its private GitHub repositories were hacked this month. From a report: According to a 'confidential' email notification that Okta has been sending to its 'security contacts' as of a few hours ago, and which BleepingComputer has obtained and confirmed with multiple sources, including IT admins, the security incident involves threat actors stealing Okta's source code. Earlier this month, GitHub alerted Okta to suspicious access to Okta's code repositories, the notification states. "Upon investigation, we have concluded that such access was used to copy Okta code repositories," writes David Bradbury, the company's Chief Security Officer (CSO), in the email.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward

    That's fantastic. Where can I find the source? Because they have a bug I desperately want to track down and their support staff is a joke.

  • In other words, "cloud" company uses another "cloud" company to host its code and between the two of 'em couldn't keep the data out of unauthorized hands.

    Gee, if only the "cloud" security company had set up an internal/self-hosted/on-prem code versioning repository and protected access to it with their own products/services (which they claim are super secure and the bee's knees), instead of hosting it with another "cloud" company...

    Yeeeeah... no sympathy here.

    • But see, this way Okta can blame GitHub - and GitHub can blame Okta. Whichever one you talk to, the result will be 'it's not our fault'.

      Also, you are only as secure as the dumbest employee.
    • by chipset ( 639011 )

      I have often wondered why companies would host their business critical IP in the public cloud. Security issues abound. At least if it is behind a corporate firewall, there are multiple barriers to get to the Crown Jewels.

      • Modern business seems to hate VPNs and tends to use web based services, for better or worse. Honestly, security has been a challenge for everyone in 2022.

      • I have a few personal private GH projects, but they are small little dumb things I don't want the rest of the world to see. No real concern if somehow they got compromised. But even I did some due diligence before using a private GH repo...

        That a large corp would simply hold their very proprietary code in a GH repo behind a thin veneer of "private"... Color me naïve, but that is sort of nuts.
        • This is where GitHub AE was supposed to shine, but data consistency and Azure capacity issues shelved that product indefinitely. A few other engineers tried to start another hosted GitHub Enterprise product, but I wouldn't be surprised if Microsoft let them go for trying to help end users.

    • by ls671 ( 1122017 )

      This is ridiculous! Run your own private instance of GitLab or plain git if you are half serious about protecting your assets, and require at least a VPN connection if you want your employees to access it from home.

      Cloud hosted github and gitlab are fine for public open source code but I would never put anything private and closed source there!

      • What does a VPN offer that mTLS does not besides slow down my internet connection?

        • By itself, nothing. However, most modern VPN solutions can require both user and device authentication, as well as various other host posture checks that can typically mitigate many identity-based compromises. Unless I haven't been keeping up, vanilla mTLS, not so much. While it may be more challenging to steal a digital certificate, it certainly isn't impossible if they aren't hardware-protected in a smartcard or TPM or some such, which really doesn't work for mobile and as such frequently makes i
      • Or even GitHub Enterprise, which is an on-prem distribution of GitHub.

    • by gweihir ( 88907 )

      In other words, "cloud" company uses another "cloud" company to host its code and between the two of 'em couldn't keep the data out of unauthorized hands.

      Yep. Sounds like standard for "cloud" stuff. Do not trust vapor.

  • by greytree ( 7124971 ) on Wednesday December 21, 2022 @10:02AM (#63147636)
    Forced open source.

    I like it!
  • by Joey Vegetables ( 686525 ) on Wednesday December 21, 2022 @10:03AM (#63147642) Journal

    A halfway decent lock still works perfectly even if you know exactly how it's designed.

    Also, secrets of any kind (encryption keys, etc.) should NEVER be in source code repositories, not even private ones.

    If this compromised source code makes it possible to hack Okta's customers' accounts, then something is seriously, seriously wrong.

    • Definitely. Security should not rely on the implementation being secret.
    • by ffkom ( 3519199 )

      A halfway decent lock still works perfectly even if you know exactly how it's designed.

      Yes. But if you rent a lock, and can no longer be sure whether the lock you are using was modified by some thief before being handed to you, that is a problem. Do you want to just take Okta's word that those who obtained access to their private repositories used that access strictly for read-only operations? Do they provide evidence that the service they sell is actually running unmodified code?

      • True, but presuming they are using Git, and that the breach was of source code only, it should be a simple matter to check commit logs to see if anything was modified.

        If secrets were potentially stolen (e.g., passwords, SSH keys, etc.), then obviously they need to be changed. As I mentioned earlier, those kinds of things should not be in source code control anyway.
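The parent's point about rotating anything that leaked can be sanity-checked directly against repository history; a minimal sketch with plain git and grep, where the regexes are only illustrative samples (a dedicated scanner such as gitleaks or trufflehog is far more thorough):

```shell
# Scan the full history of every branch for credential-shaped strings.
# The patterns here (an AWS-style access key ID and PEM private key
# headers) are illustrative, not exhaustive.
git log -p --all \
  | grep -nE 'AKIA[0-9A-Z]{16}|-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----' \
  || echo "no obvious secrets found in history"
```

Anything such a scan turns up should be treated as compromised and rotated, since deleting the file in a later commit does not remove it from history.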

        • by ffkom ( 3519199 )

          True, but presuming they are using Git, and that the breach was of source code only, it should be a simple matter to check commit logs to see if anything was modified.

          As someone who is used to being looked at like a weirdo because I insist on signing all my commits to corporate git repositories using GPG, I have seen basically no one else in the industry do this, and some of the standard ways GitHub offers for handling pull requests and merging even deliberately remove commit signatures by rebasing.
          And unsigned commits are trivial to slip into an existing source tree: just copy somebody's currently open legitimate pull request, locally modify it to contain
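The signing discipline described above takes only a couple of config lines; a sketch, assuming a GPG key already exists locally (the key ID below is a made-up placeholder):

```shell
# Sign every commit by default with a chosen key
# (3AA5C34371567BD2 is a placeholder key ID, not a real one)
git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true

# Audit signatures when reviewing history
git log --show-signature -5
git verify-commit HEAD            # non-zero exit when HEAD is unsigned

# Refuse to merge a branch that contains unsigned commits
git merge --verify-signatures some-feature-branch
```

Note that merge strategies which rebase or squash re-create the commits and drop the original signatures, which is exactly the GitHub behavior the comment above complains about.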

          • This certainly seems possible, but wouldn't this cause issues with other devs that they would notice when they do a pull? I'm not a Git expert, and honestly haven't thought through how one might try to pull something like this off.
            • by ffkom ( 3519199 )

              This certainly seems possible, but wouldn't this cause issues with other devs that they would notice when they do a pull?

              Most developers in corporations are used to reviewing other people's changes (if at all) only as part of "pull requests", but perform "git pull" routinely from central repositories without questioning whether what they pulled consists only of what they previously saw as part of a "pull request".

              • OK. I guess I'd be a little more concerned now. And more convinced that signing commits by default is probably the right thing to do.
      • This is still Git. They can't change the source code without it being glaringly obvious. Each modification to the source code is cryptographically unique and cannot be modified without Git instantly flagging it the next time a developer pulls the repo.

        • by ffkom ( 3519199 )

          This is still Git. They can't change the source code without it being glaringly obvious. Each modification to the source code is cryptographically unique and cannot be modified without Git instantly flagging it the next time a developer pulls the repo.

          In the (bigger) corporations I have been working for, basically all developers would "git pull" (from the potentially compromised github repository) routinely without reviewing the pulled changes locally, especially because they assume they already saw the changes as part of some prior pull-request. While in the case of an adversary changing the repository, they of course never saw the modification in a PR.
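A fetch-first habit would surface exactly this kind of history rewrite before it lands in anyone's working tree; a minimal sketch, assuming the remote is named origin and the branch is main:

```shell
# Fetch without merging, then inspect what the remote actually added
git fetch origin
git log --oneline HEAD..origin/main   # incoming commits
git diff HEAD...origin/main           # incoming code changes

# Diverged (e.g. force-pushed/rewritten) history shows a non-zero
# left-side count: local commits the remote no longer contains
git rev-list --left-right --count HEAD...origin/main
```

None of this proves the remote wasn't tampered with, but it at least forces the incoming changes past a pair of eyes instead of silently fast-forwarding onto them.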

    • by znrt ( 2424692 )

      absolutely. and if they had made their code public from the start that would have been excellent practice, a great example and would have bolstered their reputation quite a lot (if the code were any good, that is).

      deciding to keep it secret but not being able to do so, though, as a security focused company no less, that's pretty embarrassing and definitely not good for their business.

    • by gweihir ( 88907 )

      The problem is that they might have in part relied on that "obscurity" aspect. And then this stuff getting exposed _is_ a security risk.

  • But we should trust these guys with our corporate logins?

  • How exactly were their GitHub repositories "hacked?" If the attackers got in by defeating GitHub's security measures, then this is a much, much bigger story; but if they got in via poor access management, social engineering, or a stolen laptop/device, then that's a different story and bad on Okta.

  • "Private" repos (Score:4, Insightful)

    by twdorris ( 29395 ) on Wednesday December 21, 2022 @10:48AM (#63147746)

    "private" GitHub repositories

    You keep using that word. I do not think it means what you think it means.

  • It is just source code. It isn't really all that big of a deal unless the hacker starts selling access to it.
    • by Macfox ( 50100 )

      That's a certainty. Okta is used widely in Fortune 500. Getting insight into the code base for possible vulnerabilities is very attractive. Just look at the aftermath with Cisco and MS leaks.

    • So you're saying there wouldn't be any insights into vulnerabilities by having a peek at the code, and library versions they are using that may or may not be up to date? Seriously?

      • Hopefully, since they're a massive security company, you'd think that code review, pentesting, dependency analysis etc have been done regularly already so many times, that no -- not many insights could be gleaned. Unless they're doing something stupid like storing secrets in code (which they almost certainly aren't), it would take a skilled threat actor a ton of time to evaluate the code; and they'd likely find nothing useful.

        Sure, some dependencies might have some vulnerabilities or various things like

          • I'm sure there was no pressure to ignore vulnerabilities and meet the release date. No one in any large company ever said, "Ignore the known issues and release the software; we can fix them later if we have time."

            Also, no one who can steal code from GitHub could be proficient enough to use a code scanning tool and have a list of their most critical vulnerabilities in minutes. It's just not possible.

          In conclusion, Okta software is perfectly safe...

          • Uhm..maybe if you were working at a little podunk software shop, or a bigger company making software that doesn't handle such sensitive use cases, like games.

            At Okta, every single code commit comes with that scan..and you can't commit code if the code fails the scan. Then, each merged version is run through dynamic scanning as well...so not just static analysis scan, an active scan of the live web app. THEN, a third party security firm regularly runs their own scans with different engines.

            If you were

  • Okta is a garbage company. Here is a taste:

    https://www.cvedetails.com/vul... [cvedetails.com]
    https://techcrunch.com/2022/03... [techcrunch.com]
    https://www.darkreading.com/ap... [darkreading.com]
    https://www.esecurityplanet.co... [esecurityplanet.com]
    https://apnews.com/article/tec... [apnews.com]

    Seems like more of a security risk than a security solution to me.

  • Hiding the source code is security through obscurity. The software should be just as secure with the source code openly available. Let us all pray.

    I've written security critical software, and while I don't know about a possible attack if the source code were known, and there shouldn't be a possible attack, I wouldn't bet there is none. Now the second stage would be getting the keys to upload new (hacked) versions to the App Store. These keys need to be kept very, very secure. Without them you can only in
  • So, Gartner rated Okta in its upper right quadrant - will they be losing their rating due to security concerns now? And hmm...doesn't Okta's competitor for IAM own GitHub (i.e. Microsoft)...starts one wondering..
  • ...it was storing the cloud's authentication code .... in the cloud. And I guarantee nobody learns their lesson!
  • Time to drop their solutions. If they cannot even secure their own stuff...

    That said, in _closed_ source software, the source code leaking can be a massive security risk. Typically people tend to code a lot worse when they know nobody outside is going to see what they wrote.
