
20% of GitLab Employees Handed Over Login Credentials in Phishing Test (siliconangle.com)

SiliconANGLE reports: [C]ode repository management firm GitLab Inc. decided to phish their own employees to see what would happen. The result was not good: One in five employees fell for the fake emails...

The GitLab team behind the exercise purchased the domain name gitlab.company, then used G Suite to facilitate the delivery of the phishing email. ["Congratulations. Your IT Department has identified you as a candidate for Apple's System Refresh Program..."] The domain name and G Suite services were set up to look legitimate, complete with SSL certificates to make the emails look less suspicious to automated phishing site detection and human inspection.

Fifty GitLab employees were targeted with an email that asked them to click on a link to accept an upgrade. The link took them to the fake gitlab.company website where they were asked to enter their login details. On the positive side, only 17 of the 50 targeted employees clicked on the provided link.

However, 10 of those 17 then attempted to log in on the fake site.

Six of the 50 employees reported the email to GitLab's security operations team, the article notes. "Those who logged in on the fake site were then redirected to the phishing test section of the GitLab Handbook."
  • Fire them (Score:5, Interesting)

    by Akardam ( 186995 ) on Sunday May 24, 2020 @02:41PM (#60099528)

    That's really all you can do. Even assuming that the employees have not received training along these lines (in which case GitLab is about as stupid as 20% of the employees they hire), to be a technology worker in this day and age and not understand what phishing is, or more to the point how common it is, passes understanding, and any decent technology company should not want people like this working for them.

    • ...and any decent technology company should not want people like this working for them.

      Clearly they shouldn't but if the methods of Corporate America's HR departments are any indication, those are exactly the sort of people they want.

    • You think we should fire the random sample unfortunate enough to have been selected in a phishing test?

      I would assume it's a broader problem than just the ones caught in the test. Maybe punish all the employees.

    • At the very least, make the ones who failed have to get their manager to log in for them for 6 months. Then it punishes the manager too, so they'll push out better training.

    • by Cylix ( 55374 )

      Doesn't really sound like they have SSO.

      That takes a lot of the confusion out of the mix.

      • In most organizations SSO is still a pipe dream. Multiple login requests from different systems, and no one is ever notified when process and login procedures change, leaving employees bewildered and just trying to get their work done. In short, most IT organizations do this to themselves.
        • At my company, EVERYTHING is through Okta which requires MFA.

          We simply don't use or allow systems not compatible with it.

          Does that limit us from using certain systems? Yup. Deal.
          • by Bert64 ( 520050 )

            MFA is very useful, but it's not perfect... The MFA codes may only last a minute, but if you proxy the connection and grab a session you have a lot longer to use that session. A phishing site can easily replicate the Okta login screen and proxy it (see the rough illustration below).

            SSO can often be a disadvantage: as people are used to using the same creds everywhere, they will be less hesitant to enter them, as opposed to separate systems where they will think "do I have a login for this"...
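            A rough illustration of the code-lifetime point, a minimal sketch using pyotp purely as an assumed stand-in for whatever the real MFA backend is:

                import pyotp

                # Demo secret and a standard 30-second TOTP, purely for illustration.
                secret = pyotp.random_base32()
                totp = pyotp.TOTP(secret)

                code = totp.now()
                print("current code:", code)
                # The code verifies for its whole time step...
                print("accepted now:", totp.verify(code))
                # ...and typically with a step of clock skew on either side, so a
                # phishing proxy relaying it in real time logs in just fine.
                print("accepted with skew:", totp.verify(code, valid_window=1))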

        • In my very large organization, we have at least three different SSO solutions active. Had to use five when I started, but they've slowly whittled them down.

          On the other hand, security sends out phishing emails like these at least every month. You get an ecard at the end of the year if you successfully flag them all.

          You can't do this sort of thing as a one-time deal. Just like sending fake weapons through airport security periodically, you have to train people that they may actually occasionally find what yo

        • SSO is here, and it works. We have over 10k users using it, and it works.
          Microsoft says SSO takes 15 minutes to implement in a web app. When I did it for the first time I spent more, maybe 30-45 minutes, but I was completely new to this, it used technologies I had not used before, and I wanted to get some understanding of it. So it is trivial to implement on web solutions (a rough sketch follows below).

          We have demanded SSO from suppliers for 2-3 years as a hard demand, had it as a very important feature for 5 years or so (SAML ba
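          For the curious, a minimal sketch of what that web-app wiring can look like with OIDC (one common flavor of SSO). Flask and Authlib are used here only as an illustrative stack; the issuer URL and client values are placeholders, not anything from the article:

              from flask import Flask, redirect, session, url_for
              from authlib.integrations.flask_client import OAuth

              app = Flask(__name__)
              app.secret_key = "change-me"  # session signing key (placeholder)

              oauth = OAuth(app)
              oauth.register(
                  name="idp",
                  client_id="YOUR_CLIENT_ID",          # placeholder
                  client_secret="YOUR_CLIENT_SECRET",   # placeholder
                  server_metadata_url="https://idp.example.com/.well-known/openid-configuration",
                  client_kwargs={"scope": "openid email profile"},
              )

              @app.route("/")
              def index():
                  user = session.get("user")
                  return f"Hello {user['email']}" if user else 'Not signed in. <a href="/login">Log in</a>'

              @app.route("/login")
              def login():
                  # Redirect the browser to the identity provider's login page.
                  return oauth.idp.authorize_redirect(url_for("callback", _external=True))

              @app.route("/callback")
              def callback():
                  # Exchange the auth code for tokens; Authlib validates the ID token.
                  token = oauth.idp.authorize_access_token()
                  session["user"] = token.get("userinfo")
                  return redirect("/")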

    • Re:Fire them (Score:5, Insightful)

      by fahrbot-bot ( 874524 ) on Sunday May 24, 2020 @03:01PM (#60099596)

      ... but to be a technology worker in this day and age and not understand what phishing is, or more to the point how common it is, passes understanding, and any decent technology company should not want people like this working for them.

      Yes, because if someone is deficient in one area, they must be deficient in everything. /sarcasm

      No, you don't fire them for one failure like this in a test condition, you (re)train them. I worked for a large defense contractor who routinely did this kind of testing. They used the results to tune their required training and testing. Firing 20% of their employees, many of whom have security clearances (that the company had to fund), would be counter-productive. Now, if they repeatedly fail the testing and/or don't take the training, that's another matter.

      Your everything is black/white attitude is immature.

      • by Akardam ( 186995 )

        Oh, lighten up. It was an opinion expressed, that's all. Rest assured I am not in a position of authority anywhere to dictate the firing of anyone for any reason.

        Based on the wording at the top of the summary, and near the top of the article, it would be easy to assume that this is the first phishing test the company has conducted. Let's assume for the moment that it is, because if this wasn't the first time, then that makes the numbers even worse. There's also no indication whether any of the test pool had previo

        • by Bert64 ( 520050 )

          There are lots of legitimate services which send emails that would be classed as suspicious by your criteria.
          I don't disagree with your criteria btw, I disagree with these companies that are undoing all the work going into phishing education.

    • If it was 20% of the entire company, most of them probably weren't tech workers.
    • I was going to type up how my company solved this problem, but I see another commenter, "ranton", already posted with exactly the same experience:

      ranton said:
      --
      The financial institution I work for started doing phishing tests a couple years ago, and we saw about 15% of employees falling for it the first time. It dropped to under 10% when it was done again six months later, and I believe it is almost 0% now. This was after doing yearly training on phishing attacks for a few years before. It seemed that doing

    • by Luthair ( 847766 )

      The fact that they were forced to turn off their own email protection system in order to send a look-alike email doesn't really make this a representative scenario. It's like hiding from your kid in the park and hiring an actor to dress like a police officer, then giving your kid shit for talking to a stranger.

      Preventing exterior phishing emails from reaching your employees is an IT job. It's also a corporate management job to deploy a single sign-on system so your employees don't get used to entering their p
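      (A small aside on that filtering point: one signal mail filters lean on is whether the sender's domain even publishes a DMARC policy. A rough sketch, assuming dnspython; the second domain is just the article's look-alike used for illustration.)

          import dns.resolver

          def dmarc_policy(domain):
              """Return the domain's published DMARC record, or None if there isn't one."""
              try:
                  answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
              except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                  return None
              for rdata in answers:
                  txt = b"".join(rdata.strings).decode()
                  if txt.lower().startswith("v=dmarc1"):
                      return txt
              return None

          # A freshly registered look-alike domain often publishes nothing at all.
          print(dmarc_policy("gitlab.com"))
          print(dmarc_policy("gitlab.company"))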

      • by Bert64 ( 520050 )

        You will never prevent 100% of phishing emails or other kinds of unwanted emails from reaching users... At best you can reduce the flow, and the tighter you try to control it the more likely false positives will get blocked and you'll lose legitimate communication with customers.

        So you still need training for the remainder that get through. It makes sense that they would have bypassed the mail filtering system as they were aiming to test the response of users, the goal of the exercise was not to spend their time trying to evade their email filtering system.

        • by Luthair ( 847766 )

          So you still need training for the remainder that get through. It makes sense that they would have bypassed the mail filtering system as they were aiming to test the response of users, the goal of the exercise was not to spend their time trying to evade their email filtering system.

          Yes, except that isn't what this training tested. This training tested a fictional scenario where emails which would not be delivered to employees were sent to them. Ultimately all they did here is show that it's possible to construct confusing URLs and demonstrate why it's important to block them.

    • One of the places I used to work had a policy that every staff member had to undergo security induction before using a system. We would then phish a sample of the user base on a regular basis; if you were caught out by this fake phishing you would have your system access suspended and be sent to compulsory security training for re-education. You would be amazed how careful people get after a few rounds of this. Within 18 months, incidents of this nature dropped drastically. But the thing that amused or baffle
    • I've noticed that in the "training" most companies don't actually teach people how to read a URL - which is the only real way to protect against this. Instead they teach them all idiotic stuff like "don't click links in email" while also having every internal system requiring people to click links in email.
      • by Bert64 ( 520050 )

        Teaching people how to responsibly judge the situation is much harder than teaching a kneejerk response of ignoring every remotely suspicious message.

        In reality you should read the URL, and you should get confirmation from a legitimate source before you follow any instructions sent via email etc., and possibly open the URL in a sandbox to see what it does... A few of us used to troll phishing sites by writing scripts to flood them with bogus credentials, for instance (a rough sketch of that is below).

        The same advice is given for things like s
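        A rough sketch of that credential-flooding trick, purely as an illustration; the target URL and form field names here are made up, and a real phishing kit's form would have to be inspected first:

            import random
            import string
            import requests

            FAKE_LOGIN = "https://phishing-site.example/login"   # placeholder target

            def junk(n=12):
                # Random throwaway string so the phisher's harvest is full of garbage.
                return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))

            for _ in range(100):
                # Field names are guesses, not taken from any real kit.
                requests.post(
                    FAKE_LOGIN,
                    data={"username": junk() + "@example.com", "password": junk(16)},
                    timeout=5,
                )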

      • I assume a big part of the problem is that in order to read a URL you first need to get access to that URL. Outlook goes way out of its way to hide things like the headers from the email and the source for HTML messages. It doesn't surprise me that people clicked on the URL - sometimes the easiest way to see where the URL is going to take you is to click on it and see where it actually takes you, assuming you've got a sandbox or whatever in case there is something bad there. Of course, you also have to d
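        To make "reading a URL" concrete: the part that matters is the registrable domain, which is exactly what a look-alike such as gitlab.company plays on. A small sketch, assuming the tldextract package; the sample URLs are illustrative:

            import tldextract

            urls = [
                "https://gitlab.com/users/sign_in",
                "https://gitlab.company/users/sign_in",            # look-alike TLD
                "https://gitlab.com.evil.example/users/sign_in",   # real name buried in a subdomain
            ]

            for url in urls:
                parts = tldextract.extract(url)
                # The registered domain is the part that says who you're actually talking to.
                print(f"{url} -> {parts.registered_domain}")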

    • by Bert64 ( 520050 )

      Part of the problem is that legitimate services often work in the same way as phishing emails...
      Your company signs you up for some service that you're using, and you get an email from the service inviting you to log in. This happens often, and in many cases the company doesn't pre-notify their staff to expect these mails.
      I had this recently in the form of an employee survey: I received an unexpected email from an external survey company telling me about the employee survey... I thought it was suspicious and

    • by nasch ( 598556 )

      I just recently heard an NPR story about a study of this problem, and they found that while initial phishing training was not particularly effective, retraining people right after they failed a test was quite effective and greatly reduced the number of people who fell for it next time. How long the effect lasts wasn't clear to me, but the upshot is that firing them is not "all you can do".

  • by ranton ( 36917 ) on Sunday May 24, 2020 @02:41PM (#60099530)

    The financial institution I work for started doing phishing tests a couple years ago, and we saw about 15% of employees falling for it the first time. It dropped to under 10% when it was done again six months later, and I believe it is almost 0% now. This was after doing yearly training on phishing attacks for a few years before. It seemed that doing these phishing tests increased awareness far more than the training ever did, which makes sense because it caused embarrassment which provided real incentive to change behavior.

    It is still distressing that it isn't at 0% now, though. There always seem to be one or two people who enter their credentials, although there hasn't been anyone in management with any significant access to private information for a while.

    • by cusco ( 717999 )

      I had an instructor whose day job was pen testing financial institutions. He said that they would get assigned a random conference room and as he was unpacking and setting up equipment his partner would start calling branches of the bank. "Hi, this is Pherd, the new guy in IT. All my co-workers are in a benefits meeting and I need to fix the switch in your office, and they forgot to add me to the networking group. Your manager has login permissions on that switch, can I get them?" By the time the equip

    • awareness far more than the training ever did, which makes sense because it caused embarrassment which provided real incentive to change behavior.

      There should be a large billboard in a common area with 'Phishing Victims'.

    • Send a well designed phishing email around open enrollment time in the USA, tell people their benefits will lapse and they will not be able to make changes until the following year's open enrollment, unless they log in before the end of day to confirm their benefits elections. Direct them to a convincing looking site, then ask them to provide credentials to log in, then redirect them to the actual benefits site (even if they have to re-login). I'd be willing to bet if you have a large enough organization, y

    • by jandoe ( 6400032 )

      In my company it was around 17%. I have to admit I didn't see the phishing page as suspicious, and didn't log in only because it wasn't using https. I thought that 'if they don't think it's important enough to protect it with https it's not worth my time looking at it'. Since I'm security-aware enough to check the protocol and still almost fell for it, I blame the company and its habit of changing intranet domains, using different names for it not related to the company name, using different version of company wi

    • by Anonymous Coward
      My employer does phishing tests. They also send official emails using a third party click counter (click.it or something). And they'll use that for pages which require my highest security password. These emails look like phishing attempts, but are actually legit. Idiots.
  • by idontusenumbers ( 1367883 ) on Sunday May 24, 2020 @02:56PM (#60099578)

    Microsoft uses phishing-like domains in their own documentation and authentication emails. To set up multi-factor authentication you need to go to aka.ms/MFASetup, which reeks of phish. I opened a ticket with them and they closed it wontfix. https://github.com/MicrosoftDo... [github.com]

    • by Anonymous Coward
      aka.ms is Microsoft's URL-shortening service. You do not have to use the short URL version. But anything under that domain is an MS resource.
      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Most security-minded people will consider ANY URL-shortening service suspicious.

      • by Bert64 ( 520050 )

        Anything under that domain *WAS* an MS resource at the time it was created...
        MS often move things around, sites and locations come and go, who's to say that some of the redirects might point to old addresses that have subsequently been reassigned? Perhaps a recycled ipv4 address, perhaps a domain that expired and got picked up by someone else?
        These kinds of attacks have happened, for instance:
        https://www.theregister.co.uk/... [theregister.co.uk]

      • That's nice, but do you expect the users to know that? And once you've got them trained that some cutesy name like aka.ms (Also Known As Microsoft???) is actually really legit, what do you think is going to happen when these same users are hit with something else cutesy like gitlab.company?

        We use Microsoft's MFA at work and it's an absolute mess.

    Not just that, but Microsoft uses so many different domains there is no way to tell if it's even a legitimate domain. Just try googling the official list of Microsoft domain names. Microsoft doesn't care, because all links in email when using Microsoft (Office 365, Outlook, etc.) are changed to intercepts by Microsoft's own services. So Microsoft's answer is just to use *their* email, which will automatically work out for you which ones are safe and which are not.
  • by SuperKendall ( 25149 ) on Sunday May 24, 2020 @03:03PM (#60099604)

    "Restocking the Phishing Ponds".

    Sorry to those college grads who know what they are doing and are less cavalier....

  • by 93 Escort Wagon ( 326346 ) on Sunday May 24, 2020 @03:04PM (#60099608)

    The people who clicked on this link were giving away their credentials to receive a 2019 15" MacBook Pro - the last one with the crap keyboard. Even if I were dumb enough to fall for the phishing email, I'd have stopped the moment I saw what they were trying to dump on me.

    (Yes, I'm just being silly; but no, I'm not kidding [gitlab.com])

    On a more serious note - I'd like to see the breakdown regarding what type of employees these are. It's a bigger problem if they're coders than if they're clerical workers. I'd cut more slack to the non-technical folks.

  • ... and other very common stupid things that many large corporations continue to use pave the way for this kind of phishing.

    Sure, employees should be less naive and more security aware, but that should first and foremost apply to those who still decide to run mail infrastructure without signatures, and those who decide to employ all sorts of weird 3rd-party services running at arbitrary addresses that they expect employees to actually use. After experiencing for the 10th time that your reporting of a s
  • by blincoln ( 592401 ) on Sunday May 24, 2020 @03:11PM (#60099630) Homepage Journal

    As a security consultant, I've run phishing campaigns for quite a few clients, usually as part of a pen test where we'd use any captured credentials as a foothold for further testing. Typically, I expect about 1-5% of recipients to click on the link and enter their credentials, with a convincing email and website combination.

    Ten years ago, I might have placed most of the blame on users, for not observing obvious warning signs in the email and after clicking on the link, but these days I put the majority of the blame on the engineers and developers building the legitimate systems that those employees use.

    10-20 years ago, one could be pretty sure that any credentials for a given company (let's call them "TransferLicious") would be entered somewhere in the website whose name was the one domain associated with that company ("transferlicious.com"). Over time, devs and engineers embraced vanity/novelty domains for a variety of purposes, and now the same company might legitimately have login forms on "transferlici.os", "xfrlcs.io", "transferliciousbanking.com", and so on. Those URLs might be further masked by link-shortening services.

    How many enterprise/social-media single-sign-on services involve redirections to other domains? Now the problem is multiplied, because their employer uses "BlueSkies SSO", and their devs and engineers do the same thing. Am I getting sent to a login page from "blueski.es" now instead of "online.blueskies.com" because it's a phishing attack, or because a BlueSkies dev thought it would be "sick" to use a vanity domain instead?

    Browser vendors have made hiding technical information from users a priority, and a huge number of users are on mobile devices that don't support things like hovering the cursor over links anyway, so there's another "how to spot a malicious link" technique down the drain.

    Users shouldn't have to care about details like that in the first place, but the people building the systems and browsers have done such a terrible job that there aren't even any consistent rules that users can keep in mind. This makes it easy for me to phish people during pen tests, which is great, but it's sad from just about every other perspective.

    • by gweihir ( 88907 ) on Sunday May 24, 2020 @04:17PM (#60099860)

      Here is a nice story for you (I am also a security consultant): Recently I got an obvious phishing mail (or phishing test mail) on a customer email account, asking me to send all my personal details to a somewhat shady-looking website in a different country. I dutifully reported it as phishing and forgot about it. Then I got it again. And again. (The customer is a large bank.) After about 5 months, I got an angry email from HR asking why I had not responded to the vetting email they had asked an external provider to send to me. They also claimed it contained documentation as to what was going on (it never did, and a screenshot proved that nicely, which deflated them somewhat). But when I explained to them that this looked just like the regularly run phishing exercises, and that maybe they should at the very least send such a thing from an _internal_ email account, or at least send some warning from an internal email account that this was coming, they did not understand what I was talking about. I can only conclude that most people just answered the request without any additional verification.

      With this extreme level of corporate stupidity, it is no surprise people keep falling for phishing.

      Incidentally, whenever I can I read email with mutt (html filtered through lynx, thus exposing all technical details), because I do not trust the modern "email clients" one bit and I agree that they are very, very badly done with regards to security. I guess the damage this causes is just too well hidden for anything to be done about that.

      • Whenever some nutbar (usually an HR turd or some quack management nutjob) sent me an e-mail containing a clickety-pokey (link) I would just "Reply All" requesting a copy of the applicable Risk Assessment so that I could verify that the "clickety-pokey" had been appropriately Risk Assessed and had received the appropriate sign-off.

        No risk assessment, no clicky. I don't give a fuck if you do claim to be the CEO ...

    • by MobyDisk ( 75490 )

      THIS.

      The outsourcing of benefits has made this awful. You work for company "bingo123" and your HR system is at "bingo123.humanresourcescompany.com" and you are expected to type in your bingo123 domain credentials into that. Then you get management surveys from corporatesurveys.com/survey&companyname=bingo123 and stuff like that. It's awful. If you are going to make a system to outsource human resources or surveys, at least spend the $15 to get their DNS to serve it up on humanresources.bingo123.com,

    • You can try to make real sites identifiable, and to train users, but the only way to really prevent phishing is to make it impossible, which means using two-factor authentication with a security key. If the user is able to give the attacker information, a sufficiently clever attacker will get the information. It's much, much harder to talk people into handing over a physical object, both because it means the attacker (or an agent of the attacker) must meet with the user, and people are much, much more relu

    • The problem goes deeper than that. Often companies train people to click through self-signed certs, to log into completely unsecured sites, click on email from 3rd parties, etc. I've seen it many times, and if you ever submit a ticket they look at you like you're an idiot - just click "proceed anyways", or "just enter your credentials", etc.... The problem is that they don't see it as a problem, but if you don't do what they say you cannot get your job done, so everyone ends up doing it and after a while

    • by khchung ( 462899 )

      Ten years ago, I might have placed most of the blame on users, for not observing obvious warning signs in the email and after clicking on the link, but these days I put the majority of the blame on the engineers and developers building the legitimate systems that those employees use.

      RIGHT ON.

      Where I worked, we got training every year teaching us to recognize phishing mails and watch out for external emails with links to suspicious external domains, and we also had internal test phishing mails sent out with links that log who you are; anyone caught then has to attend extra classes about phishing mails.

      With all that, our managers hired some outside companies to do an important survey, which was done by sending external mails with links to external domains, which looked exactly lik

  • Repeated mock phishing and training will get it down, but you will always have some percentage that will fall for it. Especially when it's a targeted attack that is intelligently constructed. The real solution is using multi-factor authentication so that stolen passcodes are worthless.
    • by gweihir ( 88907 )

      I agree, but I do not believe 2FA/MFA does help that much. Attackers have to get a bit more agile when these are in use, but they can still basically get everything when the user is stupid enough.

  • Very interesting and great to see the transparency. Would be cool to see more details around stats. This post I just read highlights some additional metrics that could be tracked, including the number of users whose MFA token got phished: https://wunderwuzzi23.github.i... [github.io]
  • At least with regards to any security functionality. This level of sheer incompetence and stupidity is astounding. Not in an average person, but these people are supposed to be professionals. We really, really need to weed out the ranks of IT personnel of all types and get rid of the dangerous dross. We also need to fire all the managers who hired them, "supervised" them, and noticed nothing.

  • seems like that'd be ideal for home workers logging in remotely.
  • Not bad. That was probably just the sales team and a few managers, right?
    • 50 employees were targeted.
      17 of those clicked the link.
      10 of those *ATTEMPTED* to log into the fake site.

      Well, did those 10 attempt to log in with VALID credentials, or was it maybe something along the lines of:
      Username:NiceTryScammer
      Password: SuckMyBigOne

      If the attempted logins didn't use any valid credentials, then I would say that ALL the employees tested passed the test.
  • If you want to be the trusting type, you can easily fall for the better phishing emails. The ones used for testing are usually quite bad, compared to the real thing. And the good ones are better than standard emails from real sources.

    I recently received a legitimate email telling me I had to complete required training, after being told that a "phishing test" was coming. The IT department wasn't too happy when I returned their notification with 10 noted phishing traits listed, including requesting that we vi

  • by FeelGood314 ( 2516288 ) on Sunday May 24, 2020 @07:27PM (#60100318)
    I worked for a very well-known security company. Our phishing training required employees to log into an external site using their company login credentials. There was no easy way to authenticate that this external company was legitimate. The email invite to do the training didn't even come from within the corporate network. Of 500 employees, only 2 others reported that it looked suspicious. So if you crafted a legitimate-looking phishing "training" email, I bet you could get 99% of employees to click your link and enter their credentials.
    • Phishers can be very creative. The successful ones target execs with emails crafted to look exactly like something coming from the CEO or CFO. They get their hands on some old emails and they'll use the kind of wording the CEO uses. In this case you are virtually assured someone will log in to the server (located in Russia or China). Sadly, many companies respond by forcing password changes every few months instead of teaching how not to fall for this.

  • Phishing games anyone?
  • Instant feedback is the best way to teach something to any animal.
    I suggest the phishing emails and phony web site take you right to a goatse pic.
    Later you can bring all the people who fell for it into a conference room and ask them to take turns explaining why they're looking at butt porn at work.
    Never let them know it was a test.

  • They probably have phishing training videos hosted on an external website where you have to enter your corp LDAP credentials. They probably have internal documentation for certain services like the shared fileserver hosted on an external domain. Their ticketing system for hardware purchases and the like probably sends you to an external vendor. Basically, they've probably been trained in a thousand ways that even if you want to hold the URL to a high standard, some moron is going to force you to lowe

  • Working for an IT/tech company doesn't mean you're tech savvy; these companies have a lot of other departments that are not really IT/tech related and don't require tech skills beyond the basic workings of a PC.
