Security IT

Target Ignored Signs of Data Breach

puddingebola writes "Target ignored indications from its threat-detection tools that malware had infected its network. From the article, 'Unusually for a retailer, Target was even running its own security operations center in Minneapolis, according to a report published Thursday by Bloomberg Businessweek. Among its security defenses, following a months-long testing period and May 2013 implementation, was software from attack-detection firm FireEye, which caught the initial November 30 infection of Target's payment system by malware. All told, up to five "malware.binary" alarms reportedly sounded, each graded at the top of FireEye's criticality scale, and which were seen by Target's information security teams first in Bangalore, and then Minneapolis.' Unfortunately, it appears Target's security team failed to act on the threat indicators."
  • Honestly, how hard can it be to look after the source of executive pay?

    • So I think what you're doing here is...

      you're shooting for a first post that's impossible to child under without painting the off-topic bulls-eye right on the front of my shirt.

      Well done.

  • by pushing-robot ( 1037830 ) on Friday March 14, 2014 @05:48PM (#46487555)

    In Target's defense, FireEye said it would have to restart the computer to remove the threats.

    • Maybe it is me, but that seems gobsmackingly wrong. If Target cannot tolerate a server being unavailable for a few minutes, there must be something wrong with the entire technical infrastructure. There must be single points of failure all over the place. (Not trying to be snarky. Please tell me I am wrong!)
  • ...maybe they just had shitty email prioritization and crappy (read: default) alerting configs on their gear? Given that the typical admin in a large corp gets bombarded with a jillion emails daily (ranging from fluff to drop-dead serious, because vendors rarely know the difference), I can see warnings get buried in the pile pretty easily. Mind you this is not to excuse not acting on the warnings, but instead is posited as a way to explain why the warnings got missed in the first place.

    All that said, any se

    • by Desler ( 1608317 )

      They didn't miss the warnings. They simply ignored them.

    • by ackthpt ( 218170 )

      So... the emails warning of the threat were treated as spam? That's kinda funny.

      • by skids ( 119237 )

        These IPS/IDS systems literally generate more alerts (usually including a bunch of false positives) than you could possibly read in a day. Heck, it would take a year or two to learn, in detail, each signature/threat they have in their catalogue; only people who specialize in security and keep up to date daily can make the calls as to what alarms are noise and what's indicative of real activity (no, the default "levels" shipped with the product don't cut it, because if you only look at the "red" ones
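
        To make that concrete, here's a rough sketch in Python (generic, not any vendor's API; the signature names, actions, and numbers are invented) of the kind of locally maintained triage layer that has to sit on top of the shipped defaults before a human ever reads the queue:

        from collections import Counter

        # Local knowledge the team maintains, not the vendor defaults: which
        # signatures are trustworthy on this network and which are pure noise.
        LOCAL_OVERRIDE = {
            "malware.binary": "page-oncall",   # near-zero false positives observed here
            "policy.p2p":     "ignore",        # constant noise from the guest VLAN
        }

        def triage(alerts):
            """alerts: dicts with 'signature', 'vendor_severity' (1-10) and 'src_ip'.
            Returns the deduplicated alerts a human should read, highest value first."""
            seen = Counter()
            keep = []
            for a in alerts:
                if LOCAL_OVERRIDE.get(a["signature"]) == "ignore":
                    continue
                key = (a["signature"], a["src_ip"])   # collapse repeats of the same event
                seen[key] += 1
                if seen[key] == 1:
                    keep.append(a)
            # A local "page-oncall" override outranks whatever severity number shipped.
            return sorted(keep,
                          key=lambda a: (LOCAL_OVERRIDE.get(a["signature"]) == "page-oncall",
                                         a["vendor_severity"]),
                          reverse=True)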

        • Then the few serious hacks get handled by the capable PHBs, management will barely hear about it, and will wonder why two expensive PHBs and an expensive intrusion system are needed. After all, what information could really be hacked?

          It is not like Target has to pay the victims. The risk is not borne by the correct party.

    • Re:To be fair? (Score:4, Insightful)

      by MightyMartian ( 840721 ) on Friday March 14, 2014 @06:16PM (#46487851) Journal

      Maybe they're just fucking idiots, with an IT department that either is utterly inept or had been so marginalized by MBA morons and sociopaths.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Maybe they're just fucking idiots, with an IT department that either is utterly inept or had been so marginalized by MBA morons and sociopaths.

        Or, with a name like 'Target', they were pretty much asking for it?

      • by Lumpy ( 12016 )

        "Maybe they're just fucking idiots,"

        This describes the management of every large corporation's IT department. IT is an expense; they are not really important like the marketing department.

      • by jd2112 ( 1535857 )

        Maybe they're just fucking idiots, with an IT department that either is utterly inept or had been so marginalized by MBA morons and sociopaths.

        Don't worry. Instead of security they have been moved to the online sales department. They are all doing Photoshop work for the summer swimsuit line now, so they will be unlikely to cause Target any further embarrassment...

    • Re:To be fair? (Score:5, Insightful)

      by James-NSC ( 1414763 ) on Friday March 14, 2014 @06:45PM (#46488097) Homepage

      I'd wager it wasn't the security team that dropped the ball. I work in the same role (I'm the most senior member of the security team), and I can tell you first hand that I don't have the authorization to act in matters of that scope independent of the executive team in situations like those. I have to forward my recommendations up the chain and get approval.

      That causes delays. Oftentimes, things then get lost at the executive level. Whenever there are contractors involved it's even worse, as they spend a week or so arguing over whose responsibility it is, who is going to pay for it, how much downtime it's going to represent, how much money they're going to lose, etc., etc. Executives are also really bad at judging risk when it comes to security. They'll expose themselves and their companies to staggering amounts of risk for no other reason than that the failure/security breach/what-have-you isn't impacting business "right now," while shutting down an ecommerce system to patch it will impact the bottom line *right now*; they would rather risk "maybe" losing money at some future date than know they're losing money "right now".

      Executives will mortgage their companies' futures at every possible opportunity for a few extra dollars today.

      I've lost count of the number of times I've taken a GLARING security issue up the chain only to have "how long can we leave it before it impacts business" be their main concern. If it's a vulnerability on a production, WAN-facing system - but we don't have evidence of it being actively exploited - it's considered less critical than the hour of downtime needed to patch and test it. The certainty of lost revenue in that hour carries more weight than the potential for abuse at a later date. The worst part is that when that later date does come around and things get really bad, they all point their collective fingers at the security team and none of them take any responsibility whatsoever.

      You're damned if you do, damned if you don't and blamed all the way around.

      Corporate InfoSec is a very, very frustrating occupation. I feel for those poor guys at Target.

      • Re:To be fair? (Score:5, Interesting)

        by DarkOx ( 621550 ) on Friday March 14, 2014 @07:27PM (#46488387) Journal

        The security team should have a license to kill from the executive team. We do; our instructions are that if we believe a breach is in progress, "shut it down".

        Mind you, we have never done it. We came very, very close once, on a false positive: the operations team failed to inform us of some activity they were going to be doing. Fortunately the guy answered his phone; otherwise we would have pulled the plug and islanded the entire DMZ, ecommerce, corporate home page and all.

        After reviewing the after-action report, the executive team agreed we would've been right to do it given what we knew.

        That is how it should work.

        • by Lumpy ( 12016 )

          At Comcast that is how we worked from 1998 to 2007, when I was there. Security breach? I could tell the CEO to fuck himself to his face and yank the plug. Many times I saw executives escorted out of the data center by guards because they were being idiots, demanding we restore internet access. Management are clueless morons; they must be left out of the loop for security.

          It's why cops don't have to call the mayor when they see a guy running into a bank with a gun in his hand and a big sack with a dollar sign painted on it.

          • "It's why Cops dont have to call the mayor when they see a guy running into a bank with a gun in his hand and a big sack with a dollar sign painted on it."

            Yeah. That's why I never paint a dollar sign on my big sack, or my gun either for that matter!

        • I used to work at a DoD research lab. Whenever the security team didn't understand something, they shut down the network. They didn't know much, so they shut down systems often and for long periods. And their performance was judged entirely on how many attacks got through, not on our lab's productivity.

          It's a big reason why they kept on losing good developers and researchers, me included.

      • by jacobsm ( 661831 )

        Not only in InfoSec: most warnings from the people who know, passed up to the people who don't know but have the authority to act or spend money, are just ignored.

        Several years ago I told Data Center management that a vital piece of hardware had reached end of life and needed to be replaced, or else we'd be at risk of a total system outage that might last for days.

        They didn't want to spend the $30,000 until they absolutely had to, so they ignored my recommendation. In the end, nothing bad happened, but it very easil

      • The breach started two days before Black Friday. What incentive would management have to do anything that would jeopardize their ability to sell all the way until Christmas?

        Levy a fine against them equivalent to their entire profit from November 27 until December 19 when they finally admitted the breach. Maybe companies will think twice before trying to sweep these things under the rug.
    • Given that the typical admin in a large corp gets bombarded with a jillion emails daily (ranging from fluff to drop-dead serious, because vendors rarely know the difference), I can see warnings get buried in the pile pretty easily.

      I used to work in the NOC for a major telco and let me tell you, your ENTIRE job is being able to filter through that shit to see the big picture. I was looking at hundreds of network alarms simultaneously at any moment on any given day, and I had to know what was going on.

  • I'd wager there's about an 80% chance someone said the following:
    "There's no way someone could have infected the POS systems; must be something wrong with this stupid FireEye thing..."

    • A contractor's credentials were stolen and used to gain access. Anybody thinking of that? That was the point of entry.
  • From TFA:

    "With today's amount of detection data, just signaling an alarm isn't enough. The operator/analyst should be able to understand the risk as well as the recommendation of each incident, in order to be able to prioritize."

    My experience is that companies skimp on the 7x24 Network Operations Center personnel. They get cheap "eyes" on the logs and then hope that those people are trained to recognize what is going on... In most cases they just forward to someone else, and when you get the 15th false positive everybody
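
    To put the quoted point in concrete terms, here's a toy rendition (my own invented weights and thresholds, nothing from TFA or any product) of "risk plus a recommendation" instead of a bare alarm:

    def score_incident(vendor_criticality, asset_value, confidence):
        """All inputs 0.0-1.0; returns (risk score, suggested action) so the
        operator doesn't have to guess which of a hundred alarms matters."""
        risk = vendor_criticality * asset_value * confidence
        if risk > 0.6:
            return risk, "isolate the host and page the on-call analyst"
        if risk > 0.3:
            return risk, "open a ticket for next-business-day review"
        return risk, "log only"

    # A top-of-scale alarm on a payment segment from a low-false-positive product:
    print(score_incident(1.0, 0.9, 0.95))   # clearly lands in "isolate and page"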

    • All told, up to five "malware.binary" alarms reportedly sounded, each graded at the top of FireEye's criticality scale, and which were seen by Target's information security teams first in Bangalore, and then Minneapolis.

      Well, there you go.

      • by ark1 ( 873448 )

        Well, there you go.

        We don't know the whole story here, but it looks like Bangalore was a first-tier center that escalated this issue to someone in the US for further investigation. Sounds to me like the problem was in the US.

        • by jd2112 ( 1535857 )

          Well, there you go.

          We don't know the whole story here, but it looks like Bangalore was a first-tier center that escalated this issue to someone in the US for further investigation. Sounds to me like the problem was in the US.

          Target is most likely using a 'Follow the sun' model, so if the alert happened at night (in the US) the Bangalore security team would have been monitoring at the time.

      • by skids ( 119237 )

        Define "seen." Being "seen" among a flood of similar alarms too big for the team to handle is a bit different than being "seen" as one of a few of the day's most elevated alarms. Many of these devices crank out thousands of top-of-scale alerts per hourm, and add tens to hundreds of new alerts to their catalogue each day.

        From the article it looks like Target determined in hindsight that they needed to do a better job on their in-house classification and prioritization configurations. Probably means they d

    • by DarkOx ( 621550 )

      I think the big problem is that 24x7 monitoring tends to be outsourced. It's not a good model. SIEM systems are supposed to surface whatever deserves human attention, but they either get so over-tuned they don't really detect much of anything, or they throw a lot of false positives.

      As long as your in-house CERT team is watching the SIEM, that works: they know the network. They recognize that a RADIUS server is likely to produce a lot of multiple authentication-failed-followed-by-authentication-succeeded events against t
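
      A generic sketch (plain Python, not any SIEM's rule language; names and thresholds are made up) of that kind of local correlation knowledge - routine fail-then-succeed bursts stay quiet, repeated failures with no success get flagged:

      from datetime import timedelta

      def flag_auth_sources(events, window=timedelta(minutes=5), threshold=10):
          """events: time-sorted dicts with 'time' (datetime), 'source' and
          'result' ('fail' or 'success'). Flags sources with many failures and
          no success shortly after the last one; the RADIUS box's normal
          fail-then-succeed pattern never trips it."""
          by_source = {}
          for e in events:
              by_source.setdefault(e["source"], []).append(e)
          flagged = []
          for source, evs in by_source.items():
              fails = [e for e in evs if e["result"] == "fail"]
              if len(fails) < threshold:
                  continue
              last_fail = fails[-1]["time"]
              recovered = any(e["result"] == "success" and
                              timedelta(0) <= e["time"] - last_fail <= window
                              for e in evs)
              if not recovered:
                  flagged.append(source)
          return flagged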

  • by joe_frisch ( 1366229 ) on Friday March 14, 2014 @06:08PM (#46487785)

    It isn't clear (at least to me) how many false alarms they got before they got the real one. The key to a good security monitoring system is not just to catch all the real threats, but to not flag imaginary or minor ones.

    • by Anonymous Coward on Friday March 14, 2014 @08:19PM (#46488781)

      Their alerts are the closest thing to security magic I have ever seen. Their false positive rate is astronomically low and they really do detect brand new malware.

      On the FireEye system I use at work, if it alerts, we take action. Always. For URLs they sometimes get it wrong, but we see one false positive a year with binaries. That's way beyond impressive when protecting tens of thousands of particularly gullible users; it's downright witchcraft. We often find that another system like URL filtering, IPS, or endpoint protection prevented a true infection, but we always do the homework when FireEye triggers. When you have real confidence the threat is real, doing the legwork to confirm infection is easy.

      For Target to have ignored FireEye's data borders on criminally negligent. It's really common to dig back through IPS logs once you know something was wrong and find a trove of data about the attack. FireEye is something else altogether; it's the most actionable security intelligence I have ever seen. It's truly astonishing technology because it's so effective. It captures binaries and URLs from the wire (IPS-style), email (SMTP MTA), and file shares and runs them in VMs. If enough malicious activity is detected, like deleting itself, changing registry keys, or contacting suspicious or blacklisted IPs (along with lots of other things), the binary is flagged in an alert. It's perfect for filling in the gaps left by traditional antivirus and the noise of intrusion prevention.
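
      For anyone who hasn't seen one of these, here's a toy rendition of the behavior-scoring idea (my own invented behavior names and weights, not FireEye's actual engine): run the sample, collect what it does, and flag it once enough independent indicators stack up.

      # Weights are illustrative only.
      SUSPICIOUS_BEHAVIORS = {
          "deletes_own_binary":         3,
          "modifies_run_registry_key":  2,
          "contacts_blacklisted_ip":    4,
          "disables_av_service":        3,
          "writes_to_system32":         1,
      }

      def classify_sample(observed, threshold=5):
          """observed: behaviors seen during the sandboxed run of one binary."""
          score = sum(SUSPICIOUS_BEHAVIORS.get(b, 0) for b in observed)
          return ("flag-as-malware" if score >= threshold else "no-alert"), score

      print(classify_sample(["deletes_own_binary", "contacts_blacklisted_ip"]))
      # ('flag-as-malware', 7)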

      • by Anonymous Coward

        ALERT: Anonymous Coward detected from vpn.marketing.fireeye.com

        • I'll back up what he says and put my name to it. I work for a reseller and I've deployed and managed FireEye, Palo Alto, Cisco, Sourcefire, and Juniper (ScreenOS and the JunOS mess) appliances. They all have their strengths and weaknesses, although they obviously aren't equal.

          FireEye's false positive rate is damn low in comparison to its competitors'. Sourcefire with FireSIGHT is pretty awesome as well (passive fingerprinting of endpoint traffic automatically correlated against breach attempts, aka fil
      • by skids ( 119237 )

        For Target to have ignored FireEye's data borders on criminally negligent.

        They may have had FireEye running alongside noisier products in a merged event stream. At that point employees working the alarms have to get to know each source/category of events and get a feel for the reliability of each product. With enough products involved and a low false positive rate, it would just take a typical understaffing/underskill situation for the staff not to know it was an extremely trustworthy source. Whether criminal negligence was involved cannot be determined from a distance.
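
        A small illustration of that failure mode (the product names and false-positive rates below are invented): if staff rank a merged queue by the shipped severity alone, a near-perfect source sorts no higher than its noisy neighbours; weighting by each product's observed track record is what makes it stand out.

        # Per-source reliability learned from experience, not shipped defaults.
        OBSERVED_FALSE_POSITIVE_RATE = {
            "sandbox_appliance": 0.001,   # the quiet, trustworthy box
            "legacy_ips":        0.40,
            "web_proxy":         0.25,
        }

        def effective_priority(alert):
            """alert: dict with 'source' and 'vendor_severity' (0-10)."""
            fp = OBSERVED_FALSE_POSITIVE_RATE.get(alert["source"], 0.5)
            return alert["vendor_severity"] * (1.0 - fp)

        merged_queue = [
            {"source": "legacy_ips",        "vendor_severity": 10},
            {"source": "web_proxy",         "vendor_severity": 10},
            {"source": "sandbox_appliance", "vendor_severity": 10},
        ]
        top = max(merged_queue, key=effective_priority)
        print(top["source"])   # sandbox_appliance: same shipped severity, far more weight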

  • They had a target on their back.
  • As we put more online, we need to adjust laws to properly punish companies; otherwise they'll continue not to care. Fine them something like 50% of their revenue (not profit) for the year of the incident and then they'll start to care.
  • It is utterly amazing how many people find solace in satisfying PCI guidelines, particularly when the human element of the security industry makes security a moving target (on a daily, not annual, basis). Not to mention that what the NSA did was render all the security upgrades everyone was forced to pay for worthless, as the encryption was broken well before it was released to the market, packaged, and put to work in new compiled libraries running in payment card apps. I think its

  • It's the first time my lazy-ass bank has changed my debit card number in a decade.

  • The first time this story was posted a month ago, it was reported that Target's internal security team warned management months in advance that there was a huge problem.

    Target's Internal Security Team Warned Management [slashdot.org]

    So which is it?
