Target Ignored Signs of Data Breach 95
puddingebola writes "Target ignored indications from its threat-detection tools that malware had infected its network. From the article, 'Unusually for a retailer, Target was even running its own security operations center in Minneapolis, according to a report published Thursday by Bloomberg Businessweek. Among its security defenses, following a months-long testing period and May 2013 implementation, was software from attack-detection firm FireEye, which caught the initial November 30 infection of Target's payment system by malware. All told, up to five "malware.binary" alarms reportedly sounded, each graded at the top of FireEye's criticality scale, and which were seen by Target's information security teams first in Bangalore, and then Minneapolis.' Unfortunately, it appears Target's security team failed to act on the threat indicators."
Re: (Score:2)
Re: (Score:3)
Ignoring Sound Advice != Brilliant Management (Score:2)
Honestly, how hard can it be to look after the source of executive pay?
Re: (Score:1)
you're shooting for a first post that's impossible to child under without painting the off-topic bulls-eye right on the front of my shirt.
Well done.
Remind me later (Score:5, Funny)
In Target's defense, FireEye said it would have to restart the computer to remove the threats.
Re: (Score:2)
Re: (Score:1)
Jesus fuck. It was a joke...
Re: (Score:2)
Re: (Score:1)
Nope. He [slashdot.org] is, though.
Re: (Score:1)
Blasted Slashdot! Apparently the profile link to "New Here" no longer works.
Re: (Score:3)
Sounds way too plausible for a joke. At least for anyone with experience in corporate IT.
Re: (Score:2)
Re: (Score:2)
To be fair? (Score:2)
...maybe they just had shitty email prioritization and crappy (read: default) alerting configs on their gear? Given that the typical admin in a large corp gets bombarded with a jillion emails daily (ranging from fluff to drop-dead serious, because vendors rarely know the difference), I can see warnings getting buried in the pile pretty easily. Mind you, this is not to excuse not acting on the warnings, but instead is posited as a way to explain why the warnings got missed in the first place.
All that said, any se
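The parent's prioritization point can be sketched in a few lines. This is a toy triage filter, not any real product's alerting config; the severity names, thresholds, and alert dict shape are all made up for illustration:

```python
from collections import defaultdict

# Made-up severity scale; real gear defines its own.
SEVERITY = {"info": 0, "warning": 1, "major": 2, "critical": 3}

def triage(alerts, page_at="critical"):
    """Split the daily flood into buckets by severity and pull out
    the alerts that should page a human rather than land in an inbox."""
    buckets = defaultdict(list)
    for alert in alerts:
        buckets[alert["severity"]].append(alert)
    urgent = [a for a in alerts
              if SEVERITY[a["severity"]] >= SEVERITY[page_at]]
    return buckets, urgent

alerts = [
    {"id": 1, "severity": "info", "msg": "disk at 70%"},
    {"id": 2, "severity": "critical", "msg": "malware.binary detected"},
    {"id": 3, "severity": "warning", "msg": "login from new IP"},
]
buckets, urgent = triage(alerts)
print([a["id"] for a in urgent])  # -> [2]
```

With a default (everything-emails-everyone) config, all three land in the same pile; the whole point of tuning is that only the one critical alert pages anyone.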
Re: (Score:1)
They didn't miss the warnings. They simply ignored them.
Re: (Score:2)
I don't think they really ignored them. They just prioritized them incorrectly.
Complacent.
Let's pretend to ignore the giant squid in the kitchen.
Something like that.
The next Vesuvius is about to erupt under corporate headquarters.
KEEP CALM
and
DO NOTHING
Re: To be fair? (Score:1)
Re: (Score:1)
Or, they were total idiots. I see that there is a problem with CPU usage that needs to be looked into...
# compress -f `ps -augxww | sort -rn +8 -9 | head -1 | awk '{print $2}'`
stolen from: http://www.gnu.org/fun/jokes/know.your.sysadmin.html
Re: (Score:3)
It could easily be alarm fatigue. After the 500 billionth 'red alert' that turned out to be someone checking their bank balance during lunch, a warning or 2 about a suspicious attachment can easily fly under the radar.
It happens in hospitals too and sometimes people die as a result.
Re: (Score:2)
So... the emails warning of the threat were treated as spam? That's kinda funny.
Re: (Score:3)
These IPS/IDS systems literally generate more alerts (usually including a bunch of false positives) than you could possibly read in a day. Heck, it would take a year or two to learn each signature/threat in their catalogue in detail; only people who specialize in security and keep up to date daily can make the calls as to what alarms are noise and what's indicative of real activity (no, the default "levels" shipped with the product don't cut it, because if you only look at the "red" ones
if done correctly. (Score:1)
Then the few serious hacks are handled by the capable PHBs, management will barely hear about it, and will wonder why two expensive PHBs and an expensive intrusion system are needed. After all, what information could really be hacked?
It is not like Target has to pay the victims. The risk does not sit with the correct party.
Re:To be fair? (Score:4, Insightful)
Maybe they're just fucking idiots, with an IT department that either is utterly inept or had been so marginalized by MBA morons and sociopaths.
Re: (Score:2, Insightful)
Maybe they're just fucking idiots, with an IT department that either is utterly inept or had been so marginalized by MBA morons and sociopaths.
Or, with a name like 'Target', they were pretty much asking for it?
Re: (Score:2)
"Maybe they're just fucking idiots,"
This describes the management of IT departments at all large corporations. IT is an expense; they are not really important like the marketing department.
Re: (Score:2)
Maybe they're just fucking idiots, with an IT department that either is utterly inept or had been so marginalized by MBA morons and sociopaths.
Don't worry. Instead of security they have been moved to the online sales department. Now they are all doing Photoshop work for the summer swimsuit line, so they will be unlikely to cause Target any further embarrassment...
Re:To be fair? (Score:5, Insightful)
I'd wager it wasn't the security team that dropped the ball. I work in the same role (I'm the most senior member of the security team), and I can tell you first hand that I don't have the authorization to act in matters of that scope independent of the executive team in situations like those. I have to forward my recommendations up the chain and get approval.
That causes delays. Oftentimes, things then get lost at the executive level. Whenever there are contractors involved it's even worse, as they spend a week or so arguing over whose responsibility it is, who is going to pay for it, how much downtime it's going to represent, how much money they're going to lose, etc., etc. Executives are also really bad at judging risk when it comes to security. They'll expose themselves and their companies to staggering amounts of risk, if for no other reason than the fact that the failure/security breach/what-have-you isn't impacting business "right now," while shutting down an ecommerce system to patch it will impact the bottom line *right now*; they would rather risk "maybe" losing money at some future date than know they're losing money "right now."
Executives will mortgage their companies' futures at every possible opportunity for a few extra dollars today.
The number of times I've taken a GLARING security issue up the chain only to have "how long can we leave it before it impacts business" be their main concern is staggering. If it's a vulnerability on a production, WAN-facing system, but we don't have evidence of it being actively exploited, it's not considered as critical as taking that system offline for an hour to patch/test it. The certainty of lost revenue in that hour is more meaningful than the potential of abuse at a later date. The worst part of it all is that when that later date does come around and things get really bad, they all point their collective fingers at the security team and none of them take any responsibility whatsoever.
You're damned if you do, damned if you don't and blamed all the way around.
Corporate InfoSec is a very, very frustrating occupation. I feel for those poor guys at Target.
Re:To be fair? (Score:5, Interesting)
The security team should have a license to kill from the executive team. We do; our instructions are, if we believe a breach is in progress, "shut it down."
Mind you, we have never done it. We came very, very close to doing so once on a false positive. The operations team failed to inform us of some activity they were going to be doing. Fortunately the guy answered his phone, but otherwise we would have pulled the plug and islanded the entire DMZ, ecommerce and the corporate home page and all.
After reviewing the after-action report, the executive team agreed we would've been right to do it given what we knew.
That is how it should work.
Re: (Score:3)
It's funny how IT is a pure cost center right up until it suggests shutting down one of those pure costs for 5 minutes. Then suddenly it's "OMG NO! We'll lose bazillions!"
Re: (Score:3)
At Comcast, that is how we worked from 1998 to 2007 when I was there. Security breach? I can tell the CEO to fuck himself to his face and yank the plug. Many times I saw executives escorted out of the data center by guards because they were being idiots, demanding we restore internet access. Management are clueless morons; they must be left out of the loop for security.
It's why Cops dont have to call the mayor when they see a guy running into a bank with a gun in his hand and a big sack with a dol
Re: (Score:2)
Yeah. That's why I never paint a dollar sign on my big sack, or my gun either for that matter!
Re: To be fair? (Score:2)
I used to work at a DoD research lab. Whenever the security team didn't understand something, they shut down the network. They didn't know much, so they shut down systems often and for long periods. And their performance was judged entirely on how many attacks got through, not on our lab's productivity.
It's a big reason why they kept on losing good developers and researchers, me included.
Re: (Score:3)
Not only in InfoSec; most warnings from the people who know, sent up to the people who don't know but have the authority to act or spend money, are just ignored.
Several years ago I told Data Center management that a vital piece of hardware had reached end of life and needed to be replaced else we'd be at risk for a total system outage that might last for days.
They didn't want to spend the $30,000 until they absolutely had to, so they ignored my recommendation. In the end, nothing bad happened, but it very easil
Re: (Score:3)
Levy a fine against them equivalent to their entire profit from November 27 until December 19 when they finally admitted the breach. Maybe companies will think twice before trying to sweep these things under the rug.
Re: (Score:2)
Given that the typical admin in a large corp gets bombarded with a jillion emails daily (ranging from fluff to drop-dead serious, because vendors rarely know the difference), I can see warnings get buried in the pile pretty easily.
I used to work in the NOC for a major telco and let me tell you, your ENTIRE job is being able to filter through that shit to see the big picture. I was looking at hundreds of network alarms simultaneously at any moment on any given day, and I had to know what was going on.
It's not possible, so it must be a false alarm (Score:2)
I'd wager there's about an 80% chance someone said the following:
"There's no way someone could have infected the POS systems; must be something wrong with this stupid FireEye thing..."
Re: (Score:1)
Re: (Score:2)
The customers who were liable for exactly $0 of fraudulent charges under US law?
Re: (Score:1)
The laws which shield consumers from this liability are actively being lobbied against by the banks.
The banks are trying to use chip and pin to shift this liability to their customers.
source: in the banking industry for 15 years.
Re: (Score:2)
Chip and pin is being forced by Visa and Mastercard not the banks.
Heart of the matter (Score:2)
From TFA:
"With today's amount of detection data, just signaling an alarm isn't enough. The operator/analyst should be able to understand the risk as well as the recommendation of each incident, in order to be able to prioritize."
My experience is that companies skimp on the 24x7 Network Operations Center personnel. Get cheap "eyes" on the logs and then hope that they are trained to recognize what is going on... In most cases they just forward to someone else, and when you get the 15th false positive everybody
Re: (Score:3)
Well, there you go.
Re: (Score:2)
Well, there you go.
We don't know the whole story here but it looks like Bangalore was a 1st tier center that escalated this issue to someone in US for further investigation. Sounds to me like the problem was in US.
Re: (Score:2)
Well, there you go.
We don't know the whole story here but it looks like Bangalore was a 1st tier center that escalated this issue to someone in US for further investigation. Sounds to me like the problem was in US.
Target is most likely using a 'Follow the sun' model, so if the alert happened at night (in the US) the Bangalore security team would have been monitoring at the time.
Re: (Score:2)
Define "seen." Being "seen" among a flood of similar alarms too big for the team to handle is a bit different than being "seen" as one of a few of the day's most elevated alarms. Many of these devices crank out thousands of top-of-scale alerts per hour, and add tens to hundreds of new alerts to their catalogue each day.
From the article it looks like Target determined in hindsight that they needed to do a better job on their in-house classification and prioritization configurations. Probably means they d
Re: (Score:3)
I think the big problem is that 24x7 monitoring tends to be outsourced. It's not a good model. SIEM systems are good for flagging anything that deserves human attention, but they either get so over-tuned they don't really detect much of anything, or they throw a lot of false positives.
As long as your in-house CERT team is watching the SIEM, that works; they know the network. They recognize that the RADIUS server is likely to produce a lot of multiple-authentication-failed-followed-by-authentication-succeeded events against t
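The kind of tuning the parent describes can be sketched as a toy correlation rule: alert on repeated auth failures, but let the known-benign fail-then-success pattern from a trusted host reset the count. The host name, event shape, and threshold are invented for illustration, not any real SIEM's schema:

```python
def correlate(events, known_noisy=("radius01",), fail_threshold=3):
    """Flag users with repeated auth failures, except when the events
    come from a host the in-house team knows produces benign
    fail-then-success noise."""
    fails = {}
    alerts = []
    for ev in events:
        user = ev["user"]
        if ev["outcome"] == "fail":
            fails[user] = fails.get(user, 0) + 1
            # Only alert if the noise isn't from a known-benign source.
            if fails[user] >= fail_threshold and ev["host"] not in known_noisy:
                alerts.append(ev)
        else:
            fails[user] = 0  # a success resets the streak
    return alerts

# Benign: the RADIUS box fails a few times then succeeds -> no alert.
benign = [{"user": "alice", "host": "radius01", "outcome": "fail"}] * 3 \
       + [{"user": "alice", "host": "radius01", "outcome": "success"}]
print(correlate(benign))  # -> []

# Suspicious: three straight failures from an untrusted host -> alert.
bad = [{"user": "bob", "host": "web01", "outcome": "fail"}] * 3
print(len(correlate(bad)))  # -> 1
```

An outsourced tier-one team without this network-specific knowledge either whitelists too much or drowns in the RADIUS noise; that's the parent's point.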
False to true ratio? (Score:5, Insightful)
It isn't clear (at least to me) how many false alarms they got before they got the real one. The key to a good security monitoring system is not just to catch all the real threats, but to not flag imaginary or minor ones.
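The base-rate arithmetic shows why the false-to-true ratio dominates. A back-of-the-envelope sketch, with all numbers invented for illustration (these are not Target's actual figures):

```python
# Even a 0.1% false positive rate buries real attacks when attacks
# are rare: precision collapses to about 1%.
daily_events = 1_000_000   # events scanned per day (assumed)
base_rate = 1e-5           # fraction that are real attacks (assumed)
tpr = 0.99                 # detection rate on real attacks (assumed)
fpr = 0.001                # false positive rate on benign traffic (assumed)

real = daily_events * base_rate
benign = daily_events - real
true_alarms = real * tpr
false_alarms = benign * fpr
precision = true_alarms / (true_alarms + false_alarms)
print(round(true_alarms), round(false_alarms), round(precision, 3))
# -> 10 1000 0.01
```

Roughly 100 false alarms for every real one, even with a 99% detection rate; an analyst who ignores "red" alerts is wrong only 1% of the time, which is exactly how real breaches get missed.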
Have you used FireEye? (Score:4, Informative)
Their alerts are the closest thing to security magic I have ever seen. Their false positive rate is astronomically low and they really do detect brand new malware.
On the FireEye system I use at work, if it alerts we take action. Always. For URLs they sometimes get it wrong, but we see one false positive a year with binaries. That's way beyond impressive when protecting tens of thousands of particularly gullible users; it's downright witchcraft. We often find that other systems like URL filtering, IPS, or endpoint protection prevented a true infection, but we always do the homework when FireEye triggers. When you have real confidence the security threat is real, doing the legwork to confirm infection is easy.
For Target to have ignored FireEye's data borders on criminally negligent. It's really common to dig back through IPS logs once you know something was wrong and find a trove of data about the attack. FireEye is something else altogether; it's the most actionable security intelligence I have ever seen. It's truly astonishing technology because it's so effective. It captures binaries and URLs from the wire (IPS-style), email (SMTP MTA), and file shares, and runs them in VMs. If enough malicious activity is detected, like deleting itself, changing registry keys, or contacting suspicious or blacklisted IPs (along with lots of other things), the binary is flagged in an alert. It's perfect for filling in the gaps left by traditional antivirus and the noise of intrusion prevention.
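Behavior-based detonation scoring, as the comment describes it, boils down to "run the sample, weight the suspicious behaviors, alert past a threshold." A toy illustration follows; the behavior names, weights, and threshold are all invented (FireEye's actual scoring is proprietary):

```python
# Illustrative weights for behaviors observed while detonating a
# sample in a sandbox VM. Higher weight = more clearly malicious.
SUSPICIOUS = {
    "deletes_self": 3,
    "modifies_registry_run_key": 2,
    "contacts_blacklisted_ip": 4,
    "disables_av": 4,
}

def score_sample(observed_behaviors, threshold=5):
    """Sum the weights of observed behaviors; flag if over threshold.
    Unknown behaviors score zero rather than raising."""
    score = sum(SUSPICIOUS.get(b, 0) for b in observed_behaviors)
    return score, score >= threshold

score, flagged = score_sample(["deletes_self", "contacts_blacklisted_ip"])
print(score, flagged)  # -> 7 True
```

The appeal over signature-based antivirus is that brand-new binaries still exhibit these behaviors, so no prior signature is needed.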
Re: (Score:1)
ALERT: Anonymous Coward detected from vpn.marketing.fireeye.com
Re: (Score:1)
FireEye's false positive rate is damn low in comparison to its competitors'. Sourcefire with FireSIGHT is pretty awesome as well (passive fingerprinting of endpoint traffic automatically correlated against breach attempts aka fil
Re: (Score:2)
For Target to have ignored FireEye's data borders on criminally negligent.
They may have had FireEye running alongside noisier products in a merged event stream. At that point employees working the alarms have to get to know each source/category of events and get a feel for the reliability of each product. With enough products involved and a low false positive rate, it would just take a typical understaffing/underskill situation for the staff not to know it was an extremely trustworthy source. Whether criminal negligence was involved cannot be determined from a distance.
Didn't they know? (Score:1)
Re: (Score:1)
We don't know if outsourcing was an issue.
But, if it was it probably won't be fixed because outsourcing saved them money. It doesn't matter about the crappy service we are used to from Bangalore and the stupid idea (from a security perspective) of outsourcing any security. They saved money by outsourcing. They continue to save money. Even after any fines or expenses they are going to have to pay, it is still a money maker to outsource the labor. Customers be fucked.
Re: (Score:2)
The world has moved on and left you behind. The most effective silicon validation team I know is in Bangalore. We employ people in Bangalore because they're good.
Re: (Score:1)
Probably pays market rate. Which I would imagine would be somewhat less, but not astonishingly so for a world-class (assuming he's correct) team anywhere in the world.
Sure, you start with the cheap outsourcing because it's cheap. I've done so for specific jobs, and sometimes I've also hired a specific contractor full time simply because they were great.
Aside from english skills and time zone differences, I honestly (as a small hiring manager) have not seen much difference in the quality of people from bot
Re: (Score:2)
“As I hurtled through space, one thought kept crossing my mind - every part of this rocket was supplied by the lowest bidder.” - John Glenn
Need harsher punishments (Score:2)
Re: (Score:3)
It still amazes me that companies are willing to outsource or "right shore" their critical IT development and functions to third parties like this. Still, Target Management who have now been sent packing are ultimately held responsible, except of course the CEO and the Board who probably rubber stamped the deal because it could "save them money." At one time I held a senior position at a major transportation company and the first question during budget reviews with our CIO was "what are we going to outsou
Re: (Score:2)
The article said that after Bangalore the alarms got handled in Minneapolis. Can't complain about rightshoring with that.
Re: (Score:2)
No, I was commenting on the OP that blamed IT/India outsourcing, which you can't really blame on anybody except whoever had the responsibility of dealing with the problem. Rightshoring/outsourcing doesn't absolve an organization of responsibility for the data, but it can make the problem much worse if upper management thinks it's not their responsibility anymore.
Maybe they were PCI compliant? (Score:1)
It is utterly amazing how many people find solace in satisfying PCI guidelines, particularly when the human element of the security industry makes security a moving target (on a daily, not annual, basis). Not to mention that what the NSA did was render all the security upgrades everyone was forced to pay for worthless, as the encryption was broken well before it was released to the market, packaged, and put to work in new compiled libraries to run in payment card apps. I think its
In Target's defense (Score:2)
It's the first time in a decade my lazy-ass bank has changed my debit card number.
Conflicting Slashdot stories (Score:2)
Target's Internal Security Team Warned Management [slashdot.org]
So which is it?