

Data Storm Caused Nuclear Plant To Shut Down
rs232 writes to let us know that the US House of Representatives Committee on Homeland Security called this week for the Nuclear Regulatory Commission to further investigate the cause of excessive network traffic that shut down an Alabama nuclear plant. Investigators want to know whether the data storm could have been initiated from outside the plant.
Re: The reason? (Score:5, Funny)
Re: (Score:2)
Storm in the tubes (Score:5, Interesting)
1) They can't describe what happened
2) They can't tell whether outside interference, whatever its nature, occurred
3) This might have an internal/design cause
Re:Storm in the tubes (Score:5, Insightful)
Re: (Score:3, Insightful)
Maybe it's the precursor to a logic bomb [wikipedia.org]!
Wow, can't you request article deletion from Wikipedia on the basis of "ridiculous term"?
Or better yet, mind erasing for the very same reason...
Re: (Score:2)
Re: (Score:2, Informative)
Re: (Score:2)
4) Apparently the computers that control a nuclear plant are connected to the public Internet, allowing anyone in the world to send them commands, viruses, or random garbage, and thereby to gain remote control over the reactors. Oh, and according to TFA, another nuclear plant runs Windows (since it was hit by the Slammer worm).
Someone please tell me that I'm wrong and the people who design these plants aren't this stupid. Please ?
Re: (Score:2)
One should never ever count on software to handle emergency shutdown of anything - you simply cannot risk having the emergency shutdown wait
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Might I recommend that you RTFA?
The "data storm" appears to have been on an internal network (seemingly not connected to anything apart from other internal networks), where a data acquisition and control device barfed on some bad data and started to spew garbage onto the network. Inadequate data validation combined with inappropriate or i
Re:Storm in the tubes (Score:5, Informative)
I used to work as an embedded developer, and we used that term.
It was used in embedded communications when one or several devices went bonkers and flooded the common bus.
A bit like a packet storm, but without IP or any other packet protocol, so it was called a data storm.
It stands to reason that in a nuclear plant there are a lot of old fogeys, so the company jargon might be a bit outdated and odd-sounding to an outsider.
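To make that concrete, here's a toy model (nothing from TFA, all numbers invented) of why one babbling node on a shared bus starves everyone else:

    # Toy model of a shared bus: one babbling node starves everyone else.
    # Purely illustrative; the capacity and rate numbers are made up.
    BUS_CAPACITY = 100          # frames the bus can carry per tick
    NORMAL_RATE = 2             # frames a healthy node offers per tick
    BABBLE_RATE = 10_000        # frames the broken node offers per tick

    def delivered_fraction(n_nodes, babbler=False, ticks=100):
        delivered = offered = 0
        for _ in range(ticks):
            offers = [NORMAL_RATE] * n_nodes
            if babbler:
                offers[0] = BABBLE_RATE
            total = sum(offers)
            # Every node gets a share of the bus proportional to what it offers.
            for i, frames in enumerate(offers):
                share = int(BUS_CAPACITY * frames / total)
                if i != 0 or not babbler:      # count only the healthy nodes
                    delivered += min(frames, share)
                    offered += frames
        return delivered / offered

    print("healthy bus:", delivered_fraction(20))
    print("one babbler:", delivered_fraction(20, babbler=True))

On the healthy bus everything gets through; with one babbler the other nodes' delivery rate drops to roughly zero, which is exactly the "storm" the old-timers are naming.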
Re: (Score:2)
Back in the old days, this happened (occasionally) with old coax-based Ethernet LANs. Decent 10Base-T hubs have provisions for blocking individual devices that 'go nuts' and keeping them from screwing up the entire network. It's still possible for a single device to lock up, but it would be a very poorly designed net
Re: (Score:2)
is it because they are deliberately trying to confuse the issue to avoid blame?
That is the truth. Twenty-five years ago, when deploying ARCNET and later thinnet, we isolated our production systems; even though we didn't have unsecurable OS issues back then, we realized PC/user behavior was not worth the risk to automated machinery. Today, it is more critical than ever.
But the gross incompetence here goes to management. They should all be fired for cause and with prejudice, and the system should be shut down until it is design
Datastorm wrote Procomm (Score:2)
Now back on topic: If you RTFA it says that there was an embedded networked controller that was "babbling" (flooding the internal network with unwanted traffic). Unless some hacker from o
Perfectly cromulent (Score:2)
http://kb.iu.edu/data/aibq.html [iu.edu]
Re: (Score:2)
Two key ingredients if you want people to
a) think you know what you are doing, and
b) be scared enough to actually do something.
Shut down? (Score:5, Insightful)
Do investigators also want to know how a "data storm" could have caused a nuclear plant to shut down?
Re: (Score:2, Informative)
Re: (Score:3, Informative)
If there is a communications problem and a PLC blinks out of existence on a mission-critical system, failing the entire system is the only safe thing to do, to prevent damage to people, the environment, and equipment.
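For the curious, the fail-safe logic being described is roughly this (a minimal sketch with made-up names and timeouts, not anyone's actual plant code):

    # Minimal sketch of "fail safe on lost comms": if the PLC heartbeat goes
    # quiet for too long, trip to the safe state rather than keep running blind.
    # All names and timeouts here are hypothetical.
    import time

    HEARTBEAT_TIMEOUT_S = 2.0

    class PumpController:
        def __init__(self):
            self.last_heartbeat = time.monotonic()
            self.tripped = False

        def on_heartbeat(self):
            self.last_heartbeat = time.monotonic()

        def poll(self):
            if not self.tripped and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
                self.trip("lost communication with PLC")

        def trip(self, reason):
            self.tripped = True
            print(f"TRIP: {reason} -- commanding pumps to safe state")

    ctrl = PumpController()
    ctrl.poll()            # fresh heartbeat, nothing happens
    time.sleep(2.5)        # simulate the PLC going silent
    ctrl.poll()            # -> TRIP

The point is that the default action on silence is "trip", not "carry on and hope".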
Re: (Score:2)
Do investigators also want to know how a "data storm" could have caused a nuclear plant to shut down?
Not really; it is BS political grandstanding, acting like they are doing something when they are not.
For if it were not, the plant would have been shut down by now. They are saying this happened in August, and I would bet the problem still exists.
The answer is easy (Score:2)
nothing to see, move along. (Score:5, Insightful)
Some choice quotes, emphasis added:
An investigation into the failure found that the controllers for the pumps locked up following a spike in data traffic -- referred to as a "data storm" in the NRC notice -- on the power plant's internal control system network. The deluge of data was apparently caused by a separate malfunctioning control device, known as a programmable logic controller (PLC).
"Conversations between the Homeland Security Committee staff and the NRC representatives suggest that it is possible that this incident could have come from outside the plant," Committee Chairman Bennie G. Thompson (D-Miss.) and Subcommittee Chairman James R. Langevin (D-RI) stated in the letter. "Unless and until the cause of the excessive network load can be explained, there is no way for either the licensee (power company) or the NRC to know that this was not an external distributed denial-of-service attack."
Wow. Just...wow. As if you needed more proof that this wasn't a hacking attempt:
"The integrated control system (ICS) network is not connected to the network outside the plant, but it is connected to a very large number of controllers and devices in the plant," Johnson said. "You can end up with a lot of information, and it appears to be more than it could handle."
Seriously, how stupid do you have to be to think "OMG, Haxxors?" Answer: work at Homeland inSecurity, or be a Congresscritter. They already figured it out. It was a controller for a specific piece of equipment that flooded the network and triggered a bug in the variable-frequency-drive controllers for pumps.
You missed one.... (Score:4, Interesting)
Sounds to me like the vendors under-engineered their network and still charged mega-bucks for it. The auditors, I'm sure, are making the most out of this to justify their fee.
Nothing to see, move along - I'll say!
Re: (Score:3, Funny)
Re:nothing to see, move along. (Score:5, Informative)
However, I will fully put the blame on the PLCs. Those little suckers come in handy but if you don't completely understand every line of code and every instruction they can f_ck you over.
I also love how they say "well if you can't prove it wasn't, then it must have been".
Re:nothing to see, move along. (Score:5, Informative)
Re: (Score:2)
Driven to it by idiot management and idiot politicians.
Going for idiot "employees" and "designers" is going after the effect, not the cause. You want idiots, you hire them. You want "cheap" designs, the designer will design them. Employees and designers are told what to do. If you push back in these roles too much you will not have a job. Mind you, I would include the primary contractor.
Fixing this means fixing management. The best way to do this is to sh
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
Gawd, another one.
1. It wasn't my assertion -- I did not make the original post about Browns Ferry. Try reading next time!
2. I just happened to hear a report on PBS about Browns Ferry the day of this post.
3. As I mentioned before, you can confirm it using Google. Here, I'll even show you how to find it using google [google.com]
4. What is it about "/. is not an encyclopedia" that you don't understand?
There
Here is a citation (Score:2)
list of incidents at Browns Ferry (click on candle) [ccnr.org]
Re: (Score:2)
Though I do notice that it says the plant has been closed for a decade until this year. I guess it's best to check all options, but how is that tiny possibility remotely any sort of news when compared to the vastly more likely problem of a bug in a brand new control system, especially when it seems they found exactly what that bug was already?
It's not stupid. (Score:5, Insightful)
Seriously, how stupid do you have to be to think "OMG, Haxxors?" Answer: work at Homeland inSecurity, or be a Congresscritter. They already figured it out. It was a controller for a specific piece of equipment that flooded the network and triggered a bug in the variable-frequency-drive controllers for pumps.
As someone who used to work in systems engineering for a sister BWR, I think the inspection is a good idea. Oh, there's dumb and there's nuclear dumb, but this is not a case of either. Nuclear dumb involves putting machine-gun nests inside the plant. Finding the root cause of the accident is a good idea.
Handwaving about a PLC device won't do. What ultimately caused the PLC malfunction needs to be answered at a component level. There's going to be something wrong with it; that should be reported, and every other device like it needs to be ripped out and trashed. If there is no component failure, there's a software problem, which also must be understood.
Yes, it could have been hackers. The "internal control network" might at some point hit a desk that's connected to the wider world. It could be something mundane and unintentional, like an operator's virused-up laptop.
An outage like that is something that's going to have both NRC and corporate ass-chewers looking at everything. Corporate might want to paint a nice picture for the NRC, but the poor devil that lies to them goes to jail. In either case, the problem will be identified and eliminated.
You might also have noted in the article that this is not the first plant to go thumbs down over some winblows-borne virus. In 2003, the Slammer worm caused havoc at an offline Ohio plant [securityfocus.com]. Yes, that was hackers. They did not mean to do it, but the plant's systems were open to it and failed. That's not acceptable from any standpoint.
Despite the better advice of the computer people at the plants, Entergy is a big M$ Partner. They take the big dogs out fishing and sell them the works. Ten years ago, M$ had something worthwhile and interesting. It was used in places it should not have been. Worse, the flaws from ten years ago have not been addressed or fixed. A good clean-up is in order.
A BWR dude, downmodded fast. (Score:2)
Wow, someone who's worked in a BWR downmodded in less than ten minutes. Nice work, trolls.
Erris == twitter (Score:2)
Mod parent up ... (Score:2)
Re: (Score:2)
Sadly, he's right, though. MS stuff isn't appropriate in life/safety-critical applications. Maybe for running e-mail systems, but not for anything that'll have bad physical effects if it fails. Embedded systems, specialized OS's, or even minimal UNIX systems do the job with far more reliability.
-b.
Re: (Score:2)
One way to safeguard against data storms to some extent is to create network segmentation so that a malfunction in one part of the plan
Re:nothing to see, move along. (Score:4, Interesting)
A random fluctuation in internal traffic levels seems equally unlikely. Why? Because it has worked for some time, and I doubt the reactor was doing anything unusual at the time. A true network storm is unlikely - the term exists, but it describes an astronomically rare situation. If a network is flooded, it is either near or at capacity. A network storm is when capacity is exceeded in a way that is self-perpetuating. The last time I remember the term being used in a public forum was, I think, over twelve years ago, when a public demonstration of the multibone caused a cascading router flap that shut down a large segment of the Internet backbone due to total gridlock. It wasn't just that nothing else could get through - nothing AT ALL could get through.
Where does this leave us? It makes it extremely unlikely that the network traffic per se had anything to do with the shutdown. Much more likely is a cumulative error in the devices involved that merely happened to turn into a fatal bug at roughly the same time as the network spiked. It might be network related, but nobody here can seriously believe it was network caused. Networks may be polled, in which case network traffic that escapes being polled is simply never seen. Network drivers may also be event-driven, but if the interrupt handler is buggy - which would usually mean the handler can be interrupted by itself indefinitely - it's hardly the fault of the network.
In other words, this is a gross programming error that the coders and managers are desperately trying to blame on something - anything - other than their own ineptness. It might merit Scott Adams making a Dilbert cartoon over, but that's it.
Re: (Score:3, Insightful)
Look up "Poisson distribution". At low packet rates, large rate fluctuations by random chance are the rule. You also have to consider events that can trigger a common packet rate spike, such as a a non-critical subnet being power cycled. Combine this with a device that has an overflowable packet buffer and you have a recipe for inevitable failure.
Re: (Score:3, Insightful)
This is not about the network being highly loaded with lots of packets coming from all sorts of places. This is about a single device for some reason flooding the network. I have seen the results of units flooding a network with broadcast traffic. I don't consider it highly unlikely for one unit to eventually start doing that becaus
Re: (Score:3, Insightful)
When investigating an accident, you cannot rule out an occurrence on the grounds that it is unlikely or rare - unless you have positive evidence that said unlikely or rare condition did not occur, or positive evidence of another cause. "Unlikely"
Re: (Score:2)
Re: (Score:2)
Wrong. You shouldn't be able to VPN in -- there should be dedicated machines specifically for the purpose of accessing and monitoring that critical network (assuming that it's really that critical). Once a VPN link is opened, malicious traffic can traverse it the same as any other network. The only totally airtight security is complete physical separation.
-b.
Re: (Score:2)
Re: (Score:2)
Certainly easier for them if they have an outside contractor to blame if something fucks up with dangerous equipment. In the current liability climate, I can't blame them too much.
-b.
Re: (Score:3, Informative)
In my case it's for two reasons. First, the disconnected network is considered the critical one, and it is far more locked down than the one connected to the internet. Second, the one connected to the internet is the one used 99% of the time.
Anytime we touch a system there's a chance we'll screw it up/break it. Our treatment of the isolated network is pretty much 'don't fix what isn't broken'. It wasn'
Re: (Score:2)
Well, the reactor should be critical. If the network controlling it were critical too, I'd be a bit disturbed.
Standards! (Score:5, Insightful)
You'd hope that in something as critical as a nuclear power plant the answer would be, very quickly, "no, it didn't come from an external source because that's impossible". Followed by detailed analysis of the logs to determine which internal system screwed up.
That said, the article is a bit sparse on actual technical details, so my derision may be unwarranted.
Re: (Score:3, Interesting)
Re:Standards! (Score:5, Insightful)
Actually, power plants have to have a connection to the outside world. Why? Load-balancing for the power grid. If another plant goes down somewhere, this plant needs to know about it so that it can adjust output to compensate. For that, all the plants need to be hooked to a communications grid, which could conceivably be hacked (even though -- I would hope -- it's not connected to the Internet).
Re:Standards! (Score:5, Interesting)
Given that, any hacking would have to include a social engineering element designed to fool the operators into making the wrong decisions. If we include that stipulation, yes, it's quite conceivable. If we postulate someone bridging the air gap, maybe by something as simple as hooking a laptop that also contains a wireless card into the control network, then a non-social engineering attack becomes conceivable, but not really otherwise.
DOE and NRC doctrine is that adjusting reactor output based solely on a trigger event outside the core instrumentation is always supposed to require a high-level human decision. Supervisors are also at least supposed to be trained to the point where they can make these decisions without adding any more response time than a conventional (i.e., hydroelectric or coal-based) plant would need for its human-level decision events. (Yes, they have them. For example, the four TVA dams that supply Alcoa aluminum face a whole series of individual and joint human-level decisions every time Alcoa's main furnace system glitches, and these have to include how long Alcoa expects them to need to dump power elsewhere and, for each of them, what options the other three dams are considering.)
The DOE does not legally presume that reactors are even as responsible for balancing the grid as conventional plants. But given how much older a lot of the conventional plants are, it's pretty easy to do much, much better than is strictly required, and it should be noted that in the last New York blackout all the cascade effects and switching failures happened in 1940s-era or earlier fossil fuel plants, with the worst points being 1930s or even 1920s era designs. Still, the rules are that if the conventional plants are failing at load balancing, even if the grid is experiencing severe cascade failures, the nuclear sites will let the whole thing crash rather than take the risks of trying to stabilize the grid by actually modulating their reactions.
Re: (Score:3, Interesting)
It's called hydro - or sometimes even pumped storage. Conventional thermal power is cheap but it takes a long time to increase output unless there is already spinning reserve. Non-conventional thermal power still takes time BECAUSE IT IS NOT MAGIC, unlike what we are led to believe by those that want to build a few hundred 1950s-style plants painted green. Nuclear power possibly would be a
Re: (Score:2)
I try to always point out that I'd use the nuclear power to replace coal power, which takes nearly as long to cycle up or down, not to try to replace more responsive
Re:Standards! (Score:5, Informative)
After some R&D and building some prototypes of promising new designs I'd be right with you - but our current best bets are things out of South Africa (pebble bed) and India (accelerated thorium) done on very small buidgets with very small teams and they need more work. The mainstream is just chasing taxpayer supplied pork. If they were after more than a handout they would be putting in some effort - instead they spend orders of magnitaude in PR, advertising and outright bribes than R&D.
As for costs - you can't just conveniently ignore capital costs. If you could hydro, wind, solar etc would win every time even in those places where it would be a stupid idea or where the capital costs are far too large for the return. Nuclear power is a possiblity in those places that have the infrastucture of a weapons program but everywhere else you would have to build up an entire industry from scratch. Iran is the best example currently where that is taking place and it has cost them a fortune to do so - hence few people think it is for purely civilian purposes there. In South Africa it was possible to take people from the weapons program to develop pebble bed. It is also far too big an investment for private enterprise - hence no new plants getting built while governments had cold feet on the issue and the "new generation" designs from companies like Westinghouse are just tweaked 1950s designs painted green.
Re: (Score:2)
Who says that I'm ignoring [uic.com.au] them? The referenced site is for European power, and nuclear comes in cheaper at 23.7 E/MWh vs. 28.1 for coal. That's including capital costs, but not including the CO2 tax, which raises coal to 44.3.
If you could, hydro, wind, solar etc. would win every time, even in those places where it would be a stupid idea or where the capital costs are far too large for the return.
In the USA, hydro is considered 'overutilized', i.e. we can'
Re: (Score:2)
Now we are getting somewhere - but to see what I mean you will have to look at what assumptions were applied to get those numbers, wonder why the British experience is vastly different in economic terms despite doing everything the same way, and wonder why private enterprise never builds these things even though they are portrayed as the winning option. Once you've done
Re: (Score:2)
We are seeing a number of new reactor construction projects - it's
Re: (Score:3, Interesting)
Indeed.
Unfortunately, sometimes our favorite software supplier is involved... [theregister.co.uk]
Re: (Score:2)
Redesign the entire infrastructure (Score:2, Insightful)
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
Re:Redesign the entire infrastructure (Score:5, Insightful)
i think the fact that an unforeseen erroneous condition caused the plant to *shut down* and not *melt down* is a pretty good indication that it was designed quite well.
There will always be unforeseen situations. The key is for the system to shut down in an orderly fashion. In programming, this is accomplished through the use of error traps.
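A bare-bones illustration of the error-trap idea (hypothetical names, not real plant code): any unforeseen condition in the control loop drops you into an orderly shutdown instead of leaving the system running in an unknown state.

    # Sketch only: the "error trap" pattern -- unforeseen == stop safely.
    # Names are made up for illustration.
    import logging

    logging.basicConfig(level=logging.INFO)

    def read_sensor():
        # Stand-in for real instrumentation; here it just misbehaves on purpose.
        raise ValueError("garbled reading from flow sensor")

    def orderly_shutdown(reason):
        logging.error("initiating orderly shutdown: %s", reason)
        # ...close valves, spin down pumps, notify operators...

    def control_loop():
        try:
            while True:
                flow = read_sensor()
                # ...normal control logic would use 'flow' here...
        except Exception as exc:        # the error trap
            orderly_shutdown(f"unhandled condition: {exc}")

    control_loop()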
Now, the hysteria surrounding terrorism is another thing the plant engineers have to worry about.
i just wonder if and when we get to put this hysteria behind us, and get along with our lives. unfortunately, terry gilliam's brazil is on a constant loop in my mind these days. . . .
mr c
Re:Redesign the entire infrastructure (Score:4, Interesting)
This might sound unreasonable, but I would never expect a power plant (which has a lot of things depending on it) to shut down unless there was a major failure of a component or some other safety risk. Network traffic on its own, or its effects, shouldn't ever be the cause. In a nuclear power plant you control ALL the nodes attached to the network; the attached nodes should not be in a position where they can saturate any individual node to the point of failure, especially if that failure causes a shutdown of something as critical as a power station.
I can think of times where I have seen massive network spikes, usually caused by issues with routing on fairly non-trivial networks, or loops where mistakes have been made and policies have not been followed (lack of sleep or lack of patience), but comparing an advertising company's internal network at 3am, or a paper factory's network at midnight, to a nuclear power station is taking it a little far.
That would be fair if we were talking about a software failure after some sort of unforeseen environmental issue; it would even be OK if an auto plant stopped production because of an unforeseen fault. And whilst power plants should certainly fail safe, they should be robust enough that a situation where failure is the only option is extremely difficult to achieve. Whatever happened to redundancy?
I would suggest that this is hype to 1) keep terrorism at the top of everyone's agenda and make people feel unsafe - after all, that sells papers and grabs viewers (which in turn sells advertising); 2) deflect some of the negativity that this incident would produce (I wish that I could blame terrorists for my mistakes sometimes... "no, that project plan... I haven't got it, but I'm checking to see if my poor time management is caused by terrorism or simply my inability to organise my resources properly"); and 3) attract additional funding, since security risks presumably do, and surely it would be nice to get an extra few million in the next budget.
Honestly, this probably shows a component failure and some poor design - understandable, but unacceptable in this area. If, and I say "if" with considerable doubt, this turns out to be, or is reported as, an external event, then whoever enabled external network access to what appear to be critical systems within a nuclear power plant on the US mainland needs to be identified and punished, together with the contractors who built or maintained it, the managers or consultants that assessed and managed it, and the politicians who have responsibility for public safety. But as I said, it will probably turn out to be a simple component failure and some poor design.
Re: (Score:2)
sorry, i was going on the information another poster provided: it was not an external network -- that is, it didn't happen over their DSL line . .
a sensor evidently went haywire and started to dump a ton of data out on the internal sensor network. the way i imagine it, the metric shit ton of sensors in the plant are all networked in some fashion. one went bad, generating the analog of a DoS attack. the plant SHUT DOWN. this is good. this is better than chernobyl, eg. way better.
sure: redundancy, disc
Life at a power plant. (Score:2, Informative)
Firstly I would re-design that entire infrastructure and rid that power plant of incompetent IT people.
You need to find the root cause [slashdot.org]. You don't know it yet, so you don't really know what to do.
Chances are, the cause has been written up by the four or five systems engineering people in charge of the plant. They ARE competent, but they are never given the resources they need.
Why wasn't there any failover? Who knows.
There was a failover - they overrode the broken thing. Had the operators been g
Re: (Score:2)
This story smacks of a slow news cycle and nuclear fear mongering.
Political FUD (Score:4, Interesting)
What network technology were they using? (Score:5, Interesting)
Re:What network technology were they using? (Score:4, Insightful)
Ethernet isn't perfect, but it's the only realistic option. Managed properly, it can be very reliable. The biggest problem I see from this article is that there is a lack of regulation and testing of the equipment that goes into these plants. These poor TCP/IP stacks should never have gotten past the testing phase when it comes to a nuclear power plant.
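The kind of acceptance test being asked for is not exotic. Something like this sketch (device address, port, and probe are placeholders I made up) would catch a stack that gags on unexpected input:

    # Sketch of a robustness test: throw malformed traffic at the device's
    # stack and verify it still answers a health probe afterwards.
    # The address, port, and probe below are hypothetical placeholders.
    import random
    import socket

    DEVICE = ("192.0.2.10", 502)    # placeholder address/port for the unit under test

    def blast_garbage(rounds=1000):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for _ in range(rounds):
            payload = bytes(random.getrandbits(8) for _ in range(random.randint(0, 1500)))
            s.sendto(payload, DEVICE)
        s.close()

    def still_alive(timeout=2.0):
        probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        probe.settimeout(timeout)
        try:
            probe.sendto(b"PING", DEVICE)      # assumes the device echoes a probe
            probe.recvfrom(64)
            return True
        except socket.timeout:
            return False
        finally:
            probe.close()

    blast_garbage()
    print("device survived garbage:", still_alive())

If the health probe stops answering after a few thousand garbage datagrams, the stack doesn't belong anywhere near a reactor.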
Re: (Score:3, Insightful)
Re:What network technology were they using? (Score:4, Interesting)
Even stupider (Score:5, Insightful)
"What is happening in this marketplace is that vendors will build their own (network) stacks to make it cheaper," Peterson said. "And it works, but when (the device) gets anything that it didn't expect, it will gag." So you mean to tell me pretty much there is no enforcement for manufacturers to maintain compliance on their products even if those products are going into a nuclear *ANYTHING... Which on the worst case scenario could cause catastrophe, yet we have regulatory commissions on the flow of ketchup, regulatory commissions/directions/etc., on weight loss products, lipsticks, etc. (FDA), but this place is not concerned with nuclear plants. Sinful.
Re: (Score:3, Informative)
Brown's Ferry *AGAIN!?!??!* (Score:4, Informative)
At least their reactor failed to "off" this time...
Schwab
Re: (Score:3, Informative)
It didn't just "fail to off"; they manually shut it down. They followed procedures and placed it in a safe condition. No need to sensationalize it.
Re: (Score:2)
Even in a free market democracy, people are complacent, careless, greedy, dumb and just plain human.
important safety tip .. (Score:2)
Don't try to find sources of draft using inflammable foam and a candle
Don't route the backup system through the same conduits as the primary one
was Brown's Ferry *AGAIN!?!??!* (Score:4, AGhaaaaaa !!!, ohh God, make me a believer )
Wow! (Score:2)
Behold, the power of Slashdot!
a cat (Score:2, Funny)
Saturday is bought to you by the color Orange (Score:3, Interesting)
Is that a nice way of saying they were downloading pr0n?
> US House of Representatives Committee on Homeland Security called for the NRC to further investigate
Boss: "So we don't have the backups for the first two weeks in April"
Employee: "Yes Boss. They were obviously misplaced by terrorists"
When Homeland Security is done with that: my refrigerator door was left ajar last night. I think it was terrorists too. Think I'll phone this one in.
The way I think the conversation went (Score:5, Funny)
"It seems the problem was with the NC9828A chip"
"Oh? And what was the problem?"
"It melted, basically. It went bonkers."
"Ah, and then what happend?"
"Err... it caused the shutdown."
"But how?"
"Well, I presume the AH-982's got deluged with data, so they shut off."
"Ah, so it was some sort of data thing."
"Kind of, the failing chip would start sending data in the network t--"
"Hey, it's like a storm of data! Hah! I get it!"
"Umm, basically."
"Oh man. A data storm! I better tell the NRC"
"Ok, sure."
Later...
"Sir, I have the cause of the shutdown, it was caused what the tech guys here would call a data storm."
"A data storm? Wow. So your reactors got a bunch of bad datas, right?"
"Errr.. kind of, the microchips melted."
"Data can do that?"
"Yeah, it's like a storm on our, uh, logic networks. I guess that can melt the microchips"
"Uh oh. Maybe this storm came from outside the plant! One of those hacker attacks!"
"Hmmmm, the guy said it melted, but I suppo--"
"Oh crap I better inform Homeland Security!"
"Ok, sure."
Later still...
"Yeah, we had a data storm and it melted the reactor networks."
"How did this data storm happen?"
"I don't think they know yet, but it messed up big time."
"My God. Do you realize this could be Al Qaeda?!!"
"Could realize wha--"
"Al Qaeda! Terrorists. Internets terrorists."
"I don't know if the reactors are hooked up to the Interne--"
"Listen. Keep this quiet, but make sure you tell everyone you know. These reactors are not safe! No one is safe from the terror!"
"Well, it was a data storm. Can terrorists make data storms?"
"Yes. They caused your meltdown."
"No, no, the microchips melted down because of the storm. A meltdow--"
"In the terror business, there's more than one type of meltdown, you just let us handle this."
"Ok, sure."
/.ed Again (Score:2)
This is scary (Score:2)
At least it wasn't connected to the public internet. Can you imagine the havoc THAT would create? But I wonder why they're treating this as a criminal thing. Did someone modify a device?
Risks (Score:2)
Good news. (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
That works fine until it's the plant manager who insists on having a real-time process monitoring app running on his laptop and then being able to visit the goa
Re: (Score:2)
Yeah. But it's fun to think about. Although, in some ways it's too bad our society isn't run more along the lines of the Klingon Empire. I mean, a Klingon underling wouldn't bother grabbing a boneheaded supervisor by the neck and shaking him. He'd execute the bastard on the spot for incompetence and disloyalty to the Empire
Klingon power plants are run very securely, I understand.
Wait a cotton pickin' minute here... (Score:3, Interesting)
The article says that explicitly. "Internal network." The DHS is worried about outside agents penetrating the plant personnel, not someone with a laptop uploading a virus like Jeff Goldblum in "Independence Day."
If there *were* such a "data storm" attack, it would _have_ to be caused by an inside saboteur. The plant needs to focus on HUMAN security, not computer security. Either that or they need to reconsider a faulty design.
But can we try, just try, not to write completely hysterical baloney? Hysterical baloney is a trademark of "Homeland Security," and they might see fit to sue.
--
Toro
Network stack has too high priority (Score:3, Insightful)
Solution: Make the network interrupt handler threaded and prioritize it below the real-time application. Sure, that doesn't help the SCADA performance, but you have to make sure that the real-time application meets its deadlines no matter what is going on on the network. I simply don't buy that you can secure a network stretching over more than 1 meter against "data storms."
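The real fix lives in the RTOS/driver, but here's a rough user-space analogue of the same principle (all names and numbers invented): the network side feeds a bounded queue and sheds load when it's full, so the control task's timing never depends on how much junk arrives.

    # Rough user-space analogue of the parent's point. A sketch only; a real
    # fix belongs in the RTOS/driver, not in application code like this.
    import queue
    import threading
    import time

    rx_queue = queue.Queue(maxsize=64)     # bounded: backpressure turns into drops

    def network_handler():
        """Low-priority work: drain the wire, never block the control loop."""
        while True:
            frame = b"telemetry"           # stand-in for a received packet
            try:
                rx_queue.put_nowait(frame)
            except queue.Full:
                pass                       # data storm: shed load here, not in the control loop
            time.sleep(0.0001)             # simulate a very busy network

    def control_loop(cycles=50):
        """High-priority work: must hit its deadline every cycle regardless of traffic."""
        for _ in range(cycles):
            deadline = time.monotonic() + 0.01
            for _ in range(8):             # consume at most a fixed budget of frames
                try:
                    rx_queue.get_nowait()
                except queue.Empty:
                    break
            # ...the actual control work would go here...
            time.sleep(max(0.0, deadline - time.monotonic()))

    threading.Thread(target=network_handler, daemon=True).start()
    control_loop()
    print("control loop finished on schedule despite the flood")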
replace the Network stack .. (Score:2)
Network stack has too high priority (Score:1)
Here's what bothers me most about this... (Score:2)
In accord with current regulations, NRC staff decided against investigating the failure as a "cybersecurity incident" because 1) the failing system was a "non-safety" system rather than a "safety" system,
If failure of the component is dangerous enough to force a system shutdown due to lack of cooling, that IS a safety system. It's like saying there's no need for bracing the BOTTOM of the ladder, cuz it's
Data Storms Have Lots Of Causes (Score:3, Informative)
The weirdest I ever saw was a situation at a school where the entire network (built around high-end Cisco switches) crashed hard. It took 3 hours of troubleshooting and disconnecting various segments to finally pin down the cause. It was a little mini-switch that some teacher attached to the LAN that somehow had a meltdown and began spewing "valid" Ethernet packets with all kinds of random garbage source and destination MAC addresses, random payloads, and valid checksums. No hosts were attached to the mini-switch, so it had to be something in its microcontroller going haywire. This caused every switch to go nuts trying to maintain its forwarding tables ("show cpu" was at 100% utilization) and resulted in no traffic going anywhere. It even crossed VLAN boundaries since all the switches had "trunk" ports using tagged VLANs, so the garbage packets still made it through the entire LAN.
These things happen sometimes. Network gear is generally pretty robust, but it can still fail in weird ways.
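For anyone who hasn't watched a forwarding table get chewed up, here's a toy model of what random source MACs do to a switch (table size and frame counts invented):

    # Toy model: frames with random source MACs churn a switch's MAC table,
    # real hosts' entries get evicted, and traffic degrades to flooding.
    # Table size and frame counts are invented for illustration.
    import random

    TABLE_SIZE = 1024

    def run(garbage_frames):
        table = {}                       # MAC -> port, oldest entry evicted when full
        real_hosts = [f"host-{i}" for i in range(50)]
        flooded = 0
        for _ in range(10_000):
            # the broken mini-switch spews frames with random source MACs
            for _ in range(garbage_frames):
                fake_mac = random.getrandbits(48)
                if len(table) >= TABLE_SIZE:
                    table.pop(next(iter(table)))     # evict oldest learned entry
                table[fake_mac] = "port-7"
            # a real frame arrives for a known host; if its entry was evicted,
            # the switch has to flood the frame out every port
            dst = random.choice(real_hosts)
            if dst not in table:
                flooded += 1
            if len(table) >= TABLE_SIZE:
                table.pop(next(iter(table)))
            table[dst] = "port-1"
        return flooded

    print("floods with a healthy LAN:   ", run(garbage_frames=0))
    print("floods with a babbling switch:", run(garbage_frames=200))

Once real hosts' entries keep getting evicted, the switch floods nearly everything, which looks a lot like the meltdown described above.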
Re:No kidding (Score:5, Funny)
Sometimes such connections are sooo slow, it makes users cry. They don't call it onion routing for nothing, eh?
Re:The last thing we need (Score:4, Funny)