Security

Cyberattack On German Steel Factory Causes 'Massive Damage' 212

An anonymous reader writes: In a rare case of an online security breach causing real-world destruction, a German steel factory has been severely damaged after its networks were compromised. "The attack used spear phishing and sophisticated social engineering techniques to gain access to the factory's office networks, from which access to production networks was gained. ... After the system was compromised, individual components or even entire systems started to fail frequently. Due to these failures, one of the plant's blast furnaces could not be shut down in a controlled manner, which resulted in 'massive damage to plant,' the BSI said, describing the technical skills of the attacker as 'very advanced.'" The full report (PDF) is available in German.
  • yeah right (Score:5, Funny)

    by Anonymous Coward on Sunday December 21, 2014 @05:25AM (#48645753)

    "sophisticated social engineering techniques"

    So they got some pizza delivery before this all started.

  • by Archtech ( 159117 ) on Sunday December 21, 2014 @05:33AM (#48645767)

    About 20 years ago I used to lecture on the topic of computer security. Taking my cue from UK government experts whom I had met back in the 1980s, I used to point out that the only secure computer system is one that cannot be accessed by any human being. Indeed, I recall one expert who used to start his talks by picking up a brick and handing it round, before commenting, "That is our idea of a truly secure IT system. Admittedly it doesn't do very much, but no one is going to sabotage it or get secret information out of it".

    I still have my slides from the 1990s, and one of the points I always stressed while summing up was, "Black hats could do a LOT more harm than they have so far". To my mind, the question was why that hadn't happened. The obvious reason was motive: why would anyone make considerable efforts, and presumably put themselves at risk of justice or revenge, unless there was something important to gain?

    Stuxnet was the first highly visible case of large-scale industrial sabotage, and I think everyone agrees it was politically motivated - an attack by one state on another, and as such an act of war (or very close to one). This looks similar, and apparently used somewhat similar methods.

    The article tells us that "...hackers managed to access production networks..." The question is, why was this allowed? If "production networks" cannot be rendered totally secure, they should not exist. Moreover, if they do exist they should be wholly insulated from the Internet and the baleful influence of "social networks" and the people who use them.

    • by burni2 ( 1643061 ) on Sunday December 21, 2014 @06:15AM (#48645893)

      1.) "is the one that cannot be accessed by any human being"

      - virtual or physical -

      So the answer to what truly secure system (composed of humans, machines, or both) you have in mind is: none.
      You need people or machines to build things, and there you go again, you implement the human factor from the start.
      And your approach just points out the fact that nothing is 100% safe. That thought is as flat as it is true, but it does not offer any train of thought about which steps to undertake to at least increase security.

      2.) We will see more failures this big in the future as the buzzword "Industry 4.0" catches on. Given the push to interconnect each and everything, all your lamenting will not stop anyone from doing it.
      If you cannot stop or deflect a movement, at least try to alter the movement.

      3.) "Why was this allowed?"

      Because your typical ERP system (SAP and Oracle, to name the two big, frail twins) does exactly this. It interconnects production, accounting, document management; it can control your whole material workflow.
      All on the same system.
      Yes, this is a weakness: gain access to SAP accounts with acting power and you can make a factory start ordering and producing tons of bullshit.

      4.) "Black hats could do a LOT more harm than they have so far"
      Good lord, another one of those general thoughts.
      - suicide bombers certainly don't fear the death, so death penalty for suicide bombers is a bad idea.

      5.) The best approach in an insecure world is to start asking the "what can possibly go wrong", "how can we prevent the risk" and "how can we mitigate the consequences" questions.
      In engineering this is called an FMEA(1), and it works for computer security too, because it takes the human factor into account.

      (1) http://en.wikipedia.org/wiki/F... [wikipedia.org]
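      For the curious, the FMEA idea boils down to scoring each failure mode and ranking by Risk Priority Number (severity x occurrence x detection). A toy sketch in Python; the failure modes and 1-10 ratings are invented for illustration, not real plant data:

```python
# Toy FMEA-style scoring: Risk Priority Number (RPN) = severity * occurrence * detection.
# All failure modes and 1-10 ratings below are made-up illustrations.
failure_modes = [
    # (description, severity, occurrence, detection)
    ("office PC compromised via spear phishing", 7, 6, 4),
    ("production network reached from office LAN", 9, 4, 6),
    ("furnace controls issue unsafe setpoints", 10, 2, 7),
]

def rpn(severity, occurrence, detection):
    """Higher RPN means 'mitigate this one first'."""
    return severity * occurrence * detection

# Rank so mitigation effort goes to the worst failure modes first.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {desc}")
```

      The point is not the arithmetic but the discipline: you cannot rank a failure mode you never wrote down, which is exactly the "what can possibly go wrong" question above.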

      • No big red button? (Score:2, Interesting)

        by Hognoxious ( 631665 )

        Sure. But software shouldn't be able to make hardware damage itself.

        Also, designing something like a steelworks without some kind of hardware-level override is so stupid it borders on criminal.

        • by Shinobi ( 19308 ) on Sunday December 21, 2014 @07:23AM (#48646063)

          "Sure. But software shouldn't be able to make hardware damage itself.

          Also, designing something like a steelworks without some kind of hardware-level override is so stupid it borders on criminal."

          As long as software can make the hardware do something, it can make it damage itself.

          As for the damage, it was probably the emergency shutdown that caused it (i.e., what you incorrectly label a hardware-level override), since it does a direct quick stop, without following the proper, slower and safer shutdown procedures.

          • by GNious ( 953874 )

            As long as software can make the hardware do something, it can make it damage itself.

            So data-invariance is not an option on a hardware level?
            It should be possible to design hardware where critically dangerous input is filtered or rejected, such that most attempts to willfully bring it into a dangerous scenario will fail.
            Yes, emergency shutdowns should remain possible, though I'd question why that would be something controlled by a computer, and not a big red button that needs to be physically pressed somewhere in the office.

            • by Shinobi ( 19308 ) on Sunday December 21, 2014 @09:07AM (#48646329)

              Data invariance, even if you can somehow implement it properly on a hardware level, does not protect you if, for example, it's the execution pattern that is the attack method.

              As an example, rapid power cycling/power state changes due to a program being swiftly shunted between CPU-intensive and idle threads can cause power surges that damage the PSU, the motherboard or even the CPU (as voltage regulators and the like move onboard, they become ever more vulnerable to this), and for all intents and purposes the data input to the program will be fully valid and unchanged. Excessive head parking on a mechanical HD can cause the HD to become faulty. Frequent standby/active cycles on monitors can kill them fairly rapidly.

              As for the emergency shutdown, nowadays, with modern equipment, the big red button and the emergency shutdown button in the control program do the same thing: Send a signal to the correct circuit and halt all operation. In some heavy machinery that means just cutting all power, in others it disengages pneumatic valves and thus engaging mechanical brakes etc etc. It depends on what kind of machinery it is.

        • by burni2 ( 1643061 ) on Sunday December 21, 2014 @08:00AM (#48646157)

          blast furnace:

          You intermix iron ore and coke (not the drug! it's processed coal)
          and then you start an exothermic reaction. What you then do is process control: you blow in oxygen to react carbon to CO2 to a certain percentage, and when the steel is ready you poke a hole into the furnace and the molten steel pours out.

          This is a reaction that is ongoing.

          We are talking here about huge amounts of energy.

          A smaller example: ever been inside a wind turbine of the 1.5+ megawatt class during a test run at nominal power?

          Push the red button and you will realize what energy is - rollercoaster ride - and how long the rotor needs to come to a full stop.

          A bigger example: push the red button in a nuclear power plant. Yes, the control rods will react, but if you don't cool away the heat from radioactive decay, you will get a Fukushima.

          I hope you are not pro-nuke, because keeping that in mind (the red button that is never a 100% hardware guarantee), you would now have ruled operators of nuclear power plants so stupid it borders on criminal.

          Also, there were hardware-level overrides and they worked. However, if you leave the molten mass inside the furnace, it will solidify == damaged beyond repair.

          Which is what happened there. You then have to rebuild the furnace, and beforehand cut the wrecked furnace open around a many-ton lump of solidified steel (happy cutting).

        • The problem is, by making systems where software is the last line of defence against damage, you typically can make much more efficient systems. Note car engines that use variable valve timing can damage themselves (e.g. by opening the valve during combustion, and allowing exhaust/plasma to back flow into the injectors), but they're much much more efficient than engines with a cam rod.

          • Do you know at all how VVT works? There are 2 distinct types of VVT systems I have encountered and both use cam rods. One has different sets of lobes (the most I've seen is 3) for discrete, still hardware-limited, valve timing, while the other uses an adjustable gear at the end of the camshaft, allowing maybe 15-20 degrees of adjustment in total; still hardware-limited. The VVT systems I've seen have all been configured such that the earliest and latest physically possible timings were still well within saf
        • by mikael ( 484 )

          The problem is that blast furnaces aren't simply switched on and off, but have feedback software systems that adjust fuel feeds, cooling systems and exhaust extraction to achieve the desired temperature while minimizing fuel consumption, cooling and pollution. Much the same way as electronic car ignition. The operating temperature would have to be ramped up and down slowly to avoid any damage through thermal stress.

          It's the hardware overrides that would allow the cooling system to be reduced or switched off

        • by AK Marc ( 707885 )

          Sure. But software shouldn't be able to make hardware damage itself.

          So you want the third rule of robotics above the first two?

          Seriously, you should work at Airbus, but not Boeing. One of the fundamental differences between the companies is the order of the Three Laws (specifically #2 and #3). Airbus will guess what the pilot wants, then give it in a controlled manner. Boeing will let a pilot shake the controls until he damages the plane.

          We have software that lets the hardware damage itself, when it's trivial to do otherwise. And that's accepted in a higher-safety en

          • by mikael ( 484 )

            Unfortunately for Airbus, it didn't quite work out when an airshow decided to have an aircraft do a low fly-pass in front of the crowds. The combination of low altitude, low speed with flaps and landing gear lowered made the AI think that the pilots wanted the plane to land. So the flight control system cut the engine power in preparation for landing.

            • by AK Marc ( 707885 )
              The pilots also made some actual pilot errors. They underestimated the response of the engines for throttle-up. The plane would never "force" a landing. Go-around is common, and would be allowed for.

              In that case, the pilots were "landing" 50' below ground (as they were executing an actual failed approach at ground level, and aborting the landing too late), 50' below ground because they didn't account for the trees. They should have simulated landing 50' above, not below ground, but that wouldn't have
            • by TarPitt ( 217247 )

              I have this mental image of Clippy popping up on the flight control monitor saying, "It looks like you are trying to land. Do you need help?"

        • by Shoten ( 260439 ) on Sunday December 21, 2014 @01:23PM (#48647453)

          Sure. But software shouldn't be able to make hardware damage itself.

          Also, designing something like a steelworks without some kind of hardware-level override is so stupid it borders on criminal.

          This is like saying "Sure, but cars shouldn't have anything that propels them forward... that's how car crashes happen."

          The sole and entire point of control systems (aka SCADA, DCS, or ICS) is to make it possible for software to control hardware. And it's impossible to make *anything* that can't be broken or cause damage if it's abused. When you factor in things like blast furnaces, substations, or other real-time applications that involve massive amounts of energy (kinetic, electrical, thermal or otherwise), you're harnessing one hell of a big thing, and that means careful balances and lots of risk. You can't have a situation where there's thousands of degrees of heat and gigantic crucibles of molten steel and yet have it impossible for something to be done wrong.

          It always makes me crazy when assholes (yes, that's my word for a novice who pontificates about the "incompetence" of actual professionals without citing anything concrete or meaningful) who don't have any experience whatsoever with control systems put forth their idolized version of reality that somehow means that everything can be simple and as safe as a Fisher-Price toy, despite the fact that these environments have never been foolproof in all of human history. Trains crash, pressure vessels explode, chemicals leak, boilers beer-can, transformers flash...it's always been that way, and always will be. Control systems make them less likely to do so for accidental reasons, but also allow an attacker to force it to happen for deliberate ones. That's the trade-off, and to this day it's still a trade-off that's had a positive outcome. It makes no more sense to back out these systems than it did for banking to go back to using adding machines, just because there were cyber security incidents early on in the financial sector. The next step forward is better security for these environments, which is in the process of happening as we speak.

        • by sjames ( 1099 )

          As I understand it, the damage was indirect. The software was left in such a state that the furnace was at the time undamaged but could not be properly shut down. That left only the emergency shutdown procedure which was the cause of the damage.

          The real failure was not being able to physically operate the controls to at least manage a clean shutdown.

      • "That thought is as flat as it is true, but it does not offer any train of thought about which steps to undertake to at least increase security."

        Precisely! The purpose of such statements is to focus the listener's mind on the highly unwelcome (and perhaps unfamiliar) idea that security is utterly antithetical to everything else we seek in a computer system.

        Good security usually means lower performance, slower response time, greater cost, far less user-friendliness, and very noticeably less convenience in

        • by burni2 ( 1643061 )

          In his generality he simply missed the opportunity to project his view and understanding of the problem onto his audience, which, unlike some /.ers, might back then not have been aware of certain threats and criminal intents. Interpreting his statement, he did that again.

          This is what I criticised. To say it with a metaphor:
          Sometimes a statement is like a fart in the air: it stinks, but when it's gone nobody cares.

      • 3.) "Why was this allowed?"

        Because your typical ERP system (SAP and Oracle, to name the two big, frail twins) does exactly this. It interconnects production, accounting, document management; it can control your whole material workflow.
        All on the same system.
        Yes, this is a weakness

        Yes, it's a weakness - but it's also the whole point of having an integrated system in the first place. The armchair sysadmins here on Slashdot keep missing that point... these systems exist for a reason.

    • by WoOS ( 28173 ) on Sunday December 21, 2014 @06:24AM (#48645905)

      The article tells us that "...hackers managed to access production networks..." The question is, why was this allowed?

      When I was in university we wrote an optimizer in "Operations Research [wikipedia.org]" for a steel mill as an exercise, which determined optimum cutting lengths of steel 'bars' based on customer orders.

      Orders probably arrive in the office network. I can well understand that people don't want to walk with a USB stick (if that would survive the environment at all) from their office to the plant to feed instructions into the industrial control units. So probably some network connection was introduced and thought to be sufficiently secured. And then the Windows on the "safe" side was never updated because it couldn't connect to the internet anyway. Fast-forward 10 years and you have a Windows full of completely unimaginable holes (which are easy to exploit because Windows is the same everywhere) which is indirectly accessible from the internet.

      • by JaredOfEuropa ( 526365 ) on Sunday December 21, 2014 @06:56AM (#48645999) Journal
        Sure, information needs to be passed back and forth between the office and the plant. The first step in security is to assume that your office network is the same as "the Internet": you don't know what's on there, it is full of malware and hackers, and they are actively out to try and get you. Assume your office network fully compromised, and secure the production network accordingly.
        • by dkf ( 304284 )

          The first step in security is to assume that your office network is the same as "the Internet": you don't know what's on there, it is full of malware and hackers, and they are actively out to try and get you.

          Unfortunately, the office network is also definitely full of managers, and prizing a bit more convenience at the cost of "a little" more risk is a classic thing that managers order. They are also usually able to find people who will carry out the orders.

      • by amorsen ( 7485 )

        It is difficult to think of something LESS secure than plugging USB sticks into production equipment. I will take ethernet over that any time. At least the ethernet controller and driver is likely to be fairly secure, unlike the USB host and driver.

        • by Entrope ( 68843 )

          On the one hand, you have to worry about security holes in the USB driver and file system.

          On the other hand, you have to worry about security holes in every piece of software that talks to the network.

          If I really wanted to reduce exposure for a network, I would probably use single-session CDs to cross the air gap, and make sure to pack any extra space with random data.

          • by itzly ( 3699663 )
            An air gap is not very useful if it needs to be crossed on a regular basis. If you write your single session CDs on a compromised network, the instructions on the CD itself can also be compromised. Also, when you make it too inconvenient for the operators to do their jobs, they'll undermine your security plan by installing a hidden access point somewhere.
            • by Entrope ( 68843 )

              The point of an air gap is to make data transfers much more controlled. Some can be crossed regularly (with appropriate control), and some should not. One should only adopt any security measure after a cost-benefit analysis. The depth and rigor of that analysis should be determined by the expected costs (ongoing/operational) and potential costs (from a successful exploit).

              Thus, I said "If I really wanted to reduce exposure", not "Everybody should do this to reduce exposure". If the productivity costs ar

              • A really secure air gap that would work with continuous data streams could be built somewhat like this. 1. Define a simple protocol for the instructions. In the case of this steel mill it should be "produce x amount of class y steel". That leaves limited ways of compromising the system via the protocol, since there are no detailed instructions to fuck up the mill as in the article. 2. Air gap it by having the computer connected to the internet print the order out on paper. Then the operator moves that pape
                • by Entrope ( 68843 )

                  Sure... if.

                  1) If you can define the protocol to be simple enough, and
                  2) if you can be sure that only the intended application will process the data stream on the secure side, and
                  3) if you actually test that application enough to be confident it is secure, and
                  4) if you can ensure that sensitive information will not (improperly) leak back down the other direction, and
                  5) if you use it often enough to pay for that development cost, and
                  6) if you can resist the pressure to add features or "generality" to the prot
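                    Condition 1, a protocol "simple enough", might look like this minimal sketch; the one-line grammar, the steel grades and the tonnage limit are all invented for illustration:

```python
import re

# Hypothetical one-line order format for the secure side:
#   "PRODUCE <tonnes> CLASS <grade>"
# Anything that does not match the fixed grammar is rejected outright.
ORDER_RE = re.compile(r"^PRODUCE (\d{1,4}) CLASS ([A-Z]\d{2})$")
KNOWN_GRADES = {"S35", "S55", "A91"}  # invented whitelist

def parse_order(line: str):
    """Return (tonnes, grade) for a well-formed order, else raise ValueError."""
    m = ORDER_RE.match(line.strip())
    if not m:
        raise ValueError("malformed order")
    tonnes, grade = int(m.group(1)), m.group(2)
    if grade not in KNOWN_GRADES:
        raise ValueError("unknown steel grade")
    if not 1 <= tonnes <= 5000:
        raise ValueError("tonnage out of range")
    return tonnes, grade
```

                    Because the grammar carries no file names, addresses or control parameters, a compromised office side can at worst order a wrong but valid quantity and grade; it cannot smuggle detailed control instructions through.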

                  • by Kjella ( 173770 )

                    For your simplified example, it is probably cheaper -- and just as secure -- to have an operator enter the dozen or so keystrokes to order "produce x amount of class y steel" than to design, build, install and support a more automated method. Human involvement has the added bonus of (nominally) intelligent oversight of the intended behavior for the day.

                    Do you have any idea what the error rate for manual data entry is? Typically about 0.5% of the entries will be wrong. Retyping information is a very error prone process.

                    • by Entrope ( 68843 )

                      Do you have any idea what the error rate for manual data entry is? Typically about 0.5% of the entries will be wrong. Retyping information is a very error prone process.

                      Do you have any idea that there are known good practices for checking entered data before committing to it? And that most people would want to apply this kind of check before kicking off a production run, of just about anything, regardless of how the order was sent to the system?

                      What is it about this topic that makes people forget basic eng
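                      One of those known good practices is a check digit on manually keyed codes, so a single-digit typo is rejected before anything is committed. As a sketch, the Luhn scheme (the same one payment cards use); applying it to order codes here is just an illustration:

```python
def luhn_check_digit(digits: str) -> int:
    """Compute the Luhn check digit for a string of decimal digits."""
    total = 0
    # Walk right to left; double every second digit, folding results > 9
    # back into a single digit (16 -> 7, etc.).
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def is_valid(code: str) -> bool:
    """The last digit of `code` must equal the check digit of the rest."""
    return luhn_check_digit(code[:-1]) == int(code[-1])
```

                      Luhn catches every single-digit error and most adjacent transpositions, which covers the bulk of that 0.5%.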

              • Except things that we regularly bring to oil rigs and plug into the 'secure' side of the network:
                - .xlsx and .docx files containing installation instructions and checklists
                - .pdf files with 'red markups' of changed logic
                - .exe files fetched from manufacturer websites with firmware upgrades
                - a ton of files in proprietary file formats we have no actual way to check the contents of, other than trusting the software which created the files.

                We essentially have to trust that McAfee and MS endpoint protection will keep

          • by AK Marc ( 707885 ) on Sunday December 21, 2014 @01:05PM (#48647357)
            What people fail to account for is someone willing to spend $1B to break a $1M machine. This type of insanity is ignored. But, if someone did want to break your toy, you couldn't stop them.

            Step 1, they buy your $1M machine (duplicate from the manufacturer). They use it. They find the USB port. They determine the exact signature sent by it.

            Step 2. They make USB drives with firmware that looks for that signature and serves different drivers if detected. So the USB drive will serve good drivers and work properly when put into a computer to load the files on. But when you put it in the industrial machine, it will not share the files, but serve up the custom-built virus.

            Step 3. Go to the plant you want to break as a visitor. Drop 10 of the USB drives (all in different colors, styles and sizes, so nobody thinks they are 10 of the same thing). Someone will grab one from the lost and found when needed. Drop a few in the parking lot. If you are really spending $1B, then sell them to the workers at a good deal, since anyone using USB for a critical function will be buying USB drives often. Sell them in the stores near where the workers live.

            Then wait. Someone will plug your trojan horse into the right gear eventually. Unless they manufacture their own USB drives, they will be vulnerable to this attack.

            Security only exists to deter. It can never be both secure and usable.
            • by Entrope ( 68843 )

              If you have an air-gapped system, you don't let people plug either random USB devices or random Ethernet devices into it. You help enforce this by disabling USB ports, MAC-locking switch or router ports, making it clear that only specific authorized people can import data, and making sure those authorized few use hygienic practices. It's IT security, not brain surgery.

              • by AK Marc ( 707885 )
                So social engineer someone to place a compromised single-session CD in the unsecure network. Again, you are thinking small. I can think of hundreds of ways to breach a "single session CD" security. You can't make security that can't be breached. You just hope to make it harder to get in than the value of getting in.
              • With sufficiently 'annoying' security practices, people stop following them.

                We were issued password-protected USB sticks for secure use at work, and a month later we got ones without passwords. Why?
                People found the encrypted and protected sticks "too cumbersome" and just went out and bought cheap 16 gig sticks for themselves....

                I bet the procedures will not be properly followed until one of the oil rigs gets taken down. It pains me to know the issues and have zero ways to affect it....

    • by oodaloop ( 1229816 ) on Sunday December 21, 2014 @06:48AM (#48645957)

      I still have my slides from the 1990s

      How much clip art was there?

    • by amorsen ( 7485 )

      Moreover, if they do exist they should be wholly insulated from the Internet

      Systems which are insulated from the Internet rarely get security updates and security reviews often miss them. Yet all it takes is a compromised laptop on the wrong network or a USB stick inserted into the wrong machine, and suddenly the whole "secure" network is up for the taking.

      Critical systems should be designed to function despite the FSB, Mossad, and the NSA all having direct access to every LAN. Alas, that is practically impossible to achieve today; industrial systems and management functions do no

    • by Bob9113 ( 14996 )

      If "production networks" cannot be rendered totally secure, they should not exist. Moreover, if they do exist they should be wholly insulated from the Internet

      There's always a connection to the Internet. Sometimes it is sneakernet, sometimes it uses photonic information delivery to a bio-ocular scanning device, which uses cranial data storage and processing, and meatfingers to transmit the data through an array of buttons commonly called a "keyboard"; but there is always a connection. Hacking airgapped netwo

    • by TheCarp ( 96830 )

      Well, hindsight is always 20/20. Few people look into securing their houses who haven't been robbed or known someone who was. Nobody benefits from this sort of attack; like you say, it's a motive issue. Why does the production network need so much protection? Up until now it hasn't. There was nothing of any value there for anyone... only of theoretical value.

      The only people who carry out this sort of attack are the ones who work for armies because they don't have to worry about personal reprisal and they are

    • by Bengie ( 1121981 )
      Perfect is the enemy of good. It's better to design your security around best practices and have recovery modes. My immune system doesn't stop me from getting sick all of the time, but it does a good job recovering. That should be the goal.
    • by hey! ( 33014 )

      You can turn that question around. Given the manifest possibility of such acts, why haven't more organizations taken steps to prevent them?

      We keep hearing from the companies attacked and the press that these attacks are "sophisticated", but this attack started with a simple spear phishing attack. People use "sophisticated" to mean "more trouble than we were prepared for."

      Comparisons to Stuxnet seem overblown and (in some cases) self-serving. Stuxnet was designed to undermine systems the perpetrator had n

      • People use "sophisticated" to mean "more trouble than we were prepared for."

        Well, it's partly that and part face-saving spin. No one wants to admit they were duped by a simple attack. Only a fool would fall for something like that.

    • by AK Marc ( 707885 )

      Moreover, if they do exist they should be wholly insulated from the Internet and the baleful influence of "social networks" and the people who use them.

      And even if they are, they are still vulnerable. Air gap doesn't work. Security through obscurity does. Especially when "obscurity" means "renders unusable".

  • What, like, extra-lying? Doubleplusgood lying? I don't get it. There is only one way to not tell the truth.

    • by Entrope ( 68843 )

      There are techniques like "Hello my name is Solicitor Darren White, my client has just deceased and left you a sum of $1,000,000,000 (ONE BEEELLION DOLLARS)...". There are also techniques like "Registration is now open for [industry-relevant convention], please visit [malware-infected site] to sign up so you can keep up with new developments." Beyond that are very individualized attempts to gain the target's confidence, perhaps involving apparently independent contacts -- persona A contacts the target ove

  • by Anonymous Coward

    Easy - ransom.

    Now they can point to this and say 'you are next - unless you pay'

    The one thing driving hacking now is monetising hacks - from cryptoware to bigger things.

    • I have to admit, it could hardly come at a better time. Budget talks are due.

      (this is me doing my happy dance)

  • by thegarbz ( 1787294 ) on Sunday December 21, 2014 @05:45AM (#48645807)

    Ok, everyone is going to leap into the whole world of control systems, cybersecurity and whatnot, but I have a far deeper question.

    What kind of a plant is designed in a way that a full failure of its control system would result in being unable to shut down in a controlled manner? Where are the safety instrumented systems that can shut down processes at the push of a button? Where are the manual overrides? Where is the big-arse power switch, and if that can't shut down the plant safely, then where is the system that drops the plant to a safe state in the event of loss of power?

    This scenario to me sounds like cybersecurity was the least of their problems.

    • by Shimbo ( 100005 ) on Sunday December 21, 2014 @06:05AM (#48645863)

      Uncontrolled is not necessarily the same as unsafe. If you pull the power to a steel plant, you have steel set in all the wrong places, and it will be the devil's own job to return the plant to working order.

    • by AmiMoJo ( 196126 ) *

      You have to differentiate between a safe but damaging shutdown, where there is no risk to human life, and an unsafe shutdown. To use a car analogy: parts of the bodywork are designed to fail in a way that destroys them but keeps the occupants of the car safe. Industrial systems are often designed on the same principles.

      Moreover, it is very difficult to design any kind of complex machine that can never fail in a way that damages it. Even if it can be done, often it doesn't make economic sense to since th

      • by Anonymous Coward

        Safety includes property as well as people.

        When my employer was designing a land mine detector for the US government (which used a partially automatic hydraulic mount), we were explicitly required to consider and address the risks of damage to people, the system, and third-party objects/property in our safety analyses. Even in case of system faults, it was crystal clear that we were expected to avoid, or failing that minimize, collateral damage.

        Of course, that didn't stop drivers from using the system to p

    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday December 21, 2014 @07:29AM (#48646081) Homepage Journal

      What kind of a plant is designed in a way that a full failure of its control system leaves it unable to shut down in a controlled manner?

      Pretty much all of them. At best, you can lose a batch of something if the process fails in the middle. If Sunsweet loses power in the middle of cooking a batch of fruit paste, the batch not only fails and has to be trashed but cleaning the system is far more difficult than if the batch succeeds. At the point where factories become complex enough to need digital automation, you cannot reasonably create a failsafe mechanism which will prevent an error from losing a batch. The best you can hope for in some situations, probably most, is to create mechanical interlocks which will prevent immediately catastrophic combinations of inputs and outputs.

    • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Sunday December 21, 2014 @07:30AM (#48646083)

      That is pretty much how industry works. There is a right way to shut down a plant, and it involves a lot of things done in the right order. You can do an emergency shut down, and that will not kill anyone, but you will at minimum have to throw a lot of the stuff away that was going through the plant at the time.

      Steel works are about a worst-case example of this. Lose power at the wrong time and you have no-longer-melted steel stuck in all the wrong places with no way to remove it. Removing this risk is impossible.

    • by 140Mandak262Jamuna ( 970587 ) on Sunday December 21, 2014 @08:39AM (#48646261) Journal

      Where is the big-arse power switch?

      It is a bloody blast furnace. They can hold anywhere between 20 and 120 tons of molten iron, and they are designed to hold that much liquid metal continuously for five to ten years. They keep adding raw materials and keep pouring batches of it out, but the furnace is always 50% to 100% full of liquid metal. Once in ten years, they drain it, essentially dismantle the lining of the furnace, and relay the refractory bricks - typically a three to six month process. I don't know the details, but I am sure they have a safety pit lined with refractory bricks to drain the furnace in an emergency, like an earthquake, flood or factory fire. It is possible that process was triggered in this instance.

      • For three hundred years people were able to run them furnaces without the aid of computers just fine. But after the 'puter takes over, you can't do anything without it, even if the damn thing goes south... I'd say it's not a very good design.
        • by Mal-2 ( 675116 )

          For three hundred years people were able to run them furnaces without the aid of computers just fine. But after the 'puter takes over, you can't do anything without it, even if the damn thing goes south... I'd say it's not a very good design.

          If by "just fine" you mean having a small fraction of the throughput of the modern machinery. The automated systems can be (and thus are) run at damn near peak capacity at all times, which means that when they do fail, it will inevitably be at the worst possible time -- because it's always the worst possible time. The trick lies in determining whether this increased cost of failure is offset by the increase in production. From the widespread adoption of such processes worldwide, it would appear the answer i

  • by Opportunist ( 166417 ) on Sunday December 21, 2014 @05:57AM (#48645839)

    I'd rather not call the average attack "very advanced". I'd rather call the average security situation in the average company "very crappy".

    And I have little reason to assume this case is any different.

  • by Anonymous Coward

    The problem is, these companies seem to barely afford one system, let alone a backup system. They don't do the primary right, so who should expect a good backup plan? Look at Sony for example: we find open emails exposing primary passwords and users of their main system. It's like handing a thief the key to your house. When it comes to this German plant, what appears to have happened was no means to take the furnace control offline and manually shut it down. This is a dangerous decision that was probabl

  • by Bomarc ( 306716 ) on Sunday December 21, 2014 @06:09AM (#48645873) Homepage
    I read about this type of issue time after time.
    Why are such critical systems connected to the internet... and further, why are they (these critical systems) allowed to see "foreign" websites?
    Start with this story: why are critical systems allowed to be on the same network as email? They should be physically separated - and never see the light of the www. Move on to Target, Home Depot et al.: why do their critical systems see anything (everything) on the www? At BEST the only equipment these computers should be seeing is the ONE system they need to communicate with to transact their business.
    Take it one step further: why do banks - or email providers (Yahoo, Hotmail, Gmail) - NOT allow me to block access from other countries (and/or identify which country I'm visiting from)?
    Yes, I know that attackers can use 'other systems' to attack (right now someone from IP 185.14.30.79 has been using such an attack against my web server for a couple of weeks: it's getting really annoying), but such attacks can also be watched for and guarded against.
    Leaving the barn door open (by connecting critical systems to the www) for such attacks seems very short-sighted.
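    The "only the ONE system" rule above is essentially a default-deny egress allowlist. A toy Python sketch of the idea (the peer name `erp.internal.example` and its port are invented for illustration; a real plant would enforce this at the firewall/VLAN layer, not in application code):

```python
# Default-deny egress: a control-system host may talk only to the single
# peer it needs for business, and to nothing else on the www.
# (Peer names and port are invented for illustration.)
ALLOWED_PEERS = {("erp.internal.example", 8443)}

def egress_permitted(host: str, port: int) -> bool:
    """Allow traffic only to explicitly listed peers; deny everything else."""
    return (host, port) in ALLOWED_PEERS

# The one business system is reachable; email and the open web are not.
assert egress_permitted("erp.internal.example", 8443)
assert not egress_permitted("mail.example.com", 25)
assert not egress_permitted("update.vendor.example", 443)
```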
    • by burni2 ( 1643061 )

      Because, like it or not, "modern" production works - or at least is meant to work - with minimal human interaction.

      The general wish is that you can do Enterprise Resource Planning(1) (SAP R/3 & Oracle, for example), so that you can model your whole value-added chain in such a system.

      These ERPs can also do process simulation with alteration of certain factors, which helps the "gold collars" make choices not solely based on their gut feeling.

      - yes, many times these models are far from reality, and SAP

    • by ljw1004 ( 764174 )

      Why are critical systems allowed to be on the same network as email?

      Email from operations to the shop floor: "Hey Klaus, we've determined that for the following job we need parameters set at P=123.79 and Q=119.11." Klaus prints it out from his email-connected computer, picks up the printout, walks across to the control computers, and starts typing in the parameters from the printout. Unfortunately he makes a typo that causes the entire batch to be not quite up to spec.

      Solution: come up with a way for the parameters to be taken precisely from email into production, without t

      • You can have electronic communication on an isolated network, and there are plenty of ways to input data accurately with error checking. Add a CRC to the input: if the checksums don't match, find the error(s). Or use dual/triple/quad entry, so input is only accepted if the fields match - like when you're creating a password for a new account. Or print a QR code and scan it on the isolated system; you can pack a lot of data and error correction into QR codes.

        Of course, this all assumes that the input is legitimate.
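        A minimal sketch of the CRC idea above, using the parameters from the Klaus example (the `#`-separated printout format is invented for illustration): the office side appends a CRC32 to the parameter string before printing, and the control-side entry screen recomputes it, so a transcription typo anywhere in the entry is rejected.

```python
import zlib

def tag_with_checksum(params: str) -> str:
    """Append a CRC32 (hex) to the parameter string before it is printed."""
    return f"{params}#{zlib.crc32(params.encode()):08x}"

def verify_entry(entry: str) -> str:
    """Recompute the CRC over the typed-in entry; reject any mismatch."""
    params, _, crc_hex = entry.rpartition("#")
    if zlib.crc32(params.encode()) != int(crc_hex, 16):
        raise ValueError("checksum mismatch - re-enter the parameters")
    return params

tagged = tag_with_checksum("P=123.79;Q=119.11")
assert verify_entry(tagged) == "P=123.79;Q=119.11"

# A transcription typo (119.11 -> 191.11) now fails verification:
try:
    verify_entry(tagged.replace("119.11", "191.11"))
    raise AssertionError("typo slipped through")
except ValueError:
    pass
```

        Dual entry and QR codes attack the same problem from different angles: either the data carries its own redundancy, or the operator supplies the redundancy by typing twice.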

        • Error checking won't catch parameter P=11.23 vs. P=11.32.

          A QR code isn't a bad idea, though.

        • by Shinobi ( 19308 )

          Actually, bar codes and QR codes are used in some industrial systems to input orders, for larger batch jobs. The problem is when you are in need of continuous feedback etc, or running lots of small custom jobs.

    • Why do banks ... NOT allow me to block access from other countries (and/or identify which country I'm visiting)?

      A: You need to change banks.

      My online banking allows me to block the use of my card to make in-store purchases or ATM withdrawals within and/or outside the country and/or EU. I can also enable or disable the use of my card for online purchases. I can also enable or disable any use of the card for other than logging into online banking from within the country--that last item takes a call to the bank. (Not sure whether not being able to lock yourself out unless you're overseas is a good or bad thing.) I can a

    • Why are critical systems allowed to be on the same network as email?

      Because manual entry is likely to produce things like "their" spelled "there" - which kinda-sorta looks right, but isn't.

  • English translation (Score:5, Informative)

    by WoOS ( 28173 ) on Sunday December 21, 2014 @06:11AM (#48645883)

    Translation to English to the best of my abilities:

    3.3 Incidents in private enterprises
    In contrast to governmental offices, there is as yet no duty for private companies to report grave security incidents to the BSI.
    [.... ]
    3.3.1 APT attacks on plants in Germany
    Issue
    Targeted attack on a steel plant in Germany
    Method
    Using spear phishing and advanced social engineering, the attackers gained initial access to the office network of the plant. From there they gradually penetrated into the production networks.
    Damage
    Failures of individual control units or entire facilities occurred with increasing frequency. The failures prevented the controlled shutdown of one blast furnace and brought it into an undefined state. As a result, the facility sustained heavy damage.
    Targets
    Operators of industrial plants
    Technical capabilities
    The attackers showed very advanced technical capabilities. Several different internal systems, up through industrial components, were compromised. The attackers' know-how covered not only IT security very thoroughly but also included detailed technical knowledge of the industrial control units and production processes in use.

  • I can understand attacking a plant in the US, but Europeans sell anything to anyone with the cash (and then bitch at us for being hypocrites).

    Russians, maybe, since Merkel wanted to stay tough on sanctions?

    • by burni2 ( 1643061 ) on Sunday December 21, 2014 @06:56AM (#48645997)

      Your numbers are nonexistent:

      Compare steel production numbers for Germany & the U.S. to, for example, China: the US ranks No. 3 and Germany No. 7, but they play in the same league. (1)

      Also, if you take a look at this map (2), you will see that China, the US and Germany play in the same league in total exported goods,

      according to the table in (3), which is based on the data in (4):

      1.) China - 1,898,600
      2.) US - 1,480,646
      3.) Germany - 1,473,889

      Conclusion:
      IRONY_ON
      Yeah, it's totally transparent to me, Germany really doesn't sell anything!
      IRONY_OFF

      Germany does export many things, however not much in low-level goods like raw steel.

      Further conclusion: divide the export numbers by the population and you will see the efficiency gap.

      1.) China - 1,366,040,000
      2.) USA - 317,238,626
      3.) Germany - 80,760,000

      (1) http://en.wikipedia.org/wiki/L... [wikipedia.org]

      (2) http://de.wikipedia.org/wiki/D... [wikipedia.org]

      (3) http://de.wikipedia.org/wiki/W... [wikipedia.org]

      (4) http://stat.wto.org/Statistica... [wto.org]

    • I can understand attacking a plant in the US, but Europeans sell anything to anyone with the cash (and then bitch at us for being hypocrites).

      No, they don't. There are currently EU trade sanctions in place against a whole lot of countries: see here. [europa.eu] Restriction of goods seems to be mostly arms, but the list on North Korea is pretty extensive, although it apparently still doesn't include raw steel.

    • by u38cg ( 607297 )
      Russian non-state hackers, I would posit; I can't see the point of burning state resources on this. And I can't see anyone else with the motivation plus capability. The only other remote possibility is some sort of false flag operation to demonstrate the need for more resources from some Western agency.
  • Looks like the hackers did hit the weak spot.

  • by WoOS ( 28173 ) on Sunday December 21, 2014 @06:48AM (#48645953)

    Googling for "steel furnance shutdown" [google.com] finds more reports of unexpected shutdowns this year:
    two in Ashland, Ky [daytondailynews.com], one or two somewhere in Indiana [nwitimes.com], and one in Bhopal, India [indiatimes.com]. Note that they all seem to have occurred in June/July.

    Maybe some competitor trying to up his margin by reducing supply?

    • by burni2 ( 1643061 )

      Interesting thought.

      But there is another explanation at hand: India has a bad grid, plagued with outages and "variable" frequency, so big factories have their own power plants. These can fail too, and they also have cooling requirements which can be difficult to satisfy during June/July in India. Also, during June & July the normal grid is under heavy load from air conditioners.

      A grid fault sometimes has the property of being able to affect such backup units too. The switch-over from grid to island operation i

  • 2.2 Angriffsmittel und -methoden [attack tools and methods] 15
    2.2.1 Spam 15
    2.2.2 Schadprogramme [malware] 16
    2.2.3 Drive-by-Exploits und Exploit-Kits 17
    2.2.4 Botnetze [botnets] 18
    2.2.5 Social Engineering 19
    2.2.6 Identitätsdiebstahl [identity theft] 20
    2.2.7 Denial of Service 20
    2.2.8 Advanced Persistent Threats (APT) 21
    2.2.9 Nachrichtendienstliche Cyber-Angriffe [intelligence-service cyber attacks] 22

    I can understand Spam, but Drive-by-Exploits? Social Engineering? Denial

  • Or maybe they should check who got fired in the last few months... or was passed over for a promotion.

  • If companies want their business insured, perhaps the insurance companies can make having an 'airgap' a requirement of coverage.
