Security / Math / Hardware

Cryptography Expert Sounds Alarm At Possible Math Hack

netbuzz writes "First we learn from Bruce Schneier that the NSA may have left itself a secret back door in an officially sanctioned cryptographic random-number generator. Now Adi Shamir is warning that a math error unknown to chip makers but discovered by a tech-savvy terrorist could lead to serious consequences, too. Remember the Intel blunder of 1996? 'Mr. Shamir wrote that if an intelligence organization discovered a math error in a widely used chip, then security software on a PC with that chip could be "trivially broken with a single chosen message." Executing the attack would require only knowledge of the math flaw and the ability to send a "poisoned" encrypted message to a protected computer, he wrote. It would then be possible to compute the value of the secret key used by the targeted system.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • The NSA (Score:5, Insightful)

    by proudfoot ( 1096177 ) on Sunday November 18, 2007 @10:45PM (#21402787)
    The problem with backdoors is that no one can guarantee who will use them. While a backdoor allows for (possibly) justified surveillance by our government, it also allows for surveillance by others.

    The United States, or the NSA, doesn't have all the world's best cryptographers. Russia, China, and other nations have excellent skill in these endeavors too. Ironically, by trying to protect the nation, the NSA runs the risk of opening us up to foreign espionage.
    • Re: (Score:2, Insightful)

      by hax0r_this ( 1073148 )
      Which is why I, for one, doubt that the back door was intentional. The approval that NSA gives is primarily for use by the US government itself, and most of the obstacles that NSA faces in spying on our own government are bureaucratic ones, not technical ones.
      • Which is why I, for one, doubt that the back door was intentional. The approval that NSA gives is primarily for use by the US government itself, and most of the obstacles that NSA faces in spying on our own government are bureaucratic ones, not technical ones.
        I agree, for what it's worth (not much, but we're mostly all armchair generals here, why not join in the fun?).

        The flaw seems too obvious to really have been something illicit. If it was an attempt at a backdoor, it was pretty stupid. And it was a weird/improbable way to create a backdoor -- it was a PRNG, not really a cryptographic function per se, and while knowing its output could help you break a system, it wouldn't guarantee it. The people at the NSA had to know it would be combed over.

        But the fact that it seems to be incompetence rather than malice doesn't make me feel a whole lot better. There are still a bunch of secret-algorithm ciphers [wikipedia.org] around and in use (and which the government, in its infinite wisdom, treats as more secure than the openly-reviewed ones), that the NSA is basically the only organization that has any access to. If they could miss such a trivial flaw in a PRNG that they knew was going to go out for public scrutiny, what could they have let slip by in a cryptographic function that was supposed to be a state secret?
        • by igb ( 28052 ) on Monday November 19, 2007 @08:01AM (#21405905)

          There are still a bunch of secret-algorithm ciphers around and in use (and which the government, in its infinite wisdom, treats as more secure than the openly-reviewed ones),
          The breadth and depth of cryptographic skill, experience, and knowledge behind the wire at Cheltenham and Fort Meade is orders of magnitude greater than that outside. The internal review process is actually of far higher quality than the external one. This isn't like software, where even Microsoft doesn't employ a measurable fraction of the software engineers in the world. GCHQ plus NSA is the vast majority of the cryptographers, plus they have libraries and test cases and methodologies dating back fifty years that the rest don't have access to.

          In that case, the benefit of open review (that, just possibly, someone in the small pool of non-spook cryptographers who know what they're doing might find a flaw) is far less than the downside (that your opponents get to see what a modern code system looks like). The lowdown on a modern closed-world cipher system would reveal the attacks they are defending against, give a good impression of their real capabilities, and so on. Yes, in a real shooting war, the spooks have to allow for their crypto systems falling into the wrong hands. But in the current climate, the tactical stuff will be exposed, but the strategic stuff can be closed algorithms and closed keys: what's not to like?

          This reminds us all of the S Box hoo-hah, where elaborate theories were put forward by open community `experts' about the `flaws' in the S Boxes in DES. It turned out, of course, that they were optimal against an attack that wasn't even public, and close to optimal against other attacks that (allegedly) weren't known to anyone. I'd take a cipher system that the NSA or GCHQ approves for government use over anything advocated outside the wire, simply because the chances of an intentional weakness in the former are far smaller than the chances of an accidental weakness in the latter.

          We went through all this in the discussion about the S Boxes.

          • Re: (Score:3, Interesting)

            by pthisis ( 27352 )
            The breadth and depth of cryptographic skill,. experience and knowledge behind the wire at Cheltenham and Fort Meade is orders of magnitude than that outside. The review process internally is actually far higher quality than that externally. This isn't like software, where even Microsoft doesn't employ a measurable fraction of the software engineers in the world. GCHQ plus NSA is the vast majority of the cryptographers, plus they have libraries and testcases and methodologies dating back fifty years that th
    • Re: (Score:3, Interesting)

      by Anonymous Coward
      Exactly, which is sort of the best proof against the NSA trying to do something like this. If anything they aren't that stupid and they seem to take their mission pretty seriously. Don't forget that half of their goal is to protect US signals.

      I'm not sure, maybe it's election season and so some of these guys are trying to raise the specters again. The Intel bug was with floating point operations and the vast majority of cryptography doesn't use any of that. Of course it's possible that there could be o

  • So... (Score:3, Insightful)

    by teh moges ( 875080 ) on Sunday November 18, 2007 @10:47PM (#21402799) Homepage
    So, if a security bug is present an exploit could happen...?
  • Original article (Score:5, Informative)

    by sk19842 ( 841452 ) on Sunday November 18, 2007 @10:47PM (#21402803)
    TFA is just a summary of an article yesterday in the NYT: http://www.nytimes.com/2007/11/17/technology/17code.html?ref=technology [nytimes.com]
    • Re: (Score:3, Informative)

      by RuBLed ( 995686 )
      Yup, and TFA really had nothing much to do with, or even any relation to, the NSA's officially sanctioned random number generator. Mr. Shamir is talking about math errors in our processors' ever-increasing complexity, much like what happened to Intel back then.

      There are no terrorists mentioned!! Sensationalist networkworld...
  • by Kuciwalker ( 891651 ) on Sunday November 18, 2007 @10:49PM (#21402813)
    It seems to me that the most likely source of a math error is in the floating point unit, since floating point math is far more complex than integer math. I've always understood that most crypto is based on integer math, both because it's based on number theory and because floating point math isn't exact. Doesn't that make this sort of exploit extremely unlikely?
    • by Traa ( 158207 )
      Compared to cryptographic algorithms, floating point math isn't that much more complex than integer math. Also, floating point math is deterministic, since floating point representations (like IEEE 754) reduce all calculations and representations to operations on bits, which are always exactly reproducible.
      • Re: (Score:3, Informative)

        by Kuciwalker ( 891651 )
        Compared to cryptographic algorithms, floating point math isn't that much more complex than integer math.

        Yet the claim is that an actual error in the implementation of elementary mathematical operations on the processor could weaken a cryptographic algorithm run on that processor, even if the algorithm itself is implemented flawlessly in source. Therefore the relevant question remains "where are processor bugs most likely to occur?"

        Also, floating point math is exact since floating points representatio

      • by gweihir ( 88907 )
        Actually, IEEE 754 does not prescribe which algorithms have to be used. It does, however, require that calculations be exact in every result bit, and recommends using longer numbers internally and well-conditioned algorithms. Sometimes implementers screw up, though.
    • by evanbd ( 210358 ) on Monday November 19, 2007 @12:07AM (#21403347)
      In the past there have existed implementations of integer math that used the floating point unit. The only one I know of offhand is the Prime95 Mersenne prime search program. I imagine there are others, though. The reason for this is simply that the floating point units were faster -- more bits per operation. The x87 FPU instructions operate on 80-bit floating point numbers, compared to 32-bit integers (integer work can't use the exponent bits, but the 64-bit significand is still far wider than 32 bits). If your code is sufficiently parallel, and you put forth the effort, there is a performance gain to be had. I don't know if this is still the case in modern CPUs (especially 64-bit ones), but it's entirely possible to do high-performance integer math on the floating point unit.
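      (A quick illustration of why this works -- shown in Python for convenience rather than x87 assembly: an IEEE double's 53-bit significand makes integer arithmetic exact as long as the values involved stay below 2**53.)

```python
# A double's 53-bit significand makes integer math exact up to 2**53.
a, b = 3_037_000_000, 2_000_001

# The product is about 6.07e15, safely below 2**53 (~9.01e15),
# so the float multiply rounds to exactly the true integer product.
assert float(a) * float(b) == float(a * b)

# Past 2**53, exactness is lost: 2**53 + 1 has no double representation.
assert float(2**53 + 1) == float(2**53)
```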
  • WTF "terrorist" (Score:5, Insightful)

    by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Sunday November 18, 2007 @10:52PM (#21402841) Homepage Journal
    Wouldn't pulling off something like this require a level of knowledge and togetherness more in line with a government agency, rather than a "terrorist" group? The results would also be more in line with what a government agency would want ("we have your secrets, ha!"), rather than what a terrorist would want ("Maybe I can't blow up a bridge / poison your water supply / whatever. But then maybe I can. So while you're deciding whether to go do things or hide under your bed all day, I have a question for you: do you feel lucky?").
    • Re:WTF "terrorist" (Score:4, Interesting)

      by the eric conspiracy ( 20178 ) on Sunday November 18, 2007 @11:12PM (#21403001)
      While government agencies surely have the upper hand here, there is always the possibility that a mole in the NSA gets their hands on the backdoor information, or a lone genius working in say Russia finds a mathematical flaw in the system.

      As far as poisoning your water supply etc. lookie here:

      http://sandia.gov/scada/home.htm [sandia.gov]

      Hardware errors are a potential problem, but they are #3 on the list after human and software problems. Why search for hardware problems when the first two are far more likely to bear fruit?

    • Wouldn't pulling off something like this require a level of knowledge and togetherness more in line with a government agency, rather than a "terrorist" group?

      Not necessarily. If you have a fault in a processor that will get a certain calculation always wrong in a predictable way, and the source code for a decryption engine available, then this _may_ be enough for a talented hacker with lots of time, with the help of a good mathematician, to crack the system. Depends on what the fault is.

  • Terrorists? (Score:4, Insightful)

    by Anonymous Coward on Sunday November 18, 2007 @10:52PM (#21402845)
    Why does everything have to come back to terrorists? They kill a small number of people and people go nuts about them. Hunger, disease, motor cars, lightning, ... All these things have killed far more people than terrorists and they don't get brought up at every *FUCKING* opportunity. Yeah. I'm pissed off. If the terrorism obsessed turned on their brains for a picosecond they might realise that they have caused far more damage than any terrorist has.
    • Re: (Score:2, Insightful)

      Hunger, disease, motor cars, lightning, ... All these things have killed far more people than terrorists

      It's about the derivative. Terrorism deaths are growing geometrically. The other causes of death you mention are essentially steady-state. Think about it. In the 70s terrorism acts killed in the single digits (Munich). In the 80s, individual acts of terror killed in the 100s (Lockerbie). In the 90s/00's they have upped the ante to 1000's. And if they get their hands on a dirty bomb or chemical weapon, t

      • by gweihir ( 88907 )
        Terrorists will not kill millions. Far too bad press for them. They are not waging a physical war, but one of the mind. As long as you US sheep keep being afraid, they may not even consider serious new killings. After all, they want terror, not real damage. 9/11 was a bit excessive on the damage side, but, to misquote a German banker, 9/11 was "peanuts" in the greater scheme of things. The reaction to 9/11 was not, unfortunately. The reaction may by now have caused a damage multiplication by 1000 or even mor
      • It's about the derivative. Terrorism deaths are growing geometrically. The other causes of death you mention are essentially steady-state.

        They are? I don't know about that, have you read the recent numbers on hemorrhagic Dengue fever in Brazil, for instance? Did you consider the recent Bangladesh cyclone? I'd like to see how you treated the 1970 one that killed 500,000 people.

        LOL :-)

        Anyways, didn't the US military say terrorist attacks in Iraq were going down?

        Where the hell do you get your facts from?
    • You have a point, but I would rather talk about wars, not terrorism, because wars have killed millions of people. As for lightning, there's not much we can do about it, unless you prefer to walk in Faraday cages for the rest of your life -- btw, I assume that caged terrorists cannot be hit by lightning...
    • Re: (Score:3, Insightful)

      by bigberk ( 547360 )
      I agree, and I'd say the bigger threat in the context of this article is organized crime. Take for example the botnets/zombie networks, which are an advanced network technology made possible through software exploits. These technology attacks are leveraged for spamming, marketing, denial of service and other forms of extortion.

      As far as threats to the nation, the spam and popups are just the "tip of the iceberg".

      Obviously, the criminals use some pretty smart minds to seek and exploit software weaknesses. I
    • Risk evaluation (Score:3, Insightful)

      by mcrbids ( 148650 )
      People generally evaluate risk on largely emotional terms. For this reason, we frequently make gross errors in risk assessment.

      1) When we think there's somebody out to get us, we evaluate that risk very highly, even when there are more immediate but "random" risks clearly at hand. For example, a "terrorist" is a bogey-man, it's somebody out to get you. But hunger has no bad guy, and neither do disease, auto accidents, and lightning.

      2) We evaluate as "risky" situations where we are not in immediate control,
    • Re: (Score:3, Interesting)

      by jimicus ( 737525 )
      A very good friend of mine unwittingly gave me an insight which I think explains it very nicely.

      As far as I can tell, his source of news is "whatever the headlines in the mainstream media are this week". When the corrections come out much more quietly six months later, buried underneath an advert for a home course in Swahili, he misses them entirely.

      As far as he's concerned, Osama bin Laden is from Afghanistan (and is probably still living in a cave there), Saddam Hussein had weapons of mass destruction an
  • don't understand (Score:4, Interesting)

    by TheSHAD0W ( 258774 ) on Sunday November 18, 2007 @10:55PM (#21402869) Homepage
    I'm not sure how Mr. Shamir envisions a simple "math error" causing a problem. A buffer overflow exploit, perhaps, but not a math error... A user on a flawed but protected computer receives a "poisoned" encrypted message, opens it... And what happens? The math error, say, elicits some aspects of the user's private key in the decoded message; but how does the attacker then obtain that information without already having access to the machine? Further outgoing messages wouldn't have any usable information; no modern cryptosystem allows a received message to affect any such message. A code exploit might affect the system's PRNG, but a math error shouldn't feed back to the PRNG unless it was horribly implemented. Without something affecting the user's machine's code execution, I can't see any way for an attacker to utilize a math error in a decryption function.
    • by SiliconEntity ( 448450 ) on Sunday November 18, 2007 @11:10PM (#21402985)
      I can't see any way for an attacker to utilize a math error in a decryption function

      Actually this is a common attack scenario in security protocol analysis. While it does not always happen in real life there are ways it can occur. For example, you try to decrypt the message and get garbage. So what do you do? You send the garbage back to the guy, saying, I couldn't read your message, all I got was this junk. Now you have been tricked into acting as what is called an "oracle" for the decryption function. This opens up a number of attacks which is why the best cryptosystems are immune to such problems.
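      For the curious, here's a toy sketch of how "sending the garbage back" acts as a decryption oracle. This uses textbook RSA with tiny primes and no padding -- purely illustrative, nothing like a real deployment: the attacker blinds the intercepted ciphertext with a random factor, and the "garbage" reply hands back the blinded plaintext.

```python
import math
import random

# Toy textbook-RSA keypair (tiny primes, illustrative only).
p, q = 999983, 1000003
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

secret = 42424242                  # the plaintext the attacker wants
c = pow(secret, e, n)              # the ciphertext the attacker intercepted

def victim_reply(ct):
    """The victim decrypts, sees 'garbage', and mails it back."""
    return pow(ct, d, n)

# Attacker blinds c with a random r: D(c * r^e mod n) = m * r mod n.
r = random.randrange(2, n)
while math.gcd(r, n) != 1:
    r = random.randrange(2, n)
poisoned = (c * pow(r, e, n)) % n
garbage = victim_reply(poisoned)

# Divide the blinding factor back out to recover the plaintext.
recovered = (garbage * pow(r, -1, n)) % n
assert recovered == secret
```

This is exactly why real cryptosystems use padding schemes (OAEP and friends) and refuse to act as raw decryption oracles.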
      • Re: (Score:3, Insightful)

        Wow... and I thought I knew the extent of user stupidity. Sending back an unsolicited message because you couldn't decrypt it (since it's fairly obvious these people wouldn't be simply sitting around waiting for people to ask them to send an encrypted message) seems to me quite absurd; sending it back partially decrypted, even more so.

        I mean, I could understand it if it was solicited communications, but what are the odds you'll happen to start into an encrypted conversation with someone who just wants y
        • On the other hand, if you had taken control of someone's account, then you'd be masquerading as the intended recipient. Seems perfectly reasonable to work with someone who they'd think should be getting the message.
        • Consider low-level handshake protocols. There is, for example, an attack on SSL that allows recovering private RSA key by measuring response delays of a victim. These responses are mandated by a protocol, so they are (in a way) solicited.
          • by yabos ( 719499 )
            Can you post a reference for that? If that's true how come people aren't breaking SSL all the time?
      • You send the garbage back to the guy, saying, I couldn't read your message, all I got was this junk.
        While this could certainly happen, the brief reports I've seen suggest that the math error is in itself sufficient, you don't also need the targeted user to be incredibly stupid.
    • Sorry, I was looking at this the wrong way. The "math error" Mr. Shamir must be talking about, with regard to "chips", must be an error in the logic system in an arithmetic logic unit. An error that might, for instance, cause one or more bits in a register to stick in one state or another, would indeed affect future messages, disrupting PRNG (both encryption algorithms and one-way) and public-key computations. I doubt a system so badly affected could continue to operate for very long, but an attacker who
    • Re: (Score:3, Insightful)

      by garompeta ( 1068578 )
      >I can't see any way for an attacker to utilize a math error in a decryption function.

      In the same way you aren't the "S" in RSA. Give him some credit, will you?
      • In the same way you aren't the "S" in RSA.
        He's also the same 'S' in the FMS attack that first cracked the WEP encryption protocol. Like Schneier, I'd trust his opinion until it's proven otherwise.
    • From the brief report, it sounds like any bug whatsoever would be sufficient to compromise any system. In the slightly more detailed version to which someone posted a link, you see that the vulnerability requires knowing a pair of integers whose product is computed incorrectly. It also requires some more minor assumptions.

      Alas, Shamir's post didn't clarify, at least to my undereducated ears, how the targeted machines are coerced into producing a reply. Do most machines have ports open that will engage
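      The mechanism the more detailed version hints at -- one known-bad multiplication breaking a key -- matches the classic RSA-CRT fault attack (Boneh-DeMillo-Lipton style; whether Shamir's note uses exactly this mechanism is my assumption). A sketch with tiny primes, simulating the hardware math error by corrupting one CRT half of the private-key operation:

```python
import math

# Toy RSA-CRT setup (tiny primes, illustrative only).
p, q = 999983, 1000003
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))
dp, dq = d % (p - 1), d % (q - 1)
qinv = pow(q, -1, p)

m = 123456789                      # the attacker's "poisoned" message

# Correct CRT halves of the private-key operation m^d mod n.
sp, sq = pow(m, dp, p), pow(m, dq, q)

# Simulate a chip-level math error corrupting the mod-p half only.
sp_faulty = (sp + 1) % p

# Garner recombination yields a result wrong mod p but right mod q.
s = (sq + q * ((qinv * (sp_faulty - sq)) % p)) % n

# s^e == m (mod q) but s^e != m (mod p), so a gcd exposes the factor q.
factor = math.gcd((pow(s, e, n) - m) % n, n)
assert factor == q
```

One faulted reply to one chosen message, and the attacker has a prime factor of the modulus -- the whole private key follows.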
  • Is this chip / software used in any slot / video poker games?
  • how to cause the blue screen of death to happen simultaneously across all computers...
  • In my experience, the slightest error will render the whole ciphertext unreadable, so it won't take long for people to complain that HTTPS shopping sites don't work on a specific computer system, and then that system will end up in landfills really quickly.
    • About the only way for the attack to work without all of the SSL and HTTPS implementations breaking is if the bug affected fewer than, say, 1 in 10^9 of normal HTTPS/SSL sessions, and the attacker knows exactly which operands produce a broken result. The attack also depends on the broken hardware either being very common, or on the attacker knowing that his/her target is using the broken hardware. This is a great argument against a hardware monoculture.

      I'd think it more likely that a bug in a popular encryption relate

  • Pentium FDIV Bug (Score:2, Informative)

    by rubicon7 ( 51782 )

    Remember the Intel blunder of 1996?
    Don't you mean 1994 [wikipedia.org]?
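    The famous FDIV trigger operands are easy to check: a flawed Pentium returned roughly 1.333739 for 4195835/3145727 instead of 1.3338204..., wrong from the fifth significant digit. A quick self-test against an exact rational reference (on a correct CPU this passes):

```python
from fractions import Fraction

x, y = 4195835, 3145727            # the well-known FDIV trigger pair
hw = x / y                          # double-precision hardware divide
exact = Fraction(x, y)              # exact rational reference

# A flawed Pentium gave ~1.333739068902, off in the 5th digit;
# a correct FPU agrees with the reference to within rounding error.
assert abs(hw - float(exact)) < 1e-12
```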
  • Y'know, I've noticed this problem on a series of processors at my college. I had to write a basic key-based cryptography program in C#. Well, I created the system with no problem. But if you ran the program in a certain lab where all the computers are identical (hardware and software), it would generate the same 4 key sets each time. My solution was just to use an external DLL with my own generator from another language.

    My point for this example is that I don't believe it's the processor's fault. If the
    • Re: (Score:3, Informative)

      by DrJokepu ( 918326 )
      You are aware that computers can only generate pseudo-random numbers, right? The random number generator in C# doesn't actually generate random numbers, but numbers that look random. These numbers are generated from a 'seed'. If you give the same seed to the computer, it will generate the same set of numbers. The C# implementation (if you don't supply a seed yourself) uses the system clock as the seed, hence if you start your random-number-generation session in the same millisecond on identical computers, they will gen
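      The "identical lab machines" effect is easy to reproduce: two PRNGs given the same seed emit identical streams, which is exactly what clock-seeding on machines started in the same instant produces (Python's `random` shown here in place of C#'s `Random`):

```python
import random

# Two generators seeded identically produce identical streams --
# the same thing that happens when identical machines happen to
# seed from the same clock tick.
a = random.Random(12345)
b = random.Random(12345)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# A different seed diverges immediately.
c = random.Random(54321)
assert a.random() != c.random()
```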
      • Re: (Score:3, Informative)

        by evanbd ( 210358 )
        It doesn't have to be a Geiger counter. There is plenty of randomness to be had in the exact timing of key presses, exact behavior of rotating media, incoming network information, etc. etc. It can be harder to make use of (poor or unknown distribution, patterns that you might not know about), and it might be insecure (especially if it came from the network card), but there are plenty of physically derived things a modern computer can measure and generate randomness from with enough processing of the raw da
      • Just use a lava lamp [wired.com] or a camera [hackaday.com] with the lens cap on.
    • by hhawk ( 26580 )
      IMHO, an RNG is really a hard problem, because if it isn't truly random you weaken the crypto. IMHO, you really need to use a physical source to generate the randomness, and not some direct method such as a chip's built-in RNG function. A physical source might be radioactive decay, but there are other things that could be sampled.
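      Modern systems do make physical sampling easy to consume: the kernel's entropy pool mixes physical noise such as interrupt timings and device jitter, and tapping it is a one-liner (Python sketch, assuming a reasonably recent OS):

```python
import os
import secrets

# Kernel entropy pool: mixes interrupt timings, device jitter, etc.
key = os.urandom(32)                 # 32 bytes of OS-gathered randomness
assert len(key) == 32

# The secrets module wraps the same OS source for security use.
token = secrets.token_hex(16)        # 16 random bytes as 32 hex chars
assert len(token) == 32
```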

  • The first thing that went through my head as I read the story was:

    "Mr. Potatohead! Mr. Potatohead! Backdoors are not secrets!"

  • If a guy like Shamir says this, I'd say it's full-red-alert for all those manufacturing this sort of chip. We are just doing asymmetric crypto in Math 1 (Bachelor CompSci) and my brain goes into overload-error every 5 minutes or so as soon as the professor starts talking about it. Someone like Shamir (the "S" in "RSA", btw) who can come up with this sort of thing should be considered 'God' in the field of cryptography, and his call to action should be duly noted.
  • I'm sure Intel test their chips very carefully now, to avoid further embarrassment, but even for the average user it wouldn't be hard to do. A program that tests all mathematical functions of a CPU over the entire range of 8, 16, 32 and 64 bit numbers (both float and integer) for errors would be fairly easy to craft. Might take a while but would only need to be done once per revision of CPU.
  • I have thought about this a lot. Perhaps mathematicians should be locked up as terrorists. They would not be able to get out unless they can factor rsa120 in their head. Clearly math is a subversive activity. It leads to all sorts of hack attacks on computers and computer communications. It leads to dangerous things like atomic bombs and bridges that collapse under heavy winds. It leads to wet naked men running down the street yelling Eureka. (A small city in California - how did he know about Eureka?
