Encryption Security IT

FPGA Bitstream Security Broken 90

Posted by timothy
from the your-determined-foes-rub-their-hands-gleefully dept.
NumberField writes "Researchers in Germany released a pair of papers documenting severe power analysis vulnerabilities in the bitstream encryption of multiple Xilinx FPGAs. The problem exposes products using FPGAs to cloning, hardware Trojan insertion, and reverse engineering. Unfortunately, there is no easy downloadable fix, as hardware changes are required. These papers are also a reminder that differential power analysis (DPA) remains a potent threat to unprotected hardware devices. On the FPGA front, only Actel seems to be tackling the DPA issue so far, although their FPGAs are much smaller than Xilinx's."
This discussion has been archived. No new comments can be posted.

FPGA Bitstream Security Broken

Comments Filter:
  • Good or bad? (Score:4, Interesting)

    by Hatta (162192) on Thursday July 21, 2011 @02:20PM (#36836866) Journal

    Is this the good kind of security breach, which enables end users to do new things with their FPGAs? Or the bad kind, that enables attackers to do malicious things with others' FPGAs? Or both?

    • Re: (Score:3, Informative)

      by Anonymous Coward

      If the encryption is cracked, it can expose the core to reverse engineering as well as injection of malicious code. If the bitstream contains a soft processor and software image, it could get really interesting, as it opens up another vector for getting malicious software onto the device in question.

    • by Anonymous Coward

      Unless you're into industrial espionage or are Chinese (but I repeat myself), it's purely the bad kind.

      • by Luckyo (1726890)

        Erm, the USA is hands down #1 in industrial espionage. Have no doubt about that, ever. If you do, look at what the USA did to Russia during the Cold War. Things like the biggest pipeline explosion in the world, caused by industrial espionage.

        • by Nrrqshrr (1879148)
          o hi! am year 2011, where the cold war ended and china is the copy/paste factory of the world. with love. xoxoxo kthxbye.
          • LMAO now we're not just behind in manufacturing and test scores, we also trail behind what once was a third world country in espionage. Great. We're all going to hell.
          • by Luckyo (1726890)

            Sure, and that means that smart people in intelligence most likely already fed them a whole lot of long-term critical errors that will bloom when needed.

        • Back in the late 1700s, the technology behind the textile industry (spinning, looms) was a British state secret. Nobody who had been trained in the technology was allowed to leave Britain. Samuel Slater [wikipedia.org] dressed as a girl, sailed to America, and replicated the British technology. That was a big part of the beginning of the American Industrial Revolution, and the beginning of the end of the British monopoly on cheap textiles.

          Some of the mills built in the early 1800s in New England still stand. Of course,

    • Mostly good. If the attacker can lay physical hands on your machine, most reasonable security people consider it compromised.

      This sounds like it makes it harder for manufacturers to TiVoize their products.
      • Re:Good or bad? (Score:4, Insightful)

        by harrkev (623093) <kfmsd@@@harrelsonfamily...org> on Thursday July 21, 2011 @02:51PM (#36837210) Homepage

        Also, if you SELL products with FPGAs in them, it makes it harder to make a profit if somebody decides to reverse-engineer your stuff. Really, all this is good for is cracking into a design that somebody else made. Once you GET the actual bitstream, there are really two things that you can do with it...

        1) Make copies of the FPGA design. Boards are not that hard to reverse-engineer, so you could copy somebody else's product completely.

        2) Reverse engineer the code. However, you will NOT have anything that would help you do this, like net names or hierarchies. This will make actual reverse-engineering in order to change something or learn something very challenging.

        This does NOT make FPGAs any more useful, since you can easily download free development software from every FPGA vendor and put whatever you want on there. Really, the only thing that you CAN'T do with the free software is stuff related to licensed IP (processor cores, various controllers for things like Ethernet, SATA, etc.). While you COULD pull that out of an encrypted bitstream, using it without any sort of documentation or the configuration wizards would be very challenging and, 9 times out of 10, it is just easier to pony up the money to license the cores in the first place.

        • by jp102235 (923963)
          you could also modify the bitstream and release malicious code into an STB. Another thing: sometimes these STBs are more 'trusted' because the engineers assume that the bitstream/designs in the FPGA are secure. It's a great place to put a trojan, monitor packets, etc. This is not a good thing. It will mean more expensive hardware in the future.
        • by chrb (1083577)

          there are really two things that you can do with it... 2) Reverse engineer the code. However, you will NOT have anything that would help you do this, like net names or hierarchies. This will make actual reverse-engineering in order to change something or learn something very challenging.

          I think you underestimate the difficulty of number 2. I know a guy who figured out the bitstream format of a particular FPGA type that he was using so that he could write his own synthesis tools for research. It took him a couple of months, but he did it. There are now published papers on this topic: From the bitstream to the netlist [psu.edu] and A library and platform for FPGA bitstream manipulation [megacz.com], so it should be somewhat easier now. There was even a tool called "debit" that disassembled the bitstream back to FPGA tools

          • by chrb (1083577)

            I think you underestimate the difficulty of number 2.

            Should obviously be "overestimate"...

          • by jp102235 (923963)
            help me out with that link to the FPGA bitstream library at Berkeley, it's giving me a 404-like response, surely it's not slashdotted.
            • by chrb (1083577)
              berkeley [berkeley.edu]. Also see section 3 of this [cam.ac.uk].
              • by jp102235 (923963)
                and my advisor says slashdot is a waste of time. thanks for the links, they are immensely useful.
                • by chrb (1083577)
                  If you are actually doing research on bitstreams there's some more recent stuff - Florian Benz, whose thesis is "Reverse Engineering the FPGA Bitstream Format" (not yet published?), and from the same research group Andreas Marinopoulos, "Reverse Engineering of FPGA Netlists", 2010. Florian posted to some FPGA groups a few months ago saying he wanted to publish his library as open source, but I haven't found it anywhere yet. I suspect if you emailed him he would provide you with a copy.
                  • by jp102235 (923963)
                    yea, I am in this area, I'll see if I can get a look at the manuscript, thanks! I am looking to make constructs on FPGAs that the EDA crapware won't allow, at least with some elegance / automation. I've been forced to make my undergrad student use Xilinx's "fpga editor" - although it works for what we are doing, it is tedious and not very repeatable or scriptable. Why are these companies so fearful to release the specs on the bitstream / architecture?
    • by Anonymous Coward

      Is this the good kind of security breach, which enables end users to do new things with their FPGAs? Or the bad kind, that enables attackers to do malicious things with others' FPGAs? Or both?

      Both.

      FPGAs are used for a lot of stuff, so on the one hand, it could be used to decrypt media after purchasing it... but they're also used by hardware which transmits information most people want to be secure (as an example, many modern Gas/Electric meters use FPGAs to transmit encrypted usage information back to the utility for billing and use analysis. I think most people would agree that this data should be encrypted in transit, and this means that it can't be trusted 100%).

      The big issue is that this doe

    • Re:Good or bad? (Score:4, Informative)

      by Andy Dodd (701) <atd7&cornell,edu> on Thursday July 21, 2011 @02:34PM (#36837044) Homepage

      There's nothing about the Xilinx bitstream encryption that prevents you from loading in an unencrypted bitstream, or a new bitstream with a new key.

      Unfortunately it means that it's easier to compromise/clone/tamper with FPGA designs. FPGA cloning/tampering has been a big problem for Cisco as I understand it (counterfeit Cisco products).

      • by Anonymous Coward

        You are supplying full design files: schematics, Gerber files, firmware, and FPGA loads to the contract manufacturer so that they can make legit products for you. What if someone at the factory decided to simply copy that data to a Flash drive and sell it to someone else? No amount of encryption is going to do you any good against that level of attack.

        • by Anonymous Coward

          You are supplying full design files: schematics, Gerber files, firmware, and FPGA loads to the contract manufacturer so that they can make legit products for you.

          ...and it's well known that the same manufacturers are building Cisco replicas *cough* huawei *cough*
          Not surprisingly they even have the same bugs.

      • by munozdj (1787326)

        There's nothing about the Xilinx bitstream encryption that prevents you from loading in an unencrypted bitstream, or a new bitstream with a new key.

        Unfortunately it means that it's easier to compromise/clone/tamper with FPGA designs. FPGA cloning/tampering has been a big problem for Cisco as I understand it (Huawei products).

        ftfy

    • Re:Good or bad? (Score:5, Informative)

      by Animats (122034) on Thursday July 21, 2011 @02:36PM (#36837058) Homepage

      Is this the good kind of security breach, which enables end users to do new things with their FPGAs? Or the bad kind, that enables attackers to do malicious things with others' FPGAs? Or both?

      This attack is only useful when an FPGA is programmed by a third-party manufacturer using a canned encrypted bitstream provided by someone else. This is the case for many products nominally made by US, Japanese, or Taiwanese firms but actually built in China. The attack allows someone with access to the encrypted bitstream to recover the unencrypted bitstream, from which they can potentially reverse-engineer the device and make changes.

      An end user, who has only the programmed FPGA, can't do anything with this attack.

      For background, here's a short note on where this technology is used. [militaryaerospace.com]

      • by chrb (1083577)

        An end user, who has only the programmed FPGA, can't do anything with this attack.

        Not really. According to TFA, the majority of deployed systems utilise external memory for the bitstream, so an end user will be able to easily extract the bitstream. Also, many devices are now updatable via the internet - so the bitstream is accessible via the web (or via satellite/cable, for Pay TV).

        "One of the disadvantages of FPGAs, especially with respect to custom hardware such as ASICs, is that an attacker who has access to the bitstream can clone the system and extract the intellectual property of t

        AFAICT the silicon processes that make good high-speed logic do not make good EEPROM/flash and vice versa. So high-end processors and FPGAs tend to have little to no programmable area on the chip and rely on reading their code from a separate device.

          • by ngg (193578)
            This is true, but there isn't really any technical reason Xilinx (or any other FPGA manufacturer) can't ship a hybrid IC (one with multiple dies in the same package).
      • by Hatta (162192)

        An end user, who has only the programmed FPGA, can't do anything with this attack.

        If I understand correctly, the end user isn't threatened by this attack either then. The only thing the end user has to worry about is potentially getting a cloned device.

    • by drolli (522659)

      Well, on one hand I would appreciate having the freedom to reprogram HW built by somebody else (e.g. Cisco). On the other hand, the most prominent reason I can imagine for doing so would be HW trojan insertion. (You would have to verify the flash's contents with Cisco after you bought a router)....

  • by Anonymous Coward
    I like to think I'm pretty technical, but this article was fucking martian to me. Anyone care to translate? (Posting anon so I can mod-up helpful replies.)
    • by Anonymous Coward on Thursday July 21, 2011 @02:32PM (#36837014)

      As transistors switch they create little glitches in the power supply, or rather they consume a little more or less current than at the previous steady state (where steady state may be nanoseconds long). By correctly interpreting the changes in current consumption the encryption key can be read.

      For the car analogy (this is slashdot after all) think of it as monitoring fuel flow to extrapolate acceleration, speed and distance.
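      The correlation idea the parent describes can be shown in a toy software model. This is a hedged sketch, not the attack from the papers: the S-box, the Hamming-weight leakage model, the noise level, and all names are invented for illustration. The point is only that if power samples correlate with some function of (data XOR key), trying all key bytes and keeping the best-correlating hypothesis recovers the key.

```python
import random

# Toy CPA/DPA demo. Everything here is invented for illustration;
# real attacks use thousands of oscilloscope traces of a real cipher.
rng = random.Random(0)
SBOX = list(range(256))
rng.shuffle(SBOX)                 # random byte permutation as a toy S-box

def hw(x: int) -> int:
    """Hamming weight: number of set bits, the usual CMOS leakage proxy."""
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

SECRET_KEY = 0x5A
plaintexts = [rng.randrange(256) for _ in range(2000)]
# "Measured" power samples: Hamming weight of the S-box output plus noise.
traces = [hw(SBOX[p ^ SECRET_KEY]) + rng.gauss(0, 1.0) for p in plaintexts]

def dpa_guess(plaintexts, traces):
    """Try every key byte; the hypothesis correlating best with power wins."""
    return max(range(256),
               key=lambda k: abs(pearson([hw(SBOX[p ^ k]) for p in plaintexts],
                                         traces)))

assert dpa_guess(plaintexts, traces) == SECRET_KEY
```

      Note the attacker never sees any plaintext/ciphertext pair of the key schedule directly; the statistics of the power consumption alone single out the key.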

      • ...to try to keep the power consumption constant, therefore not giving hints, if I understand correctly.

        • by Anonymous Coward on Thursday July 21, 2011 @02:50PM (#36837194)

          There is only so much you can do. We put a fair amount of power supply filtering around FPGAs because of the switching noise, but the cost in board space and materials to make the switching undetectable would be astronomical. As HW engineers we're always asked to cram a little more in that space, and "do you really need that many capacitors?"

          The company I work for (and the reason I'm posting anonymously) uses a bunch of FPGAs per board with man-years of code invested into them, and we usually use Xilinx parts. It's relatively trivial to get the bitstreams from our systems which hasn't bothered us since they're encrypted (or I guess they used to be).

            I am referring to adding circuitry into the FPGAs themselves, so that the current consumption cannot be as easily used for side-channel attacks.

            In a sense, think of adding additional NOT gates, within the FPGA itself, and their only purpose would be to always have the combination of an actual [data line + NOT] provide a sum of constant power consumption wherever the FPGA is doing anything that might leak side-channel info. None of the NOT gates would actually be part of processing actual data. At least, th
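            The complementary-NOT-gate idea above can be modeled in a few lines. This is a toy software model with invented names, not hardware; it shows both what the scheme buys (constant static weight) and the limitation the replies raise (the number of toggling wires still depends on the data, so dynamic CMOS power still leaks).

```python
def dual_rail(word: int, width: int = 8):
    """Encode each bit b as the complementary pair (b, 1-b)."""
    return [((word >> i) & 1, 1 - ((word >> i) & 1)) for i in range(width)]

def static_weight(enc):
    """Total number of wires held high: constant for any data word."""
    return sum(a + b for a, b in enc)

def transitions(prev, cur):
    """Count wires that toggle between two consecutive encoded states."""
    return sum((pa != ca) + (pb != cb)
               for (pa, pb), (ca, cb) in zip(prev, cur))

# Static weight is data-independent: every byte encodes to weight 8...
assert {static_weight(dual_rail(w)) for w in range(256)} == {8}
# ...but toggle counts still vary with the data, which is the
# dynamic-power leak pointed out in the replies.
assert transitions(dual_rail(0x00), dual_rail(0x00)) == 0
assert transitions(dual_rail(0x00), dual_rail(0xFF)) == 16
```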

            • by Anonymous Coward

              Adding that into an FPGA still adds cost and takes room; if you put in inverting gates to drive equivalent loads, that reduces the number of resources available for the job. Therefore I have to pay more to get an FPGA that will do what I want. Then there's the unfortunate reality of real circuits: the two gates will never be perfectly timed, and so as a result there will still be small glitches on the power supply. This becomes a race to security through obscurity which is only effective if it is the last st

            • This won't work because you will still have information leaked when the bits are toggling vs. not toggling. The reason for that is that the logic (CMOS gates in particular) generates power spikes when a bit toggles. That needs to be addressed.

              See here: http://en.wikipedia.org/wiki/Cmos#Power:_switching_and_leakage [wikipedia.org]
            • IANA EE, but ... include an additional circuit that switches randomly, imposing a random element on the current flow - if you have some gate space left over from doing the real work.

    • by Anonymous Coward on Thursday July 21, 2011 @02:34PM (#36837032)

      An FPGA is sort of like a PROM except that instead of memory circuits you program logic circuits into it.

      If this hack allows people to reverse-engineer the chip, they can basically dump its logic diagram, which means that they could copy it. As I understand it, it's normally pretty hard to reverse-engineer a microchip, so this is a pretty significant breakthrough.

      • by atrus (73476)
        Note that most FPGAs (and all of Xilinx's) are SRAM based - the bitstream has to generally be loaded from an external memory IC at boot-time.
        • by Laser Dan (707106)

          Note that most FPGAs (and all of Xilinx's) are SRAM based - the bitstream has to generally be loaded from an external memory IC at boot-time.

          Not true, the Xilinx Spartan-3AN can store the bitstream in internal flash memory.
          That is the only family with that feature though.

  • by kbonin (58917) on Thursday July 21, 2011 @02:38PM (#36837072) Homepage

    An interesting blurb from the Actel linked page:

    Many of the fundamental techniques used to defend against DPA and other side-channel attacks are patented by Cryptography Research, Inc. ... One of CRI's businesses today is licensing this portfolio of very fundamental patents. Nearly all the secure microcontrollers used in smart cards, set-top boxes, SIM cards for GSM phones and Trusted Platform Modules (TPM) for personal computers are built under license to CRI, amounting to about 4.5 billion chips per year in total.

    Yet another critical set of concepts which should be obvious to anyone working in the field locked behind a paywall due to USPTO incompetence and/or malfeasance...

    • by bws111 (1216812) on Thursday July 21, 2011 @03:18PM (#36837454)

      Yet another idiot who doesn't understand the simple fact that the 'obvious' test is applied BEFORE the patent is public. Of course it is 'obvious' AFTER the patent is public. If you asked 100 people working in the field how to "defend against DPA and other side-channel attacks" BEFORE the patent (or anything using the patent, or any papers based on the patent, etc) was public, what percentage of them would have come up with the EXACT SAME WAY (not 'general concepts', the exact methods used) that CR did? It had better be very close to 100% if you are going to claim 'obvious'. If you ask these same 100 people AFTER the patent is public, 99 of them will claim that the CR method is 'obvious'.

      • what percentage of them would have come up with the EXACT SAME WAY (not 'general concepts', the exact methods used) that CR did?

        People who complain on Slashdot about the USPTO's examination process are under the impression that inventors manage to score patents on "general concepts".

        • by kbonin (58917)

          Not everyone who complains on Slashdot is naive on patent realities, and the problem is real and ugly.

          Aside from the legal fiction of the PHOSITA (Person Having Ordinary Skill In The Art), the intent of this clause by the framers was that it should not be possible for anyone to obtain a patent on something that would be obvious to someone working in the field.

          In this specific case, once the feasibility of power vector side channel attacks was understood, any ideas that should have been obvious to someone ha

  • So maybe a little transistor meant to do one thing is not incredibly secure. But the Russians are going to start writing malicious code that waits for FPGA users to visit hacking URLs and then download exploits to their servers?
  • but I am a security person. The way I understand it, these keys are different for each individual device. Also, the attack requires direct physical access to the device. As a customer, wouldn't all potential threats require a physical security breach? Forgive me if I'm mistaken I'm not entirely sure I understand how/where these things are implemented. It seems like they're mostly used in switches and routers and things. If someone is poking at the power supplies on your switches and routers, I'd imag
  • gives a reasonable description of what all of this means, but it seems to me that Xilinx is approaching this wrongly.
    They should create a chain of trust and sign vendors' certificates (or, for large production runs, allow purchasers to do so). The FPGA would only accept a signed bitstream that can be traced back to a particular vendor. All new FPGAs should have a burned-in CRL and a burned-in Xilinx-signed certificate in ROM. That would allow mutual authentication at least. You can layer encryption on top of
    • The FPGA would only accept a signed bitstream that can be traced back to a particular vendor.

      How would the user of an evaluation kit sign a bitstream for such an FPGA?

      • Presumably the manufacturers of the FPGA would provide parts for dev kits (or for customers that don't care) that will accept any self-signed certificate.
        They already provide devices with lots of different options; the certificate options would only add a few more {Xilinx CA, any CA acceptable, customer-1 CA, customer-2 CA} etc., with parts accepting customer-X CA only available to customer X. For all I care, they could provide them for programming by anything with a Verisign certificate if they wanted to or
  • The very well written story How Digital Detectives Deciphered Stuxnet, the Most Menacing Malware in History [wired.com] on Wired.com describes an attack on an Iranian nuclear plant through inserting frequency changing commands sent to the PLC to damage centrifuges. The papers the OP mentioned are probably something very important if encrypted FPGA bit streams can indeed be meaningfully tampered with easily.
  • 7-year Altera person here. Your bitstream got compromised? Your chip is copied.
  • by Anonymous Coward

    Remember kids: FPGAs are used in robotics... so, see subject-line above & beware!

    ("Muahahahaha" mad-scientist laughter & "SiNiSteR" sounding organ music plays...)

    APK

    P.S.=> On a more serious note though - this MAY have "security-implications" on the note of robotics, one day in the future - Hence the subject-line I used...

    ... apk
