Security

Researcher Has New Attack For Embedded Devices

tinkertim writes "Computerworld is reporting that a researcher at Juniper has discovered an interesting vulnerability that can be used to compromise ARM- and XScale-based electronic devices such as many popular routers and mobile phones. According to the article, the vulnerability would allow hackers to execute code and compromise personal information or redirect Internet traffic at the router level. At this month's CanSecWest conference, Juniper plans to demonstrate not only the researcher's discovery, but also how he used a common JTAG boundary-scan debug interface to find the vulnerability, in hopes of shifting more of the black hat community toward looking at devices instead of software."
  • You can use a debugger to actually see where the code checks for the registration key, and by manipulating the program in a hex editor, you could even make the code skip over the check and run without the key.

    I've just had the greatest idea for my PhD.
    • Re: (Score:3, Interesting)

      by pytheron ( 443963 )
      Hardly new! We were doing this way back in the warez scene on the Amiga. Whip out your favorite disassembler, change a few bne.w instructions to jump to the "it's authenticated!" code. A colleague and I even did this on the Palm Pilot. (Anyone remember that monkey that you fed crack pipes to on this?)
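      For illustration, here is a minimal C sketch of the kind of check being described; the program and key are purely hypothetical. On ARM the "if" typically compiles to a CMP followed by a conditional branch (e.g. bne.w), so rewriting that one branch to an unconditional jump, or to a NOP, drops you straight into the "it's authenticated!" path:

        /* Toy example (hypothetical program and key): the sort of check that
         * gets patched out. The conditional branch generated for the `if`
         * below is the instruction a cracker retargets in a hex editor. */
        #include <stdio.h>
        #include <string.h>

        static int key_is_valid(const char *key)
        {
            /* stand-in for a real validation routine */
            return strcmp(key, "SECRET-1234") == 0;
        }

        int main(int argc, char **argv)
        {
            const char *key = argc > 1 ? argv[1] : "";

            if (!key_is_valid(key)) {        /* <- the branch that gets rewritten */
                puts("Invalid registration key.");
                return 1;
            }
            puts("It's authenticated!");
            return 0;
        }
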
      • It was called AREXX.
          • Hmm... no. Either Devpac or ArgAsm. And I meant the name of the monkey! (On a side note, the Action Replay box was fantastic for reverse engineering.)
          • Are you trying to say the Amiga didn't have AREXX?
              • 'course it had AREXX. Just that you wouldn't use it for disassembling an executable and setting breakpoints etc.
              • And the reason why is because REXX is a scripting language, like Python or Perl. Bill Hawes did a great version of REXX for the Amiga. I scripted a lot of things in it!
                • by jgrahn ( 181062 )

                  And the reason why is because REXX is a scripting language, like Python or Perl. Bill Hawes did a great version of REXX for the Amiga. I scripted a lot of things in it!

                  Yeah, but unlike Perl and Python, AREXX sucked, from a programming language point of view. It worked as a scripting glue, but I wouldn't want to write a substantial program in that language.

                    • True, true, it did suck as a programming language... Perl and AREXX were first developed about the same time, Python a little later, and neither Perl nor Python was available for the Amiga until much later. What set AREXX apart was the IPC mechanism - applications had hooks for it. It allowed separate applications to work together as one. That is something that is still missing for the most part on *nix and Windows (or is a HECK of a lot more complicated). It made everything scriptable, and allowed everything…
                      • I've heard of a C compiler written in Bash. The capacity of a programming language to suck depends significantly on the intended application. AREXX would probably work quite well to write a Linux From Scratch [linuxfromscratch.org], Amiga From Scratch [osnews.com], or AROS From Scratch [sourceforge.net] installer.

                        Amiga's mascots have always been sooooooooooo s3xy. =D
      • by obarel ( 670863 )
        The ZX Spectrum had a wonderful piece of hardware called the SpecMate. With a click of a button it would dump the memory image (after the magic "code" had been entered), and then all you do is load the image and you have the game exactly where you left it. This practically breaks any "security" scheme, because it skips the entire loading process.

        I wonder why they don't do the same for modern operating systems - basically storing the entire "context" (memory pages, registers, etc.) and loading it later, maybe…
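        As a toy illustration of that "capture the registers and memory now, drop back into them later" idea at single-process scale (illustrative C only, not a real checkpoint/restore API; suspend-to-disk does the same thing for a whole machine):

          /* Toy sketch of save-the-context / restore-it-later. setjmp() captures
           * the register context and memcpy() snapshots a region of state; the
           * names here are illustrative only. */
          #include <setjmp.h>
          #include <stdio.h>
          #include <string.h>

          static jmp_buf saved_regs;
          static int game_state[4];        /* stand-in for "memory pages" */
          static int snapshot[4];

          int main(void)
          {
              game_state[0] = 42;                              /* play the game a bit */

              memcpy(snapshot, game_state, sizeof snapshot);   /* dump the memory */
              if (setjmp(saved_regs) == 0) {                   /* dump the registers */
                  game_state[0] = 0;                           /* state gets clobbered... */
                  memcpy(game_state, snapshot, sizeof snapshot);
                  longjmp(saved_regs, 1);                      /* "load the image" */
              }
              printf("restored: %d\n", game_state[0]);         /* prints 42 */
              return 0;
          }
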
        • by Anonymous Coward
          They do. It's called "hibernate" or "software suspend". That is, except for the different machine part.
          • Re: (Score:1, Insightful)

            by Anonymous Coward
            Yeah, that's called VMware.
  • that Juniper wants the BLACK HAT hackers focusing on their hardware?

    To me that seems bass-ackwards. Something seems fishy about the post; perhaps they want White Hat hackers, or maybe they are afraid of the interest of Black Hats, but... surely they aren't excited to have people finding holes in their devices and not reporting them?
    • Re: (Score:3, Insightful)

      by ePhil_One ( 634771 )
      that Juniper wants the BLACK HAT hackers focusing on their hardware?

      Not on their hardware, but hardware in general. Show folks that those Linksys firewalls aren't as good as the NetScreen products, which cost 5x to 100x more. I'm sure they are unreasonably confident in the security of their own product.

      • by wytcld ( 179112 )
        On the one hand we have home users behind Linksys firewall/routers. On the other hand we have business users who have better primary firewall hardware (at the cheaper end, at least a Linux iptables box) but who have some of the stuff from Linksys and its competitors sitting behind that, running wireless in their offices. So is this exploit going to be something that only threatens the former, or is it something that you could embed in a web page such that your office user, in pulling it across the Linksys-type…
        • by Curtman ( 556920 )

          On the one hand we have home users behind Linksys firewall/routers. On the other hand we have business users who have better primary firewall hardware (at the cheaper end at least a Linux iptables box) but who have some of the stuff from Linksys and its competitors sitting behind that running wireless in their offices.

          And then there's us poor schmucks who bought something like this [archos.com], and just want to be able to run whatever code we want on it. These folks [archopen.org] have done a lot of hacking on the Archos devices.


    • Juniper plans to demonstrate... at this month's CanSecWest conference in hopes of shifting more of the black hat community to looking at devices instead of software

      My initial reaction was along the lines of, "Good God, I hope they get together with Marvell & JTAG and post some firmware updates before they release the details."

      To do otherwise would strike me as nigh unto criminally negligent.

      Or maybe they're saying that the vulnerability can't be patched in firmware?!? If so, then yikes! [And all…
      • Re: (Score:2, Informative)

        by billcopc ( 196330 )
        Firmware can only do so much. They're basically taking advantage of the JTAG debugging circuitry. It's the kind of thing you use during design; then usually you just strip off the connector/header before shipping. You could completely remove the JTAG and be safe that way, but that means reworking the circuit one last time _without_ debugging functionality, where a lot of things can go wrong and you have no way of tracing them... well, not without pulling out your grand-daddy's digital probe and frequency…
        • The solution is to have a JTAG disable bit, which software can set but not clear. The boot ROM can check for signed software in flash and set the bit unless it was signed with a debug signature. So when you're in development, you sign with the debug-enabled key. When you ship, you sign with the non-debug one. You can use the same trick so that any debug printouts you didn't get a chance to strip out don't actually come out of the pins on the chip, even though they go almost all the way there, so that…
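          A rough C sketch of that scheme; the register address, bit position and signature-check routine are all hypothetical stand-ins, and real parts implement the disable as a write-once sticky bit or an e-fuse:

            /* Sketch of a set-but-not-clear JTAG disable bit in a boot ROM.
             * All names and addresses are hypothetical. */
            #include <stdbool.h>
            #include <stddef.h>
            #include <stdint.h>

            #define JTAG_CTRL    (*(volatile uint32_t *)0x40001000u)  /* hypothetical register */
            #define JTAG_DISABLE (1u << 0)   /* hardware latches this bit until the next reset */

            /* assumed to exist in the boot ROM: verifies the image against a key slot */
            extern bool image_signed_with(const uint8_t *image, size_t len, int key_id);

            enum { KEY_PRODUCTION = 0, KEY_DEBUG = 1 };

            void bootrom_lock_jtag(const uint8_t *flash_image, size_t len)
            {
                /* Only images signed with the debug key keep JTAG usable; production
                 * images (and anything unsigned) lock it before the code ever runs. */
                if (!image_signed_with(flash_image, len, KEY_DEBUG))
                    JTAG_CTRL |= JTAG_DISABLE;   /* sticky: software can set it, never clear it */
            }
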
  • Via JTAG? (Score:5, Interesting)

    by Anonymous Coward on Thursday April 05, 2007 @04:26PM (#18627317)
    Is this implying that it could be done remotely? The product I work on supports JTAG access via software, but if you can do that, you already own the box. (And have our internal hardware specifications.)

    If it's not remote, then what's the point? I thought it was already well established that if you have physical access to the device you can do anything you want.
    • Re: (Score:3, Interesting)

      by microbee ( 682094 )
      I believe it requires physical access, so it's like "hacking your own box". However, vendors typically do not grant full access (read: shell) to customers, so very experienced customers (or competitors) could now use this method to get into the black box and find out more internal details.
    • Re:Via JTAG? (Score:5, Insightful)

      by yorgasor ( 109984 ) <ron@@@tritechs...net> on Thursday April 05, 2007 @04:44PM (#18627579) Homepage
      No, he used JTAG to discover the vulnerability. He will disclose how to take advantage of the vulnerability at the conference. He's just letting other people know they can peek into hardware using the JTAG interface as well.

    • Yes...and no. Saying that people should *never* get access to your hardware is not an excuse for not making it as secure as possible. Why design a secure keyboard interface? No point, right? Until you find a hardware keylogger plugged into your keyboard port - probably placed by a 'trusted' co-worker or boss.
    • by Rei ( 128717 )
      I'm sure we'll get a remote exploit one of these days. I'm particularly interested in the possibility of RF-induced currents to manipulate registers, buses, caches, etc. On one side of the spectrum, you have your HERF-style weapons, which just put out high-power noise to induce currents in systems. The currents are effectively random and tend to just crash machines. However, if you could have reliable, predictable, specific induced currents -- say, taking advantage of the length of particular wires and…
      • Re: (Score:3, Insightful)

        by QuasiEvil ( 74356 )
        Difficult at best, impossible in 99.999% of cases. For the most part, in modern high speed digital design, all of the bus path lengths are close to the same for reasons of propagation delay. Also, you don't really want to induce current flow, you want to induce a DC voltage at exactly the right moment. As you'll remember, one of the components of induction is frequency, and you'd need to synchronize your induced peaks with exactly when the device was sampling.

        I'm not saying it's impossible, but it would…
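        For reference, the induction relation being appealed to here is just Faraday's law; the sinusoidal field below is an illustrative case, not anything from the article:

          \[ \mathcal{E} = -\frac{d\Phi_B}{dt}, \qquad \Phi_B = \int \mathbf{B}\cdot d\mathbf{A} \]
          \[ B(t) = B_0 \sin(\omega t) \;\Rightarrow\; \mathcal{E}(t) = -A\,B_0\,\omega \cos(\omega t) \]

        so the induced amplitude scales with the frequency \(\omega\) as well as with \(B_0\) and the loop area \(A\), and on top of that the peak has to coincide with the instant the receiver samples the line.
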
        • by Rei ( 128717 )
          What makes me think that it's workable is that you can TEMPEST a CPU. Different ops give off different RF, which is detectable even outside the case with good enough equipment. While proper design tends to keep bus path lengths the same, the same can't be said about path geometry.
      • The problem would be inducing currents in a particular conductor without inducing currents in the conductors around it. Inside practically any device with an IC on it, any attempt to remotely induce current via RF interference is probably going to just crash it, or fry it. It's one thing to try to read off of one particular line (Van Eck phreaking), but it's another to try to replace the signal on that line without frying (or at least rebooting) the entire machine.

        Unless you're talking about trying…

    • Sure, no problems.

      You just need to get the victim to open up their unit, solder on some contacts and hook up an ethernet-enabled jtag debugger and plug that into the ethernet without a firewall. Something like: http://users.actrix.co.nz/manningc/lejos_nxt.jpg [actrix.co.nz] (a JTAG unit hooked up to a Lego NXT device).

      You'd then be able to debug the device as much as you want without the victim noticing anything.

    • Is this implying that it could be done remotely? The product I work on supports JTAG access via software, but if you can do that, you already own the box. (And have our internal hardware specifications.)

      If it's not remote, then what's the point? I thought it was already well established that if you have physical access to the device you can do anything you want.

      The researcher discovered a vulnerability via JTAG; however, a boundary scan is obviously not needed to use the exploit remotely. A boundary scan is w…

  • by russotto ( 537200 ) on Thursday April 05, 2007 @04:28PM (#18627359) Journal
    If the attack involves popping open the router and attaching wires to the JTAG port, I'm not going to worry about it.
    • by jhfry ( 829244 ) on Thursday April 05, 2007 @04:35PM (#18627431)
      I think what it's actually saying is that, by using JTAG to better understand the configuration of the machine, new exploits can be found.

      So it's not exactly an exploit, but a way to discover exploits by targeting issues with the embedded processors, as discovered via JTAG access to a similar unit.
      • TFA is very short on details though, so I guess we'll have to wait until the conference to find out for certain.
      • Re: (Score:3, Informative)

        by mr_mischief ( 456295 )
        My original understanding from the quotes was that the guy actually found a possible exploit vector by using JTAG. (I tend to read just the quotes first in articles which are interviews on technical topics -- it's often easier to get a sense of what the subject of the interview is talking about without misinterpretations by reporters.)

        TFA talks about using JTAG itself to run exploits, which I don't care about since physical security is the first layer of any security plan. If someone has better physical acc…
        • by HomelessInLaJolla ( 1026842 ) * <sab93badger@yahoo.com> on Thursday April 05, 2007 @04:54PM (#18627703) Homepage Journal
          Jack used JTAG to discover exploits in the hardware. The exploit can, most probably, be taken advantage of from the WAN side using malformed packets and raw payloads.

          The properly trained eye looking at the circuit schematics would have been able to identify the same things -- and probably has. The engineers who see the exploits usually take them home and play Core Wars with their friends. It's the same concept as reverse engineering closed-source drivers. The original engineers wrote the closed-source implementation, and now Jack (at Juniper) is reverse engineering it and finding some interesting twists along the way.

          What do you call a zero-day exploit before it's released to the general public and called a zero-day exploit? Whatever it's called, it has existed since before common home routers were available at major consumer outlets. It's hard to believe that nobody ever took advantage of it until now.
          • Dude, schematics are like TOTALLY so 80's. Most chips are designed using Verilog or VHDL (or SystemC, Superlog, etc.), all text-based languages.

            He said he found some architectural weirdnesses using JTAG to debug stuff. No biggie. The thing is, between some external packets coming in and an exploitable architectural misfeature there is a bunch of OS software, so it seems like there should be plenty of opportunities to squash whatever bugs he's going to come up with.
            • The point being that whatever bugs he's going to come up with have already been known to a privileged set of people. Squashing those bugs now won't change the fact that exploits have probably been available for years.
        • by jhfry ( 829244 ) on Thursday April 05, 2007 @04:55PM (#18627709)
          The article clearly says that he discovered the exploits while tinkering with JTAG.

          He said he came up with the technique after spending several months cracking open and soldering test equipment onto a range of embedded devices. By taking advantage of ... JTAG (Joint Test Action Group) Jack was able to sneak a peek at the systems' processors and get a close-up look at how they worked. "With every hardware device, there has to be a way for developers to debug the code and all I did was take advantage of that," he said. "As I was digging deeper into the architecture, I saw a couple of subtleties which could allow for some interesting things."
          So while using the JTAG to debug the processor he noticed a couple of potential exploits.

          The rest of the article goes on to discuss the security implications of leaving the JTAG enabled:

          Though some companies are able to cut off the JTAG interface on their products, Jack said it was enabled in 90 percent of the devices he examined.
          I am certain that this article isn't trying to suggest that hackers break into networks using JTAG... that's just plain dumb. What he is saying is that because most devices leave their JTAG intact, hackers can debug the code on their processors and find flaws, essentially reverse engineering the underlying architecture and using that knowledge to exploit it.

          I imagine that Juniper produces some of the 10% of those devices that disable the JTAG on their equipment; that is why they are promoting this in hacker circles.
          • Good threat models assume that the attacker already has the code. Cryptosystems, for example, are designed to assume that the bad guy knows *exactly* how your algorithm works. Other security mechanisms should, and do, use the same threat model.

            If the article is about "I used JTAG to dump the code from the CPU, which allowed me to find exploitable flaws", it's rather boring.

            If the article is about "I used JTAG to cause the CPU to do something other than the origin…"

            • I think exploitable flaws at the CPU level are still pretty interesting. Being able to trigger those with traffic is much more interesting, though.
          • by pchan- ( 118053 )
            I imagine that Juniper produces some of the 10% of those devices that disable the JTAG on their equipment; that is why they are promoting this in hacker circles.

            I cracked open (physically) a Juniper packet filter the other day (a 1U box with only an ethernet in, ethernet out, and power). Inside was an Intel-made x86 CPU (I forget which one, but a fairly old one) on a minimal motherboard and a 3.5-inch HD set with the 2GB clip enabled (the drive was 10GB). The filesystem was FAT16 with no long file names.
  • by oman_ ( 147713 ) on Thursday April 05, 2007 @04:40PM (#18627519) Homepage

    The article doesn't claim that the attack uses the JTAG port. It claims that he used the JTAG port to find some sort of vulnerability. People do this ALL THE TIME.... I do it at work to reverse engineer automotive computers.

    Now it does say that there is some peculiarity of these specific CPUs that makes them vulnerable to an attack of some sort. I hope the peculiarity isn't the presence of the JTAG port. If you assume people won't get your binary code off of a chip because it doesn't have a debug port, then you're a fool.
  • About the only part of the software industry that doesn't assume that you've already won if you've got physical access to the box (and getting into a JTAG port kind of implies that) is the folks who still have a dog in the DRM fight... and there are fewer of them every year.
    • by fatphil ( 181876 )
      Nonsense. For example, Nokia lets you hold Nokia hardware, with TI chipsets, in your hands. They do not want you breaking out of the sandbox that they've set up. If that's not physical access to the device I don't know what is, and there are a billion instances of that in the world.
      • Exactly. Physical access to a cellphone will do you absolutely no good when it comes to hacking the device. Even if you took the flash chips off the board, changed a few bytes in a programmer and put them back, the phone will detect it and refuse to boot.
        • by fatphil ( 181876 )
          In theory. Often, as this story seems to support, not all possible security is turned on.

          I have just quit a gig at Freescale Semiconductor, and I can assure you that the security capabilities of their mobile platforms are absolutely spot on - you can nail absolutely *everything* down. However, if you want to debug models that have failed in the field, then you need to ship them with the secure JTAG cranked down to a not-totally-disabled (by e-fuses, no way back) setting. For work at service centres, rather…
          • However, if you want to debug models that have failed in the field, then you need to ship them with the secure JTAG cranked down to a not-totally-disabled (by e-fuses, no way back) setting

            Can't you just require signing something with a very secret key - one that even the original developers don't know - to re-enable JTAG? I don't know all the details of the solution, and I wouldn't want to post them if I did, but I know at least some embedded systems ship in a state where both the factory/service centre…
            • by fatphil ( 181876 )
              That's one of the security levels, yes. 3rd level out of the 4, IIRC. Uses a challenge-response to verify the emulator connected to the JTAG is a valid one, so one can't even use a replay attack. It's probably the most sensible level to release hardware at.
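              A minimal sketch of such a challenge-response unlock, assuming an HMAC-style construction; the actual secure-JTAG protocol isn't described here, so the function names and sizes are illustrative only:

                /* Device-side sketch of a challenge-response JTAG unlock.
                 * get_random() and hmac_device_key() are assumed primitives
                 * (hardware RNG, MAC keyed with a device-held secret). */
                #include <stdbool.h>
                #include <stdint.h>
                #include <string.h>

                #define NONCE_LEN 16
                #define MAC_LEN   32

                extern void get_random(uint8_t *buf, size_t len);
                extern void hmac_device_key(const uint8_t *msg, size_t len, uint8_t mac[MAC_LEN]);

                static uint8_t current_nonce[NONCE_LEN];

                /* Hand a fresh nonce to the attached emulator. */
                void jtag_issue_challenge(uint8_t out[NONCE_LEN])
                {
                    get_random(current_nonce, NONCE_LEN);   /* fresh per attempt: defeats replay */
                    memcpy(out, current_nonce, NONCE_LEN);
                }

                /* Unlock only if the emulator could MAC the nonce, i.e. it holds
                 * (or can reach a signing service that holds) the secret key. */
                bool jtag_check_response(const uint8_t response[MAC_LEN])
                {
                    uint8_t expected[MAC_LEN];
                    hmac_device_key(current_nonce, NONCE_LEN, expected);
                    return memcmp(expected, response, MAC_LEN) == 0;   /* constant-time compare in real code */
                }
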
      • by argent ( 18001 )
        For example, Nokia lets you hold Nokia hardware, with TI chipsets, in your hands. They do not want you breaking out of the sandbox that they've set up.

        Right, like I said, only people who believe they can keep someone from breaking into their own computer think that you can win even when the other guy has physical access. Cellphone manufacturers are a perfect example.

        What keeps people from chipping their own cellphones isn't the technical difficulty of breaking in and unlocking it; it's that the risk of losing…
  • Ehhhh, you can't fool us; it's the easiest thing in the world for a man to look as if he's got a great secret in him.

    Just tell us, no free publicity.
  • Maybe this "atack" is not useful to remotely hack into the box. But there are other reasons to hack a device. It could help with reverse engineering for example.
  • by anss123 ( 985305 )
    Reminds me of a security presentation about how Nintendo had secured the Wii over the GameCube. Apparently they had changed the physical interface to a JTAG-like port and changed the password to all capitals. Heh.
  • Does this mean better iPod hacks are coming? This is mostly over my head, so I don't know if it's even relevant to iPods or similar devices...
  • The summary is rather misleading.

    He probably used the JTAG port to take a look at and play with the ARM/XScale processors, but not the boundary-scan part of the port's capabilities. Even the article doesn't mention boundary scan, which is normally used only for testing whether the processor is well and alive.
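    "Peeking at a processor over JTAG" ultimately comes down to clocking the TAP state machine. Here is a rough C sketch of reading the 32-bit IDCODE by bit-banging the pins; the pin-access functions are hypothetical stand-ins for whatever drives the adapter:

      /* Rough sketch of reading IDCODE over JTAG. set_tms/set_tdi/get_tdo/pulse_tck
       * are hypothetical stand-ins for the adapter driver. */
      #include <stdint.h>

      extern void set_tms(int v);
      extern void set_tdi(int v);
      extern int  get_tdo(void);
      extern void pulse_tck(void);     /* one rising+falling edge on TCK */

      static void tap_step(int tms)    /* advance the TAP state machine one clock */
      {
          set_tms(tms);
          pulse_tck();
      }

      uint32_t jtag_read_idcode(void)
      {
          uint32_t id = 0;
          int i;

          /* Test-Logic-Reset (5x TMS=1); on most parts this selects IDCODE
           * as the active instruction. */
          for (i = 0; i < 5; i++) tap_step(1);

          /* Run-Test/Idle -> Select-DR -> Capture-DR -> Shift-DR */
          tap_step(0);
          tap_step(1);
          tap_step(0);
          tap_step(0);

          /* Shift 32 bits out of TDO, LSB first; a real driver would exit
           * via Exit1-DR/Update-DR afterwards. */
          for (i = 0; i < 32; i++) {
              set_tdi(0);
              id |= (uint32_t)get_tdo() << i;
              pulse_tck();
          }
          return id;
      }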

  • When you JTAG into a device you OWN the device. This is no breakthrough. It's what JTAG was designed to allow you to do. Jeez..

  • It's really important to make a distinction between ARM Ltd - who make IP cores implementing the ARM architecture (now at version 7) - and XScale, which is an Intel implementation of the ARM v4/v5 architecture. Intel has an architecture license to produce products compatible with ARM-derived cores. Any kind of micro-architectural vulnerability is very unlikely to be shared across ARM Ltd and Intel implementations because they share no heritage. So making sweeping statements of vulnerabilities across all ARM-compatible…
  • by Anonymous Coward
    Barnaby used the JTAG to determine vulnerabilities in embedded hardware and the RTOS running on it. The vulnerability is not that he used a JTAG, or even that companies leave JTAG ports enabled on hardware (as I've seen clever hardware hackers pin out the chips themselves to re-enable a removed JTAG port). The point of this article, and much of the work Barnaby has been doing for the past couple of years (http://research.eeye.com/html/advisories/published/AD20060714.html , also previous presentations at CanSecWest…
