Security IT

How Dangerous Could a Hacked Robot Possibly Be?

alphadogg writes "Researchers at the University of Washington think it's finally time to start paying some serious attention to the question of robot security. Not because they think robots are about to go all Terminator on us, but because the robots can already be used to spy on us and vandalize our homes. In a paper published Thursday the researchers took a close look at three test robots: the Erector Spykee, and WowWee's RoboSapien and Rovio. They found that security is pretty much an afterthought in the current crop of robotic devices. 'We were shocked at how easy it was to actually compromise some of these robots,' said Tadayoshi Kohno, a University of Washington assistant professor, who co-authored the paper."
This discussion has been archived. No new comments can be posted.


  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Thursday October 08, 2009 @09:39AM (#29680313)
    Comment removed based on user account deletion
  • by MBGMorden ( 803437 ) on Thursday October 08, 2009 @09:42AM (#29680349)

    They speak of "compromising" these robots as if user programmable devices are inherently bad. I don't want to see devices locked down into black box "no touch" state because of some fear mongering.

    That said, it has always been the case with computers (and robots are just computers with moving appendages) that if a hacker has physical access to the device, you're basically screwed anyway.

  • hmm (Score:5, Insightful)

    by Dyinobal ( 1427207 ) on Thursday October 08, 2009 @09:43AM (#29680359)
    The hacked robot is as dangerous as the person who hacked it.
  • by fuzzyfuzzyfungus ( 1223518 ) on Thursday October 08, 2009 @09:46AM (#29680415) Journal
    Hardly irrelevant.

    "Someone" will always find a way; but there is a big difference between "someone" being "any script kiddie who can torrent a copy of bot-h5x-b0t" and being "The Feds; but they'll say 'Fuck it.' and just send a couple of guys with guns and those little curly ear things instead."
  • by jimicus ( 737525 ) on Thursday October 08, 2009 @09:52AM (#29680475)

    See Isaac Asimov for the exact quote, but it basically says robots may not harm humans. Because the law is encoded *in the hardware* there's no way that it can be altered.

    Very noble, very pure, very useless when your robot doesn't have any intelligence and just executes commands blindly.

  • by HangingChad ( 677530 ) on Thursday October 08, 2009 @09:56AM (#29680515) Homepage

    I'm more concerned about someone hacking a Predator or Reaper.

  • by OzPeter ( 195038 ) on Thursday October 08, 2009 @09:56AM (#29680517)

    See Isaac Asimov for the exact quote, but it basically says robots may not harm humans. Because the law is encoded *in the hardware* there's no way that it can be altered.

    Except that pretty much all of Asimov's stories were about how the Three Laws could be subverted by finding complex interactions that were not, and could not be, covered by the application of those simplistic laws.

  • by retech ( 1228598 ) on Thursday October 08, 2009 @09:57AM (#29680529)
    ...for when the metal ones come for you, and they will.
  • by Kell Bengal ( 711123 ) on Thursday October 08, 2009 @09:57AM (#29680531)
    Ugh. I feel the need to clarify, before the shouts from the peanut gallery. Yes, some robots have computer vision and are not 'blind', yes some robots can be well programmed and very smart, but that's still not the same thing as a true reasoning intelligence. Robots are only as good as their software and, if their programming has been corrupted, there is nothing you can do to get around that.
  • by Crash Culligan ( 227354 ) on Thursday October 08, 2009 @09:58AM (#29680547) Journal

    MBGMorden: They speak of "compromising" these robots as if user programmable devices are inherently bad. I don't want to see devices locked down into black box "no touch" state because of some fear mongering.

    I half agree with you; user-programmable devices are very useful, and easily tailored to efficiently perform specific tasks.

    The crux of the argument, though, is "which user is giving the instructions?" Long ago on /. I made a comment differentiating security vs. transparency in government. This is much the same thing.

    On the one hand, you (and a lot of people) want the device to be as programmable, flexible, and useful as possible. That means it must be able to do a lot of things. On the other hand, people might want to use such devices for nefarious, invasive purposes like spying, theft, vandalism, etc.

    The two are not mutually exclusive, but remember:

    1. You cannot have 100% security in anything, meaning someone might sooner or later break into your progbot and do horrible things, and
    2. Until you have an establishment of security, any flexibility and programmability your progbot may have is a double-edged sword and may be used against you. Consequently...
    3. The ultimate risk which a progbot poses to its owner is a factor of both its utility and the ease of intrusion. Given that security isn't guaranteed, utility has to be given some limits or other protections must be maintained (backups, lockdowns, etc.). Adjust your cost-benefit analyses accordingly.
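    The third point above can be made concrete with a toy model. This is purely illustrative (the multiplicative form, the 0-to-1 scales, and the `owner_risk` name are my assumptions; the comment only says risk is a factor of both utility and ease of intrusion):

    ```python
    def owner_risk(utility: float, ease_of_intrusion: float) -> float:
        """Toy risk model: exposure grows with both what the robot can do
        (utility) and how easily an outsider can take it over.
        Both inputs are on a 0..1 scale; the product is the exposure."""
        if not (0.0 <= utility <= 1.0 and 0.0 <= ease_of_intrusion <= 1.0):
            raise ValueError("inputs must be in [0, 1]")
        return utility * ease_of_intrusion

    # A capable but hardened robot can expose its owner to less risk
    # than a modest toy that is wide open to attackers.
    capable_hardened = owner_risk(0.9, 0.1)
    simple_open = owner_risk(0.3, 0.9)
    ```

    Under this model, cutting the ease of intrusion (lockdowns, backups, authentication) buys down risk just as effectively as limiting what the progbot is allowed to do.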
  • Re:hacking (Score:5, Insightful)

    by Hizonner ( 38491 ) on Thursday October 08, 2009 @10:00AM (#29680561)
    They want you to play with them and make them do cool things. They don't necessarily want other people to drive up outside your house and use the robots' cameras and microphones to spy on you over WiFi. The problem is that the features that enable the first aren't secured, and therefore they can also be used to do the second.
  • by Kell Bengal ( 711123 ) on Thursday October 08, 2009 @10:03AM (#29680597)
    It always amuses me when people worry about robots going wrong or turning on us, or being used by The Bad Guys of the Week to do us harm. I know a lot of very smart people who are involved in robotics research, and they will tell you that making robots do anything is hard. Making robots do something with surreptitiously poisoned programming would be even harder. Seriously,

    if you're smart enough to remotely modify a robot's code to do something usefully nefarious, you're smart enough to sell that capability to the government for megadollars.

    There's a lot more money to be made with legitimate killbots. It might be nice to protect robots from script kiddies who just want to throw a spanner in the works, but until robots are ubiquitous enough that domestic cybernetic terrorism becomes attractive (i.e., doing it for the lulz), I don't think we need to be overly worried now.

    That said, now -is- the time to be thinking about these things so that we're ready before we get to that point. Thinking, but not worried.
  • by s31523 ( 926314 ) on Thursday October 08, 2009 @10:06AM (#29680623)
    Did we really need to research this? We know the answer... VERY! Of course, it depends on the robot.

    Robot A is tasked with going into a nuclear reactor and removing spent fuel rods. If Robot A is hacked and re-programmed to smash the shit out of the reactor, this might be dangerous.

    Robot B is tasked with preventing people from entering an access point in a secure building by 'restraining' them. If Robot B is hacked and re-programmed to 'hack' the people at random, then this might be dangerous.

    Hacking a Roomba to spell your name in the carpet is not dangerous... It all comes down to the robot's level of responsibility. It is funny that we needed research on this.
  • by Anonymous Coward on Thursday October 08, 2009 @10:08AM (#29680653)

    Parent is offtopic. Military-grade UAVs aren't even in the same ballpark as RoboSapien, et al. First of all, they're not fully autonomous, and second, security is NOT an afterthought in a UAV.

  • by Bakkster ( 1529253 ) <Bakkster@man.gmail@com> on Thursday October 08, 2009 @10:36AM (#29680997)

    For example, the story about robots who prevented humans from coming to harm through inefficient human governance. Since they could not, through inaction, allow humans to harm themselves, they replaced the human government with robot governors.

    They, for the record, did not welcome their new robot overlords.

  • by noundi ( 1044080 ) on Thursday October 08, 2009 @10:41AM (#29681071)

    Right, because no one would ever do something purely for the challenge and then release their work.

    If it takes longer to crack something than the product of cracking it is worth, you'd have no reason to even begin.

    Hint: "challenge" is the key word.

    Answer: You assume that by worth I mean monetary gains. The satisfaction of completing the challenge is also a product of cracking it, which has its own value. You see, clicking a button that starts bruteforcing something which would take 50-60 years isn't a challenge worth the product.

  • Re:hmm (Score:3, Insightful)

    by mcgrew ( 92797 ) * on Thursday October 08, 2009 @10:45AM (#29681111) Homepage Journal

    A hacked tool is as dangerous as the tool itself. I wouldn't worry about a fuzzy robot puppy very much, but a robot lawn mower might be dangerous in the wrong hands.

  • by Anonymous Coward on Thursday October 08, 2009 @11:03AM (#29681341)

    GP isn't actually offtopic. This article is directly or indirectly about fear mongering. Pointing out that there are carnivorous child-eating lizards, but that they live on the other side of the planet, is ontopic for "Under the Bed Monster fears" because it's reality, and the more of it you connect to the less subject you will be to irrational fears.

    Your post is similarly on topic, since the robots that we should seriously worry about are indeed well secured against hackers.

    Spykee is too loud to "sneak" up on anyone, but despite this and the "hype phrasing" of the articles, we finally have robots that are capable of external abuse. Spykee could instruct a trusting child as to how to unlatch a gate and fall down the stairs. Rovio could wait patiently by the stairs and slide exactly under a falling foot at a critical moment. These things can be done today, over the Internet, from the other side of the world. While Usenet is still in operation, it's pretty clear that the police are not well equipped for catching telemurderers.

    Now would seem a good time to consider the issue. If we're posting on /. we can probably set our WPA-Enterprise security and require ssh tunnelling to access our home networks. Less than 10% of the people buying these robots can say the same. The infantile geek attitude of "serves 'em right for not securing it" needs to be discarded.

    We geeks of the world are a significant force in the robot-buying market. Without exception all my friends would ask me first if they were going to buy a robot. We should let the manufacturers know that we won't buy or recommend "hard to secure" bots for our homes. Robot makers are one group that would actually listen to us. And since the tools for doing it right are freely available (though cost money to integrate), it's not an unreasonable stretch for the manufacturers.

    While it's obvious how computer-people make the world an incrementally better place, this is one place where taking on some principles could save real, living, breathing humans. Seems worthy of some effort.

  • by blhack ( 921171 ) on Thursday October 08, 2009 @12:12PM (#29682243)

    Can we stop with this completely illogical fear-mongering? Hacked robots? Are you people insane?

    When you say "robot", people think of the sort of mindless, strangely powerful, totally mystical automatons found in sci-fi movies and television shows. People think Cylons and Centurions, not a couple of servos and some sensors.

    Are hacked robots dangerous? No. Or at least they are no more dangerous in the "hacked" form than their unhacked form. My advice is to not build robots with energy-weapons for arms.

    If the "robot" that builds your car gets "hacked" (and by this I mean the PC that has some hydraulics connected to it gets somehow "hacked"), unplug it.

    Done.

  • by TheCarp ( 96830 ) * <sjc.carpanet@net> on Thursday October 08, 2009 @12:24PM (#29682409) Homepage

    Up to a point, yes. Look at something like public key cryptography. I PGP-encrypt a message and send it. Sure, you can dedicate cycles to breaking the session key, but it gets you ONE message. To get another message, you have to attack the next key. You might get my private key if you attack that; that gets you any messages sent to me. Still, you are only getting my messages, until I change the key.

    Longer keys and good passwords (depending on how the attack is being done), increase the time, AND decrease the usefulness. Sure, you can dedicate resources to attacking my messages, but... the damage is limited to that, until you do it all over again for the next key.
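    The per-message session key idea can be sketched in a few lines of Python. This is a toy illustration only: the XOR "cipher" is a stand-in for the real symmetric cipher PGP would use, and `toy_encrypt`/`send_all` are hypothetical names, not PGP's actual API. The point it shows is that breaking one session key recovers exactly one message:

    ```python
    import secrets

    def toy_encrypt(message: bytes, key: bytes) -> bytes:
        # Toy XOR "cipher" -- a stand-in for the real symmetric cipher
        # (e.g. AES) that hybrid encryption uses under each session key.
        return bytes(m ^ k for m, k in zip(message, key))

    toy_decrypt = toy_encrypt  # XOR is its own inverse

    def send_all(messages):
        """Encrypt each message under its own freshly generated session key.

        Breaking one session key therefore recovers exactly one message;
        every other message stays behind an independent random key.
        """
        transcript = []
        for msg in messages:
            session_key = secrets.token_bytes(len(msg))  # fresh key per message
            transcript.append((toy_encrypt(msg, session_key), session_key))
        return transcript
    ```

    Recovering the key for one ciphertext in the transcript decrypts that message and nothing else; the attacker starts from scratch on the next one.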

    So now, which key to attack becomes a very practical consideration. Which message do you attack? Your resources are limited. What if it takes you a full day to hack one robot? OK, so you sat in front of my house, on WiFi, all day... now you can... drive my Roomba around. You get to sit there another whole day to get my Scooba... and that doesn't help you get my neighbor's Roomba.

    Guess what? You MIGHT see the occasional hack of the occasional robot, but you are not going to see massive robotnets. Those would require that taking one robot lets you trivially take the next in seconds... and, well, that isn't the case when each attack costs you a day.

    -Steve

  • by TheRaven64 ( 641858 ) on Thursday October 08, 2009 @01:59PM (#29683585) Journal
    And, out of interest, the chips used to implement those features were made in which Chinese factory?
