Security

The New USB Rubber Ducky Is More Dangerous Than Ever (theverge.com) 47

The USB Rubber Ducky "has a new incarnation, released to coincide with the Def Con hacking conference this year," reports The Verge. From the report: To the human eye, the USB Rubber Ducky looks like an unremarkable USB flash drive. Plug it into a computer, though, and the machine sees it as a USB keyboard -- which means it accepts keystroke commands from the device just as if a person was typing them in. The original Rubber Ducky was released over 10 years ago and became a fan favorite among hackers (it was even featured in a Mr. Robot scene). There have been a number of incremental updates since then, but the newest Rubber Ducky makes a leap forward with a set of new features that make it far more flexible and powerful than before.

With the right approach, the possibilities are almost endless. Already, previous versions of the Rubber Ducky could carry out attacks like creating a fake Windows pop-up box to harvest a user's login credentials or causing Chrome to send all saved passwords to an attacker's webserver. But these attacks had to be carefully crafted for specific operating systems and software versions and lacked the flexibility to work across platforms. The newest Rubber Ducky aims to overcome these limitations.

It ships with a major upgrade to the DuckyScript programming language, which is used to create the commands that the Rubber Ducky will enter into a target machine. While previous versions were mostly limited to writing keystroke sequences, DuckyScript 3.0 is a feature-rich language, letting users write functions, store variables, and use logic flow controls (i.e., if this... then that). That means, for example, the new Ducky can run a test to see if it's plugged into a Windows or Mac machine and conditionally execute code appropriate to each one or disable itself if it has been connected to the wrong target. It also can generate pseudorandom numbers and use them to add variable delay between keystrokes for a more human effect. Perhaps most impressively, it can steal data from a target machine by encoding it in binary format and transmitting it through the signals meant to tell a keyboard when the CapsLock or NumLock LEDs should light up. With this method, an attacker could plug it in for a few seconds, tell someone, "Sorry, I guess that USB drive is broken," and take it back with all their passwords saved.
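The keyboard-LED exfiltration trick is easier to picture with a small sketch. The Python below is purely illustrative: it assumes a made-up scheme (NumLock toggles as a clock, CapsLock as the data bit) rather than Hak5's actual protocol, and it only models the encoding and decoding, with no USB or OS interaction at all.

```python
# Conceptual model of the lock-LED covert channel: the host-side payload toggles
# NumLock/CapsLock, and the fake keyboard reads the LED states back as bits.
# The framing (NumLock = clock, CapsLock = data, MSB first) is an assumption
# made up for this sketch.

def encode(data: bytes):
    """Yield (numlock, capslock) states that clock out `data` one bit at a time."""
    clock = False
    for byte in data:
        for bit in range(7, -1, -1):       # most significant bit first
            clock = not clock              # each clock edge carries one data bit
            yield clock, bool((byte >> bit) & 1)

def decode(states):
    """Rebuild the bytes from the observed (numlock, capslock) LED states."""
    bits = [caps for _clock, caps in states]   # one data bit per clock edge
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | int(b)
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    secret = b"hunter2"
    assert decode(list(encode(secret))) == secret
    print("round-tripped", secret, "over the simulated LED channel")
```

The point of the sketch is just that the lock-key LEDs form a perfectly ordinary one-bit-wide output channel from the host to anything that enumerates as a keyboard, which is why unplugging the "broken" drive after a few seconds is enough.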

This discussion has been archived. No new comments can be posted.

  • by lexios ( 1684614 ) on Saturday August 20, 2022 @05:32AM (#62805869)

    For Linux systems, USBGuard can probably be used to block devices like this. Is anybody aware of similar functionality for other OSes?

    How to use Linux's built-in USB attack protection
    https://www.zdnet.com/article/how-to-use-linuxs-built-in-usb-attack-protection/ [zdnet.com]
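USBGuard itself is configured through its own rule language and CLI rather than Python, but as a rough sketch of the kind of event such a tool reacts to, here is a small pyudev-based monitor (my own illustration, not part of USBGuard) that simply reports every newly attached keyboard-class device; a real policy daemon would block it until explicitly allowed.

```python
# Minimal udev monitor sketch (Linux + pyudev assumed): print a warning whenever
# a device that claims keyboard capability is hot-plugged.
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="input")            # keyboards appear as input devices
monitor.start()

for device in iter(monitor.poll, None):         # blocks, handling events forever
    if device.action != "add":
        continue
    if device.get("ID_INPUT_KEYBOARD") == "1":  # udev marks keyboard-capable devices
        print("new keyboard-class device:",
              device.get("ID_VENDOR_ID"), device.get("ID_MODEL_ID"),
              device.get("NAME") or device.get("ID_MODEL"))
        # A real tool like USBGuard would hold the device in a blocked state here
        # until an administrator explicitly authorizes it.
```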

    • usbguard is fine for experts, but they used to have management GUIs and then abandoned them, so now you have to do all the management manually. Even here on slashdot there are users who would struggle with that. I for one just don't want to be bothered. If I were to plug in a mysterious USB device (unlikely), I would hook it up to a pogoplug, since they are cheap. I would say raspi, but you can't even get those... but you can get a used pogo

      • After hooking it up to a Pogoplug, what would you do to establish trust? Would you just use it to transfer a file that is supposed to be on it or go onto it? I can see that being a challenge for anything beyond file transfer.

        • I'd probably benchmark it. If the results are slow then at best I don't want it, and possibly it's just pretending to be a proper mass storage device.

    • Whitelisting every USB device is about as practical as whitelisting email addresses. It absolutely will work for secure environments, but not for your everyday Joe.

      • Not even that. I unplug your keyboard, plug it into my tool, read out the PID and VID, clone them to my device, plug it in and your system thinks my keyboard spoofer is the keyboard.

        There is no foolproof way to do it. People need to effin' keep an eye on their machines when someone else is around.

    • Will that actually protect many users from a simulated keyboard? The Rubber Ducky can change its identification strings, so it could, say, use a Dell ID or even rotate through various manufacturer ID strings. It is hard to whitelist just a single device ID for a keyboard, because that keyboard could break, and restricting replacements to a small handful of "recovery" devices is only workable for true SCIF systems.
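To see why a bare ID allowlist falls short, here is a minimal pyusb sketch of such a check (the allowlisted ID is a hypothetical "approved Dell keyboard" entry made up for the example); as the comments above point out, a device that clones an allowed VID:PID sails straight through it.

```python
# Naive VID:PID allowlist check with pyusb (requires a libusb backend).
# Purely illustrative -- the allowlist entry below is hypothetical.
import usb.core

ALLOWED = {
    (0x413c, 0x2113),   # hypothetical "approved Dell keyboard" entry
}

for dev in usb.core.find(find_all=True):
    ident = (dev.idVendor, dev.idProduct)
    status = "allowed" if ident in ALLOWED else "NOT on allowlist"
    print(f"{ident[0]:04x}:{ident[1]:04x} -> {status}")

# Caveat from the thread: the check only sees what the device reports about
# itself, so a spoofer that clones an allowed keyboard's IDs passes untouched.
```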

  • by Anonymous Coward

    Yes, yes, very nice proof of concept. But the execution and presentation remain immature sensationalism through and through. As if designed to be as unhelpful as possible for maximum attention, so as to drum up "cyber security" consulting opportunities for maximum "consultant" gain.

    There really is no need for that sort of job-security (== "rent") seeking; there's plenty of work for plenty of capable people.

    • There really is no reason to try for job security in security. If you're not a complete idiot who is just faking it (and yes, these people unfortunately exist, mostly because managers have zero clue and are completely buzzword-compliant), headhunters are kicking down your door anyway.

      • by gweihir ( 88907 )

        Indeed. The job market for anybody with some actual skills in IT security is great, and I guess people who fake it still have a pretty good chance. The last few headhunters, I just told them my daily rate if they wanted me to interview. Things have gotten quieter now.

  • Pros aren't going to fall for it, and trawling the masses won't get them anywhere but locked up next to common carders.
    • Because it's a neat gimmick for shock-and-awe presentation to C-Levels.

      I wish I was kidding.

      • Because it's a neat gimmick for shock-and-awe presentation to C-Levels.

        I wish I was kidding.

        Don't wish you were kidding. It's great to have something on hand that can get people to actually take security in general seriously. When you're talking to C-levels, you need shock and awe to get them to notice an issue they kept ignoring.

        • I'd prefer them to understand the problem. But I guess with C-Levels and other children, you better put your money on spectacular magic tricks.

          • The higher you are the more abstract your understanding needs to be. The only thing they need to understand is that underinvesting in IT security can have catastrophic consequences.

            That's it. The very last thing you want is a C-level with a deep level of understanding actually helping come up with a solution. That's not their job.

      • by gweihir ( 88907 )

        Because it's a neat gimmick for shock-and-awe presentation to C-Levels.

        I wish I was kidding.

        I know that you are definitely not kidding. The problem is that product vendors often lie by misdirection to C-levels, and the C-levels typically do not notice because many think they are a lot smarter than they actually are. So the other side lies by misdirection as well. This device is one of the tools used.

    • Pros aren't going to fall for it

      I think you are wrong about that.

      People who are given high level credentials aren't necessarily more security aware than the average person. I've known loads of people who had access to useful data who didn't know much at all about how domain security works, or even user level security. Same for people who have access to financial information or even the ability to move money around.

      Getting value from a device like this isn't limited to cracking a security professional's machine, or even an IT professional's.

  • Bonus ducks [youtube.com]!

  • This brings back memories of per-device DIP switch settings inside the PC chassis.

    You can do this in software, of course. Most hardware has serial numbers that are more or less unique to each device, so you can lock out keyboards, mice, etc. that aren't on your whitelist (maintained as a sticky note the admin keeps under his keyboard).

    I've done stuff like that to disambiguate usb to serial dongles that talked to different pieces of hardware and would make symlinks like /dev/gizmo1 and /dev/gizmo2 without relying on things to show in the co
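A minimal sketch of that serial-number disambiguation idea, assuming pyudev on Linux and made-up serial numbers; a production setup would normally express the same matching as udev rules that create the /dev/gizmo1-style symlinks directly.

```python
# List USB serial adapters and map known serial numbers to friendly roles.
# The serial numbers and role names below are hypothetical placeholders.
import pyudev

KNOWN = {
    "A6008isP": "gizmo1",
    "A7023k9Q": "gizmo2",
}

context = pyudev.Context()
for device in context.list_devices(subsystem="tty"):
    serial = device.get("ID_SERIAL_SHORT")
    if serial is None:
        continue                     # not a USB serial adapter, or no serial exposed
    role = KNOWN.get(serial, "unknown device")
    print(device.device_node, serial, "->", role)
```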

  • I mean, you do _not_ let people plug stuff you do not know into a USB port on your computer.

  • by Viol8 ( 599362 ) on Saturday August 20, 2022 @10:17AM (#62806233) Homepage

    Try typing stuff on a normal keyboard on some random computer and see what happens. 99% of the time nothing useful, because either the focus is on a window that doesn't accept keyboard input or there's no focus at all. Or it's at a login/lock screen/prompt.

    The only place where this might be of use is surreptitiously in a server room where the servers are in a known input state. But if someone can break into a server room to do this, why would they need this device in the first place? Just plug in a keyboard and get on with the hack.

  • USB Driver needed (Score:4, Interesting)

    by PPH ( 736903 ) on Saturday August 20, 2022 @11:04AM (#62806365)

    "You have just plugged a new keyboard into your system. Please enter the following CAPTCHA into this device to authorize its use."

  • I'm sure its use and possession are already against a number of laws. And I'm sure that if this becomes a problem, the designers and manufacturers will have to answer some tough legal questions.

  • If you work at a company (or government organization, or military organization) that has highly-sensitive data on your systems, then they're going to have a policy of no USB storage devices being brought into the facility, let alone inserted into the USB port of any company computer. So like with most things there'd have to be some 'social engineering' involved to enable such an attack to happen. Unless you're going to pose a Mission: Impossible scenario to me, that is.
    Of course one thing is always true: if you can get physical access to a system, you can do almost anything to it. Nothing new there.
    • If you work in a sensitive enough area of the government, any machine that should never have USB devices connected will have them physically blocked and disabled (with wire cutters, not some software; epoxy works well). Machines that expect USB disks to come and go have allowlisted USB identifiers.

      This isn't a problem for the security folk but I can definitely see it being a problem out in the SMB + larger non-international corporate environments...during pentest season of course! There are easier ways in f
  • All kidding aside, can a company like Apple or Samsung get the EU to approve USB ports that offer limited access unless proprietary USB devices are inserted?
