Security IT

Duqu Attackers Managed to Wipe C&C Servers 227

Trailrunner7 writes with an update in the saga of Duqu and Stuxnet. From the article: "Shortly after the first public reports about Duqu emerged in early autumn, the crew behind Duqu wiped out all of the command-and-control servers that had been in use up to that point, including some that had been used since 2009. An in-depth analysis of the known C&C servers used in the Duqu attacks has found that some of the servers were compromised as far back as 2009, and that the attackers clearly targeted Linux machines. All of the known Duqu C&C servers discovered up to this point have been running CentOS ... There also is some evidence that the attackers may have used a zero-day in OpenSSH 4.3 to compromise the C&C servers initially."
This discussion has been archived. No new comments can be posted.

  • NO! (Score:5, Funny)

    by masternerdguy ( 2468142 ) on Wednesday November 30, 2011 @12:48PM (#38215554)
    Damn, not the command and conquer servers. My weekend is fried.
  • by Evro ( 18923 ) * <evandhoffman.gmail@com> on Wednesday November 30, 2011 @12:50PM (#38215594) Homepage Journal

    Editors, your job is not simply to click "post." Read the submission and see if it makes sense. I have no idea what Duqu is or what this is about. I had to dig down 2 links deep to see that this was related to an attack in India. Context: provide it.

    • by martas ( 1439879 ) on Wednesday November 30, 2011 @12:57PM (#38215710)
      Didn't the very first link in the summary do that?
    • by ackthpt ( 218170 )

      Editors, your job is not simply to click "post." Read the submission and see if it makes sense. I have no idea what Duqu is or what this is about. I had to dig down 2 links deep to see that this was related to an attack in India. Context: provide it.

      It's about distributing a worm using servers that were largely set up with default passwords and never updated. This is why good, dedicated system administration, intelligent installation, and follow-up support are so critical. Honestly. Most of these servers were likely built once and left to run on their own, without a single thought to maintaining them or even checking for security updates. Lazy, cheap people never seem to learn. It's like leaving your keys in your car and being utte

      • So, what you're saying is: Linux, so easy even a Windows user could do it?
        • Re: (Score:2, Insightful)

          by mcgrew ( 92797 ) *

          Actually, I find that Linux is far easier to use than Windows. When a distro upgrade comes along, there are new perks that you learn. In Windows, an upgrade means you have to relearn the whole damned OS.

          Someone who ran Mandrake eight years ago would have no trouble at all migrating to kubuntu 11. Someone running Windows 98 or XP at that time has to relearn everything when upgrading to Win 7.

          The "Windows is easier and more user friendly" is a myth. It only seems that way because you kids grew up with Windows

          • I'm sorry what?

            It's "easy" for someone with 8 years of Mandrake experience to switch to kubuntu 11?
            Now, someone with 8 years experience of 98/XP would have it "hard" when migrating to Windows 7?

            I love the fair and biased comparison here. /sarcasm

            An "End-User" is _ALWAYS_ going to have a difficult time migrating, regardless of the OS, however any technically savvy "computer person" will be able to make either migration easy.

            I don't know about you, but in both Linux and Windows, I learned by messing around, b

          • by drsmithy ( 35869 )

            Someone who ran Mandrake eight years ago would have no trouble at all migrating to kubuntu 11. Someone running Windows 98 or XP at that time has to relearn everything when upgrading to Win 7.

            Complete and utter bullshit. The basic structure of the current Windows UI (Start Menu, Desktop, Taskbar, application and document windows, etc) hasn't changed significantly since Windows 95, and in many ways (window manipulation, drop-down menus, the concept of document-application associations) since Windows 3.0.

            Anyo

        • Setting up a CentOS server is even faster than configuring a Windows 7 PC, yes.

          I run unattended CentOS installations regularly; it's very, very simple.

          • by drsmithy ( 35869 )

            I run unattended CentOS installations regularly; it's very, very simple.

            Unsurprisingly, so are unattended Windows installations. That is, after all, kind of the point of an unattended install.

    • Well, you see, Count Duqu was trying to trap Anakin Skywalker and Senator Padmé Amidala....

    • by boldi ( 100534 ) on Wednesday November 30, 2011 @03:42PM (#38217786)

      http://en.wikipedia.org/wiki/Duqu

      Duqu is a computer worm discovered on 1 September 2011, thought to be related to the Stuxnet worm. The Laboratory of Cryptography and System Security (CrySyS) of the Budapest University of Technology and Economics in Hungary, which discovered the threat, analyzed the malware and wrote a 60-page report, naming the threat Duqu. Duqu got its name from the prefix "~DQ" it gives to the names of files it creates.
      Symantec, based on the CrySyS report, continued the analysis of the threat, which it called "nearly identical to Stuxnet, but with a completely different purpose", and published a detailed technical paper on it with a cut-down version of the original lab report as an appendix. Symantec believes that Duqu was created by the same authors as Stuxnet, or that the authors had access to the source code of Stuxnet.

      More likely Duqu==Stuxnet==Stars. Same guys, different vulns, different tools. Duqu is an instance made from a lego-kit.

    • by ljhiller ( 40044 )
      Do you know what Bitcoin is? Duqu has been in the news and on the slashdot pages for four weeks at least. Sorry you're behind.
  • Dear Kids... (Score:2, Insightful)

    by Lumpy ( 12016 )

    You never need your server directly on the internet.
    Put it behind a firewall with holes poked through. They can't use a zero-day SSH exploit if the only hole is port 80 to Apache.

    And if you are one of the incredibly rare cases where you really do need to have the machine on the net directly, I suggest daily security audits.
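
    For what it's worth, a minimal iptables sketch of the "only hole is port 80" policy described above might look like this (single public interface assumed; rule order and details are illustrative, not from the thread):

      iptables -P INPUT DROP                                              # default: drop all inbound traffic
      iptables -A INPUT -i lo -j ACCEPT                                   # allow loopback
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT    # allow replies to outbound connections
      iptables -A INPUT -p tcp --dport 80 -j ACCEPT                       # the single hole: HTTP to Apache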

    • They can't use a zero-day SSH exploit if the only hole is port 80 to Apache.

      What about the edge cases where you're running something other than a vanilla web server?

      • "they can't attach a zero day SSH exploit if the only hole is port 80 to Apache.

        What about the edge cases where you're running something other than a vanilla web server?"

        then its "they can't attach a zero day SSH exploit if the only hole is port(s) N-Z to %service%."

        The point is that if the only open ports are bound to an active service (and you have stripped the list down to what is absolutely needed),
        then it's a lot harder to attack that system (bonus points if those services are not on default ports)
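
        A quick way to audit that on a stock CentOS box (a sketch; exact output and service names vary by release):

          netstat -tlnp                    # TCP listeners and the processes that own them
          chkconfig --list | grep ':on'    # init services set to start at boot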

        • Re:Dear Kids... (Score:5, Insightful)

          by amicusNYCL ( 1538833 ) on Wednesday November 30, 2011 @01:12PM (#38215884)

          My point was that several servers do use SSH. If I rent a dedicated server, SSH is how I get things done. If an exploit is discovered in httpd, the correct solution is not to block port 80.

          • by Bert64 ( 520050 )

            Well you should only be running the services you need...
            If you need a service and have a firewall, then you will allow it through...

            It's ridiculous to run unnecessary services and then use a firewall to hide them.

        • And is there any security threat to a port being open that does NOT have an active service on it? If so, what is the attacker cracking? The TCP/IP stack itself?

          Why have an active service on any port if you aren't using it?

          As far as I can tell, firewalls are useful if you aren't sure what services are running on your network and cannot or do not feel like cleaning them up. Or... as a lazy way to make services accessible only on the LAN. For the former use case, I can understand it on a LAN with many users. It ma
          • I for one would much rather control which network can access a service in one place (a centralized firewall), rather than manage it through ten different config files that use different syntaxes on a bunch of servers for every service.
          • I've found that firewalls work best when you are trying to protect only one machine. You can put a firewall up front, then run a script to open all 65535 ports and forward the packets to the single machine on the internal network. Voilà! You have all the protection of a state-of-the-art firewall AND you have all the transparent configurability of being directly on the internet!

      • They can't use a zero-day SSH exploit if the only hole is port 80 to Apache.

        What about the edge cases where you're running something other than a vanilla web server?

        As in "any server that can be sysadmin'ed remotely"? :)

        About half of the system administrators I know don't work on-site. A few use VPNs + ssh; the rest use plain ssh. Either way, it's more than a single port 80.

    • by Hatta ( 162192 )

      How do I get remote shell access if the SSH port isn't open? It might be wise to run SSH on a non-standard port, or to use port knocking, but simply blocking SSH entirely is way too far down the security/convenience tradeoff. You might as well unplug the thing entirely.
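
      As a rough illustration of the port-knocking idea (a single-knock toy example using iptables' recent module; real setups usually use knockd or a multi-port sequence, and the port numbers here are made up):

        # Hitting the "knock" port records the source address...
        iptables -A INPUT -p tcp --dport 12345 -m recent --name KNOCK --set -j DROP
        # ...and SSH is then accepted only from addresses that knocked in the last 30 seconds.
        iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK --rcheck --seconds 30 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j DROP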

    • Ha, typical BOFH statement. What if you don't spend all your time on the LAN? What if you actually USE ssh on a regular basis from outside locations? I guess you run an ethernet cord from home to work to the coffee shop and any other place you go?
      • by Lumpy ( 12016 )

        Most people use the secret service called... VPN. Or, if you want something more secure, you use an out-of-band initiation that opens a port for a short window.
        Example: I simply SMS my server; it gets the SMS message and opens the VPN firewall rule for 3 minutes. I connect and do my work. If my connection did not happen in the 3-minute window, it closes down again.

        SMS is easy with a cellular RS-232 modem, but there are plenty of other ways to do it as well. Email to a specific gmail account can do the same exact
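
        A bare-bones sketch of that short-window trick (the SMS/email trigger itself is assumed to be handled elsewhere; 1194 is just the usual OpenVPN port):

          open_window() {
              iptables -I INPUT -p udp --dport 1194 -j ACCEPT    # open the VPN port
              sleep 180                                          # the 3-minute window
              iptables -D INPUT -p udp --dport 1194 -j ACCEPT    # close it again
          }
          open_window &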

        • We still have modems at most customer sites. IPsec is configured to allow remote VPN access, but that's firewalled to permit connection attempts only from known IP addresses (including ours).

    • Comment removed based on user account deletion
  • The first thing you do in C&C is build walls around your MCV so engineers won't get it. Seriously, guys.

  • CentOS (Score:4, Insightful)

    by future assassin ( 639396 ) on Wednesday November 30, 2011 @12:52PM (#38215624)

    >All of the known Duqu C&C servers discovered up to this point have been running CentOS

    Probably because this is a popular OS for web hosts that resell/sell servers. Who are the people who buy these servers? Well, anyone and everyone who wants to be another web host yet has no idea how to secure a server, so they hire some $40-per-month security company to secure it. There must be thousands of those servers out there ripe for the taking.

    • by JSBiff ( 87824 )

      "so they hire some $40 per month security company to secure their servers. There must be 1000's of those servers out there ripe for raping."

      If each customer is paying $40 per month, and there are thousands of customers, wouldn't that be a $40,000+ per month security company? For that kind of cash, they should be competent. When I buy into a company like that, I figure I'm supposed to be getting more than $40/mo worth of security expertise, because I'm *sharing* the costs with thousands of other customers.

      Sa

    • ...so they hire some $40-per-month security company...

      What should I be googling to find these companies? I have one customer that I can no longer support and I'd like to at least refer him to _somebody_ professional.

  • by martas ( 1439879 ) on Wednesday November 30, 2011 @12:55PM (#38215660)
    Am I the only one who is kind of worried about the whole stuxnet/duqu thing? We've been hearing/hypothesizing about the dangers of "cyber-warfare" (as much as I hate the term) for a while, pretty much since the beginning of Internet malware, but it seems as though recently shit has finally started to hit the fan, first with increasingly worrying allegations about Chinese hackers and such, and now with this (which seems to be the doing of the US/Israel, at least a lot of people think it is).

    If things continue along this trend, one could expect a really bleak future for the Internet where major world governments and other well-financed organizations have virtually unlimited power to do what they like with any computerized system, and continually carry out covert attacks against each other. It seems the only thing that could prevent that from materializing would be some major game-changing advances in computer security, but I'm not seeing any indication that that's likely to happen...
    • Even though this story posits a 0-day in OpenSSH as the culprit, I'm of the mind that free software with a strong patch and update system is as good as it gets. If you don't update your systems, say because you don't want to break stuff, then sorry, but even non-0-days will bring you down. So on the sysadmin side, we're moving toward more specialization.

      On malware and free software: http://trygnulinux.com/action/?q=node/68 [trygnulinux.com]
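
      On the patching point, something like this is a common (if blunt) way to keep a CentOS box current; yum-cron availability and behaviour vary by release, so treat it as a sketch:

        yum -y update                                                                 # apply all pending updates now
        yum -y install yum-cron && chkconfig yum-cron on && service yum-cron start    # nightly automatic updates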

    • That "future" already more or less exists. In fact, it always has. What prevents it from getting bleak is the checks and balances. Governments can screw other governments of course, but being caught really sucks for them diplomatically, so they have to be cautious. Corporations can be caught either by the government (which often seems to do little or nothing) or by the public eye, which can wreck the company. Or by other companies, of course.

      This has always been true, and in far more than "cyber"-space. Co

    • Or... after a couple of high-profile attacks, they finally disconnect these critical control systems from the internet and we don't hear about it again.
    • by Hentes ( 2461350 )

      If things continue along this trend, one could expect a really bleak future for the Internet where major world governments and other well-financed organizations have virtually unlimited power to do what they like with any unsecured computerized system,

      FTFY
      Also, I don't want to frighten you, but with an unsecured system it's not just incredibly powerful governments; every 16-year-old script kiddie can do what they like.

      • by martas ( 1439879 )
        Well, there are certainly degrees to it. A script kiddie probably couldn't have pulled off Stuxnet, because he wouldn't have intel about how Iran's enrichment program is run and such.
    • "It seems the only thing that could prevent that from realizing would be some major game-changing advances in computer security, but I'm not seeing any indication that that's likely to happen..."

      Pre-computer security was an "air gap" (often reinforced with guards and alarms etc) between valuable systems and potential attackers.

      The horny craving to have everything connect to the internet and run Windows is to some extent a self-punishing mistake born of extreme hubris.

  • Points 4. and 5... (Score:5, Insightful)

    by djsmiley ( 752149 ) <djsmiley2k@gmail.com> on Wednesday November 30, 2011 @12:59PM (#38215728) Homepage Journal

    4. The servers appear to have been hacked by bruteforcing the root password. (We do not believe in the OpenSSH 4.3 0-day theory - that would be too scary!)
    5. The attackers have a burning desire to update OpenSSH 4.3 to version 5 as soon as they get control of a hacked server.

    Ah yes, let's pretend there is no problem because the idea that there is one is too scary. Someone kill me, please. The only other reason I can think of, which also ties in with the fact that they were apparently checking the man page for sshd_config, is that something changed in the default settings between 4.3 and 5 that they wanted desperately, but even then this would point to some sort of exploit. (Maybe an exploit in the way the defaults are set in CentOS, rather than in OpenSSH.)

    • Re: (Score:3, Informative)

      by Anonymous Coward

      4. The servers appear to have been hacked by bruteforcing the root password. (We do not believe in the OpenSSH 4.3 0-day theory - that would be too scary!)

      Why the f**k does PermitRootLogin default to yes in CentOS's sshd config?
      Isn't it supposed to be an enterprise-oriented distro?

      • That was my question!! The second one being, "Why wasn't PermitRootLogin turned off?" One of the first things I do when setting up a new server is verify that root cannot get in remotely. As soon as there is any kind of user authentication set up and a user either set up or able to log in, PermitRootLogin is set to no. From then on, admins wanting remote access with root privileges must log in as a user and either use sudo (preferably) or su. I even have the server email a group if someone does an su to roo
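
        For reference, the sshd_config change described above, plus one (hypothetical, mailer-dependent) way to get the "email on su to root" behaviour:

          sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
          service sshd restart
          # Illustrative only: notify a group whenever an interactive root shell starts
          # (assumes a working local mailer; the address is a placeholder).
          echo 'echo "root shell on $(hostname): $(who am i)" | mail -s "root access" admins@example.com' >> /root/.bashrc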

      • by Pharmboy ( 216950 ) on Wednesday November 30, 2011 @01:53PM (#38216418) Journal

        Why the f**k does PermitRootLogin default to yes in CentOS's sshd config?
        Isn't it supposed to be an enterprise-oriented distro?

        Most enterprises have IT staff to change that as soon as the OS is installed. The problem with not allowing root to ssh in on a fresh install is that a fresh install only creates the user "root", so you physically have to be at the machine to log in and set up the system if you don't allow root to ssh in. Yes, it is technically safer to disallow root logins on a vanilla install, but it is inconvenient. On the DESKTOP, it makes sense to disallow root via ssh from a vanilla install, however.

        On servers, I usually set up vanilla, then ssh in, add a user, change the config to disallow root logins, and change the default port, then restart ssh, open a new session to test as that new user on the new port and "su -" to root, then log out of the first root shell, and finally start a new session on the new port and try to log in as root, to make sure I can't. I can't be that unique in doing it this way.

        Serious question to all: Do people still use the default port for SSH anymore? I never have, as once we went from telnet to ssh (over a decade ago...) we just always used a non-standard port. Makes my logs a lot easier to read.
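
        That workflow, roughly, as shell steps (hostname, username and the port number below are placeholders, not a recommendation):

          # On your workstation:
          ssh root@newbox                     # first login on the fresh install
          # Inside that root session on the server:
          useradd deploy && passwd deploy     # add a normal user
          sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
          sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
          service sshd restart
          # From a second terminal, before closing the root session:
          ssh -p 2222 deploy@newbox           # confirm the new user gets in on the new port
          su -                                # ...and can still become root (run inside that session)
          ssh -p 2222 root@newbox             # finally, confirm direct root login is refused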

        • Serious question to all: Do people still use the default port for SSH anymore? I never have, as once we went from telnet to ssh (over a decade ago...) we just always used a non-standard port. Makes my logs a lot easier to read.

          Yes, I run it on the default port, as does everyone else I personally know. How does running it on a non-standard port make your logs easier to read?

          • by gatkinso ( 15975 )

            It decreases the number of script-based dictionary attacks aimed at port 22, so your logs are not as cluttered. Other than that, running on a nonstandard port does nothing to enhance security.

            However, some fools somehow think it does.

            • Nothing foolish about knocking out 100% of the scripted attacks on the server, which are over 99% of the attacks that will ever be attempted on most servers. Running on a non-standard port isn't the solution to running a secure server, it is just part of the solution, and works great for 0 day exploits in particular. Any decent admin knows that.

              • by gatkinso ( 15975 )

                It isn't part of the solution, it is just lazy (which ironically ends up being more work), and imparts a false sense of security.

                Even if I were foolish enough to buy into the whole security through obscurity angle, I would implement this via a firewall/router that forwards traffic on a nonstandard port to port 22 on the server in question, not by simply plugging the red cable into my box and having it listen on a nonstandard port (or even more retardedly forwarding the nonstandard port ssh traffic to a n

                • by Raenex ( 947668 )

                  Even if I were foolish enough to buy into the whole security through obscurity angle

                  What's foolish about this? If all your security details are known and an exploit is found (as they have been time and time again), then you are vulnerable. If an additional layer of obscurity is applied on top of that, you are a harder target.

        • by gatkinso ( 15975 )

          The whole nonstandard port thing is silly. It does nothing (yes, nothing) to enhance security, but to be fair, it doesn't hurt security either. Note that some places block outbound connections on nonstandard ports.

          Have fun logging into your box from such places.

          • It does nothing (yes, nothing) to enhance security

            It may enhance your luck. Sometimes exploits are found and there's a gap between the discovery of the vulnerability and you fixing your system. In that window, the only attack attempted on your system may be an untargeted one by someone who's just quickly sweeping the whole internet on the standard port, trying to root as many machines as quickly as possible.

              THAT is the entire point of using non-standard ports. Anyone who says otherwise is too busy trying to sound important to realize how attacks really happen. I've run on a non-standard port for years, and I get no (repeat, NO) entries in my logs from scripts trying to log in. Obviously I still have to patch everything and keep it all up to date, but this removes one vector: 0-day ssh exploits. I keep my stuff up to date; my concern isn't the easy stuff, it is the 0-day stuff that I can't just patch.

              Some will say

        • by grcumb ( 781340 )

          The problem with not allowing root to ssh in with a fresh install is that a fresh install only creates the user "root", so you physically have to be at the machine to log in and setup the system if you don't allow root to ssh in.

          Is this true? I haven't installed plain CentOS in years and years, but I don't recall seeing this behaviour back when I was using RedHat (1990s - early 2000s), and it absolutely never happens with Debian.

          But even if it is true, it's not difficult at all to customise one's installer

      • It's actually handy for those of us who do multiple VNC installs of CentOS on the LAN at once. I start a bunch of LAN installations, monitor them by VNC, and on reboot, I ssh in as root, upload keys for the default user, disable root access and password logins, and voilà, it's done.

        What I really wish they'd do is put up an /etc/issue.net prompt complaining that the server hasn't been configured yet (like snmpd.conf's).
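
        The post-reboot step might look roughly like this (key path, user name and banner text are placeholders; assumes the kickstart already created the default user):

          ssh root@newbox 'echo "NOTICE: this server has not been configured yet" > /etc/issue.net'
          ssh-copy-id -i ~/.ssh/id_rsa.pub deploy@newbox
          ssh root@newbox 'sed -i -e "s/^#\?PermitRootLogin.*/PermitRootLogin no/" -e "s/^#\?PasswordAuthentication.*/PasswordAuthentication no/" /etc/ssh/sshd_config && service sshd restart'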

    • Judging by the rest of the article, I strongly suspect it has more to do with enabling their secure hierarchy of Kerberos-based logins than turning off the exploit they used. You can see that some of the other things they do relate to features that require OpenSSH 5+ once they're in.

    • I don't understand the "brute force" claim. In the article, they later explain:

      "Note how the 'root' user tries to login at 15:21:11, fails a couple of times and then 8 minutes and 42 seconds later the login succeeds. This is more of an indication of a password bruteforcing rather a 0-day. "

      This makes no sense to me. 2 attempts at a login, and then the 3rd succeeds? How is that brute force? Or is it just extraordinary luck (or an inept password policy).

      While I don't regularly perform penetration testing, my

    • by Jerry ( 6400 )

      "Brute force" in only three tries? How logical is that?

  • by gatkinso ( 15975 ) on Wednesday November 30, 2011 @01:37PM (#38216190)

    That makes me think twice about skipping on that Redhat license.

    Perhaps the folks at Cent should be checking their logs.

    • That makes me think twice about skipping on that Redhat license. Perhaps the folks at Cent should be checking their logs.

      It's just representative of the extra week that CentOS sometimes takes before patches are available. They've been slightly better lately, only one day behind RHEL. Of course, compare that to Windows machines, with their sometimes multi-month lag on patches.

  • 1. Don't run services you don't need. This goes for all systems, including Windows.
    2. If you do need sshd running, install denyhosts.
    3. If at all possible, run sshd on a nonstandard port.

    #3 keeps the logs quiet from bots trying to jiggle a door handle that isn't there on 22.
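
    On CentOS, #2 and #3 boil down to something like this (a sketch; assumes the EPEL repository is enabled for the denyhosts package, and the port number is just an example):

      yum -y install denyhosts && chkconfig denyhosts on && service denyhosts start
      sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config && service sshd restart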

    --
    BMO

    • 4. Change PermitRootLogin to no. If you must have remote root access, make them log in as a normal user and su to root. (Better yet, set up sudo and control who can do what.)
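
      For the "control who can do what" part, a minimal sudoers sketch (group name and command are placeholders; add the rule with visudo, or as a drop-in on releases whose /etc/sudoers includes sudoers.d):

        echo '%webadmins ALL=(root) /sbin/service httpd restart' > /etc/sudoers.d/webadmins
        chmod 0440 /etc/sudoers.d/webadmins
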
      • by bmo ( 77928 )

        Yeah, that too.

        It's just that I've been running Ubuntu so long and got so used to sudo and no root logins and all that I entirely forgot about it.

        Because any other way is madness.

        Speaking of no-root-login, there is a certain kind of user or admin who will fight to the death against removing the login for root and say how sudo is a security hole. I just don't get it.

        Someone else also mentioned fail2ban. I endorse that too.

        --
        BMO

      • From the article it appears it had nothing to do with whether or not root login is turned on.

        Remember, OpenSSH runs as the root user even if root logins are not accepted. Exploiting a vulnerability in OpenSSH isn't entirely out of the picture.

        A better approach is to force a VPN scenario for managing your servers. Try to run known, proven VPN hardware from major vendors (such as Juniper and Cisco), where the hardware is special-purpose (and not running a lot of extra fluff), which limits you
        • From the article it appears it had nothing to do with whether or not root login is turned on.

          Sorry, but the logs from both servers in the article (second link) do show that they both accepted a password for user=root from a remote IP address. That doesn't happen when sshd_config is set to prevent remote root logins. The sshd logs should show a normal user logging in. The secure logs should then show the su or sudo with privilege escalation.

          Even though OpenSSH runs as root, that does not mean that anyone who connects in has root privileges. If the exploit allowed someone to connect in as root when

          • You employ good security measures IMO :)

            Even if attackers can remove the relevant logs, they have no guaranteed knowledge of what your logging is doing and triggering upon first entry (hopefully). A root login or escalation that triggers an e-mail is something they're very unlikely to catch before you are notified about the intrusion.
      • If you must have remote root access, make them log in as a normal user and su to root.

        Can somebody please explain how this is safer than just logging in as root?

        • There are several ways that this is safer.
          1. It removes the known-user problem. Since root is a user on all Linux boxes, if I want access, all I have to do is find a password; I only have to discover one piece of information. If root cannot log in, I must now find three pieces of information: a username, a password for that user, and the root password.
          2. It discourages scripting attacks. Since root cannot get in, I would need to modify my script to try common usernames, or try specific usernames for e
          • Some of those are good points, but several of them seem to be logically equivalent to "use a longer root password." You can argue that logging in as a user, then su'ing to root requires more information, but I can just cat those three bits of info together and use that as the root password. You might as well just pick a longer password.
    • 1. Don't run services you don't need.

      You'd have to know what services you truly need. I remove all obviously unneeded services on any new CentOS installation but always stop at these: acpid, auditd, cpuspeed, haldaemon, irqbalance, messagebus, microcode_ctl and possibly kudzu and sysstat. Do I need them? I just have no idea and no willingness to try.
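
      One low-risk way to answer the "do I need this?" question before touching anything (package and service names here are just examples):

        chkconfig --list | grep ':on'          # what actually starts at boot
        rpm -qf /etc/init.d/haldaemon          # which package owns a given init script
        rpm -qi hal                            # read that package's description before deciding
        chkconfig haldaemon off                # only once you're sure you don't need it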

      2. If you do need sshd running, install denyhosts. 3. If at all possible, run sshd on a nonstandard port.

      #3 keeps the logs quiet from bots trying to jiggle a door handle that isn't there on 22.

      No. You just shouldn't have SSH accessible from the outside, period. Use OpenVPN if you need remote access; it's free.

      • by bmo ( 77928 )

        You know, I had a reply half thought up, but then all I have to say is this now:

        Thanks for jumping down my throat.

        Piss off.

        --
        BMO

  • I would think this points to an exploit in SSHD 5.x, not 4.3. Once I brute-forced into a system, I would think the first order of business is to ensure I can get back in if the password is changed, not to patch the little-known exploit I used to get in in the first place.

    • by gatkinso ( 15975 )

      Patch the hole because you don't want someone else (say, a pron spammer) to come in behind you and end up getting caught (or screwing up your server). But yes, there could be an exploit they are using in 5.x as well.

      I suspect it was not a brute force attack, they simply disguised the exploit as one so that it falls into the noise of the hundreds of brute force attacks each day.
