Doomsday Docker Security Hole Uncovered (zdnet.com) 87

An anonymous reader quotes a report from ZDNet: One of the great security fears about containers is that an attacker could infect a container with a malicious program, which could escape and attack the host system. Well, we now have a security hole that could be used by such an attack: RunC container breakout, CVE-2019-5736. RunC is the underlying container runtime for Docker, Kubernetes, and other container-dependent programs. It's an open-source command-line tool for spawning and running containers. Docker originally created it. Today, it's an Open Container Initiative (OCI) specification. It's widely used. Chances are, if you're using containers, you're running them on runC.

According to Aleksa Sarai, a SUSE container senior software engineer and a runC maintainer, security researchers Adam Iwaniuk and Borys Popławski discovered a vulnerability which "allows a malicious container to (with minimal user interaction) overwrite the host runc binary and thus gain root-level code execution on the host. The level of user interaction is being able to run any command (it doesn't matter if the command is not attacker-controlled) as root." To do this, an attacker has to place a malicious container within your system. But this is not that difficult. Lazy sysadmins often use the first container that comes to hand without checking to see if the software within that container is what it purports to be.
Red Hat technical product manager for containers, Scott McCarty, warned: "The disclosure of a security flaw (CVE-2019-5736) in runc and docker illustrates a bad scenario for many IT administrators, managers, and CxOs. Containers represent a move back toward shared systems where applications from many different users all run on the same Linux host. Exploiting this vulnerability means that malicious code could potentially break containment, impacting not just a single container, but the entire container host, ultimately compromising the hundreds-to-thousands of other containers running on it. While there are very few incidents that could qualify as a doomsday scenario for enterprise IT, a cascading set of exploits affecting a wide range of interconnected production systems qualifies...and that's exactly what this vulnerability represents."
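The fix for CVE-2019-5736 shipped in Docker 18.09.2 (and in updated runc packages from the distributions). A minimal sketch of a patch check, as a plain version comparison so it runs even where Docker isn't installed; the helper name `is_patched` is illustrative:

```shell
# Illustrative helper: report whether a Docker version string already
# contains the CVE-2019-5736 fix (shipped in Docker 18.09.2).
# Pure string comparison via sort -V; no Docker required to test it.
is_patched() {
    fixed="18.09.2"
    # sort -V orders version strings; if the fixed version sorts first
    # (or is equal), the installed version is >= 18.09.2.
    [ "$(printf '%s\n%s\n' "$fixed" "$1" | sort -V | head -n1)" = "$fixed" ]
}

# On a live host you would feed it the running version, e.g.:
#   is_patched "$(docker version --format '{{.Server.Version}}')"
is_patched "18.09.2" && echo "patched"
is_patched "18.09.0" || echo "vulnerable"
```

Distribution runc builds often backport the fix without bumping the version, so treat this as a first-pass check, not a verdict.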
Comments:
  • Containers (Score:3, Insightful)

    by 110010001000 ( 697113 ) on Monday February 11, 2019 @04:41PM (#58106356) Homepage Journal
Containers are just computer programs. I never understood the hipster fascination with them.
    • by farble1670 ( 803356 ) on Monday February 11, 2019 @04:47PM (#58106400)

      I never understood the fascination with Linux. It's just a few computer programs.

    • Re:Containers (Score:5, Insightful)

      by Anonymous Coward on Monday February 11, 2019 @04:50PM (#58106412)

      Containers are primarily used by programmers trying to do an end-run around systems and security engineers who are trying to protect the programmer and the organization.

      • by Anonymous Coward

        Thank you. You said this so much better than I ever could.

      • by Anonymous Coward

        But...but...agility!!

      • by gweihir ( 88907 )

        Hahahaha, nice! Pretty true also.

Or ... they're used as lightweight virtualization doing a small set of jobs, instead of heavyweight VMs that use considerably more disk space and memory and take longer to start up.

        Like on my QNAP where I have multiple LXC containers that are each a VPN endpoint, routable gateway and SOCKS5 proxy.

        • by DarkOx ( 621550 )

          ^^^Containers have a place.

I do something similar. I have all my home services: http proxy, DVR, file server, several web applications, router, DLNA server, etc. split out into containers. Why? Precisely because I DO want to be able to install updates and apply patches. Containers make that easy. As long as I get the kernel right and don't break LXC, there isn't much on the host that will impact services.

I can upgrade each container (and easily revert to a btrfs snapshot of it if things go wrong) one at a time.
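The snapshot-before-upgrade workflow described above can be sketched as follows, assuming LXC containers whose root filesystems are btrfs subvolumes under /var/lib/lxc; the paths, the container name "proxy", and the `snap_name` helper are all illustrative:

```shell
# Build a timestamped snapshot name, e.g. proxy-pre-upgrade-20190211
snap_name() {
    printf '%s-pre-upgrade-%s' "$1" "$(date +%Y%m%d)"
}

# The workflow itself needs root and a btrfs-backed container store:
#   lxc-stop -n proxy
#   btrfs subvolume snapshot /var/lib/lxc/proxy/rootfs \
#         "/var/lib/lxc/$(snap_name proxy)"
#   lxc-start -n proxy
#   ...upgrade inside the container; if it breaks, revert:
#   lxc-stop -n proxy
#   btrfs subvolume delete /var/lib/lxc/proxy/rootfs
#   btrfs subvolume snapshot "/var/lib/lxc/$(snap_name proxy)" \
#         /var/lib/lxc/proxy/rootfs
#   lxc-start -n proxy

snap_name proxy
```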

      • by sfcat ( 872532 )

        Containers are primarily used by programmers trying to do an end-run around systems and security engineers who are trying to protect the programmer and the organization.

That's funny. I always saw the ops folks being the ones pushing it. Or maybe they were the devops folks. Those non-programmer types always change their job titles so quickly. But seriously, I've never seen a programmer push containers. I've always seen it pushed from ops or management. I've also never seen anyone happy with their container deployment...

        • Many programmers push for containers.

          It makes it much more handy to work on a project that is developed while two or more older versions are out in the field and needs to be maintained.

It makes it also easier to simply try something out, especially if you have a repository in your organization with typical configurations. So getting an image is a few clicks or an Ansible or Kubernetes command.

          I've also never seen anyone happy with their container deployment...
          And I have never seen anyone unhappy.

          Perhaps you

    • by Anonymous Coward on Monday February 11, 2019 @04:50PM (#58106420)

and undocumented, since it runs isolated from everything else and doesn't have to be installed to run on the same machine (virtual or physical) as other software.

    • Re:Containers (Score:4, Informative)

      by ArchieBunker ( 132337 ) on Monday February 11, 2019 @05:26PM (#58106608)

      Dependencies got so convoluted that nobody could compile code from another project because it needed 100 obscure libraries. 10 of those libraries needed another handful of libraries, etc etc. Voila, problem solved.

Then use a dependency manager ... there are plenty.

You don't need to use Maven if you prefer to have build and dependencies separated ... take Ivy and Gradle, problem solved.

If you cannot handle dependencies, you should not be in the software business.

    • by gweihir ( 88907 )

Containers work for those that are insecure in the area of system administration. That they now have to maintain every individual container, plus an additional layer that can be attacked, escapes them.

    • Re:Containers (Score:5, Interesting)

      by crow ( 16139 ) on Monday February 11, 2019 @06:31PM (#58106908) Homepage Journal

      It's a cross between a chroot environment and a virtual machine. For most purposes, it is a virtual machine, but by using file system overlays, the overhead per VM is much lower; almost as low as running them all in the same environment.

      That's the theory, anyway.

      If you're running dozens or hundreds of web servers or something like that, it's probably a good solution. If you're only running a few, there's probably no reason not to just use real VMs. Of course, for many people it's not about what's the best fit, it's about using the tool you know.
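The filesystem-overlay point above is what keeps per-container overhead low: many containers share one read-only "lower" layer, and each gets a small private "upper" layer for its writes. A minimal sketch of the layout (the actual mount needs root, so it is shown as a comment; file names are illustrative):

```shell
# Shared read-only base layer plus a per-container writable layer,
# merged into one view by overlayfs.
base=$(mktemp -d)
mkdir -p "$base/lower" "$base/upper" "$base/work" "$base/merged"
echo "shared base image file" > "$base/lower/os-release"

# As root, the merged view would be assembled with:
#   mount -t overlay overlay \
#     -o lowerdir=$base/lower,upperdir=$base/upper,workdir=$base/work \
#     "$base/merged"
# Reads fall through to lower/; writes land in upper/, so any number of
# containers can share a single copy of the base image on disk.
ls "$base/lower"
```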

If you have a cloud, you probably want to spin down "VMs" that you don't need at the moment, and spin up new ones to handle a single request or a few dozen, then spin them down again.

That is 100 times faster with containers than with VMs. And you can shift your load over the real hardware much more efficiently.

If you're running dozens or hundreds of web servers or something like that, it's probably a good solution.
It is not a question of "services"; it is a question of requests. If you are in an elastic cloud, th

    • by AHuxley ( 892839 )
So a computer and chair can be used to work on many different OS projects.
      No having to get up and move to a secure computer.
      It saves on computers and walking around.
Because Kubernetes (K8s for those in the know) is the new hawtness, yo!

    • by Anil ( 7001 )

      Not sure of how useful containers are in production, but just like VMs they are extremely useful when developing and testing software that needs to target multiple platforms, or needs to do automated testing against many supported versions.

  • why Joyent exists (Score:5, Interesting)

    by epine ( 68316 ) on Monday February 11, 2019 @04:55PM (#58106442)

    The Joyent cloud features a second layer of isolation. Sometimes you see this described as "double-hulled virtualization". The OS performance penalty to achieve this is low to non-existent due to the nature of BSD zones (hardened jails).

    Joyent hybrid cloud: Triton Compute Service and Triton DataCenter integration [joyent.com]

    This is precisely the scenario that Joyent's technology exists to mitigate.

    You think you're running Linux containers, but under the hood you've also got zones and ZFS snapshots.

    There is a resource penalty involved in using a high-integrity file system like ZFS, (efficient copy-on-write requires extensive write-buffer coalescing) but it's often not a large one compared to the many gains in administrative security and ease.

    • by Rysc ( 136391 ) *

      You're implying that you can't get the equivalent of BSD zones on Linux and run containers inside them. You can, it's just a lot of bother.

    • by Anonymous Coward

      Just about every "enterprise" feature that Linux has slowly and painfully reimplemented was already done far better in illumos/Solaris. Unfortunately, sheer momentum and large companies with lots of money will win over technical superiority every time.

      But docker is a particularly sad example of this.

      • Severely underrated comment.
      • Re:why Joyent exists (Score:5, Interesting)

        by DeVilla ( 4563 ) on Tuesday February 12, 2019 @01:02AM (#58108000)

        Solaris and the other UNIXes died for the same reason. They all provided roughly the same feature set in slightly incompatible ways. It made development, maintenance and administration unnecessarily difficult and error prone.

None of the vendors put sincere effort into fixing it. The GNU tools' focus on portability helped immensely with this. Free-software tools ended up defining the only truly portable standard. They consistently gained features that the others had, and implemented them in ways that served the developers & users rather than any particular vendor. Eventually Linux and the FSF's tools became the best-of-breed UNIX without even being UNIX.

Docker is a mess because it was originally developed in a way that served the interests of Docker Inc. The single local namespace of images, the poor default implementation of a remote registry, the ability to search only images on Docker Hub... It wasn't designed to support secure isolation. That was bolted on later and needs continual patching. There is a not-so-new love affair with BSD/MIT-style licenses and "Open Core" business models. It's only bringing back the bad old days of the past.

  • by firehawk2k ( 310855 ) on Monday February 11, 2019 @05:04PM (#58106496)

    I got a malicious container from the Container Store. I blame Marie Kondo.

  • by Anonymous Coward

    yes it's bad; but it is not the first docker escape and it won't be the last

    it's also funny, because of the way it works: the bug in runc allows the container to overwrite the runc binary and inherit runc's root privileges
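The mechanism behind that overwrite is the `/proc/self/exe` symlink, which is a live handle to the binary of the current process. When runc joined a container's namespaces, that link referred to the host's runc binary, and the published exploit abused a file descriptor obtained through it to rewrite that binary. A harmless look at the mechanism itself:

```shell
# /proc/self/exe always points at the executable backing the current
# process; for a process entered by runc, it pointed at host runc.
# This only inspects the link; it does not demonstrate the exploit.
readlink /proc/self/exe
```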

  • by Anonymous Coward

    Rule 34. Never underestimate it. Poor Superman.

    Captcha: adultery

  • container security (Score:4, Insightful)

    by Anonymous Coward on Monday February 11, 2019 @06:06PM (#58106788)

    Containers (the collection of Linux namespaces and cgroups) are not a strong enough security boundary to safely isolate untrusted code. They never have been, and anybody that told you otherwise is either lying or clueless. Containers are super convenient, and a great way to manage the deployment of your software, and you should use them -- Just not to protect mixed-trust workloads running on the same host from each other.

    If you want to run code from sources that you don't trust, isolate it in a separate VM. If you want to use container-like workflows and orchestration systems to manage your VMs, use something like Kata Containers (https://katacontainers.io/).
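With Kata Containers installed and registered as a Docker runtime (the runtime name "kata-runtime" is typical but install-dependent, so treat this as a sketch), the usual workflow barely changes while each workload gets its own guest kernel:

```shell
# Run the same image under a lightweight VM instead of a shared kernel:
#
#   docker run --rm --runtime kata-runtime alpine uname -r
#
# A quick sanity check is to compare that output with the host's kernel
# release below: under Kata they differ, because the container is no
# longer sharing the host kernel.
uname -r
```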

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      It's not a strong enough security boundary, but it damn well *should* have been. There's no problem doing this in FreeBSD and Solaris without paying a hardware virtualization performance tax. Frankly, the Linux community should be embarrassed that such a fundamental system facility was implemented in a botched and useless manner.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Then docker itself is lying:
      "Containers and virtual machines have similar resource isolation and allocation benefits. [] Containers are more portable and efficient"
      https://www.docker.com/resources/what-container

    • by Anonymous Coward

      If you have mixed trust, you should really separate the physical hardware (storage, processing, and networking) in the various zones of trust.
If you have a VM with untrusted code, there are still exploits that can escape that VM. Even ignoring that, since it is physically on the same network hardware, it could mount a network-based attack on adjacent VMs without compromising the hypervisor itself.
      Toss a modern hardware-based network firewall between the zones and monitor it.

Yeah, this is exactly the kind of stuff that Spectre and Meltdown etc. make scary. VMs on the same hardware are not isolated anymore until you can replace your VM host with hardware that isn't susceptible.
  • by Anonymous Coward
Docker is not about isolation, just ease of standing up an environment. If you want the security of isolating a Docker container from the host, or other containers for that matter, you need to use a VM or BSD jails. I have services that can execute arbitrary code, but they run as locked-down users. Yet people have Docker running as root. This is just asking for trouble. Docker gives no guarantees about security, yet it runs as root and is expected to run whatever is inside of a container.

    Docker is
