Doomsday Docker Security Hole Uncovered (zdnet.com) 87
An anonymous reader quotes a report from ZDNet: One of the great security fears about containers is that an attacker could infect a container with a malicious program, which could escape and attack the host system. Well, we now have a security hole that could be used by such an attack: RunC container breakout, CVE-2019-5736. RunC is the underlying container runtime for Docker, Kubernetes, and other container-dependent programs. It's an open-source command-line tool for spawning and running containers. Docker originally created it. Today, it's an Open Container Initiative (OCI) specification. It's widely used. Chances are, if you're using containers, you're running them on runC.
According to Aleksa Sarai, a SUSE container senior software engineer and a runC maintainer, security researchers Adam Iwaniuk and Borys Popławski discovered a vulnerability, which "allows a malicious container to (with minimal user interaction) overwrite the host runc binary and thus gain root-level code execution on the host. The level of user interaction is being able to run any command (it doesn't matter if the command is not attacker-controlled) as root." To do this, an attacker has to place a malicious container within your system. But, this is not that difficult. Lazy sysadmins often use the first container that comes to hand without checking to see if the software within that container is what it purports to be. Red Hat technical product manager for containers, Scott McCarty, warned: "The disclosure of a security flaw (CVE-2019-5736) in runc and docker illustrates a bad scenario for many IT administrators, managers, and CxOs. Containers represent a move back toward shared systems where applications from many different users all run on the same Linux host. Exploiting this vulnerability means that malicious code could potentially break containment, impacting not just a single container, but the entire container host, ultimately compromising the hundreds-to-thousands of other containers running on it. While there are very few incidents that could qualify as a doomsday scenario for enterprise IT, a cascading set of exploits affecting a wide range of interconnected production systems qualifies...and that's exactly what this vulnerability represents."
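For admins wondering whether they're exposed, a minimal sanity check might look like the sketch below. Assumptions: the standard docker and runc CLIs are present on the host and haven't been renamed; the upstream fix shipped in Docker 18.09.2 and in patched runc builds, though the exact fixed version string varies by distro backport.

    docker --version                              # 18.09.2 or later includes the upstream fix
    runc --version                                # compare against your distro's advisory for CVE-2019-5736
    docker info --format '{{.DefaultRuntime}}'    # confirms runc is actually the runtime in use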
Re: IT guys (Score:1)
"chance are"?!
Geez, doesn't anyone proofread this crap?
Re: (Score:3, Funny)
Dammit, Jim, I'm a Docker, not a software engineer.
Wait, wut did I just say?
Re: (Score:1)
This is what we pay our IT guy an arm and a leg to prevent.
Where are you? I only get paid a hand and wrist, a full arm in itself would be awesome, PLUS a leg?! Sign me up!
Containers (Score:3, Insightful)
Re:Containers (Score:5, Funny)
I never understood the fascination with Linux. It's just a few computer programs.
Re: (Score:3, Funny)
I never understood the fascination with computers. It's just a few abacus routines.
Re: (Score:2)
I never understood the fascination with math. It's just a few numbers.
Re: (Score:2)
Re: (Score:2)
I never understood the fascination with Linux. It's just a few computer programs.
No, you're thinking of GNU/Linux.
Re:Containers (Score:5, Insightful)
Containers are primarily used by programmers trying to do an end-run around systems and security engineers who are trying to protect the programmer and the organization.
Re: (Score:1)
Thank you. You said this so much better than I ever could.
Re: (Score:1)
But...but...agility!!
Re: (Score:3)
The reality is that it can support both.
I have seen container images that live eternally, get randomly changed and 'docker commit'-ed, until no one can say with any confidence how to get back to that state.
To the extent I have the authority to do so, I make sure that images do not stray from a specific build process owned by the same teams that release the OS platform, and I do not tolerate workflows built on 'change stuff in a container and commit to capture the changes'. Used that way, it is a useful tool.
Re: (Score:1)
Hahahaha, nice! Pretty true also.
Re: (Score:2)
Or ... they're used as lightweight virtualization doing a small set of jobs instead of heavyweight VMs that use considerably more disk space, memory and take longer to start up.
Like on my QNAP where I have multiple LXC containers that are each a VPN endpoint, routable gateway and SOCKS5 proxy.
Re: (Score:2)
Perhaps I should have said lighter-weight.
Re: (Score:2)
Whilst they are a potential attack vector, they're pretty heavily locked down. Services run as nobody, these are VPN clients (not servers), no default gateway unless the VPN is up, all DNS requests except to the VPN server must be resolved over the VPN, traffic is MASQUERADEd, etc.
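Roughly, the containment being described boils down to a handful of rules like the sketch below (tun0 and 10.8.0.1 are placeholders for whatever VPN interface and VPN-side resolver you actually use):

    # NAT everything leaving the container out over the tunnel only
    iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
    # refuse DNS that isn't headed for the VPN-side resolver
    iptables -A OUTPUT -p udp --dport 53 ! -d 10.8.0.1 -j REJECT
    iptables -A OUTPUT -p tcp --dport 53 ! -d 10.8.0.1 -j REJECT
    # add the default route only from the VPN's up script, so there is
    # no default gateway unless the tunnel is actually up
    ip route replace default dev tun0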
Re: (Score:2)
^^^Containers have a place.
I do something similar. I have all my home services (HTTP proxy, DVR, file server, several web applications, router, DLNA server, etc.) split out into containers. Why? Precisely because I DO want to be able to install updates and apply patches. Containers make that easy. As long as I get the kernel right and don't break LXC, there isn't much on the host that will impact services.
I can upgrade each container one at a time (and easily revert to a btrfs snapshot of it if things go wrong).
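For anyone curious what that looks like in practice, here is a rough sketch. The container name 'dvr' and the paths are made up, and it assumes each container's rootfs is its own btrfs subvolume:

    # take a read-only snapshot before touching anything
    btrfs subvolume snapshot -r /var/lib/lxc/dvr/rootfs /snapshots/dvr-pre-upgrade
    lxc-attach -n dvr -- apt-get upgrade -y          # upgrade inside the container
    # if it goes wrong, put a writable copy of the snapshot back in place
    lxc-stop -n dvr
    mv /var/lib/lxc/dvr/rootfs /var/lib/lxc/dvr/rootfs.broken
    btrfs subvolume snapshot /snapshots/dvr-pre-upgrade /var/lib/lxc/dvr/rootfs
    lxc-start -n dvr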
Re: Containers (Score:1)
What I find surprising is articles like this https://devops.com/docker-vs-vms/ which are horribly inaccurate. Anyone who has ever utilized VMware, for instance, knows that most of the statements about the overhead and constraints (and performance) of VMs in this article are just plain wrong (even at the time of the article's posting). I've been reading many Docker vs. VM posts lately, and I'm astonished, because it's as if no one writing the articles has ever even utilized any modern hypervisor (much less aw
Re: (Score:2)
Containers are primarily used by programmers trying to do an end-run around systems and security engineers who are trying to protect the programmer and the organization.
That's funny. I always saw the ops folks being the ones pushing it. Or maybe they were the devops folks. Those non-programmer types always change their job titles so quickly. But seriously, I've never seen a programmer push containers. I've always seen it pushed from ops or management. I've also never seen anyone happy with their container deployment...
Re: (Score:2)
Many programmers push for containers.
It makes it much handier to work on a project that is under active development while two or more older versions are out in the field and need to be maintained.
It makes it also easier to simply try something out, especially if you have a repository in your organization with typical configurations. So getting an image is a few clicks or an ansible or kubernetes command.
I've also never seen anyone happy with their container deployment...
And I have never seen anyone unhappy.
Perhaps you
They allow your software to be sloppy... (Score:5, Insightful)
and undocumented, since it runs isolated from everything else and doesn't have to be installed or run on the same machine (virtual or physical) as other software.
Re:Containers (Score:4, Informative)
Dependencies got so convoluted that nobody could compile code from another project because it needed 100 obscure libraries. 10 of those libraries needed another handful of libraries, etc etc. Voila, problem solved.
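As a sketch of what "problem solved" means here (the project, library names, and make targets are all made up for illustration), the whole dependency pile gets frozen into an image once, instead of every developer chasing it down:

    cat > Dockerfile <<'EOF'
    FROM debian:stable-slim
    RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential libobscure1-dev libobscure2-dev
    COPY . /src
    RUN make -C /src
    EOF
    docker build -t someproject:build .
    docker run --rm someproject:build make -C /src test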
Re: (Score:2)
Then use a dependency manager ... there are plenty.
You don't need to use maven if you prefer to have build and dependencies separated ... take ivy and gradle, problem solved.
If you can not handle dependencies, you should not be in the software business.
Re: (Score:2)
Containers work for those who are insecure in the area of system administration. It escapes them that they now have to maintain every individual container, plus an additional layer that can be attacked.
Re:Containers (Score:5, Interesting)
It's a cross between a chroot environment and a virtual machine. For most purposes, it is a virtual machine, but by using file system overlays, the overhead per VM is much lower; almost as low as running them all in the same environment.
That's the theory, anyway.
If you're running dozens or hundreds of web servers or something like that, it's probably a good solution. If you're only running a few, there's probably no reason not to just use real VMs. Of course, for many people it's not about what's the best fit, it's about using the tool you know.
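For the curious, the overlay trick mentioned above is just the kernel's overlay filesystem; here is a minimal sketch with made-up paths (Docker's overlay2 storage driver does essentially this under the hood):

    mkdir -p /base /c1/upper /c1/work /c1/merged
    mount -t overlay overlay \
        -o lowerdir=/base,upperdir=/c1/upper,workdir=/c1/work \
        /c1/merged
    # /c1/merged now looks like a full copy of /base, but writes land only
    # in /c1/upper, so each extra "VM" costs very little disk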
Re: (Score:2)
If you have a cloud, you probably want to spin down "VMs" that you don't need at the moment, and spin up instances to handle a single request or a few dozen, then spin them down again.
That is 100 times faster with containers than with VMs, and you can shift your load across the real hardware much more efficiently.
If you're running dozens or hundreds of web servers or something like that, it's probably a good solution.
It is not the question of "services", it is a question of requests. If you are in an elastic cloud, th
Re: (Score:2)
Not having to get up and move to a secure computer.
It saves on computers and walking around.
Re: (Score:2)
Because Kubernetes (K8s for those in the know) is the new hawtness, yo!
Re: (Score:1)
Not sure of how useful containers are in production, but just like VMs they are extremely useful when developing and testing software that needs to target multiple platforms, or needs to do automated testing against many supported versions.
Doomsday Docker Security Hole Uncovered in RunC (Score:2)
Re: (Score:2)
We're talking containers and we're saying k8s instead of kubernetes.
If this isn't telling you that we're talking hipster here, what will? So OF COURSE it's 2Y38.
Huh? What's Hipster? Well, technically, I think it's leetspeak with a beard.
why Joyent exists (Score:5, Interesting)
The Joyent cloud features a second layer of isolation. Sometimes you see this described as "double-hulled virtualization". The OS performance penalty to achieve this is low to non-existent due to the nature of illumos/Solaris zones (conceptually, hardened jails).
Joyent hybrid cloud: Triton Compute Service and Triton DataCenter integration [joyent.com]
This is precisely the scenario that Joyent's technology exists to mitigate.
You think you're running Linux containers, but under the hood you've also got zones and ZFS snapshots.
There is a resource penalty involved in using a high-integrity file system like ZFS (efficient copy-on-write requires extensive write-buffer coalescing), but it's often not a large one compared to the many gains in administrative security and ease.
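The ZFS side is largely why rollback and cloning are cheap in that setup; a sketch with made-up pool and dataset names:

    zfs snapshot zones/web01@pre-deploy             # copy-on-write: instant, near-zero space
    zfs rollback zones/web01@pre-deploy             # discard everything since the snapshot
    zfs clone zones/web01@pre-deploy zones/web02    # cheap writable copy for another zone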
Re: (Score:2)
You're implying that you can't get the equivalent of BSD zones on Linux and run containers inside them. You can, it's just a lot of bother.
Re: (Score:1)
Just about every "enterprise" feature that Linux has slowly and painfully reimplemented was already done far better in illumos/Solaris. Unfortunately, sheer momentum and large companies with lots of money will win over technical superiority every time.
But docker is a particularly sad example of this.
Re: why Joyent exists (Score:2)
Re:why Joyent exists (Score:5, Interesting)
Solaris and the other UNIXes died for the same reason. They all provided roughly the same feature set in slightly incompatible ways. It made development, maintenance and administration unnecessarily difficult and error prone.
None of the vendors put sincere effort into fixing it. The GNU tools' focus on portability helped immensely with this. Free software tools ended up defining the only truly portable standard. They consistently gained the features the others had, and implemented them in ways that served the developers & users rather than any particular vendor. Eventually Linux and the FSF's tools became the best-of-breed UNIX without even being UNIX.
Docker is a mess because it was originally developed in a way that served the interests of Docker Inc.: the single local namespace of images, the poor default implementation of a remote registry, the ability to only search for images on Docker Hub... It wasn't designed to support secure isolation. That was bolted on later and needs continual patching. There is a not-so-new love affair with BSD/MIT-style licenses and "Open Core" business models. It's only bringing back the bad old days.
Re: (Score:2)
Container Store (Score:3, Funny)
I got a malicious container from the Container Store. I blame Marie Kondo.
overblown (Score:1)
yes, it's bad, but it is not the first docker escape and it won't be the last
it's also funny, because of the way it works: the bug in runc allows the container to overwrite the runc binary and inherit runc's root privileges
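Since the exploit works by overwriting the host's runc binary, one crude after-the-fact check (assuming runc came from a distro package rather than a hand-rolled install, which may not hold on your system) is to verify the binary against the package database; the real fix is still patching:

    rpm -Vf "$(command -v runc)"     # RPM-based distros: no output means the binary still matches the package
    dpkg --verify runc               # Debian/Ubuntu: a '5' in the output means the md5sum changed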
Doomsday Docker (Score:1)
Rule 34. Never underestimate it. Poor Superman.
Captcha: adultery
container security (Score:4, Insightful)
Containers (the collection of Linux namespaces and cgroups) are not a strong enough security boundary to safely isolate untrusted code. They never have been, and anybody that told you otherwise is either lying or clueless. Containers are super convenient, and a great way to manage the deployment of your software, and you should use them -- Just not to protect mixed-trust workloads running on the same host from each other.
If you want to run code from sources that you don't trust, isolate it in a separate VM. If you want to use container-like workflows and orchestration systems to manage your VMs, use something like Kata Containers (https://katacontainers.io/).
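A minimal sketch of that approach, assuming Kata Containers is installed and registered with Docker under the conventional runtime name kata-runtime:

    docker run --rm --runtime=kata-runtime alpine uname -r
    # the kernel version printed comes from the lightweight guest VM,
    # not the host: the workload no longer shares the host kernel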
Re: (Score:2, Interesting)
It's not a strong enough security boundary, but it damn well *should* have been. There's no problem doing this in FreeBSD and Solaris without paying a hardware virtualization performance tax. Frankly, the Linux community should be embarrassed that such a fundamental system facility was implemented in a botched and useless manner.
Re: (Score:2, Interesting)
Then docker itself is lying:
"Containers and virtual machines have similar resource isolation and allocation benefits. [] Containers are more portable and efficient"
https://www.docker.com/resources/what-container
Re: (Score:1)
If you have mixed trust, you should really separate the physical hardware (storage, processing, and networking) in the various zones of trust.
If you have a VM with untrusted code, there are still exploits that can escape that VM. Even ignoring that, since it is physically on the same network hardware it could mount a network based attack on adjacent VMs without compromising the hypervisor itself.
Toss a modern hardware-based network firewall between the zones and monitor it.
Re: (Score:2)
So? (Score:2)
Docker does not isolate (Score:1)
Docker is