Docker Bug Allows Root Access To Host File System (duo.com) 76

Trailrunner7 shares a report: All of the current versions of Docker have a vulnerability that can allow an attacker to get read-write access to any path on the host server. The weakness is the result of a race condition in the Docker software and while there's a fix in the works, it has not yet been integrated. The bug is the result of the way that the Docker software handles some symbolic links, which are files that point to other directories or files. Researcher Aleksa Sarai discovered that in some situations, an attacker can insert his own symlink into a path during a short time window between the time that the path has been resolved and the time it is operated on. This is a variant of the time of check to time of use (TOCTOU) problem, specifically with the "docker cp" command, which copies files to and from containers.

"The basic premise of this attack is that FollowSymlinkInScope suffers from a fairly fundamental TOCTOU attack. The purpose of FollowSymlinkInScope is to take a given path and safely resolve it as though the process was inside the container. After the full path has been resolved, the resolved path is passed around a bit and then operated on a bit later (in the case of 'docker cp' it is opened when creating the archive that is streamed to the client)," Sarai said in his advisory on the problem. "If an attacker can add a symlink component to the path after the resolution but before it is operated on, then you could end up resolving the symlink path component on the host as root. In the case of 'docker cp' this gives you read and write access to any path on the host."
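Stripped of the Docker specifics, the race Sarai describes is easy to reproduce. The sketch below is a hypothetical Python illustration of the check/use gap using plain files and os.path.realpath; it is not Docker's actual FollowSymlinkInScope code, and all paths are invented:

```python
import os
import tempfile

# "Host" file that should be unreachable from inside the scoped root.
host = tempfile.mkdtemp()
with open(os.path.join(host, "file"), "w") as f:
    f.write("host secret")

# Stand-in for the container rootfs, holding an innocent file.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "dir"))
with open(os.path.join(root, "dir", "file"), "w") as f:
    f.write("container file")

path = os.path.join(root, "dir", "file")

# Time of check: resolve the path and verify it stays inside root.
resolved = os.path.realpath(path)
assert resolved.startswith(root + os.sep)  # looks safe at this instant

# The race window: an attacker swaps "dir" for a symlink to the host dir.
os.remove(path)
os.rmdir(os.path.join(root, "dir"))
os.symlink(host, os.path.join(root, "dir"))

# Time of use: opening the original path now follows the new symlink,
# escaping the scope that was checked a moment earlier.
with open(path) as f:
    data = f.read()
print(data)
```

In the real bug the "attacker" is a process inside the container racing the daemon's `docker cp`, and the open happens as root on the host, which is what turns this pattern into read-write access to arbitrary host paths.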

  • by Anonymous Coward

    We've seen a similar problem with docker before. And that's because by design docker containers can reach out and control the host.

    Moral of the story? Don't let idiot webmonkeys design systems software. Or virtualisation solutions. Or much of anything at all.

    • Where do you draw the line between a Webmonkey and an Enterprise Level Developer?

      Besides, have you used "Enterprise Level Software" in the past 30 years? It is all pure crap. Buggy, insecure, and the corporation needs to hire more high-priced Certified Professionals than you would need for a modest Software Development Team, which could make something better fitting for your organization.

      • by Hylandr ( 813770 )

        This is painfully true.

        Having worked in startups, the enterprise, and modest-sized companies that have been around for ages, it's the latter that always have the highest quality and the greatest people.

    • Naw, Docker doesn't try to solve that problem, so nobody serious with a technical hat to wear is going to even lean that direction.

      Docker solves the problem of operating system level software prerequisite management in a way that is portable across linux distributions.

      Security is desirable, and so bugs like this will be considered important. And yet, for the most part this is not a problem; if you could get a shell on the server, the person in charge of that server must presume that you can escalate and get

      • by sjames ( 1099 )

        You have seriously whooshed there. It is Docker itself that creates the problem.

        • And yet, I missed nothing. If you're wooshing, go ahead and woosh, but don't blame others for your problem.

          Maybe you simply missed my point? It seems more likely that you missed my point than that I missed my point.

  • After all this time I still don't know what Docker does or what problem it solves.

    • People use it as a disk image for deploying to AWS (or Google cloud). AWS of course lets you use your own custom image, but it's harder, so people use virtualization inside of virtualization instead.
      • by nazsco ( 695026 )

        > but it's harder, so people use virtualization inside of virtualization instead.

        "docker. For when FTPing your PHP files is too démodé"

        • Sometimes your services are too small to fully use a server, so you split out each small piece of functionality. Then you have a different container for each small piece, and it can be local or remote. That is, microservices are just distributed object-oriented programming.
    • Re:Oh no! (Score:4, Informative)

      by bill_mcgonigle ( 4333 ) * on Wednesday May 29, 2019 @03:09PM (#58673402) Homepage Journal

      After all this time I still don't know what Docker does or what problem it solves.

      You know how package dependency issues can be hard with applications, especially given the need to update those dependencies for security fixes, sometimes having to tweak your application to work with those new versions?

      Docker lets you avoid those updates, so you can just deploy your app in a chroot that has copies of the old libraries with vulnerabilities. This way you never have to deal with security updates or application tweaks.

      Security admins everywhere never sleep anymore.

      • Except that is mostly horseshit, since one of Docker's main benefits is that it has the latest and most secure libraries and software versions *first*, long before distribution maintainers pull their finger out and declare an update fit for release.

        • Re:Oh no! (Score:4, Insightful)

          by Junta ( 36770 ) on Wednesday May 29, 2019 @09:01PM (#58676058)

          Docker's benefits is that it has the latest and most secure libraries and software versions *first*, long before distribution maintainers pull their finger out and declare an update fit for release.

          Well, actually it only provides the logistics. It is a container publisher's prerogative how they give you updates if you pull from Docker Hub. They also almost always happen to base their images on... a distribution maintainer!

          So let me review a handful of docker containers I pulled as of today...

          Ok, first I have an application that used debian 9. They have not, however, pulled any updates from debian since January.

          Ok, second one.. I can't tell what they started from, but let me just spot check their openssl.. 1.0.2o... not exactly a spring chicken there...

          Third one.. Alpine Linux 3.9.3.. Ok so it's only one month behind the distro, not bad, but still, not as up to date as the host system...

          Fourth one.. Missing months of openssl updates, a centos 7 from about mid 2018.

          Now getting to what I see built by developers inside the company, the developers who, upon starting out, decry how "ancient" the distros on the servers are and are so relieved to have docker to run cutting edge... The hosting environment has moved to RHEL8 and is nice and current, but when I look at their docker image, it's based on Ubuntu 15.10. Back in early 2016, the redhat 7 environment was just "too old", we were just not responsive enough, and we were "too scared" of newer distributions. Now they live in mortal terror of touching this image, an image they don't even have a Dockerfile for (they just sporadically edit an image live and docker commit).

          For the "mere mortal" developer, this is what I generally see: they want cutting edge only for the first month of their product's lifecycle, then they never want to risk changes to it ever again.
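The spot-checks described above mostly boil down to reading /etc/os-release out of each image's filesystem. A minimal sketch of that audit step, assuming a rootfs already extracted to disk (e.g. via `docker export`); the `base_distro` helper is hypothetical, not part of any Docker API:

```python
import os
import tempfile

def base_distro(rootfs):
    """Report which distribution a container rootfs was built from,
    by parsing the standard /etc/os-release key=value file."""
    fields = {}
    with open(os.path.join(rootfs, "etc/os-release")) as f:
        for line in f:
            if "=" in line:
                key, value = line.rstrip("\n").split("=", 1)
                fields[key] = value.strip('"')
    return fields.get("PRETTY_NAME", "unknown")

# Demo against a fake rootfs standing in for an extracted image.
rootfs = tempfile.mkdtemp()
os.makedirs(os.path.join(rootfs, "etc"))
with open(os.path.join(rootfs, "etc/os-release"), "w") as f:
    f.write('NAME="Ubuntu"\n'
            'VERSION="15.10 (Wily Werewolf)"\n'
            'PRETTY_NAME="Ubuntu 15.10"\n')

name = base_distro(rootfs)
print(name)
```

Comparing that string (and the timestamps on the image's package database) against the distro's current release is usually enough to tell how stale an image has become.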

        • by sjames ( 1099 )

          That's true when you first create the Docker image. Not so much 5 years later.

          Imagine if you will (but best to wait until the sun is up and you're well rested), someone spun up a docker image in 2001, tweaked it to the nth degree, somehow kept the dependencies from exploding and got it just right. Too bad it's 2019 and they're still passing that image around...

    • by darkain ( 749283 )

      Think VMWare style virtualization... only, instead of each VM being a full virtualized computer, each VM shares the host OS's single kernel. Instead of virtualizing the entire computer, you're essentially just virtualizing the user space while sharing the kernel space. In other words, it's just a few tools bolted onto chroot.

      • I would reply with a horrified rant about how bad this comparison is... but VMWare is such an ugly hack doing the "world switch" inside the kernel that... yeah, it is just like that. The modern world is candy floss all the way down.

    • After all this time I still don't know what Docker does or what problem it solves.

      That's because you have no desire to expand your knowledge on a topic.

    • In a nutshell? Tarballs with execution instructions in a chroot environment.

    • Comment removed based on user account deletion
  • I had to look it up, I don't use Docker.
    Docker cp allows you to copy files between the docker instance and the host system.

    https://docs.docker.com/engine... [docker.com]

  • Quoth the Acerbic Arch-commentator:

    "You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes."

    • Theo can be a polarizing figure, but he's as smart as they come. I once overheard a casual conversation he was having about translation lookaside buffers, and despite having over 25 years of highly-technical computer experience, the majority of his conversation was quite over my head. I felt like a kid listening to adults speak :)

    • by lgw ( 121541 )

      Nothing is perfect, but VM hypervisors have had far fewer security bugs than OSs or applications. Containers, OTOH, are insecure by their very nature. They don't promise much isolation to begin with, so it's hardly a surprise when flaws like this are found.

      • by sl3xd ( 111641 )

        Containers, OTOH, are insecure by their very nature.

        When we start the Linux userspace (systemd, sysvinit, whatever), that userspace is running in a series of namespaces.

        "Containers" are merely a new userspace in an additional set of namespaces. The mechanism is identical to the one used to boot the system's initial userspace.

        If you can't trust your OS kernel, you're frakked, and hypervisors aren't going to save you.
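        The claim that every process already runs inside namespaces is easy to verify on a Linux host: each entry under /proc/self/ns names one namespace the current process belongs to, container or not. A quick Linux-only sketch:

```python
import os

# On Linux, every process belongs to one namespace of each type
# (mnt, pid, net, uts, ...); /proc/self/ns lists them for this process.
namespaces = sorted(os.listdir("/proc/self/ns"))
for ns in namespaces:
    # Each link target encodes the namespace type and inode number,
    # e.g. "mnt:[4026531840]".
    print(ns, os.readlink(os.path.join("/proc/self/ns", ns)))
```

A containerized process shows the same set of entries, just with different inode numbers than the host's init process.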

        • by DeVilla ( 4563 )
          If it had been designed that way, yes. But the ability to create separate namespaces for processes, IPC, networks, etc. has been bolted on a piece at a time. That bolting has been ugly at times. And it has been done to achieve several different goals rather than just implementing "docker", "containers" or "superjails". Add to this that the additions have to be done in a way that doesn't break existing applications that don't know about them. It would be like trying to "evolve" DOS into UNIX.
          • by sl3xd ( 111641 )

            “Bolted on” isn’t really what happened; it was more of an organ transplant than tacking something over the top. And of course it was done a piece at a time — the LKML doesn’t respond well to patch bombs. Eric Biederman’s initial goal was to be able to checkpoint (and migrate, and restart) long-running compute jobs running on a supercomputing cluster. I know because he sat kitty-corner to me while he did it, and he showed me the first containers to ever run on Linux. Cont

        • by lgw ( 121541 )

          Don't trust anything. Privilege escalation exploits are common, hypervisor exploits very rare. Neither form of sharing is allowed for a variety of audit compliance needs, though internal (same company) sharing of hardware via VMs is often permitted.

  • Can the docker daemon be run as a non-root user and if so, are there any limitations?

    • by ls671 ( 1122017 )

      Exactly what I was thinking. I run qemu with its own dedicated user, so it shouldn't be able to write to the host file system as root. In the case of qemu, running it as non-root doesn't cause any issues.

      Disclaimer: qemu uses the kvm module which is loaded in the kernel. Maybe this could be an exploitable weakness although...

  • by caffeinejolt ( 584827 ) on Wednesday May 29, 2019 @03:27PM (#58673486)
    I use podman as a docker replacement - part of the reason is security - namely its ability to run without a persistent daemon with root, and its ability to run rootless containers. Regardless of whether you are running podman or docker, employing SELinux to prevent a container process from accessing the host security context seems like a good solution here. SELinux is kind of a PITA, but once configured it really helps with this sort of thing.
    • by zidium ( 2550286 )

      This is the very first time I've ever heard of Podman. Thanks. Their marketing *sucks* :o

  • by tomhath ( 637240 )

    The weakness is the result of a race condition in the Docker software...

    So docker will be banned from twitter?

  • This won't work on, for example, Fedora. It will get an AVC Denial from SELinux.
