Back in my build engineer days (and then in a sort-of devops role) I kept hearing colleagues rave about Docker and praise it as the be-all and end-all for, basically, everything. Is it really like that?
What follows is just me throwing some thoughts against the wall, so take everything with a grain of salt. Also, some of the info might be a bit outdated, since the last time I seriously looked at Docker for work was the middle of 2015.
What is Docker?
An open platform for distributed applications for developers and sysadmins.
So you'll notice that unless you're a developer or a sysadmin, Docker will not be useful for you. If you're a developer or a sysadmin, but you don't write or manage distributed (clustered) applications, Docker will not be useful for you.
Docker is particularly useful for continuous integration workflows and automated application scaling. If you don't currently do continuous integration, Docker will probably not be very useful for you. If your application does not consist of multiple components (often called "micro services") that can scale independently, Docker will probably not be very useful to you.
If you're an end user, using Docker is very much like using any other virtualization technology. You get a "VM" image and you run it in your "VM" hypervisor (e.g. VMware Fusion or VirtualBox), or you get a "Docker container" and you run it in your "Docker daemon".
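To make that analogy concrete, here's a minimal sketch of the "get a container, run it" workflow. It assumes the docker CLI and a running daemon; the nginx image is just a familiar example, not something from the original post, and the block degrades to a message if Docker isn't available.

```shell
# Sketch of the "get an image, run it" workflow described above.
# Assumes the docker CLI is installed and the daemon is running;
# "nginx" is just a well-known example image.
if docker info >/dev/null 2>&1; then
    docker pull nginx                          # fetch the image (like downloading a VM appliance)
    docker run -d --name web -p 8080:80 nginx  # start a container (like booting the VM)
    docker ps                                  # list running containers
else
    echo "docker not available; commands shown for illustration only"
fi
```

The parallel to a hypervisor is deliberate: `pull` is "download the appliance", `run` is "boot it", `ps` is "show running machines".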
Here is a good list of the top 8 reasons to use Docker. Now, answer honestly: how many apply to your daily workflow?
Here is a good list of systems you must have in place before you deploy Docker:
- secured least-privilege access (key based logins, firewalls, fail2ban, etc)
- restorable secure off-site database backups
- automated system setup (using Ansible, Puppet, etc)
- automated deploys
- automated provisioning
- monitoring of all critical services
- and more (documentation, etc)
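For a taste of what "automated system setup" on that list looks like in practice, here's a minimal Ansible playbook sketch covering two of the items (key-based logins and fail2ban). The host group and package/service names are illustrative assumptions, not anything from the original list.

```yaml
# Illustrative Ansible playbook; host group and names are assumptions.
- hosts: webservers
  become: true
  tasks:
    - name: Disallow SSH password logins (key-based access only)
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: restart sshd

    - name: Install fail2ban
      package:
        name: fail2ban
        state: present

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
```

The point isn't this particular playbook; it's that each bullet above is its own project, and Docker assumes you already have them.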
Do you think you're ready?
What about CoreOS?
Linux for Massive Server Deployments. CoreOS enables warehouse-scale computing on top of a minimal, modern operating system.
So you'll notice that unless you spend your day deploying massive "warehouse-scale" numbers of systems, CoreOS will not benefit you. CoreOS is particularly useful if you run lots of Linux virtual machines that are very similar. Where "lots" is probably thousands+. CoreOS works in combination with two cluster management frameworks called "etcd" and "fleet". If you don't already use a cluster management framework for your applications to handle things like "service discovery" and "task scheduling", these cluster management frameworks will not be useful for you.
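For flavor, this is roughly what fleet's unit of work looks like: an ordinary systemd unit with an extra [X-Fleet] section that fleet's scheduler reads. The service name, image, and ports here are made up for illustration.

```ini
# myapp@.service — a hypothetical fleet unit template.
# Everything above [X-Fleet] is plain systemd; [X-Fleet] is fleet-specific.
[Unit]
Description=myapp web frontend %i
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp-%i
ExecStart=/usr/bin/docker run --name myapp-%i -p 80:8080 example/myapp
ExecStop=/usr/bin/docker stop myapp-%i

[X-Fleet]
# never schedule two instances of this template on the same machine
Conflicts=myapp@*.service
```

If reading that raises more questions than it answers, that's the point: fleet only pays off once you already think in terms of scheduling units across a cluster.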
Plus, CoreOS only supports applications that run in containers. If your distributed application is not composed of containers, CoreOS won't be useful to you.
If your app runs in Docker, and you are willing to pay to run it in a big provider's infrastructure, you get a lot of stuff for your money, like really well designed automated monitoring and metrics and role-based-access-control and networking/routing/load-balancing infrastructure. But much of the workflow around managing/launching containers will be specific to AWS/Google.
Similar older technologies
- Java - run your program on any system without recompilation!
- OSv - a stripped-down "VM" for running Java apps on top of a hypervisor
- OpenVZ / Virtuozzo / Solaris Zones / FreeBSD Jails - see Operating-system-level virtualization
How do I get started with Docker?
It really depends on how you currently provision and deploy systems/apps. Specifically, where do you store the configuration information for your systems and services? For example, if you're using Puppet, you may want to try agentless Puppet first. If you're using git, you may want to use more pre-/post-commit hooks. If you're not using Jenkins, you probably want to start using Jenkins first. If you're already in a heavy auto-scaling environment where SSH is impractical and you have to use zeromq or MCollective, then you already know more about Docker than I do.
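The "more pre-/post-commit hooks" suggestion can start as small as this sketch: a post-commit hook that runs the test suite after every commit, which is the smallest possible step toward continuous integration. The hook body is hypothetical (the `make test` target is an assumption), and it's done here in a throwaway repo for illustration.

```shell
# Install a post-commit hook that runs the tests after every commit.
# Done in a throwaway repo here; in practice you'd run this in your own.
repo=$(mktemp -d)
cd "$repo"
git init -q
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
# hypothetical: run the project's tests on every commit (non-blocking)
make test || echo "post-commit: tests failed"
EOF
chmod +x .git/hooks/post-commit
```

Once a hook like this is second nature, graduating to a CI server (and eventually to container-based builds) is a much smaller jump.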