It is one of those tools that I will focus on in this post: containers. But while a lot of that hype has been around how awesome containers can be at an enterprise level, I’m going to examine them from the angle of how they could potentially be the number one tool used for development environments. And here’s why I think that.
Traditionally, developers created their environments directly on their own local workstations. Along with all those great “Well it works on my machine” excuses, to borrow from a great writer, this has made a lot of people very angry and been widely regarded as a bad move.
When everyone was manually installing tools in any old ad-hoc way, it was to be expected that things wouldn’t always work as intended in production environments (which would also have had their own manual configuration at some point). Great days for coffee and headache tablet manufacturers.
Over recent years, organisations have been moving steadily towards virtualising their development environments, or at least automating the installation onto local machines, so that they have at least some kind of level playing field.
For the time being, I’m going to put aside the localised environment installed directly on the development workstation and focus on VM usage.
One of the neat features touted about containers is how much more efficient they are than VMs, due to the way they function. This is true. When a virtual machine is running, the host has to emulate all the hardware, such as devices, BIOS and network adapters, and even the CPU and memory in cases where passthrough is not an option (such as a non-x86 guest architecture).
Containers function by running directly on the hardware of the host, using the host’s OS but segregating the application layer inside those handy shippable units. It does mean that you are limited to a certain extent. For instance, a hypervisor can run any operating system, regardless of what it is: Windows VMs and Linux VMs can cohabit on the same host, as happy as Martini with ice. But you can’t run, say, an MS Exchange server in a container on a CentOS Docker host, or a full Nginx Linux stack on the Windows variant. For a large enterprise running a full Wintel environment this won’t be an issue, as they’d only need Windows container hosts; but for smaller, mixed infrastructures it means running two separate container platforms, doubling the support required for two very different stacks, and this is where containers do fall short of the mark as an enterprise-level tool for production environments.
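To make that kernel-sharing point concrete, here is a minimal sketch (my own illustration, not taken from any particular project) that assumes a local Docker daemon and the docker Python SDK (pip install docker). It compares the kernel reported inside a throwaway Alpine container with the host’s kernel; they match because the container never boots an operating system of its own.

import platform
import docker

# Connect to the local Docker daemon
client = docker.from_env()

# Run 'uname -r' inside a throwaway Alpine container and capture the output
container_kernel = client.containers.run(
    "alpine", "uname -r", remove=True
).decode().strip()

print("Host kernel:      " + platform.release())
print("Container kernel: " + container_kernel)

On a Linux workstation both lines print the same kernel release, which is precisely why a Windows-only workload can’t live in a container on a CentOS host.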
That being said, my focus isn’t to knock containers, but to praise them for the benefit they could potentially bring to the actual creation of software!
Let’s go back to the developer who has stopped installing quasi-production environments directly onto his workstation and has now adopted VM-based development. Depending on the spec of his machine, he could be fine or in for a very hard time. As already mentioned, VMs are emulated, which means they consume processing power, memory and disk space on the host over and above what is actually made available to the guest. They hog resources. For enterprise solutions such as vCenter or QEMU, the overhead is not really an issue: many tests have shown that these solutions lose only fractions of a percent in overhead compared with running the same operating systems on bare metal, and enterprise storage is cheap as chips.

Workstation virtualisation solutions, however, are a different story. Whereas an enterprise hypervisor will only be running that virtualisation process, a workstation will also be running email clients, web browsers and IDEs such as Visual Studio, MonoDevelop, PhpStorm or Sublime, to name a few, plus many other processes and applications in the background. The VMs share the available resources with all of those, so you will never get anywhere close to bare-metal performance. You will find VMs locking up from time to time or being slow to respond (especially if a virus scan is running). These are small niggles and don’t occur all that often, but they can be frustrating when you’re up against a deadline and Sophos decides now is a great time to bring that VM to a grinding halt.
By moving to containers, you can eliminate a lot of that aggravation, simply by not having all those resources sucked up running another operating system inside the operating system. Instead, the container runs the application stack directly on the host. I’m not promising that it will cure all your problems when the computer grinds to its knees during those infernal virus scans, but if the workstation in question is limited in resources, it can help to give the developer the environment they need without hogging disk, memory or CPU.
And finally, probably what I feel is the best bit. Provided that the VM the developer was using was provisioned with a CM tool such as Ansible, Puppet or Chef, there is no reason why the same CM scripts couldn’t be used to provision the container as well, so moving from VM-based development to container-based is not as hard as you would think. CM is the magic glue that holds it all together and allows us to create environments in Vagrant, vCenter, Azure or on physical boxes, regardless of what or where they are. Including containers.
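As a rough sketch of that reuse (the playbook name, inventory paths and little wrapper below are hypothetical, purely to illustrate the idea), the same Ansible playbook could be pointed at either the dev VM or a running container simply by swapping the inventory:

import subprocess
import sys

# Pick a target: "vm" (default) or "container"
target = sys.argv[1] if len(sys.argv) > 1 else "vm"

# Hypothetical inventories: one addresses the dev VM over SSH, the other a
# running container (for example via Ansible's Docker connection plugin)
inventory = "inventory/vm.ini" if target == "vm" else "inventory/containers.ini"

# Run the very same playbook against whichever environment was chosen
subprocess.run(["ansible-playbook", "-i", inventory, "site.yml"], check=True)

The wrapper itself is beside the point; what matters is that the provisioning logic in site.yml doesn’t care whether the target is a VM or a container.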
In summary, I don’t see containers being the enterprise production cure-all that some claim them to be; the gains are too few and the costs too high. But for development environments, I see a very bright future.