With the introduction of Windows Server 2016 Technical Preview 3 in August 2015, Microsoft enabled container technology on the Windows platform. While Linux has had container support since August 2008, such functionality was not available on Microsoft operating systems before. Prompted by the success of Docker on Linux, Microsoft started working on a container implementation for Windows about two and a half years ago. Today we can use this container technology on Windows Server 2016 and Windows 10, and in September 2016 Microsoft announced the general availability of Windows Server 2016. But what does that mean for me as a developer, or for us as an enterprise organisation? In this deep-dive series of blogposts we are going to look at the different aspects of working with Windows Containers and Docker, and at how containers will change the way we deliver our software. But first, in this opening blogpost of the series, we will answer the question of why we should even care about containers…
Why should I care about software containers?
To explain the different advantages, we will use the metaphor of shipping containers. For that, we go back to the 26th of November, 1955: the day on which the first container ship, the Clifford J. Rogers, was taken into service. It was a day that changed the course of world trade and laid the foundations for what was to become the biggest liner business in the world. But what was unique about this new container ship approach? Or perhaps a better question: what was the reason for the Vickers shipyard to introduce a new kind of cargo ship? In short: speed, costs, standardisation and isolation.
Before the advent of containerization in the 1950s, break-bulk items were loaded, lashed, unlashed and unloaded from the ship one piece at a time. Cargo was handled by hand, and huge gangs of longshoremen would spend hours fitting the various items of cargo into different holds in order to fully load the hull. Thanks to the introduction of containerization, shipping expense and shipping time have decreased enormously: a container ship can be loaded and unloaded in a few hours, compared to days for a traditional cargo vessel, and a 39-fold saving in shipping expense has been realized.
The same goes for software containers. Today, many application deployments have to bring the target environment into the correct state by executing various installation and configuration steps. Like hand loading, this can take a long time. Worse still is the time we have to wait when we upgrade an application: just as with hand unloading the hull of a cargo ship, we have to uninstall the old version of the application (and hope that all relevant files are removed) before we can install the new version. Like shipping containers, delivering applications in a containerized way has a huge impact on the deployment cost and time of our applications. Because our applications are already installed within our software containers, we just have to “move” those containers from one server to another to have our applications run on the new environment. A deployment of a containerized application no longer waits for an installation step, which gives us near-instant container startup times. Moreover, thanks to the isolation boundary of containers (namespace isolation), we do not have to upgrade applications in place anymore. Instead, containers are immutable: running a newer version of an application means rolling out a new container version and removing the old one. This results in near-instant “upgrade” times for our containerized applications.
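The “upgrade by replacement” approach described above can be sketched with a few standard Docker CLI commands. This is a minimal, hypothetical example; the image name `mycompany/webapp` and the container name `webapp` are assumptions, not something from a real registry:

```shell
# Fetch the new image version from the registry
docker pull mycompany/webapp:2.0

# Remove the running container based on the old version
# (no uninstall scripts, no leftover files on the host)
docker stop webapp
docker rm webapp

# Start a fresh container from the new image; startup is near-instant
# because the application is already installed inside the image
docker run -d --name webapp -p 8080:80 mycompany/webapp:2.0
```

Rolling back is the same operation in reverse: remove the new container and start one from the previous image tag.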
Using software containers results in significant cost savings. Firstly, instant container startup times reduce waiting times and their associated costs. Secondly, containers heavily reduce the resource costs needed for an isolated rollout of individual applications. Where in the past Virtual Machines were used to ensure resource governance and registry, file-system and process isolation, containers can nowadays ensure the same level of isolation (more on this isolation part in a later blogpost in this series) without using a VM at all. Compared to VMs, containers have a smaller footprint and share the OS. This results in great cost reductions in OS licenses and disk space.
In addition to the above cost savings, software containers enable us to easily set up, scale and manage (self-heal) our environments in just a few minutes. Instead of wasting a lot of time and money fixing corrupted development, testing and production environments, containers make it possible to instantly spin up exactly the same environment as we had before, in just a few commands.
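One way to capture such a reproducible environment is a Docker Compose file. The fragment below is a hedged sketch, not a working production setup: the service names, image names and password are all hypothetical, and the compose file format shown is the one in use around the Windows Server 2016 release:

```yaml
# Hypothetical docker-compose.yml: running "docker-compose up -d"
# recreates the same two-service environment on any suitable host,
# and "docker-compose down" tears it down again.
version: '2'
services:
  web:
    image: mycompany/webapp:2.0   # hypothetical application image
    ports:
      - "8080:80"
  db:
    image: mycompany/appdb:2.0    # hypothetical database image
    environment:
      - SA_PASSWORD=ChangeMe!     # placeholder credential
```

Because the whole environment is described in one file, a corrupted test environment is no longer something to repair; it is something to throw away and recreate.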
Thanks to the use of shipping containers, the way of dealing with cargo in the world of transportation is highly standardised. We not only have a uniform way of loading and unloading, but other aspects of transportation, such as origin labelling, hazard identification, cargo descriptions and handling, are also highly standardised. Where, in the old days, longshoremen had to think about different holds for fitting the various items of cargo into the hull, the transportation of freight nowadays is a repeatable and relatively simple activity. Containerization has lowered shipping expense and decreased shipping time, and this has in turn helped the growth of international trade. Cargo that once arrived in cartons, crates, bales, barrels or bags now comes in factory-sealed containers, with no indication to the human eye of their contents except for a product code that machines can scan and computers trace. This system of tracking has become so exact that a two-week voyage can nowadays be timed for arrival with an accuracy of under fifteen minutes.
Just as with shipping containers, the standardisation that comes with using software containers dramatically simplifies the deployment of our applications. When you decide to deliver all of your applications in a containerized way, you use the same base commands for starting, stopping and removing each of them. The first advantage is lower maintenance costs for your deployment scripts. Moreover, thanks to the way the container technology is implemented, you also get the benefits of a uniform way of labelling (ID and origin), describing the contents and tracking the history of your containerized applications. Last but not least, containerizing your applications enables a uniform way of dealing with the context around them. Containers not only contain your applications, but also ensure that each application gets the right context (configuration, environment variables, etc.) around it.
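The uniform commands and metadata mentioned above look the same for every containerized application, whatever runs inside. A short sketch, with hypothetical container and image names:

```shell
# The same lifecycle commands apply to every container
docker stop webapp              # stop the application
docker start webapp             # start it again
docker rm -f webapp             # remove it entirely

# Uniform labelling and history, regardless of the application
docker inspect webapp           # ID, image, configuration, state
docker history mycompany/webapp:2.0   # layer-by-layer image history
```

Deployment scripts built on these commands do not need to know anything application-specific, which is where the maintenance savings come from.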
An important part of working with shipping containers is that they ensure the content of one container is isolated from the content of other containers. When I fill a container with specific cargo, say shoes and clothes, containerized transportation ensures that the content of my container is sealed and stays the same during transportation. Because shipping containers are sealed and only opened at the destination, breakage (due to less handling), pilferage and theft have been greatly reduced. In short: thanks to its isolation, the shipping container maximizes the reliability of transportation.
Just as with shipping containers, using software containers increases reliability in two different ways:
- Sealed content: the content of our Docker container at initialization is exactly the same (apart from custom parameters) as defined in our container image in the central registry. Of course, as with shipping containers, we can change the content of a container while it's running, but this can only be done by explicitly opening the container (docker exec) and changing its content.
- Isolated context: the applications running within our container see exactly the same outside context, regardless of the different contexts around the container. The container itself defines the bounded context. Just as rotting bananas outside a shipping container do not influence the quality of our clothes, applications outside our container cannot change the registry, process and file-system view inside our container. Moreover, even the resources available to applications within a container can be restricted. This resource governance ensures that my container has exactly the same resources available in my development and production environments, even when the number of other containers running in each environment differs.
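Both points above map onto standard Docker CLI flags. A minimal, hypothetical sketch (container and image names are assumptions):

```shell
# Resource governance: cap memory and CPU so the container gets the
# same resource envelope on a quiet dev box and a busy production host
docker run -d --name webapp --memory 512m --cpus 1.5 mycompany/webapp:2.0

# "Opening the sealed container": the only way to change a running
# container's content is an explicit exec into it
docker exec -it webapp cmd
```

Note that changes made via docker exec live only in that container instance; the image in the registry, and every container started from it, remain unchanged.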
We’ve seen that using software containers enables us to deliver our applications with higher speed, at lower cost and with higher quality (thanks to the standardisation and isolation aspects). Especially in DevOps/Agile organisations, containers are a must-have solution for getting the right level of deployment flexibility. Are we done now? No. Now that we know why we should care about containers, we still don’t know what Windows Containers actually are and how they are implemented internally. Therefore, in the next part of this series we’ll do a deep dive into the underlying implementation and internals of containers on Windows. If you have any questions, please do not hesitate to leave a comment below and I’ll try to answer as soon as possible. Stay tuned…