
Windows Containers – What is it and why should we care?

06 Apr, 2018

This post was originally published as an article in SDN Magazine on February 28th, 2017.
One of the hot topics within the Microsoft development community right now is undoubtedly containers. Following the success of Docker and containers on Linux, Microsoft developed a Windows container implementation for Windows Server 2016 and Windows 10. After two and a half years of development, plus a year of running this container technology in preview for insiders (Windows Server 2016 TP3 – TP5), Microsoft finally announced in September 2016 that Windows Server 2016 had been released to the public.
While container technology and Containerized Delivery have been used by non-Microsoft-focused enterprises for a few years now (Linux has had its container technology since August 2008), the Microsoft community is only at the beginning of this new journey. This is, therefore, the perfect moment to ask ourselves whether we should care about Windows containers, and whether we should look into the structure of this new technology. However, before we take a more detailed look at Windows container technology, let’s look at the way in which we have been delivering our applications for the past 10 years.

A short story about deployment maturity

If we look at the way in which we have been delivering our software during the past 10 years, we can see a shift from manually installing our applications towards an automated way of pushing the applications into production. Driven by the mindset that the reliability, speed and efficiency of delivering our applications to our customers could be improved, various tools and frameworks have been introduced over time. They all support the automated method of installing applications on any given server, e.g. WiX, PowerShell, PowerShell DSC, Chocolatey, Chef, and Puppet.

Figure 1 – Deployment maturity anno 2018

Previously, we created an MSI using WiX, and our applications were then installed by an operations colleague who was available during weekends to perform the installation in the production environment. To allow him to do his job, the development team provided him with a documented runbook of the actions he needed to take. After some time, however, we noticed that installing our applications manually had an undesirable side-effect: it was error-prone, and it took us a lot of time to find and fix the unintended mistakes made during these manual installations. This is why we – the development team – decided to mitigate this risk as much as possible by delivering a PowerShell script in addition to the MSI. The operations colleague then only had to run our main PowerShell script, passing the highly secret production variables as arguments on invocation; the rest of the installation was scripted in PowerShell or handled by our Windows installer.
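As a rough sketch, such a hand-over could look like the following invocation; the script name, parameter names and values are hypothetical stand-ins for whatever your own deployment script expects:

# Hypothetical hand-over: operations runs one script with the production secrets as arguments
.\Deploy-Application.ps1 `
    -MsiPath ".\MyWebApp.msi" `
    -Environment "Production" `
    -ConnectionString $env:PROD_CONNECTION_STRING `
    -ServiceAccount "CONTOSO\svc-mywebapp"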
Although this scripted way of installing our applications brought us more reliability and speed, we still ran into issues. The baseline of our development, test, acceptance and production environments was not the same, so some deployments failed due to missing or different libraries, configurations and tool versions. This occurred particularly in production, an environment managed by our operations team that always had to have the latest product versions installed in order to keep it secure. Issues also occurred in the development environment, where the application “works on my machine” but not on the machine of my colleague. Naturally, the question arose of how to solve these problems. We tried to make our PowerShell installation scripts more resilient, but that took a lot of time and we were still faced with the same environment issues, albeit to a lesser extent.
As a team, we decided to find a solution that would reveal the differences between the baseline installations of our various environments, and that would guarantee that all environments ended up with the same tool and assembly versions after running our installation scripts. We found plenty of tools that could help us with this, for instance PowerShell DSC, Chocolatey, Chef, and Puppet. Meanwhile, we became a DevOps team, and together with our operations colleagues we defined the desired state of our environments in our scripts. We stored the various environment-specific settings in a single configuration data store, from which the scripts retrieve the correct configuration settings, so we no longer have to specify them explicitly in the scripts themselves. As a result, we were able to deploy our applications with these desired state solutions, and with a higher level of reliability. Moreover, it saved us a lot of time during the development of deployment scripts, compared to making our plain PowerShell scripts resilient against the different baselines of our environments. However, we still encountered a number of issues…
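A minimal PowerShell DSC sketch of such a desired state could look like the example below. The configuration name, the website folder and the chosen resources are hypothetical; in practice the environment-specific values would come from the configuration data store mentioned above.

Configuration MyWebAppBaseline {
    # Describe the end state we want; DSC makes the node converge to it,
    # instead of us scripting the individual installation steps.
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        WindowsFeature WebServer {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
        File WebContent {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = 'C:\inetpub\mywebapp'
        }
    }
}

# Compile the configuration to a MOF file and apply it to this machine
MyWebAppBaseline -OutputPath .\MyWebAppBaseline
Start-DscConfiguration -Path .\MyWebAppBaseline -Wait -Verbose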
While we now have an automated, fast and reliable way of delivering our applications to production, we still have to reboot our production machines after uninstallations to ensure that all registry keys, caches and files are really cleared. Besides that, we still experience a lot of waiting time and downtime caused by these installations and uninstallations. Even with a scale-out solution, we still have to wait until all our installation scripts have run on each machine before the upgrade of our entire environment is finished. In short, it is high time for a new way to deploy our applications, so let’s take a look at the world of containerized delivery.

Why should we care?

A lot of information has already been written about the advantages that containers will bring us. Most of those articles and blogs end up with the same benefits: containers enable us to deliver our applications faster, cheaper and in a more reliable way. And it is true that these benefits are the very reasons why you should consider using containers for deploying your applications. However, let’s prove these benefits by taking a closer look at the underlying implementation of containers.

Faster

Containers are the resulting artifacts of a new level of virtualization. Whereas things like Virtual Memory and Virtual Machines are a result of hardware virtualization, containers are a result of so-called operating system-level virtualization. This means that containers share the operating system, whereas VMs each need their own operating system. The interesting fact here is that this level of virtualization enables a whole new level of application delivery. Other ways of delivering applications had to bring our environment into a given state; with containerized delivery, however, we just have to “move/ship” our container to another container host and we are up and running. The container itself contains the installed application, which results in instant startup times during deployment. Instead of installing your application at the moment it is deployed, in containerized delivery the application is installed in the container during the container build process (Docker build).

Figure 2 – Docker build-ship-run workflow
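As a minimal sketch of this build-ship-run workflow for a Windows container, the Dockerfile and commands below install a hypothetical MSI at build time; the base image tag, file names and registry name are assumptions for illustration only.

# Dockerfile (sketch): the application is installed while building the image, not at deployment time
FROM microsoft/windowsservercore
COPY MyWebApp.msi C:/install/MyWebApp.msi
RUN msiexec /i C:\install\MyWebApp.msi /qn
EXPOSE 80

# Build the image once, ship it to a registry, and run it on any Windows container host
docker build -t mycompany/mywebapp:1.0 .
docker push mycompany/mywebapp:1.0
docker run -d -p 80:80 mycompany/mywebapp:1.0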

Better

Another aspect of the Docker build process is that, during the build, the intermediate container states are extracted as so-called “image layers” and the final end state is extracted as a so-called “container image”. This image and its layers are a blueprint of the running application within the container at that moment, but in addition they contain a snapshot of the state of the file system, registry and running processes. Because the container itself contains the installed application, including the required environment context (registry keys, processes, assigned resources, and files), we can be sure that our application will run within exactly the same context across different environments, even when these environments (development, test, production) have different memory resources and run different numbers of other containers.
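You can inspect these layers yourself; as a small sketch, assuming the hypothetical image from the build example above:

# Each line corresponds to one image layer produced by an instruction in the Dockerfile
docker history mycompany/mywebapp:1.0

# Full image metadata: entry point, environment variables and the layer digests
docker inspect mycompany/mywebapp:1.0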

Cheaper

The cost-saving aspect of working with containers applies especially to scenarios in which you provision a Virtual Machine purely to ensure registry, process and file system isolation between the applications you serve to your customers. Container technology now allows us to use containers for these scenarios instead of Virtual Machines. Because containers share the operating system (license) and have a much smaller footprint (storage) than Virtual Machines, you will save a lot of money when you need isolation between applications, or when you need to run multiple versions or instances of the same application on a single machine.
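As a sketch of that last scenario, again assuming the hypothetical image from before, two versions of the same application can run side by side on a single container host, isolated from each other and mapped to different host ports:

docker run -d --name mywebapp-v1 -p 8080:80 mycompany/mywebapp:1.0
docker run -d --name mywebapp-v2 -p 8081:80 mycompany/mywebapp:2.0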

Figure 3 – VMs vs containers

Underlying implementation

To explain how containers are implemented internally within the Windows operating system, you have to know about two important concepts: User Mode and Kernel Mode. Before the launch of Windows Server 2016, each Windows operating system we used consisted of a single “User Mode” and a single “Kernel Mode”. These are two modes between which the processor continuously switches, depending on what type of code it has to run.

Kernel Mode

The Kernel Mode of an operating system exists for drivers that need unrestricted access to the underlying hardware. Normal programs (User Mode) have to use the operating system APIs to access hardware or memory. Code running in Kernel Mode has direct access to those resources and shares the same memory locations/virtual address space as the operating system and other kernel drivers. Running code in Kernel Mode is therefore very risky, because data that belongs to the operating system or another driver could be compromised if your kernel-mode code accidentally writes data to the wrong virtual address. If a kernel-mode driver crashes, the entire operating system crashes. Running code within the kernel space should therefore be kept to a minimum, which is exactly why User Mode was introduced.

User Mode

In the User Mode, the code always runs in a separate process (user space), which has its own dedicated set of memory locations (private virtual address space). Because each application’s virtual address space is private, one application cannot alter data that belongs to another application. Each application runs in isolation, and if an application crashes, the crash is limited to that one application. In addition to being private, the virtual address space of a user-mode application is limited. A processor running in user mode cannot access virtual addresses that are reserved for the operating system. Limiting the virtual address space of a user-mode application prevents the application from altering, and possibly damaging, critical operating system data.

Technical implementation of Windows containers

But what do these processor modes have to do with containers? Each container is essentially a separate “User Mode” with a couple of additional features, such as namespace isolation, resource governance and the concept of a union file system. This means that Microsoft had to adapt the Windows operating system to support multiple User Modes, which was very tough considering the high level of integration between both modes in earlier Windows versions. The following diagram gives a general idea of this new multi-User Mode architecture.

Figure 4 – Different User Modes in Windows Server 2016

Looking at the User Modes of Windows Server 2016, we can identify two types: the Host User Mode and the Container User Modes. The Host User Mode is identical to the normal User Mode that we were familiar with. The goal of this User Mode is to facilitate running applications on the host. A new feature of Windows Server 2016 is that, once you enable the Containers feature, this Host User Mode will contain some additional container management technologies, which ensure that containers work on Windows.
The core of this container technology is the Compute Services abstraction, which exposes the low-level container capabilities provided by the kernel via a public API. In fact, these services only launch Windows containers, keep track of them while they are running, and manage the functionality required for restarting. The rest of the container management functionality is handled by the Docker Engine, which uses the Go language bindings that Microsoft offers for communicating with these Compute Services APIs.
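On a Windows Server 2016 host, this stack is typically set up along the following lines; this sketch uses the DockerMsftProvider PowerShell module that Microsoft published for installing the Docker Engine, and assumes an internet-connected server:

# Install the Docker Engine (this also enables the Windows Containers feature)
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force

# After the reboot, the Docker Engine talks to the Compute Services on our behalf
docker version
docker info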

Windows vs Linux containers

The right side of the diagram shows two different Windows Server containers. As with Linux containers, these containers contain the processes of the applications that were started within them. However, because Microsoft has always exposed the public interfaces of its kernel via DLLs rather than via syscalls (as in Linux), and because those DLLs are highly integrated with each other, Windows containers also need some extra system processes and services to run.
Note: within each container you’ll see an smss process running, which launches these system services.
Another difference between the Linux and Windows container implementations is the concept of Hyper-V containers. Because all the processes that run within a normal Windows Server container can be seen from the container host, Microsoft decided to introduce a new container type that offers a secure solution for hostile multi-tenancy situations. This Hyper-V container has the same functionality as a normal Windows Server container, apart from the fact that it will always run within a minimal (utility) Hyper-V VM in order to create an extra security boundary around the container. It is, in effect, a hybrid of a VM and a container.
This also means that some core kernel elements and a Guest Compute Service are copied into this VM. While Hyper-V containers are great for situations in which the containers you run on a shared set of hosts are not within the same trust boundary, the extra Hyper-V virtualization layer makes these containers a little slower to start up. It is therefore advisable to use them only when you really need that extra security boundary.
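You can observe this visibility difference yourself. As a small sketch, assuming the hypothetical container started in an earlier example, the processes of a process-isolated Windows Server container also appear in the process list of the host:

# List the container’s processes as Docker sees them
docker top mywebapp-v1

# On the host, the same processes show up next to the host’s own processes
# (you’ll see an extra smss instance per running Windows Server container)
Get-Process smss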

Figure 5 – Hyper-V containers

Note: You can specify which container type you want to run via the --isolation argument of the docker run command. To be able to run Hyper-V containers, the Hyper-V feature must be enabled on the container host.
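A minimal sketch of this, once more assuming the hypothetical image from before:

# Enable the Hyper-V feature on the container host first (requires a reboot)
Install-WindowsFeature Hyper-V
Restart-Computer -Force

# Run the same image as a Hyper-V container instead of a Windows Server container
docker run -d --isolation=hyperv -p 8082:80 mycompany/mywebapp:1.0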

Cornell Knulst
Cornell works for Xpirit, Hilversum, The Netherlands, as a trainer/architect. He specializes in Application Lifecycle Management and Continuous Delivery, with a special focus on Microsoft-based technologies.