This post was originally published as an article in SDN Magazine on October 13th, 2017. During the past year I have supported several clients on their journey toward Containerized Delivery on the Microsoft stack. In this blog series I'd like to share the eight practices I learned while practicing Containerized Delivery on the Microsoft stack with Docker, in both Greenfield and Brownfield situations. In this first blog post of the series I want to talk about the first practice: small and reusable image layers.
PRACTICE 1: Small, reusable image layers

Once you start containerizing .NET workloads, you need to decide how to modularize your container images. A good starting point is to review your architecture and determine which parts of your application landscape you need to scale selectively or release independently. Each container image you create should be self-contained and must be able to run on its own.

There is another important aspect you have to think about: container image layering. As you may or may not know, container images are the blueprint for your containers. Images consist of image layers. Each image layer is created during the Docker build process as the resulting artifact of a set of instructions (e.g., creating a directory, enabling Windows features) specified in the Dockerfile. This process is shown in Figure 1.

The nice thing about Docker is that this image-layering principle is also used to optimize the performance and speed of Docker. Once Docker notices that a given layer is already available in the image layer cache on your local machine, it will not download, rebuild or add that layer again. For example, if you have two ASP.NET container images – one for Website 1 and one for Website 2 – Docker will reuse the ASP.NET, IIS and OS layers, both at container runtime and in the container image cache. This is shown in Figure 2.

If you implement your container image layers in a smart way, you'll see an increase in the performance of your containerized workloads and the speed of their delivery. Moreover, you'll see a decrease in the amount of storage your container images require. The following practices are related to container image layers:
- Sequence of layers: Try to order and structure the layers of your container images in such a way that you reuse them as much as possible. Figure 3 shows how I achieved this for one of my clients by creating different images.
- Combining actions in a single instruction: Try to combine multiple actions (e.g., enabling a Windows feature, creating a filesystem directory) in a single Dockerfile instruction as much as possible. By default, Docker creates a separate container image layer for each individual Dockerfile instruction. If you don't need a separate image layer for later reuse, combine multiple actions in a single instruction line to avoid storage overhead from extra image layers.
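To illustrate how layer ordering enables reuse, here is a minimal sketch of a shared layer hierarchy, in the spirit of Figures 2 and 3. The image name `mycompany/aspnet-base`, the base image tag and the website directory are my own illustrative assumptions, not taken from the original article.

```dockerfile
# Hypothetical shared base image for all ASP.NET workloads.
# Built once, e.g.: docker build -t mycompany/aspnet-base .
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
# One RUN instruction -> one image layer covering both Windows features.
RUN Add-WindowsFeature Web-Server,NET-Framework-45-ASPNET
```

```dockerfile
# Hypothetical image for Website 1; Website 2 starts from the same FROM line.
FROM mycompany/aspnet-base
RUN New-Item -ItemType Directory -Path C:\inetpub\website1
COPY . C:\inetpub\website1
```

Because both website images start from `mycompany/aspnet-base`, the OS, IIS and ASP.NET layers are stored and pulled only once; `docker history <image>` shows the individual layers and their sizes.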
For example, instead of:

```dockerfile
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
RUN Add-WindowsFeature NET-Framework-45-Core
RUN Add-WindowsFeature NET-Framework-45-ASPNET
```

combine both Add-WindowsFeature actions into a single instruction:
```dockerfile
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
RUN Add-WindowsFeature NET-Framework-45-Core,NET-Framework-45-ASPNET
```

Another example of running multiple PowerShell commands in one instruction is:
```dockerfile
RUN Invoke-WebRequest "https://aka.ms/InstallAzureCliWindows" -OutFile az.msi -UseBasicParsing; `
    Start-Process msiexec.exe -ArgumentList '/i', 'C:\az.msi', '/quiet', '/norestart' -NoNewWindow -Wait; `
    Remove-Item az.msi; `
    $env:PATH = $env:AZ_PATH + $env:PATH; `
    [Environment]::SetEnvironmentVariable('PATH', $env:PATH, [EnvironmentVariableTarget]::Machine)
```

Interested in the next practice? See PRACTICE 2 – Multi Staged Builds.
Cornell works for Xpirit, Hilversum, The Netherlands, as a trainer/architect. He specializes in the domain of Application Lifecycle Management and Continuous Delivery, with a special focus on Microsoft-based technologies.