The eight practices for Containerized Delivery on the Microsoft stack – PRACTICE 5: Secure Containerized Delivery

06 Apr, 2018
This post was originally published as an article in SDN Magazine on October 13th, 2017.
During the past year I supported several clients on their journey toward Containerized Delivery on the Microsoft stack. In this blog series I’d like to share eight practices I learned while practicing Containerized Delivery on the Microsoft stack with Docker, in both Greenfield and Brownfield situations. In this fifth blog post of the series I want to talk about Secure Containerized Delivery.

PRACTICE 5: Secure Containerized Delivery

Securing your container infrastructure and deployments is an important aspect of Containerized Delivery. There are a lot of aspects to keep in mind here, so I will highlight the most important ones.

  • Harden your images, containers, daemons and hosts
    When you set up your containerized infrastructure, it is important to harden your infrastructure elements against threats. To help you with this, the Center for Internet Security has published a Docker benchmark that includes configuration and hardening guidelines for containers, images, and hosts. Have a look at this benchmark at https://www.cisecurity.org/benchmark/docker/. Based on this benchmark, there is a Linux container available that checks, in a scripted way, for dozens of common best practices around deploying Docker containers in production. Unfortunately, this implementation is not available for Windows hosts right now, but if you use Linux container hosts for your ASP.NET Core applications, you should definitely check it out at https://github.com/docker/docker-bench-security.
    One important aspect of hardening your container hosts is protecting your Docker daemon with TLS. A great, fast and simple way to achieve this is to use Stefan Scherer’s dockertls-windows container. This container generates all the TLS certificates you need to access the secured container daemon. Save the .pem files in a central, secure location so that you can use their contents whenever you want to access the secured Docker daemon. If you use VSTS for CI/CD, you can store the contents of the various .pem files directly in the Docker Host service endpoint.
  • Know the origin and content of your images
    As mentioned in practice 3, there are two Microsoft base images from which all Windows container images should derive. However, there are also a lot of other public container images available on Docker Hub, such as microsoft/iis and microsoft/powershell, and even images from other publishers. Using those out-of-the-box images accelerates the development of your systems, but consuming public image definitions can expose your production landscape to significant risk. For example, how can you be sure that those images do not contain any vulnerabilities? How do you ensure that the owner of an image will maintain its definition over time as vulnerabilities and exploits are discovered? It is important to know the origin and content of the images you consume. Luckily there are many tools available to help you fill this gap. For example, you can use Docker Notary to check the authenticity of images, or Docker Security Scanning to scan your images for known vulnerabilities. You can also use other solutions such as Aqua and Twistlock. Whatever tool you choose, make sure you put a process in place that forces you to use only scanned public images from trusted origins.
    For existing internal images, it is important to perform regular checks and actively maintain those images with regard to vulnerabilities and exploits. For new internal images, it is important to reduce the attack surface as much as possible. Many of the out-of-the-box images on Docker Hub, such as microsoft/iis and microsoft/aspnet, enable more features than your workload needs. At one of my clients this was the reason we decided to create our own internal IIS base image with only those Windows features and services enabled that we really needed. For example, the default IIS image enables all Web-Server sub-features; the image we created enables only a few of them, e.g. Web-Static-Content, Web-Http-Logging, Web-Stat-Compression and Web-Dyn-Compression. By creating our own internal IIS image we made our workload more secure and achieved better performance. To find out which Windows features you really need, take a look at the PowerShell Get-WindowsOptionalFeature –Online and Get-WindowsFeature cmdlets.
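On a Linux container host, the docker-bench-security checks mentioned above run as a single docker run command. The sketch below follows the shape documented in the project’s README; the exact mounts and capabilities vary between versions of the tool, so consult the repository before relying on it:

```shell
# Run the CIS Docker benchmark checks against the local Linux host.
# The tool needs read access to Docker's state and the host config,
# hence the host namespaces and read-only mounts.
docker run --rm -it --net host --pid host --userns host \
  --cap-add audit_control \
  -v /etc:/etc:ro \
  -v /var/lib:/var/lib:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --label docker_bench_security \
  docker/docker-bench-security
```

The report flags each check as PASS, WARN or INFO, which makes it easy to wire into a scheduled CI job.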
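Once the daemon is protected with TLS, every client call must present the generated certificates. With the .pem files produced by the dockertls-windows container at hand, connecting to the secured daemon looks like this (the host name is a placeholder; 2376 is the conventional port for a TLS-protected Docker daemon):

```shell
# Verify the daemon's certificate against our CA and authenticate
# ourselves with a client certificate and key.
docker --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=cert.pem \
  --tlskey=key.pem \
  -H tcp://mydockerhost:2376 version
```

These are the same values you would paste into the VSTS Docker Host service endpoint: ca.pem as the CA certificate, and cert.pem/key.pem as the client certificate and key.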
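Docker Notary integrates with the regular CLI through Docker Content Trust: with one environment variable set, pulls and runs only succeed for images that carry a valid signature. The image name below is a placeholder:

```shell
# Enable Docker Content Trust for this shell session
# (on Windows/PowerShell: $env:DOCKER_CONTENT_TRUST = "1").
export DOCKER_CONTENT_TRUST=1

# This pull now fails if no trust data (signature) exists for the tag,
# or if the signed digest does not match what the registry serves.
docker pull myregistry.example.com/myimage:1.0
```

Enabling this variable on your build agents is a cheap way to enforce the “trusted origins only” process mentioned above.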
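A stripped-down internal IIS base image along these lines could be sketched as follows. This is an illustrative sketch, not our exact image: the base image tag must match your host’s Windows version, the feature names follow the Get-WindowsFeature naming used above, and a production image would still need an entrypoint (such as IIS’s ServiceMonitor) to keep the container alive:

```dockerfile
# Hypothetical internal IIS base image: install only the Web-Server
# sub-features the workload actually needs, instead of the full set
# that microsoft/iis enables.
FROM microsoft/windowsservercore:ltsc2016

RUN powershell -Command \
    Install-WindowsFeature Web-Server, \
        Web-Static-Content, \
        Web-Http-Logging, \
        Web-Stat-Compression, \
        Web-Dyn-Compression

EXPOSE 80
```

Teams then derive their application images FROM this internal image, so the reduced feature set is enforced across the whole landscape.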

Interested in the next practice? See PRACTICE 6: Dealing with secrets.

Cornell Knulst
Cornell works for Xpirit, Hilversum, The Netherlands, as a trainer/architect. He is specialized in the domain of Application Lifecycle Management and Continuous Delivery, with a special focus on Microsoft-based technologies.