This post was originally published as an article in SDN Magazine on October 13th, 2017.
During the past year I supported several clients in their journey toward Containerized Delivery on the Microsoft stack. In this blog series I'd like to share eight practices I learned while practicing Containerized Delivery on the Microsoft stack using Docker, both in Greenfield and in Brownfield situations. In this sixth blog post of the series I want to talk about dealing with secrets.
PRACTICE 6: Dealing with secrets
Before the container era, we used to put our secrets (e.g., credentials, certificates, connection strings) at a given location in the file system, alongside our application files. Within a containerized world, there is a problem with this approach. Because container images contain the application together with registry settings, environment variables and other file system content, this approach would mean that each team member can see the secrets by spinning up a new container based on this image. Dealing with secrets of containerized applications therefore means that you need to specify your secrets on container initialization and store them outside your container images. But how can you achieve this?
Before we look at the solution for dealing with secrets, you have to know exactly what you need to treat as a secret. Many values in configuration files are not secrets, e.g., endpoints, whereas passwords and SSL certificates definitely are. It is important to be aware of this separation between secrets and configuration settings, because in an ideal world you manage each of them in a different way.
Looking at configuration settings, there are several ways to manage them. The option I like most is to make use of a configuration container in which all configuration settings (e.g., endpoints) are stored. At the time of container initialization, you can make use of this container to get the right endpoints for your application, for instance a service bus topic endpoint or an external SMS endpoint. The nice thing about a configuration container is that you don’t have to change the content of all other containers to deal with configurations over multiple environments like Dev, Test and Production. By making use of Docker Compose you can define this configuration store as a separate service and use its Docker DNS name to get the latest configuration settings from that store.
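To illustrate the configuration-container idea, here is a minimal docker-compose sketch. The image names, the `CONFIG_URL` variable and the HTTP endpoint are hypothetical; the point is that the application container only knows the Docker DNS name of the configuration service, so the same application image can run unchanged in Dev, Test and Production.

```yaml
version: "3.3"
services:
  configstore:
    # Hypothetical image that serves configuration settings
    # (e.g., service bus or SMS endpoints) over HTTP.
    image: mycompany/configstore:latest
  webapp:
    # Hypothetical application image; it fetches its endpoints from
    # the configstore service at startup via its Docker DNS name.
    image: mycompany/webapp:latest
    environment:
      - CONFIG_URL=http://configstore/settings
    depends_on:
      - configstore
```

Because `configstore` is just another service in the compose file, swapping environments only means deploying a configuration container with different content, not rebuilding the application images.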
Until the beginning of this year, the most frequently used solution for dealing with secrets was either to make use of volume mappings or to make use of environment variables that contain the actual secrets. However, neither of these options is very secure. In the case of environment variables, your secrets are accessible by any process in the container, preserved in intermediate layers of an image, visible in docker inspect, and shared with any container linked to the container. In the case of volume mappings, the disadvantage is that your containers become dependent on the content of a data volume, which makes them unnecessarily stateful instead of stateless.
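The first of these leakage problems is easy to demonstrate. The short sketch below (a standalone demo, not tied to Docker itself) shows that a secret placed in an environment variable is readable by any child process in the container, not just your application:

```python
import os
import subprocess
import sys

# Simulate what `docker run -e DB_PASSWORD=...` effectively does:
# the secret ends up in the process environment.
os.environ["DB_PASSWORD"] = "s3cret"

# Any child process (here: another Python interpreter) inherits the
# environment and can read the secret without special privileges.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"],
    capture_output=True,
    text=True,
)
print(child.stdout.strip())  # prints: s3cret
```

On top of that, `docker inspect` prints the same value in the container's `Env` section, so anyone with access to the Docker host can read it as well.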
Luckily, since the beginning of this year, the best option is to make use of the secrets management solution of the different cluster implementations, e.g., Kubernetes Secrets or Docker Secrets (Docker 17.05 for Windows containers). The nice thing about secrets management at the cluster level is that secrets are automatically distributed across the container hosts. Another benefit is that the same secret name can be used across multiple clusters. If you have separate Development, Test and Acceptance clusters, you can reuse the secret name, and your containers only need to know the name of the secret in order to function in all three environments. Creating those secrets in your container cluster environment can be orchestrated by the tools you are using for your delivery pipeline, e.g., VSTS.
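Both Docker Secrets and Kubernetes Secrets present a secret to the container as a file, so consuming one from application code comes down to reading that file by name. The helper below is a minimal sketch of that idea; the default paths follow the documented conventions (`/run/secrets` in Linux containers, `C:\ProgramData\Docker\secrets` in Windows containers), and the `db_password` secret name is an assumption for illustration:

```python
from pathlib import Path

# Conventional locations where the cluster mounts secrets as files.
LINUX_SECRETS_DIR = Path("/run/secrets")                      # Linux containers
WINDOWS_SECRETS_DIR = Path(r"C:\ProgramData\Docker\secrets")  # Windows containers

def read_secret(name: str, secrets_dir: Path = LINUX_SECRETS_DIR) -> str:
    """Read the value of a secret that the cluster mounted as a file."""
    return (secrets_dir / name).read_text().strip()

# Inside a running service you would call, for example:
#   db_password = read_secret("db_password")
```

Because the application only references the secret's name, the same container image works unchanged in the Development, Test and Acceptance clusters, as long as each cluster defines a secret with that name.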
Interested in the next practice? See PRACTICE 7: Explicit Dependency Management.