
A Highly Available Docker Container Platform Using CoreOS and Consul

24 Mar, 2015

Docker containers are hot, but containers by themselves are not very interesting. They need an ecosystem to turn them into 24×7 production deployments; just handing your container names to operations does not cut it.
In this blog post, we will show you how CoreOS can be used to provide a Highly Available Docker Container Platform as a Service, with a standard way to deploy Docker containers. Consul is added to the mix to create a lightweight HTTP router to any Docker application offering an HTTP service.
We will be killing a few processes and machines on the way to prove our point…

Architecture

The basic architecture for our Docker Container Platform as a Service consists of the following components:
(Architecture diagram: coreos-caas)

  • CoreOS cluster
    The CoreOS cluster will provide us with a cluster of highly available Docker hosts. CoreOS is an open source, lightweight operating system based on the Linux kernel and provides an infrastructure for clustered deployments of applications. The interesting part of CoreOS is that you cannot install applications or packages on CoreOS itself. Any custom application has to be packaged and deployed as a Docker container. At the same time, CoreOS provides only basic functionality for managing these applications.
  • Etcd
    etcd is CoreOS's distributed key-value store and provides a reliable mechanism for distributing data through the cluster.
  • Fleet
    Fleet is the cluster-wide init system of CoreOS, which allows you to schedule applications to run inside the cluster and provides the much-needed nanny system for your apps.
  • Consul
    Consul from HashiCorp is a tool that eases service discovery and configuration. Consul allows services to be discovered via DNS and HTTP and provides us with the ability to respond to changes in the service registration (see the example query after this list).
  • Registrator
    The Registrator from Gliderlabs will automatically register and deregister any Docker container as a service in Consul. The registrator runs on each Docker Host.
  • HttpRouter
    The HTTP router dynamically routes HTTP traffic to any application providing an HTTP service, running anywhere in the cluster. It listens on port 80.
  • Load Balancer
    An external load balancer which will route the HTTP traffic to any of the CoreOS nodes listening on port 80.
  • Apps
    These are the actual applications that may advertise HTTP services to be discovered and accessed. These will be provided by you.
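As a taste of what this registry gives you, the sketch below shows how a registered service could be looked up by name from one of the CoreOS nodes. It assumes the cluster from the next section is running and that the paas-monitor application deployed later in this post has been registered; Consul's DNS interface listens on the Docker bridge address 172.17.42.1, as the docker ps output further down shows.
[bash]
# DNS-based discovery: the SRV records carry the host and port of every instance.
dig @172.17.42.1 paas-monitor.service.consul SRV
[/bash]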


Getting Started

In order to get your own container platform as a service running, we have created an Amazon AWS CloudFormation template which installs the basic services: Consul, Registrator, HttpRouter and the load balancer.
In the infrastructure we create two autoscaling groups: one for the Consul servers, which is limited to 3 to 5 machines, and one for the Consul clients, which is basically unlimited and depends on your needs.
The nice thing about an autoscaling group is that it will automatically launch a new machine if the number of machines drops below the minimum or desired number. This adds robustness to the platform.
The Amazon Elastic Load Balancer balances incoming traffic across port 80 of the machines in either autoscaling group.
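If you want to verify those group sizes after the stack has been created, a quick check with the AWS CLI could look like this (assuming the CLI is configured for the same account and region):
[bash]
# List name, minimum, maximum and desired size of all autoscaling groups.
aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[].[AutoScalingGroupName,MinSize,MaxSize,DesiredCapacity]' \
  --output table
[/bash]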
We created a little script that creates your CoreOS cluster. It assumes that you are running MacOS and have the required command line utilities installed; it may work on other platforms, but we did not test that.

In addition, the CloudFormation template assumes that you have a Route53 HostedZone in which we can add records for your domain.
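A quick way to check that a suitable HostedZone exists, again assuming the AWS CLI is configured:
[bash]
# List the names of your Route53 hosted zones; the -d domain you pass to
# create-stack.sh below should fall under one of them.
aws route53 list-hosted-zones --query 'HostedZones[].Name' --output text
[/bash]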

[bash]
git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service
cd coreos-container-platform-as-a-service
./bin/create-stack.sh -d cargonauts.dutchdevops.net

{
"StackId": "arn:aws:cloudformation:us-west-2:233211978703:stack/cargonautsdutchdevopsnet/b4c802f0-d1ff-11e4-9c9c-5088484a585d"
}
INFO: create in progress. sleeping 15 seconds…
INFO: create in progress. sleeping 15 seconds…
INFO: create in progress. sleeping 15 seconds…
INFO: create in progress. sleeping 15 seconds…
INFO: create in progress. sleeping 15 seconds…
INFO: create in progress. sleeping 15 seconds…
INFO: create in progress. sleeping 15 seconds…
INFO: create in progress. sleeping 15 seconds…
INFO: create in progress. sleeping 15 seconds…
INFO: create in progress. sleeping 15 seconds…
INFO: create in progress. sleeping 15 seconds…
CoreOSServerAutoScale 54.185.55.139 10.230.14.39
CoreOSServerAutoScaleConsulServer 54.185.125.143 10.230.14.83
CoreOSServerAutoScaleConsulServer 54.203.141.124 10.221.12.109
CoreOSServerAutoScaleConsulServer 54.71.7.35 10.237.157.117
[/bash]
Now you are ready to look around. Use one of the external IP addresses to set up a tunnel for fleetctl.
[bash]
export FLEETCTL_TUNNEL=54.203.141.124
[/bash]
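fleetctl tunnels its commands over SSH, so the private key of the stack must be loaded into your ssh-agent first; a minimal sketch, using the key location shown later for the Consul console tunnel:
[bash]
# Load the generated key so that fleetctl can authenticate to the CoreOS nodes.
ssh-add stacks/cargonautsdutchdevopsnet/cargonauts.pem
[/bash]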
fleetctl is the command line utility that allows you to manage the units that you deploy on CoreOS.
[bash]
fleetctl list-machines
….
MACHINE IP METADATA
1cdadb87… 10.230.14.83 consul_role=server,region=us-west-2
2dde0d31… 10.221.12.109 consul_role=server,region=us-west-2
7f1f2982… 10.230.14.39 consul_role=client,region=us-west-2
f7257c36… 10.237.157.117 consul_role=server,region=us-west-2
[/bash]
will list all the machines in the platform with their private IP addresses and roles. As you can see, we have tagged 3 machines with the consul server role and 1 machine with the consul client role. To see all the Docker containers that have been started on the individual machines, you can run the following script:
[bash]
for machine in $(fleetctl list-machines -fields=machine -no-legend -full) ; do
fleetctl ssh $machine docker ps
done

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ccd08e8b672f cargonauts/consul-http-router:latest "/consul-template -c 6 minutes ago Up 6 minutes 10.221.12.109:80->80/tcp consul-http-router
c36a901902ca progrium/registrator:latest "/bin/registrator co 7 minutes ago Up 7 minutes registrator
fd69ac671f2a progrium/consul:latest "/bin/start -server 7 minutes ago Up 7 minutes 172.17.42.1:53->53/udp, 10.221.12.109:8300->8300/tcp, 10.221.12.109:8301->8301/tcp, 10.221.12.109:8301->8301/udp, 10.221.12.109:8302->8302/udp, 10.221.12.109:8302->8302/tcp, 10.221.12.109:8400->8400/tcp, 10.221.12.109:8500->8500/tcp consul
….
[/bash]
To inspect the Consul console, you first need to set up a tunnel to port 8500 on a server node in the cluster:
[bash]
ssh-add stacks/cargonautsdutchdevopsnet/cargonauts.pem
ssh -A -L 8500:10.230.14.83:8500 core@54.185.125.143
open http://localhost:8500
[/bash]
Consul Console
You will now see that there are two services registered: consul and the consul-http-router. Consul registers itself, and the HTTP router was detected and registered by the Registrator on all 4 machines.
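Besides the web UI, the same information is available from Consul's HTTP API through the tunnel you just opened; a small check, assuming the tunnel is still up:
[bash]
# List all registered services and their tags via the forwarded port.
curl -s http://localhost:8500/v1/catalog/services
[/bash]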

Deploying an application

Now we can deploy an application, and we have a wonderful app to do so with: the paas-monitor. It is a simple web application which continuously polls the status of a backend service and shows in a table who is responding.
In order to deploy this application, we have to create a fleet unit file, which is basically a systemd unit file. It describes all the commands needed to manage the life cycle of a unit. The paas-monitor unit file looks like this:
[code]
[Unit]
Description=paas-monitor
[Service]
Restart=always
RestartSec=15
ExecStartPre=-/usr/bin/docker kill paas-monitor-%i
ExecStartPre=-/usr/bin/docker rm paas-monitor-%i
ExecStart=/usr/bin/docker run --rm --name paas-monitor-%i --env SERVICE_NAME=paas-monitor --env SERVICE_TAGS=http -P --dns 172.17.42.1 --dns-search=service.consul mvanholsteijn/paas-monitor
ExecStop=/usr/bin/docker stop paas-monitor-%i
[/code]
It states that this unit should always be restarted, with a 15 second interval. Before it starts, it stops and removes the previous container (ignoring any errors), and when it starts, it runs the Docker container in the foreground (non-detached) so that systemd can detect when the process has stopped. Finally, there is also a stop command.
The file also contains %i: this makes it a template file, which means that more instances of the unit can be started.
In the environment settings of the Docker container, hints for the Registrator are set. The environment variable SERVICE_NAME indicates the name under which it would like to be registered in Consul, and SERVICE_TAGS indicates which tags should be attached to the service. These tags allow you to select the ‘http’ services in a domain or even from a single container.
If the container exposes more ports, for instance 8080 and 8081 for HTTP and administrative traffic, you could specify per-port environment variables:
[code]
SERVICE_8080_NAME=paas-monitor
SERVICE_8080_TAGS=http
SERVICE_8081_NAME=paas-monitor-admin
SERVICE_8081_TAGS=admin-http
[/code]
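To illustrate how the Registrator interprets these hints, the sketch below starts such a multi-port container by hand; the image name is made up and only serves as an example:
[bash]
# Hypothetical multi-port container: Registrator registers one Consul service per
# exposed port, using the SERVICE_<port>_* variables as service name and tags.
docker run -d -P \
  --env SERVICE_8080_NAME=paas-monitor --env SERVICE_8080_TAGS=http \
  --env SERVICE_8081_NAME=paas-monitor-admin --env SERVICE_8081_TAGS=admin-http \
  example/multi-port-app
[/bash]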
Deploying the file goes in two stages: submitting the template file and starting an instance:
[bash]
cd fleet-units/paas-monitor
fleetctl submit paas-monitor@.service
fleetctl start paas-monitor@1
Unit paas-monitor@1.service launched on 1cdadb87…/10.230.14.83
[/bash]
Now fleet reports that the unit is launched, but that does not mean it is running. In the background, Docker has to pull the image, which takes a while. You can monitor the progress using fleetctl status.
[bash]
fleetctl status paas-monitor@1
paas-monitor@1.service – paas-monitor
Loaded: loaded (/run/fleet/units/paas-monitor@1.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Tue 2015-03-24 09:01:10 UTC; 2min 48s ago
Process: 3537 ExecStartPre=/usr/bin/docker rm paas-monitor-%i (code=exited, status=1/FAILURE)
Process: 3529 ExecStartPre=/usr/bin/docker kill paas-monitor-%i (code=exited, status=1/FAILURE)
Main PID: 3550 (docker)
CGroup: /system.slice/system-paas\x2dmonitor.slice/paas-monitor@1.service
└─3550 /usr/bin/docker run --rm --name paas-monitor-1 --env SERVICE_NAME=paas-monitor --env SERVICE_TAGS=http -P --dns 172.17.42.1 --dns-search=service.consul mvanholsteijn/paas-monitor
Mar 24 09:02:41 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: 85071eb722b3: Pulling fs layer
Mar 24 09:02:43 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: 85071eb722b3: Download complete
Mar 24 09:02:43 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: 53a248434a87: Pulling metadata
Mar 24 09:02:44 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: 53a248434a87: Pulling fs layer
Mar 24 09:02:46 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: 53a248434a87: Download complete
Mar 24 09:02:46 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: b0c42e8f4ac9: Pulling metadata
Mar 24 09:02:47 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: b0c42e8f4ac9: Pulling fs layer
Mar 24 09:02:49 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: b0c42e8f4ac9: Download complete
Mar 24 09:02:49 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: b0c42e8f4ac9: Download complete
Mar 24 09:02:49 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: Status: Downloaded newer image for mvanholsteijn/paas-monitor:latest
[/bash]
Once it is running, you can navigate to http://paas-monitor.cargonauts.dutchdevops.net and click on start.
(Screenshot: the paas-monitor web application)
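You can also exercise the same route from the command line; the exact hostname depends on the domain you passed to create-stack.sh:
[bash]
# Request the paas-monitor through the load balancer and the consul-http-router.
curl -s http://paas-monitor.cargonauts.dutchdevops.net/ | head
[/bash]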

You can now add new instances and watch them appear in the paas-monitor! It definitely takes a while because the Docker images have to be pulled from the registry before they can be started, but in the end they will all appear!
[bash]
fleetctl start paas-monitor@{2..10}
Unit paas-monitor@2.service launched on 2dde0d31…/10.221.12.109
Unit paas-monitor@4.service launched on f7257c36…/10.237.157.117
Unit paas-monitor@3.service launched on 7f1f2982…/10.230.14.39
Unit paas-monitor@6.service launched on 2dde0d31…/10.221.12.109
Unit paas-monitor@5.service launched on 1cdadb87…/10.230.14.83
Unit paas-monitor@8.service launched on f7257c36…/10.237.157.117
Unit paas-monitor@9.service launched on 1cdadb87…/10.230.14.83
Unit paas-monitor@7.service launched on 7f1f2982…/10.230.14.39
Unit paas-monitor@10.service launched on 2dde0d31…/10.221.12.109
[/bash]
To see all deployed units, use the list-units command:
[bash]
fleetctl list-units

UNIT MACHINE ACTIVE SUB
paas-monitor@1.service 94d16ece…/10.90.9.78 active running
paas-monitor@2.service f7257c36…/10.237.157.117 active running
paas-monitor@3.service 7f1f2982…/10.230.14.39 active running
paas-monitor@4.service 94d16ece…/10.90.9.78 active running
paas-monitor@5.service f7257c36…/10.237.157.117 active running
paas-monitor@6.service 7f1f2982…/10.230.14.39 active running
paas-monitor@7.service 7f1f2982…/10.230.14.39 active running
paas-monitor@8.service 94d16ece…/10.90.9.78 active running
paas-monitor@9.service f7257c36…/10.237.157.117 active running
[/bash]

How does it work?

Whenever there is a change to the Consul service registry, the consul-http-router is notified, selects all http-tagged services and generates a new nginx.conf. After the configuration is generated, nginx reloads it so that there is little impact on the current traffic.
The consul-http-router uses the Go template language (via consul-template) to regenerate the config. The template looks like this:
[code]
events {
  worker_connections 1024;
}

http {
{{range $index, $service := services}}{{range $tag, $services := service $service.Name | byTag}}{{if eq "http" $tag}}
  upstream {{$service.Name}} {
    least_conn;
    {{range $services}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
    {{end}}
  }
{{end}}{{end}}{{end}}

{{range $index, $service := services}}{{range $tag, $services := service $service.Name | byTag}}{{if eq "http" $tag}}
  server {
    listen 80;
    server_name {{$service.Name}}.*;

    location / {
      proxy_pass http://{{$service.Name}};
      proxy_set_header X-Forwarded-Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
{{end}}{{end}}{{end}}

  server {
    listen 80 default_server;

    location / {
      root /www;
      index index.html index.htm Default.htm;
    }
  }
}
[/code]
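For reference, the docker ps output above showed the router starting /consul-template with a configuration file; that amounts to an invocation along these lines. The Consul address and the template and output paths are assumptions for illustration:
[bash]
# Re-render the template whenever the Consul registry changes, then reload nginx.
consul-template -consul 172.17.42.1:8500 \
  -template "/etc/consul-templates/nginx.conf.ctmpl:/etc/nginx/nginx.conf:nginx -s reload"
[/bash]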
The template loops through all the services, selects those tagged ‘http’, and creates a virtual host <servicename>.* which sends all requests to the registered upstream servers. Using the following two commands you can see the currently generated configuration file:
[bash]
AMACHINE=$(fleetctl list-machines -fields=machine -no-legend -full | head -1)
fleetctl ssh $AMACHINE docker exec consul-http-router cat /etc/nginx/nginx.conf

events {
  worker_connections 1024;
}

http {
  upstream paas-monitor {
    least_conn;
    server 10.221.12.109:49154 max_fails=3 fail_timeout=60 weight=1;
    server 10.221.12.109:49153 max_fails=3 fail_timeout=60 weight=1;
    server 10.221.12.109:49155 max_fails=3 fail_timeout=60 weight=1;
    server 10.230.14.39:49153 max_fails=3 fail_timeout=60 weight=1;
    server 10.230.14.39:49154 max_fails=3 fail_timeout=60 weight=1;
    server 10.230.14.83:49153 max_fails=3 fail_timeout=60 weight=1;
    server 10.230.14.83:49154 max_fails=3 fail_timeout=60 weight=1;
    server 10.230.14.83:49155 max_fails=3 fail_timeout=60 weight=1;
    server 10.237.157.117:49153 max_fails=3 fail_timeout=60 weight=1;
    server 10.237.157.117:49154 max_fails=3 fail_timeout=60 weight=1;
  }

  server {
    listen 80;
    server_name paas-monitor.*;

    location / {
      proxy_pass http://paas-monitor;
      proxy_set_header X-Forwarded-Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }

  server {
    listen 80 default_server;

    location / {
      root /www;
      index index.html index.htm Default.htm;
    }
  }
}
[/bash]
This also happens when you stop or kill an instance. Just stop an instance and watch your monitor respond.
[bash]
fleetctl destroy paas-monitor@10

Destroyed paas-monitor@10.service
[/bash]
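You can also see the effect in the router configuration itself; the destroyed instance disappears from the upstream block within seconds. A small check, reusing the $AMACHINE variable from above:
[bash]
# Only the remaining paas-monitor backends should still be listed.
fleetctl ssh $AMACHINE docker exec consul-http-router cat /etc/nginx/nginx.conf | grep 'server 10\.'
[/bash]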

Killing a machine

Now let’s be totally brave and stop an entire machine!
[bash]
ssh core@$FLEETCTL_TUNNEL sudo shutdown -h now

Connection to 54.203.141.124 closed by remote host.
[/bash]
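Since the machine we just shut down was also our fleetctl tunnel endpoint, point the tunnel at one of the surviving nodes before looking around; a quick sketch, using one of the other public IP addresses from the stack output above:
[bash]
# Re-point the tunnel and watch fleet reschedule the lost units onto the remaining machines.
export FLEETCTL_TUNNEL=54.185.125.143
fleetctl list-machines
fleetctl list-units
[/bash]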
Keep watching your paas-monitor. You will notice a slowdown and also notice that a number of backend services are no longer responding. After a short while (1 or 2 minutes) you will see new instances appear in the list.
paas-monitor after restart
What happened is that Amazon AWS launched a replacement instance into the cluster, and all units that were running on the stopped node were moved to the running instances, with only 6 HTTP errors!
Please note that CoreOS is not capable of automatically recovering from the loss of a majority of the servers at the same time. In that case, manual recovery by operations is required.

Conclusion

CoreOS provides all the basic functionality to manage Docker containers and provides High Availability for your applications with a minimum of fuss. Consul and consul-template make it very easy to use custom components like NGiNX to implement dynamic service discovery.

Outlook

In the next blog we will be deploying a multi-tier application that uses Consul DNS to connect application parts to databases!

References

This blog is based on information, ideas and source code snippets from https://coreos.com/docs/running-coreos/cloud-providers/ec2, https://cargonauts.io/mitchellh-auto-dc and https://github.com/justinclayton/coreos-and-consul-cluster-via-terraform.

Mark van Holsteijn
Mark van Holsteijn is a senior software systems architect at Xebia Cloud-native solutions. He is passionate about removing waste in the software delivery process and keeping things clear and simple.