
Automated deployment of Docker Universal Control Plane with Terraform and Ansible

19 Jan, 2016

You got into the Docker Universal Control Plane beta and you are ready to get going, and then you see a list of manual commands to set it up. As you don’t want to do anything manually, this guide will help you set up DUCP in a few minutes by configuring just a couple of variables. If you don’t know what DUCP is, you can read the post I wrote earlier. The setup consists of one controller and a configurable number of replicas that automatically join the controller to form a cluster. There are a few requirements we need to address to make this work, like setting the external (public) IP while running the installer and passing the controller’s certificate fingerprint to the replicas during setup. We will use Terraform to spin up the instances, and Ansible to provision them and let them connect to each other.
Prerequisites
Updated 2016-01-25: v0.7 has been released, and a Docker Hub account is no longer needed because the images have been moved to a public namespace. This is reflected in the ‘master’ branch of the GitHub repository. If you still want to try v0.6, you can check out the ‘v0.6’ tag!
Before you start cloning a repository and executing commands, let’s go over the prerequisites. You will need:

  • Access to the DUCP beta (during installation you will need to log in with a Docker Hub account that has been added to the ‘dockerorca’ organization; tested with v0.5, v0.6 and v0.7. Read the update notice above for more information.)
  • An active Amazon Web Services and/or Google Cloud Platform account to create resources
  • Terraform (tested with v0.6.8 and v0.6.9)
  • Ansible (tested with 1.9.4 and 2.0.0.1)

Step 1: Clone the repository
CiscoCloud’s terraform.py is used as an Ansible dynamic inventory to find our Terraform-provisioned instances, so --recursive is needed to also fetch the Git submodule.
[code]git clone --recursive https://github.com/nautsio/ducp-terraform-ansible
cd ducp-terraform-ansible[/code]
Step 2.1: AWS specific instructions
These are the AWS specific instructions; if you want to use Google Cloud Platform, skip to step 2.2.
For the AWS based setup, you will create an aws_security_group with rules in place for HTTPS (443) and SSH (22). With Terraform you can easily specify what we need by adding ingress and egress configurations to the security group. By specifying self = true, the rule is applied to all resources inside the security group that is about to be created.

In the single aws_instance for the ducp_controller we use the lookup function to pick the right AMI from the list specified in vars.tf. Inside each aws_instance we can reference the created security group with "${aws_security_group.ducp.name}". This is really easy and it keeps the file generic. To configure the number of instances for ducp-replica we use the count parameter. To identify each instance to our Ansible setup, we specify a name through the tags parameter. Because we use the count parameter, we can generate a name from a predefined string (ducp-replica) and the index of the count. You can achieve this with the concat function like so: "${concat("ducp-replica",count.index)}". The sshUser tag is used by Ansible to connect to the instances. The AMIs are configured inside vars.tf, and by specifying a region the correct AMI will be selected from the list. A condensed sketch of these resources follows the lookup example below.

variable "amis" {
    default = {
        ap-northeast-1 = "ami-46c4f128"
        ap-southeast-1 = "ami-e378bb80"
        ap-southeast-2 = "ami-67b8e304"
        eu-central-1   = "ami-46afb32a"
        eu-west-1      = "ami-2213b151"
        sa-east-1      = "ami-e0be398c"
        us-east-1      = "ami-4a085f20"
        us-west-1      = "ami-fdf09b9d"
        us-west-2      = "ami-244d5345"
    }
}

The list of AMIs

    ami = "${lookup(var.amis, var.region)}"

The lookup function
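
To make this more concrete, here is a condensed sketch of what these resources could look like. The resource names, the instance type and the sshUser value are assumptions for illustration; the actual definitions live in the repository’s Terraform files.

# Hypothetical sketch; see the repository for the real definitions.
resource "aws_security_group" "ducp" {
    name = "ducp"

    # HTTPS for the DUCP web interface
    ingress {
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    # SSH for Ansible
    ingress {
        from_port   = 22
        to_port     = 22
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    # allow all traffic between members of this security group
    ingress {
        from_port = 0
        to_port   = 0
        protocol  = "-1"
        self      = true
    }
}

resource "aws_instance" "ducp-replica" {
    count           = "${var.replica_count}"
    ami             = "${lookup(var.amis, var.region)}"
    instance_type   = "t2.small"                          # assumed instance type
    key_name        = "${var.key_name}"
    security_groups = ["${aws_security_group.ducp.name}"]

    tags {
        Name    = "${concat("ducp-replica", count.index)}"
        sshUser = "ubuntu"                                 # assumed default user of the AMI
    }
}

A sketch of the security group and replica instances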

Let’s configure the variables that Terraform will use to create the instances. Inside the repository you will find a terraform.tfvars.example file. Copy or move this file to terraform.tfvars so that Terraform will pick it up during a plan or apply.
[code]cd aws
cp terraform.tfvars.example terraform.tfvars[/code]
Open terraform.tfvars with your favorite text editor so you can set up all variables to get the instances up and running.

  • region can be any of the available AWS regions (it must have an entry in the amis map)
  • access_key and secret_key can be obtained through the console
  • key_name is the name of the key pair to use for the instances
  • replica_count defines the number of replicas you want

The file could look like the following:

region = "eu-central-1"
access_key = "string_obtained_from_console"
secret_key = "string_obtained_from_console"
key_name = "my_secret_key"
replica_count = "2"

Now execute terraform apply to create the instances. Your command should finish with:

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Step 2.2: Google Cloud Platform specific instructions

In GCP, it’s a bit easier to set everything up. Because there are no images/AMIs per region, we can use a disk image with a static name. And because the google_compute_instance resource has a name attribute, you can use the same count trick as on AWS, but this time without the metadata tags. By tagging the nodes with https-server, port 443 is automatically opened in the firewall. Because you can specify which user should be created with your chosen key, setting ssh_user is needed so Ansible can connect later on.
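
A condensed sketch of such an instance definition might look like this. The machine type, the disk image name and the way the public key is passed (including the var.public_key_path variable) are assumptions; check the repository for the actual configuration.

# Hypothetical sketch; see the repository for the real definition.
resource "google_compute_instance" "ducp-replica" {
    count        = "${var.replica_count}"
    name         = "${concat("ducp-replica", count.index)}"
    machine_type = "n1-standard-1"                  # assumed machine type
    zone         = "${var.zone}"

    # the https-server tag opens port 443 in the default firewall
    tags = ["https-server"]

    disk {
        image = "ubuntu-1404-trusty-v20160114"      # assumed static image name
    }

    network_interface {
        network = "default"
        access_config {
            # empty block requests an ephemeral external IP
        }
    }

    metadata {
        # one possible way to create ssh_user with your public key (assumption)
        sshKeys = "${var.ssh_user}:${file(var.public_key_path)}"
    }
}

A sketch of the replica instances on GCP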

Let’s set up our Google Cloud Platform variables.

[code]cd gce
cp terraform.tfvars.example terraform.tfvars[/code]
Open terraform.tfvars with your favorite text editor so you can set up all variables to get the instances up and running.

The file could look like the following:

project = "my_gce_project"
credentials = "/path/to/file.json"
region = "europe-west1"
zone = "europe-west1-b"
ssh_user = "myawesomeusername"
replica_count = "2"

Now execute terraform apply to create the instances. Your command should finish with:

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Step 3: Running Ansible

The instances should all be there, so now let’s install the controller and add the replicas. This setup uses terraform.py to retrieve the created instances (and their IP addresses) from the terraform.tfstate file. To make this work you need to specify the location of the tfstate file by setting TERRAFORM_STATE_ROOT to the current directory. Then you pass the script as the inventory (-i), together with site.yml, which assigns the roles to the hosts.
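
For orientation, a minimal site.yml along these lines might look as follows. The role names for the controller and replica plays are assumptions; the shared roles common and extip are described below.

---
# Hypothetical sketch of site.yml; the real playbook is in the repository.
- hosts: all
  become: yes
  roles:
    - common
    - extip

- hosts: ducp-controller
  become: yes
  roles:
    - ducp-controller

- hosts: ducp-replica*
  become: yes
  roles:
    - ducp-replica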

There are two roles that will be applied to all hosts, called common and extip. Inside common everything is set up to get Docker running on the hosts: it configures the apt repository, installs the docker-engine package and finally installs the docker-py package, which Ansible needs to talk to Docker.

Inside extip there are two shell commands to look up the external IP addresses. If you want to access DUCP on its external IP, that IP has to be present in the certificate that DUCP generates. Because the external IP addresses are not visible on the GCP instances themselves, and I wanted a generic approach where a single command provisions both AWS and GCP, I chose to look them up and register the variable extip with whichever lookup succeeded on the instance. A second reason to use the external IP is that all the replicas need the external IP of the controller to register themselves. By passing the --url parameter to the join command, you specify which controller the replica should register with.

--url https://"{{ hostvars['ducp-controller']['extip'] }}"

The extip variable used by replica
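
To illustrate the idea behind the extip role, a rough sketch of the lookup could be the following. The exact commands are assumptions; the EC2 metadata endpoint only answers on AWS, and the external lookup service is a stand-in for whatever the role really uses.

# Hypothetical sketch of the extip lookups; see the role in the repository.
- name: look up external IP via EC2 metadata (AWS only)
  shell: curl -sf --max-time 5 http://169.254.169.254/latest/meta-data/public-ipv4
  register: extip_metadata
  failed_when: false

- name: look up external IP via an external service
  shell: curl -sf --max-time 5 https://icanhazip.com
  register: extip_service
  failed_when: false

- name: register extip with whichever lookup succeeded
  set_fact:
    extip: "{{ extip_metadata.stdout if extip_metadata.stdout != '' else extip_service.stdout }}"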

The same goes for the certificate fingerprint: the replica must provide the fingerprint of the controller’s certificate to register successfully. We can access that value the same way: "{{ hostvars['ducp-controller']['ducp_controller_fingerprint'].stdout }}". It specifies .stdout to only use the standard output of the registered command, because the registered variable also contains the exit code and other fields.
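
Putting both values together, the replica’s join task might look roughly like this. The image name and every flag except --url are assumptions; only the hostvars expressions come from the roles described above.

# Hypothetical sketch of the replica join; see the replica role for the real task.
- name: join replica to the DUCP controller
  shell: >
    docker run --rm
    -v /var/run/docker.sock:/var/run/docker.sock
    docker/ucp join
    --url https://"{{ hostvars['ducp-controller']['extip'] }}"
    --fingerprint "{{ hostvars['ducp-controller']['ducp_controller_fingerprint'].stdout }}"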

To supply external variables, you can inject vars.yml through --extra-vars. Let’s set up vars.yml by copying the example file and configuring it.

[code]cd ../ansible
cp vars.yml.example vars.yml[/code]
As stated before, the installer will log in to Docker Hub and download images that live under the dockerorca organization (with v0.7 this is no longer required; see the update notice above). Your account needs to be added to this organization for the installation to succeed. Fill in your Docker Hub account details in vars.yml, and choose an admin username and admin password for your installation. If you use ssh-agent to store your SSH private keys, you can proceed with the ansible-playbook command; otherwise specify your private key file by adding --private-key <priv_key_location> to the list of arguments.
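
The variable names below are an assumption based on this description; check vars.yml.example for the actual names used by the playbook.

# Hypothetical vars.yml; field names may differ from vars.yml.example.
docker_hub_username: your-hub-username
docker_hub_password: your-hub-password
ducp_admin_user: admin
ducp_admin_password: choose-a-strong-password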
Let’s run Ansible to set up DUCP. You need to change to the directory where the terraform.tfstate file resides, or change TERRAFORM_STATE_ROOT accordingly.
[code]cd ../{gce,aws}
TERRAFORM_STATE_ROOT=. ansible-playbook -i ../terraform.py/terraform.py \
../ansible/site.yml \
--extra-vars "@../ansible/vars.yml"[/code]
If all went well, you should see something like:

PLAY RECAP *********************************************************************
ducp-controller            : ok=13   changed=9    unreachable=0    failed=0
ducp-replica0              : ok=12   changed=8    unreachable=0    failed=0
ducp-replica1              : ok=12   changed=8    unreachable=0    failed=0

To check out our brand new DUCP installation, run the following command to extract the IP addresses from the created instances:
[code]TERRAFORM_STATE_ROOT=. ../terraform.py/terraform.py --hostfile[/code]
Copy the IP address listed in front of ducp-controller, open a web browser and navigate to https://<ip>. You can now log in with your chosen username and password.
[Screenshot: the DUCP login screen]
Let me emphasise that this is not a production-ready setup, but it can definitely help if you want to try out DUCP and perhaps build a production-ready version from it. If you want support for other platforms, please file an issue on GitHub or submit a pull request. I’ll be more than happy to look into it for you.
