
How to run a container image on Google Container Optimized OS

09 Dec, 2022

Sometimes Google Compute Engine just has everything you need: compute, storage, load balancing, autoscaling. In this blog you will see how to run a container image cloud-native style, directly on a Google Container Optimized OS instance, using cloud-init and Terraform.

You can run a container image on Google Container Optimized OS in just three steps:

  1. define a systemd service unit
  2. define a cloud-init configuration
  3. define a Google Compute Engine managed instance group

If you want to skip the yibber-yabber, check out the source code.

define a systemd service unit

Systemd is a wonderful system and service manager that runs on most Linux operating systems, including Container Optimized OS. A service unit defines a process that you want to run on the system. Systemd takes care that the service is started and kept running during its lifetime.

The systemd service unit configuration looks as follows:

[Unit]
Description=The paas-monitor
BindsTo=firewall-config.service
After=firewall-config.service

[Service]
Type=simple

User=paas-monitor
Group=paas-monitor

ExecStartPre=/usr/bin/docker-credential-gcr configure-docker
ExecStop=/usr/bin/docker stop paas-monitor

ExecStart=/usr/bin/docker run \
    --rm \
    --name paas-monitor \
    --publish 80:80 \
    gcr.io/binx-io-public/paas-monitor:0.4.3 \
    --port 80

Restart=always
SuccessExitStatus=0 SIGTERM

[Install]
WantedBy=multi-user.target

In the configuration file, you can see that the process starts under the user paas-monitor. This ensures that the docker credential helper can update the ~/.docker/config.json file. You cannot run the docker configuration as the root user on Container Optimized OS, because the root home directory is mounted read-only.
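
As an aside, you can see what the ExecStartPre line produces by running the credential helper the way the unit does. This is only a sketch; it assumes cloud-init created the user with a home directory at /home/paas-monitor:

# run the credential helper as the paas-monitor user, just like ExecStartPre does
sudo -u paas-monitor -H /usr/bin/docker-credential-gcr configure-docker

# the helper registers itself for the gcr.io registries in the user's Docker configuration
sudo cat /home/paas-monitor/.docker/config.json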

The paas-monitor is an application that allows you to observe application platforms while they are doing their thing.

Note that the [Unit] section binds the service to the firewall-config.service shown below:

[Unit]
Description=Configures the host firewall

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
ExecStop=/sbin/iptables -D INPUT -p tcp --dport 80 -j ACCEPT

This service opens port 80 in the host firewall and is started before the paas-monitor service starts. When it is stopped, the port is closed again.

To run the application, copy the two service unit files to the directory /etc/systemd/system, run systemctl daemon-reload and then systemctl start paas-monitor.service.
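
On a Container Optimized OS test instance, that boils down to something like the following sketch, assuming you saved the two files as paas-monitor.service and firewall-config.service in the current directory:

sudo cp paas-monitor.service firewall-config.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start paas-monitor.service

# check that both units came up, the container is running and port 80 is open
sudo systemctl status firewall-config.service paas-monitor.service
sudo docker ps --filter name=paas-monitor
sudo iptables -n -L INPUT | grep "dpt:80"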

define a cloud-init configuration

To configure a systemd service unit on a virtual machine, you have at least two options: either you create a new virtual machine image, or you create a cloud-init configuration.

Even though building a new virtual machine image is the ultimate in "immutable infrastructure", it is also very slow. As a good halfway house you can define a cloud-init configuration file. Cloud-init allows you to complement the operating system configuration at boot time. It is fast, while providing excellent reproducibility of the virtual machine configuration.

The cloud-init configuration looks as follows:

#cloud-config  
users:  
  - name: paas-monitor  
    groups: docker  

runcmd:  
  - systemctl daemon-reload  
  - systemctl start paas-monitor.service

write_files:  
  - path: /etc/systemd/system/paas-monitor.service  
    permissions: '0644'  
    owner: root  
    content: |  
        [Unit]
        Description=The paas-monitor
        BindsTo=firewall-config.service
        After=firewall-config.service

        [Service]
        Type=simple

        User=paas-monitor
        Group=paas-monitor

        ExecStartPre=/usr/bin/docker-credential-gcr configure-docker
        ExecStop=/usr/bin/docker stop paas-monitor

        ExecStart=/usr/bin/docker run \
            --rm \
            --name paas-monitor \
            --publish 80:80 \
            gcr.io/binx-io-public/paas-monitor:0.4.3 \
            --port 80

        Restart=always
        SuccessExitStatus=0 SIGTERM

        [Install]
        WantedBy=multi-user.target

  - path: /etc/systemd/system/firewall-config.service
    permissions: '0644'
    owner: root
    content: |
      [Unit]
      Description=Configures the host firewall

      [Service]
      Type=oneshot
      RemainAfterExit=true
      ExecStart=/sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
      ExecStop=/sbin/iptables -D INPUT -p tcp --dport 80 -j ACCEPT

The first thing you will notice is that a user paas-monitor is created and added to the docker group. The runcmd instructs systemd to load the configuration and start the service. The write_files section writes the two service unit files defined earlier.
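
Once an instance boots with this user-data, you can check that the configuration actually arrived and was applied. The commands below are a sketch of such a check, run from within the virtual machine:

# the user-data as it was passed via the instance metadata
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/user-data"

# cloud-init reports whether the configuration was applied, and systemd
# shows whether the service came up
sudo cloud-init status --long
sudo systemctl status paas-monitor.service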

define a Google Compute Engine managed instance group

To define a Google Compute Engine managed instance group that runs your service, you first create an instance template which passes your cloud-init configuration as user-data metadata to the virtual machine, as shown in the metadata block below.

resource "google_compute_instance_template" "paas-monitor" {  
  name_prefix = "paas-monitor"  
  description = "showing what happens"

  instance_description = "paas-monitor"  
  machine_type         = "e2-micro"  

  metadata = {  
    "user-data" = file("user-data.yaml")  
  }  

  disk {  
    source_image = "cos-cloud/cos-stable"  
    auto_delete  = true  
    boot         = true  
  }

  network_interface {  
    network = "default"  

    access_config {}  
  }  

  scheduling {  
    automatic_restart   = false  
    preemptible         = true  
    on_host_maintenance = "TERMINATE"  
  }  

  lifecycle {  
    create_before_destroy = true  
  }  
}

Now that the virtual machine is configured with your cloud-init configuration, you can use this template in a managed instance group definition:

resource "google_compute_region_instance_group_manager" "paas-monitor" {  
  name = "paas-monitor"  

  base_instance_name = "paas-monitor"  

  target_size = 1  

  version {  
    instance_template = google_compute_instance_template.paas-monitor.id  
  }  

  update_policy {
    type                           = "PROACTIVE"
    minimal_action                 = "RESTART"
    most_disruptive_allowed_action = "REPLACE"
    max_surge_fixed                = local.number_of_zones + 1
    max_unavailable_fixed          = local.number_of_zones
  }

  named_port {
    name = "paas-monitor"
    port = 80
  }

  auto_healing_policies {
    health_check      = google_compute_health_check.paas-monitor.id
    initial_delay_sec = 30
  }
}

resource "google_compute_health_check" "paas-monitor" {
  name        = "paas-monitor"
  description = "paas-monitor health check"

  timeout_sec         = 1
  check_interval_sec  = 1
  healthy_threshold   = 4
  unhealthy_threshold = 5

  http_health_check {
    port_name    = "paas-monitor"
    request_path = "/health"
    response     = "ok"
  }
}

data "google_compute_zones" "available" {
}

locals {
   number_of_zones = length(data.google_compute_zones.available.names)
}

As you can see, the managed instance group refers to the instance template in its version block. The nice thing about using a managed instance group is that it ensures that the desired number of virtual machines is running, and the health check allows it to replace instances that stop responding. The update policy takes care of rolling updates if the need ever arises, for instance when you update the boot image of the virtual machines. Minor updates, like a change to the cloud-init configuration, are applied by just restarting the virtual machines.
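
To see it all in action, apply the Terraform configuration and inspect the resulting group. The following is a sketch; the region, project and IP address are placeholders for your own values:

terraform init && terraform apply

# list the instances created by the managed instance group
gcloud compute instance-groups managed list-instances paas-monitor \
    --region <your-region> --project <your-project>

# each instance answers on port 80; the health check above expects the
# literal response "ok" on /health
curl http://<instance-external-ip>/health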

conclusion

By using the basic services of Google Compute Engine, Google Container Optimized OS, cloud-init and Terraform, it is very easy to deploy applications cloud-native style, using standard tools, without introducing any more complexity than strictly necessary.

Note that the managed instance group can be used as the target for a backend service of a Google load balancer. You can even slap on an autoscaler for good measure. Check out this blog for details.
