
How to Create Serverless CI/CD Pipelines on Google Cloud

09 Dec, 2019

When you look at Google Cloud services like Cloud Source Repositories and Cloud Build, you might think it is very easy to create a CI/CD build pipeline. I can tell you: it is! In this blog I will show you how to create a serverless CI/CD pipeline for a Docker image, using just three Terraform resources.

Create a Serverless CI/CD Pipeline

To create a serverless CI/CD pipeline with Google Cloud Platform, you have to:

  1. Create a Google Source Repository.
  2. Define a trigger to start a Google Cloud Build on push.
  3. Add a Cloud Build definition to your Source Repository.
  4. Push your code.
There are really just four steps. That is all!
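
The Terraform snippets in the next sections assume a Google provider configuration and two input variables, project and email. These are not part of the snippets themselves; the names below are assumptions that match the TF_VAR_project and TF_VAR_email exports used later in this post.

# Assumed setup: provider and input variables referenced by the snippets below.
# The variable names match the TF_VAR_project and TF_VAR_email exports used later.
variable "project" {
  type = string
}

variable "email" {
  type = string
}

provider "google" {
  project = var.project
}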

Create a Google Source Code Repository

You create a Google source code repository with this Terraform snippet:

resource "google_sourcerepo_repository" "image" {
  name       = "paas-monitor"
  depends_on = [google_project_service.sourcerepo]
}
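
The depends_on refers to a google_project_service resource that enables the Source Repositories API. It is not shown in this post, but a minimal sketch of the API enablements (the resource names are assumptions matching the depends_on references in the snippets) could look like this:

# Assumed API enablement resources; the names match the depends_on
# references in the repository and trigger snippets.
resource "google_project_service" "sourcerepo" {
  service = "sourcerepo.googleapis.com"
}

resource "google_project_service" "cloudbuild" {
  service = "cloudbuild.googleapis.com"
}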

Define a Trigger

The next step is to create a trigger that starts a Cloud Build job when you push to the master branch.
This is how you do it:

resource "google_cloudbuild_trigger" "image" {
  project = google_sourcerepo_repository.image.project

  trigger_template {
    branch_name = "master"
    repo_name   = google_sourcerepo_repository.image.name
  }

  filename = "cloudbuild.yaml"
  depends_on = [
    google_project_service.cloudbuild
  ]
}

Create a Cloud Build Definition

You also need a Cloud Build job definition that builds your code. To do that, add a definition file called cloudbuild.yaml to your repository. This file defines the steps to take to perform the build. Each step is executed by a specific Docker image, run with the checked-out source mounted as its workspace. Our build has two steps: git fetch and make snapshot, as our build process uses a Makefile for building and releasing Docker images.

steps:
  - name: gcr.io/cloud-builders/git
    args: ["fetch", "--unshallow", "--tags"]
  - name: gcr.io/cloud-builders/docker
    entrypoint: make
    args:
      - REGISTRY_HOST=gcr.io
      - USERNAME=${PROJECT_ID}
      - snapshot

I was very happy to learn that the Docker builder has both git and make pre-installed.

Manage Access to the Source Repository

You can define an IAM policy to manage access to the source repository. Here’s another
Terraform snippet that can help with that:

resource "google_sourcerepo_repository_iam_policy" "image" {
  project     = google_sourcerepo_repository.image.project
  repository  = google_sourcerepo_repository.image.name
  policy_data = data.google_iam_policy.image.policy_data
}

data "google_iam_policy" "image" {
  binding {
    role    = "roles/source.reader"
    members = []
  }

  binding {
    role    = "roles/source.writer"
    members = ["user:${var.email}"]
  }

  binding {
    role = "roles/source.admin"
    members = [
      "serviceAccount:${data.google_project.current.number}@cloudbuild.gserviceaccount.com",
    ]
  }
}

There are three policy bindings defined here:
* Administrator rights for the Cloud Build service account.
* Write access for the user with the specified email.
* Read access for nobody, as the reader binding is left empty.
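
The administrator binding uses the project number from a google_project data source that is not shown above. A minimal sketch, assuming the provider’s default project, would be:

# Assumed data source behind data.google_project.current; without a project
# argument it reads the provider's default project.
data "google_project" "current" {}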

Seeing it all Work Together

To see it all in action, type the following commands:

git clone https://github.com/binxio/blog-serverless-ci-cd-of-docker-images-with-google-cloud-platform.git
cd blog-serverless-ci-cd-of-docker-images-with-google-cloud-platform

and deploy it:

export TF_VAR_email=$(gcloud config get-value account)
export TF_VAR_project=$(gcloud config get-value project)
terraform init
terraform apply -auto-approve

Running terraform apply will create:
* A source code repository named paas-monitor with write permission for your email account.
* A build trigger on the code repository.
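
If you prefer not to look up the repository URL with gcloud later on, you could also expose it as a Terraform output. This is an optional addition, not part of the original template:

# Optional: output the repository URL so you can skip the
# 'gcloud source repos describe' lookup further down.
output "repository_url" {
  value = google_sourcerepo_repository.image.url
}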

Installing the Git Remote Helper

Before you can push to a Google Source Repository, you need to configure your local Git installation to use
your gcloud credentials for authentication.

git config --global \
   credential.'https://source.developers.google.com'.helper \
   gcloud.sh

Cloning the Source Repository

To see the pipeline in action, clone my paas-monitor repository.

git clone https://github.com/mvanholsteijn/paas-monitor.git
cd paas-monitor

Pushing to Google Source Repository

With the credentials in place and the source checked out, you can now push to the Google Source Repository:

git remote add \
    gcp $(gcloud source repos \
          describe paas-monitor --format 'value(url)')
git push gcp --tags
git push gcp

You just started the build process with a git push to the master branch. To view the build, type gcloud builds list:

gcloud builds list
ID                                    CREATE_TIME                DURATION  SOURCE               IMAGES  STATUS
d3a313b8-ec30-442d-af0f-b9d5a10f788a  2019-12-07T17:54:24+00:00  53S       paas-monitor@master   -       SUCCESS

To view the logs for this build, type this (substituting the build id with yours):

gcloud builds log d3a313b8-ec30-442d-af0f-b9d5a10f788a

---------------------------------------------------- REMOTE BUILD OUTPUT ----------------------------------------------------
starting build "d3a313b8-ec30-442d-af0f-b9d5a10f788a"

FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/speeltuin-mvanholsteijn/r/paas-monitor
 * branch            23f349221c561dced520606ea0a144c4f04dab95 -> FETCH_HEAD
HEAD is now at 23f3492 added cloudbuild.yaml
...

Alternatively you can go to the Cloud Build console.

Re-Use

As the pipeline can be created with as few as two resources, I recommend adding them straight into your Terraform template.

Conclusion

With Terraform it is very easy to create a completely serverless CI/CD pipeline on Google Cloud Platform. By changing the Cloud Build specification, you can use this setup to build or deploy anything you want!
Check out the [entire source code for this blog](https://github.com/binxio/blog-serverless-ci-cd-with-google-cloud-platform/blob/master/pipeline.tf).

Mark van Holsteijn is a senior software systems architect at Xebia Cloud-native solutions. He is passionate about removing waste in the software delivery process and keeping things clear and simple.
