Kubernetes and on-demand CI builders

24 Jan, 2018

Let’s say you’ve got a CI/CD pipeline and you would like to run builds in containers. You could just configure Docker on one of your machines and point your builds there, but why not use something a bit more scalable? Enter Kubernetes, a leading container orchestration platform, which luckily offers several options for using its self-contained pods as on-demand CI builders. Things like sharing sources between containers and networking will be handled for you, so all you’ll have to worry about is specifying the desired image.

In this blog, I'll explore two of those options in two commonly used CI tools, Gitlab and Jenkins, and explain how to configure the Gitlab-runner and the Jenkins Kubernetes plugin to run on-demand CI builders on a Kubernetes (or "K8s") cluster.

What do I need?

This guide assumes you already have a running Kubernetes cluster and either Gitlab or Jenkins, running within the cluster or externally. If you don't, and you're able to use Helm (basically a K8s package manager), there are predefined deployments for both Gitlab and Jenkins in the form of Helm charts, which also configure the runner or plugin for you. If you want to experiment with K8s locally, you could use Minikube or the Edge channel of the Docker client (only for Mac at the time of writing).
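If you go the Helm route, the installs are one-liners. A sketch using Helm 2 syntax (current at the time of writing); the chart names below come from the official "stable" repo and may have changed since:

# Assumes Helm 2 and the official "stable" chart repo
helm install --name gitlab stable/gitlab-omnibus
helm install --name jenkins stable/jenkins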

The examples below can also be cloned from my git repo here.

Gitlab

The Gitlab-runner supports Kubernetes out-of-the-box. It acts as an intermediary between the Gitlab service itself (containing the webserver, git repos, etc.) and the Kubernetes API, which handles all requests for creating pods, services, and basically anything else you can do on a K8s cluster. For example, when you commit to your repo in Gitlab and you've specified image: node:8.9.4-alpine in your .gitlab-ci.yml, Gitlab sends the specified image to the runner. The runner then requests a pod from the K8s API containing the node container alongside a "helper" container which handles git cloning and artifacts. Any script in your pipeline will then be executed in the node container, and your sources will automatically be available in a volume in said container.
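As an illustration, a minimal .gitlab-ci.yml along those lines could look like this (the image tag is the one from the example above; the job name and script are arbitrary):

image: node:8.9.4-alpine

test:
  script:
    - node --version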

To use the runner on your cluster, you’ll have to configure a couple of things, starting with a service-account and role-based access for the runner pod. This authorizes Gitlab-runner to create pods on the cluster for each CI job and is done by applying a configuration file in YAML with kubectl. Note that this assumes you’re OK with giving the runner permission to do anything in the specified namespace, but you can tweak permissions as needed:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-sa
  namespace: default # change to the namespace you want to use for gitlab-runner pods
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: gitlab-role
  namespace: default
# Change the following rules if you want to restrict permissions
rules:
- apiGroups:
  - ""
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: gitlab-rb
  namespace: default
subjects:
  - kind: ServiceAccount
    name: gitlab-sa
    namespace: default
roleRef:
  kind: Role
  name: gitlab-role
  apiGroup: rbac.authorization.k8s.io

Save this in a YAML file, modifying the namespace or other values if needed, then apply it with kubectl apply -f file.yml. Next, let's configure the runner. We'll need a runner registration token for that, so go to Gitlab's /admin/runners page to get the token. Then create a configmap containing the runner configuration, which will be mounted as files within the Gitlab-runner pod:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-cm
  namespace: default # Change to your gitlab namespace
data:
  entrypoint: |
    #!/bin/bash
    set -xe
    cp /scripts/config.toml /etc/gitlab-runner/
    # Register the runner; insert the runner registration token below.
    # --clone-url is optional: use it if Gitlab runs on the same cluster.
    /entrypoint register \
      --non-interactive \
      --registration-token CHANGEME \
      --url https://your.gitlab.url \
      --clone-url https://your.local.gitlab.url \
      --executor "kubernetes" \
      --name "Kubernetes Runner" \
      --config "/etc/gitlab-runner/config.toml"
    # Start the runner
    /entrypoint run --user=gitlab-runner \
      --working-directory=/home/gitlab-runner \
      --config "/etc/gitlab-runner/config.toml"
  config.toml: |
    concurrent = 10 # This sets the maximum number of concurrent CI pods
    check_interval = 10

Note that the --clone-url option can be used in case Gitlab is running on the same cluster as your runner. Simply specifying the local hostname of Gitlab at --url will still cause the runner to do git clones via the external URL, so use the clone-url option to speed up cloning to the runner.
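For reference, after registration the runner's config.toml will roughly contain a section like the following (abbreviated; the values correspond to the registration command above):

[[runners]]
  name = "Kubernetes Runner"
  url = "https://your.gitlab.url"
  clone_url = "https://your.local.gitlab.url"
  executor = "kubernetes"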

Apply using kubectl apply again. Finally we’ll create the Gitlab-runner deployment, which runs the actual container within the cluster:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab-runner
  namespace: default # Change to the same namespace as the service account above
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: gitlab-runner
    spec:
      serviceAccountName: gitlab-sa
      containers:
        - name: gitlab-runner
          image: gitlab/gitlab-runner:alpine-v10.3.0
          command: ["/bin/bash", "/scripts/entrypoint"]
          env:
            - name: KUBERNETES_NAMESPACE
              value: default # Change me to the namespace you want to use
            - name: KUBERNETES_SERVICE_ACCOUNT
              value: gitlab-sa
          # This references the previously specified configmap and mounts it as a file
          volumeMounts:
            - mountPath: /scripts
              name: configmap
          livenessProbe:
            exec:
              command: ["/usr/bin/pgrep","gitlab.*runner"]
            initialDelaySeconds: 60
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            exec:
              command: ["/usr/bin/pgrep","gitlab.*runner"]
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
      restartPolicy: Always
      volumes:
      - configMap:
          name: gitlab-runner-cm
        name: configmap

After applying this deployment, Kubernetes will deploy the gitlab-runner pod, which should automatically register itself to your Gitlab instance. Verify this by running kubectl -n your_namespace get pods to see if the pod is running and kubectl logs -l name=gitlab-runner to check the logs of the runner.

That’s it! Now go run something. If you don’t have a pipeline yet and want to test this setup, create a new project in Gitlab from a template (such as the “NodeJS Express” template) and modify any file to trigger the predefined pipeline.

Jenkins

Interaction between Jenkins and Kubernetes is handled by (you guessed it) a plugin. This makes configuration slightly easier than with Gitlab, but the downside is, of course, yet another plugin. When you specify a podTemplate with a desired CI image in a Jenkinsfile, the plugin communicates with the Kubernetes API and requests a pod containing your container alongside a "jnlp-slave" container. This JNLP container acts as a build slave and handles things such as communication back to Jenkins, git checkouts, and workspace volumes. You'll find an example further below.

First of all, install the plugin. You'll also need to make sure Jenkins has the JNLP agent listener enabled on port 50000, found on Jenkins' /configureSecurity page under "Agents". If Jenkins is running on the cluster, you'll also need to authorize it with the following service account and role-based access configuration:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-sa
  namespace: default # change to the namespace you want to use for jenkins pods
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins-role
  namespace: default
# Change the following rules if you want to restrict permissions
rules:
- apiGroups:
  - ""
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins-rb
  namespace: default
subjects:
  - kind: ServiceAccount
    name: jenkins-sa
    namespace: default
roleRef:
  kind: Role
  name: jenkins-role
  apiGroup: rbac.authorization.k8s.io

Create a YAML file with these contents, modifying any relevant values, and apply it with kubectl apply -f file.yml. Then we'll configure the plugin itself under "Manage Jenkins" > "Configure System". If Jenkins is running on the cluster, the only settings you'll need are displayed in the image below. Otherwise, you'll also need to set up certificate authentication between Jenkins and Kubernetes.
[Image: Jenkins Kubernetes plugin configuration]
To avoid entering these settings manually again whenever your Jenkins container is recreated, the configuration can be expressed in Groovy. The following can either be saved as a Groovy file in your Jenkins image's init.groovy.d directory or posted via the script console API at your_jenkins.url/scriptText with script=your_script as the body. That script being:

import org.csanchez.jenkins.plugins.kubernetes.*
import jenkins.model.*
def instance = Jenkins.getInstance()
def cloudName = 'kubernetes'
// Either create a new config if it doesn't exist or use the existing one
KubernetesCloud kubernetes = instance.getCloud(cloudName) ?: new KubernetesCloud(cloudName)
kubernetes.setServerUrl("https://kubernetes.default.svc.cluster.local")
kubernetes.setNamespace("default")
// Change to your local jenkins URL, note that the format starts with service-name.namespace
kubernetes.setJenkinsUrl("https://jenkins.default.svc.cluster.local")
if (!instance.getCloud(cloudName)) {
  instance.clouds.add(kubernetes)
}
instance.save()
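For example, assuming you've saved the script above as kubernetes-cloud.groovy and have an admin API token (the credentials below are placeholders), posting it to the script console API would look like this:

curl -u admin:your_api_token \
  --data-urlencode "script=$(cat kubernetes-cloud.groovy)" \
  https://your_jenkins.url/scriptText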

After this, you should be ready to run your first pipeline in K8s pods. Here’s an example to test this setup:

podTemplate(name: "test-build", label: "test-build", containers: [
  containerTemplate(name: "node",
                    image: "node",
                    ttyEnabled: true,
                    command: 'cat')])
{
  node("test-build") {
    stage("test") {
      sh "echo test"
    }
    stage("test 2") {
      container("node") {
        sh "node --version"
      }
    }
  }
}

The podTemplate contains the configuration for containers you want to use for CI. Anything specified outside of a container block is executed within the JNLP container, which is where git and any Jenkins plugins are available. Anything within the container block is executed in whichever container name you specify. See the plugin readme for more options.

Which is “better”?

It depends. Gitlab has tighter integration with Kubernetes, also offering options besides CI such as automated production deployments and canary releases. Its pipeline syntax is also far more readable than that of Jenkinsfiles, though the latter allow for more complex scripting. On the other hand, Gitlab's CI works best if your source code is also versioned there, which may not be an option for many of you, making Jenkins the more flexible choice. So basically, pick whichever CI tool you prefer based on its features, as the act of running CI in K8s pods is pretty similar between the two.

What else is there?

Take a look at Brigade by Microsoft, which is an event-driven JavaScript framework for Kubernetes. Combined with “Kashti” dashboards, it makes for an interesting CI/CD option. I plan on looking into it in the coming weeks, so you can expect to see a similar post based on Brigade soon.

I am a specialist at Qxperts. We empower companies to deliver reliable & high-quality software. Any questions? We are here to help! www.qxperts.io

Tariq Ettaji
Software Delivery Consultant