This post will describe how you can deploy Apache Airflow using the Kubernetes executor on Azure Kubernetes Service (AKS). It will also go into detail about registering a proper domain name for Airflow running on HTTPS. To get the most out of this post, basic knowledge of Helm, kubectl and Docker is advised, as the commands won’t be explained in detail here. In short: Docker is currently the most popular container platform and allows you to isolate and pack self-contained environments. Kubernetes (accessible via the command line tool `kubectl`) is a powerful and comprehensive platform for orchestrating Docker containers. Helm is a layer on top of `kubectl` and is an application manager for Kubernetes, making it easy to share and install complex applications on Kubernetes.
Getting started
To get started and follow along:
- Clone the Airflow docker image repository
- Clone the Airflow helm chart
Make a copy of `./airflow-helm/airflow.yaml` to `./airflow-helm/airflow-local.yaml`. We’ll be modifying this file throughout this guide.
Kubernetes Executor on Azure Kubernetes Service (AKS)
The Kubernetes executor for Airflow runs every single task in a separate pod. It does so by starting a new run of the task using the `airflow run` command in a new pod. The executor also makes sure the new pod will receive a connection to the database and the location of DAGs and logs.
AKS is a managed Kubernetes service running on the Microsoft Azure cloud. It is assumed the reader has already deployed such a cluster – this is in fact quite easy using the quick start guide. Note: this guide was written using AKS version `1.12.7`.
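If you want to confirm which Kubernetes version your own cluster is running, a quick check with the Azure CLI could look like this (the cluster and resource group names follow the examples used later in this post):

```bash
# Show the Kubernetes version of the AKS cluster
az aks show --name aks-airflow --resource-group MyResourceGroup \
  --query kubernetesVersion --output tsv
```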
The fernet key in Airflow is designed to communicate secret values from the database to the executor. If the executor does not have access to the fernet key, it cannot decode connections. To make sure this is possible, set the following value in `airflow-local.yaml`:
```yaml
airflow:
  fernetKey:
```
Use bcb.github.io/airflow/fernet-key to generate a fernet key.
This setting will make sure the fernet key gets propagated to the executor pods. This is done by the Kubernetes Executor in Airflow automagically.
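If you prefer to generate the key locally instead of using the website above, a minimal sketch using the `cryptography` package (which Airflow itself depends on):

```bash
# Generate a Fernet key and print it; paste the output into airflow.fernetKey
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
```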
Deploying with Helm
In addition to the airflow-helm repository, make sure your `kubectl` is configured to use the correct AKS cluster (if you have more than one). Assuming you have a Kubernetes cluster called `aks-airflow`, you can use the Azure CLI or `kubectl`:
```bash
az aks get-credentials --name aks-airflow --resource-group MyResourceGroup
```

or

```bash
kubectl config use-context aks-airflow
```

respectively. Note that the latter one only works if you’ve invoked the former command at least once.
Azure Postgres
To make full use of cloud features we’ll be connecting to a managed Azure Postgres instance. If you don’t have one, the quick start guide is your friend. All state for Airflow is stored in the metastore. Choosing this managed database will also take care of backups, which is one less thing to worry about.
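For reference, a hedged sketch of creating such an instance with the Azure CLI (the server name matches the example used below; location, admin credentials and SKU are placeholders you should adjust):

```bash
# Create a managed Azure Database for PostgreSQL server
az postgres server create \
  --resource-group MyResourceGroup \
  --name posgresdbforairflow \
  --location westeurope \
  --admin-user airflowadmin \
  --admin-password 'a-strong-password' \
  --sku-name GP_Gen5_2
```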
Now, the docker image used in the helm chart uses an `entrypoint.sh` which makes a nasty assumption: it creates the `AIRFLOW__CORE__SQL_ALCHEMY_CONN` value given the postgres `host`, `port`, `user` and `password`.
The issue is that it expects an unencrypted (no SSL) connection by default. Since this blog post uses an external postgres instance we must use SSL encryption.
The easiest solution to this problem is to modify the `Dockerfile` and completely remove the `ENTRYPOINT` and `CMD` lines. This does involve creating your own image and pushing it to your container registry. The Azure Container Registry (ACR) would serve that purpose very well.
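Building and pushing the image could look roughly like this, assuming your registry is called `youracrservice` and you reuse the image name and tag configured further down:

```bash
# Build the customised Airflow image and push it to ACR
docker build -t youracrservice.azurecr.io/custom-docker-airflow:1.10.2 .
az acr login --name youracrservice
docker push youracrservice.azurecr.io/custom-docker-airflow:1.10.2
```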
We can then proceed to create the user and database for Airflow using `psql`. The easiest way to do this is to log in to the Azure Portal, open a Cloud Shell and connect to the postgres database with your admin user. From here you can create the user, database and access rights for Airflow with:
```sql
create database airflow;
create user airflow with encrypted password 'foo';
grant all privileges on database airflow to airflow;
```
You can then proceed to set the following value (assuming your postgres instance is called `posgresdbforairflow`) in `airflow-local.yaml`:
```yaml
airflow:
  sqlalchemy_connection: postgresql+psycopg2://airflow@posgresdbforairflow:foo@posgresdbforairflow.postgres.database.azure.com:5432/airflow?sslmode=require
```
Note the `sslmode=require` at the end, which tells Airflow to use an encrypted connection to postgres.
Since we use a custom image we have to tell this to Helm. Set the following values in `airflow-local.yaml`:
```yaml
airflow:
  image:
    repository: youracrservice.azurecr.io/custom-docker-airflow
    tag: 1.10.2
    pullPolicy: IfNotPresent
    pullSecret: acr-auth
```
Note the `acr-auth` pull secret. You can either create this yourself or – better yet – let helm take care of it. To let helm create the secret for you, set the following values in `airflow-local.yaml`:
```yaml
imageCredentials:
  registry: youracrservice.azurecr.io
  username: youracrservice
  password: password-for-azure-container-registry
```
The sqlalchemy connection string is also propagated to the executor pods. Like the fernet key, this is done by the Kubernetes Executor.
Persistent logs and dags with Azure Fileshare
Microsoft Azure provides a way to mount SMB fileshares to any Kubernetes pod. To enable persistent logging we’ll be configuring the helm chart to mount an Azure File Share (AFS). Setting up logrotate is out of scope, but it is highly recommended since Airflow (especially the scheduler) generates a LOT of logs. In this guide, we’ll also be using an AFS for the location of the dags.
Set the following values to enable logging to a fileshare in `airflow-local.yaml`:
```yaml
persistence:
  filestore:
    storageAccountName: yourazurestorageaccountname
    storageAccountKey: yourazurestorageaccountkey
  logs:
    secretName: logssecret
    shareName: yourlogssharename
  dags:
    secretName: dagssecret
    shareName: yourdagssharename
```
The value of `shareName` must match an AFS that you’ve created before deploying the helm chart.
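If you still need to create those file shares, a sketch using the Azure CLI (share and account names are the placeholders used above):

```bash
# Create the file shares for logs and dags in your storage account
az storage share create --name yourlogssharename \
  --account-name yourazurestorageaccountname --account-key yourazurestorageaccountkey
az storage share create --name yourdagssharename \
  --account-name yourazurestorageaccountname --account-key yourazurestorageaccountkey
```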
Now everything for persistent logging and persistent dags has been set up.
This concludes all work with helm and Airflow is now ready to be deployed! Run the following command from the path where your `airflow-local.yaml` is located:

```bash
helm install --namespace "airflow" --name "airflow" -f airflow-local.yaml airflow/
```
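Once the release is installed you can watch the pods come up with:

```bash
kubectl -n airflow get pods --watch
```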
The next step would be to `exec -it` into the webserver or scheduler pod and create Airflow users. This is out of scope for this guide.
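For reference, a rough sketch of what that could look like on Airflow 1.10 with the RBAC UI enabled (the pod name is hypothetical; look it up with `kubectl -n airflow get pods`):

```bash
# Create an admin user from inside the webserver pod
kubectl -n airflow exec -it airflow-web-<pod-id> -- \
  airflow create_user -r Admin -u admin -e admin@example.com -f Admin -l User -p admin
```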
FQDN with Ingress controller
Airflow is currently running under its own service and IP in the cluster. You could go into the web server by `port-forward`-ing the pod or the service using `kubectl`. But it is much nicer to assign a proper DNS name to Airflow and make it reachable over HTTPS. Microsoft Azure has an excellent guide that explains all the steps needed to get this working. Everything below – up to the "Chaoskube" section – is a summary of that guide.
Deploying an ingress controller
If your AKS cluster is configured without RBAC you can use the following command to deploy the ingress controller.
```bash
helm install stable/nginx-ingress --namespace airflow --set controller.replicaCount=2 --set rbac.create=false
```
This will configure a publicly available IP address to an NGINX pod which currently points to nothing. We’ll fix that. You can see this IP address become available by watching the services:
```bash
kubectl -n airflow get service --watch
```
Configuring a DNS name
Using the IP address created by the ingress controller you can now register a DNS name in Azure. The following `bash` commands take care of that:
```bash
IP=$(kubectl -n airflow get service nameofyour-nginx-ingress-controller -o jsonpath={.status.loadBalancer.ingress..ip})

# Name to associate with public IP address
DNSNAME="yourairflowdnsname"

# Get the resource-id of the public ip
PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv)

# Update public ip address with DNS name
az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME
```
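You can verify that the DNS name was registered correctly, for example by querying the public IP resource again:

```bash
# Show the fully qualified domain name attached to the public IP
az network public-ip show --ids $PUBLICIPID --query dnsSettings.fqdn --output tsv
```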
Make it HTTPS
Now let’s make it secure by configuring a certificate manager that will automatically create and renew SSL certificates based on the `ingress` route. The following bash commands take care of that:
```bash
# Install the CustomResourceDefinition resources separately
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml

# Create the namespace for cert-manager
kubectl create namespace cert-manager

# Label the cert-manager namespace to disable resource validation
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install --name cert-manager --namespace cert-manager --version v0.7.0 jetstack/cert-manager
```
Next, configure Let’s Encrypt to enable signed certificates. Create a file called `cluster-issuer.yaml` containing:
```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: airflow
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your@emailaddress.com
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
```
and apply it with:

```bash
kubectl apply -f cluster-issuer.yaml
```
After the script has completed you now have a DNS name pointing to the ingress controller and a signed certificate. The only step remaining to make Airflow accessible is configuring the controller to make sure it points to the well hidden `airflow-web` service. Create a new file called `ingress-routes.yaml` containing:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: airflow-ingress
  namespace: airflow
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - yourairflowdnsname.yourazurelocation.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: yourairflowdnsname.yourazurelocation.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: airflow-web
          servicePort: 8080
```
Run `kubectl apply -f ingress-routes.yaml` to install it.
Now Airflow is accessible over HTTPS on https://yourairflowdnsname.yourazurelocation.cloudapp.azure.com. Cool!
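As a quick sanity check you can inspect the response and certificate, for example with curl:

```bash
curl -I https://yourairflowdnsname.yourazurelocation.cloudapp.azure.com
```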
Chaoskube
As avid Airflow users might have noticed, the scheduler occasionally has funky behaviour, meaning that it stops scheduling tasks. A respected – though hacky – solution is to restart the scheduler every now and then. The way to solve this in Kubernetes is by simply destroying the scheduler pod. Kubernetes will then automatically boot up a new scheduler pod.
Enter chaoskube. This amazing little tool – which also runs on your cluster – can be configured to kill pods within your cluster. It is highly configurable to target any pod to your liking.
Using the following command you can configure it to only target the Airflow scheduler pod:

```bash
helm upgrade --install chaos --set dryRun=false --set interval=5m --set namespaces=airflow --set labels="app=airflow-scheduler" stable/chaoskube
```
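To confirm it only targets the scheduler, you can tail the chaoskube logs. The deployment name below is an assumption (Helm typically composes it from the release name `chaos` and the chart name), so adjust it to whatever your cluster shows:

```bash
# Tail the chaoskube logs to see which pods it selects and terminates
kubectl logs deploy/chaos-chaoskube --tail=20
```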
Concluding
Using a few highly available Azure services and a little effort you’ve now deployed a scalable Airflow solution on Kubernetes backed by a managed Postgres instance. Airflow also has a fully qualified domain name and is reachable over HTTPS. The kubernetes executor makes Airflow infinitely scalable without having to worry about workers.
Check out our Apache Airflow course, which teaches you the internals, terminology, and best practices of working with Airflow, with hands-on experience in writing and maintaining data pipelines.