Managing authentication across multiple services in a Kubernetes cluster can become cumbersome quickly. Each application requires its own credentials, making it difficult to manage user access and maintain security. Single Sign-On (SSO) addresses this issue by providing a centralized authentication system, where users log in once and gain access to all authorized services.
This blog post describes how we set up Authentik as our SSO provider for a local Kubernetes cluster. Authentik is an open-source identity provider that supports various authentication protocols, including OAuth2, SAML, and LDAP.
Authentication Flow
The authentication flow uses the OpenID Connect (OIDC) protocol with kubectl and kubelogin:
When you run a kubectl command:
- kubectl calls kubelogin to get credentials
- kubelogin opens your browser to authenticate with Authentik
- After successful authentication, Authentik returns an ID token
- kubelogin passes this token to kubectl
- kubectl uses the token to authenticate with the Kubernetes API server
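Under the hood, kubelogin hands the token back to kubectl using the client-go exec credential protocol. The JSON it prints looks roughly like this (a sketch; the token is truncated and the timestamp is illustrative):

```json
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "eyJhbGciOiJSUzI1NiIs...",
    "expirationTimestamp": "2025-01-01T12:00:00Z"
  }
}
```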
Our Stack
Before diving into the configuration, let's highlight the tools we use:
- K3s as our lightweight Kubernetes distribution
- Helm and Kustomize for declarative deployment
- Sealed Secrets for storing encrypted secrets in Git
- Tailscale for ingress into the cluster
- kubelogin for OIDC authentication with kubectl
- Argo CD, which we also connect to Authentik for SSO
Configuration
Authentik requires a PostgreSQL database and uses Redis for caching. This section outlines the process of deploying Authentik with these dependencies using Helm and Kustomize.
Kustomize Setup
We start by referencing the Authentik Helm Chart in our kustomization.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: authentik
helmCharts:
  - name: authentik
    releaseName: authentik
    version: "2025.8.4"
    repo: https://charts.goauthentik.io
Namespace
To create the namespace, add namespace.yml under the resources/ folder:
apiVersion: v1
kind: Namespace
metadata:
  name: authentik
Reference this in the kustomization.yml:
...
resources:
  - resources/namespace.yml
PostgreSQL Connection
Authentik requires a PostgreSQL database. We assume you already have PostgreSQL deployed in your cluster under the postgres namespace. Configure Authentik to connect to it:
helmCharts:
  - name: authentik
    ...
    valuesInline:
      authentik:
        postgresql:
          host: postgres.postgres.svc.cluster.local
          name: authentik
          user: file:///postgres-creds/username
          password: file:///postgres-creds/password
Secrets Management
Authentik needs several secrets:
- PostgreSQL credentials
- Secret key for encryption
We use Sealed Secrets to manage these securely. The Sealed Secrets controller must already be running in the cluster; locally, install the kubeseal and yq CLIs:
# macOS
brew install kubeseal yq
PostgreSQL Credentials
Create an empty secret whose data is replicated from your PostgreSQL namespace (this relies on the mittwald Kubernetes Replicator, which the annotation below assumes is installed):
apiVersion: v1
kind: Secret
metadata:
  name: authentik-postgres
  annotations:
    replicator.v1.mittwald.de/replicate-from: postgres/postgres-user
data: {}
Authentik Secret Key
Generate and seal the secret key:
kubectl create secret generic authentik-secrets \
  --namespace authentik \
  --dry-run=client \
  --from-literal=secret-key=$(openssl rand -hex 32) \
  -o json | \
  kubeseal \
    --controller-namespace sealed-secrets \
    --controller-name sealed-secrets | \
  yq -p json
Add the output to resources/sealed-secret.yml.
Mounting Secrets
Configure volume mounts to make secrets available to Authentik pods:
helmCharts:
  - name: authentik
    ...
    valuesInline:
      ...
      server:
        volumes:
          - name: postgres-creds
            secret:
              secretName: authentik-postgres
          - name: secrets
            secret:
              secretName: authentik-secrets
        volumeMounts:
          - name: postgres-creds
            mountPath: /postgres-creds
            readOnly: true
          - name: secrets
            mountPath: /secrets
            readOnly: true
      worker:
        volumes:
          - name: postgres-creds
            secret:
              secretName: authentik-postgres
          - name: secrets
            secret:
              secretName: authentik-secrets
        volumeMounts:
          - name: postgres-creds
            mountPath: /postgres-creds
            readOnly: true
          - name: secrets
            mountPath: /secrets
            readOnly: true
Redis
Enable the built-in Redis instance:
helmCharts:
  - name: authentik
    ...
    valuesInline:
      ...
      redis:
        enabled: true
Ingress with Tailscale
Configure ingress to expose Authentik via Tailscale:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: authentik
  annotations:
    tailscale.com/experimental-forward-cluster-traffic-via-ingress: "true"
spec:
  defaultBackend:
    service:
      name: authentik-server
      port:
        number: 443
  ingressClassName: tailscale
  tls:
    - hosts:
        - auth
This creates a Tailscale hostname auth that routes to the Authentik server.
Initial Setup
After deployment, access Authentik at your configured URL (e.g., https://auth.tail6720f8.ts.net) and complete the initial setup:
- Navigate to /if/flow/initial-setup/
- Create an admin user with a strong password
- Log in with your admin credentials
Configuring Users and Groups
Creating Users
In the Authentik admin interface:
1. Navigate to Directory → Users
2. Click Create
3. Fill in user details (username, name, email)
4. Set a password or configure email-based activation
Creating Groups
Groups help organize users and assign permissions:
1. Navigate to Directory → Groups
2. Click Create
3. Name your group (e.g., "admins")
4. Add users to the group
Kubernetes Integration with kubelogin
To enable SSO for kubectl, we'll use kubelogin, an authentication plugin that implements the OIDC flow.
Setting up OAuth Provider in Authentik
- Navigate to Applications → Providers
- Click Create and select OAuth2/OpenID Provider
- Configure the provider:
  - Name: kubernetes
  - Authorization flow: implicit consent (default-provider-authorization-implicit-consent)
  - Client type: Public
  - Redirect URIs: http://localhost:8000 and http://localhost:18000
- Note the Client ID (e.g., zUdTG...)
Creating the Application
- Navigate to Applications → Applications
- Click Create
- Link it to the provider you just created
- Configure:
- Name: Kubernetes
- Slug: kubernetes
- Provider: kubernetes (the one you created)
Installing kubelogin
Install kubelogin on your local machine:
# macOS
brew install kubelogin
Configuring kubectl
Modify your kubeconfig to use kubelogin:
users:
  - name: oidc
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://auth.tail6720f8.ts.net/application/o/kubernetes/
          - --oidc-client-id=zUdTG...
          - --oidc-extra-scope=email
          - --oidc-extra-scope=profile
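The oidc user entry still needs to be wired into a context. A minimal sketch, assuming your cluster entry is named default (adjust the names to match your kubeconfig):

```yaml
contexts:
  - name: oidc@default
    context:
      cluster: default
      user: oidc
current-context: oidc@default
```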
Configuring K3s
Configure K3s to validate OIDC tokens by editing /etc/rancher/k3s/config.yaml:
kube-apiserver-arg:
  - "oidc-issuer-url=https://auth.tail6720f8.ts.net/application/o/kubernetes/"
  - "oidc-client-id=zUdTG..."
  - "oidc-username-claim=email"
  - "oidc-groups-claim=groups"
Restart K3s:
sudo systemctl restart k3s
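To confirm that an ID token actually carries the claims K3s is configured to read (email and groups), you can decode its payload locally. The helper below is illustrative and not part of kubelogin; a real token would come from the browser flow, while here we build a synthetic unsigned one for demonstration:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT payload without verifying the signature.

    For local inspection only -- the API server still verifies the
    signature against Authentik's published keys.
    """
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding stripped by the JWT encoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Synthetic (unsigned) token for demonstration purposes only
claims = {"email": "jane@example.com", "groups": ["admins"]}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
sample_token = f"header.{payload}.signature"

print(jwt_claims(sample_token))
```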
Setting up RBAC
Create a ClusterRoleBinding to grant admin privileges to your Authentik admin group:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: authentik-kubernetes-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: Group
    name: "admins"
    apiGroup: rbac.authorization.k8s.io
The group name should match the group you created in Authentik.
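Not every group needs cluster-admin. As a sketch, a hypothetical "developers" group in Authentik could be mapped to the built-in read-only view ClusterRole instead:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: authentik-kubernetes-viewers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: Group
    name: "developers"
    apiGroup: rbac.authorization.k8s.io
```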
Complete Example
To summarize, this results in the following file structure:
authentik/
├── kustomization.yml
└── resources
├── cluster-role-binding.yml
├── ingress.yml
├── namespace.yml
└── sealed-secret.yml
kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: authentik
helmCharts:
  - name: authentik
    releaseName: authentik
    version: "2025.8.4"
    repo: https://charts.goauthentik.io
    valuesInline:
      authentik:
        secret_key: file:///secrets/secret-key
        postgresql:
          host: postgres.postgres.svc.cluster.local
          name: authentik
          user: file:///postgres-creds/username
          password: file:///postgres-creds/password
      redis:
        enabled: true
      server:
        ingress:
          enabled: false
        volumes:
          - name: postgres-creds
            secret:
              secretName: authentik-postgres
          - name: secrets
            secret:
              secretName: authentik-secrets
        volumeMounts:
          - name: postgres-creds
            mountPath: /postgres-creds
            readOnly: true
          - name: secrets
            mountPath: /secrets
            readOnly: true
      worker:
        volumes:
          - name: postgres-creds
            secret:
              secretName: authentik-postgres
          - name: secrets
            secret:
              secretName: authentik-secrets
        volumeMounts:
          - name: postgres-creds
            mountPath: /postgres-creds
            readOnly: true
          - name: secrets
            mountPath: /secrets
            readOnly: true
resources:
  - resources/cluster-role-binding.yml
  - resources/ingress.yml
  - resources/namespace.yml
  - resources/sealed-secret.yml
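With this layout in place, the overlay can be rendered and applied. A sketch, assuming a recent kubectl with Helm chart inflation enabled (the flag spelling may differ with a standalone kustomize binary):

```shell
kubectl kustomize --enable-helm authentik/ | kubectl apply -f -
```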
Resources
resources/namespace.yml:
apiVersion: v1
kind: Namespace
metadata:
  name: authentik
resources/ingress.yml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: authentik
  annotations:
    tailscale.com/experimental-forward-cluster-traffic-via-ingress: "true"
spec:
  defaultBackend:
    service:
      name: authentik-server
      port:
        number: 443
  ingressClassName: tailscale
  tls:
    - hosts:
        - auth
resources/cluster-role-binding.yml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: authentik-kubernetes-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: Group
    name: "admins"
    apiGroup: rbac.authorization.k8s.io
resources/sealed-secret.yml:
apiVersion: v1
kind: Secret
metadata:
  name: authentik-postgres
  annotations:
    replicator.v1.mittwald.de/replicate-from: postgres/postgres-user
data: {}
---
kind: SealedSecret
apiVersion: bitnami.com/v1alpha1
metadata:
  name: authentik-secrets
  namespace: authentik
  creationTimestamp: null
spec:
  template:
    metadata:
      name: authentik-secrets
      namespace: authentik
      creationTimestamp: null
  encryptedData:
    secret-key: AgB2l3...
Testing the Setup
Test kubectl Access
After configuring everything, test your kubectl access:
kubectl get nodes
On first use, kubelogin will open your browser to authenticate with Authentik. After successful authentication, kubectl commands should work seamlessly.
Verify Token
Check that you're using OIDC authentication:
kubectl config view --minify
You should see the kubelogin exec configuration in the user section.
Check User Identity
Verify which user Kubernetes sees:
kubectl auth whoami
The command should show your email address from Authentik.
Integrating with Argo CD
Authentik can also provide SSO for Argo CD, allowing developers to use the same credentials for both kubectl and the Argo CD UI.
Creating an OAuth Provider for Argo CD
- In Authentik, create a new OAuth2/OpenID Provider:
  - Name: argocd
  - Authorization flow: implicit consent (default-provider-authorization-implicit-consent)
  - Client type: Public
  - Redirect URIs: https://argocd.example.com/auth/callback
- Note the Client ID (e.g., zSN3...)
Creating the Argo CD Application
- Navigate to Applications → Applications
- Click Create
- Link it to the Argo CD provider
- Configure the application details
Configuring Argo CD
Add OIDC configuration to your Argo CD ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  url: https://argocd.example.com
  oidc.config: |
    name: Authentik
    issuer: https://auth.tail6720f8.ts.net/application/o/argocd/
    clientID: zSN3...
    requestedScopes:
      - openid
      - profile
      - email
      - groups
Configure RBAC to map Authentik groups to Argo CD roles:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    g, admins, role:admin
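The policy grammar also supports finer-grained mappings. As a sketch, a hypothetical "developers" group could be limited to Argo CD's built-in read-only role alongside the admin mapping:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    g, admins, role:admin
    g, developers, role:readonly
```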
Extensions
To take this setup further, consider:
- Multi-factor Authentication: Enable MFA in Authentik for enhanced security
- LDAP Integration: Connect Authentik to your organization's existing LDAP directory, Microsoft Entra ID, or Google Workspace
- Certificate Management: Use cert-manager for automatic TLS certificate rotation
- Session Management: Configure session timeouts and refresh token policies
Written by

Jetze Schuurmans
Machine Learning Engineer
Jetze is a well-rounded Machine Learning Engineer, who is as comfortable solving Data Science use cases as he is productionizing them in the cloud. His expertise includes: AI4Science, MLOps, and GenAI. As a researcher, he has published papers on: Computer Vision and Natural Language Processing and Machine Learning in general.