Canary Deployments in Kubernetes: Hands-on Guide
A practical guide to implementing canary deployments in Kubernetes using kind or Docker Desktop
TL;DR#
Canary deployments allow you to gradually roll out new versions of your application to a subset of users, minimizing risk and enabling quick rollbacks. This guide will walk you through implementing canary deployments in Kubernetes using either kind or Docker Desktop.
What is a Canary Deployment?#
A canary deployment is a deployment strategy where a new version of an application is gradually rolled out to a small percentage of users before being released to the entire user base. This approach helps in:
- Reducing risk by testing new versions with a limited audience
- Enabling quick rollbacks if issues are detected
- Gathering real-world performance metrics before full deployment
- Validating new features with actual users
Prerequisites#
Before we begin, ensure you have:
- Docker Desktop with Kubernetes enabled OR kind installed
- kubectl command-line tool
- Basic understanding of Kubernetes concepts
Setting Up the Environment#
Option 1: Using kind#
```bash
# Create a kind cluster with ingress support
cat <<EOF | kind create cluster --name canary-demo --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF

# Install the NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# Wait for the ingress controller to be ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```
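Option 2: Using Docker Desktop#
If you enabled Kubernetes in Docker Desktop instead, point kubectl at its context and install the NGINX Ingress Controller from the project's generic cloud manifest. A sketch, assuming default Docker Desktop settings:

```bash
# Switch kubectl to the Docker Desktop cluster
kubectl config use-context docker-desktop

# Install the NGINX Ingress Controller (generic/cloud manifest)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

# Wait for the ingress controller to be ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```

The rest of the guide is identical for both options.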
Creating a Sample Application#
Let’s create a simple web application with two versions that display their version information:
```bash
# Create a namespace for our demo
kubectl create namespace canary-demo

# Create deployment for version 1
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v1
  namespace: canary-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
      version: v1
  template:
    metadata:
      labels:
        app: webapp
        version: v1
    spec:
      containers:
      - name: webapp
        image: nginx:1.19
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        configMap:
          name: webapp-v1-html
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-v1-html
  namespace: canary-demo
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>WebApp v1</title>
      <style>
        body { font-family: Arial, sans-serif; text-align: center; padding: 50px; }
        .version { color: #666; font-size: 0.8em; }
      </style>
    </head>
    <body>
      <h1>Welcome to WebApp</h1>
      <p class="version">Version: v1</p>
    </body>
    </html>
EOF

# Create a service to expose the application
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webapp
  namespace: canary-demo
spec:
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
EOF
```
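Before introducing the canary, it is worth confirming that v1 is serving. A quick smoke test, assuming the resources above (the local port 8080 is an arbitrary choice):

```bash
# Wait for the v1 pods to be ready
kubectl rollout status deployment/webapp-v1 -n canary-demo --timeout=60s

# Forward a local port to the webapp service in the background
kubectl port-forward -n canary-demo svc/webapp 8080:80 &
PF_PID=$!
sleep 2

# The page should report "Version: v1"
curl -s http://localhost:8080/ | grep "Version:"

# Clean up the port-forward
kill $PF_PID
```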
Implementing the Canary Deployment#
Now, let’s deploy version 2 of our application as a canary. Because the `webapp` service selects on `app: webapp` alone, the single v2 replica joins the pool and receives roughly a quarter of the traffic (1 of 4 pods):

```bash
# Create the canary deployment for version 2 (a single replica)
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v2
  namespace: canary-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
      version: v2
  template:
    metadata:
      labels:
        app: webapp
        version: v2
    spec:
      containers:
      - name: webapp
        image: nginx:1.20
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        configMap:
          name: webapp-v2-html
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-v2-html
  namespace: canary-demo
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>WebApp v2</title>
      <style>
        body { font-family: Arial, sans-serif; text-align: center; padding: 50px; }
        .version { color: #666; font-size: 0.8em; }
      </style>
    </head>
    <body>
      <h1>Welcome to WebApp</h1>
      <p class="version">Version: v2</p>
    </body>
    </html>
EOF
```
Traffic Splitting with Ingress#
Splitting by replica count is coarse and tied to pod counts; for precise, adjustable percentages we can use the NGINX Ingress Controller's canary annotations:
Creating Ingress Resources for Canary Deployment#
For canary deployments with NGINX, we need two ingress resources for the same host: a primary one and a canary one carrying the canary annotations. Each ingress must point at a service that selects only its own version; if both pointed at the shared `webapp` service, the weight would merely split traffic between two identical pod pools. First create per-version services, then the two ingresses (the canary starts at a 10% weight):

```bash
# Create per-version services for ingress-based splitting
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webapp-v1
  namespace: canary-demo
spec:
  selector:
    app: webapp
    version: v1
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-v2
  namespace: canary-demo
spec:
  selector:
    app: webapp
    version: v2
  ports:
  - port: 80
    targetPort: 80
EOF

# Create primary and canary ingress resources
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress-primary
  namespace: canary-demo
spec:
  ingressClassName: nginx
  rules:
  - host: webapp.localhost
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-v1
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress-canary
  namespace: canary-demo
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: webapp.localhost
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-v2
            port:
              number: 80
EOF
```
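Weight-based splitting sends a random slice of all users to the canary. NGINX also supports header-based routing, which lets testers opt in deterministically regardless of the weight (the annotation comes from the ingress-nginx canary feature set):

```bash
# Route requests carrying the X-Canary header to the canary backend
kubectl annotate ingress webapp-ingress-canary -n canary-demo \
  nginx.ingress.kubernetes.io/canary-by-header="X-Canary" --overwrite

# A value of "always" forces the canary, "never" forces the primary
curl -s -H "X-Canary: always" http://webapp.localhost/
curl -s -H "X-Canary: never" http://webapp.localhost/
```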
Accessing the Application Locally#
Add the following entry to your /etc/hosts file:

```plaintext
127.0.0.1 webapp.localhost
```

Now you can access your application at http://webapp.localhost in your browser.
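If you prefer not to edit /etc/hosts, you can target the ingress controller directly and supply the virtual host in a header instead:

```bash
# Request webapp.localhost without a hosts entry
curl -s -H "Host: webapp.localhost" http://127.0.0.1/
```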
Monitoring the Canary Deployment#
To monitor the deployment and verify traffic splitting, we can use this shell script:
```bash
#!/bin/bash
# monitor-canary.sh
# Usage: ./monitor-canary.sh <number-of-requests>

REQUESTS=${1:-10}
SERVICE_URL="http://webapp.localhost"  # Update this with your actual service URL

echo "Monitoring canary deployment with $REQUESTS requests..."
echo "----------------------------------------"

v1_count=0
v2_count=0

for i in $(seq 1 "$REQUESTS"); do
  response=$(curl -s "$SERVICE_URL")
  if [[ $response == *"Version: v1"* ]]; then
    v1_count=$((v1_count + 1))
  elif [[ $response == *"Version: v2"* ]]; then
    v2_count=$((v2_count + 1))
  fi
  echo -n "."
  sleep 0.1
done

echo -e "\n----------------------------------------"
echo "Results:"
echo "v1 responses: $v1_count"
echo "v2 responses: $v2_count"
echo "v1 percentage: $((v1_count * 100 / REQUESTS))%"
echo "v2 percentage: $((v2_count * 100 / REQUESTS))%"
```
Gradual Rollout Strategy#
To gradually increase the canary traffic:
- Start with 10% traffic
- Monitor metrics and logs
- If successful, increase to 25%
- Continue monitoring
- If still successful, increase to 50%
- Finally, roll out to 100%
```bash
# Update the canary weight to 25%
kubectl patch ingress webapp-ingress-canary -n canary-demo --type=merge \
  -p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"25"}}}'
```
Rollback Strategy#
If issues are detected:
```bash
# Set the canary weight to 0 first, so no traffic reaches v2
kubectl patch ingress webapp-ingress-canary -n canary-demo --type=merge \
  -p '{"metadata":{"annotations":{"nginx.ingress.kubernetes.io/canary-weight":"0"}}}'

# Then scale down the canary deployment
kubectl scale deployment webapp-v2 -n canary-demo --replicas=0
```

Setting the weight to zero before scaling down avoids routing a slice of traffic to a service with no backing pods.
Best Practices#
- Monitoring: Implement comprehensive monitoring before starting canary deployments
- Metrics: Define clear success metrics for the canary
- Automation: Automate the rollout process where possible
- Testing: Ensure proper testing before canary deployment
- Documentation: Maintain clear documentation of the process
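The automation point above can be sketched as a small script: a helper maps the current weight to the next step in the 10 → 25 → 50 → 100 ladder, and a loop (guarded so it only runs against a live cluster) patches the canary ingress and pauses between steps. The ingress name and bake time are assumptions carried over from this guide:

```bash
#!/bin/bash
# promote-canary.sh - step the canary weight through a fixed ladder

# Map the current canary weight to the next step; empty output ends the ladder
next_weight() {
  case "$1" in
    10) echo 25 ;;
    25) echo 50 ;;
    50) echo 100 ;;
    *)  echo "" ;;
  esac
}

# Only talk to the cluster if the demo namespace is reachable
if kubectl get ns canary-demo >/dev/null 2>&1; then
  weight=10
  while [ -n "$weight" ]; do
    kubectl patch ingress webapp-ingress-canary -n canary-demo --type=merge \
      -p "{\"metadata\":{\"annotations\":{\"nginx.ingress.kubernetes.io/canary-weight\":\"$weight\"}}}"
    echo "Canary weight set to $weight%; baking before the next step..."
    sleep 300  # replace with real error-rate and latency checks
    weight=$(next_weight "$weight")
  done
fi
```

In practice the `sleep` would be replaced by checks against your monitoring stack, dropping the weight back to 0 and stopping if the canary misbehaves.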
Cleanup#
```bash
# Delete the namespace and all resources
kubectl delete namespace canary-demo

# If using kind, delete the cluster
kind delete cluster --name canary-demo
```
Conclusion#
Canary deployments provide a safe way to roll out new versions of your applications. By following this guide, you can implement canary deployments in your Kubernetes clusters and reduce the risk associated with deployments.
Remember to:
- Start with a small percentage of traffic
- Monitor closely
- Have a clear rollback strategy
- Document the process
- Automate where possible
Happy deploying!