Tutorial: Django Deployment & Scaling with K3s and GitLab CI/CD


This doc will walk you through setting up your cluster, getting your database sorted, hooking up GitLab, and then deploying and scaling your Django masterpiece.

1. Setting up K3s

First things first, let's get K3s on your machine. Super easy.

curl -sfL https://get.k3s.io | sh -

After that's done, you'll need to sort out your kubeconfig so you can actually talk to your cluster.

# Enter your sudo password if prompted
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$USER:$USER" ~/.kube/config # Or spell out your actual user:group
chmod 600 ~/.kube/config
export KUBECONFIG=~/.kube/config # Add this to your .bashrc too, so it sticks around!

2. Setting up Helm

Helm's gonna be your best friend for managing Kubernetes apps. Let's get it installed.

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

3. Setting up CloudNativePG Operator

Since we need a persistent database for Django, we're going with the CloudNativePG operator for PostgreSQL. It's awesome for managing Postgres on Kubernetes. We'll install it via its Helm chart.

helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --install cnpg \
  --namespace cnpg-system \
  --create-namespace \
  cnpg/cloudnative-pg

4. Setting up PostgreSQL on Kubernetes

Now that the operator is running, you can deploy your PostgreSQL cluster using a manifest file. You'll need to create a manifests/postgres/postgresql-cluster.yaml file for this.

manifests/postgres/postgresql-cluster.yaml:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres-database
spec:
  instances: 1
  storage:
    size: 5Gi
  postgresql:
    parameters:
      max_connections: "100"
  imageName: ghcr.io/cloudnative-pg/postgresql:16.2
  resources:
    requests:
      cpu: "150m"
      memory: "384Mi"
    limits:
      cpu: "400m"
      memory: "768Mi"

Apply this manifest:

kubectl apply -f manifests/postgres/postgresql-cluster.yaml

5. Setting up the GitLab Kubernetes Agent

This is how your GitLab CI/CD talks to your K3s cluster. Super crucial for automated deployments.

helm repo add gitlab https://charts.gitlab.io
helm repo update
helm upgrade --install alfian-blog gitlab/gitlab-agent \
    --namespace gitlab-agent-alfian-blog \
    --create-namespace \
    --set config.token=<fill with your token> \
    --set config.kasAddress=wss://kas.gitlab.com

Next, create a blank file in your main branch. This tells GitLab where to find the agent config.

# .gitlab/agents/alfian-blog/config.yaml (just an empty file)

Then, double-check your GitLab UI to make sure the Kubernetes agent is connected and happy.

6. Setting up DNS

You'll need to point your domain to your cluster's IP address. This depends on where your K3s cluster is running (e.g., a cloud VM, your home lab, etc.). Get that A record squared away!

7. Installing Cert-Manager for HTTPS

HTTPS is non-negotiable these days. Cert-Manager will handle getting those Let's Encrypt certificates for you.

helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.14.5 \
  --set installCRDs=true

To verify it's running:

kubectl get pods -n cert-manager

8. Applying Let's Encrypt ClusterIssuer

Now, you'll need to apply a ClusterIssuer to tell Cert-Manager how to get certificates from Let's Encrypt. This will be in a manifests/app/letsencrypt-clusterissuer.yaml file.

manifests/app/letsencrypt-clusterissuer.yaml:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: <your email>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: traefik

Apply this manifest:

kubectl apply -f manifests/app/letsencrypt-clusterissuer.yaml

9. Deploying Your Django App

Important Note: The following YAML files and Dockerfile are tailored for the "alfian-blog" project. You'll need to adjust names, namespaces, image paths, and other specific values to match your own Django application and GitLab project.

This is where the magic happens with your CI/CD. Here are the files you'll need:
Dockerfile

This builds your Django application image.

# Use an official Python runtime as a parent image
FROM python:3.11-slim

# Install curl for debugging
RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Collect static files
RUN python manage.py collectstatic --noinput

# Expose the port your app runs on
EXPOSE 8000

# Run the Django application
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "project.wsgi:application"]
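Since the Dockerfile installs from requirements.txt and serves the app with Gunicorn, both Django and Gunicorn need to be listed there, plus a PostgreSQL driver for the database. A minimal illustration (the version pins below are placeholders, use your own):

```
Django>=4.2
gunicorn>=21.2
psycopg2-binary>=2.9
```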

GitLab CI/CD Configuration (.gitlab-ci.yml)

This handles building your Docker image, pushing it to the GitLab registry, running tests, and deploying to Kubernetes. Notice how it creates Kubernetes secrets for your database credentials, Django SECRET_KEY, and GitLab registry access. It also injects the COMMIT_SHA into your deployment manifest.

stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  only:
    - main
  variables:
    DOCKER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker buildx build -t "$DOCKER_IMAGE" .
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker push "$DOCKER_IMAGE"

test:
  stage: test
  image: python:3.11-slim
  script:
    - pip install -r requirements.txt
    - python manage.py test
  only:
    - main

deploy:
  stage: deploy
  image: alpine:latest
  variables:
    KUBECTL_CONTEXT: alfianpr/alfian-blog:alfian-blog
  before_script:
    - apk add --no-cache kubectl
  only:
    - main
  script:
    - kubectl config use-context "$KUBECTL_CONTEXT"
    # Check if namespace exists and create if it doesn't
    - if ! kubectl get namespace blog > /dev/null 2>&1; then kubectl create namespace blog; fi
    - kubectl create secret generic django-secret --from-literal=secret-key="$DJANGO_SECRET_KEY" --namespace=blog --dry-run=client -o yaml | kubectl apply -f -
    # Create Kubernetes secret for PostgreSQL credentials
    - kubectl create secret generic postgres-credentials --from-literal=POSTGRES_USER="$POSTGRES_USER" --from-literal=POSTGRES_PASSWORD="$POSTGRES_PASSWORD" --from-literal=POSTGRES_DB="$POSTGRES_DB" --from-literal=POSTGRES_HOST="$POSTGRES_HOST" --from-literal=POSTGRES_PORT="$POSTGRES_PORT" --namespace=blog --dry-run=client -o yaml | kubectl apply -f -

    # Create Kubernetes secret for GitLab Registry credentials
    - echo "Ensuring gitlab-registry-secret is fresh with personal credentials..."
    - kubectl delete secret gitlab-registry-secret --namespace=blog --ignore-not-found
    - echo "Creating secret with server=$CI_REGISTRY, user=$MY_GITLAB_USERNAME"
    - kubectl create secret docker-registry gitlab-registry-secret --docker-server="$CI_REGISTRY" --docker-username="$MY_GITLAB_USERNAME" --docker-password="$MY_GITLAB_PASSWORD" --docker-email="$CI_REGISTRY_EMAIL" --namespace=blog --dry-run=client -o yaml | kubectl apply -f -

    # Create Kubernetes secret for S3 backup credentials (from GitLab CI/CD variables)
    # Make sure to set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as protected CI/CD variables in GitLab
    - kubectl create secret generic s3-credentials --from-literal=AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" --from-literal=AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" --namespace=cnpg-system --dry-run=client -o yaml | kubectl apply -f -


    # Because each image is tagged with the commit SHA, applying the updated
    # manifest triggers a rolling update (no need to delete the deployment first)
    - sed "s|__COMMIT_SHA__|$CI_COMMIT_SHORT_SHA|g" manifests/app/deployment.yaml > /tmp/deployment.yaml
    - kubectl apply -f /tmp/deployment.yaml
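The deploy job expects DJANGO_SECRET_KEY (along with the POSTGRES_* and registry variables) to be defined as CI/CD variables in your GitLab project settings. One way to generate a suitable secret key locally, sketched with Python's standard library:

```python
import secrets

# 50 random bytes, URL-safe base64 encoded (roughly 67 characters).
# django.core.management.utils.get_random_secret_key() also works if
# Django is installed locally.
secret_key = secrets.token_urlsafe(50)
print(secret_key)
```

Paste the output into a masked (and ideally protected) CI/CD variable named DJANGO_SECRET_KEY.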

Deployment Manifest (manifests/app/deployment.yaml)

This defines how your Django application pods will run in Kubernetes. The COMMIT_SHA placeholder will be replaced by your GitLab CI pipeline.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: alfian-blog-deployment
  namespace: blog # Set the namespace to "blog"
  labels:
    app: alfian-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alfian-blog
  template:
    metadata:
      labels:
        app: alfian-blog
    spec:
      imagePullSecrets:
      - name: gitlab-registry-secret
      containers:
      - name: alfian-blog
        image: registry.gitlab.com/alfianpr/alfian-blog:__COMMIT_SHA__
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "250m"
        env:
          # These environment variables should be populated from a Kubernetes Secret
          # created from your GitLab CI/CD variables or directly from the PostgreSQL secret.
          # Example:
          - name: DATABASE_NAME
            valueFrom:
              secretKeyRef:
                name: postgres-credentials # Name of the secret containing PostgreSQL credentials
                key: POSTGRES_DB
          - name: DATABASE_USER
            valueFrom:
              secretKeyRef:
                name: postgres-credentials
                key: POSTGRES_USER
          - name: DATABASE_PASSWORD
            valueFrom:
              secretKeyRef:
                name: postgres-credentials
                key: POSTGRES_PASSWORD
          - name: DATABASE_HOST
            valueFrom:
              secretKeyRef:
                name: postgres-credentials
                key: POSTGRES_HOST
          - name: DATABASE_PORT
            valueFrom:
              secretKeyRef:
                name: postgres-credentials
                key: POSTGRES_PORT
          - name: SECRET_KEY # Django SECRET_KEY
            valueFrom:
              secretKeyRef:
                name: django-secret    # Name of the Kubernetes Secret
                key: secret-key       # Key within that Secret
          - name: DEBUG
            value: "False" # Set to False for production
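On the Django side, settings.py has to read these variables. Here's a minimal sketch, assuming the env var names from the manifest above and Django's built-in PostgreSQL backend; for reference, CloudNativePG typically exposes the read-write endpoint of the cluster above as a Service named postgres-database-rw on port 5432, which is what your POSTGRES_HOST variable should point to:

```python
import os

def database_config(env=os.environ):
    """Build Django's DATABASES['default'] from the env vars set by the Deployment."""
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": env.get("DATABASE_NAME"),
        "USER": env.get("DATABASE_USER"),
        "PASSWORD": env.get("DATABASE_PASSWORD"),
        "HOST": env.get("DATABASE_HOST"),
        "PORT": env.get("DATABASE_PORT", "5432"),
    }

# In settings.py:
# SECRET_KEY = os.environ["SECRET_KEY"]
# DEBUG = os.environ.get("DEBUG", "False") == "True"
# DATABASES = {"default": database_config()}
```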

Service Manifest (manifests/app/service.yaml)

This exposes your Django application within the Kubernetes cluster.

apiVersion: v1
kind: Service
metadata:
  name: alfian-blog-service
  namespace: blog 
  labels:
    app: alfian-blog
spec:
  selector:
    app: alfian-blog
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000 # Django application runs on port 8000
  type: ClusterIP # Expose the service within the cluster

Redirect Middleware Manifest (manifests/app/redirect-middleware.yaml)

This Traefik middleware ensures that all HTTP traffic is redirected to HTTPS.

# redirect-middleware.yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: redirect-to-https
  namespace: blog
spec:
  redirectScheme:
    scheme: https
    permanent: true

Ingress Manifest (manifests/app/ingress.yaml)

This sets up two Ingress resources: one for handling HTTPS traffic (with Cert-Manager for TLS) and another for HTTP traffic that uses the redirect middleware to force HTTPS.

# ingress.yaml

# --- HTTPS Ingress ---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alfian-blog-ingress # Your existing HTTPS Ingress
  namespace: blog
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    traefik.ingress.kubernetes.io/router.entrypoints: websecure # Only websecure here
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  ingressClassName: traefik
  rules:
    - host: alfianpratama.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: alfian-blog-service
                port:
                  number: 80
    - host: www.alfianpratama.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: alfian-blog-service
                port:
                  number: 80
  tls:
    - hosts:
        - alfianpratama.com
        - www.alfianpratama.com
      secretName: alfianpratama-com-tls

--- # --- SEPARATOR FOR NEXT RESOURCE ---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alfian-blog-ingress-http-redirect
  namespace: blog
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web # Only 'web' entrypoint for HTTP
    # This is the new annotation to apply the middleware:
    traefik.ingress.kubernetes.io/router.middlewares: blog-redirect-to-https@kubernetescrd # IMPORTANT: <namespace>-<middleware-name>@kubernetescrd
spec:
  ingressClassName: traefik
  rules:
    - host: alfianpratama.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: alfian-blog-service # Dummy backend for k8s validation
                port:
                  number: 80
    - host: www.alfianpratama.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: alfian-blog-service # Dummy backend for k8s validation
                port:
                  number: 80

After the deployment runs through your CI/CD, you'll need to apply the service, redirect middleware, and ingress manifests:

kubectl apply -f manifests/app/service.yaml
kubectl apply -f manifests/app/redirect-middleware.yaml
kubectl apply -f manifests/app/ingress.yaml

10. Horizontal Pod Autoscaler (HPA)

For scaling, we're using the Horizontal Pod Autoscaler. K3s ships with metrics-server by default, which is exactly what the HPA needs to read CPU and memory usage. Just apply your HPA manifest:

manifests/app/horizontal-pod-autoscaler.yaml:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: alfian-blog-hpa
  namespace: blog # Must be in the same namespace as your deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: alfian-blog-deployment # Name of your deployment
  minReplicas: 1
  maxReplicas: 2 # Cap the deployment at 2 pods
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80 # Target 80% CPU utilization
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80 # Target 80% memory utilization

Apply this manifest:

kubectl apply -f manifests/app/horizontal-pod-autoscaler.yaml
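Under the hood, the HPA computes a desired replica count per metric as ceil(currentReplicas × currentUtilization / targetUtilization), takes the largest result across metrics, and clamps it to the minReplicas/maxReplicas range. A small sketch of that arithmetic:

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization,
                     min_replicas=1, max_replicas=2):
    """Approximate the HPA scaling formula for a single metric."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# One pod averaging 120% of its CPU request against an 80% target scales to 2.
print(desired_replicas(1, 120, 80))  # -> 2
# One pod at 40% stays at 1.
print(desired_replicas(1, 40, 80))   # -> 1
```

This is why the resource requests in the deployment manifest matter: utilization is measured against the requested CPU/memory, not the limits.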

11. Initial Django Setup (Migration & Superuser)

Last but not least, you'll need to run your Django migrations and create a superuser for your app.

First, find your Django pod name in the blog namespace (or whatever namespace you used).

kubectl get po -n blog

Then, execute the migration and superuser creation commands inside that pod, replacing <pod-name> with your actual pod name.

# Exec migration
kubectl exec -it <pod-name> -n blog -- python manage.py migrate

# Exec create superuser
kubectl exec -it <pod-name> -n blog -- python manage.py createsuperuser

12. How to See PostgreSQL Credentials

Your PostgreSQL credentials (username, password, host, port, database name) are stored in a Kubernetes Secret, as referenced in your Django deployment. You can retrieve these using kubectl.

First, you need to know the name of the secret. In your deployment.yaml, you're referencing postgres-credentials.

To view the secret and its keys:

kubectl get secret postgres-credentials -n blog -o yaml

This will output the secret in YAML format, but the actual data (like POSTGRES_USER, POSTGRES_PASSWORD, etc.) will be Base64 encoded.

To decode a specific value, for example, the password:

kubectl get secret postgres-credentials -n blog -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 --decode

Replace POSTGRES_PASSWORD with POSTGRES_USER, POSTGRES_DB, POSTGRES_HOST, or POSTGRES_PORT to see their respective values.
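If you'd rather script the decoding, the same thing takes a couple of lines of Python, e.g. against values pulled from kubectl get secret ... -o json:

```python
import base64

# Kubernetes stores Secret data base64-encoded; decode a value the way
# `base64 --decode` does. The encoded string below is just an example.
encoded = "cG9zdGdyZXM="
print(base64.b64decode(encoded).decode("utf-8"))  # -> postgres
```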

Next improvement

Since we're using the CloudNativePG operator, it's easy to set up on-demand and scheduled backups of your data. I'll write another post about that.

About the Author

Data & Platform Engineer with 3+ years of experience in cloud, DevOps, and data solutions. I love turning complex data into clear, actionable insights and am passionate about building efficient, scalable systems, especially with Kubernetes (which even runs this blog!).

Let's connect on LinkedIn