Interactive guide

Kubernetes from the ground up

If you know how to work with Linux and have some networking background, this guide is for you. It explains why things work the way they do — not just the commands.

15 chapters · 20+ labs · Minikube · Real projects

🚀 Introduction to Kubernetes

What is Kubernetes?

Kubernetes (abbreviated K8s) is an open-source platform for container orchestration. Simply put: you tell it "I want 3 copies of this application running" and K8s takes care of making sure they always run — even when a server crashes, or when you need to scale.

💡
Linux analogy: systemd manages processes on a single machine. Kubernetes does the same, but for hundreds of machines and thousands of containers at once. Think of it as systemd for an entire datacenter.

Why Kubernetes?

🔁 Self-healing
When a container crashes, K8s automatically restarts it. When a node fails, it reschedules pods to another node.
📈 Scaling
One command and you have 10 copies instead of 1. The Horizontal Pod Autoscaler (HPA) can do it automatically based on CPU load.
🚀 Rolling updates
Deploy a new version with zero downtime — K8s gradually replaces old pods with new ones.
⚙️ Declarative approach
You don't write how to do something, you write what you want. K8s handles the rest.

Key concepts (glossary)

Term         What it is                                           Linux analogy
Cluster      Set of servers (nodes) managed by K8s                Entire server farm
Node         One physical or virtual server                       Single server
Pod          Smallest deployable unit — 1+ containers             Process (PID)
Deployment   Defines how many pods we want                        systemd service
Service      Stable IP/DNS for a group of pods                    load balancer / iptables
Namespace    Logical isolation of resources in a cluster          Linux namespaces
Ingress      HTTP(S) router from the internet into the cluster    nginx reverse proxy
PVC          Request for persistent storage                       LVM logical volume

🏗️ Kubernetes Architecture

A Kubernetes cluster consists of two types of servers: the Control Plane (the brain) and Worker Nodes (the muscles). All your interaction goes through the Control Plane, via kubectl.

Kubernetes Cluster
├── Control Plane
│     🧠 kube-apiserver — entry gateway
│     💾 etcd — state database
│     📋 scheduler — where should a pod run?
│     🔄 controller-manager — state watcher
└── Worker Node (×N)
      🤝 kubelet — node agent
      🔀 kube-proxy — networking / iptables
      🐳 containerd — runs containers
      └── Pod: Container A + Container B

$ kubectl apply ──HTTPS :6443──▶ kube-apiserver

What does each component do?

kube-apiserver
The single entry point into the cluster. Every kubectl command talks to it via its REST API. It handles authentication and authorization (RBAC) and validates YAML manifests before saving them to etcd.
etcd
A distributed key-value database where the entire cluster state is stored — every pod, deployment, secret, configuration. If you lose etcd without a backup, you lose the cluster. In production always run 3 or 5 etcd instances.
kube-scheduler
Watches for new pods without an assigned node and decides where they will run, based on available resources (CPU, RAM), affinity rules and resource requests.
controller-manager
Runs the control loops — continuously comparing the desired state (what you want) with the actual state (what's running). If a Deployment wants 3 pods but only 2 are running, the controller creates a third.
kubelet
An agent running on every worker node. It receives instructions from the API server and ensures that the containers defined in a pod are actually running. Think of it as systemd for a single node.
kube-proxy
Manages network rules (iptables/ipvs) on each node. This is what makes Services work — when a request comes in for a ClusterIP, kube-proxy forwards it to one of the running pods.
Minikube simplification: In Minikube, the Control Plane and Worker Node run in a single VM/container. Perfect for learning — no need for 3 separate servers.
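
In Minikube you can poke at these components directly; a quick sketch (pod names vary by version):

bash
# Control-plane components run as pods in the kube-system namespace
kubectl get pods -n kube-system

# The API server endpoint kubectl talks to (the :6443 from the diagram)
kubectl cluster-info

# Query the API server directly, reusing kubectl's credentials
kubectl get --raw /healthz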

⚙️ Setting Up the Environment

1. Docker (container runtime)

bash
# Ubuntu / Debian
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list

sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin   # compose plugin is used later

# Add yourself to the docker group (run without sudo)
sudo usermod -aG docker $USER
newgrp docker

# Verify
docker run hello-world

2. kubectl

bash
# Download the latest stable release
curl -LO "https://dl.k8s.io/release/$(curl -sL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Install
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Verify
kubectl version --client

# Optional: bash autocomplete
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
source ~/.bashrc

3. Minikube

bash
# Download and install
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start the cluster (with Docker driver)
minikube start --driver=docker --cpus=2 --memory=4096

# Check status
minikube status
kubectl get nodes
output
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   1m    v1.32.0
Useful minikube commands:
minikube stop — stop the cluster
minikube delete — delete the cluster (clean slate)
minikube dashboard — open the web UI
minikube addons enable ingress — enable nginx ingress

🐳 Docker & Docker Compose

Container vs. virtual machine

🖥️ Virtual Machine
Full OS (kernel + libs) — typically 1–10 GB, boots in minutes
🐳 Container
Shares host kernel, only app + libs — typically 10–200 MB, starts in seconds
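
You can verify the shared kernel yourself; a minimal sketch:

bash
# Host and container report the same kernel release: there is no guest OS
uname -r
docker run --rm alpine uname -r   # same output as above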

Dockerfile — recipe for an image

dockerfile
# Base PHP-FPM image
FROM php:8.2-fpm-alpine

# Install system dependencies
RUN apk add --no-cache git curl

# Composer is not in the base image; copy the binary from the official image
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer

# Set working directory
WORKDIR /var/www/html

# Copy composer.json separately (Docker layer cache!)
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader

# Copy the rest of the code
COPY . .

# Use non-root user (security)
RUN chown -R www-data:www-data /var/www/html
USER www-data

EXPOSE 9000
CMD ["php-fpm"]
💡
Layers and cache: Docker builds images in layers. If you change code but not composer.json, the composer install layer is not rebuilt — saves time. That's why you always COPY dependencies before COPY code.

Essential Docker commands

bash
# Build image
docker build -t my-app:1.0 .
docker build -t my-app:latest -f Dockerfile.prod .

# Run container
docker run -d -p 8080:80 --name web nginx:alpine
docker run -it --rm alpine sh           # interactive, auto-removes after exit

# Management
docker ps                               # running containers
docker ps -a                            # all including stopped
docker logs -f web                      # streaming logs
docker exec -it web sh                  # shell into running container
docker stop web && docker rm web

# Images
docker images
docker pull nginx:alpine
docker push registry.example.com/my-app:1.0
docker rmi my-app:1.0

# Cleanup
docker system prune -a                  # WARNING: removes all unused objects

Docker Compose — local development environment

Compose is a tool for running multiple containers at once. You don't use it directly in Kubernetes, but it's important for local development and understanding multi-container architecture.

yaml
# docker-compose.yml
version: '3.9'   # obsolete in Compose v2; modern Docker Compose ignores it
services:
  app:
    build: .
    ports:
      - "8080:80"
    environment:
      DATABASE_URL: mysql://user:pass@db:3306/myapp
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - .:/var/www/html          # live reload during development

  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: myapp
      MYSQL_USER: user
      MYSQL_PASSWORD: pass
    volumes:
      - db_data:/var/lib/mysql   # persistent data
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 5

volumes:
  db_data:
bash
docker compose up -d        # start in background
docker compose logs -f app  # follow logs
docker compose down         # stop and remove containers (data stays in volume)
docker compose down -v      # stop and remove volumes too
⚠️
Compose vs Kubernetes: Compose is great for local development. For production (scaling, self-healing, rolling updates) you need Kubernetes. Many concepts are similar — service name = DNS name, volumes = PVC.

💻 kubectl — controlling the cluster

kubectl is the CLI tool that communicates with kube-apiserver via HTTPS. Everything you do in Kubernetes goes through it.

Basic command structure

kubectl  [ACTION]  [RESOURCE]  [NAME]  [-n NAMESPACE]  [FLAGS]

The most important commands

bash
# ── GET: what's running? ───────────────────────────────
kubectl get pods                          # pods in default namespace
kubectl get pods -n kube-system           # pods in a specific namespace
kubectl get pods -A                       # pods in ALL namespaces
kubectl get pods -o wide                  # + IP, node
kubectl get pods -o yaml                  # full YAML output
kubectl get all -n my-namespace           # pods + services + deployments

kubectl get nodes                         # list nodes
kubectl get namespaces                    # list namespaces
kubectl get services                      # services
kubectl get deployments                   # deployments
kubectl get pvc                           # persistent volume claims

# ── DESCRIBE: detailed info ────────────────────────────
kubectl describe pod <pod-name>           # events, status, configuration
kubectl describe node minikube
kubectl describe deployment nginx

# ── APPLY / CREATE: deploy ─────────────────────────────
kubectl apply -f deployment.yaml          # create/update from YAML (RECOMMENDED)
kubectl apply -f ./k8s/                   # apply entire directory
kubectl create namespace prod             # quick creation without YAML

# ── DELETE: removal ───────────────────────────────────
kubectl delete -f deployment.yaml         # delete what the YAML defines
kubectl delete pod <name>                 # delete a specific pod (deployment will recreate it!)
kubectl delete deployment nginx
kubectl delete namespace test             # deletes EVERYTHING in namespace

# ── LOGS: viewing logs ────────────────────────────────
kubectl logs <pod>                        # logs
kubectl logs <pod> -f                     # streaming
kubectl logs <pod> -c <container>        # specific container in pod
kubectl logs <pod> --previous            # logs after crash (previous run)

# ── EXEC: commands inside container ──────────────────
kubectl exec -it <pod> -- sh              # shell into container (alpine)
kubectl exec -it <pod> -- bash            # bash (ubuntu/debian)
kubectl exec <pod> -- ls /var/www         # one-off command

# ── PORT-FORWARD: local access ────────────────────────
kubectl port-forward pod/<name> 8080:80   # localhost:8080 → pod:80
kubectl port-forward svc/nginx 8080:80    # via service
LAB 01 First steps in the cluster
  1. Start minikube: minikube start
  2. Check nodes: kubectl get nodes
  3. View system pods: kubectl get pods -n kube-system
  4. Run a test pod: kubectl run test --image=nginx:alpine
  5. Check status: kubectl get pods
  6. View details: kubectl describe pod test
  7. Open shell: kubectl exec -it test -- sh
  8. Clean up: kubectl delete pod test

📦 Pods

A Pod is the smallest deployable unit in Kubernetes. It contains one or more containers that share a network namespace (same IP, can talk via localhost) and volumes.

⚠️
Pods are ephemeral! When a pod crashes or is deleted, all data inside it is gone. This is why pods are never used directly — always through a Deployment, which ensures replacement.

Pod YAML definition

yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: default
  labels:
    app: web               # labels — key for Selectors and Services
    version: "1.0"
spec:
  containers:
    - name: nginx
      image: nginx:alpine
      ports:
        - containerPort: 80
      resources:
        requests:
          cpu: "100m"      # 0.1 CPU cores (guaranteed minimum)
          memory: "64Mi"
        limits:
          cpu: "200m"      # maximum (if exceeded: CPU throttled, RAM → OOMKill)
          memory: "128Mi"
      env:
        - name: APP_ENV
          value: "production"
      livenessProbe:       # K8s restarts container if this fails
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:      # K8s only sends traffic if this passes
        httpGet:
          path: /ready
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 5
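
Because the containers in a pod share one network namespace, a sidecar can reach the main container over localhost. A minimal sketch (the pod and container names are illustrative):

bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  containers:
    - name: web
      image: nginx:alpine
    - name: sidecar
      image: alpine
      # polls the nginx container over localhost (same network namespace)
      command: ["sh", "-c", "while true; do wget -qO- http://localhost; sleep 10; done"]
EOF

kubectl logs sidecar-demo -c sidecar   # prints nginx's welcome page HTML
kubectl delete pod sidecar-demo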

Pod lifecycle

Pending → ContainerCreating → Running → Succeeded / Failed
Pending — scheduler is finding a suitable node, or image is being pulled
Running — at least one container is running
CrashLoopBackOff — container keeps crashing and K8s keeps restarting it (exponential backoff)
ImagePullBackOff — cannot pull the image (wrong tag, private registry without credentials)
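
When a pod is stuck in one of these states, the diagnosis usually follows the same pattern; a sketch:

bash
# Events at the bottom of describe almost always name the cause
kubectl describe pod <pod>

# CrashLoopBackOff: read the logs of the previous, crashed run
kubectl logs <pod> --previous

# Reproduce ImagePullBackOff with a nonexistent tag, then inspect it
kubectl run broken --image=nginx:no-such-tag
kubectl get pod broken                 # ErrImagePull → ImagePullBackOff
kubectl delete pod broken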

🔄 Deployments & ReplicaSets

A Deployment is what you actually use in practice. It tells K8s: "I want 3 copies of this application, always." A ReplicaSet (created automatically) maintains the exact number of pods.

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: my-project
spec:
  replicas: 3                          # we want 3 pods
  selector:
    matchLabels:
      app: web-app                     # manages pods with this label
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                      # max +1 pod during update
      maxUnavailable: 0                # always 3 pods available (zero downtime)
  template:                            # pod template
    metadata:
      labels:
        app: web-app                   # must match selector.matchLabels
    spec:
      containers:
        - name: app
          image: nginx:1.25            # changing tag triggers rolling update
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"

Managing a Deployment

bash
# Deploy
kubectl apply -f deployment.yaml

# Scale
kubectl scale deployment web-app --replicas=5

# Update image (triggers rolling update)
kubectl set image deployment/web-app app=nginx:1.27

# Watch rolling update progress
kubectl rollout status deployment/web-app

# History
kubectl rollout history deployment/web-app

# Rollback to previous version
kubectl rollout undo deployment/web-app

# Rollback to specific revision
kubectl rollout undo deployment/web-app --to-revision=2

# Restart pods (e.g. after ConfigMap change)
kubectl rollout restart deployment/web-app
LAB 02 Rolling update with zero downtime
  1. Save the deployment YAML above as deployment.yaml
  2. Create namespace: kubectl create namespace my-project
  3. Deploy: kubectl apply -f deployment.yaml
  4. Watch pods: kubectl get pods -n my-project -w
  5. In a second terminal, update the image: kubectl set image deployment/web-app app=nginx:alpine -n my-project
  6. Observe the rolling update — pods are gradually replaced
  7. Try rollback: kubectl rollout undo deployment/web-app -n my-project

🌐 Services & Networking

Pods have temporary IP addresses. When a pod restarts, it gets a new IP. A Service provides a stable DNS name and IP, behind which any number of pods can sit. It works as a load balancer.

💡
Analogy: A Service is like a DNS record + iptables rule. When you call http://web-app, CoreDNS resolves it to a ClusterIP, and kube-proxy forwards the request to one of the running pods.

Types of Services

ClusterIP — default
Accessible only inside the cluster. Ideal for service-to-service communication (e.g. app → database).
NodePort
Opens a port (30000–32767) on every node. Accessible from outside via NODE_IP:PORT. Used in Minikube for testing.
LoadBalancer
Creates an external load balancer at a cloud provider (AWS ELB, GCP LB). In Minikube: minikube tunnel.
Ingress — not a Service
HTTP/HTTPS router — directs traffic by hostname and path. One Ingress instead of many LoadBalancers.
yaml
# ClusterIP — internal communication
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: my-project
spec:
  type: ClusterIP           # default, can be omitted
  selector:
    app: web-app            # selects pods with this label
  ports:
    - port: 80              # SERVICE port
      targetPort: 80        # CONTAINER port

---
# NodePort — external access (Minikube)
apiVersion: v1
kind: Service
metadata:
  name: web-app-ext
spec:
  type: NodePort
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080       # optional; K8s assigns a random port (30000–32767) if omitted

DNS in Kubernetes

CoreDNS automatically creates DNS records for every Service. From any pod in the cluster you can call:

# Same namespace
http://web-app
# Different namespace (full DNS record)
http://web-app.my-project.svc.cluster.local
# Short form (different namespace)
http://web-app.my-project
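
To see which pods currently sit behind a Service (the exact list kube-proxy balances across), inspect its Endpoints; a sketch using the web-app Service:

bash
# Stable ClusterIP of the Service
kubectl get svc web-app -n my-project

# Pod IPs backing the Service right now (updated as pods come and go)
kubectl get endpoints web-app -n my-project

# Resolve the Service name from inside the cluster
kubectl run -it --rm dns-check --image=busybox --restart=Never \
  -n my-project -- nslookup web-app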
LAB 03 Expose the app and test DNS
  1. Deploy the deployment from Lab 02
  2. Create a NodePort service: kubectl expose deployment web-app --type=NodePort --port=80 -n my-project
  3. Get the URL: minikube service web-app -n my-project --url
  4. Open in browser or: curl $(minikube service web-app -n my-project --url)
  5. Test DNS from inside: kubectl run -it --rm dns-test --image=alpine --restart=Never -n my-project -- sh
    then inside: wget -O- http://web-app

🔧 ConfigMaps & Secrets

Configuration belongs outside the Docker image. ConfigMap stores non-sensitive data (URLs, feature flags), Secret stores sensitive data (passwords, tokens, certificates — base64 encoded).

yaml
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: my-project
data:
  APP_ENV: "production"
  APP_URL: "https://my-app.com"
  nginx.conf: |                        # can contain entire files
    server {
        listen 80;
        root /var/www/html/public;
    }

---
# Secret (values must be base64: echo -n 'password' | base64)
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: my-project
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQxMjM=       # "password123" in base64
  API_KEY: c2VjcmV0a2V5              # "secretkey" in base64

Using in a Pod — as environment variables

yaml
spec:
  containers:
    - name: app
      image: my-app:latest
      env:
        # Individual values from ConfigMap/Secret
        - name: APP_ENV
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_ENV
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: DB_PASSWORD
      envFrom:
        # ALL values from ConfigMap at once
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets

Using as a volume (files)

yaml
spec:
  volumes:
    - name: config-vol
      configMap:
        name: app-config
  containers:
    - name: nginx
      volumeMounts:
        - name: config-vol
          mountPath: /etc/nginx/conf.d  # each key = one file
🔐
Secrets are not truly secure! Base64 is encoding, not encryption. Anyone with access to the cluster can read a Secret. For production use Sealed Secrets, Vault or External Secrets Operator.
bash
# Quick Secret creation from command line (no YAML needed)
kubectl create secret generic db-creds \
  --from-literal=DB_PASSWORD=my-password \
  --from-literal=DB_USER=admin \
  -n my-project

# Secret from file
kubectl create secret generic tls-cert \
  --from-file=tls.crt=./cert.pem \
  --from-file=tls.key=./key.pem

# Read Secret (decode base64)
kubectl get secret db-creds -o jsonpath='{.data.DB_PASSWORD}' | base64 -d

💾 Persistent Storage

Containers are ephemeral — when you delete a pod, the data is gone. For databases, uploaded files and other permanent data you need a PersistentVolume (PV) and a PersistentVolumeClaim (PVC).

1. Admin creates PV → 2. Dev creates PVC → 3. K8s binds PV↔PVC → 4. Pod mounts PVC
With dynamic provisioning (StorageClass) in Minikube, steps 1 and 3 happen automatically.
yaml
# PersistentVolumeClaim — "storage request"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
  namespace: my-project
spec:
  accessModes:
    - ReadWriteOnce            # RWO: one node can read+write
    # ReadOnlyMany (ROX): multiple nodes can read
    # ReadWriteMany (RWX): multiple nodes can read+write (e.g. NFS)
  storageClassName: standard   # Minikube default StorageClass
  resources:
    requests:
      storage: 5Gi

---
# Deployment with PVC
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: my-project
spec:
  replicas: 1                  # Databases typically 1 replica (without clustering)
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: root-password
            - name: MYSQL_DATABASE
              value: "myapp"
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql    # where MySQL stores data
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: mysql-data          # link to the PVC above
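
Once the PVC is applied, you can watch dynamic provisioning do its work; a sketch:

bash
# The class marked "(default)" provisions PVs on demand
kubectl get storageclass

# After applying the PVC: STATUS should flip from Pending to Bound
kubectl get pvc -n my-project

# The automatically created PV it bound to
kubectl get pv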
LAB 04 MySQL with persistent data
  1. Create a Secret: kubectl create secret generic mysql-secret --from-literal=root-password=Password123 -n my-project
  2. Save the PVC + Deployment YAML and apply: kubectl apply -f mysql.yaml
  3. Wait for Running: kubectl get pods -n my-project -w
  4. Connect to MySQL: kubectl exec -it deploy/mysql -n my-project -- mysql -uroot -pPassword123
  5. Create a table, insert data
  6. Delete the pod: kubectl delete pod -l app=mysql -n my-project
  7. After restart — data must still be there!

⚡ Resource Limits

Without limits, one pod can consume all resources and "starve" the others. Kubernetes uses requests (guaranteed minimum) and limits (maximum).

requests
The scheduler uses this value when choosing a node. The node must have at least this much free.
limits
Maximum consumption. CPU: throttling. RAM: OOMKill (container is restarted).

CPU units

1 = 1 CPU core = 1000m
0.5 = 500m = half a core
100m = 0.1 core = 100 millicores
# Recommendation: requests=100m limits=500m for a typical web app
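
You can provoke an OOMKill deliberately to watch the limit being enforced; a sketch using the polinux/stress image (the same one the official Kubernetes docs use), though any image that allocates memory works:

bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: oom-demo
spec:
  containers:
    - name: stress
      image: polinux/stress
      command: ["stress"]
      args: ["--vm", "1", "--vm-bytes", "200M", "--vm-hang", "1"]
      resources:
        limits:
          memory: "100Mi"   # tries to allocate 200M; the kernel OOMKills it
EOF

kubectl get pod oom-demo -w            # watch OOMKilled / CrashLoopBackOff appear
kubectl delete pod oom-demo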

LimitRange — namespace defaults

yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: my-project
spec:
  limits:
    - type: Container
      default:               # limits if container doesn't specify
        cpu: "500m"
        memory: "256Mi"
      defaultRequest:        # requests if container doesn't specify
        cpu: "100m"
        memory: "64Mi"
      max:                   # nobody in the namespace can request more
        cpu: "2"
        memory: "2Gi"

ResourceQuota — limits for the entire namespace

yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
  namespace: my-project
spec:
  hard:
    requests.cpu: "4"         # entire namespace can request max 4 CPU
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"                # max 20 pods in namespace
    persistentvolumeclaims: "5"
bash
# Current resource consumption
kubectl top pods -n my-project           # requires metrics-server
kubectl top nodes

# Enable metrics-server in Minikube
minikube addons enable metrics-server

⛵ Helm — package manager for K8s

Helm is like apt or pip for Kubernetes. Instead of manually writing ten YAML files, you install an entire application with one command. Packages are called charts.

💡
Why Helm? WordPress in K8s = ~8 YAML files, 300+ lines. A Helm chart packages it into helm install wordpress bitnami/wordpress and through values.yaml you configure only what you need.

Installing Helm

bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version

Essential Helm commands

bash
# Repositories (chart sources)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update                                  # refresh list

# Search
helm search repo wordpress
helm search hub nginx                             # search on Artifact Hub

# Install
helm install mywordpress bitnami/wordpress \
  --namespace my-project \
  --create-namespace \
  --set wordpressUsername=admin \
  --set wordpressPassword=SuperPassword \
  --set mariadb.auth.rootPassword=RootPassword

# Install with values file (recommended)
helm install mywordpress bitnami/wordpress \
  -f values.yaml -n my-project

# List installed releases
helm list -n my-project
helm list -A                                      # all namespaces

# Update (new chart version or changed values)
helm upgrade mywordpress bitnami/wordpress -f values.yaml -n my-project

# Rollback
helm rollback mywordpress 1 -n my-project         # number = revision

# Uninstall
helm uninstall mywordpress -n my-project

# Preview what Helm generates (without deploying)
helm template mywordpress bitnami/wordpress -f values.yaml

Creating your own Helm chart

bash
helm create my-app
# Creates structure:
# my-app/
#   Chart.yaml          - chart metadata
#   values.yaml         - default values (overridable on install)
#   templates/          - YAML templates with Go template syntax
#     deployment.yaml
#     service.yaml
#     ingress.yaml
#     _helpers.tpl      - helper functions
yaml
# values.yaml
replicaCount: 2
image:
  repository: nginx
  tag: "alpine"
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: false
resources:
  requests:
    cpu: 100m
    memory: 64Mi
  limits:
    cpu: 500m
    memory: 128Mi
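
To see how values.yaml feeds the templates, render the chart locally without touching the cluster; a sketch:

bash
# Render all manifests to stdout (nothing is deployed)
helm template my-app ./my-app | head -n 40

# Override a value and confirm it lands in the output
helm template my-app ./my-app --set replicaCount=5 | grep 'replicas:'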
LAB 05 Install WordPress via Helm
  1. helm repo add bitnami https://charts.bitnami.com/bitnami && helm repo update
  2. helm install wp bitnami/wordpress --set wordpressPassword=Password123 -n wordpress --create-namespace
  3. kubectl get pods -n wordpress -w (wait until Running)
  4. minikube service wp-wordpress -n wordpress --url
  5. Open the URL and log in (admin / Password123)
  6. Clean up: helm uninstall wp -n wordpress

🔒 Ingress & Certificates

Ingress is an HTTP/HTTPS router. Instead of a separate LoadBalancer for each application, one Ingress Controller accepts all HTTP traffic and routes it to the correct Service based on hostname/path.

Internet → Ingress Controller → routing by host/path
app.com/api → service/api-backend :80
app.com/ → service/frontend :80
admin.com → service/admin :80

Ingress Controller in Minikube

bash
minikube addons enable ingress
kubectl get pods -n ingress-nginx       # wait until Running

Ingress Resource

yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: my-project
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.local
      secretName: app-tls-secret       # Secret with the certificate
  rules:
    - host: app.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80

Self-signed certificate (local development)

bash
# Generate self-signed cert for app.local
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key \
  -out tls.crt \
  -subj "/CN=app.local/O=Dev" \
  -addext "subjectAltName=DNS:app.local"

# Create TLS Secret in K8s
kubectl create secret tls app-tls-secret \
  --cert=tls.crt \
  --key=tls.key \
  -n my-project

# Add to /etc/hosts (on the host machine)
echo "$(minikube ip) app.local" | sudo tee -a /etc/hosts

Let's Encrypt with cert-manager

On a production cluster with a real domain, cert-manager automatically obtains and renews Let's Encrypt certificates.

bash
# Install cert-manager via Helm
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
yaml
# ClusterIssuer — Let's Encrypt production
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@your-domain.com       # your email
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx               # ACME HTTP-01 challenge via Ingress

---
# Ingress with automatic cert-manager certificate
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"   # the magic annotation!
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - my-app.com
      secretName: my-app-tls           # cert-manager stores the cert here
  rules:
    - host: my-app.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
How it works: cert-manager watches Ingresses with the annotation. It requests a certificate from Let's Encrypt and answers the ACME HTTP-01 challenge so that Let's Encrypt can verify domain ownership. The issued certificate is stored in the Secret named by secretName and renewed automatically before it expires.
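
You can follow the issuance: for an annotated Ingress, cert-manager creates a Certificate resource named after the secretName. A sketch, assuming the Ingress lives in my-project:

bash
# READY flips to True once the ACME challenge succeeds
kubectl get certificate -n my-project

# Events show challenge progress and any failures
kubectl describe certificate my-app-tls -n my-project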

🌿 Real project: Drupal + MySQL

Full walkthrough of deploying the Drupal CMS with a MySQL database. Includes: PVC for the database, Secrets for passwords, Services for communication, and Ingress for HTTP access.

Architecture

Internet → Ingress :80/:443
   → Service/drupal (ClusterIP:80) → Pod/drupal (Apache + PHP)
        → Service/mysql (ClusterIP:3306) → Pod/mysql → PVC/mysql-data (10Gi)
yaml
# drupal-all.yaml — full deployment in one file (separated by ---)

# 1. Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: drupal

---
# 2. Secret for MySQL passwords
apiVersion: v1
kind: Secret
metadata:
  name: mysql-creds
  namespace: drupal
type: Opaque
stringData:                            # stringData = automatic base64
  MYSQL_ROOT_PASSWORD: "RootPassword123"
  MYSQL_DATABASE: "drupal"
  MYSQL_USER: "drupal"
  MYSQL_PASSWORD: "DrupalPassword456"

---
# 3. PVC for MySQL data
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
  namespace: drupal
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi

---
# 4. PVC for Drupal files (uploads, public files)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-files
  namespace: drupal
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi

---
# 5. MySQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: drupal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          envFrom:
            - secretRef:
                name: mysql-creds
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 1000m
              memory: 1Gi
          livenessProbe:
            exec:
              command: ["mysqladmin", "ping", "-h", "localhost"]
            initialDelaySeconds: 30
            periodSeconds: 10
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-data

---
# 6. MySQL Service (ClusterIP — internal only)
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: drupal
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306

---
# 7. Drupal Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal
  namespace: drupal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drupal
  template:
    metadata:
      labels:
        app: drupal
    spec:
      initContainers:
        # Wait until MySQL is ready
        - name: wait-for-mysql
          image: busybox
          command: ['sh', '-c', 'until nc -z mysql 3306; do echo waiting; sleep 2; done']
      containers:
        - name: drupal
          image: drupal:10-apache
          ports:
            - containerPort: 80
          env:
            - name: DRUPAL_DATABASE_HOST
              value: "mysql"           # MySQL Service DNS name
            - name: DRUPAL_DATABASE_NAME
              value: "drupal"
            - name: DRUPAL_DATABASE_USER
              value: "drupal"
            - name: DRUPAL_DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-creds
                  key: MYSQL_PASSWORD
          volumeMounts:
            - name: drupal-files
              mountPath: /var/www/html/sites/default/files
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 512Mi
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
      volumes:
        - name: drupal-files
          persistentVolumeClaim:
            claimName: drupal-files

---
# 8. Drupal Service (NodePort for Minikube access; in production use ClusterIP + Ingress)
apiVersion: v1
kind: Service
metadata:
  name: drupal
  namespace: drupal
spec:
  type: NodePort
  selector:
    app: drupal
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
bash
# Deploy
kubectl apply -f drupal-all.yaml

# Watch status
kubectl get pods -n drupal -w

# Get URL (Minikube)
minikube service drupal -n drupal --url

# Drupal pod logs
kubectl logs -f deploy/drupal -n drupal
Drupal installation: After opening the URL in your browser, go through the installer. Set the database host to mysql (the Service name). Drupal will connect through the ClusterIP Service automatically.
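
If the installer cannot reach the database, check the pieces from inside the cluster first; a sketch (the root password comes from the Secret above):

bash
# Does the Service name resolve from the Drupal pod?
kubectl exec deploy/drupal -n drupal -- getent hosts mysql

# Is MySQL itself accepting connections?
kubectl exec deploy/mysql -n drupal -- mysqladmin ping -uroot -pRootPassword123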

🎵 Real project: Symfony + PostgreSQL

A Symfony application requires specific steps: database migrations on deployment, proper environment variables, and optionally a queue worker (Messenger). We also demonstrate a Job for one-off commands.

yaml
# symfony-all.yaml

# 1. Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: symfony-app

---
# 2. Secrets
apiVersion: v1
kind: Secret
metadata:
  name: symfony-secrets
  namespace: symfony-app
type: Opaque
stringData:
  DATABASE_URL: "postgresql://symfony:password@postgres:5432/symfony_db?serverVersion=15"
  APP_SECRET: "generate-a-random-32-char-string"
  MAILER_DSN: "smtp://localhost:25"

---
# 3. ConfigMap (non-sensitive settings)
apiVersion: v1
kind: ConfigMap
metadata:
  name: symfony-config
  namespace: symfony-app
data:
  APP_ENV: "prod"
  APP_DEBUG: "0"
  TRUSTED_PROXIES: "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"

---
# 4. PostgreSQL PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: symfony-app
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi

---
# 5. PostgreSQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: symfony-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15-alpine
          env:
            - name: POSTGRES_DB
              value: "symfony_db"
            - name: POSTGRES_USER
              value: "symfony"
            - name: POSTGRES_PASSWORD
              value: "password"
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
              subPath: postgres           # mount a subdir; the volume root's lost+found would break initdb
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          readinessProbe:
            exec:
              command: ["pg_isready", "-U", "symfony", "-d", "symfony_db"]
            initialDelaySeconds: 10
            periodSeconds: 5
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-data

---
# 6. PostgreSQL Service
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: symfony-app
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432

---
# 7. Migrations Job (runs once per deployment)
apiVersion: batch/v1
kind: Job
metadata:
  name: symfony-migrations-v1        # change version on each deployment
  namespace: symfony-app
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: migrations
          image: my-symfony-app:latest
          command: ["php", "bin/console", "doctrine:migrations:migrate", "--no-interaction"]
          envFrom:
            - configMapRef:
                name: symfony-config
            - secretRef:
                name: symfony-secrets

---
# 8. Symfony App Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: symfony-app
  namespace: symfony-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: symfony-app
  template:
    metadata:
      labels:
        app: symfony-app
    spec:
      containers:
        - name: app
          image: my-symfony-app:latest   # your image from registry
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: symfony-config
            - secretRef:
                name: symfony-secrets
          resources:
            requests:
              cpu: 200m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 512Mi
          readinessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 5

---
# 9. Symfony Queue Worker (Messenger)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: symfony-worker
  namespace: symfony-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: symfony-worker
  template:
    metadata:
      labels:
        app: symfony-worker
    spec:
      containers:
        - name: worker
          image: my-symfony-app:latest
          command: ["php", "bin/console", "messenger:consume", "async", "--time-limit=3600"]
          envFrom:
            - configMapRef:
                name: symfony-config
            - secretRef:
                name: symfony-secrets
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi

---
# 10. Symfony Service + Ingress
apiVersion: v1
kind: Service
metadata:
  name: symfony-app
  namespace: symfony-app
spec:
  selector:
    app: symfony-app
  ports:
    - port: 80
      targetPort: 80

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: symfony-ingress
  namespace: symfony-app
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - my-symfony-app.com
      secretName: symfony-tls
  rules:
    - host: my-symfony-app.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: symfony-app
                port:
                  number: 80
💡
Dockerfile for Symfony: Use a multi-stage build — run composer install in the build stage, copy only the result into the runtime image. The image will be 2–3× smaller.
docker build -t my-symfony-app:latest . && minikube image load my-symfony-app:latest
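
A minimal multi-stage sketch; the base images and paths are assumptions, adapt them to your app:

bash
cat > Dockerfile <<'EOF'
# --- build stage: Composer and dev tooling live only here ---
FROM composer:2 AS build
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader --no-scripts
COPY . .
RUN composer dump-autoload --optimize

# --- runtime stage: only PHP + the built application ---
FROM php:8.2-apache
WORKDIR /var/www/html
COPY --from=build /app ./
RUN chown -R www-data:www-data /var/www/html
EOF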

Deployment steps

bash
# 1. Deploy infrastructure
kubectl apply -f symfony-all.yaml

# 2. Wait for PostgreSQL
kubectl wait --for=condition=ready pod -l app=postgres -n symfony-app --timeout=60s

# 3. The migrations Job was created by the apply in step 1. For the next
#    release, bump the Job name (e.g. symfony-migrations-v2) and apply again

# 4. Follow migration logs
kubectl logs job/symfony-migrations-v1 -n symfony-app -f

# 5. Check everything
kubectl get all -n symfony-app

# 6. Port-forward for local testing
kubectl port-forward svc/symfony-app 8080:80 -n symfony-app
# Open: http://localhost:8080

🎉 You made it!

You've been through the entire guide. Here's a quick reference for everyday use:

Cluster status
kubectl get all -A
kubectl top nodes
kubectl get events -n <ns>
Debugging
kubectl describe pod <p>
kubectl logs <pod> -f
kubectl exec -it <p> -- sh
Deployment
kubectl apply -f .
kubectl rollout restart deploy/<n>
kubectl rollout undo deploy/<n>
Minikube
minikube start/stop/delete
minikube dashboard
minikube image load <img>