Kubernetes from the ground up
If you know how to work with Linux and have some networking background, this guide is for you. It explains why things work the way they do — not just the commands.
🚀 Introduction to Kubernetes
What is Kubernetes?
Kubernetes (abbreviated K8s) is an open-source platform for container orchestration. Simply put: you tell it "I want 3 copies of this application running" and K8s takes care of making sure they always run — even when a server crashes, or when you need to scale.
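A taste of that declarative model (a sketch — the deployment name web is arbitrary; the full workflow comes later in this guide):

```bash
# Ask for 3 replicas — Kubernetes keeps 3 running from now on
kubectl create deployment web --image=nginx --replicas=3
# Delete one pod: a replacement is started within seconds
kubectl delete pod -l app=web
kubectl get pods
```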
Why Kubernetes?
Because it gives you, out of the box: self-healing (a crashed container is restarted automatically), horizontal scaling, zero-downtime rolling updates, and a declarative model — you describe the desired state in YAML and the cluster continuously reconciles reality toward it.
Key concepts (glossary)
| Term | What it is | Linux analogy |
|---|---|---|
| Cluster | Set of servers (nodes) managed by K8s | Entire server farm |
| Node | One physical or VM server | Single server |
| Pod | Smallest deployable unit — 1+ containers | Process (PID) |
| Deployment | Defines how many pods we want | systemd service |
| Service | Stable IP/DNS for a group of pods | load balancer / iptables |
| Namespace | Logical isolation of resources in a cluster | Linux namespaces |
| Ingress | HTTP(S) router from the internet into the cluster | nginx reverse proxy |
| PVC | Request for persistent storage | LVM logical volume |
🏗️ Kubernetes Architecture
A Kubernetes cluster consists of two types of servers: the Control Plane (the brain)
and Worker Nodes (the muscles). You always communicate only with the Control Plane via kubectl.
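You can observe this communication directly — kubectl is just an HTTP client talking to the API server:

```bash
kubectl cluster-info     # shows where the API server listens
kubectl get pods -v=8    # -v=8 prints the underlying REST calls (GET /api/v1/...)
```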
What does each component do?
- kube-apiserver — the front door of the Control Plane. The kubectl command talks to it via a REST API. It handles authentication, authorization (RBAC) and validates YAML manifests before saving them to etcd.
- etcd — distributed key-value store that holds the entire cluster state.
- kube-scheduler — decides which node each new pod should run on.
- kube-controller-manager — runs the control loops that drive actual state toward desired state (e.g. keeping replica counts).
- kubelet — the agent on every node; starts and monitors containers.
- kube-proxy — programs the node's iptables/IPVS rules so Service IPs work.

⚙️ Setting Up the Environment
1. Docker (container runtime)
# Ubuntu / Debian
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
# Add yourself to the docker group (run without sudo)
sudo usermod -aG docker $USER
newgrp docker
# Verify
docker run hello-world

2. kubectl
# Download the latest stable release
curl -LO "https://dl.k8s.io/release/$(curl -sL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# Install
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Verify
kubectl version --client
# Optional: bash autocomplete
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
source ~/.bashrc

3. Minikube
# Download and install
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# Start the cluster (with Docker driver)
minikube start --driver=docker --cpus=2 --memory=4096
# Check status
minikube status
kubectl get nodes

Expected output:
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   1m    v1.32.0

Useful Minikube commands:
- minikube stop — stop the cluster
- minikube delete — delete the cluster (clean slate)
- minikube dashboard — open the web UI
- minikube addons enable ingress — enable nginx ingress

🐳 Docker & Docker Compose
Container vs. virtual machine
A VM virtualizes hardware and boots a full guest OS; a container is just an isolated Linux process that shares the host kernel (namespaces + cgroups). That's why containers start in milliseconds and weigh megabytes instead of gigabytes.
Dockerfile — recipe for an image
# Base PHP-FPM image
FROM php:8.2-fpm-alpine
# Install system dependencies
RUN apk add --no-cache git curl
# The base image ships without Composer — copy the binary from the official image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
# Set working directory
WORKDIR /var/www/html
# Copy composer.json separately (Docker layer cache!)
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader
# Copy the rest of the code
COPY . .
# Use non-root user (security)
RUN chown -R www-data:www-data /var/www/html
USER www-data
EXPOSE 9000
CMD ["php-fpm"]composer.json, the composer install layer is not rebuilt — saves time. That's why you always COPY dependencies before COPY code.Essential Docker commands
# Build image
docker build -t my-app:1.0 .
docker build -t my-app:latest -f Dockerfile.prod .
# Run container
docker run -d -p 8080:80 --name web nginx:alpine
docker run -it --rm alpine sh # interactive, auto-removes after exit
# Management
docker ps # running containers
docker ps -a # all including stopped
docker logs -f web # streaming logs
docker exec -it web sh # shell into running container
docker stop web && docker rm web
# Images
docker images
docker pull nginx:alpine
docker push registry.example.com/my-app:1.0
docker rmi my-app:1.0
# Cleanup
docker system prune -a      # WARNING: removes all unused objects

Docker Compose — local development environment
Compose is a tool for running multiple containers at once. You don't use it directly in Kubernetes, but it's important for local development and understanding multi-container architecture.
# docker-compose.yml
version: '3.9'
services:
app:
build: .
ports:
- "8080:80"
environment:
DATABASE_URL: mysql://user:pass@db:3306/myapp
depends_on:
db:
condition: service_healthy
volumes:
- .:/var/www/html # live reload during development
db:
image: mysql:8.0
environment:
MYSQL_ROOT_PASSWORD: rootpass
MYSQL_DATABASE: myapp
MYSQL_USER: user
MYSQL_PASSWORD: pass
volumes:
- db_data:/var/lib/mysql # persistent data
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
interval: 5s
timeout: 3s
retries: 5
volumes:
db_data:

docker compose up -d        # start in background
docker compose logs -f app # follow logs
docker compose down # stop and remove containers (data stays in volume)
docker compose down -v      # stop and remove volumes too

💻 kubectl — controlling the cluster
kubectl is the CLI tool that communicates with kube-apiserver via HTTPS. Everything you do in Kubernetes goes through it.
Basic command structure
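Nearly every kubectl invocation follows the same shape:

```bash
kubectl <command> <resource-type> [<resource-name>] [flags]
# e.g.:
kubectl get pod my-pod -n prod -o yaml
```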
The most important commands
# ── GET: what's running? ───────────────────────────────
kubectl get pods # pods in default namespace
kubectl get pods -n kube-system # pods in a specific namespace
kubectl get pods -A # pods in ALL namespaces
kubectl get pods -o wide # + IP, node
kubectl get pods -o yaml # full YAML output
kubectl get all -n my-namespace # pods + services + deployments
kubectl get nodes # list nodes
kubectl get namespaces # list namespaces
kubectl get services # services
kubectl get deployments # deployments
kubectl get pvc # persistent volume claims
# ── DESCRIBE: detailed info ────────────────────────────
kubectl describe pod <pod-name>       # events, status, container states
kubectl describe node minikube
kubectl describe deployment nginx
# ── APPLY / CREATE: deploy ─────────────────────────────
kubectl apply -f deployment.yaml # create/update from YAML (RECOMMENDED)
kubectl apply -f ./k8s/ # apply entire directory
kubectl create namespace prod # quick creation without YAML
# ── DELETE: removal ───────────────────────────────────
kubectl delete -f deployment.yaml # delete what the YAML defines
kubectl delete pod <name> # delete a specific pod (deployment will recreate it!)
kubectl delete deployment nginx
kubectl delete namespace test # deletes EVERYTHING in namespace
# ── LOGS: viewing logs ────────────────────────────────
kubectl logs <pod> # logs
kubectl logs <pod> -f # streaming
kubectl logs <pod> -c <container> # specific container in pod
kubectl logs <pod> --previous # logs after crash (previous run)
# ── EXEC: commands inside container ──────────────────
kubectl exec -it <pod> -- sh # shell into container (alpine)
kubectl exec -it <pod> -- bash # bash (ubuntu/debian)
kubectl exec <pod> -- ls /var/www # one-off command
# ── PORT-FORWARD: local access ────────────────────────
kubectl port-forward pod/<name> 8080:80 # localhost:8080 → pod:80
kubectl port-forward svc/nginx 8080:80  # via service

- Start minikube: minikube start
- Check nodes: kubectl get nodes
- View system pods: kubectl get pods -n kube-system
- Run a test pod: kubectl run test --image=nginx:alpine
- Check status: kubectl get pods
- View details: kubectl describe pod test
- Open shell: kubectl exec -it test -- sh
- Clean up: kubectl delete pod test
📦 Pods
A Pod is the smallest deployable unit in Kubernetes. It contains one or more containers that share a network namespace (same IP, can talk via localhost) and volumes.
Pod YAML definition
apiVersion: v1
kind: Pod
metadata:
name: my-pod
namespace: default
labels:
app: web # labels — key for Selectors and Services
version: "1.0"
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
resources:
requests:
cpu: "100m" # 0.1 CPU cores (guaranteed minimum)
memory: "64Mi"
limits:
cpu: "200m" # maximum (if exceeded: CPU throttled, RAM → OOMKill)
memory: "128Mi"
env:
- name: APP_ENV
value: "production"
livenessProbe: # K8s restarts container if this fails
httpGet:
path: /healthz
port: 80
initialDelaySeconds: 10
periodSeconds: 15
readinessProbe: # K8s only sends traffic if this passes
httpGet:
path: /ready
port: 80
initialDelaySeconds: 5
periodSeconds: 5

Pod lifecycle
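A pod moves through phases: Pending (scheduled, images pulling) → Running → Succeeded or Failed. Individual containers additionally report states such as Waiting (e.g. CrashLoopBackOff), Running and Terminated. You can watch the transitions yourself:

```bash
kubectl apply -f pod.yaml
kubectl get pod my-pod -w                               # watch phase changes live
kubectl get pod my-pod -o jsonpath='{.status.phase}'    # just the phase
```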
🔄 Deployments & ReplicaSets
A Deployment is what you actually use in practice. It tells K8s: "I want 3 copies of this application, always." A ReplicaSet (created automatically) maintains the exact number of pods.
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app
namespace: my-project
spec:
replicas: 3 # we want 3 pods
selector:
matchLabels:
app: web-app # manages pods with this label
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1 # max +1 pod during update
maxUnavailable: 0 # always 3 pods available (zero downtime)
template: # pod template
metadata:
labels:
app: web-app # must match selector.matchLabels
spec:
containers:
- name: app
image: nginx:1.25 # changing tag triggers rolling update
ports:
- containerPort: 80
resources:
requests:
cpu: "100m"
memory: "128Mi"
limits:
cpu: "500m"
memory: "256Mi"Managing a Deployment
# Deploy
kubectl apply -f deployment.yaml
# Scale
kubectl scale deployment web-app --replicas=5
# Update image (triggers rolling update)
kubectl set image deployment/web-app app=nginx:1.27
# Watch rolling update progress
kubectl rollout status deployment/web-app
# History
kubectl rollout history deployment/web-app
# Rollback to previous version
kubectl rollout undo deployment/web-app
# Rollback to specific revision
kubectl rollout undo deployment/web-app --to-revision=2
# Restart pods (e.g. after ConfigMap change)
kubectl rollout restart deployment/web-app

- Save the deployment YAML above as deployment.yaml
- Create namespace: kubectl create namespace my-project
- Deploy: kubectl apply -f deployment.yaml
- Watch pods: kubectl get pods -n my-project -w
- In a second terminal, update the image: kubectl set image deployment/web-app app=nginx:alpine -n my-project
- Observe the rolling update — pods are gradually replaced
- Try rollback: kubectl rollout undo deployment/web-app -n my-project
🌐 Services & Networking
Pods have temporary IP addresses. When a pod restarts, it gets a new IP. A Service provides a stable DNS name and IP, behind which any number of pods can sit. It works as a load balancer.
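You can see the problem this solves (assuming the web-app deployment from the previous chapter):

```bash
kubectl get pods -o wide -n my-project   # note each pod's IP
kubectl delete pod <one-of-them> -n my-project
kubectl get pods -o wide -n my-project   # the replacement pod has a NEW IP
```

The Service below gives clients one name that survives all of this.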
When another pod calls http://web-app, CoreDNS resolves the name to the Service's ClusterIP, and kube-proxy forwards the request to one of the running pods.

Types of Services
- ClusterIP — internal-only virtual IP; the default type.
- NodePort — additionally opens a port on every node, reachable at NODE_IP:PORT. Used in Minikube for testing.
- LoadBalancer — asks the cloud provider for an external load balancer; in Minikube it requires minikube tunnel.

# ClusterIP — internal communication
apiVersion: v1
kind: Service
metadata:
name: web-app
namespace: my-project
spec:
type: ClusterIP # default, can be omitted
selector:
app: web-app # selects pods with this label
ports:
- port: 80 # SERVICE port
targetPort: 80 # CONTAINER port
---
# NodePort — external access (Minikube)
apiVersion: v1
kind: Service
metadata:
name: web-app-ext
spec:
type: NodePort
selector:
app: web-app
ports:
- port: 80
targetPort: 80
nodePort: 30080     # optional; if omitted, K8s assigns one from the 30000–32767 range

DNS in Kubernetes
CoreDNS automatically creates DNS records for every Service. From any pod in the cluster you can call:
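The same Service is reachable under progressively longer names (web-app and my-project taken from the earlier examples):

```bash
curl http://web-app                                  # same namespace
curl http://web-app.my-project                       # from another namespace
curl http://web-app.my-project.svc.cluster.local     # fully qualified
```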
- Deploy the deployment from Lab 02
- Create a NodePort service: kubectl expose deployment web-app --type=NodePort --port=80 -n my-project
- Get the URL: minikube service web-app -n my-project --url
- Open in browser or: curl $(minikube service web-app -n my-project --url)
- Test DNS from inside: kubectl run -it --rm dns-test --image=alpine --restart=Never -n my-project -- sh, then inside: wget -O- http://web-app
🔧 ConfigMaps & Secrets
Configuration belongs outside the Docker image. A ConfigMap stores non-sensitive data (URLs, feature flags); a Secret stores sensitive data (passwords, tokens, certificates). Note that Secret values are only base64-encoded, not encrypted — base64 is trivially reversible.
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: my-project
data:
APP_ENV: "production"
APP_URL: "https://my-app.com"
nginx.conf: | # can contain entire files
server {
listen 80;
root /var/www/html/public;
}
---
# Secret (values must be base64: echo -n 'password' | base64)
apiVersion: v1
kind: Secret
metadata:
name: app-secrets
namespace: my-project
type: Opaque
data:
DB_PASSWORD: cGFzc3dvcmQxMjM= # "password123" in base64
API_KEY: c2VjcmV0a2V5          # "secretkey" in base64

Using in a Pod — as environment variables
spec:
containers:
- name: app
image: my-app:latest
env:
# Individual values from ConfigMap/Secret
- name: APP_ENV
valueFrom:
configMapKeyRef:
name: app-config
key: APP_ENV
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: app-secrets
key: DB_PASSWORD
envFrom:
# ALL values from ConfigMap at once
- configMapRef:
name: app-config
- secretRef:
name: app-secrets

Using as a volume (files)
spec:
volumes:
- name: config-vol
configMap:
name: app-config
containers:
- name: nginx
volumeMounts:
- name: config-vol
mountPath: /etc/nginx/conf.d   # each key = one file

# Quick Secret creation from command line (no YAML needed)
kubectl create secret generic db-creds \
--from-literal=DB_PASSWORD=my-password \
--from-literal=DB_USER=admin \
-n my-project
# Secret from file
kubectl create secret generic tls-cert \
--from-file=tls.crt=./cert.pem \
--from-file=tls.key=./key.pem
# Read Secret (decode base64)
kubectl get secret db-creds -n my-project -o jsonpath='{.data.DB_PASSWORD}' | base64 -d

💾 Persistent Storage
Containers are ephemeral — when you delete a pod, the data is gone. For databases, uploaded files and other permanent data you need a PersistentVolume (PV) and a PersistentVolumeClaim (PVC).
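In Minikube a PV is provisioned automatically the moment a PVC appears, via the default StorageClass. You can watch all three objects:

```bash
kubectl get storageclass        # 'standard' — Minikube's dynamic provisioner
kubectl get pvc -n my-project   # claim status: Pending → Bound
kubectl get pv                  # the volume created to satisfy the claim
```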
# PersistentVolumeClaim — "storage request"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-data
namespace: my-project
spec:
accessModes:
- ReadWriteOnce # RWO: one node can read+write
# ReadOnlyMany (ROX): multiple nodes can read
# ReadWriteMany (RWX): multiple nodes can read+write (e.g. NFS)
storageClassName: standard # Minikube default StorageClass
resources:
requests:
storage: 5Gi
---
# Deployment with PVC
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
namespace: my-project
spec:
replicas: 1 # Databases typically 1 replica (without clustering)
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:8.0
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: root-password
- name: MYSQL_DATABASE
value: "myapp"
ports:
- containerPort: 3306
volumeMounts:
- name: mysql-storage
mountPath: /var/lib/mysql # where MySQL stores data
volumes:
- name: mysql-storage
persistentVolumeClaim:
claimName: mysql-data  # link to the PVC above

- Create a Secret: kubectl create secret generic mysql-secret --from-literal=root-password=Password123 -n my-project
- Save the PVC + Deployment YAML and apply: kubectl apply -f mysql.yaml
- Wait for Running: kubectl get pods -n my-project -w
- Connect to MySQL: kubectl exec -it deploy/mysql -n my-project -- mysql -uroot -pPassword123
- Create a table, insert data
- Delete the pod: kubectl delete pod -l app=mysql -n my-project
- After restart — the data must still be there!
⚡ Resource Limits
Without limits, one pod can consume all resources and "starve" the others. Kubernetes uses requests (guaranteed minimum) and limits (maximum).
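A quick way to confirm an OOMKill after the fact (the pod name is whatever yours is called):

```bash
kubectl describe pod <pod> | grep -A5 'Last State'
# Reason: OOMKilled → the container exceeded its memory limit
```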
CPU units
CPU is measured in millicores: 1000m = 1 full core, so 100m = 0.1 core. Memory uses binary units (Mi, Gi). Exceeding the CPU limit throttles the container; exceeding the memory limit gets it OOMKilled.
LimitRange — namespace defaults
apiVersion: v1
kind: LimitRange
metadata:
name: default-limits
namespace: my-project
spec:
limits:
- type: Container
default: # limits if container doesn't specify
cpu: "500m"
memory: "256Mi"
defaultRequest: # requests if container doesn't specify
cpu: "100m"
memory: "64Mi"
max: # nobody in the namespace can request more
cpu: "2"
memory: "2Gi"ResourceQuota — limits for the entire namespace
apiVersion: v1
kind: ResourceQuota
metadata:
name: namespace-quota
namespace: my-project
spec:
hard:
requests.cpu: "4" # entire namespace can request max 4 CPU
requests.memory: 8Gi
limits.cpu: "8"
limits.memory: 16Gi
pods: "20" # max 20 pods in namespace
persistentvolumeclaims: "5"# Current resource consumption
kubectl top pods -n my-project # requires metrics-server
kubectl top nodes
# Enable metrics-server in Minikube
minikube addons enable metrics-server

⛵ Helm — package manager for K8s
Helm is like apt or pip for Kubernetes. Instead of manually writing ten YAML files, you install an entire application with one command. Packages are called charts.
For example, helm install wordpress bitnami/wordpress deploys a complete WordPress stack, and through values.yaml you configure only what you need.

Installing Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version

Essential Helm commands
# Repositories (chart sources)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update # refresh list
# Search
helm search repo wordpress
helm search hub nginx # search on Artifact Hub
# Install
helm install mywordpress bitnami/wordpress \
--namespace my-project \
--create-namespace \
--set wordpressUsername=admin \
--set wordpressPassword=SuperPassword \
--set mariadb.auth.rootPassword=RootPassword
# Install with values file (recommended)
helm install mywordpress bitnami/wordpress \
-f values.yaml -n my-project
# List installed releases
helm list -n my-project
helm list -A # all namespaces
# Update (new chart version or changed values)
helm upgrade mywordpress bitnami/wordpress -f values.yaml -n my-project
# Rollback
helm rollback mywordpress 1 -n my-project # number = revision
# Uninstall
helm uninstall mywordpress -n my-project
# Preview what Helm generates (without deploying)
helm template mywordpress bitnami/wordpress -f values.yaml

Creating your own Helm chart
helm create my-app
# Creates structure:
# my-app/
# Chart.yaml - chart metadata
# values.yaml - default values (overridable on install)
# templates/ - YAML templates with Go template syntax
# deployment.yaml
# service.yaml
# ingress.yaml
#   _helpers.tpl   - helper functions
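Templates pull values from values.yaml using Go template syntax. Roughly what the scaffolded templates/deployment.yaml contains (heavily trimmed — the real scaffold is longer):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-app.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
```

# values.yaml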
replicaCount: 2
image:
repository: nginx
tag: "alpine"
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 80
ingress:
enabled: false
resources:
requests:
cpu: 100m
memory: 64Mi
limits:
cpu: 500m
memory: 128Mi

- helm repo add bitnami https://charts.bitnami.com/bitnami && helm repo update
- helm install wp bitnami/wordpress --set wordpressPassword=Password123 -n wordpress --create-namespace
- kubectl get pods -n wordpress -w (wait until Running)
- minikube service wp-wordpress -n wordpress --url
- Open the URL and log in (admin / Password123)
- Clean up: helm uninstall wp -n wordpress
🔒 Ingress & Certificates
Ingress is an HTTP/HTTPS router. Instead of a separate LoadBalancer for each application, one Ingress Controller accepts all HTTP traffic and routes it to the correct Service based on hostname/path.
Ingress Controller in Minikube
minikube addons enable ingress
kubectl get pods -n ingress-nginx   # wait until Running

Ingress Resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
namespace: my-project
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
tls:
- hosts:
- app.local
secretName: app-tls-secret # Secret with the certificate
rules:
- host: app.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web-app
port:
number: 80

Self-signed certificate (local development)
# Generate self-signed cert for app.local
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout tls.key \
-out tls.crt \
-subj "/CN=app.local/O=Dev" \
-addext "subjectAltName=DNS:app.local"
# Create TLS Secret in K8s
kubectl create secret tls app-tls-secret \
--cert=tls.crt \
--key=tls.key \
-n my-project
# Add to /etc/hosts (on the host machine)
echo "$(minikube ip) app.local" | sudo tee -a /etc/hostsLet's Encrypt with cert-manager
On a production cluster with a real domain, cert-manager automatically obtains and renews Let's Encrypt certificates.
# Install cert-manager via Helm
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true

# ClusterIssuer — Let's Encrypt production
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: admin@your-domain.com # your email
privateKeySecretRef:
name: letsencrypt-prod-key
solvers:
- http01:
ingress:
class: nginx # ACME HTTP-01 challenge via Ingress
---
# Ingress with automatic cert-manager certificate
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod" # the magic annotation!
spec:
ingressClassName: nginx
tls:
- hosts:
- my-app.com
secretName: my-app-tls # cert-manager stores the cert here
rules:
- host: my-app.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web-app
port:
number: 80

🌿 Real project: Drupal + MySQL
Full walkthrough of deploying the Drupal CMS with a MySQL database. Includes: PVC for the database, Secrets for passwords, Services for communication, and Ingress for HTTP access.
Architecture
Browser → drupal NodePort Service (:30080) → Drupal pod (drupal:10-apache) → mysql ClusterIP Service → MySQL pod → PVC mysql-data. Uploaded files live on a second PVC (drupal-files) mounted into the Drupal pod.
# drupal-all.yaml — full deployment in one file (separated by ---)
# 1. Namespace
apiVersion: v1
kind: Namespace
metadata:
name: drupal
---
# 2. Secret for MySQL passwords
apiVersion: v1
kind: Secret
metadata:
name: mysql-creds
namespace: drupal
type: Opaque
stringData: # stringData = automatic base64
MYSQL_ROOT_PASSWORD: "RootPassword123"
MYSQL_DATABASE: "drupal"
MYSQL_USER: "drupal"
MYSQL_PASSWORD: "DrupalPassword456"
---
# 3. PVC for MySQL data
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-data
namespace: drupal
spec:
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 10Gi
---
# 4. PVC for Drupal files (uploads, public files)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: drupal-files
namespace: drupal
spec:
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 5Gi
---
# 5. MySQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
namespace: drupal
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:8.0
envFrom:
- secretRef:
name: mysql-creds
ports:
- containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
resources:
requests:
cpu: 250m
memory: 512Mi
limits:
cpu: 1000m
memory: 1Gi
livenessProbe:
exec:
command: ["mysqladmin", "ping", "-h", "localhost"]
initialDelaySeconds: 30
periodSeconds: 10
volumes:
- name: data
persistentVolumeClaim:
claimName: mysql-data
---
# 6. MySQL Service (ClusterIP — internal only)
apiVersion: v1
kind: Service
metadata:
name: mysql
namespace: drupal
spec:
selector:
app: mysql
ports:
- port: 3306
targetPort: 3306
---
# 7. Drupal Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: drupal
namespace: drupal
spec:
replicas: 1
selector:
matchLabels:
app: drupal
template:
metadata:
labels:
app: drupal
spec:
initContainers:
# Wait until MySQL is ready
- name: wait-for-mysql
image: busybox
command: ['sh', '-c', 'until nc -z mysql 3306; do echo waiting; sleep 2; done']
containers:
- name: drupal
image: drupal:10-apache
ports:
- containerPort: 80
env:
- name: DRUPAL_DATABASE_HOST
value: "mysql" # MySQL Service DNS name
- name: DRUPAL_DATABASE_NAME
value: "drupal"
- name: DRUPAL_DATABASE_USER
value: "drupal"
- name: DRUPAL_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-creds
key: MYSQL_PASSWORD
volumeMounts:
- name: drupal-files
mountPath: /var/www/html/sites/default/files
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 1000m
memory: 512Mi
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 30
periodSeconds: 10
volumes:
- name: drupal-files
persistentVolumeClaim:
claimName: drupal-files
---
# 8. Drupal Service
apiVersion: v1
kind: Service
metadata:
name: drupal
namespace: drupal
spec:
type: NodePort
selector:
app: drupal
ports:
- port: 80
targetPort: 80
nodePort: 30080

# Deploy
kubectl apply -f drupal-all.yaml
# Watch status
kubectl get pods -n drupal -w
# Get URL (Minikube)
minikube service drupal -n drupal --url
# Drupal pod logs
kubectl logs -f deploy/drupal -n drupal

In the Drupal installation wizard, enter mysql (the Service name) as the database host. Drupal will connect through the ClusterIP Service automatically.

🎵 Real project: Symfony + PostgreSQL
A Symfony application requires specific steps: database migrations on deployment, proper environment variables, and optionally a queue worker (Messenger). We also demonstrate a Job for one-off commands.
# symfony-all.yaml
# 1. Namespace
apiVersion: v1
kind: Namespace
metadata:
name: symfony-app
---
# 2. Secrets
apiVersion: v1
kind: Secret
metadata:
name: symfony-secrets
namespace: symfony-app
type: Opaque
stringData:
DATABASE_URL: "postgresql://symfony:password@postgres:5432/symfony_db?serverVersion=15"
APP_SECRET: "generate-a-random-32-char-string"
MAILER_DSN: "smtp://localhost:25"
---
# 3. ConfigMap (non-sensitive settings)
apiVersion: v1
kind: ConfigMap
metadata:
name: symfony-config
namespace: symfony-app
data:
APP_ENV: "prod"
APP_DEBUG: "0"
TRUSTED_PROXIES: "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
---
# 4. PostgreSQL PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-data
namespace: symfony-app
spec:
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 5Gi
---
# 5. PostgreSQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
namespace: symfony-app
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:15-alpine
env:
- name: POSTGRES_DB
value: "symfony_db"
- name: POSTGRES_USER
value: "symfony"
- name: POSTGRES_PASSWORD
value: "password"
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
subPath: postgres      # without subPath, initdb fails on the non-empty volume root (lost+found)
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 500m
memory: 512Mi
readinessProbe:
exec:
command: ["pg_isready", "-U", "symfony", "-d", "symfony_db"]
initialDelaySeconds: 10
periodSeconds: 5
volumes:
- name: data
persistentVolumeClaim:
claimName: postgres-data
---
# 6. PostgreSQL Service
apiVersion: v1
kind: Service
metadata:
name: postgres
namespace: symfony-app
spec:
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
---
# 7. Migrations Job (runs once per deployment)
apiVersion: batch/v1
kind: Job
metadata:
name: symfony-migrations-v1 # change version on each deployment
namespace: symfony-app
spec:
template:
spec:
restartPolicy: OnFailure
containers:
- name: migrations
image: my-symfony-app:latest
command: ["php", "bin/console", "doctrine:migrations:migrate", "--no-interaction"]
envFrom:
- configMapRef:
name: symfony-config
- secretRef:
name: symfony-secrets
---
# 8. Symfony App Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: symfony-app
namespace: symfony-app
spec:
replicas: 2
selector:
matchLabels:
app: symfony-app
template:
metadata:
labels:
app: symfony-app
spec:
containers:
- name: app
image: my-symfony-app:latest # your image from registry
ports:
- containerPort: 80
envFrom:
- configMapRef:
name: symfony-config
- secretRef:
name: symfony-secrets
resources:
requests:
cpu: 200m
memory: 128Mi
limits:
cpu: 1000m
memory: 512Mi
readinessProbe:
httpGet:
path: /health
port: 80
initialDelaySeconds: 10
periodSeconds: 5
---
# 9. Symfony Queue Worker (Messenger)
apiVersion: apps/v1
kind: Deployment
metadata:
name: symfony-worker
namespace: symfony-app
spec:
replicas: 1
selector:
matchLabels:
app: symfony-worker
template:
metadata:
labels:
app: symfony-worker
spec:
containers:
- name: worker
image: my-symfony-app:latest
command: ["php", "bin/console", "messenger:consume", "async", "--time-limit=3600"]
envFrom:
- configMapRef:
name: symfony-config
- secretRef:
name: symfony-secrets
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 256Mi
---
# 10. Symfony Service + Ingress
apiVersion: v1
kind: Service
metadata:
name: symfony-app
namespace: symfony-app
spec:
selector:
app: symfony-app
ports:
- port: 80
targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: symfony-ingress
namespace: symfony-app
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
ingressClassName: nginx
tls:
- hosts:
- my-symfony-app.com
secretName: symfony-tls
rules:
- host: my-symfony-app.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: symfony-app
port:
number: 80

Tip: use a multi-stage Docker build — run composer install in the build stage and copy only the result into the runtime image. The image will be 2–3× smaller. Build and load it into Minikube with: docker build -t my-symfony-app:latest . && minikube image load my-symfony-app:latest
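A minimal sketch of such a multi-stage build, assuming an Apache-based PHP runtime (the manifests above expose port 80) — the extensions and DocumentRoot tweak depend on your app:

```dockerfile
# Stage 1: dependencies — Composer never reaches the final image
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader --no-scripts

# Stage 2: runtime
FROM php:8.2-apache
# pdo_pgsql for PostgreSQL; add whatever your app needs
RUN apt-get update && apt-get install -y libpq-dev \
    && docker-php-ext-install pdo_pgsql \
    && rm -rf /var/lib/apt/lists/*
COPY . /var/www/html
COPY --from=vendor /app/vendor /var/www/html/vendor
# For Symfony, also point Apache's DocumentRoot at public/ (omitted here)
EXPOSE 80
```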
Deployment steps
# 1. Deploy infrastructure
kubectl apply -f symfony-all.yaml
# 2. Wait for PostgreSQL
kubectl wait --for=condition=ready pod -l app=postgres -n symfony-app --timeout=60s
# 3. Run migrations (Job applies automatically)
kubectl apply -f symfony-all.yaml
# 4. Follow migration logs
kubectl logs job/symfony-migrations-v1 -n symfony-app -f
# 5. Check everything
kubectl get all -n symfony-app
# 6. Port-forward for local testing
kubectl port-forward svc/symfony-app 8080:80 -n symfony-app
# Open: http://localhost:8080

🎉 You made it!
You've been through the entire guide. Here's a quick reference for everyday use: