Docker Compose handles multi-container applications on a single server beautifully. But what happens when one server isn't enough? When you need high availability, automatic failover, or the ability to scale beyond what a single VPS can handle?
That's where Kubernetes comes in. This guide explains what Kubernetes is and when you actually need it, then breaks down its core concepts so you can decide whether it's right for your use case.
What is Kubernetes?
Kubernetes (often abbreviated as K8s) is a container orchestration platform. Where Docker runs containers and Compose coordinates them on one host, Kubernetes coordinates containers across multiple machines, handling deployment, scaling, networking, and self-healing automatically.
Originally developed by Google and now maintained by the Cloud Native Computing Foundation, Kubernetes has become the industry standard for running containers at scale.
What Kubernetes provides:
- Multi-node orchestration: Run containers across a cluster of machines
- Service discovery: Containers find each other automatically, even as they move between nodes
- Load balancing: Distribute traffic across container replicas
- Auto-scaling: Add or remove container instances based on demand
- Self-healing: Automatically restart failed containers and replace unhealthy nodes
- Rolling updates: Deploy new versions with zero downtime
- Storage orchestration: Automatically mount storage systems to containers
- Secret management: Store and manage sensitive information securely
When Do You Actually Need Kubernetes?
Kubernetes is powerful but complex. Before diving in, honestly assess whether you need it.

Signs you don't need Kubernetes:

- Running a few containers on a single VPS
- Traffic fits comfortably on one server
- Brief downtime during updates is acceptable
- You're a solo developer or small team
- You want the simplest possible setup

Signs Kubernetes makes sense:

- You need high availability (no single point of failure)
- Running across multiple servers
- Need automatic scaling based on load
- Want zero-downtime deployments
- Deploying many microservices
- Learning K8s for career development
The honest middle ground: Many teams adopt Kubernetes because it's trendy, not because they need it. Running K8s adds operational overhead—more things to monitor, secure, and troubleshoot. If a 2GB VPS running Docker Compose handles your workload, adding Kubernetes doesn't make it better; it makes it more complicated.
Kubernetes Architecture
Kubernetes runs as a cluster: a set of machines working together. The cluster has two types of nodes.
Control Plane (Master)
The control plane manages the cluster. Components include:
- API Server: The front door to Kubernetes. All commands go through here.
- etcd: A distributed key-value store holding all cluster state
- Scheduler: Decides which node should run new containers
- Controller Manager: Runs background processes that maintain desired state
Worker Nodes
Worker nodes run your actual containers. Each worker runs:
- kubelet: An agent that talks to the control plane and manages containers on that node
- Container runtime: Usually containerd or Docker, actually runs the containers
- kube-proxy: Handles networking, routing traffic to the right containers
```
┌────────────────────────────────────────────────────────┐
│                     Control Plane                      │
│ ┌──────────┐ ┌──────┐ ┌───────────┐ ┌────────────────┐ │
│ │API Server│ │ etcd │ │ Scheduler │ │ Controller Mgr │ │
│ └──────────┘ └──────┘ └───────────┘ └────────────────┘ │
└────────────────────────────────────────────────────────┘
                             │
         ┌───────────────────┼───────────────────┐
         ▼                   ▼                   ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│   Worker Node   │ │   Worker Node   │ │   Worker Node   │
│  ┌───────────┐  │ │  ┌───────────┐  │ │  ┌───────────┐  │
│  │  kubelet  │  │ │  │  kubelet  │  │ │  │  kubelet  │  │
│  ├───────────┤  │ │  ├───────────┤  │ │  ├───────────┤  │
│  │kube-proxy │  │ │  │kube-proxy │  │ │  │kube-proxy │  │
│  ├───────────┤  │ │  ├───────────┤  │ │  ├───────────┤  │
│  │   Pods    │  │ │  │   Pods    │  │ │  │   Pods    │  │
│  └───────────┘  │ │  └───────────┘  │ │  └───────────┘  │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```

Core Kubernetes Concepts
Kubernetes introduces several abstractions. Understanding these is essential before deploying anything.
Pods
A Pod is the smallest deployable unit—a wrapper around one or more containers that share storage and network. Most Pods contain a single container.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: nginx:1.25
      ports:
        - containerPort: 80
```

Pods are ephemeral—they can be killed, moved, or recreated at any time. You almost never create Pods directly.
Deployments
A Deployment manages a set of identical Pods, ensuring the desired number are always running.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "128Mi"
              cpu: "250m"
```

This creates 3 Pods. If one fails, Kubernetes automatically creates a replacement.
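A Deployment also controls how updates roll out. As a sketch, these optional strategy fields (values here are illustrative, not the defaults) tell Kubernetes to replace Pods gradually whenever the Pod template changes:

```yaml
# Fragment of a Deployment spec: rolling update tuning
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra Pod created during an update
      maxUnavailable: 1  # at most 1 Pod taken down at a time
```

Changing the Pod template (for example, bumping the image tag) triggers a rolling update, and `kubectl rollout status deployment/my-app` watches it complete.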
Services
A Service provides a stable network endpoint that routes traffic to the right Pods.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
```

Service types:
- ClusterIP (default): Only accessible within the cluster
- NodePort: Exposes on each node's IP at a static port
- LoadBalancer: Provisions an external load balancer
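For example, switching the Service above to NodePort exposes it on every node's IP. A sketch; the nodePort value is an illustrative pick from the default 30000-32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080  # illustrative; must fall within the node port range
  type: NodePort
```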
ConfigMaps and Secrets
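Containers typically consume these objects as environment variables. A sketch of the consuming side, using envFrom with the app-config and app-secrets objects defined in this section:

```yaml
# Container fragment from a Deployment's Pod template
containers:
  - name: app
    image: nginx:1.25
    envFrom:
      - configMapRef:
          name: app-config   # injects DATABASE_HOST, LOG_LEVEL
      - secretRef:
          name: app-secrets  # injects DATABASE_PASSWORD (decoded)
```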
ConfigMaps hold non-sensitive configuration; Secrets hold sensitive values like passwords and API keys.

```yaml
# ConfigMap for non-sensitive config
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres-service"
  LOG_LEVEL: "info"
---
# Secret for sensitive data
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DATABASE_PASSWORD: cGFzc3dvcmQxMjM=  # base64 encoded
```

Note that Secret values are base64-encoded, not encrypted: anyone with read access to the Secret can decode them.

Persistent Volumes
Pods are ephemeral, but data often isn't. PersistentVolumeClaims decouple storage from Pods.
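The claim defined below is then referenced by name from a Pod spec. A sketch of the mount side (the image and paths are illustrative):

```yaml
# Pod spec fragment mounting the database-storage claim
spec:
  containers:
    - name: db
      image: postgres:16
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: database-storage
```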
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
```

Ingress
Ingress manages external HTTP/HTTPS traffic with URL-based routing and SSL termination.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```

Comparing Compose and Kubernetes
| Aspect | Docker Compose | Kubernetes |
|---|---|---|
| Scope | Single host | Multi-host cluster |
| Configuration | docker-compose.yml | Multiple YAML manifests |
| Learning curve | Low | Steep |
| High availability | Manual | Built-in |
| Scaling | Manual | Automatic (HPA) |
| Rolling updates | Basic | Sophisticated |
| Resource overhead | Minimal | Significant |
| Best for | Small deployments | Large-scale production |
Translating Concepts
| Compose | Kubernetes |
|---|---|
| Service (in compose) | Deployment + Service |
| ports: | Service + Ingress |
| volumes: (named) | PersistentVolumeClaim |
| environment: | ConfigMap + Secret |
| restart: always | Default behavior (Deployment) |
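To make the mapping concrete, here is a minimal Compose service and a note on what it roughly becomes in Kubernetes (a sketch; names are illustrative):

```yaml
# docker-compose.yml
services:
  web:
    image: nginx:1.25
    ports:
      - "80:80"
    environment:
      - LOG_LEVEL=info
    restart: always

# In Kubernetes this one service splits into a Deployment (image,
# restart policy, replica count), a Service plus optionally an
# Ingress (ports), and a ConfigMap (environment values).
```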
Kubernetes Distributions
"Kubernetes" isn't one thing you install—it's a specification with many implementations.
For Learning and Development
minikube: Single-node cluster in a VM. Great for learning.
```bash
minikube start
kubectl get nodes
```

kind (Kubernetes in Docker): Runs cluster nodes as Docker containers. Fast to create/destroy.
```bash
kind create cluster
kubectl get nodes
```

For Production (Self-Hosted)
k3s: Lightweight distribution perfect for VPS. Uses ~512MB RAM. We'll use this in Part 5.
```bash
curl -sfL https://get.k3s.io | sh -
kubectl get nodes
```

Other options:
- k0s: Single binary, zero dependencies
- kubeadm: Official bootstrapping tool, more complex but full-featured
Managed Kubernetes
Cloud providers handle the control plane:
- Google Kubernetes Engine (GKE)
- Amazon Elastic Kubernetes Service (EKS)
- Azure Kubernetes Service (AKS)
kubectl: The Kubernetes CLI
kubectl is how you interact with Kubernetes. It communicates with the API server to manage resources.
Cluster Info
```bash
kubectl cluster-info
kubectl get nodes
```

Working with Resources
```bash
# List resources
kubectl get pods
kubectl get services
kubectl get deployments

# Detailed info
kubectl describe pod my-pod
kubectl describe service my-service

# Create/apply resources
kubectl apply -f deployment.yaml
kubectl apply -f .  # Apply all YAML in directory

# Delete resources
kubectl delete pod my-pod
kubectl delete -f deployment.yaml
```

Debugging
```bash
# View logs
kubectl logs my-pod
kubectl logs -f my-pod                 # Follow
kubectl logs my-pod -c container-name  # Specific container

# Execute commands
kubectl exec -it my-pod -- /bin/bash

# Port forward for local access
kubectl port-forward my-pod 8080:80
kubectl port-forward service/my-service 8080:80
```

Scaling
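The kubectl scale command below sets a replica count by hand. For automatic scaling, a HorizontalPodAutoscaler adjusts replicas from observed metrics. A sketch (assumes the cluster has metrics-server installed; the thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale up when average CPU exceeds 70%
```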
```bash
kubectl scale deployment my-app --replicas=5
```

Namespaces
```bash
# List namespaces
kubectl get namespaces

# Resources in specific namespace
kubectl get pods -n production

# Set default namespace
kubectl config set-context --current --namespace=production
```

Resource Requirements
Kubernetes itself needs resources before your applications even run.
| Setup | CPU | RAM | Disk |
|---|---|---|---|
| Learning (single-node k3s) | 1 core | 1GB | 10GB |
| Production (3-node cluster) | 2 cores each | 4GB each | 20GB each |
Realistic Self-Hosted Setup
| Role | Instance Size | Count | Purpose |
|---|---|---|---|
| Control plane | 2GB RAM | 1-3 | Runs k3s server |
| Worker | 4GB+ RAM | 2+ | Runs applications |
For a budget-friendly start, a single 4GB VPS running k3s handles both control plane and workloads. Scale out as needed.
What's Next
You now understand Kubernetes architecture and core concepts: Pods, Deployments, Services, ConfigMaps, Secrets, Volumes, and Ingress. You know when Kubernetes makes sense and when Docker Compose is the better choice. In Part 5, we'll get hands-on: deploying a lightweight Kubernetes distribution (k3s) on RamNode VPS instances, setting up a functional cluster, configuring Ingress with automatic SSL, and deploying your first real application.
