Prerequisites
Before starting with Kubernetes, ensure you have:
Server Requirements
- RamNode VPS (2GB+ RAM minimum)
- Ubuntu 20.04/22.04 or CentOS 7+
- At least 2 CPU cores
- 20GB+ disk space
- Docker installed
Knowledge Requirements
- Docker fundamentals
- Basic Linux administration
- YAML syntax understanding
- Networking concepts
- Command line proficiency
What is Kubernetes?
Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.
🎯 Purpose
While Docker manages individual containers, Kubernetes orchestrates entire fleets of containers across multiple servers, ensuring high availability, scalability, and efficient resource utilization.
🏗️ Architecture
Kubernetes uses a master-worker architecture with a control plane managing worker nodes that run your containerized applications in pods.
⚙️ Key Features
Automatic scaling, load balancing, self-healing, rolling updates, service discovery, and configuration management out of the box.
Why Use Kubernetes on RamNode?
RamNode's infrastructure provides an excellent foundation for Kubernetes deployments:
🏎️ Resource Efficiency
KVM-based virtualization with dedicated resources ensures consistent performance, eliminating "noisy neighbor" issues common in shared hosting.
📈 Scalability
Easily add more RamNode instances to expand your Kubernetes cluster as your applications grow.
💰 Cost-Effectiveness
Competitive pricing allows running multiple nodes without breaking the budget, perfect for both learning and production use.
🌐 Network Performance
Low-latency network infrastructure ensures fast communication between cluster nodes, crucial for optimal performance.
Core Kubernetes Concepts
Understanding these fundamental building blocks is essential for working with Kubernetes:
🏗️ Pods
The smallest deployable unit. Contains one or more containers sharing storage and network resources.
🖥️ Nodes
Worker machines in your cluster. Each RamNode VPS can serve as a Kubernetes node running pods.
🌐 Cluster
A set of nodes grouped together, allowing Kubernetes to distribute applications across multiple machines.
🚀 Deployments
Describe desired state for applications, managing replicas, updates, and rollbacks automatically.
🔗 Services
Provide stable network endpoints and load balancing for accessing applications.
🔧 ConfigMaps & Secrets
Manage configuration data and sensitive information separately from application code.
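To make the separation concrete, here is a minimal sketch of a ConfigMap and a pod that consumes it. The names (`app-config`, `app-pod`) and keys are illustrative, not from any specific application:

```yaml
# Hypothetical ConfigMap: key-value configuration stored in the cluster
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  CACHE_SIZE: "64"
---
# A pod can load those keys as environment variables
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx:latest
    envFrom:
    - configMapRef:
        name: app-config
```

Changing the ConfigMap does not require rebuilding the image; only the pod needs to be restarted to pick up new environment values.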
Choose Your Kubernetes Distribution
For RamNode deployments, you have several excellent options:
K3s (Recommended for Beginners)
Lightweight, easy to install, includes everything in a single binary. Perfect for development or small production workloads.
kubeadm (Standard Kubernetes)
Official Kubernetes installer providing full upstream compatibility. Best for learning standard Kubernetes or multi-node clusters.
MicroK8s
Canonical's lightweight distribution with useful addons. Great for development and testing environments.
Install K3s (Recommended)
K3s is perfect for RamNode deployments due to its simplicity and low resource requirements:
```shell
# Install K3s (single-node cluster)
curl -sfL https://get.k3s.io | sh -

# Verify the node is ready
sudo kubectl get nodes

# View the generated kubeconfig
sudo cat /etc/rancher/k3s/k3s.yaml

# Copy the kubeconfig so kubectl works without sudo
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
```

✅ K3s is now running! The install script alone sets up a complete Kubernetes cluster with sensible defaults.
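As an alternative to copying the file, a shell session can point kubectl directly at the kubeconfig K3s generates. This is a sketch using the default path K3s writes (the file is root-readable, so commands may still need sudo):

```shell
# Point kubectl at the K3s kubeconfig for this session only
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
echo "$KUBECONFIG"
```

The `export` lasts only for the current shell; add it to `~/.bashrc` to make it persistent.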
Install kubeadm (Multi-Node Clusters)
For multi-node clusters across multiple RamNode instances:
```shell
# Add the Kubernetes apt repository
# (the legacy apt.kubernetes.io / packages.cloud.google.com repos are retired;
#  pkgs.k8s.io is the current community-owned repository — adjust v1.30 to your target minor version)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install and pin the Kubernetes components
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Initialize the control plane (this CIDR matches the Flannel default)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Flannel pod network
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```

💡 Tip: Save the `kubeadm join` command printed by `kubeadm init` to add worker nodes later.
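If the join command is lost, it can be regenerated at any time. A sketch (the first command runs on the control-plane node; the worker-side values shown are placeholders, not real output):

```
# On the control-plane node: print a fresh join command (tokens expire after 24h)
kubeadm token create --print-join-command

# On each worker node: run the printed command as root, e.g.
#   sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```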
Deploy Your First Application
Let's deploy a simple web application to demonstrate Kubernetes basics:
Create a Deployment
```yaml
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
```

```shell
kubectl apply -f nginx-deployment.yaml
```

Expose the Application
```yaml
# nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
```

```shell
kubectl apply -f nginx-service.yaml

# View all pods
kubectl get pods

# Check service status
kubectl get services

# Get service details
kubectl describe service nginx-service
```

Essential Kubernetes Commands
Master these commands to effectively manage your Kubernetes cluster:
Cluster Information
```shell
# Cluster information
kubectl cluster-info

# Node information
kubectl get nodes -o wide

# Resource usage (requires metrics-server)
kubectl top nodes
kubectl top pods
```

Managing Applications
```shell
# Create resources from files
kubectl apply -f deployment.yaml

# Delete resources
kubectl delete -f deployment.yaml

# Scale deployments
kubectl scale deployment nginx-deployment --replicas=5

# Rolling updates
kubectl set image deployment/nginx-deployment nginx=nginx:1.21

# Rollback
kubectl rollout undo deployment/nginx-deployment
```

Debugging & Inspection
```shell
# Describe resources for debugging
kubectl describe pod <pod-name>
kubectl describe service <service-name>

# View logs
kubectl logs <pod-name>
kubectl logs -f <pod-name>   # Follow logs

# Execute commands in pods
kubectl exec -it <pod-name> -- /bin/bash

# Port forwarding for testing
kubectl port-forward pod/<pod-name> 8080:80
```

Best Practices
Follow these best practices for optimal Kubernetes deployments on RamNode:
Resource Management
Always set resource requests and limits:
```yaml
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"
```

Security Considerations
- Keep Kubernetes and Docker updated with latest security patches
- Use Kubernetes Secrets for sensitive data, not ConfigMaps
- Implement Network Policies to control pod-to-pod communication
- Configure RBAC (Role-Based Access Control) for production deployments
- Run containers with non-root users when possible
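Note that a Secret's `data:` field holds values base64-encoded, which is encoding, not encryption, so RBAC and encryption at rest still matter. A quick sketch of preparing a value for a Secret manifest (the password shown is a made-up example):

```shell
# Encode a value for a Secret's data: field (base64, not encryption)
password_b64=$(printf 'S3cr3t!' | base64)
echo "$password_b64"

# Round-trip to confirm the encoding
printf '%s' "$password_b64" | base64 -d
```

For ad-hoc use, `kubectl create secret generic <name> --from-literal=key=value` performs this encoding for you.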
Health Checks
Implement readiness and liveness probes:
```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```

Monitoring Your Cluster
Set up monitoring to track cluster health and performance:
Install Metrics Server
```shell
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```

Basic Monitoring Commands
```shell
# View resource usage
kubectl top nodes
kubectl top pods --all-namespaces

# Monitor events
kubectl get events --sort-by=.metadata.creationTimestamp

# Watch resource changes
kubectl get pods -w
```

Backup Strategy
```shell
# Backup cluster configuration
# (note: "get all" omits ConfigMaps, Secrets, and custom resources)
kubectl get all --all-namespaces -o yaml > cluster-backup.yaml

# For etcd backup (kubeadm clusters)
sudo ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```

Scaling Your Applications
Learn to scale your Kubernetes applications effectively:
Manual Scaling
```shell
# Scale up
kubectl scale deployment nginx-deployment --replicas=5

# Scale down
kubectl scale deployment nginx-deployment --replicas=2
```

Horizontal Pod Autoscaling
Configure automatic scaling based on resource utilization:
```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

```shell
kubectl apply -f hpa.yaml
```

Adding Worker Nodes
Scale your cluster by adding more RamNode instances:
- Provision additional RamNode VPS instances
- Install Docker and Kubernetes components on new nodes
- Use the join token from your master node to add workers
- Verify new nodes with `kubectl get nodes`
Troubleshooting Common Issues
Common issues and their solutions when running Kubernetes on RamNode:
🎉 Congratulations!
You've successfully learned Kubernetes basics on RamNode! You can now deploy, scale, and manage containerized applications with industry-standard orchestration tools.
