Kubernetes on AWS (EKS)
Refresher on Kubernetes concepts with hands-on EKS demos. Pods, Services, Deployments, Ingress, and HPA.
Kubernetes Refresher
Kubernetes (K8s) orchestrates containerized applications, handling deployment, scaling, networking, and self-healing. This module is a refresher, not a deep dive. We'll cover core concepts and then demo them on AWS EKS.
| Azure | AWS | Notes |
|---|---|---|
| AKS | EKS | Both are managed K8s. AKS control plane is free; EKS costs $0.10/hr |
| Azure Container Registry | ECR | Private Docker registry |
| AKS + AGIC | EKS + ALB Controller | Ingress-to-load-balancer integration |
| AKS Node Pools | EKS Node Groups | Same concept: groups of worker nodes |
Core Concepts Refresher
Pod
Smallest deployable unit. One or more containers sharing network and storage. Like a VM but lightweight.
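As a sketch, a minimal Pod manifest looks like this (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod           # illustrative name
spec:
  containers:
    - name: web
      image: nginx:alpine   # any container image works here
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods; Deployments create and manage them for you.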
Deployment
Manages replica sets and rolling updates. Declares desired state (e.g., "run 3 copies of my app").
Service
Stable network endpoint for pods. Types: ClusterIP (internal), NodePort (node), LoadBalancer (cloud LB).
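For example, a minimal ClusterIP Service (the default type) fronting pods labeled `app: web` might look like this (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc         # illustrative name
spec:
  # type defaults to ClusterIP (internal-only); set NodePort or
  # LoadBalancer to expose the Service outside the cluster
  selector:
    app: web
  ports:
    - port: 80          # port the Service listens on
      targetPort: 8080  # port the container listens on
```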
Ingress
HTTP routing rules. Maps domain paths to services. On EKS, the ALB Controller creates real ALBs.
Namespace
Virtual cluster within a cluster. Isolate environments (dev/staging/prod) or teams.
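A Namespace is just another Kubernetes object; a minimal manifest looks like this (the name is a hypothetical environment):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging   # hypothetical environment namespace
```

You then target it with the `-n` flag, e.g. `kubectl apply -n staging -f app.yaml` or `kubectl get pods -n staging`.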
HPA
Horizontal Pod Autoscaler. Scales pod count based on CPU/memory/custom metrics. Like ASG for pods.
EKS: Elastic Kubernetes Service
EKS is AWS's managed Kubernetes. AWS manages the control plane (API server, etcd, scheduler) and you manage the worker nodes.
Setting Up EKS
```bash
# Install eksctl (the EKS CLI, analogous to 'az aks create')
brew install eksctl

# Create a cluster (takes ~15 minutes)
eksctl create cluster \
  --name aws-sandbox-eks \
  --region us-east-1 \
  --version 1.29 \
  --nodegroup-name standard \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 4

# Verify (eksctl writes your kubeconfig automatically)
kubectl get nodes
```

`eksctl create cluster` is similar to `az aks create`: both create the control plane plus a node pool. EKS costs $0.10/hr for the control plane; the AKS control plane is free.
Demo 1: Deploy the Sandbox App to EKS
Create Kubernetes manifests and deploy the app:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aws-sandbox-app
  labels:
    app: aws-sandbox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aws-sandbox
  template:
    metadata:
      labels:
        app: aws-sandbox
    spec:
      containers:
        - name: app
          # Placeholder image: replace with your app's image (e.g., an ECR URI).
          # A bare node:18-alpine container has no server to run and will exit.
          image: node:18-alpine
          ports:
            - containerPort: 3000
          env:
            - name: PORT
              value: "3000"
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: host
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: aws-sandbox-service
spec:
  type: LoadBalancer
  selector:
    app: aws-sandbox
  ports:
    - port: 80
      targetPort: 3000
```

```bash
# Create a secret for DB credentials
kubectl create secret generic db-credentials \
  --from-literal=host=your-rds-endpoint \
  --from-literal=password=your-password

# Apply the deployment
kubectl apply -f k8s/deployment.yaml

# Watch pods come up
kubectl get pods -w

# Get the LoadBalancer URL
kubectl get svc aws-sandbox-service
# -> EXTERNAL-IP column shows the load balancer's DNS name
# (a Service of type LoadBalancer provisions a CLB/NLB, not an ALB)
```

Demo 2: Horizontal Pod Autoscaler
Scale the app automatically based on CPU:
```bash
# Create HPA: target 50% CPU, scale 2-10 pods
kubectl autoscale deployment aws-sandbox-app \
  --cpu-percent=50 \
  --min=2 \
  --max=10

# Generate load (from another terminal)
kubectl run load-gen --image=busybox --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://aws-sandbox-service; done"

# Watch the HPA scale up
kubectl get hpa -w
# NAME              REFERENCE                    TARGETS   MINPODS   MAXPODS   REPLICAS
# aws-sandbox-app   Deployment/aws-sandbox-app   72%/50%   2         10        4

# Clean up load generator
kubectl delete pod load-gen
```

The Metrics Server must be installed for HPA to work. On EKS, install it with:

```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```
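The imperative `kubectl autoscale` command can also be written declaratively as an `autoscaling/v2` manifest (a sketch; the `k8s/hpa.yaml` path is an assumption):

```yaml
# k8s/hpa.yaml -- declarative equivalent of the kubectl autoscale command above
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: aws-sandbox-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: aws-sandbox-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

The declarative form lives in version control with your other manifests, which is usually preferable outside of quick demos.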
Demo 3: Ingress with AWS ALB Controller
Use the AWS Load Balancer Controller (formerly the ALB Ingress Controller) to create a real ALB from K8s Ingress rules:
```bash
# Install the AWS Load Balancer Controller (via Helm)
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=aws-sandbox-eks
```

The controller also needs IAM permissions (an IAM OIDC provider plus an IAM policy bound to its service account); see the controller's install guide for those steps.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aws-sandbox-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: aws-sandbox-service
                port:
                  number: 80
```

```bash
kubectl apply -f k8s/ingress.yaml
kubectl get ingress   # Get the ALB DNS from the ADDRESS column
```

EKS Cleanup
```bash
# Delete the app's load balancers first so orphaned ELBs/ALBs
# don't linger and block VPC deletion
kubectl delete ingress aws-sandbox-ingress
kubectl delete svc aws-sandbox-service

# Delete the cluster (removes all resources)
eksctl delete cluster --name aws-sandbox-eks --region us-east-1
```

EKS clusters cost ~$0.10/hr (~$72/month) for the control plane, plus EC2 costs for worker nodes. Always delete them after learning sessions!
Key Takeaways
- Pods = smallest unit, Deployments = desired state, Services = stable networking
- EKS = managed K8s on AWS; `eksctl` is the easiest way to create clusters
- HPA scales pods horizontally based on metrics (like ASG for containers)
- ALB Controller bridges K8s Ingress with AWS ALBs
- Always clean up EKS clusters; they're not cheap!