Infrastructure as Code
Automate everything with CloudFormation templates and Terraform modules. Compare IaC approaches.
What You'll Learn
You've deployed everything manually and with CloudFormation. Now let's compare the two major IaC approaches and deploy the complete sandbox infrastructure in one shot using Terraform, covering all resources from Modules 1, 2, and 6.
| Azure | AWS | Notes |
|---|---|---|
| ARM Templates | CloudFormation | CloudFormation is the AWS-native IaC |
| Bicep | CloudFormation | Bicep is ARM's friendly syntax; CF uses YAML/JSON |
| Terraform (azurerm) | Terraform (aws) | Same tool, different provider |
CloudFormation vs Terraform
| Feature | CloudFormation | Terraform |
|---|---|---|
| Provider | AWS only | Multi-cloud (AWS, Azure, GCP, …) |
| Language | YAML / JSON | HCL (HashiCorp Configuration Language) |
| State | Managed by AWS | Self-managed (local or S3 + DynamoDB) |
| Preview | Change Sets | terraform plan (better UX) |
| Rollback | Automatic on failure | Manual (no auto-rollback) |
| Modules | Nested Stacks | First-class reusable modules |
| Import | CloudFormation Import | terraform import |
| Drift detection | Built-in | terraform plan shows drift |
When to use which? Use CloudFormation for pure-AWS shops that want automatic rollback. Use Terraform for multi-cloud work, its richer module ecosystem, team collaboration via Terraform Cloud, or when you already use Terraform elsewhere (such as with Azure).
Terraform Project Structure
The sandbox Terraform config lives in the terraform/ folder at the root of this repository. It uses a modular structure that mirrors the tutorial modules:
terraform/
├── versions.tf               # Provider versions & Terraform version constraint
├── variables.tf              # All input variables with descriptions & validation
├── locals.tf                 # Computed local values
├── main.tf                   # Root module – wires all child modules together
├── outputs.tf                # Key outputs printed after apply
├── terraform.tfvars.example  # Copy this → terraform.tfvars and fill in values
├── .gitignore                # Excludes state files, .terraform/, terraform.tfvars
│
└── modules/
    ├── networking/           # Module 1: VPC, subnets, IGW, NAT, route tables
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── security_groups/      # Module 1: ALB, EC2, and RDS security groups
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── database/             # Module 2: DB subnet group + RDS PostgreSQL 15
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── compute/              # Module 2: IAM, ALB, target group, ASG, scaling
    │   ├── main.tf
    │   ├── variables.tf
    │   ├── outputs.tf
    │   └── templates/
    │       └── user_data.sh.tpl  # EC2 bootstrap (Node.js + CodeDeploy)
    └── eks/                  # Module 6: EKS cluster, node group, ECR, OIDC
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
What Gets Deployed
| Module | Resources |
|---|---|
| networking | VPC · Internet Gateway · 2 public subnets · 2 private subnets · Elastic IP · NAT Gateway · public route table · private route table |
| security_groups | ALB security group (80/443) · EC2 security group (3000 from ALB) · RDS security group (5432 from EC2) |
| database | DB subnet group · RDS PostgreSQL 15 (db.t3.micro, 20 GB gp3, encrypted) |
| compute | IAM role + instance profile · Application Load Balancer · target group + health check · HTTP listener · Launch Template (AL2023 + Node.js) · Auto Scaling Group (min 2, max 4) · CPU target-tracking scaling policy |
| eks (optional) | EKS control plane · managed node group (t3.medium) · OIDC provider · ECR repository + lifecycle policy · IAM roles for cluster and nodes |
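The root main.tf typically wires these modules together by passing one module's outputs into the next module's inputs. A sketch of that wiring; the variable and output names here are illustrative, not necessarily the sandbox's actual interface:

```hcl
# Illustrative wiring only; check modules/*/variables.tf and outputs.tf
# in the repo for the real input/output names.
module "networking" {
  source       = "./modules/networking"
  project_name = var.project_name
  vpc_cidr     = var.vpc_cidr
}

module "security_groups" {
  source = "./modules/security_groups"
  vpc_id = module.networking.vpc_id   # output of one module feeds the next
}

module "database" {
  source             = "./modules/database"
  private_subnet_ids = module.networking.private_subnet_ids
  rds_sg_id          = module.security_groups.rds_sg_id
  db_password        = var.db_password
}
```

Terraform infers the creation order from these references, so networking is built before the security groups, which are built before the database.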
Prerequisites
Before running Terraform, install and configure these tools:
1. Install Terraform
# macOS (Homebrew)
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
# Verify
terraform -version
# Terraform v1.6.x or higher required
2. Install AWS CLI
# macOS
brew install awscli
# Verify
aws --version
3. Configure AWS Credentials
Terraform uses the same credentials as the AWS CLI. Use one of these methods:
# Option A: AWS CLI configure (creates ~/.aws/credentials)
aws configure
# AWS Access Key ID: <your-access-key>
# AWS Secret Access Key: <your-secret-key>
# Default region: us-east-1
# Default output format: json
# Option B: Environment variables (useful for CI/CD)
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"
# Verify โ should return your account ID
aws sts get-caller-identity
Use an IAM user with AdministratorAccess for the sandbox. In production, use least-privilege policies scoped to only the services you need. Never use your root account credentials.
4. (Optional) Install kubectl and eksctl – for Module 6 / EKS
# kubectl – Kubernetes CLI
brew install kubectl
# eksctl – EKS management CLI (optional, Terraform handles creation)
brew install eksctl
Step-by-Step Deployment
Step 1 – Navigate to the Terraform folder
# From the repo root
cd terraform
Step 2 – Create your tfvars file
Copy the example file and set your values. At minimum, you must set db_password.
cp terraform.tfvars.example terraform.tfvars
# Global
aws_region = "us-east-1"
project_name = "sandbox"
environment = "sandbox"
# Networking
vpc_cidr = "10.0.0.0/16"
# Database – CHANGE the password before applying
db_name = "aws_sandbox"
db_username = "dbadmin"
db_password = "YourSecurePass123!" # Must be 8+ characters
# Compute
ec2_instance_type = "t3.micro" # Free-tier eligible
asg_min_size = 2
asg_max_size = 4
# EKS (Module 6) – disabled by default
# Set enable_eks = true when you're ready to explore Kubernetes
enable_eks = false
Never commit terraform.tfvars to git. It's already in .gitignore, but double-check before pushing. It contains your DB password.
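The "8+ characters" rule on db_password is the kind of constraint variables.tf can enforce up front, so a bad value fails at plan time instead of at the RDS API. A sketch of such a variable block, assuming the names match the tfvars above; the repo's actual validation may differ:

```hcl
variable "db_password" {
  description = "Master password for the RDS instance"
  type        = string
  sensitive   = true   # Redacts the value from plan/apply output

  validation {
    condition     = length(var.db_password) >= 8
    error_message = "db_password must be at least 8 characters long."
  }
}
```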
Step 3 – Initialize Terraform
Downloads the AWS provider plugin and sets up the working directory. Run this once, or after adding new providers/modules.
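The constraints that init resolves live in versions.tf. A minimal sketch consistent with the "~> 5.0" provider pin and the v1.6+ prerequisite; the repo's exact pins may differ:

```hcl
terraform {
  required_version = ">= 1.6.0"   # Matches the prerequisite above

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # Any 5.x release, but not 6.0
    }
  }
}
```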
terraform init
# Expected output:
# Initializing modules...
# Initializing the backend...
# Initializing provider plugins...
# - Finding hashicorp/aws versions matching "~> 5.0"...
# - Installing hashicorp/aws v5.x.x...
# Terraform has been successfully initialized!
Step 4 – Preview the changes
terraform plan shows exactly what will be created, modified, or destroyed, without touching any real resources. Review this carefully.
terraform plan
# You'll see ~35 resources to be created (Modules 1+2):
#
# + module.networking.aws_vpc.main
# + module.networking.aws_internet_gateway.main
# + module.networking.aws_subnet.public[0]
# + module.networking.aws_subnet.public[1]
# + module.networking.aws_subnet.private[0]
# + module.networking.aws_subnet.private[1]
# + module.networking.aws_eip.nat
# + module.networking.aws_nat_gateway.main
# + module.networking.aws_route_table.public
# + module.networking.aws_route_table.private
# + module.networking.aws_route_table_association.public[0]
# + module.networking.aws_route_table_association.public[1]
# + module.networking.aws_route_table_association.private[0]
# + module.networking.aws_route_table_association.private[1]
# + module.security_groups.aws_security_group.alb
# + module.security_groups.aws_security_group.ec2
# + module.security_groups.aws_security_group.rds
# + module.database.aws_db_subnet_group.main
# + module.database.aws_db_instance.main
# + module.compute.aws_iam_role.ec2
# + module.compute.aws_iam_instance_profile.ec2
# + module.compute.aws_lb.main
# + module.compute.aws_lb_target_group.main
# + module.compute.aws_lb_listener.http
# + module.compute.aws_launch_template.main
# + module.compute.aws_autoscaling_group.main
# + module.compute.aws_autoscaling_policy.cpu
# ...
#
# Plan: 35 to add, 0 to change, 0 to destroy.
Step 5 – Apply
Creates all resources. Type yes when prompted, or use -auto-approve in CI/CD pipelines.
terraform apply
# Terraform will prompt:
# Do you want to perform these actions? Enter a value: yes
# Tip: save the plan and apply it exactly as reviewed:
terraform plan -out=tfplan
terraform apply tfplan
Timing: RDS takes the longest (~5–8 min). Total apply time is roughly 10–15 minutes. The NAT Gateway and RDS instance are the main bottlenecks.
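Because the ALB and instances need a few more minutes to pass health checks after apply finishes, scripts that chain onto it can poll instead of sleeping blindly. A small helper sketch; wait_for is hypothetical and not part of the sandbox repo:

```shell
# Hypothetical helper: retry a check command until it succeeds or we give up.
wait_for() {
  check_cmd=$1
  attempts=${2:-30}   # default ~1 minute: 30 tries, 2 s apart
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if eval "$check_cmd" >/dev/null 2>&1; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "timed out"
  return 1
}

# Usage after apply: poll the ALB health endpoint before running smoke tests
# wait_for "curl -fsS http://$(terraform output -raw alb_dns_name)/health"
```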
Step 6 – Verify the deployment
# View all outputs
terraform output
# Example output:
# alb_dns_name = "sandbox-alb-1234567890.us-east-1.elb.amazonaws.com"
# app_url = "http://sandbox-alb-1234567890.us-east-1.elb.amazonaws.com"
# db_endpoint = "sandbox-db.xxxx.us-east-1.rds.amazonaws.com"
# vpc_id = "vpc-0abc123..."
# Get just the ALB URL
terraform output alb_dns_name
# Hit the health endpoint
curl http://$(terraform output -raw alb_dns_name)/health
# Expected: {"status":"ok","service":"aws-sandbox-api"}
# Hit the API
curl http://$(terraform output -raw alb_dns_name)/api/tasks
# Expected: {"tasks":[]} (empty until you add tasks)
Enabling Module 6 – EKS
EKS is disabled by default to avoid the ~$0.10/hr control plane cost. Enable it when you're ready to explore Kubernetes.
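A flag like enable_eks is usually wired to the module through count, a common Terraform toggle pattern; the sandbox's main.tf may implement it differently:

```hcl
variable "enable_eks" {
  type    = bool
  default = false   # Opt in explicitly; the control plane bills hourly
}

module "eks" {
  source = "./modules/eks"
  count  = var.enable_eks ? 1 : 0   # count = 0 skips the module entirely

  cluster_version    = var.eks_cluster_version
  node_instance_type = var.eks_node_instance_type
}
```

With count = 0, Terraform plans no EKS resources at all, so the rest of the stack deploys without the cluster.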
Cost warning: EKS costs ~$0.10/hr ($72/month) for the control plane alone, plus EC2 costs for worker nodes (2× t3.medium ≈ $60/month). Always destroy the EKS cluster when done:
terraform destroy -target=module.eks
Step 1 – Enable EKS in terraform.tfvars
# In terraform.tfvars
enable_eks = true
eks_cluster_version = "1.29"
eks_node_instance_type = "t3.medium"
eks_node_min_size = 1
eks_node_max_size = 4
eks_node_desired_size = 2
Step 2 – Apply (EKS takes ~15 minutes)
terraform apply
# Or apply only the EKS module (faster if everything else is already deployed):
terraform apply -target=module.eks
Step 3 – Configure kubectl
# Use the output command directly
$(terraform output -raw eks_kubeconfig_command)
# Or manually:
aws eks update-kubeconfig --region us-east-1 --name sandbox-eks
# Verify nodes are ready
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# ip-10-0-10-xxx.ec2.internal Ready <none> 2m v1.29.x
# ip-10-0-11-xxx.ec2.internal Ready <none> 2m v1.29.x
Step 4 – Push a Docker image to ECR
# Get the ECR URL
ECR_URL=$(terraform output -raw ecr_repository_url)
# Authenticate Docker to ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $ECR_URL
# Build and push
docker build -t sandbox-app ./app
docker tag sandbox-app:latest $ECR_URL:latest
docker push $ECR_URL:latest
Step 5 – Deploy to EKS
Save the following manifest as k8s/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sandbox-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sandbox
  template:
    metadata:
      labels:
        app: sandbox
    spec:
      containers:
        - name: app
          image: <ecr_url>:latest   # Replace with your ECR URL
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: sandbox-service
spec:
  type: LoadBalancer   # Creates an AWS NLB automatically
  selector:
    app: sandbox
  ports:
    - port: 80
      targetPort: 3000
kubectl apply -f k8s/deployment.yaml
# Watch pods start
kubectl get pods -w
# Get the LoadBalancer URL
kubectl get svc sandbox-service
# EXTERNAL-IP column = your NLB DNS name
State Management
By default, Terraform stores state locally in terraform.tfstate. For team use, migrate to S3 remote state:
# 1. Create the S3 bucket and DynamoDB lock table
aws s3api create-bucket --bucket my-sandbox-tf-state --region us-east-1
aws s3api put-bucket-versioning --bucket my-sandbox-tf-state --versioning-configuration Status=Enabled
aws dynamodb create-table --table-name terraform-state-lock --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --billing-mode PAY_PER_REQUEST --region us-east-1
# 2. Add the backend block to versions.tf
terraform {
  backend "s3" {
    bucket         = "my-sandbox-tf-state"
    key            = "aws-sandbox/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                     # Encrypt state at rest
    dynamodb_table = "terraform-state-lock"   # Prevent concurrent applies
  }
}
# After editing versions.tf, re-initialize to migrate state
terraform init -migrate-state
Never commit terraform.tfstate to git. It contains sensitive data (DB passwords, ARNs, private keys). It's already in .gitignore, but always verify before pushing.
Useful Terraform Commands
# Show all managed resources and their current state
terraform state list
# Show details of a specific resource
terraform state show module.networking.aws_vpc.main
# Refresh state from real AWS (detect drift)
# (on newer Terraform versions, prefer: terraform plan -refresh-only)
terraform refresh
# Destroy a specific module (e.g., EKS to stop billing)
terraform destroy -target=module.eks
# Destroy everything
terraform destroy
# Format all .tf files consistently
terraform fmt -recursive
# Validate config syntax
terraform validate
# Graph resource dependencies (pipe to dot for a diagram)
terraform graph | dot -Tsvg > graph.svg
Cleanup – Stop All Billing
When you're done learning, destroy the infrastructure to avoid charges:
# Option A: Destroy EKS only (most expensive – ~$132/month)
terraform destroy -target=module.eks
# Option B: Destroy everything
cd terraform
terraform destroy
# Type 'yes' when prompted
# Verify everything is gone (should return empty)
aws ec2 describe-vpcs --filters "Name=tag:Project,Values=sandbox"
aws rds describe-db-instances --query "DBInstances[?DBInstanceIdentifier=='sandbox-db']"
# Major cost drivers to watch:
# NAT Gateway ~$32/month – auto-destroyed with terraform destroy
# ALB ~$18/month – auto-destroyed with terraform destroy
# RDS free tier – 750 hrs/month
# EKS cluster ~$72/month – destroy first if not using K8s
Key Takeaways
- CloudFormation = AWS-native, auto-rollback, no state management
- Terraform = multi-cloud, first-class modules, superior `plan` UX
- Always run `terraform plan` before `apply` – review every change
- Use `-target=module.X` to deploy or destroy a single module
- Store state in S3 + DynamoDB for team use – never commit `.tfstate` files
- EKS is disabled by default (`enable_eks = false`) to prevent surprise charges
- The `terraform/` folder mirrors the tutorial modules: networking → security_groups → database → compute → eks
- Run `terraform destroy` after each learning session – NAT Gateway and EKS are the biggest cost drivers