Module 5

Infrastructure as Code

Automate everything with CloudFormation templates and Terraform modules. Compare IaC approaches.


What You'll Learn

You've deployed everything manually and with CloudFormation. Now let's compare the two major IaC approaches and deploy the complete sandbox infrastructure in one shot using Terraform, covering all resources from Modules 1, 2, and 6.

| Azure | AWS | Notes |
|-------|-----|-------|
| ARM Templates | CloudFormation | CloudFormation is the AWS-native IaC |
| Bicep | CloudFormation | Bicep is ARM's friendly syntax; CF uses YAML/JSON |
| Terraform (azurerm) | Terraform (aws) | Same tool, different provider |

CloudFormation vs Terraform

| Feature | CloudFormation | Terraform |
|---------|----------------|-----------|
| Provider | AWS only | Multi-cloud (AWS, Azure, GCP, …) |
| Language | YAML / JSON | HCL (HashiCorp Configuration Language) |
| State | Managed by AWS | Self-managed (local or S3 + DynamoDB) |
| Preview | Change Sets | terraform plan (better UX) |
| Rollback | Automatic on failure | Manual (no auto-rollback) |
| Modules | Nested Stacks | First-class reusable modules |
| Import | CloudFormation Import | terraform import |
| Drift detection | Built-in | terraform plan shows drift |
📘 Key Concept

When to use which? CloudFormation for pure AWS shops needing auto-rollback. Terraform for multi-cloud, better module ecosystem, team collaboration via Terraform Cloud, and if you're already using Terraform elsewhere (like Azure).
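Seeing the two preview workflows side by side makes the UX difference concrete. A rough sketch, assuming a hypothetical existing stack named sandbox-stack with a local template.yaml (both names are illustrative), versus the Terraform flow used later in this module:

```shell
# CloudFormation: previewing means creating, inspecting, then executing a change set
aws cloudformation create-change-set \
  --stack-name sandbox-stack \
  --change-set-name preview \
  --template-body file://template.yaml
aws cloudformation describe-change-set \
  --stack-name sandbox-stack --change-set-name preview
aws cloudformation execute-change-set \
  --stack-name sandbox-stack --change-set-name preview

# Terraform: one command to preview, one to apply exactly what was reviewed
terraform plan -out=tfplan
terraform apply tfplan
```

Both flows guarantee you apply what you reviewed; Terraform just does it with less ceremony.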


Terraform Project Structure

The sandbox Terraform config lives in the terraform/ folder at the root of this repository. It uses a modular structure that mirrors the tutorial modules:

text
terraform/
├── versions.tf               # Provider versions & Terraform version constraint
├── variables.tf              # All input variables with descriptions & validation
├── locals.tf                 # Computed local values
├── main.tf                   # Root module: wires all child modules together
├── outputs.tf                # Key outputs printed after apply
├── terraform.tfvars.example  # Copy this → terraform.tfvars and fill in values
├── .gitignore                # Excludes state files, .terraform/, terraform.tfvars
│
└── modules/
    ├── networking/           # Module 1: VPC, subnets, IGW, NAT, route tables
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── security_groups/      # Module 1: ALB, EC2, and RDS security groups
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── database/             # Module 2: DB subnet group + RDS PostgreSQL 15
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── compute/              # Module 2: IAM, ALB, target group, ASG, scaling
    │   ├── main.tf
    │   ├── variables.tf
    │   ├── outputs.tf
    │   └── templates/
    │       └── user_data.sh.tpl  # EC2 bootstrap (Node.js + CodeDeploy)
    └── eks/                  # Module 6: EKS cluster, node group, ECR, OIDC
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
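For orientation, the root main.tf composes these child modules by passing one module's outputs into the next. A hypothetical sketch (argument and output names here are illustrative, not the sandbox's exact ones):

```hcl
# main.tf (root): illustrative wiring only
module "networking" {
  source       = "./modules/networking"
  project_name = var.project_name
  vpc_cidr     = var.vpc_cidr
}

module "security_groups" {
  source = "./modules/security_groups"
  vpc_id = module.networking.vpc_id   # one module's output feeds the next
}

module "database" {
  source             = "./modules/database"
  private_subnet_ids = module.networking.private_subnet_ids
  rds_sg_id          = module.security_groups.rds_sg_id
  db_password        = var.db_password
}
```

These output references are also what gives Terraform its dependency graph: networking is created before the security groups, which are created before the database.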

What Gets Deployed

| Module | Resources |
|--------|-----------|
| networking | VPC · Internet Gateway · 2 public subnets · 2 private subnets · Elastic IP · NAT Gateway · public route table · private route table |
| security_groups | ALB security group (80/443) · EC2 security group (3000 from ALB) · RDS security group (5432 from EC2) |
| database | DB subnet group · RDS PostgreSQL 15 (db.t3.micro, 20 GB gp3, encrypted) |
| compute | IAM role + instance profile · Application Load Balancer · target group + health check · HTTP listener · Launch Template (AL2023 + Node.js) · Auto Scaling Group (min 2, max 4) · CPU target-tracking scaling policy |
| eks (optional) | EKS control plane · managed node group (t3.medium) · OIDC provider · ECR repository + lifecycle policy · IAM roles for cluster and nodes |

Prerequisites

Before running Terraform, install and configure these tools:

1. Install Terraform

bash
# macOS (Homebrew)
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

# Verify
terraform -version
# Terraform v1.6.x or higher required

2. Install AWS CLI

bash
# macOS
brew install awscli

# Verify
aws --version

3. Configure AWS Credentials

Terraform uses the same credentials as the AWS CLI. Use one of these methods:

bash
# Option A: AWS CLI configure (creates ~/.aws/credentials)
aws configure
# AWS Access Key ID:     <your-access-key>
# AWS Secret Access Key: <your-secret-key>
# Default region:        us-east-1
# Default output format: json

# Option B: Environment variables (useful for CI/CD)
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

# Verify: should return your account ID
aws sts get-caller-identity
💡 Tip

Use an IAM user with AdministratorAccess for the sandbox. In production, use least-privilege policies scoped to only the services you need. Never use your root account credentials.
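Creating that sandbox user can be scripted with the AWS CLI. A sketch, where the user name sandbox-admin is hypothetical and AdministratorAccess is acceptable only for a disposable learning account:

```shell
# Create a dedicated sandbox user instead of using root credentials
aws iam create-user --user-name sandbox-admin

# Attach the broad sandbox policy (swap in a least-privilege policy for production)
aws iam attach-user-policy \
  --user-name sandbox-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate an access key pair, then feed the values into `aws configure`
aws iam create-access-key --user-name sandbox-admin
```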

4. (Optional) Install kubectl and eksctl (for Module 6 / EKS)

bash
# kubectl: Kubernetes CLI
brew install kubectl

# eksctl: EKS management CLI (optional, Terraform handles creation)
brew install eksctl

Step-by-Step Deployment

Step 1 – Navigate to the Terraform folder

bash
# From the repo root
cd terraform

Step 2 – Create your tfvars file

Copy the example file and set your values. At minimum, you must set db_password.

bash
cp terraform.tfvars.example terraform.tfvars
hcl
# terraform.tfvars
# Global
aws_region   = "us-east-1"
project_name = "sandbox"
environment  = "sandbox"

# Networking
vpc_cidr = "10.0.0.0/16"

# Database: CHANGE the password before applying
db_name     = "aws_sandbox"
db_username = "dbadmin"
db_password = "YourSecurePass123!"  # Must be 8+ characters

# Compute
ec2_instance_type = "t3.micro"   # Free-tier eligible
asg_min_size      = 2
asg_max_size      = 4

# EKS (Module 6): disabled by default
# Set enable_eks = true when you're ready to explore Kubernetes
enable_eks = false
⚠️ Warning

Never commit terraform.tfvars to git. It's already in .gitignore, but double-check before pushing. It contains your DB password.
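One quick way to double-check is git check-ignore, which prints each path an ignore rule matches. A sketch, run from the terraform/ folder:

```shell
# Each sensitive file should print together with the .gitignore rule that matches it
for f in terraform.tfvars terraform.tfstate; do
  git check-ignore -v "$f" \
    || echo "WARNING: $f is NOT ignored -- fix .gitignore before pushing"
done
```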

Step 3 – Initialize Terraform

Downloads the AWS provider plugin and sets up the working directory. Run this once, or after adding new providers/modules.

bash
terraform init

# Expected output:
# Initializing modules...
# Initializing the backend...
# Initializing provider plugins...
# - Finding hashicorp/aws versions matching "~> 5.0"...
# - Installing hashicorp/aws v5.x.x...
# Terraform has been successfully initialized!

Step 4 – Preview the changes

terraform plan shows exactly what will be created, modified, or destroyed, without touching any real resources. Review this carefully.

bash
terraform plan

# You'll see ~35 resources to be created (Modules 1+2):
#
#   + module.networking.aws_vpc.main
#   + module.networking.aws_internet_gateway.main
#   + module.networking.aws_subnet.public[0]
#   + module.networking.aws_subnet.public[1]
#   + module.networking.aws_subnet.private[0]
#   + module.networking.aws_subnet.private[1]
#   + module.networking.aws_eip.nat
#   + module.networking.aws_nat_gateway.main
#   + module.networking.aws_route_table.public
#   + module.networking.aws_route_table.private
#   + module.networking.aws_route_table_association.public[0]
#   + module.networking.aws_route_table_association.public[1]
#   + module.networking.aws_route_table_association.private[0]
#   + module.networking.aws_route_table_association.private[1]
#   + module.security_groups.aws_security_group.alb
#   + module.security_groups.aws_security_group.ec2
#   + module.security_groups.aws_security_group.rds
#   + module.database.aws_db_subnet_group.main
#   + module.database.aws_db_instance.main
#   + module.compute.aws_iam_role.ec2
#   + module.compute.aws_iam_instance_profile.ec2
#   + module.compute.aws_lb.main
#   + module.compute.aws_lb_target_group.main
#   + module.compute.aws_lb_listener.http
#   + module.compute.aws_launch_template.main
#   + module.compute.aws_autoscaling_group.main
#   + module.compute.aws_autoscaling_policy.cpu
#   ...
#
# Plan: 35 to add, 0 to change, 0 to destroy.

Step 5 – Apply

Creates all resources. Type yes when prompted, or use -auto-approve in CI/CD pipelines.

bash
terraform apply

# Terraform will prompt:
#   Do you want to perform these actions? Enter a value: yes

# Tip: save the plan and apply it exactly as reviewed:
terraform plan -out=tfplan
terraform apply tfplan
📘 Key Concept

Timing: RDS takes the longest (~5–8 min). Total apply time is roughly 10–15 minutes. The NAT Gateway and RDS instance are the main bottlenecks.

Step 6 – Verify the deployment

bash
# View all outputs
terraform output

# Example output:
#   alb_dns_name     = "sandbox-alb-1234567890.us-east-1.elb.amazonaws.com"
#   app_url          = "http://sandbox-alb-1234567890.us-east-1.elb.amazonaws.com"
#   db_endpoint      = "sandbox-db.xxxx.us-east-1.rds.amazonaws.com"
#   vpc_id           = "vpc-0abc123..."

# Get just the ALB URL
terraform output alb_dns_name

# Hit the health endpoint
curl http://$(terraform output -raw alb_dns_name)/health
# Expected: {"status":"ok","service":"aws-sandbox-api"}

# Hit the API
curl http://$(terraform output -raw alb_dns_name)/api/tasks
# Expected: {"tasks":[]} (empty until you add tasks)

Enabling Module 6 – EKS

EKS is disabled by default to avoid the ~$0.10/hr control plane cost. Enable it when you're ready to explore Kubernetes.

⚠️ Warning

Cost warning: EKS costs ~$0.10/hr ($72/month) for the control plane alone, plus EC2 costs for worker nodes (2× t3.medium ≈ $60/month). Always destroy the EKS cluster when done: terraform destroy -target=module.eks
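Under the hood, a toggle like enable_eks is typically implemented with a module-level count. A hypothetical sketch (the sandbox's real wiring may differ):

```hcl
# main.tf: instantiate the EKS module only when enable_eks = true
module "eks" {
  source = "./modules/eks"
  count  = var.enable_eks ? 1 : 0   # count = 0 skips the module entirely

  cluster_version    = var.eks_cluster_version
  node_instance_type = var.eks_node_instance_type
}

# With count, downstream references are indexed: module.eks[0].cluster_name
```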

Step 1 – Enable EKS in terraform.tfvars

hcl
# In terraform.tfvars
enable_eks             = true
eks_cluster_version    = "1.29"
eks_node_instance_type = "t3.medium"
eks_node_min_size      = 1
eks_node_max_size      = 4
eks_node_desired_size  = 2

Step 2 – Apply (EKS takes ~15 minutes)

bash
terraform apply

# Or apply only the EKS module (faster if everything else is already deployed):
terraform apply -target=module.eks

Step 3 – Configure kubectl

bash
# Use the output command directly
$(terraform output -raw eks_kubeconfig_command)

# Or manually:
aws eks update-kubeconfig --region us-east-1 --name sandbox-eks

# Verify nodes are ready
kubectl get nodes
# NAME                          STATUS   ROLES    AGE   VERSION
# ip-10-0-10-xxx.ec2.internal   Ready    <none>   2m    v1.29.x
# ip-10-0-11-xxx.ec2.internal   Ready    <none>   2m    v1.29.x

Step 4 – Push a Docker image to ECR

bash
# Get the ECR URL
ECR_URL=$(terraform output -raw ecr_repository_url)

# Authenticate Docker to ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin $ECR_URL

# Build and push
docker build -t sandbox-app ./app
docker tag sandbox-app:latest $ECR_URL:latest
docker push $ECR_URL:latest

Step 5 – Deploy to EKS

yaml
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sandbox-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sandbox
  template:
    metadata:
      labels:
        app: sandbox
    spec:
      containers:
        - name: app
          image: <ecr_url>:latest    # Replace with your ECR URL
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: sandbox-service
spec:
  type: LoadBalancer   # Creates an AWS NLB automatically
  selector:
    app: sandbox
  ports:
    - port: 80
      targetPort: 3000
bash
kubectl apply -f k8s/deployment.yaml

# Watch pods start
kubectl get pods -w

# Get the LoadBalancer URL
kubectl get svc sandbox-service
# EXTERNAL-IP column = your NLB DNS name
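Provisioning the load balancer takes a minute or two; EXTERNAL-IP shows <pending> until it's ready. A small polling sketch, assuming the sandbox-service name from the manifest above and the app's /health route:

```shell
# Wait for the LoadBalancer hostname to be assigned, then hit the health endpoint
while true; do
  NLB=$(kubectl get svc sandbox-service \
        -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  [ -n "$NLB" ] && break
  echo "Waiting for LoadBalancer..."; sleep 10
done
curl "http://$NLB/health"
```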

State Management

By default, Terraform stores state locally in terraform.tfstate. For team use, migrate to S3 remote state:

bash
# 1. Create the S3 bucket and DynamoDB lock table
aws s3api create-bucket \
  --bucket my-sandbox-tf-state \
  --region us-east-1

aws s3api put-bucket-versioning \
  --bucket my-sandbox-tf-state \
  --versioning-configuration Status=Enabled

aws dynamodb create-table \
  --table-name terraform-state-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1
hcl
# versions.tf (backend section)
terraform {
  backend "s3" {
    bucket         = "my-sandbox-tf-state"
    key            = "aws-sandbox/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                      # Encrypt state at rest
    dynamodb_table = "terraform-state-lock"    # Prevent concurrent applies
  }
}
bash
# After editing versions.tf, re-initialize to migrate state
terraform init -migrate-state
⚠️ Warning

Never commit terraform.tfstate to git. It contains sensitive data (DB passwords, ARNs, private keys). It's already in .gitignore, but always verify before pushing.


Useful Terraform Commands

bash
# Show all managed resources and their current state
terraform state list

# Show details of a specific resource
terraform state show module.networking.aws_vpc.main

# Refresh state from real AWS and show drift (replaces the legacy `terraform refresh`)
terraform plan -refresh-only

# Destroy a specific module (e.g., EKS to stop billing)
terraform destroy -target=module.eks

# Destroy everything
terraform destroy

# Format all .tf files consistently
terraform fmt -recursive

# Validate config syntax
terraform validate

# Graph resource dependencies (pipe to dot for a diagram)
terraform graph | dot -Tsvg > graph.svg

Cleanup – Stop All Billing

When you're done learning, destroy the infrastructure to avoid charges:

bash
# Option A: Destroy EKS only (most expensive: ~$132/month)
terraform destroy -target=module.eks

# Option B: Destroy everything
cd terraform
terraform destroy
# Type 'yes' when prompted

# Verify everything is gone (should return empty)
aws ec2 describe-vpcs --filters "Name=tag:Project,Values=sandbox"
aws rds describe-db-instances --query "DBInstances[?DBInstanceIdentifier=='sandbox-db']"

# Major cost drivers to watch:
#   NAT Gateway  ~$32/month  ← auto-destroyed with terraform destroy
#   ALB          ~$18/month  ← auto-destroyed with terraform destroy
#   RDS          free tier   ← 750 hrs/month
#   EKS cluster  ~$72/month  ← destroy first if not using K8s
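For extra certainty beyond the checks above, these spot-checks list any hourly-billed resources left in the region. Note they cover the whole account/region, not just the sandbox; empty output means you're clean:

```shell
# NAT gateways still running (should print an empty list)
aws ec2 describe-nat-gateways \
  --filter "Name=state,Values=available" \
  --query "NatGateways[].NatGatewayId"

# ALBs/NLBs left behind (should print an empty list)
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[].LoadBalancerName"

# EKS clusters (should print an empty list)
aws eks list-clusters --query "clusters"
```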

Key Takeaways

  • CloudFormation = AWS-native, auto-rollback, no state management
  • Terraform = multi-cloud, first-class modules, superior plan UX
  • Always run terraform plan before apply, and review every change
  • Use -target=module.X to deploy or destroy a single module
  • Store state in S3 + DynamoDB for team use; never commit .tfstate files
  • EKS is disabled by default (enable_eks = false) to prevent surprise charges
  • The terraform/ folder mirrors the tutorial modules: networking → security_groups → database → compute → eks
  • Run terraform destroy after each learning session; NAT Gateway and EKS are the biggest cost drivers