Cloud & DevOps · Intermediate

Project: Full IaC Serverless Stack with Terraform & GitHub Actions

Capstone project — provision a complete serverless stack (Lambda, API Gateway, DynamoDB, CloudFront, S3) across dev, staging, and prod environments using Terraform and a full GitHub Actions CI/CD pipeline.

Learnixo · April 17, 2026 · 10 min read
Terraform · AWS · GitHub Actions · Serverless · IaC · CI/CD · Project
Share:š•
Terraform

What You'll Build

A complete, production-grade serverless application deployed across three environments — using everything from this course:

┌──────────────────────────────────────────────────────────────────┐
│                    Full Stack Architecture                       │
│                                                                  │
│  CloudFront CDN                                                  │
│       │                                                          │
│       ▼                                                          │
│  S3 Static Website (React frontend)                              │
│       │                                                          │
│       ▼                                                          │
│  API Gateway HTTP API                                            │
│       │                                                          │
│       ▼                                                          │
│  Lambda Function (Python 3.12)                                   │
│       │                                                          │
│  ┌────┴──────────────────────┐                                   │
│  │                           │                                   │
│  ▼                           ▼                                   │
│  DynamoDB Table              Secrets Manager                     │
│  (application data)          (API keys, tokens)                  │
│                                                                  │
│  CloudWatch Logs + Alarms    X-Ray Tracing                       │
└──────────────────────────────────────────────────────────────────┘

Environments: dev → staging → prod

CI/CD: GitHub Actions with OIDC authentication. Plan on PR, apply on merge, manual approval for prod.


Project Repository Structure

learnixo-serverless/
├── .github/
│   └── workflows/
│       ├── ci.yml               # Test + lint on all PRs
│       └── deploy.yml           # Multi-env Terraform deployment
│
├── infrastructure/
│   ├── modules/
│   │   ├── api/                 # Lambda + API Gateway module
│   │   ├── storage/             # DynamoDB + S3 module
│   │   ├── cdn/                 # CloudFront + S3 static site
│   │   └── monitoring/          # CloudWatch alarms + dashboard
│   │
│   ├── environments/
│   │   ├── dev/
│   │   │   ├── main.tf
│   │   │   ├── terraform.tfvars
│   │   │   └── backend.tf
│   │   ├── staging/
│   │   │   ├── main.tf
│   │   │   ├── terraform.tfvars
│   │   │   └── backend.tf
│   │   └── prod/
│   │       ├── main.tf
│   │       ├── terraform.tfvars
│   │       └── backend.tf
│   │
│   └── bootstrap/               # State bucket + lock table
│       └── main.tf
│
├── api/
│   ├── handler.py               # Lambda function
│   ├── requirements.txt
│   └── tests/
│       └── test_handler.py
│
└── frontend/
    ├── index.html
    └── app.js

Step 1: Bootstrap (Run Once)

Before you can use Terraform with remote state, you need an S3 bucket and DynamoDB table. Bootstrap these manually:

HCL
# bootstrap/main.tf
terraform {
  required_providers {
    aws = { source = "hashicorp/aws", version = "~> 5.0" }
  }
}

provider "aws" { region = "us-east-1" }

resource "aws_s3_bucket" "state" {
  bucket = "learnixo-terraform-state-${data.aws_caller_identity.current.account_id}"
}

resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration { status = "Enabled" }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = aws_s3_bucket.state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "state" {
  bucket = aws_s3_bucket.state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_dynamodb_table" "state_lock" {
  name         = "terraform-state-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

data "aws_caller_identity" "current" {}

output "state_bucket" { value = aws_s3_bucket.state.id }
Bash
cd infrastructure/bootstrap
terraform init && terraform apply

Step 2: The API Module

HCL
# modules/api/main.tf
locals {
  name = "${var.project}-${var.environment}"
}

# IAM
resource "aws_iam_role" "lambda" {
  name = "${local.name}-lambda"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "basic" {
  role       = aws_iam_role.lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_iam_role_policy" "lambda_access" {
  role = aws_iam_role.lambda.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query", "dynamodb:Scan", "dynamodb:UpdateItem", "dynamodb:DeleteItem"]
        Resource = [var.dynamodb_table_arn, "${var.dynamodb_table_arn}/index/*"]
      },
      {
        Effect   = "Allow"
        Action   = ["secretsmanager:GetSecretValue"]
        Resource = ["arn:aws:secretsmanager:*:*:secret:${var.environment}/*"]
      },
      {
        Effect   = "Allow"
        Action   = ["xray:PutTraceSegments", "xray:PutTelemetryRecords"]
        Resource = "*"
      }
    ]
  })
}

# Lambda
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = var.lambda_source_dir
  output_path = "${path.module}/.lambda_${var.environment}.zip"
  excludes    = ["tests", "__pycache__", "*.pyc", "requirements-dev.txt"]
}

resource "aws_cloudwatch_log_group" "lambda" {
  name              = "/aws/lambda/${local.name}"
  retention_in_days = var.log_retention_days
}

resource "aws_lambda_function" "api" {
  function_name    = local.name
  role             = aws_iam_role.lambda.arn
  runtime          = "python3.12"
  handler          = "handler.lambda_handler"
  filename         = data.archive_file.lambda.output_path
  source_code_hash = data.archive_file.lambda.output_base64sha256
  memory_size      = var.lambda_memory_mb
  timeout          = var.lambda_timeout_seconds

  tracing_config { mode = "Active" }   # X-Ray tracing

  environment {
    variables = merge(var.environment_variables, {
      DYNAMODB_TABLE = var.dynamodb_table_name
      ENVIRONMENT    = var.environment
      LOG_LEVEL      = var.environment == "prod" ? "WARNING" : "DEBUG"
    })
  }

  depends_on = [
    aws_iam_role_policy_attachment.basic,
    aws_cloudwatch_log_group.lambda,
  ]
}

# API Gateway HTTP API
resource "aws_apigatewayv2_api" "main" {
  name          = local.name
  protocol_type = "HTTP"

  cors_configuration {
    allow_headers = ["content-type", "authorization", "x-api-key"]
    allow_methods = ["GET", "POST", "PUT", "DELETE", "OPTIONS"]
    allow_origins = var.cors_origins
    max_age       = 300
  }
}

resource "aws_apigatewayv2_integration" "lambda" {
  api_id                 = aws_apigatewayv2_api.main.id
  integration_type       = "AWS_PROXY"
  integration_uri        = aws_lambda_function.api.invoke_arn
  payload_format_version = "2.0"
}

resource "aws_apigatewayv2_route" "default" {
  api_id    = aws_apigatewayv2_api.main.id
  route_key = "$default"
  target    = "integrations/${aws_apigatewayv2_integration.lambda.id}"
}

resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.main.id
  name        = "$default"
  auto_deploy = true

  default_route_settings {
    throttling_burst_limit = var.environment == "prod" ? 1000 : 100
    throttling_rate_limit  = var.environment == "prod" ? 500  : 50
  }
}

resource "aws_lambda_permission" "apigw" {
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.api.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.main.execution_arn}/*/*"
}

output "api_endpoint"       { value = aws_apigatewayv2_stage.default.invoke_url }
output "lambda_name"        { value = aws_lambda_function.api.function_name }
output "lambda_arn"         { value = aws_lambda_function.api.arn }
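The module consumes several input variables that are never declared in the snippet above. A sketch of what modules/api/variables.tf might contain — types are inferred from how each variable is used, and the defaults are assumptions, not values from the repo:

```hcl
# modules/api/variables.tf (sketch -- types inferred from usage; defaults assumed)
variable "project"             { type = string }
variable "environment"         { type = string }
variable "lambda_source_dir"   { type = string }
variable "dynamodb_table_arn"  { type = string }
variable "dynamodb_table_name" { type = string }

variable "lambda_memory_mb" {
  type    = number
  default = 256
}

variable "lambda_timeout_seconds" {
  type    = number
  default = 30
}

variable "log_retention_days" {
  type    = number
  default = 14
}

variable "cors_origins" {
  type    = list(string)
  default = ["*"]
}

variable "environment_variables" {
  type    = map(string)
  default = {}
}
```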

Step 3: The Lambda Handler

Python
# api/handler.py
import json
import os
import uuid
import logging
import boto3
from boto3.dynamodb.conditions import Key
from datetime import datetime, timezone

logger = logging.getLogger()
logger.setLevel(os.environ.get("LOG_LEVEL", "INFO"))

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["DYNAMODB_TABLE"])

ROUTES = {}

def route(method, path):
    def decorator(fn):
        ROUTES[(method, path)] = fn
        return fn
    return decorator

def lambda_handler(event, context):
    method = event.get("requestContext", {}).get("http", {}).get("method", "")
    path = event.get("rawPath", "/")
    
    logger.info(f"{method} {path}", extra={"requestId": context.aws_request_id})
    
    try:
        handler_fn = ROUTES.get((method, path)) or ROUTES.get((method, path.rstrip("/")))
        if handler_fn:
            return handler_fn(event, context)
        
        # Path params (e.g. /items/{id})
        for (m, p), fn in ROUTES.items():
            if m == method and is_path_match(p, path):
                event["pathParams"] = extract_path_params(p, path)
                return fn(event, context)
        
        return respond(404, {"error": "Not Found"})
    
    except ValueError as e:
        logger.warning(f"Validation error: {e}")
        return respond(400, {"error": str(e)})
    except Exception as e:
        logger.exception(f"Unhandled error: {e}")
        return respond(500, {"error": "Internal Server Error"})

@route("GET", "/health")
def health_check(event, context):
    return respond(200, {"status": "ok", "environment": os.environ["ENVIRONMENT"]})

@route("GET", "/items")
def list_items(event, context):
    result = table.scan(Limit=50)
    return respond(200, {"items": result.get("Items", [])})

@route("POST", "/items")
def create_item(event, context):
    body = json.loads(event.get("body") or "{}")
    if not body.get("name"):
        raise ValueError("name is required")
    
    item = {
        "pk": f"ITEM#{uuid.uuid4()}",
        "sk": "METADATA",
        "name": body["name"],
        "description": body.get("description", ""),
        "createdAt": datetime.now(timezone.utc).isoformat(),
    }
    table.put_item(Item=item)
    return respond(201, item)

@route("GET", "/items/{id}")
def get_item(event, context):
    item_id = event["pathParams"]["id"]
    result = table.get_item(Key={"pk": f"ITEM#{item_id}", "sk": "METADATA"})
    item = result.get("Item")
    if not item:
        return respond(404, {"error": "Item not found"})
    return respond(200, item)

def respond(status_code, body):
    return {
        "statusCode": status_code,
        "headers": {
            "Content-Type": "application/json",
            "X-Request-Id": str(uuid.uuid4()),
        },
        "body": json.dumps(body, default=str),
    }

def is_path_match(template, path):
    t_parts = template.split("/")
    p_parts = path.split("/")
    if len(t_parts) != len(p_parts):
        return False
    return all(t == p or t.startswith("{") for t, p in zip(t_parts, p_parts))

def extract_path_params(template, path):
    params = {}
    for t, p in zip(template.split("/"), path.split("/")):
        if t.startswith("{") and t.endswith("}"):
            params[t[1:-1]] = p
    return params
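The two routing helpers are pure functions, so you can sanity-check them locally without AWS or the Lambda runtime. A quick standalone exercise (these are copies of the helpers above, not new behavior):

```python
# Standalone copies of handler.py's path-matching helpers,
# runnable without boto3 or any AWS environment.

def is_path_match(template: str, path: str) -> bool:
    """True when path matches template, treating {name} segments as wildcards."""
    t_parts = template.split("/")
    p_parts = path.split("/")
    if len(t_parts) != len(p_parts):
        return False
    return all(t == p or t.startswith("{") for t, p in zip(t_parts, p_parts))

def extract_path_params(template: str, path: str) -> dict:
    """Map each {name} segment in template to the matching segment of path."""
    params = {}
    for t, p in zip(template.split("/"), path.split("/")):
        if t.startswith("{") and t.endswith("}"):
            params[t[1:-1]] = p
    return params

print(is_path_match("/items/{id}", "/items/abc-123"))   # → True
print(is_path_match("/items/{id}", "/items"))           # → False (segment count differs)
print(extract_path_params("/items/{id}", "/items/42"))  # → {'id': '42'}
```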

Step 4: Environment Configurations

HCL
# environments/dev/main.tf
terraform {
  required_providers {
    aws = { source = "hashicorp/aws", version = "~> 5.0" }
  }
  backend "s3" {
    bucket         = "learnixo-terraform-state-123456789"
    key            = "dev/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks"
  }
}

provider "aws" {
  region = var.aws_region
  # No hardcoded profile here: in CI the provider picks up credentials
  # from the OIDC-assumed role. Locally, export AWS_PROFILE=learnixo-dev.
}

module "storage" {
  source      = "../../modules/storage"
  environment = "dev"
  project     = "learnixo"
}

module "api" {
  source              = "../../modules/api"
  environment         = "dev"
  project             = "learnixo"
  lambda_source_dir   = "${path.root}/../../../api"  # env dir -> repo root -> api/
  dynamodb_table_arn  = module.storage.table_arn
  dynamodb_table_name = module.storage.table_name
  lambda_memory_mb    = 256
  lambda_timeout_seconds = 30
  log_retention_days  = 7
  cors_origins        = ["*"]
}

output "api_url" { value = module.api.api_endpoint }
HCL
# environments/prod/terraform.tfvars
aws_region       = "us-east-1"
lambda_memory_mb = 512
cors_origins     = ["https://learnixo.io"]
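These tfvars need matching variable declarations in each environment directory. A minimal variables.tf sketch (assumed — the repo tree above does not list this file):

```hcl
# environments/prod/variables.tf (sketch -- assumed, not shown in the repo tree)
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "lambda_memory_mb" {
  type    = number
  default = 512
}

variable "cors_origins" {
  type = list(string)
}
```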

Step 5: Complete GitHub Actions Pipeline

YAML
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: true

jobs:
  # ─── 1. Test Lambda ────────────────────────────────────
  test:
    name: Test Lambda
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: api

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: "3.12", cache: pip }

      - run: pip install -r requirements.txt -r requirements-dev.txt
      - run: pytest tests/ -v --cov=. --cov-report=xml
      - run: pip install bandit && bandit -r . -ll

  # ─── 2. Terraform Plan (Dev) ────────────────────────────
  plan-dev:
    name: Plan Dev
    needs: test
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: infrastructure/environments/dev

    permissions:
      id-token: write
      contents: read
      pull-requests: write

    outputs:
      plan_exitcode: ${{ steps.plan.outputs.exitcode }}

    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with: { terraform_version: "1.8.0" }

      - name: AWS Auth
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.DEV_TERRAFORM_ROLE_ARN }}
          aws-region: us-east-1

      - run: terraform init
      - run: terraform validate
      - run: terraform fmt -check -recursive ../../

      - name: Plan
        id: plan
        run: |
          set +e
          terraform plan -no-color -out=tfplan 2>&1 | tee plan.txt
          code=${PIPESTATUS[0]}   # exit code of terraform, not tee
          echo "exitcode=$code" >> "$GITHUB_OUTPUT"
          exit $code

      - name: Comment PR
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const plan = fs.readFileSync('infrastructure/environments/dev/plan.txt', 'utf8').slice(0, 50000);
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## Dev Terraform Plan\n\`\`\`hcl\n${plan}\n\`\`\``
            });

  # ─── 3. Apply Dev ────────────────────────────────────────
  apply-dev:
    name: Apply Dev
    needs: plan-dev
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: dev
    defaults:
      run:
        working-directory: infrastructure/environments/dev

    permissions:
      id-token: write
      contents: read

    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3

      - name: AWS Auth
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.DEV_TERRAFORM_ROLE_ARN }}
          aws-region: us-east-1

      - run: terraform init
      - run: terraform apply -auto-approve

      - name: Get API URL
        run: |
          API_URL=$(terraform output -raw api_url)
          echo "DEV_API_URL=$API_URL" >> $GITHUB_ENV
          echo "Dev API: $API_URL"

      - name: Smoke test
        run: |
          sleep 5
          curl -f "$DEV_API_URL/health" | jq .

  # ─── 4. Apply Prod (manual gate) ────────────────────────
  apply-prod:
    name: Apply Production
    needs: apply-dev
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: prod    # Requires manual approval in GitHub Settings
    defaults:
      run:
        working-directory: infrastructure/environments/prod

    permissions:
      id-token: write
      contents: read

    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3

      - name: AWS Auth
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.PROD_TERRAFORM_ROLE_ARN }}
          aws-region: us-east-1

      - run: terraform init
      - run: terraform plan -out=tfplan
      - run: terraform apply -auto-approve tfplan
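The DEV_TERRAFORM_ROLE_ARN and PROD_TERRAFORM_ROLE_ARN secrets must point at IAM roles that trust GitHub's OIDC provider. A sketch of what such a role could look like — the role name and repo path here are assumptions, and the role would also need a permissions policy scoped to the resources Terraform manages:

```hcl
# Sketch: IAM role assumed by the workflow via GitHub OIDC (names are assumptions).
data "aws_iam_openid_connect_provider" "github" {
  url = "https://token.actions.githubusercontent.com"
}

resource "aws_iam_role" "terraform_deploy" {
  name = "learnixo-dev-terraform-deploy"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRoleWithWebIdentity"
      Principal = { Federated = data.aws_iam_openid_connect_provider.github.arn }
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          # Restrict which repo (and optionally branch) may assume this role
          "token.actions.githubusercontent.com:sub" = "repo:learnixo/learnixo-serverless:*"
        }
      }
    }]
  })
}
```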

Step 6: Deploy and Verify

Bash
# 1. Bootstrap state infrastructure (one-time)
cd infrastructure/bootstrap
terraform init && terraform apply

# 2. Deploy dev
cd infrastructure/environments/dev
terraform init
terraform plan
terraform apply

# 3. Verify
DEV_URL=$(terraform output -raw api_url)
curl "$DEV_URL/health"
# → {"status": "ok", "environment": "dev"}

curl -X POST "$DEV_URL/items" \
  -H "Content-Type: application/json" \
  -d '{"name": "Test Item", "description": "Created via API"}'
# → {"pk": "ITEM#abc-123", "name": "Test Item", ...}

curl "$DEV_URL/items"
# → {"items": [...]}

# 4. Push to GitHub → CI runs tests → CD deploys to dev → manual approval → prod
git add -A && git commit -m "feat: initial serverless stack"
git push origin main

What You've Built

| Component | Technology |
|-----------|------------|
| API compute | AWS Lambda (Python 3.12) |
| HTTP routing | API Gateway HTTP API |
| Database | DynamoDB (serverless NoSQL) |
| Infrastructure | Terraform (3 environments) |
| State management | S3 + DynamoDB locking |
| CI/CD | GitHub Actions with OIDC |
| Auth | AWS IAM roles (no long-lived keys) |
| Observability | CloudWatch Logs + X-Ray tracing |
| Environments | dev → staging → prod promotion |

This is production-grade infrastructure you can clone, modify, and deploy for real projects. The patterns here — OIDC auth, environment promotion, module reuse, separate state per environment — are exactly what professional teams use.

Congratulations — you've completed the Terraform & AWS DevOps course.
