
Hosting a static website on AWS S3 using Route 53 and Terraform

None of this is new. S3 has been around since 2006, Route 53 since 2010, and Terraform since 2014. This isn’t even the best way to do it; this is simply iteration one of getting a website accessible, and there are many improvements to be made here.

Starting with a basic index.html, we need to serve it to the world. There are a few things needed to make that happen:

  • The HTML that we want to share with the world
  • Somewhere to store the HTML
  • A domain name of our own

Oh and, of course, it needs to be possible to automate this so I have consistent deployments. Terraform makes that incredibly easy.

Terraform setup

First, some Terraform basics are needed. This is my providers.tf for reference:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.91.0"
    }
  }

  required_version = "= 1.11.2"
}

provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}

There are a couple of things to point out here:
1 – I’ve pinned the versions of both Terraform and the AWS provider to the releases available at the time of writing
2 – There’s already some magic with var.* – oooh, exciting
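As an aside: passing access keys in as variables works, but the provider can also find credentials on its own. A minimal sketch, assuming you keep credentials in environment variables or a shared credentials file instead:

```hcl
provider "aws" {
  region = var.aws_region

  # With no explicit keys, the AWS provider falls back to its standard
  # credential chain: the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
  # environment variables, a profile in ~/.aws/credentials, or an
  # attached IAM role when running on AWS infrastructure.
}
```

This keeps secrets out of Terraform files entirely, at the cost of configuring them elsewhere.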

Variables

By convention, variables are defined in variables.tf, and terraform.tfvars provides values for them. If you don’t provide values, you will be prompted for them every time you run Terraform. That means you can’t apply your Terraform in an automated way, and it generally gets quite tedious quite quickly.

This is the current state of my variables.tf:

variable "aws_region" {
  type        = string
  description = "AWS region to deploy to"
}

variable "aws_access_key" {
  type        = string
  description = "The access key to use to auth to AWS"
}

variable "aws_secret_key" {
  type        = string
  description = "The secret key to use to auth to AWS"
}

Here, the variable name is everything in quotes after the word variable, and the variable’s parameters are defined between the braces. You can see that all of the above variables are strings. There are various other types available, but I only need strings.
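For illustration, here’s a hedged sketch of a couple of other variable features – a non-string type with a default (the name here is hypothetical), and the sensitive flag, which is worth considering for the secret key:

```hcl
# Hypothetical example of a non-string variable with a default value;
# a default means Terraform never prompts for it.
variable "index_ttl_seconds" {
  type        = number
  description = "Example of a number variable"
  default     = 300
}

# The earlier aws_secret_key definition, with sensitive = true added
# so the value is redacted in plan and apply output.
variable "aws_secret_key" {
  type        = string
  description = "The secret key to use to auth to AWS"
  sensitive   = true
}
```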

Then my terraform.tfvars looks like:

aws_region     = "My AWS region of choice"
aws_access_key = "My AWS access key"
aws_secret_key = "My AWS secret key"

And it’s as simple as that, really. Filling in those values means I won’t be prompted for them. There will be situations where you’d want to be prompted for input, in which case you’d leave those values out of your terraform.tfvars.

So now that our provider is set up, let’s do something with it!

Creating an S3 bucket

My s3-buckets.tf looks like:

resource "aws_s3_bucket" "benquick_uk_bucket" {
  bucket = var.s3_bucket_name

  tags = {
    host = var.route_53_domain_name
  }
}

data "aws_s3_bucket" "current_bucket" {
  bucket = aws_s3_bucket.benquick_uk_bucket.bucket
}

resource "aws_s3_bucket_website_configuration" "benquick_uk_website_bucket" {
  bucket = data.aws_s3_bucket.current_bucket.bucket

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

This will create my bucket in S3. But! The default access policy is that no public access is allowed, so we need to change that with s3-acl.tf and s3-bucket-policy.tf.


My s3-acl.tf looks like:

resource "aws_s3_bucket_acl" "bucket_acl" {
  bucket     = data.aws_s3_bucket.current_bucket.id
  acl        = "public-read"
  depends_on = [aws_s3_bucket_ownership_controls.s3_bucket_acl_ownership]
}

My s3-bucket-policy.tf looks like:

resource "aws_s3_bucket_ownership_controls" "s3_bucket_acl_ownership" {
  bucket = data.aws_s3_bucket.current_bucket.id
  rule {
    object_ownership = "BucketOwnerPreferred"
  }
  depends_on = [aws_s3_bucket_public_access_block.s3_public_access]
}

resource "aws_s3_bucket_public_access_block" "s3_public_access" {
  bucket = data.aws_s3_bucket.current_bucket.id

  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

resource "aws_s3_bucket_policy" "s3_bucket_policy" {
  bucket = data.aws_s3_bucket.current_bucket.id
  policy = data.aws_iam_policy_document.iam-policy-1.json
}
data "aws_iam_policy_document" "iam-policy-1" {
  statement {
    sid    = "AllowPublicRead"
    effect = "Allow"
    resources = [
      "arn:aws:s3:::${var.route_53_domain_name}",
      "arn:aws:s3:::${var.route_53_domain_name}/*",
    ]
    actions = ["s3:GetObject"]
    principals {
      type        = "*"
      identifiers = ["*"]
    }
  }

  depends_on = [aws_s3_bucket_public_access_block.s3_public_access]
}

Adding content to S3

An S3 bucket isn’t much use if it doesn’t have any data in it. I handle the uploads in an s3-objects.tf file:

module "template_files" {
  source = "hashicorp/dir/template"

  base_dir = "${path.module}/website-files"
}

resource "aws_s3_object" "s3_uploads" {
  for_each     = module.template_files.files
  bucket       = data.aws_s3_bucket.current_bucket.bucket
  key          = each.key
  content_type = each.value.content_type
  source       = each.value.source_path
  content      = each.value.content
  etag         = each.value.digests.md5
}

This simply uploads any file found in $PWD/website-files into my previously defined bucket. Each file is given a content_type, and a checksum is calculated. Easy!
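As an aside, if you’d rather not pull in the hashicorp/dir/template module, a similar upload loop can be sketched with the built-in fileset() function. This variant is an assumption on my part rather than what I’m actually running, and it has a notable gap around content types:

```hcl
resource "aws_s3_object" "s3_uploads_alt" {
  # fileset() walks website-files recursively and yields relative paths
  for_each = fileset("${path.module}/website-files", "**")

  bucket = data.aws_s3_bucket.current_bucket.bucket
  key    = each.value
  source = "${path.module}/website-files/${each.value}"
  etag   = filemd5("${path.module}/website-files/${each.value}")

  # Unlike the module, this sets no content_type automatically; without
  # one, S3 serves files as binary/octet-stream, so you would need to
  # map types from file extensions yourself.
}
```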

All of this is well and good, but to access this newly uploaded website, people will have to navigate to http://${var.s3_bucket_name}.s3-website.${var.aws_region}.amazonaws.com. That works, but my bucket name as a subdomain of amazonaws.com doesn’t look nice.
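Rather than assembling that URL by hand, an output can surface it after every apply. A small sketch, assuming a hypothetical outputs.tf alongside the other files:

```hcl
output "website_endpoint" {
  description = "The S3 static website endpoint for this bucket"
  value       = aws_s3_bucket_website_configuration.benquick_uk_website_bucket.website_endpoint
}
```

After terraform apply, the endpoint is printed at the end of the run and is also available via terraform output.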

Route 53

Let’s add some records to my hosted zone in Route 53, using r53-records.tf:

data "aws_route53_zone" "benquick_uk_zone" {
  name = var.route_53_domain_name
}

resource "aws_route53_record" "root" {
  zone_id = data.aws_route53_zone.benquick_uk_zone.zone_id
  name    = var.route_53_domain_name
  type    = "A"

  alias {
    name                   = "s3-website.eu-west-2.amazonaws.com."
    zone_id                = aws_s3_bucket.benquick_uk_bucket.hosted_zone_id
    evaluate_target_health = false
  }
}
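One thing to watch with the record above: the alias name hardcodes eu-west-2, and the s3-website endpoint format even varies between regions (some use a dash rather than a dot after s3-website). The website configuration resource exports the correct value, so a region-agnostic sketch of the same record, shown here as a drop-in replacement, could look like:

```hcl
resource "aws_route53_record" "root" {
  zone_id = data.aws_route53_zone.benquick_uk_zone.zone_id
  name    = var.route_53_domain_name
  type    = "A"

  alias {
    # website_domain resolves to the right s3-website endpoint for
    # whichever region the bucket actually lives in
    name                   = aws_s3_bucket_website_configuration.benquick_uk_website_bucket.website_domain
    zone_id                = aws_s3_bucket.benquick_uk_bucket.hosted_zone_id
    evaluate_target_health = false
  }
}
```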

You’ll never guess what’s now accessible by visiting http://benquick.uk!

Future improvements

This is clearly just a proof of concept. Nothing should be hosted on plain HTTP these days. The Terraform can also be improved so that there’s genuinely reusable code rather than copy-and-paste re-use. And the way this is deployed, by running terraform apply locally, is pretty poor and won’t scale. All of this is solvable, but for another day.
