Creating Complete Infrastructure on AWS using Terraform

Abhishek Kumar
6 min read · Jun 16, 2020

These days the cloud is booming, and you have probably used AWS, GCP, Microsoft Azure or some other cloud computing platform through its web console or CLI. That works, but it is a slow, manual way of accessing a provider’s services. What if we automated everything, from creating a key pair, to launching an instance, to configuring it as a web server? And that’s exactly what we are going to do today.

And the best thing is that there is a tool built for this: Terraform. We are going to write Terraform code in HashiCorp Configuration Language (HCL). Here I’ll be using AWS, but you can use Terraform with other service providers too, without learning a new tool for each one. That is the beauty of Terraform: it supports many providers (through plug-ins).

So let’s see what we are going to do today:

  1. Setting-up the provider (AWS in this case) in Terraform.
  2. Creating a key-pair for the instance.
  3. Creating a security group and adding some rules in it.
  4. Launching an EC2 instance.
  5. Launching an EBS volume, mounting it on the instance, and cloning the GitHub repository into it.
  6. Creating an S3 bucket, and storing the static content from GitHub in the bucket.
  7. Creating a CloudFront distribution for the static content and replacing the old URLs in the code with the CloudFront URL (as and when needed).
  8. Creating a snapshot of the volume.
  9. Launching the website that we hosted on our web server.

Before beginning, make sure you have downloaded Terraform and created a workspace for your Terraform (.tf) files.

Step 1: Let’s set up the provider first

To set up the provider in Terraform, we use the provider block. We also have to specify whose account all these steps will run against. You can put the secret key and access key in the code itself, but that’s not preferred, as you might share this file with someone else. As an alternative, I am going to configure a named profile on my local system and then simply reference that profile in my Terraform code.

After configuring it, you can reference that profile in the provider block and set the region accordingly.
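As a minimal sketch (the profile name here is a placeholder; substitute the one you created with `aws configure --profile <name>`):

```hcl
# Assumes a named profile was already created locally with:
#   aws configure --profile myprofile
# "myprofile" is a placeholder — use your own profile name.
provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}
```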

Step 2: Now create a key pair

In this step we are going to create a key pair for the EC2 instance we will launch in step 4. We use the tls_private_key resource to generate a secure private key and encode it as PEM. Then the aws_key_pair resource creates an AWS key pair from the public key, which we can use when launching the instance.
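A sketch of the two resources described above (the resource and key names are illustrative):

```hcl
# Generate an RSA private key locally (uses the "tls" provider plug-in).
resource "tls_private_key" "web_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Register the corresponding public key with AWS as a key pair.
resource "aws_key_pair" "web_key" {
  key_name   = "web-key" # illustrative name
  public_key = tls_private_key.web_key.public_key_openssh
}
```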

Step 3: Then create a security group

Here we are going to create a security group using the aws_security_group resource. You can set the name, description and vpc_id accordingly. Then we set the ingress/inbound rules. I have added rules for SSH and HTTP requests (since we are setting up a web server). For the egress/outbound rules, I haven’t set any particular restriction.
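A sketch of such a security group (omitting vpc_id places it in the default VPC; the name is illustrative):

```hcl
# Security group allowing SSH (22) and HTTP (80) in, and everything out.
resource "aws_security_group" "web_sg" {
  name        = "web-sg"
  description = "Allow SSH and HTTP"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```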

Step 4: It’s time to launch the instance

We are going to launch an instance with the aws_instance resource in the ap-south-1 region (as specified inside the provider block) using the EC2 service of AWS. You can select the ami and instance_type according to your requirements. Here we use the key created in step 2 and the security group created in step 3. Next, since we want this instance to act as a web server, we install the required software on it using a provisioner. Because we are working on a remote machine, we use the remote-exec provisioner. But before that, we’ll set up an SSH connection to the instance using a connection block (remember, we allowed SSH in the ingress rules of the security group).

One point to note: we have used depends_on here, which tells Terraform that this resource depends on other resources. That way, we can enforce a relative order of execution.
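Putting steps 2–4 together, a sketch might look like this (the AMI ID is a placeholder, and the key/security-group names assume the illustrative names from the earlier steps):

```hcl
resource "aws_instance" "web" {
  ami             = "ami-xxxxxxxx" # placeholder — pick an AMI for your region
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.web_key.key_name
  security_groups = [aws_security_group.web_sg.name] # assumes default VPC

  # SSH connection used by the provisioner below.
  connection {
    type        = "ssh"
    user        = "ec2-user" # default user on Amazon Linux
    private_key = tls_private_key.web_key.private_key_pem
    host        = self.public_ip
  }

  # Install and start the web-server software.
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }

  depends_on = [aws_security_group.web_sg, aws_key_pair.web_key]

  tags = {
    Name = "web-server"
  }
}
```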

Step 5: Launch an EBS volume and attach it to the instance

First we’ll store the size of the volume in a variable (you will be asked for it while running the code). Then we’ll use the aws_ebs_volume resource to create an EBS volume, but note that the volume must be created in the same availability zone as the instance we want to attach it to. For attaching, we use the aws_volume_attachment resource. We have set force_detach to true so that it won’t create a problem while destroying (although this is not preferred in general).

Then we use the null_resource resource to connect to the instance remotely. After that, we just format and mount the EBS volume on the instance. Then we’ll clone the repository containing all the files/images into the /var/www/html folder (because we are going to use the Apache httpd software).
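A sketch of this step, assuming the instance and key from earlier steps are named aws_instance.web and tls_private_key.web_key (the device name and repo URL are placeholders):

```hcl
# Size is read interactively (or via -var) since no default is given.
variable "ebs_size" {
  description = "Size of the EBS volume in GiB"
}

# Create the volume in the same AZ as the instance.
resource "aws_ebs_volume" "web_vol" {
  availability_zone = aws_instance.web.availability_zone
  size              = var.ebs_size
}

resource "aws_volume_attachment" "web_vol_attach" {
  device_name  = "/dev/sdh" # may appear as /dev/xvdh inside the instance
  volume_id    = aws_ebs_volume.web_vol.id
  instance_id  = aws_instance.web.id
  force_detach = true
}

# Format, mount, and clone the site content onto the new volume.
resource "null_resource" "mount_and_clone" {
  depends_on = [aws_volume_attachment.web_vol_attach]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.web_key.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/<user>/<repo>.git /var/www/html/", # placeholder repo
    ]
  }
}
```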

Step 6: Now create an S3 bucket

Here we’ll create an S3 bucket using the aws_s3_bucket resource and set its access control list (acl) to public-read (since we are going to use it for the CloudFront distribution). Then we’ll copy the images into the S3 bucket using the local-exec provisioner, but before that, we need to clone that GitHub repo to our local system, hence we’ll use depends_on here.
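A sketch of this step (the bucket name, repo URL and object name are placeholders; in AWS provider v4+ the upload resource is aws_s3_object rather than aws_s3_bucket_object):

```hcl
resource "aws_s3_bucket" "web_bucket" {
  bucket = "my-unique-web-bucket" # bucket names must be globally unique
  acl    = "public-read"
}

# Clone the repo locally so the images can be uploaded from disk.
resource "null_resource" "clone_locally" {
  provisioner "local-exec" {
    command = "git clone https://github.com/<user>/<repo>.git site-content" # placeholder
  }
}

# Upload an image to the bucket.
resource "aws_s3_bucket_object" "image" {
  depends_on = [aws_s3_bucket.web_bucket, null_resource.clone_locally]

  bucket = aws_s3_bucket.web_bucket.bucket
  key    = "image.png" # placeholder object name
  source = "site-content/image.png"
  acl    = "public-read"
}
```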

Step 7: Creating CloudFront Distribution

Next, we’ll create a CloudFront distribution for the static data of our cloned website. We’ll use the aws_cloudfront_distribution resource for this purpose. You can set the various options according to your needs; here we have shown what we set. Once the distribution is ready, we’ll replace the image’s old URL in the project with the new CloudFront URL for the image, using the sed command.
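A sketch of a minimal distribution plus the sed rewrite, assuming the bucket and instance from earlier steps are named web_bucket and web (OLD_IMAGE_URL and the file paths are placeholders; some block requirements vary with the AWS provider version):

```hcl
resource "aws_cloudfront_distribution" "web_cdn" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.web_bucket.bucket_regional_domain_name
    origin_id   = "s3-web-origin"
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-web-origin"
    viewer_protocol_policy = "allow-all"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

# Rewrite the old image URL in the page to point at CloudFront.
resource "null_resource" "update_image_url" {
  depends_on = [aws_cloudfront_distribution.web_cdn]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.web_key.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo sed -i 's|OLD_IMAGE_URL|https://${aws_cloudfront_distribution.web_cdn.domain_name}/image.png|g' /var/www/html/index.html",
    ]
  }
}
```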

Step 8: Creating a snapshot of the volume

Next, we’ll create a snapshot of the volume using the aws_ebs_snapshot resource. We’ll set the depends_on accordingly.
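A sketch, assuming the volume from step 5 is named aws_ebs_volume.web_vol (set depends_on to whichever resource writes the data you want captured):

```hcl
# Snapshot the web-server volume once the content is in place.
resource "aws_ebs_snapshot" "web_snapshot" {
  volume_id = aws_ebs_volume.web_vol.id

  tags = {
    Name = "web-volume-snapshot"
  }
}
```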

Step 9: Finally, it’s time to launch it in a single click

We are going to use the local-exec provisioner for automatically launching the website. Moreover, to see the IP address and some other details, we’ll use an output block. So, it’s time for terraform apply.
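A sketch of both pieces, assuming the instance is named aws_instance.web (swap the browser command for whatever works on your OS):

```hcl
# Print the public IP after `terraform apply`.
output "instance_public_ip" {
  value = aws_instance.web.public_ip
}

# Open the site in a browser on the local machine.
resource "null_resource" "open_site" {
  depends_on = [aws_instance.web]

  provisioner "local-exec" {
    command = "firefox http://${aws_instance.web.public_ip}" # placeholder browser command
  }
}
```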

Additional Step

This step is needed when we destroy the whole infrastructure. But why do we need additional statements? A simple example: to delete an S3 bucket, it needs to be empty, so we’ll first make it empty using additional statements so that it can be destroyed with the terraform destroy command.
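One way to sketch this is a destroy-time local-exec provisioner that empties the bucket with the AWS CLI (assumes the bucket from step 6 is named web_bucket and the AWS CLI is configured locally; destroy-time provisioners may only reference self, so the bucket name is passed through a trigger):

```hcl
resource "null_resource" "empty_bucket_on_destroy" {
  triggers = {
    bucket = aws_s3_bucket.web_bucket.bucket
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws s3 rm s3://${self.triggers.bucket} --recursive"
  }
}
```

Alternatively, setting force_destroy = true on the aws_s3_bucket resource lets Terraform delete a non-empty bucket itself.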

Now we can create the complete infrastructure using Terraform in just one click.

You’ll find the entire code at:

Connect With Me On LinkedIn:
