Managing IT infrastructure used to be a challenging and hectic job. System administrators had to manually manage all the underlying services, hardware, and software needed for an entire application system to work.

In recent years, the IT industry has seen a revolution in tools and technologies that automate the process of deploying and managing infrastructure. This practice is what we call Infrastructure as Code, or IaC.

Writing infrastructure definitions in configuration files or code is referred to as Infrastructure as Code. This code acts as the single source of truth for all the components required in the infrastructure, such as load balancers, virtual machines, gateways, etc.
Various IaC tools are available, each serving its own purpose, so the question becomes: which tool best suits your goal?
Some of the popular IaC tools are:

  1. Terraform
  2. Puppet
  3. Ansible
  4. Chef
  5. CloudFormation

How to manage your Infrastructure as Code using Terraform?

Terraform is a tool that lets you build, manage, and version your infrastructure using configuration files. It is a convenient tool that enables you to deploy a wide range of resources, such as physical servers, networking interfaces, and load balancers, on various platforms such as AWS, GCP, and Azure.
The same code can be replicated across different environments, ensuring that the same infrastructure is applied everywhere.
For example, say you are working with AWS as your cloud provider and would like to spin up several EC2 instances of a specific type. You would define the type and number of instances in a configuration file, and Terraform would use that to communicate with the AWS API to create those instances. The same file could then be used to adjust the configuration, for example, increasing or decreasing the number of instances.
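As a rough sketch of that example (the resource name and AMI ID here are hypothetical placeholders), the desired number and type of instances can be expressed directly in a configuration file:

```hcl
# Sketch: three t3.micro instances. The AMI ID is a placeholder,
# not a real image; substitute one from your AWS account/region.
resource "aws_instance" "app" {
  count         = 3                       # number of instances to create
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"
}
```

Changing `count` to 5 and re-running terraform apply would create two more instances; lowering it would destroy the surplus ones.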
Because infrastructure typically includes many more components than just compute instances, Terraform provides a feature called modules to combine multiple infrastructure components into large, reusable, and shareable chunks.

Why should you use Terraform to manage your Infrastructure?

Terraform is an orchestrator and not an automation tool

What do we mean by that? Automation focuses on a single task, whereas orchestration means creating a workflow and combining multiple automation tasks.

Terraform follows a declarative approach and not a procedural

In a declarative approach, you describe what you need, not how it is to be done. You simply state what you want your infrastructure to look like, and Terraform works out the necessary steps to get it done.

Multiple Provider support

Terraform supports multiple public cloud providers, such as Google Cloud Platform, AWS, and Azure.
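Providers are declared in the configuration itself. The sketch below (version constraints and the GCP project ID are illustrative, not prescriptive) shows AWS and Google Cloud configured side by side in one configuration:

```hcl
# Illustrative multi-provider setup; version constraints are examples only.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

provider "google" {
  project = "my-gcp-project" # hypothetical project ID
  region  = "us-central1"
}
```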

What are the features of Terraform?

  1. Terraform allows managing Infrastructure in the form of configuration files, so it becomes relatively easy to manage multi-environment Infrastructure.
  2. Multi-Cloud platform support
  3. A simple syntax that is easy to understand and modify
  4. Execution plan: Terraform has a planning stage that gives a blueprint of what changes will be deployed in Infrastructure before deploying it.
  5. Terraform builds a graph of all the resources; this allows it to create and modify all non-dependent resources in parallel, adding efficiency to the process.
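The resource graph is built from references between resources. In the sketch below (names and the AMI ID are hypothetical), the instance depends on the security group simply because it references its id, so Terraform knows to create the security group first:

```hcl
# Terraform infers the dependency graph from references: the instance
# below depends on the security group because it uses its id.
resource "aws_security_group" "web" {
  name = "web-sg"
}

resource "aws_instance" "web" {
  ami                    = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]
}
```

Resources with no references between them have no edge in the graph and can be provisioned in parallel.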

How does Terraform work?

Terraform is split into two parts: Terraform Core and Terraform plugins. Terraform Core communicates with the plugins, and each plugin exposes an implementation for a specific service, such as AWS.

What is the Terraform Core?

It’s a binary written in the Go programming language. The compiled binary corresponds to the terraform CLI.
Terraform Core is responsible for:

  • Reading the configuration files, i.e., IaC.
  • State management of various resources.
  • Construction of the resource graph.
  • Execution of the plan.
  • Communication with plugins.

What are the terraform Plugins?

Terraform Plugins are executable binaries, also written in Go, that Terraform Core communicates with over RPC. Each provider (such as AWS or Azure) is implemented as a plugin.

Requirements for Terraform setup

  • Terraform binary
  • Configuration files (in which you write IaC)
  • State file

Terraform configuration files (*.tf)

You write every Terraform configuration in files with the .tf extension. In configuration files you’ll mostly be dealing with resources. Resources are the services or components of your infrastructure, such as load balancers, virtual machines, etc. You pass various arguments to configure each resource you want to set up.
For example:
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "HelloWorld"
  }
}
The above configuration file defines a resource, i.e., an EC2 instance in AWS. Inside the resource block, we have passed the desired arguments:
  • ami - the AMI ID to be used for our EC2 instance.
  • instance_type - the instance type (here, t3.micro).
  • tags - tags for our EC2 instance.
Other resources can be defined in the same way.
We can now use terraform apply to deploy this resource on AWS.

What is Terraform state?

The most crucial part of Terraform is the state file (terraform.tfstate), which stores all the information related to the deployed infrastructure and the mapping between its various components. Terraform refers to this state file while deploying infrastructure to check dependencies among resources.

Why is a state file required?

While deploying resources, for example the EC2 instance above, you don't specify the ARN or instance ID. These get assigned dynamically, so Terraform needs to store such information in order to detect changes internally if, for example, you update the instance_type of the EC2 instance in the future.
The state file is the single source of truth for your infrastructure, so you need to keep it safe and ensure that your team always refers to the latest version of terraform.tfstate to avoid conflicts. By default, the state file is stored locally, which works only if a single developer manages the infrastructure. The best practice is to keep the terraform.tfstate file in a remote backend, such as an S3 bucket.
The following example shows a backend configuration using an S3 bucket.
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}

Backend Initialisation

An initialization step needs to be performed every time the backend configuration changes; the command used is terraform init.
This command does the following:
  • Initializes the backend.
  • Configures Terraform modules (covered later).

Terraform Workflow

  • Write IaC, i.e., configuration files.
  • Preview the changes using terraform plan.
  • Deploy the changes using terraform apply.
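The steps above map directly to the CLI. A minimal sketch of the workflow, assuming the current directory contains your .tf files:

```shell
# Typical Terraform workflow, run from the configuration directory.
terraform init   # initialize the backend and download providers/modules
terraform plan   # preview the changes without applying anything
terraform apply  # apply the changes (prompts for confirmation)
```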

Terraform plan

The plan represents the list of changes that your IaC will make. It doesn't actually change anything; it just shows what will be created, deleted, or modified. Therefore, it is safe to run terraform plan multiple times.

Interpreting Terraform plan

Resources can be identified in the plan by their addresses. For example:
Syntax: <resource_type>.<resource_name>
Example: aws_instance.this
Resources inside a module are addressed as:
Syntax: module.<module_name>.<resource_type>.<resource_name>
Example: module.my_virtual_machines.aws_instance.this
Terraform plan marks the following actions for resources:
  • [+] Add - creation of a new resource.
  • [-] Destroy - removal of an existing resource.
  • [~] Modify in-place - an existing resource will be updated, with some parameters changed or added. The plan shows exactly which parameters will be modified or added.
  • [-/+] Replace - Terraform will delete and re-create the resource. This happens when a parameter cannot be modified in place.

Sample Terraform Plan

An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_cloudwatch_log_group.log_group_containers[0] will be created
  + resource "aws_cloudwatch_log_group" "log_group_containers" {
      + arn               = (known after apply)
      + id                = (known after apply)
      + name              = "/eks/dev-scipher-fx-eks-cluster-2/containers"
      + retention_in_days = 14
      + tags              = {
          + "Environment" = "dev"
          + "Service"     = "EKS Cluster"
          + "Terraform"   = "true"
        }
    }

  # aws_cloudwatch_log_group.log_group_host[0] will be created
  + resource "aws_cloudwatch_log_group" "log_group_host" {
      + arn               = (known after apply)
      + id                = (known after apply)
      + name              = "/eks/dev-scipher-fx-eks-cluster-2/host"
      + retention_in_days = 14
      + tags              = {
          + "Environment" = "dev"
          + "Service"     = "EKS Cluster"
          + "Terraform"   = "true"
        }
    }

In the above example, the plan shows:
  • Two new CloudWatch log groups being created in AWS.
  • The log groups' retention_in_days being set to 14 days.
  • Tags being added to the log groups.

Terraform apply

When you apply a given plan, Terraform locks the state, performs all the actions determined in the planning stage, and deploys the resources. Some resources might take time to provision; for example, an AWS DocumentDB cluster may take up to 15 minutes. Until the apply completes, Terraform blocks all other operations on the state.
On completion, terraform updates the state file.

Terraform Modules

All the configuration files within a folder together form a module. Modules allow us to create reusable code that can be invoked again with the required parameters. Think of them as functions in a programming language: we call the function wherever needed and pass the necessary set of parameters.
Using a module in a configuration file is similar to using a resource, as we have seen above; the only difference is the "module" block:
module "moduleName" {
  source = "module/path"

  # CONFIG: module arguments go here
}

The source path tells Terraform where the module can be found.
The CONFIG section consists of the set of variables or arguments that are passed to the module and belong to it.
For example:
If the module exposes a variable log_retention_period, we can pass different values for different environments.
For the Dev environment:
module "cloud_watch_log_groups" {
  source               = "module/path"
  log_retention_period = 14
}

Similarly, for the Prod environment:
module "cloud_watch_log_groups" {
  source               = "module/path"
  log_retention_period = 21
}

Thus, as you can see, we can reuse the same module to deploy Infrastructure for multiple environments.
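Inside the module itself, that argument must be declared as a variable. A minimal sketch of what the module's code might contain (the file name, default value, and log group name are illustrative assumptions, not taken from the article's module):

```hcl
# Inside the module (e.g. in its variables.tf): the variable that
# callers set per environment. The default here is illustrative.
variable "log_retention_period" {
  description = "Number of days to retain CloudWatch logs"
  type        = number
  default     = 14
}

# The module's resource consumes the variable via var.<name>.
resource "aws_cloudwatch_log_group" "this" {
  name              = "/eks/example/containers" # hypothetical name
  retention_in_days = var.log_retention_period
}
```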

Terraform Best Practices

Code structure

It is better to split Terraform code logically across several files:

  • main.tf - calls modules, locals, and data sources to create all resources.
  • variables.tf - contains declarations of the variables used in main.tf.
  • outputs.tf - contains outputs from the resources created in main.tf.
  • terraform.tfvars - used to pass values to variables.

Remote backend

Use a remote backend such as S3 to store tfstate files. Backends have two main features: state locking and remote state storage. Locking prevents two executions from happening at the same time. And remote state storage allows you to put your state in a remote yet accessible location.

Naming conventions

General conventions

  • Use _ (underscore) instead of - (dash) in all resource names, data source names, variable names, and outputs.
  • Beware that existing cloud resources have many hidden restrictions in their naming conventions.
  • Use only lowercase letters and numbers.

Resource and data source arguments

Do not repeat the resource type in the resource name (neither partially nor completely):
Good: resource "aws_route_table" "public" {}
Bad: resource "aws_route_table" "public_route_table" {}
Include the tags argument, if supported by the resource, as the last real argument, followed by depends_on and lifecycle if necessary. All of these should be separated by a single empty line.
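Putting those ordering rules together, a resource block might be laid out like this (the resource, AMI ID, and gateway reference are hypothetical, chosen only to illustrate the ordering):

```hcl
# Illustrative argument ordering: regular arguments first, then tags
# as the last real argument, then depends_on and lifecycle,
# each separated by a single empty line.
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "web"
  }

  depends_on = [aws_internet_gateway.main] # hypothetical dependency

  lifecycle {
    create_before_destroy = true
  }
}
```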


Generate README for each module with input and output variables

Being open source is one of the biggest reasons Terraform has evolved and gathered a strong community of developers. With the ability to manage infrastructure across multiple providers and multiple environments, Terraform has built a solid base among IT professionals.

  1. How AWS supports Infrastructure as Code - AWS CloudFormation
  2. Azure Resource Manager (ARM) - Infrastructure as Code on Azure