In this blog, you will learn how to deploy an AWS Auto Scaling group with an Application Load Balancer using Terraform, step by step.
We are going to build the following in this guide.
- AWS Autoscaling group spanning three subnets.
- IAM role attached to Autoscaling instances to access other AWS services
- Application Load Balancer attached to the Autoscaling group
Throughout this article, we will be using the following short names.
- ALB – Application load balancer
- ASG – Autoscaling Group
Note: If you are not familiar with AWS Load Balancer and Autoscaling Group concepts, we suggest you understand them before following this setup.
Prerequisites
To follow this guide you need to have the following.
- The latest Terraform binary is installed and configured in your system.
- AWS CLI is installed and configured with a valid AWS account with permission to deploy the autoscaling group and application load balancer.
- If you are using an ec2 instance to run Terraform, ensure you attach an IAM role with permission to create ASG and ALB.
Setup Architecture & Overview
Here is the high-level architecture of the setup we are going to create.
Here is the high-level overview of the AWS resources and components created by this setup.
- IAM role with the required policies, attached to an IAM instance profile that is in turn attached to every instance in the autoscaling group.
- The auto-scaling group manages a specified number of instances and uses the launch template with the required configurations to launch an instance.
- The Application Load Balancer sends traffic to the ASG instances. The setup creates a target group and an LB listener that listens on port 80 for HTTP traffic and forwards it to the target group (the ASG instances) to distribute the load.
- Health checks are performed on instances in the target group. If an instance fails its health check, the ASG terminates it and launches a replacement; once the new instance is healthy, the load balancer forwards traffic to it.
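The listener-to-target-group flow described above can be sketched in Terraform roughly as follows. This is a simplified illustration with placeholder resource names, not the repository's exact module code:

```hcl
# Illustrative sketch of the ALB, target group, and listener wiring described
# above; resource names and values are placeholders, not the repo's module code.
resource "aws_lb" "app" {
  internal           = false
  load_balancer_type = "application"
  subnets            = var.alb_subnets
}

resource "aws_lb_target_group" "app" {
  port     = 8080
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  health_check {
    path                = "/"
    port                = 8080
    protocol            = "HTTP"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  # Forward all HTTP traffic on port 80 to the target group
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```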
We have separate security groups for ALB and ASG EC2 instances. For ASG, traffic on port 8080 will be accepted only from the ALB. We achieve this by adding the security group ID of the ALB as the source traffic for the ASG security group. Also, we allow port 22 access only from a specific subnet.
Here is a high-level view of how ALB and ASG security groups are designed.
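To make the source-security-group rule concrete, here is a minimal sketch. It is illustrative only; the actual rules are generated by the modules/security-group module from the tfvars values, and the SSH CIDR is a placeholder:

```hcl
# Illustrative sketch of the ALB and instance security group design.
resource "aws_security_group" "alb" {
  vpc_id = var.vpc_id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # HTTP from anywhere
  }
}

resource "aws_security_group" "instance" {
  vpc_id = var.vpc_id

  # App port 8080 accepted only from the ALB's security group
  ingress {
    from_port       = 8080
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  # SSH only from a specific subnet (placeholder CIDR)
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }
}
```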
Terraform ALB and ASG Code Repository
ALB and ASG terraform code is a part of the terraform AWS repository. Clone it to your workstation to follow the guide.
git clone https://github.com/techiescamp/terraform-aws.git
Fork and clone the repository if you intend to reuse and make changes as per your requirements.
Note: When using Terraform in production, it has to go through the infra-CI review process using tools like Tflint, Terratest, Checkov, etc.
Terraform AWS ALB and ASG Provisioning Workflow
The ALB and ASG terraform script is structured in the following way.
├── apps
│ ├── alb-asg
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
├── infra
│ └── iam-policies
│ └── alb-asg.json
├── modules
│ ├── asg
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ ├── iam-policy
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ ├── alb
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ └── security-group
│ ├── main.tf
│ ├── outputs.tf
│ └── variables.tf
└── vars
└── dev
└── alb-asg.tfvars
- The vars folder contains the variables file named alb-asg.tfvars.
- The apps/alb-asg folder contains the parent terraform module (main.tf) that calls the child modules under the modules folder.
- The infra/iam-policies folder contains the IAM JSON policy document named alb-asg.json that will be added to the instance profile.
The child modules contain the following resources
- IAM Role: For ec2 instance in the autoscaling group to access other AWS services.
- Security Group: To allow & deny access to/from the load balancer and ec2 instance.
- Load Balancer: It distributes incoming traffic to EC2 instances using the Round Robin algorithm.
- Target Group: The collection of EC2 instances that the load balancer distributes traffic across.
- Listener: It monitors incoming requests on a specified port and forwards them to the target group.
- Auto Scaling Group: It automatically scales EC2 instances based on demand to maintain application availability, keeps track of instance health, and replaces failing instances.
- Launch Template: A template that contains the AMI details, keypair, etc. This template will be applied to the autoscaling group instances.
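How the launch template and autoscaling group fit together can be sketched roughly as below. The actual, more complete code lives in modules/asg; resource names here are placeholders:

```hcl
# Rough sketch of the launch template / ASG relationship; placeholder names.
resource "aws_launch_template" "app" {
  image_id      = var.ami_id
  instance_type = var.instance_type
  key_name      = var.key_name
  user_data     = base64encode(var.user_data)
}

resource "aws_autoscaling_group" "app" {
  min_size            = var.min_size
  max_size            = var.max_size
  desired_capacity    = var.desired_capacity
  vpc_zone_identifier = var.asg_subnets

  # Register launched instances with the ALB target group
  target_group_arns = [var.alb_target_group_arn]

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}
```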
Application AMI
For this demo, we will be using a Java application AMI that runs on port 8080. You need an AMI with some application running to deploy it in the autoscaling group.
If you want to use the same AMI we have used in this guide, you can use the AMI ID ami-020f3ca563c92097b in the us-west-2 region, which we have made public.
If you want to create the same AMI, you can refer to the Build Java Application AMI blog for detailed steps to build the AMI using Packer.
Provisioning ASG and ALB Using Terraform
This demo is based on the following values:
- Region: us-west-2
- Public AMI ID (Java application): ami-020f3ca563c92097b
Follow the steps given below to provision the autoscaling group with an application load balancer.
Step 1: Modify the ALB and ASG variables
Open the alb-asg.tfvars file present in the vars/dev folder. Modify the variables (subnet IDs, VPC ID, key pair name, etc.) as per your requirements.
region = "us-west-2"
# alb
internal = false
loadbalancer_type = "application"
alb_subnets = ["subnet-058a7514ba8adbb07", "subnet-0dbcd1ac168414927", "subnet-032f5077729435858"]
#alb-sg
alb_ingress_cidr_from_port = [80]
alb_ingress_cidr_to_port = [80]
alb_ingress_cidr_protocol = ["tcp"]
alb_ingress_cidr_block = ["0.0.0.0/0"]
alb_create_ingress_cidr = true
alb_ingress_sg_from_port = [8080]
alb_ingress_sg_to_port = [8080]
alb_ingress_sg_protocol = ["tcp"]
alb_create_ingress_sg = false
alb_egress_cidr_from_port = [0]
alb_egress_cidr_to_port = [0]
alb_egress_cidr_protocol = ["-1"]
alb_egress_cidr_block = ["0.0.0.0/0"]
alb_create_egress_cidr = true
alb_egress_sg_from_port = [0]
alb_egress_sg_to_port = [0]
alb_egress_sg_protocol = ["-1"]
alb_create_egress_sg = false
# instance sg
ingress_cidr_from_port = [22]
ingress_cidr_to_port = [22]
ingress_cidr_protocol = ["tcp"]
ingress_cidr_block = ["0.0.0.0/0"]
create_ingress_cidr = true
ingress_sg_from_port = [8080]
ingress_sg_to_port = [8080]
ingress_sg_protocol = ["tcp"]
create_ingress_sg = true
egress_cidr_from_port = [0]
egress_cidr_to_port = [0]
egress_cidr_protocol = ["-1"]
egress_cidr_block = ["0.0.0.0/0"]
create_egress_cidr = true
egress_sg_from_port = [8080]
egress_sg_to_port = [8080]
egress_sg_protocol = ["tcp"]
create_egress_sg = false
# target_group
target_group_port = 8080
target_group_protocol = "HTTP"
target_type = "instance"
load_balancing_algorithm = "round_robin"
# health_check
health_check_path = "/"
health_check_port = 8080
health_check_protocol = "HTTP"
health_check_interval = 30
health_check_timeout = 5
health_check_healthy_treshold = 2
health_check_unhealthy_treshold = 2
#alb_listener
listener_port = 80
listener_protocol = "HTTP"
listener_type = "forward"
#launch_template
ami_id = "ami-020f3ca563c92097b"
instance_type = "t2.medium"
key_name = "techiescamp"
vpc_id = "vpc-0a5ca4a92c2e10163"
asg_subnets = ["subnet-058a7514ba8adbb07", "subnet-0dbcd1ac168414927", "subnet-032f5077729435858"]
public_access = true
#user_data
user_data = <<-EOF
#!/bin/bash
bash /home/ubuntu/start.sh
EOF
#autoscaling_group
max_size = 2
min_size = 1
desired_capacity = 1
propagate_at_launch = true
instance_warmup_time = 30
target_value = 50
#tags
owner = "techiescamp"
environment = "dev"
cost_center = "techiescamp-commerce"
application = "java-app"
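The target_value and instance_warmup_time variables at the end of the file drive the ASG's scaling behavior. Assuming the modules/asg module implements CPU-based target-tracking scaling (the exact policy in the module may differ), it would look roughly like this:

```hcl
# Hypothetical sketch of a target-tracking scaling policy built from the
# target_value and instance_warmup_time variables above.
resource "aws_autoscaling_policy" "cpu" {
  name                      = "cpu-target-tracking"
  autoscaling_group_name    = aws_autoscaling_group.app.name
  policy_type               = "TargetTrackingScaling"
  estimated_instance_warmup = var.instance_warmup_time # e.g. 30 seconds

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = var.target_value # e.g. 50 (% average CPU)
  }
}
```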
Step 2: Initialize terraform
Once the variables are modified as per your requirements, cd into the apps/alb-asg directory.
cd apps/alb-asg
Inside the alb-asg folder, you can find the main.tf parent module where it calls the load balancer, auto-scaling group, and IAM policy child modules present in the modules directory as shown below.
provider "aws" {
region = var.region
}
module "iam-policy" {
source = "../../../modules/iam-policy"
owner = var.owner
environment = var.environment
cost_center = var.cost_center
application = var.application
}
module "alb-sg" {
source = "../../../modules/security-group"
region = var.region
tags = var.tags
name = "${var.environment}-${var.application}"
environment = var.environment
owner = var.owner
cost_center = var.cost_center
application = "${var.application}-alb"
vpc_id = var.vpc_id
ingress_cidr_from_port = var.alb_ingress_cidr_from_port
ingress_cidr_to_port = var.alb_ingress_cidr_to_port
ingress_cidr_protocol = var.alb_ingress_cidr_protocol
ingress_cidr_block = var.alb_ingress_cidr_block
create_ingress_cidr = var.alb_create_ingress_cidr
ingress_sg_from_port = var.alb_ingress_sg_from_port
ingress_sg_to_port = var.alb_ingress_sg_to_port
ingress_sg_protocol = var.alb_ingress_sg_protocol
ingress_security_group_ids = var.ingress_security_group_ids
create_ingress_sg = var.alb_create_ingress_sg
egress_cidr_from_port = var.alb_egress_cidr_from_port
egress_cidr_to_port = var.alb_egress_cidr_to_port
egress_cidr_protocol = var.alb_egress_cidr_protocol
egress_cidr_block = var.alb_egress_cidr_block
create_egress_cidr = var.alb_create_egress_cidr
egress_sg_from_port = var.alb_egress_sg_from_port
egress_sg_to_port = var.alb_egress_sg_to_port
egress_sg_protocol = var.alb_egress_sg_protocol
egress_security_group_ids = var.egress_security_group_ids
create_egress_sg = var.alb_create_egress_sg
}
module "alb" {
source = "../../../modules/alb"
region = var.region
internal = var.internal
loadbalancer_type = var.loadbalancer_type
vpc_id = var.vpc_id
alb_subnets = var.alb_subnets
target_group_port = var.target_group_port
target_group_protocol = var.target_group_protocol
target_type = var.target_type
load_balancing_algorithm = var.load_balancing_algorithm
health_check_path = var.health_check_path
health_check_port = var.health_check_port
health_check_protocol = var.health_check_protocol
health_check_interval = var.health_check_interval
health_check_timeout = var.health_check_timeout
health_check_healthy_treshold = var.health_check_healthy_treshold
health_check_unhealthy_treshold = var.health_check_unhealthy_treshold
listener_port = var.listener_port
listener_protocol = var.listener_protocol
listener_type = var.listener_type
owner = var.owner
environment = var.environment
cost_center = var.cost_center
application = var.application
security_group_ids = module.alb-sg.security_group_ids
}
module "instance-sg" {
source = "../../../modules/security-group"
region = var.region
tags = var.tags
name = "${var.environment}-${var.application}"
environment = var.environment
owner = var.owner
cost_center = var.cost_center
application = var.application
vpc_id = var.vpc_id
ingress_cidr_from_port = var.ingress_cidr_from_port
ingress_cidr_to_port = var.ingress_cidr_to_port
ingress_cidr_protocol = var.ingress_cidr_protocol
ingress_cidr_block = var.ingress_cidr_block
create_ingress_cidr = var.create_ingress_cidr
ingress_sg_from_port = var.ingress_sg_from_port
ingress_sg_to_port = var.ingress_sg_to_port
ingress_sg_protocol = var.ingress_sg_protocol
ingress_security_group_ids = module.alb-sg.security_group_ids
create_ingress_sg = var.create_ingress_sg
egress_cidr_from_port = var.egress_cidr_from_port
egress_cidr_to_port = var.egress_cidr_to_port
egress_cidr_protocol = var.egress_cidr_protocol
egress_cidr_block = var.egress_cidr_block
create_egress_cidr = var.create_egress_cidr
egress_sg_from_port = var.egress_sg_from_port
egress_sg_to_port = var.egress_sg_to_port
egress_sg_protocol = var.egress_sg_protocol
egress_security_group_ids = module.alb-sg.security_group_ids
create_egress_sg = var.create_egress_sg
}
module "asg" {
source = "../../../modules/asg"
ami_id = var.ami_id
instance_type = var.instance_type
key_name = var.key_name
vpc_id = var.vpc_id
asg_subnets = var.asg_subnets
public_access = var.public_access
user_data = var.user_data
max_size = var.max_size
min_size = var.min_size
desired_capacity = var.desired_capacity
propagate_at_launch = var.propagate_at_launch
owner = var.owner
environment = var.environment
cost_center = var.cost_center
application = var.application
instance_warmup_time = var.instance_warmup_time
target_value = var.target_value
alb_target_group_arn = module.alb.alb_target_group_arn
iam_role = module.iam-policy.iam_role
security_group_ids = module.instance-sg.security_group_ids
tags = {
Owner = "${var.owner}"
Environment = "${var.environment}"
Cost_center = "${var.cost_center}"
Application = "${var.application}"
}
}
Initialize Terraform using the following command
terraform init
This command initializes terraform. Make sure to run the init command inside the apps/alb-asg directory.
Step 3: Validate Configurations
Validate terraform configs using the validate command.
terraform validate
Step 4: Execute the configuration plan
To verify the configurations, run terraform plan with the variable file.
terraform plan -var-file=../../../vars/dev/alb-asg.tfvars
Step 5: Apply the configuration
After verifying, apply the configurations using the command given below.
terraform apply -var-file=../../../vars/dev/alb-asg.tfvars -auto-approve
Once the code is successfully executed, check if everything in the Terraform code is provisioned by visiting the AWS console.
If you have used the AMI id we provided, the load balancer URL should give the webpage as shown below.
Check if the auto-scaling group is working by terminating an instance; if a new instance launches automatically, the ASG is working as expected.
It takes approximately 30 seconds to launch a new instance.
Step 6: Cleanup
To clean up the setup, use the following command.
terraform destroy -var-file=../../../vars/dev/alb-asg.tfvars
Note: There are many parameters supported by the autoscaling group and application load balancer resources. If you want to deploy these for production use cases, please refer to the official documentation and design a solution that complies with your organization's security and availability standards. See the official Terraform aws_autoscaling_group and aws_lb resource documentation for all the supported parameters.
Conclusion
In this guide, we looked at provisioning an autoscaling group and an application load balancer with Terraform.
When using autoscaling groups and load balancers in production, you must consider security, availability, CloudWatch logging, scalability, and monitoring. Whether you use a community module or a custom Terraform AWS module, ensure you follow your organization's standards.
You can also check out our guide on provisioning RDS using Terraform.
2 comments
Great article. How would provider.tf get used? Your repo has provider.tf outside of the environments folder, but we provision from each of the subfolders (such as environments/dev/rds). Is it not being used?
Also, great images in your article. How did you create the image where the flow is shown between components (with the moving dashes)?
Hi Sreenivas,
provider.tf comes into the picture when setting up a remote state file using S3 and a DynamoDB lock. We will publish an article on that soon.