In this comprehensive step-by-step guide, you will learn how to configure the AWS Load Balancer Controller on EKS with detailed workflows and configuration instructions.
AWS Load Balancer Controller
Let's start with a quick overview of what the AWS Load Balancer Controller is.
The AWS Load Balancer Controller is a component used in Amazon EKS to manage load balancers for Ingress and Service objects.
It is primarily focused on Kubernetes Ingress resources and allows you to define how traffic should be routed to your applications within the EKS cluster.
Why Use the AWS Load Balancer Controller?
Instead of using a custom Ingress controller, such as an NGINX-based ingress controller, you can use the AWS Load Balancer Controller to provision and manage an ALB as the Ingress controller.
The Load Balancer Controller can manage multiple Elastic Load Balancers on AWS for the EKS Cluster.
It supports two types of load balancers:
- Application Load Balancer (ALB): A Layer 7 load balancer that handles HTTP/HTTPS traffic, including SSL/TLS termination and advanced traffic routing.
- Network Load Balancer (NLB): A Layer 4 load balancer designed to handle TCP/UDP traffic.
Setup Prerequisites
Following are the prerequisites for this setup.
- AWS CLI v2.18.10 or later installed on your local system with the required privileges.
- eksctl v0.193.0 or later installed on your local system.
- EKS cluster v1.30 or later.
- Pod Identity Agent v1.3.2-eksbuild.2 or later available on the EKS cluster.
- kubectl v1.31 or later installed on your local system.
- Helm v3.16.2 or later installed on your local system.
Load Balancer Controller Workflow
The following workflow explains how the AWS Load Balancer Controller works on the EKS cluster.
The AWS Load Balancer Controller will provision the Application/Network Load Balancer based on the Service type and Ingress object.
- The Load Balancer Controller Pod will monitor the Services and Ingress objects.
- The controller Pod will get permission to provision the AWS Load Balancers from the AWS IAM Role and the Pod Identity Agent.
- The controller will provision an Application Load Balancer if an Ingress object is created on the cluster.
- The controller will provision a Network Load Balancer if a Service object is created with the type LoadBalancer.
- The Application Load Balancer will have public IPs, so we map them to a hostname in the /etc/hosts file on our local machine for easy access.
- Now, we can access the application using the hostname from the local machine. The traffic will first reach the Application Load Balancer and then, via the target group, reach the application Pods inside the EKS cluster.
Note: Without any controller, we can provision a Load Balancer in EKS for a Service of type LoadBalancer with the help of the Cloud Controller Manager.
The limitation is that it only provisions a Classic Load Balancer.
The AWS Load Balancer Controller works only on AWS, so if you want load balancing on another cloud provider's cluster or an on-premises cluster, you can use the NGINX Ingress Controller instead.
To set up the AWS Cloud Controller Manager on an AWS kubeadm cluster to provision load balancers for workload traffic, refer to this CCM installation guide.
Setup Load Balancer Controller on the EKS Cluster
Ensure you have the kubeconfig file on your local system to access the cluster, or run the following command to configure it.
aws eks update-kubeconfig --region <REGION> --name <CLUSTER NAME>
Step 1: Create an IAM Policy for the Load Balancer Controller
The AWS Load Balancer Controller will run as Pods inside the EKS cluster, and these controller Pods need IAM permissions to access the AWS Services.
Download the IAM Policy JSON file from the official repo.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.2/docs/install/iam_policy.json
Create an IAM Policy using the iam_policy.json file.
export POLICY_NAME=AWSLoadBalancerControllerIAMPolicy
aws iam create-policy \
--policy-name ${POLICY_NAME} \
--policy-document file://iam_policy.json
If the IAM Policy is created successfully, you will get an output similar to the following.
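A trimmed example of the output (the account ID, policy ID, and dates will differ in your output):
{
    "Policy": {
        "PolicyName": "AWSLoadBalancerControllerIAMPolicy",
        "Arn": "arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "IsAttachable": true
    }
}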
To verify the creation of the IAM Policy and review its permissions, you can use the AWS Console.
Next, get the ARN of the created IAM Policy and store it in an environment variable.
export POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='${POLICY_NAME}'].Arn" --output text)
Step 2: Create an IAM Role for the Load Balancer Controller
The IAM Policy with the required permissions is ready, so now we have to create an IAM Role and attach the policy to it.
Create a Trust Policy JSON file for the IAM Role.
The Trust Policy defines which services can assume the Role.
This Trust Policy is scoped to the EKS Pod Identity Agent, so only the Pod Identity Agent can assume the IAM Role.
cat <<EOF > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF
Create an IAM Role AmazonEKSLoadBalancerControllerRole with the Trust Policy.
aws iam create-role \
--role-name AmazonEKSLoadBalancerControllerRole \
--assume-role-policy-document file://"trust-policy.json"
Verify the IAM Role AmazonEKSLoadBalancerControllerRole is created and the Trust Policy is properly in place.
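You can also verify from the CLI; for example:
aws iam get-role --role-name AmazonEKSLoadBalancerControllerRole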
Step 3: Attach the IAM Policy to the IAM Role
Attach the IAM Policy AWSLoadBalancerControllerIAMPolicy to the IAM Role AmazonEKSLoadBalancerControllerRole.
export ROLE_NAME=AmazonEKSLoadBalancerControllerRole
aws iam attach-role-policy \
--policy-arn ${POLICY_ARN} \
--role-name ${ROLE_NAME}
The IAM dashboard in the AWS Console will help you confirm that the Policy is attached to the Role.
Use the following command to store the Role ARN as an environment variable for the upcoming configuration.
export ROLE_ARN=$(aws iam get-role --role-name $ROLE_NAME --query "Role.Arn" --output text)
Step 4: Pod Identity Association
In Step 2, we created a Trust Policy for the Pod Identity Agent, and this agent runs in the EKS cluster.
The agent provides the IAM permissions to the Pod through its Service Account, but for that, we need to associate the IAM Role with the Service Account using a Pod Identity Association.
Before the identity association, we need to create a Service Account on the EKS cluster for the Load Balancer Controller.
Run the following commands to create the Service Account.
export SERVICE_ACCOUNT=aws-load-balancer-controller
export NAMESPACE=kube-system
export REGION=us-west-2
cat >lbc-sa.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
  name: ${SERVICE_ACCOUNT}
  namespace: ${NAMESPACE}
EOF
kubectl apply -f lbc-sa.yaml
To list the available Service Accounts in the kube-system Namespace.
kubectl -n kube-system get sa
Before performing the Pod Identity Association, we need to store the cluster name in an environment variable and ensure that the Pod Identity Agent is present in the cluster.
To list the available EKS clusters in a specific region.
aws eks list-clusters --region ${REGION}
Store the cluster name in an environment variable.
export CLUSTER_NAME=eks-spot-cluster
To list the available addons in the cluster.
aws eks list-addons --cluster-name $CLUSTER_NAME
Use the following command if the Pod Identity Agent is unavailable in the cluster.
aws eks create-addon --cluster-name $CLUSTER_NAME --addon-name eks-pod-identity-agent
Now that the Service Account is ready, we can perform the Pod Identity Association.
eksctl create podidentityassociation \
--cluster $CLUSTER_NAME \
--namespace $NAMESPACE \
--service-account-name $SERVICE_ACCOUNT \
--role-arn $ROLE_ARN
After the successful association, we can list the Pod Identity Associations.
aws eks list-pod-identity-associations --cluster-name $CLUSTER_NAME
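A trimmed example of the output (the account ID and association ID are placeholders):
{
    "associations": [
        {
            "clusterName": "eks-spot-cluster",
            "namespace": "kube-system",
            "serviceAccount": "aws-load-balancer-controller",
            "associationArn": "arn:aws:eks:us-west-2:111122223333:podidentityassociation/eks-spot-cluster/a-xxxxxxxxxx",
            "associationId": "a-xxxxxxxxxx"
        }
    ]
}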
This confirms that the Pod Identity Association is properly bound to the AWS Load Balancer Controller Service Account.
Step 5: Install the AWS Load Balancer Controller
Before we install the controller, we have to add the eks-charts Helm repository.
helm repo add eks https://aws.github.io/eks-charts
Update the Helm repository.
helm repo update eks
Install the AWS Load Balancer Controller.
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=${CLUSTER_NAME} \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
To check whether the Load Balancer Controller is deployed, use the following command.
kubectl -n kube-system get all
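You can also check the controller Deployment directly:
kubectl -n kube-system get deployment aws-load-balancer-controller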
An Ingress Class is also created during the installation; the Ingress Class defines the type of Load Balancer to provision.
kubectl get ingressclass
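The output will look something like this (the age will differ):
NAME   CONTROLLER            PARAMETERS   AGE
alb    ingress.k8s.aws/alb   <none>       60s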
The alb Ingress Class shown above is the default, and it is automatically created when we deploy the controller.
We can create multiple Ingress Classes if we are managing multiple Load Balancers or need a customized Ingress Class; we will look at this in a later part of this guide.
Step 6: Create an Ingress Object
We have to tag the subnets of the EKS cluster with kubernetes.io/role/elb = 1 so the Load Balancer Controller can discover them.
SUBNET_IDS=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.resourcesVpcConfig.subnetIds" --output text | tr '\t' ' ')
SUBNET_IDS_ARRAY=($(echo $SUBNET_IDS))
for subnet in "${SUBNET_IDS_ARRAY[@]}"; do
aws ec2 create-tags --resources "$subnet" --tags Key=kubernetes.io/role/elb,Value="1"
done
If you want to provision an Application Load Balancer, at least two subnets should be tagged.
If a subnet is private, the tag should be kubernetes.io/role/internal-elb instead; this is required for the Load Balancer Controller to discover the subnets where it can provision the Load Balancer.
Note: Instead of tagging the subnets, you can directly provide the subnet IDs in the Ingress object manifest by passing the alb.ingress.kubernetes.io/subnets annotation.
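For example (the subnet IDs below are placeholders):
alb.ingress.kubernetes.io/subnets: subnet-0abc1234, subnet-0def5678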
For testing purposes, I am deploying an NGINX Deployment object with three replicas, along with a ClusterIP Service.
cat >nginx-deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deployment
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-deployment
  name: nginx-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deployment
EOF
kubectl apply -f nginx-deployment.yaml
To list the Pods and Services in the current Namespace.
kubectl get po,svc -o wide
The Service name and port number are required to create the Ingress object.
Now, we can create an Ingress object to route external traffic to the NGINX Pods.
cat << EOF > ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
EOF
To create a public Load Balancer, the scheme value should be internet-facing.
The target type of the Load Balancer is ip because the Service we created for the application is of type ClusterIP, which is used for internal communication; the controller will therefore register the Pod IP addresses directly as Load Balancer targets for traffic routing.
kubectl apply -f ingress.yaml
The Application Load Balancer will be provisioned when the Ingress object is deployed.
kubectl describe ingress nginx-ingress
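A trimmed example of the output (the ALB DNS name and Pod IPs are placeholders):
Name:             nginx-ingress
Namespace:        default
Address:          k8s-default-nginxing-xxxxxxxxxx.us-west-2.elb.amazonaws.com
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           /     nginx-svc:80 (192.168.12.34:80,192.168.56.78:80,192.168.90.12:80)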
In the Backends section of the output, we can see that all three Pod IPs are mapped.
The Load Balancer Controller stays in sync with the Ingress object, so if any Pod in the Deployment is deleted, a new Pod will be spun up and its IP will automatically be updated in the Load Balancer targets.
We can confirm the Application Load Balancer is provisioned using the AWS Console.
The resource map section of the Application Load Balancer clearly shows that incoming traffic goes to the target group and is routed to the targets, which are the Pod IPs.
What is an AWS Load Balancer Controller Ingress Group?
An Ingress Group is a feature that combines multiple Ingress objects behind a single Application Load Balancer.
Using Ingress Groups, we can make full use of one Application Load Balancer's capacity across multiple Ingress resources.
How to Create an Ingress Group
To create an Ingress Group, we have to add the metadata.annotations.alb.ingress.kubernetes.io/group.name annotation to a new or existing Ingress object.
This annotation helps the AWS Load Balancer Controller identify and group the Ingress resources.
Normally, the Service object and the Ingress object must be in the same Namespace, but Ingress Grouping allows combining Ingress resources from different Namespaces.
In AWS, target groups, listeners, and rules will be created based on the configuration we use in the Ingress objects.
I am creating two Deployments in two different Namespaces and, this time, using Service type NodePort for demo purposes.
cat << EOF > group-demo-deployment.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
---
apiVersion: v1
kind: Namespace
metadata:
  name: httpd
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  namespace: httpd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpd
  namespace: httpd
spec:
  selector:
    app: httpd
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
EOF
kubectl apply -f group-demo-deployment.yaml
Use the following command to list the Pods, Deployments, and Services.
kubectl -n nginx get po,deploy,svc
kubectl -n httpd get po,deploy,svc
An ingress object also needs to be created for each Namespace.
cat << EOF > group-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: nginx
  annotations:
    alb.ingress.kubernetes.io/group.name: common-ingress-group
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  ingressClassName: alb
  rules:
  - host: nginx.techiescamp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpd-ingress
  namespace: httpd
  annotations:
    alb.ingress.kubernetes.io/group.name: common-ingress-group
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  ingressClassName: alb
  rules:
  - host: httpd.techiescamp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpd
            port:
              number: 80
EOF
To route traffic to the correct Pods, we use domain names: nginx.techiescamp.com for the NGINX Deployment and httpd.techiescamp.com for the httpd Deployment.
kubectl apply -f group-ingress.yaml
To list the Ingress objects.
kubectl -n nginx get ingress
kubectl -n httpd get ingress
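The output will look something like this (the ALB DNS name is a placeholder, but note that it is identical for both Ingress objects):
NAME            CLASS   HOSTS                   ADDRESS                                                         PORTS   AGE
nginx-ingress   alb     nginx.techiescamp.com   k8s-commoningressgroup-xxxxxxxxxx.us-west-2.elb.amazonaws.com   80      2m

NAME            CLASS   HOSTS                   ADDRESS                                                         PORTS   AGE
httpd-ingress   alb     httpd.techiescamp.com   k8s-commoningressgroup-xxxxxxxxxx.us-west-2.elb.amazonaws.com   80      2m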
Normally, if we create two or more Ingress objects, the Load Balancer Controller creates an Application Load Balancer for each one, which is costly and unnecessary.
However, Ingress Grouping binds the Ingress objects together to use a single Load Balancer.
Here, we can see that the DNS name of the Load Balancer is the same for both Ingress objects.
The number of target groups is based on the number of Ingress objects.
Traffic is routed to these target groups based on the hostnames we provided in the Ingress objects; we can see this in the AWS Console.
We can use the resource map section to see detailed traffic routing information.
External traffic from the Load Balancer goes to the target groups based on the hostname.
The EKS nodes are registered in the target groups, so traffic goes to the specific NodePort of the Services we created for the Pods.
Through the NodePort, the traffic reaches the Pods.
To test the traffic routing in practice, we have to map the hostnames to the IP addresses of the Application Load Balancer.
To get the Application Load Balancer DNS.
ALB_DNS=$(kubectl -n nginx get ingress nginx-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
To get the IP address of the ALB.
dig +short ${ALB_DNS}
Add the IP addresses with the hostnames to the /etc/hosts file.
vim /etc/hosts
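Add entries like the following (the IPs shown are placeholders; use the ones returned by dig):
3.88.12.34 nginx.techiescamp.com httpd.techiescamp.com
54.210.56.78 nginx.techiescamp.com httpd.techiescamp.com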
Save and exit.
The configuration is done; we can now access the hostnames in a browser to get the output.
First, let's check http://nginx.techiescamp.com.
Then, we can check the other hostname, http://httpd.techiescamp.com.
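You can also verify from the terminal; for example:
curl -s http://nginx.techiescamp.com | head
curl -s http://httpd.techiescamp.com | head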
How to manage multiple load balancers with AWS Load Balancer Controller?
With the AWS Load Balancer Controller, we can create multiple load balancers; by default, the controller provisions a dedicated load balancer for each Ingress object unless we use the group annotation.
But how can we effectively use the controller to manage multiple Load Balancers?
For example, suppose we need one load balancer for the dev environment and another for the prod environment.
We can create a separate Ingress Class configuration for each environment.
Before creating the Ingress Class, we create an IngressClassParams custom resource, which defines the load balancer settings for that class.
apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: dev-class
spec:
  scheme: internet-facing
  ipAddressType: dualstack
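The Ingress Class itself then references these parameters. A minimal sketch of such an IngressClass (the dev-class name follows the example above):
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: dev-class
spec:
  controller: ingress.k8s.aws/alb
  parameters:
    apiGroup: elbv2.k8s.aws
    kind: IngressClassParams
    name: dev-class
Ingress objects for the dev environment can then set ingressClassName: dev-class, and the controller will provision their load balancer with these parameters, while prod Ingress objects use a separate class.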
Can I use the AWS load balancer controller with other container network interface plugins, such as Calico and Cilium?
The default CNI of an EKS cluster is the VPC CNI, but EKS also supports other CNIs such as Calico, Cilium, and Antrea.
For example, if we use the Calico network plugin in the EKS cluster instead of the VPC CNI, the plugin creates an overlay network inside the cluster for the workloads, and that network is not part of the VPC.
So if we use a ClusterIP type Service, routing will not work because the Load Balancer cannot reach the overlay network.
If we instead use a NodePort type Service for the workload, traffic can still be routed.
The traffic from the Load Balancer to the node is routable because the node IPs are part of the VPC, and from there, the traffic passes through the NodePort to reach the Pod.
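In practice, this means using the instance target type in the Ingress annotations when running an overlay CNI; for example:
alb.ingress.kubernetes.io/target-type: instance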
Conclusion
This guide gave you a high-level overview of the AWS Load Balancer Controller and how to share a single Load Balancer across multiple Ingress objects.
The annotations allow you to do much more, so if you want to learn more about the Load Balancer Controller, please visit the official documentation.
In an upcoming blog post, I will explain how to configure DNS records on Route 53 for Service objects, both manually and automatically, and how to create and attach TLS certificates through AWS Certificate Manager.