How to Create AWS EKS Cluster Using eksctl

In this Kubernetes tutorial, you will learn how to create an AWS EKS cluster using eksctl. I will also cover the important eksctl concepts.

Prerequisites

To work with eksctl, you need the following installed and configured on your workstation.

  1. AWS CLI installed and configured with the IAM permissions required to launch an EKS cluster.
  2. eksctl CLI installed.
  3. kubectl installed.
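
Before proceeding, you can quickly confirm everything is in place by checking each tool. These are standard commands; the version numbers in your output will differ.

# Confirm the AWS CLI is installed and credentials are configured
aws --version
aws sts get-caller-identity

# Confirm eksctl and kubectl are installed
eksctl version
kubectl version --client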

How Does eksctl Work?

When you deploy an eksctl YAML config or execute a cluster create command, eksctl deploys CloudFormation templates behind the scenes. It is these CloudFormation stacks that actually create the cluster resources.

eksctl is essentially a wrapper around CloudFormation.

Once you execute the eksctl create cluster command, you can open the CloudFormation dashboard and watch the stacks for the EKS cluster being created and deployed.

[Image: eksctl EKS CloudFormation stacks]
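
You can also inspect the stacks eksctl manages from the CLI once the cluster exists. A minimal sketch, assuming the cluster name and region used later in this tutorial:

# List the CloudFormation stacks eksctl created for the cluster
eksctl utils describe-stacks --region=us-west-2 --cluster=eks-spot-cluster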

Create EKS Cluster Using eksctl

You can launch an EKS cluster using eksctl in two ways.

  1. Using eksctl CLI and parameters
  2. Using eksctl CLI and YAML config.

Using the CLI with parameters is pretty straightforward. However, I prefer the YAML config approach, as it lets you keep the cluster configuration as a version-controlled config file.
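
For reference, the parameter-based approach looks like the following. This is a minimal sketch; the cluster name and node group values are illustrative, not part of this tutorial's setup.

# Create a cluster entirely from CLI flags (illustrative values)
eksctl create cluster \
  --name demo-cluster \
  --region us-west-2 \
  --nodegroup-name demo-nodes \
  --node-type t3.medium \
  --nodes 2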

Create a file named eks-cluster.yaml

vi eks-cluster.yaml 

Copy the following contents to the file. You need to replace the VPC ID, CIDR, and subnet IDs with your own IDs. Replace techiescamp with the name of your key pair.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-spot-cluster
  region: us-west-2

vpc:
  id: "vpc-0951fe2c76e36eab9"
  cidr: "10.0.0.0/16"
  subnets:
    public:
      us-west-2a: { id: subnet-01b8ff5eaa0b39c10 }
      us-west-2b: { id: subnet-0e5de906289149fc0 }
      us-west-2c: { id: subnet-0185f1eee8a1a6561 }

managedNodeGroups:
  - name: ng-db
    instanceType: t3.small
    labels: { role: builders }
    minSize: 2
    maxSize: 4
    ssh: 
      allow: true
      publicKeyName: techiescamp
    tags:
      Name: ng-db
  - name: ng-spot
    instanceType: t3.medium
    labels: { role: builders }
    minSize: 3
    maxSize: 6
    spot: true
    ssh: 
      allow: true
      publicKeyName: techiescamp
    tags:
      Name: ng-spot

The above config has the following:

  1. Cluster VPC configuration with public subnets spanning three availability zones.
  2. Two managed node groups: one with regular on-demand instances and one with spot instances.

Now that you have a config ready, deploy the cluster using the following command. It will take a while for the cluster control plane and worker nodes to be provisioned.

eksctl create cluster -f eks-cluster.yaml
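
If your eksctl version supports it, you can preview the fully expanded configuration without creating any resources by adding the --dry-run flag (this assumes a reasonably recent eksctl release):

# Print the final ClusterConfig without creating anything
eksctl create cluster -f eks-cluster.yaml --dry-run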

The following security groups get created during the cluster launch.

[Image: EKS security groups created by eksctl]

Connect to the EKS Cluster

Once the cluster is provisioned, use the following AWS CLI command to update your kubeconfig file with the new cluster context.

aws eks update-kubeconfig --region us-west-2 --name eks-spot-cluster

You should see the following output.

➜  public git:(main) ✗ aws eks update-kubeconfig --region us-west-2 --name eks-spot-cluster
Added new context arn:aws:eks:us-west-2:936855596904:cluster/eks-spot-cluster to /Users/bibinwilson/.kube/config
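
Alternatively, eksctl itself can write the kubeconfig entry for the cluster:

# Equivalent kubeconfig update using eksctl
eksctl utils write-kubeconfig --cluster=eks-spot-cluster --region=us-west-2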

Verify the cluster connectivity by executing the following kubectl commands.

kubectl cluster-info
kubectl get nodes
kubectl get po -n kube-system
[Image: EKS cluster validation using kubectl]

Install Kubernetes Metrics Server

By default, the metrics server is not installed on an EKS cluster. You will get the following error if you try to get pod or node metrics.

$ kubectl top nodes
error: Metrics API not available
$ kubectl top pods
error: Metrics API not available

You can install the metrics server using the following command.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Validate the deployment using the following command. It will take a couple of minutes for the metrics server deployment to reach a ready state.

kubectl get deployment metrics-server -n kube-system

Now if you check the node metrics, you should be able to see them.

$ kubectl top nodes
NAME                                        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
ip-10-0-19-135.us-west-2.compute.internal   29m          1%     410Mi           28%       
ip-10-0-3-139.us-west-2.compute.internal    27m          1%     381Mi           26%       

Increase EKS Pods Per Node

A standard Kubernetes cluster can host up to 110 pods per node.

However, EKS by default limits the number of pods per node based on the instance type.

You can increase this limit by setting the maxPodsPerNode parameter in the node group YAML config.

For example, if you don't set this parameter, the default and recommended pods-per-node value for a t3.medium instance is 17.

For testing purposes, I am setting the value to 110 so that I can create more than 17 pods on each node, as shown in the sketch below.
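
The config shown earlier in this tutorial does not set this parameter, so here is a minimal sketch of how the spot node group would look with it added; only the maxPodsPerNode line is new relative to the earlier config.

managedNodeGroups:
  - name: ng-spot
    instanceType: t3.medium
    minSize: 3
    maxSize: 6
    spot: true
    maxPodsPerNode: 110  # testing value; the recommended value for t3.medium is 17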

If you want to calculate the recommended pod count for your node, first download the calculator script.

curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/files/max-pods-calculator.sh

Give the script executable permission.

chmod +x max-pods-calculator.sh

Before you run the script, you need two things: the instance type and the CNI version.

You already know the instance type, so find the CNI version using the following command.

kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2

You will get a similar output, where 1.15.3-eksbuild.1 is the CNI version.

Now, we can run the script.

./max-pods-calculator.sh --instance-type t3.medium --cni-version 1.15.3-eksbuild.1

If you are also using a t3.medium instance, the script will output 17.

To create a cluster using the above configuration, use the following command.

eksctl create cluster -f eks-cluster.yaml

After the cluster creation, use the following command to enable prefix delegation, which allows the VPC CNI to assign more IPs per network interface.

kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
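
To confirm the new limit took effect, check the allocatable pod count on each node. This uses only standard kubectl output fields:

# Show the maximum schedulable pods per node
kubectl get nodes -o custom-columns=NAME:.metadata.name,MAX_PODS:.status.allocatable.pods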

Possible eksctl Errors

Let’s look at some of the possible eksctl errors.

Stack Already Exists Error

If you try to create a node group using eksctl when a CloudFormation stack of the same name already exists, you will get the following error.

creating CloudFormation stack "stack-name": operation error CloudFormation: CreateStack, https response error StatusCode: 400, AlreadyExistsException: Stack [stack-name] already exists

To rectify this, go to the CloudFormation dashboard and delete the existing CloudFormation stack for the node group.
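
You can also delete the stack from the CLI; a sketch, where the stack name is a placeholder you must replace with your own:

# Delete the conflicting node group stack (replace the placeholder)
aws cloudformation delete-stack --stack-name <stack-name>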

Subnet Autoassign Public IP Error

Resource handler returned message: "[Issue(Code=Ec2SubnetInvalidConfiguration, Message=One or more Amazon EC2 Subnets of [subnet-0eea88c0faa8241d4, subnet-05ff592bd0095ad75] for node group ng-app does not automatically assign public IP addresses to instances launched into it. If you want your instances to be assigned a public IP address, then you need to enable auto-assign public IP address for the subnet

To rectify this error, go to the subnet settings and enable the "Auto-assign public IPv4 address" option.

[Image: EKS subnet auto-assign public IP error]
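
The same setting can be changed from the CLI; a sketch with a placeholder subnet ID:

# Enable auto-assign public IPv4 on a subnet (replace the placeholder)
aws ec2 modify-subnet-attribute --subnet-id <subnet-id> --map-public-ip-on-launch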

invalid apiVersion "client.authentication.k8s.io" Error

This error primarily happens due to IAM RBAC issues.

We have created a detailed blog explaining the solutions for this issue.

Please refer to the client.authentication.k8s.io error blog for more information.

Conclusion

We have looked at AWS EKS cluster creation using the eksctl CLI.

When it comes to production deployments, ensure you follow Kubernetes cluster best practices.

If you are planning for a Kubernetes certification, you can use eksctl to deploy test clusters very easily. Also, check out the Kubernetes certification coupon to save money on the CKA, CKAD, and CKS certification exams.
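
When you are done with a test cluster, remember to delete it to avoid unnecessary charges. Using the same config file from this tutorial:

# Tear down the cluster and all its CloudFormation stacks
eksctl delete cluster -f eks-cluster.yaml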
