Kubernetes Gateway API: A Beginner’s Guide


In this blog, we will learn about the Kubernetes Gateway API, its workflow, and its functions.

The Kubernetes Gateway API is an advanced traffic routing mechanism for Kubernetes clusters.

Ingress is the traffic routing mechanism currently used in Kubernetes clusters, but it has some limitations.

To overcome these limitations, Kubernetes introduced the Gateway API.

Importance of the Kubernetes Gateway API

  1. The Gateway API can perform protocol-based routing, such as HTTP, TCP/UDP, or gRPC.
  2. Each type of routing is handled by a dedicated Kubernetes resource.
  3. Traffic can be split across services using various methods, such as weighted routing, blue-green, and canary deployments.
  4. Custom HTTP header-based routing is possible in the Gateway API.
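For example, weighted traffic splitting (point 3) is expressed directly in an HTTPRoute by giving each backend a weight. Here is a minimal sketch; the service names and weights are hypothetical, chosen only to illustrate a canary split:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-split
spec:
  parentRefs:
  - name: web-gateway
  rules:
  - backendRefs:
    - name: app-v1      # stable version receives ~90% of requests
      port: 80
      weight: 90
    - name: app-v2      # canary version receives ~10% of requests
      port: 80
      weight: 10

With Ingress, this kind of split typically needs controller-specific annotations; in the Gateway API it is part of the standard spec.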

The Gateway API works well with service meshes such as Istio and Linkerd and can also handle the traffic of large-scale applications.

Kubernetes Gateway API Workflow

The workflow below explains how traffic is routed to endpoints through the Kubernetes Gateway API.

The workflow diagram of the Kubernetes Gateway API
  1. The Gateway Controller runs as a Pod inside the Kubernetes cluster, which is capable of routing traffic to the endpoints of the Kubernetes services (Pods).
  2. When a user tries to access the application, the traffic is routed through the external Load Balancer to reach the Gateway API Controller.
  3. The Gateway API Controller keeps watching its custom resources, such as Gateway, HTTPRoute, etc.
  4. The Gateway Custom Resource defines the entry point for routing traffic into the cluster.
  5. The HTTPRoute Custom Resource holds the rules and conditions for routing traffic.
  6. The information in these Custom Resources is automatically configured in the Gateway API Controller.
  7. Traffic from the Gateway API Controller will be routed to the intended endpoints based on the conditions and rules.

Set Up the Gateway API in an EKS Cluster

Prerequisites:

  1. EKS cluster v1.30 or higher.
  2. eksctl should be available on the local machine.
  3. AWS CLI v2.22 or higher should be available and configured to access the cluster.
  4. Helm v3.16 or higher should be available on the local machine.

Step 1: Install Gateway API Custom Resource Definitions (CRDs) in the EKS Cluster.

We are using the latest stable version of the Gateway API, v1.2.1.

The Custom Resource Definitions let us create Gateway API resources as native Kubernetes objects.

To install the Gateway API CRDs, use the following command.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml

To check the CRDs, use the following command.

kubectl get crds | grep gateway

GatewayClass – Tells which Gateway API Controller we use to manage the traffic.

Gateway – The entry point for traffic.

GRPCRoute – Manages gRPC traffic.

HTTPRoute – Manages HTTP traffic.

ReferenceGrant – Lets routes securely reference resources in other Namespaces.
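A ReferenceGrant is what allows, for instance, an HTTPRoute in one Namespace to send traffic to a Service in another. A minimal sketch follows; the Namespace and resource names are hypothetical:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-webserver-routes
  namespace: backend        # the Namespace that owns the target Service
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: webserver    # the Namespace where the HTTPRoute lives
  to:
  - group: ""               # core API group, i.e. Services
    kind: Service

Without this grant, a cross-namespace backendRef is rejected, which prevents one team from silently routing traffic to another team's Services.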

Step 2: Install Gateway API Controller on the EKS Cluster.

There are various controllers that support the Gateway API; you can refer to the supported controllers list in the official documentation.

For this tutorial, we are choosing the Nginx Gateway Fabric Controller.

To pull the Helm Chart of the controller, use the following command.

helm pull oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --untar

This gives us a local copy of the Helm chart, so we can modify it and store it in a version control system like GitHub.

The modifiable values will be available in the values.yaml file, and we can make changes in this file as per our requirements.

The following snapshot is the directory structure of the Nginx Gateway Fabric Controller Helm Chart.


The Chart.yaml file gives information about the source of the chart.


We are not modifying the controller configuration for now, so we will install the Nginx Gateway Fabric Controller directly using Helm.

helm install ngf oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway

Once the installation is completed, we must ensure all the controller components are running correctly.

kubectl -n nginx-gateway get all

Here, we can see the controller Pods running without any issues. It has also provisioned a load balancer in AWS.

By default, the controller installation creates a service of type LoadBalancer.

In AWS, the Gateway API Controller provisions a Classic Load Balancer.
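If you prefer a Network Load Balancer instead of a Classic Load Balancer, the usual approach is to add the AWS load balancer annotation to the controller's Service through Helm values. The snippet below is an assumption based on common chart conventions; verify the exact keys against the chart's values.yaml before using it:

service:
  annotations:
    # standard in-tree AWS annotation requesting an NLB instead of a CLB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

You would then pass this file to helm install with the -f flag, as shown later for the NodePort setup.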

The controller chart uses the following images:

  1. ghcr.io/nginxinc/nginx-gateway-fabric:1.5.1
  2. ghcr.io/nginxinc/nginx-gateway-fabric/nginx:1.5.1

Note: If you are running from a corporate network, you might not have access to these public images. You should first push these images to your organization's private registry and then deploy the chart. Also, check whether your security guidelines allow pushing community images to private registries.

Route Traffic to the Applications

The Gateway API and the Gateway API Controller are required to route the traffic.

The installation and configuration of both have been done in the above steps.

Let’s start with a simple application and see how the traffic is routed through the Gateway API.

Step 1: Deploy Demo Applications

For demo purposes, we will deploy an Nginx webserver application and expose it as a ClusterIP service.

cat <<EOF > nginx-deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: webserver
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: webserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.21
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: webserver
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF

To deploy this, use the following command.

kubectl apply -f nginx-deploy.yaml

To get the deployed components status.

kubectl -n webserver get all

This confirms that the deployed webserver is running properly and that its service has been created.

Step 2: Describe Kubernetes Gateway API Gateway Class

The Gateway Class is a Custom Resource of the Kubernetes Gateway API, which indicates which Gateway API Controller we will use.

In our case, the Gateway API controller is Nginx Gateway Fabric.

We don’t have to create this object manually. Instead, it will automatically be created when we deploy the controller.

The Gateway Class is a cluster-scoped object, so we can leverage it from any Namespace in the cluster.

To list the Gateway Class, use the following command.

kubectl get gatewayclass

We can describe the Gateway Class to get detailed information.

kubectl describe gatewayclass nginx

When we describe the Gateway Class, we can see the status, controller name, and other information.

Now, we will create the Gateway object with this Gateway Class, but first, note down the Gateway Class name for the upcoming configurations.

Step 3: Create Kubernetes Gateway API Gateway object

The Gateway is the entry point for HTTP traffic. This object is Namespace-scoped, and by default, only routes from the same Namespace can attach to it (this can be changed with the allowedRoutes setting, which we will use later).

Create a Gateway for the demo application.

cat <<EOF > web-gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
  namespace: webserver
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
EOF
kubectl apply -f web-gateway.yaml

To list the Gateways from a particular Namespace, use the following command.

kubectl -n webserver get gateway

To get more detailed information.

kubectl -n webserver describe gateway web-gateway

Here, we can see that the Gateway Class is configured with the Gateway, and we can also see the DNS name of the AWS Load Balancer.

No routes are attached in the Listeners section; we will configure them in the next step.

This indicates that we haven't attached any routes to the Gateway yet.

The last section shows the default supported kinds, which are HTTPRoute and GRPCRoute.

Step 4: Create an HTTPRoute Custom Resource

HTTPRoute is a Custom Resource of the Gateway API that contains the configuration to route traffic to HTTP/HTTPS-based applications.

Since the demo Nginx application is a webserver handling HTTP/HTTPS traffic, we are creating this resource for it.

This Custom Resource can handle functions such as path-based routing, hostname-based routing, custom header routing, and namespace routing.

cat <<EOF > webserver-httproute.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: webserver-httproute
  namespace: webserver
spec:
  parentRefs:
  - name: web-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: nginx-service
      port: 80
EOF
kubectl apply -f webserver-httproute.yaml

To list the HTTPRoute custom resource, use the following command.

kubectl -n webserver get httproute

Describe the httproute Custom Resource to get more detailed information.

kubectl -n webserver describe httproute webserver-httproute

If the configurations are correctly done, we can see the Gateway API Controller, Gateway Class, and Gateway information along with the routing rules.

Before we check our application, we must ensure that the Gateway Custom Resource is updated with the routes.


This clearly shows that when we create an HTTPRoute Custom Resource, the Gateway resource is automatically updated.

Now, we can check our application over the browser.

Paste the Load Balancer DNS name as a URL in any web browser.


If you check the Nginx controller Pod configuration, you can see an upstream block with the details of the Pods IPs.

kubectl -n nginx-gateway exec -it <CONTROLLER POD NAME> -c nginx -- nginx -T

This is how the controller registers the Pods in its internal configuration and routes traffic to them.

We can list the Pod IPs to ensure the IP mapping is done correctly with the Nginx Pod.

kubectl -n webserver get po -o wide

Advanced Traffic Routing Based on Conditions

The Gateway API can do path-based, host-based, header-based, and method-based routing.

Since this is an introduction to the Gateway API, we only cover path-based and host-based routing here.
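For reference, header-based routing only adds a headers condition to an HTTPRoute rule. The sketch below uses hypothetical header and service names:

rules:
- matches:
  - path:
      type: PathPrefix
      value: /
    headers:
    - name: x-env          # only requests carrying "x-env: canary" match this rule
      value: canary
  backendRefs:
  - name: canary-service
    port: 80

Requests without the matching header fall through to the other rules in the HTTPRoute.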

Step 1: Create a Gateway Custom Resource

Let's start with the Gateway resource. This time, we are creating a Gateway with almost the same configuration as in the earlier demo.

Before creating the Gateway, create a Namespace named colors.

kubectl create namespace colors

Create a Gateway in colors namespace.

cat <<EOF > colors-gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: colors-gateway
  namespace: colors
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
EOF

Here, we have changed the Gateway name to colors-gateway and the Namespace to colors, and added an allowedRoutes block that allows routes from any Namespace to attach to this Gateway.

kubectl apply -f colors-gateway.yaml

After the deployment, if we list the Gateways, we can see that the Load Balancer provisioned by the Gateway Controller is attached to the Gateway.

kubectl -n colors get gateway

We can verify this from the AWS console as well.


We can use the DNS name of the Load Balancer to test our application, but that wouldn't be very convenient.

So, configuring local DNS resolution will make things easier.

To set up local DNS resolution, get the public IPs of the Load Balancer, choose a hostname (e.g., dev.techiescamp.com), and configure it in /etc/hosts.

Use the following command to get the public IPs of the Load Balancer.

dig +short <LOAD BALANCER DNS NAME>

This will list the public IPs of the Load Balancer.

Open /etc/hosts on your local machine and map the IP addresses to the hostname dev.techiescamp.com.

sudo vim /etc/hosts
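An entry maps each public IP to the chosen hostname. For example (the IPs below are placeholders; use the ones returned by dig):

203.0.113.10    dev.techiescamp.com
203.0.113.11    dev.techiescamp.com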

You can use any name instead of dev.techiescamp.com; it is only resolved on your local machine.

Note: If you want the application to be accessible to anyone over the internet with a domain name, you have to configure the Load Balancer DNS records with a DNS server (for example, Route 53).

Step 2: Deploy Applications for Demo

We are deploying two demo applications in the same Namespace to explain the workflow of the Gateway API.

cat <<EOF > colors-deployments.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orange-app
  namespace: colors
  labels:
    app: orange-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orange-app
  template:
    metadata:
      labels:
        app: orange-app
    spec:
      containers:
      - name: color-app
        image: techiescamp/go-color-app-demo:latest
        ports:
        - containerPort: 8080
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green-app
  namespace: colors
  labels:
    app: green-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: green-app
  template:
    metadata:
      labels:
        app: green-app
    spec:
      containers:
      - name: color-app
        image: techiescamp/go-color-app-demo:latest
        ports:
        - containerPort: 8080
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
EOF
kubectl apply -f colors-deployments.yaml

Create ClusterIP services for these two applications.

cat <<EOF > colors-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: orange-app-service
  namespace: colors
  labels:
    app: orange-app
spec:
  selector:
    app: orange-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: green-app-service
  namespace: colors
  labels:
    app: green-app
spec:
  selector:
    app: green-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
EOF
kubectl apply -f colors-svc.yaml

Our demo applications are ready. Now we can check the status of the Pods and services of these two applications.

kubectl -n colors get po,svc

This ensures that the application’s deployment and services are correctly done.

Step 3: Create an HTTPRoute Resource for the Applications

So far, external HTTP traffic can only reach the Gateway.

Now, we need to inform the Gateway to route the traffic to the Green and Orange applications.

We can achieve this by creating an HTTPRoute resource with certain conditions.

cat <<EOF > colors-http-route.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: colors-httproute
  namespace: colors
spec:
  parentRefs:
  - name: colors-gateway
    sectionName: http
  hostnames:
  - "dev.techiescamp.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /green
    backendRefs:
    - name: green-app-service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /orange
    backendRefs:
    - name: orange-app-service
      port: 80
EOF
kubectl apply -f colors-http-route.yaml

After the deployment, we must ensure all the routes are correctly configured.

kubectl -n colors describe httproutes colors-httproute

In the first marked section, we can see the hostname that we have configured, dev.techiescamp.com.

In the Parent Refs section, we can see that the Gateway is configured with the HTTPRoute.

The first rule is for the first application (Green): to access it, you have to use the /green path along with the hostname dev.techiescamp.com.

The second rule is for the second application (Orange); the only change is the path, so we have to use /orange with the hostname dev.techiescamp.com.

First, let's try to access the first application (Green).

Open any web browser and paste the URL dev.techiescamp.com/green


The application I have created for this demo outputs the name and IP of the Pod serving the request.

Here, we can see that the traffic has reached the first application (Green), so the routing is configured correctly.

We can check this with CLI as well using the following command.

curl dev.techiescamp.com/green

Now, we can try to access the second application.


The second application is also accessible, which confirms that the paths are configured correctly and the Gateway API routes the traffic to the intended services.

Expose the Gateway API Controller as a NodePort Service

Without a Load Balancer, we can configure external traffic to reach the Gateway API Controller Pod with the help of a NodePort service.

This is not the proper way to route production traffic, but if someone wants to try the Gateway API on a local kubeadm cluster, this method is helpful.

For this, we have to modify the Gateway API Controller installation.

As before, the Gateway API CRDs first need to be installed in the Kubernetes cluster.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml

Before we install the Gateway API Controller (Nginx Gateway Fabric), we need to create a values.yaml configuration file with the following information.

cat <<EOF > dev-values.yaml
service:
  create: true
  type: NodePort
EOF

This configuration overrides the default Gateway API Controller deployment.

Here, we have specified a NodePort service instead of a LoadBalancer, so we can access our application using the Nodes' public IPs.

helm install ngf oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway -f dev-values.yaml

After the deployment, we can check the service.

kubectl -n nginx-gateway get svc

Now, since there is no Load Balancer, the traffic first reaches the Node directly, and from there it goes to the Gateway API Controller.

We are using the second example (Colors) to demonstrate the setup. Note down the NodePort number of the service to access the application; here it is 30859, but your NodePort number might be different.

We need to update the Local DNS configuration as well.

Here, instead of the public IPs of the Load Balancer, we use the public IPs of the Nodes.

To get the public IP of the Nodes, use the following command.

kubectl get no -o wide

We are mapping these public IPs in /etc/hosts


Now, we can use the hostname (dev.techiescamp.com) with the NodePort number 30859 to access the application.
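We can verify this from the CLI as well; replace 30859 with your own NodePort number:

curl dev.techiescamp.com:30859/green
curl dev.techiescamp.com:30859/orange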


We can use this method to test the Gateway API from the local Kubernetes clusters.

Difference between Kubernetes Ingress and Gateway API

  1. Ingress primarily supports the HTTP and HTTPS protocols (Layer 7); the Gateway API also supports TCP, UDP, TLS, and gRPC (for example, databases and messaging systems).
  2. In Ingress, there is only one object, Ingress, which has to hold all the routing configuration; the Gateway API provides dedicated resources for each type of protocol, such as HTTPRoute, TCPRoute, TLSRoute, and UDPRoute.
  3. Ingress is limited to path-based and host-based routing, but the Gateway API can also perform custom HTTP header-based routing, weighted routing, and canary and blue-green routing.
  4. Both the Ingress and Gateway objects are namespace-scoped, but a Gateway can accept routes from other Namespaces via allowedRoutes, and the GatewayClass it references is cluster-scoped, so it can be shared across the whole cluster.
  5. The Gateway API also requires a controller, similar to Ingress, but the list of Gateway API controllers is quite extensive.

Conclusion

This is just an introduction to the Kubernetes Gateway API, so we just covered the basic path and host-based routing, which is similar to Ingress. There are many advanced concepts available, and we will explore them individually.

To learn more about the Gateway API, please refer to the official documentation. If you want to know what functions the Nginx Gateway Fabric controller offers, you can refer to its official documentation.
