Kubernetes Gateway API Tutorial for Beginners

In this blog, you will learn the basics of the Kubernetes Gateway API, including GatewayClass, Gateway, and HTTPRoute, with a step-by-step tutorial for beginners.

By the end of this guide, you will have learned:

  • What the Kubernetes Gateway API is
  • Key concepts of the Gateway API
  • Practical implementation of Gateway API controllers
  • How to use Gateway API objects like GatewayClass, Gateway, and HTTPRoute
  • How to implement path-based routing using the Gateway API and more.

Let's get started.

What is Kubernetes Gateway API?

As the name suggests, the Gateway API is a Kubernetes feature that helps create gateways for external traffic entering your cluster.

Ingress is the traffic routing mechanism primarily used in Kubernetes environments. However, it comes with several limitations. For example, it supports only Layer 7 HTTP-based traffic.

To overcome these Ingress limitations, the Kubernetes Gateway API was developed.

The Gateway API has the following key features.

  1. The Gateway API can perform L4 and L7 routing based on HTTP, gRPC, or TCP/UDP (experimental).
  2. Can route traffic based on HTTP headers.
  3. Can perform cross-namespace routing.
  4. Supports weighted traffic routing, blue-green deployments, canary releases, etc.
  5. Reduced reliance on vendor-specific controller annotations, making configurations more portable across environments.
  6. The Gateway API also works well with service mesh integrations such as Istio and Linkerd.

In summary, the Gateway API is an improved version of Kubernetes Ingress that offers more powerful and flexible traffic management.
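As an example of the weighted routing mentioned above, here is a minimal HTTPRoute sketch (the Gateway and Service names are hypothetical) that splits traffic roughly 90/10 between a stable and a canary backend:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-split        # hypothetical route name
spec:
  parentRefs:
  - name: my-gateway        # hypothetical Gateway
  rules:
  - backendRefs:
    - name: app-stable      # hypothetical stable Service
      port: 80
      weight: 90            # ~90% of requests
    - name: app-canary      # hypothetical canary Service
      port: 80
      weight: 10            # ~10% of requests

We won't use weights in this tutorial, but this shows how canary-style splits are handled natively, without vendor annotations.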

Kubernetes Gateway Concepts

Before we get into the hands-on section, let's understand the key concepts behind the Gateway API. It will help you understand the end-to-end traffic flow and its features better.

Gateway API Controller

As we learned in the Ingress lesson, even though we define routing rules in the Ingress object, the actual routing is handled by the Ingress Controller.

The same concept applies to the Gateway API.

While the Gateway API provides many objects to manage cluster traffic, the actual routing is done by a Gateway API Controller. This controller is not built into Kubernetes. You need to set up a third-party (vendor) controller, just like with Ingress.

Gateway API Resources

Following are the key resources that are part of the Kubernetes Gateway API.

  1. GatewayClass - This resource is used to select the Gateway controller that will manage the Gateway resources. It tells the system which controller should handle the traffic routing. It is similar to IngressClass in Kubernetes Ingress.
  2. Gateway - The Gateway resource defines how external traffic enters the cluster. Think of it as the main entry point for requests coming from outside the cluster.
  3. HTTPRoute - The HTTPRoute resource defines how HTTP traffic should be routed to applications inside the cluster once it reaches the Gateway.
  4. GRPCRoute - Resource to manage gRPC traffic.
  5. ReferenceGrant - This resource is used to securely reference resources in other namespaces, primarily for cross-namespace routing (see the sketch after this list).
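As a quick illustration of the ReferenceGrant resource, the following is a minimal sketch (the namespaces and names are hypothetical) that allows HTTPRoutes in a frontend namespace to reference Services in a backend namespace:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-frontend-routes     # hypothetical name
  namespace: backend              # created in the namespace that owns the target Services
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: frontend           # routes in this namespace may reference...
  to:
  - group: ""                     # core API group
    kind: Service                 # ...Services in the backend namespace

We won't need a ReferenceGrant in this tutorial because the routes and Services live in the same namespace.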

Complete Gateway API Traffic Flow

Now that we have looked into the key Gateway API concepts, let's understand how everything works together to handle traffic.

The following workflow explains how traffic is routed from the outside world to the cluster Pods through the Kubernetes Gateway API resources.

Kubernetes Gateway API Traffic Flow Explained

Here is how it works.

  1. The Gateway Controller runs as a Pod inside the Kubernetes cluster and is capable of routing traffic to the endpoints of Kubernetes Services (Pods). It is a reverse proxy implementation, similar to an Ingress controller.
  2. When a user tries to access the application, the traffic enters the external Load Balancer and then reaches the Gateway API Controller.
  3. The Gateway API Controller keeps watching its custom resources, such as Gateway, HTTPRoute, etc.
  4. All the configurations from the Gateway and HTTPRoute custom resources get translated into routing configurations in the controller.
  5. Then the traffic is routed from the Gateway API Controller to the intended service backends based on those configurations and rules.

Setup Prerequisites

Following are the setup prerequisites:

  1. Kubernetes cluster v1.30 or higher.
  2. Helm v3.16 or higher installed.

Setup Gateway API in Kubernetes Cluster

The Gateway API setup consists of three important sections.

  1. Gateway API CRD installation
  2. Gateway API controller installation
  3. Gateway API object creation and traffic validation.

We will look at all the steps in detail.

Install Gateway API CRDs

The Gateway API objects are not available as native objects in Kubernetes. We need to enable them by installing the Gateway API Custom Resource Definitions (CRDs).

To install the Gateway API CRDs, use the following command. To get the latest version of the CRD, please visit this page. At the time of updating this guide, v1.3.0 is the latest version.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml

Now, validate the installed CRDs using the following command. You should see the five CRD resources we discussed earlier.

$ kubectl get crds | grep gateway

gatewayclasses.gateway.networking.k8s.io     2025-05-21T13:49:23Z
gateways.gateway.networking.k8s.io           2025-05-21T13:49:29Z
grpcroutes.gateway.networking.k8s.io         2025-05-21T13:49:32Z
httproutes.gateway.networking.k8s.io         2025-05-21T13:49:35Z
referencegrants.gateway.networking.k8s.io    2025-05-21T13:49:36Z

You can also check the API resources to get more details about the registered CRDs.

$ kubectl api-resources --api-group=gateway.networking.k8s.io

NAME              SHORTNAMES   APIVERSION                          NAMESPACED   KIND
gatewayclasses    gc           gateway.networking.k8s.io/v1        false        GatewayClass
gateways          gtw          gateway.networking.k8s.io/v1        true         Gateway
grpcroutes                     gateway.networking.k8s.io/v1        true         GRPCRoute
httproutes                     gateway.networking.k8s.io/v1        true         HTTPRoute
referencegrants   refgrant     gateway.networking.k8s.io/v1beta1   true         ReferenceGrant

Install Gateway API Controller

There are various controllers that support the Gateway API; you can refer to the supported controllers list in the official documentation.

For this tutorial, we'll be using the NGINX Gateway Fabric Controller, which is now generally available (GA).

We'll deploy the controller using Helm. To pull the Helm chart, run the following command:

helm pull oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --untar

This gives us a local copy of the Helm chart so we can modify it and store it in a version control system like GitHub.

The modifiable values are available in the values.yaml file, and we can change them as per our requirements.

The following is the directory structure of the NGINX Gateway Fabric Controller Helm chart.

$ tree nginx-gateway-fabric
nginx-gateway-fabric
├── Chart.yaml
├── README.md
├── crds
│   ├── gateway.nginx.org_clientsettingspolicies.yaml
│   ├── gateway.nginx.org_nginxgateways.yaml
│   ├── gateway.nginx.org_nginxproxies.yaml
│   ├── gateway.nginx.org_observabilitypolicies.yaml
│   └── gateway.nginx.org_snippetsfilters.yaml
├── templates
│   ├── _helpers.tpl
│   ├── clusterrole.yaml
│   ├── clusterrolebinding.yaml
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── gatewayclass.yaml
│   ├── nginxgateway.yaml
│   ├── nginxproxy.yaml
│   ├── scc.yaml
│   ├── service.yaml
│   └── serviceaccount.yaml
├── values.schema.json
└── values.yaml
💡
In the controller Helm chart, the default service type in values.yaml is set to LoadBalancer. This means that when you deploy it in a cloud environment, it will automatically provision a cloud load balancer.

If you're not deploying in a cloud environment, you'll need to expose the controller service as a NodePort instead. Refer to the NodePort section later in this guide for instructions on how to deploy the controller with this configuration.

Following are the container images the controller chart uses.

  1. ghcr.io/nginxinc/nginx-gateway-fabric:1.5.1
  2. ghcr.io/nginxinc/nginx-gateway-fabric/nginx:1.5.1
💡
If you're setting this up within a corporate network, access to public images might be restricted. In that case, you should first push the required images to your organization's private registry, then update the image references in the chart before deploying it.

Also, make sure to review your company's security policies to confirm whether pushing community images to private registries is permitted.
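If you do need to point the chart at a private registry, the override is typically a small values file like the sketch below. The registry URL is a placeholder, and the exact keys can differ between chart versions, so verify them against the values.yaml you pulled:

cat <<EOF > private-registry-values.yaml
# Sketch only: confirm these keys in the chart's values.yaml before using.
nginxGateway:
  image:
    repository: registry.example.com/mirrors/nginx-gateway-fabric         # placeholder registry
    tag: 1.5.1
nginx:
  image:
    repository: registry.example.com/mirrors/nginx-gateway-fabric/nginx   # placeholder registry
    tag: 1.5.1
EOF

You would then pass this file to the helm install command with -f private-registry-values.yaml, the same way dev-values.yaml is used later in the NodePort section.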

We're not making any changes to the controller configuration at this stage.

We'll go ahead and install the NGINX Gateway Fabric Controller directly using the following command.

helm install ngf oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway

Once the installation is complete, make sure all the controller components are running correctly by using the following command.

kubectl -n nginx-gateway get all

As you can see, the controller pods are up and running without any issues.

Since we're deploying this setup on a cloud platform, a corresponding cloud load balancer is automatically provisioned as part of the process.

In this tutorial, we're using AWS, so an AWS Load Balancer has been created to handle external traffic.

💡
By default, the controller is installed with a service of type LoadBalancer.

In AWS, this means the Gateway API Controller will provision a Classic Load Balancer to handle incoming traffic.

In Azure, it creates a Standard Public Load Balancer.

In GCP, it provisions a Google Cloud External Load Balancer to route traffic to the backend pods.

Validate Gateway Class

When we deployed the NGINX Gateway Fabric controller, it automatically created the GatewayClass, as it is part of the Helm templates (templates/gatewayclass.yaml).

💡
This may not apply to every implementation. If the GatewayClass isn't included in the controller Helm chart, you'll need to create it manually.
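If you do have to create it yourself, the manifest is small. The sketch below assumes the controller name used by NGINX Gateway Fabric; for other controllers, use the controllerName documented by that vendor:

cat <<EOF > gatewayclass.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: nginx
spec:
  # The controller that will manage Gateways of this class.
  # gateway.nginx.org/nginx-gateway-controller is the value NGINX Gateway Fabric uses;
  # other vendors document their own controllerName.
  controllerName: gateway.nginx.org/nginx-gateway-controller
EOF
kubectl apply -f gatewayclass.yaml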

Let's list the created GatewayClass using the following command.

kubectl get gatewayclass

We can describe the GatewayClass to get the detailed information.

kubectl describe gatewayclass nginx

When you describe the GatewayClass, you'll see details like its status, controller name, and other relevant information.

Before creating the Gateway object, make sure to note down the GatewayClass name, as you'll need it for the Gateway configuration.

Now, let's start with a simple application to see how traffic is routed through the Gateway API.

Deploy a Demo Application

To test the Gateway API implementation, we will deploy an NGINX web server and expose it as a ClusterIP service.

cat <<EOF > nginx-deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: webserver
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: webserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.21
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: webserver
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF

To deploy this, use the following command.

kubectl apply -f nginx-deploy.yaml

To get the status of the deployed components:

kubectl -n webserver get all

This confirms that the deployed web server is running properly and that the service has been created for the deployment.
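Optionally, you can verify the service from inside the cluster before wiring up the Gateway. One quick way (assuming your cluster can pull the public curlimages/curl image) is a temporary Pod:

kubectl -n webserver run tmp-curl --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s http://nginx-service

You should see the default NGINX welcome page HTML, and the temporary Pod is deleted automatically when the command exits.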

Create Gateway object

The Gateway acts as the entry point for traffic entering the cluster via the Gateway API. This means that once traffic reaches the controller, the Gateway determines how it should be handled based on the defined listeners.

💡
The Gateway is a namespace-scoped object. By default, only routes in the same namespace can attach to it, although this can be relaxed with allowedRoutes, as we will see later.

Let's create a Gateway for the demo application. We are creating the Gateway in the webserver namespace.

cat <<EOF > web-gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
  namespace: webserver
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      hostname: "*.devopsproject.dev"
EOF

🧱 Key parts of this YAML you should know:

  1. gatewayClassName: nginx : Associates this Gateway with the GatewayClass named nginx, the one that was created along with the controller.
  2. hostname: "*.devopsproject.dev": This means the listener will only accept traffic for domains like,
    1. app.devopsproject.dev
    2. api.devopsproject.dev, etc.
💡
The * is a wildcard, so it matches anything before .devopsproject.dev.

So, why use a wildcard DNS name?

Because when you have many services like app, api, user, etc., you want one Gateway to handle all of them without creating separate listeners.
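For instance, a hypothetical HTTPRoute could attach to this single listener simply by declaring a hostname that falls under the wildcard, roughly like this (the route and Service names are placeholders):

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route             # hypothetical route
  namespace: webserver
spec:
  parentRefs:
  - name: web-gateway
  hostnames:
  - "app.devopsproject.dev"   # matches the *.devopsproject.dev listener
  rules:
  - backendRefs:
    - name: app-service       # hypothetical Service
      port: 80

A second route using api.devopsproject.dev would attach to the same listener in exactly the same way.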

Now, let's create the Gateway.

kubectl apply -f web-gateway.yaml

To list the Gateway from webserver Namespace, use the following command.

kubectl -n webserver get gateway

To get more detailed information, execute:

kubectl -n webserver describe gateway web-gateway

Here, we can see that the GatewayClass is configured with the Gateway and also see the DNS name of the AWS Load Balancer.

No routes are attached in the Listeners section; we will configure them in the next step.

This indicates that we haven't configured any routes for our services with the Gateway yet.

The last section shows the default supported Kinds, which are HTTPRoute and GRPCRoute.

Create an HTTPRoute Custom Resource

HTTPRoute is a Gateway API custom resource that contains the configuration to route traffic to HTTP/HTTPS-based applications.

Since the demo application is an NGINX web server serving HTTP traffic, this is the resource we need to create.

This custom resource can handle functions such as path-based routing, hostname-based routing, custom header routing, and cross-namespace routing.

cat <<EOF > webserver-httproute.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: webserver-httproute
  namespace: webserver
spec:
  parentRefs:
  - name: web-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: nginx-service
      port: 80
EOF
kubectl apply -f webserver-httproute.yaml

To list the HTTPRoute custom resource, use the following command.

kubectl -n webserver get httproute

Describe the HTTPRoute custom resource to get more detailed information.

kubectl -n webserver describe httproute webserver-httproute

If the configuration is correct, we can see the Gateway API Controller, GatewayClass, and Gateway information along with the routing rules.

Before we check our application, we must ensure that the Gateway Custom Resource is updated with the routes.

kubectl -n webserver describe gateway web-gateway

This clearly shows that when we create an HTTPRoute custom resource, the Gateway resource is automatically updated with the attached route.

Now, we can check our application in a browser.

Paste the Load Balancer DNS name as the URL in any web browser.


If you check the NGINX controller Pod configuration, you can see an upstream block with the Pod IPs.

kubectl -n nginx-gateway exec -it <CONTROLLER POD NAME> -c nginx -- nginx -T

This is how the controller registers the Pods in its internal configuration and routes traffic to them.

We can list the Pod IPs to make sure the IP mapping matches the NGINX Pods.

kubectl -n webserver get po -o wide

Advanced Traffic Routing Based on Conditions

The Gateway API can do path-based, host-based, header, and method-based routing.

Since this is an introduction to the Gateway API, the hands-on part covers only path-based and host-based routing; a short sketch of header and method matching is shown below for reference.
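Header-based and method-based routing use the same matches block we use for paths. Here is a hedged fragment of an HTTPRoute rules section (the header name and Service are hypothetical):

rules:
- matches:
  - method: GET
    headers:
    - name: x-canary          # hypothetical header
      value: "true"
  backendRefs:
  - name: canary-service      # hypothetical Service
    port: 80

Requests that are GET and carry x-canary: true would be sent to canary-service; everything else falls through to the other rules.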

Step 1: Create a Gateway Custom Resource

Let's start with the Gateway resource. This time we are creating a Gateway similar to the one we created for the earlier demo.

Before creating the Gateway, create a namespace named colors.

kubectl create namespace colors

Create a Gateway in the colors namespace.

cat <<EOF > colors-gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: colors-gateway
  namespace: colors
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
EOF

Here, we have changed the Gateway name to colors-gateway and the namespace to colors, removed the hostname, and added an allowedRoutes setting so that routes from any namespace can attach to this listener.

kubectl apply -f colors-gateway.yaml

After the deployment, if we list the Gateways, we can see that the Load Balancer provisioned by the Gateway Controller is attached to the Gateway.

kubectl -n colors get gateway

We can verify this from the AWS console as well.


We can use the DNS name of the Load Balancer to test our application, but that isn't very convenient.

So configuring local DNS resolution will make things easier.

To set up local DNS resolution, get the public IPs of the Load Balancer, choose a hostname (for example, dev.techiescamp.com), and configure it in /etc/hosts.

Use the following command to get the public IPs of the Load Balancer.

dig +short <LOAD BALANCER DNS NAME>

This will list the public IPs of the Load Balancer.

Open /etc/hosts on your local machine and map the IP addresses to the hostname dev.techiescamp.com.

sudo vim /etc/hosts
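The entries would look something like this (the IP addresses below are placeholders; use the ones returned by dig):

# /etc/hosts
203.0.113.10   dev.techiescamp.com
203.0.113.11   dev.techiescamp.com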

You can use any name instead of dev.techiescamp.com; it is only resolved on your local machine.

Note: If you want the application to be accessible to anyone over the internet with a domain name, you have to create DNS records for the Load Balancer in a DNS service (for example, Route 53).

Step 2: Deploy Applications for Demo

We will deploy two demo applications in the same namespace to demonstrate the workflow of the Gateway API.

cat <<EOF > colors-deployments.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orange-app
  namespace: colors
  labels:
    app: orange-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orange-app
  template:
    metadata:
      labels:
        app: orange-app
    spec:
      containers:
      - name: color-app
        image: techiescamp/go-color-app:latest
        ports:
        - containerPort: 8080
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green-app
  namespace: colors
  labels:
    app: green-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: green-app
  template:
    metadata:
      labels:
        app: green-app
    spec:
      containers:
      - name: color-app
        image: techiescamp/go-color-app:latest
        ports:
        - containerPort: 8080
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
EOF
kubectl apply -f colors-deployments.yaml

Create ClusterIP services for these two applications.

cat <<EOF > colors-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: orange-app-service
  namespace: colors
  labels:
    app: orange-app
spec:
  selector:
    app: orange-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: green-app-service
  namespace: colors
  labels:
    app: green-app
spec:
  selector:
    app: green-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
EOF
kubectl apply -f colors-svc.yaml

Our demo applications are ready; now we can check the status of the Pods and Services of these two applications.

kubectl -n colors get po,svc

This confirms that the application Deployments and Services were created correctly.

Step 3: Create an HTTPRoute Resource for the Applications

So far, external HTTP traffic can only reach the Gateway.

Now, we need to tell the Gateway how to route the traffic to the Green and Orange applications.

We can achieve this by creating an HTTPRoute resource with the required match conditions.

cat <<EOF > colors-http-route.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: colors-httproute
  namespace: colors
spec:
  parentRefs:
  - name: colors-gateway
    sectionName: http
  hostnames:
  - "dev.techiescamp.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /green
    backendRefs:
    - name: green-app-service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /orange
    backendRefs:
    - name: orange-app-service
      port: 80
EOF
kubectl apply -f colors-http-route.yaml

After the deployment, we must ensure all the routes are correctly configured.

kubectl -n colors describe httproutes colors-httproute

In the first marked section, we can see the hostname that we configured, dev.techiescamp.com.

In the Parent Refs section, we can see that the Gateway is associated with the HTTPRoute.

The first part of the rules section is for the first application (Green): to access that application, you have to use the /green path along with the hostname dev.techiescamp.com.

The second part is for the second application (Orange); the only condition changed is the path, so you have to use /orange with the hostname dev.techiescamp.com.

First, let's try to access the first application (Green).

Open any web browser and enter the URL dev.techiescamp.com/green.


The application used for this demo prints the name and IP of the Pod that served the request.

Here, we can see that the traffic has reached the first application (Green), so the routing is configured correctly.

We can check this with the CLI as well, using the following command.

curl dev.techiescamp.com/green

Now, we can try to access the second application.


The second application is also accessible, confirming that the paths are configured correctly and the Gateway API routes the traffic to the intended services.
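As with the Green application, you can verify it from the CLI as well:

curl dev.techiescamp.com/orange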

Expose the Gateway API Controller as a NodePort Service

Without a Load Balancer, we can let external traffic reach the Gateway API Controller Pod through a NodePort service.

This is not the recommended way to route production traffic, but if you want to try the Gateway API on a local kubeadm cluster, this method is helpful.

For this, we have to modify the Gateway API Controller installation.

In this method, the Gateway API CRDs first need to be installed in the Kubernetes cluster.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml

Before we install the Gateway API Controller (NGINX Gateway Fabric Controller), we need to create a values.yaml configuration file with the following content.

cat <<EOF > dev-values.yaml
service:
  create: true
  type: NodePort
EOF

This configuration overrides the default Gateway API Controller deployment values.

Here, we specify that instead of a LoadBalancer, we are using a NodePort service, so that we can access our application using the Node's public IP.

helm install ngf oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway -f dev-values.yaml

After the deployment, we can check the service.

kubectl -n nginx-gateway get svc
Deploying the gateway api controller with nodeport service.

Now there is no Load Balancer, so the traffic first reaches the Node directly, and from there it goes to the Gateway API Controller.

We will use the second example (Colors) to demonstrate this setup. Note down the NodePort number of the service to access the application; in this case it is 30859, but your NodePort number might be different.

We need to update the Local DNS configuration as well.

Here, instead of using the public IP of the Load Balancer, we use the public IP of the Node.

To get the public IP of the Nodes, use the following command.

kubectl get no -o wide

We map these public IPs to the hostname in /etc/hosts.


Now, we can use the hostname (dev.techiescamp.com) with the NodePort number 30859 to access the application.
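For example, from the machine where you edited /etc/hosts (assuming NodePort 30859; substitute your own port number):

curl http://dev.techiescamp.com:30859/green
curl http://dev.techiescamp.com:30859/orange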


We can use this method to test the Gateway API on local Kubernetes clusters.

Difference between Kubernetes Ingress and Gateway API

  1. Ingress primarily supports the HTTP and HTTPS (Layer 7) protocols, whereas the Gateway API also supports TCP, UDP, TLS, and gRPC (for example, databases and messaging systems).
  2. With Ingress, all routing configuration goes into a single Ingress object, but the Gateway API provides dedicated resources for each protocol, such as HTTPRoute, TCPRoute, TLSRoute, and UDPRoute.
  3. Ingress is limited to path-based and host-based routing, but the Gateway API can also perform custom HTTP header-based routing, weighted routing, and canary or blue-green traffic routing.
  4. The Ingress object is namespace-scoped, so only applications in the same namespace can use it. The Gateway object is also namespace-scoped, but routes from other namespaces can be allowed to attach to it (using allowedRoutes and ReferenceGrant), so applications across namespaces can share one Gateway.
  5. The Gateway API also requires a controller, just like Ingress, and the list of available Gateway API controllers is quite extensive.

Conclusion

This is just an introduction to the Kubernetes Gateway API, so we just covered the basic path and host-based routing, which is similar to Ingress. There are many advanced concepts available, and we will explore them individually.

To learn more about the Gateway API, please refer to the official documentation. If you want to know what features the NGINX Gateway Fabric Controller offers, you can refer to its official documentation.
About the author

Arun Lal

Arun Lal is a DevOps Engineer and AWS Community Builder with expertise in AWS infrastructure, Terraform automation, and GitLab CI/CD pipelines.

Your billing was not updated.