In this blog, we will learn how to integrate TLS certificates into a Kubernetes cluster using Cert Manager and the Let’s Encrypt Certificate Authority.
SSL/TLS certificates are essential for Kubernetes Ingress objects to secure communication between users and the application.
Cert Manager provisions the certificates, tracks their validity, and renews them at the right time.
Cert Manager works with Certificate Authorities (CAs) such as Let’s Encrypt, HashiCorp Vault, etc.
Cert Manager Workflow
The diagram below explains how Cert Manager works with the Kubernetes cluster to provision and manage the TLS certificates that safeguard the Ingress.
- The Ingress object is created with a reference to a Cert Manager Issuer.
- The Ingress Controller reads the information from the Ingress object, and a certificate is requested from Cert Manager.
- Cert Manager requests the certificate from the Certificate Authority, for example, Let’s Encrypt.
- After verification, the CA generates the certificate and provides it to Cert Manager.
- The generated certificate is stored in Kubernetes as a TLS Secret (see the sketch after this list).
- The Ingress Controller uses the stored certificate to encrypt traffic for TLS termination.
- When a user tries to access the application, the external traffic is routed from the external Load Balancer to the Ingress Controller.
- TLS termination happens at the Ingress Controller using the TLS certificate, and the traffic is then securely routed to the application Pods.
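For illustration, the stored Secret is a standard kubernetes.io/tls Secret; the name and namespace below are placeholders, and the data values are base64-encoded PEM, truncated here:
apiVersion: v1
kind: Secret
metadata:
  name: example-app-tls    # placeholder; cert-manager uses the secretName from the Ingress TLS section
  namespace: example
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...   # certificate chain (truncated)
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0t...   # private key (truncated)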
How to Setup Cert Manager on Kubernetes
Prerequisites:
- Kubernetes Cluster version 1.30 or higher.
- Helm v3.16.3 or higher should be available on the local system.
- Kubectl v1.30 or higher should be available on the local system.
Note: For this tutorial, I am using an AWS EKS cluster, but you can use any Kubernetes cluster.
Step 1: Install Cert Manager on Kubernetes
First, we need to add the Cert Manager Helm repository.
helm repo add jetstack https://charts.jetstack.io --force-update
To update the repository, use the following command.
helm repo update
To get the complete modifiable values of the Helm Chart, use the following command.
helm show values jetstack/cert-manager
You can store the entire output as a YAML file to make modifications or take only the necessary parameters to create a new one.
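For example, to dump all the defaults into a reference file (the filename here is just an example):
helm show values jetstack/cert-manager > cert-manager-default-values.yaml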
For now, I am picking only the necessary parameters and creating a file, dev_cert_manager_values.yaml:
crds:
  enabled: true
  keep: false
Why we have modified these two parameters:
crds.enabled: true – This ensures that the Cert Manager Custom Resource Definitions (CRDs) are deployed. (CRDs are required objects to install Cert Manager.)
crds.keep: false – This ensures that the deployed CRDs are removed once we uninstall Cert Manager from the cluster.
Now, we can install the Cert Manager with the dev_cert_manager_values.yaml file.
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--values dev_cert_manager_values.yaml
Ensure all the Cert Manager-related objects are deployed and running properly.
kubectl get all -n cert-manager
Let’s list the Custom Resource Definitions related to the Cert Manager.
kubectl get crds
certificaterequests – Tracks the status of certificate requests.
certificates – The certificates provisioned by the Certificate Authority (CA).
challenges – Ownership verification of the requester.
clusterissuers – Cluster-scoped certificate issuers, which can create certificates for any Namespace in the cluster.
issuers – Namespace-scoped certificate issuers.
orders – Tracks the requests made to the CA.
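If the cluster already has many CRDs, you can narrow the output down to the Cert Manager ones (their full names all end with cert-manager.io):
kubectl get crds | grep cert-manager.io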
Step 2: Setup Nginx Ingress Controller on Kubernetes
The Ingress Controller is used to route external traffic into the Kubernetes cluster.
When traffic arrives over the internet, the Ingress Controller routes it to the correct destination inside the cluster.
Before installing the Ingress Controller, we need to create a dev_nginx_ingress_values.yaml file, because we are making a few modifications to the Controller.
controller:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-name: apps-ingress
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-028a738bdafc344c6,subnet-094d01de2dd2148c0,subnet-04429e132a1f42826"
Note: If you are testing this on a local cluster such as kubeadm or minikube, use service.type: NodePort instead of service.type: LoadBalancer and omit the annotations, so that you can test the Ingress without a Load Balancer by using the Node IP and the Ingress Controller NodePort; a minimal values file for that case is sketched below.
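For reference, a minimal dev_nginx_ingress_values.yaml for a local cluster could look like this (the AWS-specific annotations are simply dropped):
controller:
  service:
    # No cloud Load Balancer on a local cluster; expose the controller on a NodePort instead.
    type: NodePort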
An overview of the parameters in the Load Balancer values file:
- Load Balancer name (apps-ingress)
- Load Balancer type (Network Load Balancer)
- Load Balancer scheme (internet-facing)
- Subnets (Instead of these subnets, provide yours)
I am using an AWS EKS cluster; by default, the Ingress Controller will provision an internal Classic Load Balancer.
However, I want an external Network Load Balancer to route traffic from outside into the cluster, so I have made the above modifications.
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx \
--create-namespace -f dev_nginx_ingress_values.yaml
Ensure all objects related to the Ingress Controller are running properly.
kubectl get all -n ingress-nginx
The Ingress Controller has provisioned a Network Load Balancer; we can verify that in the AWS Console.
Here, you can see the DNS name of the Load Balancer, and we have to map this to the intended DNS server.
To get more detailed information, click the Load Balancer name.
In the Listeners tab, we can see the protocols and the target groups where the traffic has to route.
In the Network mapping section, we can verify the subnets that we provided in the Nginx Ingress Controller configuration file.
The Resource map section provides a diagram of the traffic routes from outside to the targets, which are the EC2 instances.
Step 3: Install a Demo Application in the Kubernetes Cluster
Before we continue the configuration, I am deploying an application for demo purposes.
I am deploying ArgoCD, but you can use any application.
Add the ArgoCD Helm repository.
helm repo add argo https://argoproj.github.io/argo-helm
Create a values file dev_argocd_values.yaml with the required parameters for the ArgoCD deployment.
server:
  service:
    type: NodePort
Now, we can install ArgoCD in our Kubernetes cluster.
helm install argocd argo/argo-cd \
--namespace argocd \
--create-namespace \
--values dev_argocd_values.yaml
After the installation, ensure all related resources are running properly.
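For example, list everything in the argocd namespace and check that the Pods are in the Running state:
kubectl get all -n argocd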
Refer to this blog for more detailed information about the ArgoCD installation and configuration.
We can check the ArgoCD web interface using the NodePort and the public IP of any of the instances.
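To find the assigned NodePort, check the argocd-server Service:
kubectl -n argocd get svc argocd-server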
We can view the web interface successfully, which confirms that ArgoCD is running properly.
At the beginning of the URL, we can see that the traffic is still served over the insecure HTTP protocol.
Step 4: Map the AWS ELB DNS A Record on Route 53
Note: You can skip this step if you don’t have a DNS server or don’t want to configure it with the DNS server.
Now, I am going to map the Load Balancer DNS name on the DNS server (Route 53) to an easily memorable domain name, which is argocd.devopsproject.dev.
I already have a hosted zone on Route 53; you can use your own DNS provider.
We have to create a new DNS record for the Load Balancer; devopsproject.dev is my hosted zone.
I am giving the prefix argocd, so the domain name will be argocd.devopsproject.dev.
The record type should be A because the Network Load Balancer provides an A record.
You need to select the Load Balancer type and the region, then click the Create Records button to create a record on Route 53.
I am making a DNS query to ensure that the domain name argocd.devopsproject.dev points to the Load Balancer and that the query is resolved by the DNS server (Route 53).
dig argocd.devopsproject.dev
Step 5: Create a Cluster Issuer Object in the Cluster
We need to decide which certificate issuer we will get the certificates from.
I am choosing Let’s Encrypt for now.
Create a manifest for the ClusterIssuer object, cluster-issuer.yaml:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dev
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-dev
    solvers:
      - http01:
          ingress:
            class: nginx
This object creates a cluster-scoped issuer, which means you can create certificates in any Namespace in the cluster.
Replace the email ID with yours; you will be notified before the certificate expires.
spec.acme.solvers.http01.ingress.class: nginx is the default Ingress class of the Nginx Ingress Controller.
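If you want to experiment without hitting Let’s Encrypt production rate limits, you can point the issuer at the Let’s Encrypt staging endpoint instead; certificates issued there are not trusted by browsers, and the resource name letsencrypt-staging below is just an example:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint: generous rate limits, but the issued certificates are not browser-trusted.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx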
kubectl apply -f cluster-issuer.yaml
We have to ensure that the object is successfully created.
kubectl describe clusterissuer letsencrypt-dev
Step 6: Create an Ingress Object with the Certificate Issuer
Create a manifest for the Ingress object, argo-cert-manager-ingress.yaml, and add the following contents.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    cert-manager.io/cluster-issuer: letsencrypt-dev
spec:
  ingressClassName: nginx
  rules:
    - host: argocd.devopsproject.dev
      http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 443
            path: /
  tls:
    - hosts:
        - argocd.devopsproject.dev
      secretName: argocd-ingress-http
The Ingress object should be created in the Namespace where your application is running; in my case, the ArgoCD application is running in the argocd Namespace (metadata.namespace: argocd).
Integrating Cert Manager with the Ingress object is very simple: we just pass an annotation with the name of the issuer, metadata.annotations.cert-manager.io/cluster-issuer: letsencrypt-dev.
In the rules section, we map the application Service to the hostname spec.rules.host: argocd.devopsproject.dev.
Finally, we add the TLS certificate for the hostname under spec.tls.hosts: argocd.devopsproject.dev.
kubectl apply -f argo-cert-manager-ingress.yaml
The TLS certificate will be generated only after the Ingress object is deployed.
Ensure that the Ingress Object is created and verify the status.
kubectl describe ingress -n argocd argocd-server-ingress
So, each time you create an Ingress object for an application, the TLS certificate will automatically be generated and attached to the resource.
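To watch the issuance progress, you can check the Certificate resource that Cert Manager creates for the Ingress (it is named after the secretName, argocd-ingress-http in this case) and wait for the READY column to become True:
kubectl -n argocd get certificate argocd-ingress-http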
Now, we can check the Secret because the TLS certificate will be stored as a secret in the cluster.
To list the secrets, use the following command.
kubectl -n argocd get secrets
If you want to see the contents of the secret, use the following command.
kubectl -n argocd get secrets argocd-ingress-http -o yaml
If you want to know more details about the generated certificate, we can describe the certificates custom resource.
kubectl -n argocd describe certificates argocd-ingress-http
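You can also decode the issued certificate straight from the Secret and inspect it with openssl (assuming openssl is available on your local system):
kubectl -n argocd get secret argocd-ingress-http -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -issuer -subject -dates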
We can now check our application with the hostname and ensure the TLS Certificate is attached.
TLS termination happens at the Ingress Controller when the external traffic reaches it.
In the web browser, we can see that our application now uses the secure HTTPS protocol, and we can also see the Certificate Authority and validity information.
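You can also verify the certificate that is actually being served from any machine with openssl installed:
echo | openssl s_client -connect argocd.devopsproject.dev:443 -servername argocd.devopsproject.dev 2>/dev/null | openssl x509 -noout -issuer -dates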
Conclusion
Cert Manager will track the certificates that have been created and renew them before their expiration.
By default, the certificates that Cert Manager obtains from Let’s Encrypt are valid for 90 days; Cert Manager takes care of the renewal, but if you want to tune the renewal behaviour yourself, you can do that as well, as sketched below.
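For example, here is a minimal sketch of a standalone Certificate resource with explicit renewal settings, reusing the letsencrypt-dev ClusterIssuer from this tutorial (the certificate and secret names are just examples):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: argocd-cert            # example name
  namespace: argocd
spec:
  secretName: argocd-cert-tls  # example secret name
  duration: 2160h              # 90 days, the maximum Let’s Encrypt allows
  renewBefore: 720h            # start renewing 30 days before expiry
  dnsNames:
    - argocd.devopsproject.dev
  issuerRef:
    name: letsencrypt-dev
    kind: ClusterIssuer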
Cert Manager can also generate self-signed certificates, so you can use that capability if required. To know more about Cert Manager, please refer to the official documentation.