In this guide, we will look at setting up Prometheus on Kubernetes using a Helm chart, following best practices.
If you want to learn about all the Kubernetes objects involved in the Prometheus setup, you can follow the Prometheus on Kubernetes guide, where we used plain YAML manifests to deploy Prometheus.
Prerequisites
For this setup, ensure you have the following prerequisites.
- Helm configured on your workstation or the CI server where you want to run the helm commands. (v3.16.3 or higher)
- A working Kubernetes cluster (v1.30 or higher)
Prometheus Helm Chart Repo
The Prometheus community maintains all the Prometheus-related helm charts in the following GitHub repository.
https://github.com/prometheus-community/helm-charts/
This repo contains the Prometheus stack, exporters, the Pushgateway, and more. You can install the charts you need as per your requirements.
To get started, we will deploy the core Prometheus chart that installs the following.
- Prometheus server
- Alertmanager
- Kube State Metrics
- Prometheus Node Exporter
- Prometheus Pushgateway
Except for the Prometheus server, the other components are installed from dependency charts (sub-charts). If you check the Chart.yaml, you will find these chart dependencies listed.
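If you want to see the dependency list yourself, you can print the chart metadata (this assumes the prometheus-community repo has already been added, as shown in Step 1 below):

```shell
# Prints the chart's Chart.yaml, including the dependencies (sub-charts) section.
helm show chart prometheus-community/prometheus
```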
You can refer to this Prometheus Architecture blog to learn the complete workflow of Prometheus and its components.
Install Prometheus Stack Using Helm
Now, let’s get started with the setup.
Follow the steps below to set up Prometheus using the community helm chart.
Step 1: Add the Prometheus Helm Repo
Add the Prometheus chart to your system using the following command.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
You can list all the charts in the repo using the following command. We are going to use the prometheus chart.
helm search repo prometheus-community
Before you deploy the Prometheus helm chart, you can view all the YAML manifests by rendering the chart to plain YAML files using the following command.
helm template prometheus-community prometheus-community/prometheus --output-dir prometheus-manifests
Here is the tree view of the generated manifests, including the sub-chart YAML files.
➜ prometheus-manifests tree
.
└── prometheus
├── charts
│ ├── alertmanager
│ │ └── templates
│ │ ├── configmap.yaml
│ │ ├── serviceaccount.yaml
│ │ ├── services.yaml
│ │ └── statefulset.yaml
│ ├── kube-state-metrics
│ │ └── templates
│ │ ├── clusterrolebinding.yaml
│ │ ├── deployment.yaml
│ │ ├── role.yaml
│ │ ├── service.yaml
│ │ └── serviceaccount.yaml
│ ├── prometheus-node-exporter
│ │ └── templates
│ │ ├── daemonset.yaml
│ │ ├── service.yaml
│ │ └── serviceaccount.yaml
│ └── prometheus-pushgateway
│ └── templates
│ ├── deployment.yaml
│ ├── service.yaml
│ └── serviceaccount.yaml
└── templates
├── clusterrole.yaml
├── clusterrolebinding.yaml
├── cm.yaml
├── deploy.yaml
├── pvc.yaml
├── service.yaml
└── serviceaccount.yaml
From the manifests, you can see that the Prometheus helm chart deploys the following.
- Alertmanager (Statefulset)
- Kube State Metrics (Deployment)
- Prometheus Node Exporter (Daemonset)
- Prometheus Pushgateway (Deployment)
- Prometheus Server (Deployment)
Step 2: Customize Prometheus Helm Chart Configuration Values
While deploying Prometheus, it is very important to know the default values that are part of the values.yaml file.
If you are using the community chart for your project, you should modify the values.yaml file as per your environment's requirements.
You can write all the default values to a values.yaml file using the following command.
helm show values prometheus-community/prometheus > values.yaml
Following are the images used in this Prometheus Helm chart.
- quay.io/prometheus-operator/prometheus-config-reloader
- quay.io/prometheus/prometheus
The subcharts use the following images.
- quay.io/prometheus/alertmanager
- registry.k8s.io/kube-state-metrics/kube-state-metrics
- quay.io/prometheus/node-exporter
- quay.io/prometheus/pushgateway
You can customize the values to your needs. For example, the Prometheus Persistent Volume size is set to 8Gi by default.
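As a sketch, you could keep your overrides in a small custom values file instead of editing the full values.yaml. The key paths below follow the chart's values.yaml layout, but the sizes and retention period are hypothetical; verify both against your chart version.

```shell
# Write a minimal override file (example values only; adjust for your environment).
cat > custom-values.yaml <<'EOF'
server:
  persistentVolume:
    size: 20Gi      # default is 8Gi
  retention: "15d"  # how long to keep metrics
alertmanager:
  persistence:
    size: 2Gi
EOF

# Confirm the overrides were written.
grep "size: 20Gi" custom-values.yaml
```

You can then pass this file to helm with `-f custom-values.yaml` during installation.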
Note: If you are running from a corporate network, you might not have access to these public images. In that case, push the images to your organization's private registry first and then deploy the chart. Also, check whether your security guidelines allow pushing community images to private registries.
Step 3: Deploy Prometheus using the Helm Chart
First, create a namespace named monitoring. We will deploy Prometheus in the monitoring namespace.
kubectl create namespace monitoring
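The plain create command fails if the namespace already exists. If you re-run this setup (for example, from a CI pipeline), an idempotent variant avoids that:

```shell
# Safe to re-run: renders the namespace manifest client-side and applies it.
kubectl create namespace monitoring --dry-run=client -o yaml | kubectl apply -f -
```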
Now, let’s deploy Prometheus using the values.yaml file.
Here, I am adding two parameters to create Persistent Volumes for Prometheus and Alertmanager.
We can do the same configuration in the values.yaml file as well.
I am using an EKS cluster for the demo, so the mentioned storage class gp2 is the default storage class of the EKS cluster.
helm upgrade -i prometheus prometheus-community/prometheus \
--namespace monitoring \
--set alertmanager.persistence.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2"
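After the command completes, you can confirm the release and the storage from the CLI:

```shell
# The STATUS column should read "deployed".
helm list -n monitoring

# The persistent volume claims should show STATUS "Bound" with the gp2 storage class.
kubectl -n monitoring get pvc
```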
On a successful deployment, you will see the release status as deployed, as shown below.
Before we access Prometheus, let's check that all the components are deployed and running properly.
kubectl -n monitoring get all
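If you want to block until everything is up (useful in scripts), you can wait for all pods in the namespace to become Ready:

```shell
# Waits for every pod in the namespace to report Ready; fails after 2 minutes.
kubectl -n monitoring wait --for=condition=Ready pods --all --timeout=120s
```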
Step 4: Port Forward the Prometheus Pod
The above screenshot clearly shows that each Prometheus stack component has a Service of type ClusterIP, so it can only be accessed from inside the cluster.
But we need to access it from our local machine to see the dashboard, so we use port forwarding.
First, start with the Prometheus port forwarding: identify the Prometheus Pod name and store it in an environment variable.
export POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/name=prometheus,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")
To perform the port forward, use the following command.
kubectl --namespace monitoring port-forward $POD_NAME 9090
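While the port-forward command is running, you can also verify the server from another terminal using Prometheus's built-in health endpoints:

```shell
# Both endpoints should return an OK response while the port-forward is active.
curl -s http://localhost:9090/-/healthy
curl -s http://localhost:9090/-/ready
```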
If the port forwarding is done properly, you will see the following output.
Don’t close the terminal; meanwhile, open a web browser on the same machine and go to http://localhost:9090
In the Targets section, we can see the cluster resources that Prometheus is monitoring by default.
Now, we can try the Alertmanager port forwarding to see the dashboard.
export POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/name=alertmanager,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")
To port forward to the Alertmanager Pod, use the following command.
kubectl --namespace monitoring port-forward $POD_NAME 9093
Same as with Prometheus, use the localhost URL from the browser, but this time use port 9093
Alertmanager is also working correctly, so the installation was successful.
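Alertmanager exposes the same style of health endpoint as Prometheus, so you can confirm it from the CLI as well while the port-forward is running:

```shell
# Should return an OK response while the Alertmanager port-forward is active.
curl -s http://localhost:9093/-/healthy
```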
Here, we have explored the port-forward method to expose the Prometheus application.
Note: If you want a static endpoint to access Prometheus via internal or external DNS, you can use a NodePort or LoadBalancer type service. You can also use an Ingress to expose it via DNS. For TLS, use the Ingress TLS configuration.
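As one possible sketch, you could switch the Prometheus server service to NodePort through the chart values. The value path below follows the chart's values.yaml layout; verify it against your chart version before relying on it.

```shell
# Re-run the release, keeping existing values and only changing the service type.
helm upgrade -i prometheus prometheus-community/prometheus \
  --namespace monitoring \
  --reuse-values \
  --set server.service.type=NodePort
```

The same approach works for `LoadBalancer`, though on a cloud cluster that will provision an external load balancer with its associated cost.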
Conclusion
This guide covered a basic installation of the Prometheus stack using Helm. You will need to configure it further to monitor your own applications and endpoints.
For advanced configuration or a production-level setup, you can make use of the Prometheus Operator, where every configuration is available as a Kubernetes CRD.