If you want to know how your Kubernetes nodes perform, or need system-level insights into them, you need to set up the Prometheus Node Exporter on your Kubernetes cluster.
This guide walks you through setting up the node exporter on a Kubernetes cluster and adding a Prometheus scrape config to scrape the node metrics.
What is Prometheus Node Exporter?
Node exporter is an official Prometheus exporter for capturing Linux system metrics.
It collects the hardware and operating-system-level metrics that are exposed by the kernel.
You can use the node exporter to collect the system metrics from all your Linux systems. Check this article on node monitoring using node-exporter.
Why do we need Node Exporter on Kubernetes?
By default, most Kubernetes clusters expose the metrics server metrics (cluster-level metrics from the Summary API) and cAdvisor (container-level metrics). These do not provide detailed node-level metrics.
To get all the Kubernetes node-level system metrics, you need to have a node exporter running on all the Kubernetes nodes. It collects all the Linux system metrics and exposes them via a /metrics endpoint on port 9100.
Similarly, you need to install Kube State Metrics to get all the metrics related to Kubernetes objects.
The Kubernetes manifests used in this guide are available in the GitHub repository. Clone the repo to your local system.
git clone https://github.com/bibinwilson/kubernetes-node-exporter
Setup Node Exporter on Kubernetes
Note: If you don’t have Prometheus set up, please follow my guide on setting up Prometheus on Kubernetes.
Here is what we are going to do.
- Deploy the node exporter on all the Kubernetes nodes as a daemonset. The daemonset makes sure one instance of node-exporter is running on every node and exposes all the node metrics on port 9100.
- Create a service that listens on port 9100 and points to all the daemonset node exporter pods. We will monitor the service endpoints (node exporter pods) from Prometheus using the endpoints job config. More explanation on this in the Prometheus config part.
Let’s get started with the setup.
Step 1: Create a file named daemonset.yaml and copy the following content.
Note: This Daemonset will be deployed in the monitoring namespace. If you wish to deploy it in a different namespace, change it in the following YAML
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: node-exporter
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: node-exporter
  template:
    metadata:
      labels:
        app.kubernetes.io/component: exporter
        app.kubernetes.io/name: node-exporter
    spec:
      containers:
      - args:
        - --path.sysfs=/host/sys
        - --path.rootfs=/host/root
        - --no-collector.wifi
        - --no-collector.hwmon
        - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
        - --collector.netclass.ignored-devices=^(veth.*)$
        name: node-exporter
        image: prom/node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
        resources:
          limits:
            cpu: 250m
            memory: 180Mi
          requests:
            cpu: 102m
            memory: 180Mi
        volumeMounts:
        - mountPath: /host/sys
          mountPropagation: HostToContainer
          name: sys
          readOnly: true
        - mountPath: /host/root
          mountPropagation: HostToContainer
          name: root
          readOnly: true
      volumes:
      - hostPath:
          path: /sys
        name: sys
      - hostPath:
          path: /
        name: root
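As a side note, the --collector.filesystem.ignored-mount-points flag in the args above takes an extended regular expression. You can sanity-check what it excludes outside the cluster by feeding sample mount points through grep -E; the paths below are purely illustrative:

```shell
# Same ERE as in the DaemonSet args; grep -E prints only the
# mount points the filesystem collector would ignore.
regex='^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)'
printf '/dev\n/proc\n/var/lib/docker/overlay2\n/home\n/data\n' | grep -E "$regex"
# prints /dev, /proc and /var/lib/docker/overlay2
```

Anything not matched (like /home or /data here) will still be reported by the filesystem collector.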
Step 2: Deploy the daemonset using the kubectl command.
kubectl create -f daemonset.yaml
Step 3: List the daemonset in the monitoring namespace and make sure it is in the available state.
kubectl get daemonset -n monitoring
Step 4: Create a file named service.yaml and copy the following contents.
---
kind: Service
apiVersion: v1
metadata:
  name: node-exporter
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9100'
spec:
  selector:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: node-exporter
  ports:
  - name: node-exporter
    protocol: TCP
    port: 9100
    targetPort: 9100
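A note on the prometheus.io/scrape and prometheus.io/port annotations: Prometheus only honors them if your scrape config contains relabel rules that read them; the job used later in this guide filters by endpoint name instead. For reference, a sketch of the standard annotation-based relabeling pattern (adjust the job name to your setup):

```
- job_name: 'annotated-endpoints'
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: 'true'
```

With this pattern, Prometheus keeps only the endpoints of services annotated with prometheus.io/scrape: 'true'.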
Step 5: Create the service.
kubectl create -f service.yaml
Step 6: Now, check the service’s endpoints and see if they are pointing to all the daemonset pods.
kubectl get endpoints -n monitoring
As you can see from the above output, the node-exporter service has three endpoints, meaning three node-exporter pods are running on three nodes as part of the daemonset.
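To spot-check that one of those pods actually serves metrics, you can port-forward to it. A minimal sketch; the pod name below is a placeholder, so substitute a real one from kubectl get pods -n monitoring:

```
# Forward the node exporter port to your local machine
# (node-exporter-xxxxx is a placeholder pod name)
kubectl port-forward -n monitoring pod/node-exporter-xxxxx 9100:9100

# In another terminal, fetch a few raw metrics from the /metrics endpoint
curl -s http://localhost:9100/metrics | grep '^node_' | head
```

You should see plain-text metric lines such as node_cpu_seconds_total and node_memory_MemAvailable_bytes.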
Node-exporter Prometheus Config
We have the node-exporter daemonset running on port 9100 and a service pointing to all the node-exporter pods.
You need to add a scrape config to the Prometheus config file to discover all the node-exporter pods.
Let’s take a look at the Prometheus scrape config required to scrape the node-exporter metrics.
- job_name: 'node-exporter'
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    - source_labels: [__meta_kubernetes_endpoints_name]
      regex: 'node-exporter'
      action: keep
In this config, we use the endpoints role so that Prometheus scrapes only the endpoints whose name matches node-exporter.
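If you edit the Prometheus config file by hand, it is worth validating it before triggering a reload. A minimal sketch, assuming the file is named prometheus.yml; promtool ships with the Prometheus release archive:

```
# Validate the syntax of the scrape config before reloading Prometheus
promtool check config prometheus.yml
```

If the file has an indentation or syntax error, promtool reports the offending line instead of letting Prometheus fail at reload time.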
See the Prometheus config map file I have created for the Kubernetes monitoring stack. It includes all the scrape configs for the Kubernetes components.
Once you add the scrape config to Prometheus, you will see the node-exporter targets in Prometheus, as shown below.
Querying Node-exporter Metrics in Prometheus
Once you verify the node-exporter target state in Prometheus, you can query the available node-exporter metrics from the Prometheus dashboard.
All the metrics from the node exporter are prefixed with node_.
You can query the metrics with different PromQL expressions. See querying basics to learn about PromQL queries.
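For instance, here are a few common node-exporter queries. These are standard PromQL patterns rather than anything specific to this setup:

```
# Average CPU usage per node over the last 5 minutes (percent)
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Available memory as a percentage of total memory
(node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100

# Free disk space on the root filesystem, in bytes
node_filesystem_avail_bytes{mountpoint="/"}
```

Each query returns one series per node (instance), so you can compare nodes side by side.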
If you type node_ in the Prometheus dashboard, it will list all the available metrics, as shown below.
Visualizing Prometheus Node Exporter Metrics as Grafana Dashboards
Visualizing the node exporter metrics on Grafana is not as difficult as you might think.
The community Grafana node exporter dashboard template comes with predefined panels for all the supported node exporter metrics.
You can modify the template as per your project requirements.
If you don’t know how to import a community template, please check my Grafana Prometheus integration article, where I have added the steps to import community dashboard templates.
So here is how the node-exporter Grafana dashboard looks for CPU/memory and disk statistics.
Once you have the dashboard, you will find the following sections. If you expand them, you will find all the metric panels.
NAME READY STATUS RESTARTS AGE
alertmanager-6d864b7cb9-b8lhs 1/1 Running 0 24h
node-exporter-7jhns 1/1 Running 0 28h
node-exporter-pv6hw 1/1 Running 0 28h
prometheus-deployment-5978c4f57-6szl4 1/1 Running 0 26h
kubectl get node
NAME STATUS ROLES AGE VERSION
vm243 Ready worker 33d v1.24.0
vm244 Ready control-plane 33d v1.24.0
vm245 Ready worker 33d v1.24.0
I have 3 nodes, but only 2 daemonset exporter pods were created.
How do you go about it when you need to get metrics from Kubernetes nodes through Prometheus?
Here I deployed the node exporter, and it brings me metrics from the endpoints it creates, but these values don’t match the values of the Kubernetes nodes.
For example, the metric node_memory_MemAvailable_bytes returns several results, one for each endpoint that the node exporter creates.
However, the value of the node_memory_MemAvailable_bytes metric is not equivalent to the MemAvailable of the nodes.
I’ve tried a few combinations of Prometheus SD config settings, to no avail.
I can only get the endpoint metrics, but that doesn’t work, because they don’t actually reflect the state of the node.
I can’t understand the image you’ve been using: “image: prom/node-exporter”. Where is this image from?
It is from Docker Hub.
I have followed your tutorial to install Prometheus and the node exporter without changing the config, and I don’t have the node exporter in Prometheus…
Can you help me, please?
I have tested the manifests again and it is working as expected… The targets show all the node-exporter endpoints.
Could you please check if your cluster has enough resources to run the node-exporter pods?
Here is the screenshot of the expected pod state.
Yes, I have the same! But in the Prometheus GUI it doesn’t appear…
For info, my cluster is k3s: the master is Debian 10, the other nodes are Debian 10, and I use Calico for networking. So, nothing in the Prometheus GUI… but I do see my node-exporter pods with kubectl get po -n monitoring.
Can you help me, please?
Can you please send a screenshot of the Prometheus Targets UI page?
Yes, I removed everything and redid everything like you did, and no list appears in my Prometheus GUI. When I run kubectl get all -A, I can see my two node-exporter pods (because I have two nodes), and Prometheus is also running.
Can you help me, please?
Hi Neridaar, you are checking under the Targets page, right? Here is the full Prometheus config map with the node-exporter configuration. Please verify it against your configuration: https://github.com/bibinwilson/kubernetes-prometheus/blob/master/config-map.yaml
Hi, thank you! But I don’t have the node exporter list in Prometheus after your configuration. In my Prometheus config map I have the target for the node-exporter list, and the state is “Down”…
- job_name: 'node-exporter'
Can you help me, please?
Did you set up the node exporter daemonset? If not, please follow this guide: https://devopscube.com/node-exporter-kubernetes/
Prometheus will auto-discover all the nodes with this config. You don’t have to manually specify the node IPs.