How to Set Up Prometheus Node Exporter on Kubernetes


If you want to know how your Kubernetes nodes perform, or need system-level insights into them, you need to set up Prometheus Node Exporter on the Kubernetes cluster.

This guide walks you through setting up node-exporter on a Kubernetes cluster and adding the Prometheus scrape config needed to scrape the node metrics.

It is also an important topic in the Prometheus Certified Associate certification.

What is Prometheus Node Exporter?

Node exporter is an official Prometheus exporter for capturing all the Linux system-related metrics.

It collects all the hardware and Operating System level metrics that are exposed by the kernel.

You can use node-exporter to collect system metrics from all your Linux systems. Check this article on node monitoring using node-exporter.

Why do we need Node Exporter on Kubernetes?

By default, most Kubernetes clusters expose Metrics Server metrics (cluster-level metrics from the Summary API) and cAdvisor metrics (container-level metrics). Neither provides detailed node-level system metrics.

To get all the Kubernetes node-level system metrics, you need node-exporter running on every Kubernetes node. It collects all the Linux system metrics and exposes them via the /metrics endpoint on port 9100.
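For reference, the /metrics endpoint serves the plain-text Prometheus exposition format. A few representative lines are shown below; the metric names are real node-exporter metrics, but the values here are made up for illustration:

```
# HELP node_memory_MemAvailable_bytes Memory information field MemAvailable_bytes.
# TYPE node_memory_MemAvailable_bytes gauge
node_memory_MemAvailable_bytes 8.123456e+09
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 123456.78
```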

Similarly, you need to install kube-state-metrics to get the metrics related to Kubernetes objects.

Kubernetes Manifests

The Kubernetes manifests used in this guide are available in the GitHub repository. Clone the repo to your local system.

git clone

Setup Node Exporter on Kubernetes

Note: If you don’t have Prometheus set up, please follow my guide on setting up Prometheus on Kubernetes.

Here is what we are going to do.

  1. Deploy node-exporter on all the Kubernetes nodes as a DaemonSet. The DaemonSet makes sure one node-exporter pod runs on every node. It exposes all the node metrics on port 9100 at the /metrics endpoint.
  2. Create a service that listens on port 9100 and points to all the DaemonSet node-exporter pods. We will monitor the service endpoints (the node-exporter pods) from Prometheus using an endpoints job config. More on this in the Prometheus config part.

Let’s get started with the setup.

Step 1: Create a file named daemonset.yaml and copy the following contents.

Note: This DaemonSet will be deployed in the monitoring namespace. If you wish to deploy it in a different namespace, change it in the following YAML.
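If the monitoring namespace doesn’t exist yet, create it first, either with kubectl create namespace monitoring or by applying a minimal manifest:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
```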

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: node-exporter
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: node-exporter
  template:
    metadata:
      labels:
        app.kubernetes.io/component: exporter
        app.kubernetes.io/name: node-exporter
    spec:
      containers:
      - args:
        - --path.sysfs=/host/sys
        - --path.rootfs=/host/root
        - --no-collector.wifi
        - --no-collector.hwmon
        - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
        - --collector.netclass.ignored-devices=^(veth.*)$
        name: node-exporter
        image: prom/node-exporter
        ports:
          - containerPort: 9100
            protocol: TCP
        resources:
          limits:
            cpu: 250m
            memory: 180Mi
          requests:
            cpu: 102m
            memory: 180Mi
        volumeMounts:
        - mountPath: /host/sys
          mountPropagation: HostToContainer
          name: sys
          readOnly: true
        - mountPath: /host/root
          mountPropagation: HostToContainer
          name: root
          readOnly: true
      volumes:
      - hostPath:
          path: /sys
        name: sys
      - hostPath:
          path: /
        name: root
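Note that DaemonSet pods are not scheduled on tainted nodes (for example, control-plane nodes on some distributions), so you may see fewer node-exporter pods than nodes. If you also want metrics from those nodes, you can add a broad toleration under spec.template.spec; a sketch:

```yaml
      tolerations:
      - operator: Exists
        effect: NoSchedule
```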

Step 2: Deploy the DaemonSet using the kubectl command.

kubectl create -f daemonset.yaml

Step 3: List the DaemonSet in the monitoring namespace and make sure it is in the available state.

kubectl get daemonset -n monitoring

Step 4: Create a file named service.yaml and copy the following contents.

kind: Service
apiVersion: v1
metadata:
  name: node-exporter
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9100'
spec:
  selector:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: node-exporter
  ports:
  - name: node-exporter
    protocol: TCP
    port: 9100
    targetPort: 9100

Step 5: Create the service.

kubectl create -f service.yaml

Step 6: Now, check the service’s endpoints and confirm that it points to all the DaemonSet pods.

kubectl get endpoints -n monitoring 
Prometheus node exporter daemonset

As you can see from the above output, the node-exporter service has three endpoints, meaning three node-exporter pods are running on three nodes as part of the DaemonSet.

Node-exporter Prometheus Config

We have the node-exporter daemonset running on port 9100 and a service pointing to all the node-exporter pods.

You need to add a scrape config to the Prometheus config file to discover all the node-exporter pods.

Let’s take a look at the Prometheus scrape config required to scrape the node-exporter metrics.

      - job_name: 'node-exporter'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
        - source_labels: [__meta_kubernetes_endpoints_name]
          regex: 'node-exporter'
          action: keep

In this config, we set the role to endpoints so that Prometheus discovers all endpoint objects, and use a relabeling rule to keep only the endpoints named node-exporter.
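If you want the targets to carry the Kubernetes node name rather than just the pod IP, you can add one more relabeling rule to the same job; the __meta_kubernetes_endpoint_node_name meta label is provided by the endpoints role:

```yaml
        - source_labels: [__meta_kubernetes_endpoint_node_name]
          target_label: node
          action: replace
```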

See the Prometheus config map file I have created for the Kubernetes monitoring stack. It includes all the scrape configs for the Kubernetes components.

Once you add the scrape config to Prometheus, you will see the node-exporter targets in Prometheus, as shown below.

Node exporter target state in prometheus

Querying Node-exporter Metrics in Prometheus

Once you verify the node-exporter target state in Prometheus, you can query the available node-exporter metrics from the Prometheus dashboard.

All the metrics from node-exporter are prefixed with node_.

You can query the metrics with different PromQL expressions. See querying basics to learn about PromQL queries.
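For example, here are a few common node-exporter queries. The metric names are standard; the 5-minute range is just an illustrative choice:

```
# Available memory per node
node_memory_MemAvailable_bytes

# CPU utilization percentage per instance, averaged over 5 minutes
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Free disk space on the root filesystem
node_filesystem_avail_bytes{mountpoint="/"}
```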

If you type node_ in the Prometheus dashboard, it will list all the available metrics as shown below.

Querying node exporter metrics from Prometheus dashboard

Visualizing Prometheus Node Exporter Metrics as Grafana Dashboards

Visualizing the node-exporter metrics on Grafana is not as difficult as you might think.

The community Grafana node exporter dashboard template comes with predefined panels for all the supported node-exporter metrics.

You can modify the template as per your project requirements.

If you don’t know how to import a community template, please check my Grafana Prometheus integration article, where I have added the steps to import community dashboard templates.

Here is how the node-exporter Grafana dashboard looks for CPU, memory, and disk statistics.

node exporter grafana dashboard

Once you have the dashboard, you will find the following sections. If you expand them, you will find all the metric panels.

node exporter grafana metric panels

More References

  1. Official Node Exporter GitHub repository
  2. Prometheus Linux host metrics guide
  3. Prometheus Exporters