This Prometheus Kubernetes tutorial will guide you through setting up Prometheus on a Kubernetes cluster to monitor the cluster itself.
This setup collects node, pod, and service metrics automatically using Prometheus service discovery configurations.
About Prometheus
Prometheus is a highly scalable open-source monitoring framework. It provides out-of-the-box monitoring capabilities for the Kubernetes container orchestration platform. In the observability space, it is gaining huge popularity as it helps with metrics and alerts.
Explaining Prometheus is beyond the scope of this article. If you want to know more about Prometheus, you can watch all the Prometheus-related videos from here.
However, there are a few key points I would like to list for your reference.
- Metric Collection: Prometheus uses the pull model to retrieve metrics over HTTP. There is an option to push metrics to Prometheus using Pushgateway for use cases where Prometheus cannot scrape the metrics. One such example is collecting custom metrics from short-lived Kubernetes Jobs & CronJobs.
- Metric Endpoint: The systems that you want to monitor using Prometheus should expose the metrics on a /metrics endpoint. Prometheus uses this endpoint to pull the metrics at regular intervals.
- PromQL: Prometheus comes with PromQL, a very flexible query language that can be used to query the metrics in the Prometheus dashboard. PromQL queries are also used by the Prometheus UI and Grafana to visualize metrics.
- Prometheus Exporters: Exporters are libraries that convert existing metrics from third-party apps to the Prometheus metrics format. There are many official and community Prometheus exporters. One example is the Prometheus node exporter, which exposes all Linux system-level metrics in Prometheus format.
- TSDB (time-series database): Prometheus uses TSDB for storing all the data efficiently. By default, all the data gets stored locally. However, to avoid a single point of failure, there are options to integrate remote storage for the Prometheus TSDB.
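To make the metric endpoint and PromQL points concrete, here is what a /metrics endpoint typically returns (node_cpu_seconds_total is a standard node exporter metric; the values are illustrative):

# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 362812.7
node_cpu_seconds_total{cpu="0",mode="system"} 1345.2

A PromQL query such as rate(node_cpu_seconds_total{mode="system"}[5m]) then gives the per-second rate of system CPU time over the last five minutes.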
If you would like to install Prometheus on a Linux VM, please see the Prometheus on Linux guide.
Prometheus Architecture
Here is the high-level architecture of Prometheus. If you want to understand Prometheus components in detail, please read the detailed Prometheus Architecture blog.
The Kubernetes Prometheus monitoring stack has the following components.
- Prometheus Server
- Alert Manager
- Grafana
In a nutshell, the following image depicts the high-level Kubernetes Prometheus architecture that we are going to build. We have separate blogs for each component setup.
Note: The Linux Foundation has a Prometheus Certified Associate (PCA) certification exam. The PCA focuses on showcasing skills related to observability and the open-source monitoring and alerting toolkit. Use code DCUBE20 today to get an instant discount on the certification.
Prometheus Monitoring Setup on Kubernetes
I assume that you have a Kubernetes cluster up and running with kubectl set up on your workstation.
Note: If you don't have a Kubernetes setup, you can set up a cluster on Google Cloud, or use the minikube setup, a Vagrant automated setup, or the EKS cluster setup.
The latest Prometheus is available as a Docker image in its official Docker Hub account. We will use that image for the setup.
Connect to the Kubernetes Cluster
Connect to your Kubernetes cluster and make sure you have admin privileges to create cluster roles.
Only for GKE: If you are using Google Cloud GKE, you need to run the following commands, as you need privileges to create cluster roles for this Prometheus setup.
ACCOUNT=$(gcloud info --format='value(config.account)')
kubectl create clusterrolebinding owner-cluster-admin-binding \
--clusterrole cluster-admin \
--user $ACCOUNT
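Whether or not you are on GKE, you can confirm that you have the required privileges before proceeding; kubectl auth can-i is a standard kubectl subcommand:

kubectl auth can-i create clusterrole
kubectl auth can-i create clusterrolebinding

Both commands should print yes. If they print no, ask your cluster administrator for the necessary permissions.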
Prometheus Kubernetes Manifest Files
All the configuration files I mentioned in this guide are hosted on GitHub. You can clone the repo using the following command.
git clone https://github.com/techiescamp/kubernetes-prometheus
Thanks to James for contributing to this repo. Please don't hesitate to contribute to the repo to add features.
You can use the GitHub repo config files or create the files on the go for a better understanding, as mentioned in the steps.
Let’s get started with the setup.
Create a Namespace & ClusterRole
First, we will create a Kubernetes namespace for all our monitoring components. If you don't create a dedicated namespace, all the Prometheus Kubernetes deployment objects get deployed in the default namespace.
Execute the following command to create a new namespace named monitoring.
kubectl create namespace monitoring
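If you prefer to keep everything declarative, the same namespace can be created from a manifest instead (saved, for example, as namespace.yaml; the file name is just a convention):

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring

Apply it with kubectl apply -f namespace.yaml.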
Prometheus uses Kubernetes APIs to read all the available metrics from Nodes, Pods, Deployments, etc. For this reason, we need to create an RBAC policy with read access to the required API groups and bind the policy to the monitoring namespace.
Step 1: Create a file named clusterRole.yaml and copy the following RBAC role.
Note: In the role given below, you can see that we have added get, list, and watch permissions to nodes, services, endpoints, pods, and ingresses (for ingresses, both the legacy extensions and the current networking.k8s.io API groups). The role binding is bound to the monitoring namespace. If you have any use case to retrieve metrics from any other object, you need to add that to this cluster role.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - nodes/proxy
      - services
      - endpoints
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: default
    namespace: monitoring
Step 2: Create the role using the following command.
kubectl create -f clusterRole.yaml
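You can verify that the role and binding exist before moving on:

kubectl get clusterrole prometheus
kubectl get clusterrolebinding prometheus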
Create a Config Map To Externalize Prometheus Configurations
All configurations for Prometheus are part of the prometheus.yaml file, and all the alert rules for Alertmanager are configured in prometheus.rules.

- prometheus.yaml: This is the main Prometheus configuration, which holds all the scrape configs, service discovery details, storage locations, data retention configs, etc.
- prometheus.rules: This file contains all the Prometheus alerting rules.
By externalizing Prometheus configs to a Kubernetes config map, you don’t have to build the Prometheus image whenever you need to add or remove a configuration. You need to update the config map and restart the Prometheus pods to apply the new configuration.
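A typical update cycle, using the file and deployment names from the steps below, looks like this (kubectl rollout restart is available in kubectl 1.15 and later):

kubectl apply -f config-map.yaml
kubectl rollout restart deployment prometheus-deployment -n monitoring

Alternatively, Prometheus can reload its configuration without a restart through an HTTP POST to its /-/reload endpoint, provided you start it with the --web.enable-lifecycle flag.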
The config map with all the Prometheus scrape config and alerting rules gets mounted to the Prometheus container in the /etc/prometheus location as the prometheus.yaml and prometheus.rules files.
Step 1: Create a file called config-map.yaml and copy the file contents from this link ==> Prometheus Config File.
Step 2: Execute the following command to create the config map in Kubernetes.
kubectl create -f config-map.yaml
It creates two files inside the container.
Note: In Prometheus terms, the config for collecting metrics from a collection of endpoints is called a job.
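To illustrate, here is what one job definition looks like. This excerpt follows the style of the official Prometheus Kubernetes example configuration; refer to the linked config file for the exact contents used in this setup:

scrape_configs:
  - job_name: 'kubernetes-apiservers'
    kubernetes_sd_configs:
      - role: endpoints
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https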
The prometheus.yaml file contains all the configurations to dynamically discover pods and services running in the Kubernetes cluster. We have the following scrape jobs in our Prometheus scrape configuration.

- kubernetes-apiservers: Gets all the metrics from the API servers.
- kubernetes-nodes: Collects all the Kubernetes node metrics.
- kubernetes-pods: All pod metrics get discovered if the pod metadata is annotated with the prometheus.io/scrape and prometheus.io/port annotations, as shown in the example after this list.
- kubernetes-cadvisor: Collects all cAdvisor metrics.
- kubernetes-service-endpoints: All service endpoints are scraped if the service metadata is annotated with the prometheus.io/scrape and prometheus.io/port annotations. It can be used for black-box monitoring.
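For example, a pod that you want the kubernetes-pods job to discover would carry annotations like these (the app name my-app and port 8080 are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
spec:
  containers:
    - name: my-app
      image: my-app:latest
      ports:
        - containerPort: 8080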
prometheus.rules contains all the alert rules for sending alerts to the Alertmanager.
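For context, a minimal rule in prometheus.rules might look like the following; the alert name, threshold, and labels are illustrative:

groups:
  - name: example-alerts
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} has been down for more than 5 minutes"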
Create a Prometheus Deployment
Step 1: Create a file named prometheus-deployment.yaml and copy the following contents into the file. In this configuration, we mount the Prometheus config map as a file inside /etc/prometheus, as explained in the previous section.
Note: This deployment uses the latest official Prometheus image from Docker Hub. Also, we are not using any persistent storage volumes for Prometheus storage, as this is a basic setup. When setting up Prometheus for production use cases, make sure you add persistent storage to the deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--storage.tsdb.retention.time=12h"
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          resources:
            requests:
              cpu: 500m
              memory: 500M
            limits:
              cpu: 1
              memory: 1Gi
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
Step 2: Create a deployment in the monitoring namespace using the above file.
kubectl create -f prometheus-deployment.yaml
Step 3: You can check the created deployment using the following command.
kubectl get deployments --namespace=monitoring
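The output should show the deployment as ready and available, along these lines (the age will differ):

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
prometheus-deployment   1/1     1            1           2m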
You can also get the deployment details from the Kubernetes dashboard, as shown below.
Connecting To Prometheus Dashboard
You can view the deployed Prometheus dashboard in three different ways.
- Using Kubectl port forwarding
- Exposing the Prometheus deployment as a service with NodePort or a Load Balancer.
- Adding an Ingress object if you have an Ingress controller deployed.
Let’s have a look at all three options.
Method 1: Using Kubectl port forwarding
Using kubectl port forwarding, you can access a pod from your local workstation using a selected port on your localhost. This method is primarily used for debugging purposes.
Step 1: First, get the Prometheus pod name.
kubectl get pods --namespace=monitoring
The output will look like the following.
➜ kubectl get pods --namespace=monitoring
NAME                                     READY   STATUS    RESTARTS   AGE
prometheus-monitoring-3331088907-hm5n1   1/1     Running   0          5m
Step 2: Execute the following command with your pod name to access Prometheus from localhost port 8080.
Note: Replace prometheus-monitoring-3331088907-hm5n1 with your pod name.
kubectl port-forward prometheus-monitoring-3331088907-hm5n1 8080:9090 -n monitoring
Step 3: Now, if you access http://localhost:8080 on your browser, you will get the Prometheus home page.
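Since pod names change on every rollout, you can also port-forward by deployment name instead of pod name; kubectl picks a matching pod for you:

kubectl port-forward deployment/prometheus-deployment 8080:9090 -n monitoring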
Method 2: Exposing Prometheus as a Service [NodePort & LoadBalancer]
To access the Prometheus dashboard over an IP or a DNS name, you need to expose it as a Kubernetes service.
Step 1: Create a file named prometheus-service.yaml and copy the following contents. We will expose Prometheus on all Kubernetes node IPs on port 30000.
Note: If you are on AWS, Azure, or Google Cloud, you can use the LoadBalancer type, which will create a load balancer and automatically point it to the Kubernetes service endpoint.
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9090'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30000
The annotations in the above service YAML make sure that the service endpoint is scraped by Prometheus. The prometheus.io/port annotation should always be the target port mentioned in the service YAML.
Step 2: Create the service using the following command.
kubectl create -f prometheus-service.yaml --namespace=monitoring
Step 3: Once created, you can access the Prometheus dashboard using any Kubernetes node IP on port 30000. If you are on the cloud, make sure you have the right firewall rules to access port 30000 from your workstation.
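You can list the node IPs with the command below; use the EXTERNAL-IP column (or INTERNAL-IP, if your workstation is inside the cluster network) together with port 30000:

kubectl get nodes -o wide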
Step 4: Now, if you browse to Status --> Targets, you will see all the Kubernetes endpoints connected to Prometheus automatically using service discovery, as shown below.
The kube-state-metrics target showing as down is expected, and I'll discuss it shortly.
Step 5: You can head over to the homepage, select the metrics you need from the drop-down, and get the graph for the time range you specify. An example graph for container_cpu_usage_seconds_total is shown below.
Method 3: Exposing Prometheus Using Ingress
If you have an existing ingress controller setup, you can create an ingress object to route the Prometheus DNS to the Prometheus backend service.
Also, you can add SSL for Prometheus in the ingress layer. You can refer to the Kubernetes ingress TLS/SSL Certificate guide for more details.
Here is a sample ingress object (it uses the networking.k8s.io/v1 Ingress API, as the older extensions/v1beta1 API has been removed from recent Kubernetes versions). Please refer to this GitHub link for a sample ingress object with SSL.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ui
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
    # Use the host you used in your Kubernetes ingress configurations
    - host: prometheus.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus-service
                port:
                  number: 8080
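Once your ingress controller has picked up the object, you can test the routing from your workstation by overriding the Host header. Replace INGRESS_EXTERNAL_IP with your ingress controller's external IP; it is a placeholder here:

curl -H "Host: prometheus.example.com" http://INGRESS_EXTERNAL_IP/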
Setting Up Kube State Metrics
The Kube State Metrics service provides many metrics that are not available by default. Please make sure you deploy Kube State Metrics to monitor all your Kubernetes API objects like deployments, pods, jobs, cronjobs, etc.
Please follow this article to set up Kube State Metrics on Kubernetes ==> How To Setup Kube State Metrics on Kubernetes
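Once kube-state-metrics is running, object-level metrics become queryable in Prometheus. For example, this query (using standard kube-state-metrics metric names) counts pods stuck in the Pending phase per namespace:

sum by (namespace) (kube_pod_status_phase{phase="Pending"})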
Setting Up Alertmanager
Alertmanager handles all the alerting mechanisms for Prometheus metrics. There are many integrations available to receive alerts from the Alertmanager (Slack, email, API endpoints, etc.).
I have covered the Alert Manager setup in a separate article. Please follow ==> Alert Manager Setup on Kubernetes
Setting Up Grafana
Using Grafana, you can create dashboards from Prometheus metrics to monitor the Kubernetes cluster.
The best part is, you don’t have to write all the PromQL queries for the dashboards.
There are many community dashboard templates available for Kubernetes. You can import them and modify them as per your needs. I have covered this in the Grafana article linked below.
Please follow this article for the Grafana setup ==> How To Setup Grafana On Kubernetes
Setting Up Node Exporter
Node Exporter will provide all the Linux system-level metrics of all Kubernetes nodes.
I have written a separate step-by-step guide on the node-exporter DaemonSet deployment. Please follow Setting up Node Exporter on Kubernetes.
The scrape config for node-exporter is part of the Prometheus config map. Once you deploy the node-exporter, you should see node-exporter targets and metrics in Prometheus.
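Once the node-exporter targets are up, you can sanity-check the metrics with a PromQL query. For example, this common idiom approximates per-node CPU utilization as a percentage:

100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)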
Prometheus Production Setup Considerations
For the production Prometheus setup, there are more configurations and parameters that need to be considered for scaling, high availability, and storage. It all depends on your environment and data volume.
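One typical first step is to replace the emptyDir volume used in the deployment above with a PersistentVolumeClaim, along these lines (the claim name, size, and reliance on a default storage class are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

In the deployment, the storage volume entry would then point at the claim:

volumes:
  - name: prometheus-storage-volume
    persistentVolumeClaim:
      claimName: prometheus-pvc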
For example, the Prometheus Operator project makes it easy to automate Prometheus setup and its configurations.
If you have multiple production clusters, you can use the CNCF project Thanos to aggregate metrics from multiple Kubernetes Prometheus sources.
Thanos provides features like multi-tenancy, horizontal scalability, and disaster recovery, making it possible to operate Prometheus at scale with high availability.
With Thanos, you can query data from multiple Prometheus instances running in different kubernetes clusters in a single place, making it easier to aggregate metrics and run complex queries.
Additionally, Thanos can store Prometheus data in an object storage backend, such as Amazon S3 or Google Cloud Storage, which provides an efficient and cost-effective way to retain long-term metric data.
Conclusion
In this comprehensive Prometheus Kubernetes tutorial, I have covered the setup of important monitoring components to understand Kubernetes monitoring.
In the next blog, I will cover the Prometheus setup using Helm charts. We will have the entire monitoring stack under one Helm chart.
Also, If you are learning Kubernetes, you can check out my Kubernetes beginner tutorials where I have 40+ comprehensive guides.
Let me know what you think about the Prometheus monitoring setup by leaving a comment.
You can also use this setup to prepare for the Prometheus Certified Associate Certification.