Prometheus and Node Exporter architecture
Question 1
Since it is Prometheus that decides the scraping interval, how can it be configured to scrape just those values?
You can have different jobs configured, each with its own scrape_interval and HTTP URL parameters (params). Beyond that, it depends on the features offered by the exporter. In the case of node_exporter, you can pass a list of collectors:
- cpu every 15s (job: node_cpu)
- process every 30s (job: node_process)
- ... (well, you get the idea)
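That setup can be sketched in prometheus.yml. The target hostnames below are placeholders; the `collect[]` URL parameter is how node_exporter lets a scrape be limited to specific collectors:

```yaml
scrape_configs:
  # Scrape only the CPU collector every 15s.
  - job_name: node_cpu
    scrape_interval: 15s
    params:
      collect[]: ['cpu']
    static_configs:
      - targets: ['node1.internal:9100']   # placeholder target

  # Scrape only process-related collectors every 30s.
  - job_name: node_process
    scrape_interval: 30s
    params:
      collect[]: ['processes']
    static_configs:
      - targets: ['node1.internal:9100']
```

Both jobs can point at the same exporter instance; only the metrics matching the requested collectors are returned for each scrape.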
Note that a scrape interval of 5 minutes is likely too long because of data staleness: you risk getting no data at all in an instant vector over that range. A scrape interval of 1 minute is already long, and has no noticeable impact on performance.
Question 2
How can I securely query the metrics from my 3 servers?
The original assumption of Prometheus is that you would use a private network. On a public network, you'll need some kind of proxy.
Personally, I have used exporter_exporter on a classic architecture.
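If a dedicated proxy feels like overkill, another common option is TLS plus basic auth between Prometheus and the exporters. A minimal sketch of the Prometheus side, assuming node_exporter has been set up to serve HTTPS with credentials (hostnames and file paths here are placeholders):

```yaml
scrape_configs:
  - job_name: node
    scheme: https
    tls_config:
      # CA that signed the exporters' certificates (placeholder path).
      ca_file: /etc/prometheus/node_ca.pem
    basic_auth:
      username: prometheus
      password_file: /etc/prometheus/node_password   # placeholder path
    static_configs:
      - targets:
          - 'server1.example.com:9100'
          - 'server2.example.com:9100'
          - 'server3.example.com:9100'
```

With this in place, only port 9100 needs to be reachable from the Prometheus host, and the traffic is encrypted and authenticated.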
Question 3
Can node-exporter be configured to send a set of metrics to the Prometheus server every X seconds? (That way I wouldn't have to expose a public port on every server, just on the Prometheus server.) I understand the "pushgateway" is for that? How do I change node-exporter's behavior?
No, Prometheus is a pull-based architecture: you will need a URI accessible by Prometheus on each service you want to monitor. I imagine you could reuse components from another monitoring solution and use an ad hoc exporter, like the collectd exporter.
The Pushgateway is intended for short-lived jobs that cannot wait to be scraped by Prometheus. That is a specific use case, and the general consensus is not to abuse it.
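For completeness, this is what that flow looks like for a genuinely short-lived job such as a nightly backup. The hostname and metric name are placeholders; the payload follows the Prometheus text exposition format, and Prometheus then scrapes the Pushgateway (not the batch job) on its normal schedule:

```shell
# At the end of the batch job, push a completion timestamp
# under the job label "backup".
echo "backup_last_success_timestamp_seconds $(date +%s)" \
  | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/backup
```

Note this still does not turn node_exporter into a push agent; it only covers jobs that exit before a scrape could happen.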