Prometheus Pushgateway
The Prometheus Pushgateway is a component for collecting metrics from short-lived jobs or services that cannot be scraped directly by Prometheus. It acts as an intermediary, allowing these ephemeral applications to push their metrics to a central endpoint that Prometheus then scrapes. This guide provides instructions on how to set up and use the Pushgateway effectively.
Start Pushgateway
The easiest way to get started with Pushgateway is by running it using Docker. This ensures a consistent and isolated environment for the service.
$ docker run -it -p 9091:9091 prom/pushgateway
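Once the container is running, it is worth confirming that the service is reachable before pushing anything. The short Python check below is a minimal sketch; it assumes the requests library is installed and that the Pushgateway is listening on localhost:9091 (recent Pushgateway releases also expose /-/ready and /-/healthy endpoints for this purpose).
import requests

# Ask the Pushgateway whether it is ready to accept pushes.
# The /-/ready endpoint is available on recent Pushgateway releases.
resp = requests.get('http://localhost:9091/-/ready', timeout=5)
print('Pushgateway ready:', resp.status_code == 200)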
Producing Metrics to Pushgateway
Applications need to send their metrics to the Pushgateway. This can be achieved using various tools and programming languages. Below are examples using curl and Python.
Using curl
For simple metric pushes, curl is a convenient command-line tool.
$ duration=0.01
$ echo "job_duration_seconds $duration" | curl --data-binary @- http://localhost:9091/metrics/job/mysqldump/instance/db01
Pushing multiline metrics with curl requires careful formatting of the input data.
$ cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/mysqldump/instance/db01
job_duration_seconds{instance="db01",job="mysqldump"} 0.02
job_exit_code_status{instance="db01",job="mysqldump"} 0
EOF
Using Python
Python offers more flexibility for programmatic metric pushing. The requests library is commonly used for HTTP communication.
import requests

# Push a single metric; the request body must end with a newline.
requests.post('http://192.168.0.4:9091/metrics/job/mysqldump/instance/db01', data='job_exit_code_status 0.04\n')
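The same approach works for several metrics in one request; the sketch below mirrors the multiline curl push shown earlier and checks that the Pushgateway accepted the data. It reuses the localhost:9091 address and the metric names from this guide.
import requests

# Push several metrics in one body, one metric per line, terminated by a newline.
body = (
    'job_duration_seconds 0.02\n'
    'job_exit_code_status 0\n'
)
resp = requests.post('http://localhost:9091/metrics/job/mysqldump/instance/db01', data=body)
resp.raise_for_status()  # raises if the Pushgateway rejected the push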
Using the Prometheus Python Client
For a more integrated approach within Python applications, the official prometheus_client library provides dedicated functions for pushing metrics; refer to its documentation for detailed usage. A basic example:
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
g = Gauge('job_duration_seconds', 'The time it takes for the job to run', registry=registry)
g.set(0.05)  # Alternatively: g.inc(2) / g.dec(2)
# The job name is passed separately; extra labels such as instance and
# process become part of the push's grouping key.
push_to_gateway('localhost:9091', job='mysqldump',
                grouping_key={'instance': 'db01', 'process': 'script.py'},
                registry=registry)
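Putting the pieces together, a hypothetical end-to-end sketch might time the actual work with the gauge's time() context manager and record an exit code before a single push. The run_backup() function here is a stand-in for whatever the job really does and is not part of any library.
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
duration = Gauge('job_duration_seconds', 'The time it takes for the job to run', registry=registry)
exit_code = Gauge('job_exit_code_status', 'Exit code of the last run', registry=registry)

def run_backup():
    # Hypothetical stand-in for the real job (e.g. invoking mysqldump).
    pass

with duration.time():  # sets the gauge to the elapsed time when the block exits
    try:
        run_backup()
        exit_code.set(0)
    except Exception:
        exit_code.set(1)

push_to_gateway('localhost:9091', job='mysqldump',
                grouping_key={'instance': 'db01'}, registry=registry)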
Reading Metrics from Pushgateway
Once metrics are pushed, they can be accessed via the Pushgateway's metrics endpoint. Prometheus can then be configured to scrape this endpoint.
Accessing the Metrics Endpoint
You can retrieve the exposed metrics using curl.
$ curl http://localhost:9091/metrics
# TYPE job_duration_seconds untyped
job_duration_seconds{instance="db01",job="mysqldump",process="script.py"} 0.02
# TYPE job_exit_code_status untyped
job_exit_code_status{instance="db01",job="mysqldump",process="script.py"} 0
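To inspect the exposed metrics programmatically rather than with curl, a small script can fetch the endpoint and filter for a given job. This is a minimal sketch that assumes the requests library and the localhost:9091 address used above.
import requests

# Fetch the text exposition format and print only the mysqldump job's series.
resp = requests.get('http://localhost:9091/metrics')
for line in resp.text.splitlines():
    if 'job="mysqldump"' in line:
        print(line)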
Prometheus Static Scrape Configuration
To integrate the Pushgateway with Prometheus, configure Prometheus to scrape the Pushgateway's metrics endpoint. Setting honor_labels: true preserves the job and instance labels that were pushed, rather than overwriting them with the scrape target's own labels. Here's an example of a static scrape configuration:
scrape_configs:
  - job_name: 'pushgateway-exporter'
    scrape_interval: 15s
    honor_labels: true
    static_configs:
      - targets: ['192.168.0.10:9091']
        labels:
          instance: node1
          region: eu-west-1
      - targets: ['192.168.1.10:9091']
        labels:
          instance: node1
          region: eu-west-2
By implementing these steps, you can effectively leverage the Prometheus Pushgateway to manage metrics from dynamic and ephemeral sources within your monitoring infrastructure.