Docker, Prometheus, & Spring Boot: Quick Start

In this post, I’ll demo monitoring a Spring Boot app using Prometheus and Alertmanager as Docker containers. I’ll assume that you are already familiar with Spring Boot and Docker; if not, familiarize yourself with those first.

Prometheus can be bootstrapped several ways. You can run Prometheus as a standalone server, a Docker container, or inside of Kubernetes using kube-prometheus. Here, we will focus on getting Prometheus up and running using Docker.

Export Metrics

First, you need to export metrics from your Spring Boot app. To do that, add the following dependencies to your Spring project:


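The dependency list appears to have been lost from the original post. Given the annotations used below (which come from the Prometheus Java simpleclient, predating Micrometer), the Maven dependencies would look something like this — the version shown is an assumption, use whatever 0.x release is current:

```xml
<!-- Prometheus simpleclient integration for Spring Boot
     (version is illustrative) -->
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_spring_boot</artifactId>
    <version>0.1.0</version>
</dependency>
<!-- JVM metrics (memory, GC, threads) registered by DefaultExports.initialize() -->
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_hotspot</artifactId>
    <version>0.1.0</version>
</dependency>
```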
Next, add @EnablePrometheusEndpoint and @EnableSpringBootMetricsCollector to your main application class:

@SpringBootApplication
@EnablePrometheusEndpoint
@EnableSpringBootMetricsCollector
public class Application {
    public static void main(String[] args) {
        // Register the default JVM metrics (memory, GC, threads)
        DefaultExports.initialize();
        SpringApplication.run(Application.class, args);
    }
}

At this point, you should be able to run your Spring Boot app and hit the /prometheus endpoint to see metrics:
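The exact metrics will vary by app, but the response from GET /prometheus should look something like this (Prometheus text exposition format; the values below are made up):

```
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap",} 1.23456789E8
# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 42.0
```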

Running Prometheus Server

Now we can stand up Prometheus to start collecting metrics from the Spring Boot app. Let’s create a directory to hold our configuration file.

$ mkdir prometheus

Inside your prometheus directory, add the following prometheus.yml config file. Replace <YOUR_IP> with the IP address of your machine:

global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

# Load alerting rules from this file.
rule_files:
  - 'prometheus.rules.yml'

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['<YOUR_IP>:9090']

  - job_name: 'spring-boot'
    scrape_interval: 5s
    metrics_path: '/prometheus'

    static_configs:
      - targets: ['<YOUR_IP>:8080']
        labels:
          group: 'demo-api'

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['<YOUR_IP>:9093']

The config file sets up the Prometheus server to monitor two targets: the first is Prometheus itself, the second is our Spring Boot app. Notice the metrics_path:, which points to the /prometheus endpoint we configured earlier. Additionally, the alerting: configuration tells the Prometheus server where to push alerts. We will stand up the Alertmanager shortly.

Rules File

Next, create the rules file, prometheus.rules.yml, which describes the alerting rules used to notify the Alertmanager. You can create this file in the same directory as your prometheus.yml.

groups:
- name: demo
  rules:
  - alert: InstanceDown
    expr: up == 0
    for: 10s
    labels:
      severity: critical
    annotations:
      summary: "Instance {{ $labels.instance }} down"
      description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 10 seconds."

This is a simple rule that fires when a scrape target has been down (up == 0) for a given period, 10 seconds in this case. You can give this rule additional labels and annotations that can be used for routing or templating in the Alertmanager.
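To make the for: clause concrete, here’s a tiny Python sketch (my own simplified model, not actual Prometheus code) of how an alert moves from inactive to pending to firing:

```python
def alert_state(down_since, now, for_duration=10):
    """Simplified model of the alert lifecycle for `up == 0` with `for: 10s`.

    down_since -- timestamp when the target first scraped as down, or None if up
    now        -- current evaluation timestamp
    """
    if down_since is None:
        return "inactive"  # expression is false: nothing pending
    if now - down_since < for_duration:
        return "pending"   # expression true, but not yet for the full duration
    return "firing"        # held for >= 10s: Prometheus notifies the Alertmanager

# The target went down at t=90; at t=95 the alert is only pending,
# at t=101 it fires.
print(alert_state(90, 95))   # pending
print(alert_state(90, 101))  # firing
```

The point of the pending state is to avoid paging on a single failed scrape; only a sustained outage reaches the Alertmanager.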

Starting the Server

Now that we have our configs, we will start Prometheus using Docker. You can create a custom Dockerfile if you like, but I’ll just use a default image and mount our config directory as a volume.

$ docker run --rm -d -v ~/prometheus/:/etc/prometheus/ -p 9090:9090 prom/prometheus:v2.2.1

This Docker command starts Prometheus on localhost:9090. Navigate to http://localhost:9090 and you should see the Prometheus UI.

Click the Status dropdown and select Targets. Here you will see the endpoints Prometheus scrapes for metrics. You should see two: Prometheus itself and the Spring Boot endpoint.

Great! You now have Prometheus monitoring your app. Let’s move on to setting up the Alertmanager!

Alertmanager

Now, create an alertmanager directory for your Alertmanager’s config file.

$ cd ~/prometheus
$ mkdir alertmanager

Here, we will configure a default receiver to send alerts via Slack. The Alertmanager has several receiver options; if you don’t use Slack, see the Prometheus Alerting Docs for a receiver config that suits you. For Slack, you’ll just need a webhook URL, a username, and the channel to send the Slack message to. You can set up multiple receivers and configure routing based on the alert type or labels. For simplicity, we will just set up a default receiver. Save the following file as config.yml:

route:
  receiver: 'default-receiver'
  group_wait: 30s
  group_interval: 2m
  repeat_interval: 1h
  group_by: [alertname, job]

receivers:
- name: 'default-receiver'
  slack_configs:
  - send_resolved: true
    api_url: "<YOUR_WEBHOOK_URL>"
    username: "alertmanager"
    channel: "<YOUR_CHANNEL>"
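As an aside, routing by label would look something like the sketch below. The 'critical-receiver' name is hypothetical — you would define it under receivers: alongside the default one:

```yaml
# Hypothetical routing example: send severity=critical alerts to a
# dedicated receiver, everything else to the default.
route:
  receiver: 'default-receiver'
  routes:
  - match:
      severity: critical
    receiver: 'critical-receiver'
```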

Starting Alertmanager

Now we are set to get the Alertmanager running. Again, we are going to use Docker to run it. Make sure you mount the volume to the location where you created the alertmanager config directory.

$ docker run --rm -d -v ~/prometheus/alertmanager/:/etc/alertmanager -p 9093:9093 prom/alertmanager:v0.14.0
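If you prefer a single command, the two docker run invocations can be captured in a docker-compose.yml sketch like this (assuming the directory layout used in this post):

```yaml
# docker-compose.yml -- assumes prometheus.yml and prometheus.rules.yml are in
# the current directory and the Alertmanager config is in ./alertmanager
version: '3'
services:
  prometheus:
    image: prom/prometheus:v2.2.1
    volumes:
      - ./:/etc/prometheus/
    ports:
      - '9090:9090'
  alertmanager:
    image: prom/alertmanager:v0.14.0
    volumes:
      - ./alertmanager/:/etc/alertmanager/
    ports:
      - '9093:9093'
```

Run it with `docker-compose up -d` from the ~/prometheus directory.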

You should now have the Alertmanager running on http://localhost:9093.


Time to test our Slack notification. Let’s shut down our Spring Boot app and watch the Prometheus scrape fail. If all works, we should receive a Slack notification.

Success! If you didn’t receive an alert, make sure your Prometheus server is connected to your Alertmanager. You can check that by making a request to http://localhost:9090/api/v1/alertmanagers.
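If the connection is healthy, the response should list your Alertmanager under activeAlertmanagers, roughly like this:

```json
{
  "status": "success",
  "data": {
    "activeAlertmanagers": [
      { "url": "http://<YOUR_IP>:9093/api/v1/alerts" }
    ],
    "droppedAlertmanagers": []
  }
}
```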


You should now be able to run Prometheus and the Alertmanager using Docker. You can set up scrape targets and add receivers to get notifications based on routing rules. Play around a bit; there’s a lot more to discover with Prometheus!
