
How To Install Alertmanager to Alert Based on Metrics From Prometheus

Updated: Oct 3, 2021

Previously, we posted "How to install and configure Prometheus & Grafana in Red Hat Enterprise Linux 7". Now, let's install and configure Alertmanager to send alerts based on metrics from Prometheus. Let's start the configuration.

Set up Alertmanager:

Step:1 Download the Alertmanager binaries for Linux from the Prometheus Download Page.

# wget 

::::::::::::: CUT SOME OUTPUT :::::::::::::

100%[========================================================================================================>] 25,710,888   174KB/s   in 2m 32s 

2020-07-27 13:31:05 (166 KB/s) - ‘alertmanager-0.21.0.linux-amd64.tar.gz’ saved [25710888/25710888]

Step:2 Prepare the prerequisite configuration for Alertmanager by creating a dedicated service user.

# useradd --no-create-home --shell /bin/false alertmanager

Step:3 Extract the downloaded tarball and install the alertmanager binaries.

# tar -xvf alertmanager-0.21.0.linux-amd64.tar.gz

# cp alertmanager-0.21.0.linux-amd64/alertmanager /usr/local/bin/
# cp alertmanager-0.21.0.linux-amd64/amtool /usr/local/bin/

# chown alertmanager:alertmanager /usr/local/bin/alertmanager
# chown alertmanager:alertmanager /usr/local/bin/amtool

Step:4 Create the alertmanager directory and write the global Alertmanager configuration:

# mkdir /etc/alertmanager
# vim /etc/alertmanager/alertmanager.yml
global:
  smtp_smarthost: 'localhost:25'
  smtp_from: 'AlertManager <>'
  # My SMTP server does not require authentication,
  # but we still need to set the attributes below.
  smtp_require_tls: false
  smtp_hello: 'alertmanager'
  smtp_auth_username: 'username'
  smtp_auth_password: 'password'
  slack_api_url: ''

route:
  group_by: ['instance', 'alert']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: team-1

receivers:
  - name: 'team-1'
    email_configs:
      - to: 'root@localhost'
    slack_configs:
      - channel: '#ansible'
        username: 'AlertManager'
        icon_emoji: ':joy:'

Note: In my previous post, we created the Slack workspace and the webhook; see the details in "How to create a new workspace and setup a Slack Webhook for Sending Messages From Applications".

Step:5 Create a systemd unit file for alertmanager.

# vim /usr/lib/systemd/system/alertmanager.service

ExecStart=/usr/local/bin/alertmanager --config.file=/etc/alertmanager/alertmanager.yml --web.external-url=http://<alertmanager_ip_address>:9093


Note: replace <alertmanager_ip_address> with the Alertmanager server's IP address.
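The post shows only the ExecStart line; for reference, a complete unit file for this setup would typically look like the sketch below. The [Unit], [Service], and [Install] sections are standard systemd boilerplate, not taken from the original post:

```ini
[Unit]
Description=Prometheus Alertmanager
Wants=network-online.target
After=network-online.target

[Service]
# Run as the unprivileged user created in Step:2
User=alertmanager
Group=alertmanager
Type=simple
ExecStart=/usr/local/bin/alertmanager \
    --config.file=/etc/alertmanager/alertmanager.yml \
    --web.external-url=http://<alertmanager_ip_address>:9093

[Install]
WantedBy=multi-user.target
```

After editing a unit file, run `systemctl daemon-reload` so systemd picks up the change.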

Step:6 Start and enable the alertmanager service.

# systemctl enable --now alertmanager.service
Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/alertmanager.service.
# systemctl status alertmanager.service

Step:7 Verify that the alertmanager service is running from the Alertmanager user interface (http://<alertmanager_ip_address>:9093)

Okay, the alertmanager service is running...

Change the required configuration in Prometheus:

Step:1 Add (enable) the Alertmanager configuration in Prometheus:

# vim /etc/prometheus/prometheus.yml 
::::::::::::: CUT SOME OUTPUT :::::::::::::
# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9093
#           - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - first_rules.yml
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
::::::::::::: CUT SOME OUTPUT :::::::::::::

Note: we are going to create first_rules.yml in the next steps.

Step:2 Restart the Prometheus service so it picks up the change, then verify the running instances from the Prometheus user interface (http://<prometheus_ip_address>:9090)

We need to create a rules file that specifies the conditions under which we would like to be alerted. Let's say we want an alert when an instance goes down.

As we have seen, all running instances have a value of 1 for the `up` metric, while instances that are currently not running have a value of 0.
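Based on the `up` metric described above, a minimal first_rules.yml might look like the following sketch. The alert name, `for` duration, severity label, and annotation text are illustrative choices, not taken from the original post:

```yaml
groups:
  - name: instance-rules
    rules:
      - alert: InstanceDown
        # 'up' is 0 when Prometheus cannot scrape a target
        expr: up == 0
        # only fire after the target has been down for a full minute
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
```

Place the file where prometheus.yml expects it (next to prometheus.yml for the relative path used above) and restart Prometheus to load the rule.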