We are monitoring an HPC computing cluster using a combination of Prometheus, Alertmanager, and Grafana. On our machines, things like swap memory filling up to essentially the limit happen frequently, and while it is useful to see the corresponding info-level alerts in the Grafana Alerts dashboard, we would prefer not to send the corresponding emails.
Is there a way to mute/disable, say, all alerting emails that have severity info in the alertmanager.yml config file?
The alerts are all defined similarly to this one (adapted from https://awesome-prometheus-alerts.grep.to/rules.html):
- alert: HostSwapIsFillingUp
  expr: (1 - (node_memory_SwapFree_bytes / node_memory_SwapTotal_bytes)) * 100 > 95
  for: 60m
  labels:
    severity: info
  annotations:
    summary: Host swap is filling up (instance {{ $labels.instance }})
    description: "Swap is filling up (>95%)\n  VALUE = {{ $value }}"
and the corresponding section in the alertmanager.yml file reads
routes:
  - match:
      severity: 'warning'
    repeat_interval: 24h
    continue: true
  - match:
      severity: 'info'
    repeat_interval: 24h
    continue: true
    receiver: dropped

receivers:
  - name: 'admin-mails'
    email_configs:
      - to: 'admins@DOMAIN'
  - name: 'dropped'
    email_configs:
      - to: 'admins@DOMAIN'
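For reference, one approach I have been considering (a minimal, untested sketch, not necessarily the right way to do this) is to point the info route at a receiver that has no email_configs at all, on the assumption that a receiver without any notifier configuration drops the notification while the alert itself keeps firing in Prometheus:

routes:
  - match:
      severity: 'warning'
    repeat_interval: 24h
    continue: true
  - match:
      severity: 'info'
    receiver: 'dropped'     # route info alerts here and stop (no continue)

receivers:
  - name: 'admin-mails'
    email_configs:
      - to: 'admins@DOMAIN'
  - name: 'dropped'         # intentionally empty receiver: no email_configs, so no email is sent

I am not sure whether such an empty receiver is the intended mechanism, or whether the continue: true / receiver ordering in my current config is the actual problem.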
Is there a way to make sure that the info-level alerts never cause emails while still having them "fire", so that Grafana will display them?