Introduction
Prometheus is an open-source software application used for event monitoring and alerting.
Validated Version: Prometheus 2.14.0
OpsRamp configuration
Configuration involves:
- Installing the integration.
- Configuring the integration.
Step 1: Install the integration
To install:
- Select a client from the All Clients list.
- Go to Setup > Integrations > Integrations.
- From Available Integrations, select Monitoring > Prometheus.
- Click Install.
Step 2: Configure the integration
To configure the integration:
- From the API > Authentication section, copy the following:
- Tenant Id
- Token
- Webhook URL
Note: These settings are used to create an HTTP Request template.
- From the API > Map Attributes section, configure the mapping attributes.
Note: These parameters are used for the third-party’s software configuration.
Configuring the map attributes
To configure the mapping attributes:
- In OpsRamp Entity, specify Alert.
- In OpsRamp Property, select the property from the drop-down.
- Click + to define the mappings.
- From Create Alert Mappings on Status, define the mappings, parsing conditions, and default values.
- Click Save.
Note: Attributes can be modified at any time.
The following table shows attribute mappings:
Third-Party Entity | OpsRamp Entity | Third-Party Property | OpsRamp Property | Third-Party Property Value | OpsRamp Property Value |
---|---|---|---|---|---|
Alert | ALERT | severity | alert.currentState | critical | Critical |
Alert | ALERT | metric | alert.serviceName | container_memory_usage_bytes | NA |
Alert | ALERT | description | alert.description | testing alert1 | NA |
Alert | ALERT | summary | alert.subject | High Memory Usage | NA |
The following example mappings are based on custom labels and values, and reflect the Prometheus alert rule configuration examples used later in this document:
- severity: critical
- metric: container_memory_usage_bytes
- description: testing alert1
- annotations:
  - summary: High Memory Usage
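Before configuring Prometheus, you can optionally verify the webhook endpoint using the values copied from the API > Authentication section. The following is a minimal Python sketch (assuming the third-party requests library; the URL, client ID, and token are placeholders in the webhook URL format used in this document) that posts an Alert Manager-style test payload:

import requests  # third-party HTTP client: pip install requests

# Placeholders: substitute the Tenant Id, Token, and Webhook URL copied from
# the API > Authentication section.
WEBHOOK_URL = "https://{apiserver}/integrations/alertsWebhook/client_123/alerts"
TOKEN = "<token>"

# A minimal Alert Manager-style body (compare the Sample payload section below).
test_payload = {
    "receiver": "opsramp-webhook",
    "status": "firing",
    "alerts": [{
        "status": "firing",
        "labels": {"alertname": "Test Alert", "app": "opsramp", "severity": "critical"},
        "annotations": {"summary": "Webhook connectivity test"},
    }],
}

resp = requests.post(WEBHOOK_URL, params={"vtoken": TOKEN}, json=test_payload)
print(resp.status_code, resp.text)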
Prometheus configuration
Routing Prometheus alerts to OpsRamp is configured through the YAML definitions used during deployment. Depending on your requirements, you can configure Prometheus using either of the following scenarios:
- Configure without defining an app label and forward all alerts generated by Prometheus to OpsRamp.
- Configure with an app label defined and forward only selected alerts to OpsRamp.
Configuring without defining App label
Configuration without defining an app label involves configuring only the Prometheus Alert Manager. The OpsRamp webhook becomes the default receiver in the Alert Manager configuration. Because no label is defined, all alerts with a severity of error or warning that Prometheus generates are forwarded to OpsRamp. As a result, configuring alert rules is not required.
Configure Prometheus Alert Manager
Alert Manager is the receiver that routes the alerts.
To configure alerts in Prometheus:
- Get the Webhook URL from the OpsRamp configuration.
- Use the Webhook URL in the following Prometheus Alert Manager configuration YAML file:
receivers:
- name: opsramp-webhook
  webhook_configs:
  - url: https://{apiserver}/integrations/alertsWebhook/client_123/alerts?vtoken=<token>
    send_resolved: true
route:
  group_by:
  - alertname
  group_interval: 1m
  group_wait: 30s
  receiver: opsramp-webhook
  repeat_interval: 2m
  routes:
  - receiver: opsramp-webhook
    match_re:
      severity: error|warning
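Note: send_resolved: true instructs Alert Manager to also forward resolved notifications to the webhook, so OpsRamp receives a matching update when an alert stops firing in Prometheus.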
Configuring with App label defined
With an app label defined, only selected alerts are forwarded from Prometheus to OpsRamp. Configuration involves the following:
- Configuring Prometheus Alert Manager.
- Configuring alert rules.
Step 1: Configure Prometheus Alert Manager
Alert Manager is the receiver that routes the alerts.
To configure alerts in Prometheus:
- Get the Webhook URL from the OpsRamp configuration.
- Use the Webhook URL in the following Prometheus Alert Manager configuration YAML file:
Note: For a standalone Prometheus Alert Manager server, this configuration YAML file is located in the /etc/alertmanager folder.
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager-config
  namespace: monitoring
data:
  config.yml: |-
    global:
    templates:
    - '/etc/alertmanager/*.tmpl'
    receivers:
    - name: default-receiver
    - name: opsramp-webhook
      webhook_configs:
      - url: "https://<webhook_url>/integrations/alertsWebhook/client_14/alerts?vtoken=<TokenValue>"
    route:
      group_wait: 10s
      group_interval: 5m
      receiver: default-receiver
      repeat_interval: 3h
      routes:
      - receiver: opsramp-webhook
        match_re:
          app: opsramp
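Note: In a Kubernetes deployment, apply the updated ConfigMap (for example, kubectl apply -f alertmanager-config.yaml, using whatever file name you saved the manifest as) and then reload or restart the Alert Manager pods in the monitoring namespace so the new configuration takes effect.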
Step 2: Configure alert rules
This step configures the alert rules in Prometheus. Filtering rules are created by using alerting profiles, and the alert rules are labeled (app: opsramp) so that they are routed to the OpsRamp receiver.
To configure alert rules, add the required OpsRamp labels in the prometheus.rules file (the ConfigMap for alert rules) so that alerts generated from these rules map to the corresponding OpsRamp entities in the OpsRamp alert browser:
YAML file
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.rules: |-
    groups:
    - name: devopscube demo alert
      rules:
      - alert: High Pod Memory
        expr: sum(container_memory_usage_bytes) > 1
        for: 1m
        labels:
          severity: critical
          app: opsramp
          metric: container_memory_usage_bytes
          description: testing alert1
        annotations:
          summary: High Memory Usage
    - name: devopscube demo alert2
      rules:
      - alert: High Pod Memory2
        expr: sum(container_memory_usage_bytes) > 2
        for: 1m
        labels:
          severity: VeryCritical
          app: opsramp
          metric: container_memory_usage_bytes
          description: testing alert2
        annotations:
          summary: High Memory Usage2
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    rule_files:
      - /etc/prometheus/prometheus.rules
    alerting:
      alertmanagers:
      - scheme: http
        static_configs:
        - targets:
          - "alertmanager.monitoring.svc:9093"
    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https
      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name
      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
        - role: endpoints
        relabel_configs:
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: kubernetes_name
Notes
- For a standalone Prometheus server, this configuration file is located at /etc/prometheus/prometheus.rules.
- The terminology used in the severity label corresponds to critical/warning/ok alerts in OpsRamp; configure the same values in the OpsRamp attribute mappings for the Prometheus integration, as sketched below.
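To make this concrete, the following is a minimal, illustrative Python sketch (not OpsRamp code; the dictionary and function names are hypothetical) of the severity-to-state translation that the attribute mappings perform:

# Illustrative only: mirrors the attribute mappings configured in OpsRamp.
SEVERITY_TO_OPSRAMP_STATE = {
    "critical": "Critical",      # from the attribute-mapping table above
    "warning": "Warning",        # assumed analogous mapping
    "ok": "Ok",                  # assumed analogous mapping
    "VeryCritical": "Critical",  # custom value from the example rules; must be mapped explicitly
}

def to_opsramp_state(severity_label: str) -> str:
    # Unmapped values need a default or an explicit mapping in OpsRamp.
    return SEVERITY_TO_OPSRAMP_STATE.get(severity_label, "Warning")

print(to_opsramp_state("VeryCritical"))  # Critical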
Sample payload
{ "receiver": "opsramp-webhook", "status": "firing", "alerts":
[{
"status": "firing", "labels":
{ "alertname": "High Pod Memory", "app": "opsramp", "severity": "slack" },
"annotations": { "summary": "High Memory Usage" },
"startsAt": "2019-09-19T08:14:52.059731582Z",
"endsAt": "0001-01-01T00:00:00Z",
"generatorURL": "[http://prometheus-deployment-7bc6dc6f77-ds6j2:9090/graph?g0.expr=sum%28container_memory_usage_bytes%29+%3E+1u0026g0.tab=1](http://prometheus-deployment-7bc6dc6f77-ds6j2:9090/graph?g0.expr=sum%28container_memory_usage_bytes%29+%3E+1u0026g0.tab=1)",
"fingerprint": "243ccc9d065e8b26"
},
{
"status": "firing",
"labels":
{ "alertname": "Low Containers Count", "app": "opsramp", "severity": "page" },
"annotations": { "summary": "Low Container Count" },
"startsAt": "2019-09-19T08:14:53.135072669Z",
"endsAt": "0001-01-01T00:00:00Z",
"generatorURL": "[http://prometheus-deployment-7bc6dc6f77-ds6j2:9090/graph?g0.expr=sum%28kubelet_running_container_count%29+%3C+40u0026g0.tab=1](http://prometheus-deployment-7bc6dc6f77-ds6j2:9090/graph?g0.expr=sum%28kubelet_running_container_count%29+%3C+40u0026g0.tab=1)",
"fingerprint": "a95e6f948c14554a"
}
],
"groupLabels": { },
"commonLabels": { "app": "opsramp" },
"commonAnnotations": { },
"externalURL": "[http://alertmanager-7b6d855bd8-7mvf2:9093](http://alertmanager-7b6d855bd8-7mvf2:9093/)",
"version": "4",
"groupKey": "{}/{app=~\" ^ ( ? : opsramp) $\"}:{}"
}
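As an illustration of how the payload fields line up with the attribute mappings, here is a minimal Python sketch (the PAYLOAD string is a trimmed stand-in for the webhook request body; it is not OpsRamp code):

import json

# Minimal sketch: extract the fields that the OpsRamp attribute mappings
# consume from an Alert Manager webhook payload (compare the sample above).
PAYLOAD = """
{
  "status": "firing",
  "alerts": [
    {
      "status": "firing",
      "labels": {
        "alertname": "High Pod Memory",
        "app": "opsramp",
        "severity": "critical",
        "metric": "container_memory_usage_bytes",
        "description": "testing alert1"
      },
      "annotations": {"summary": "High Memory Usage"}
    }
  ]
}
"""

for alert in json.loads(PAYLOAD)["alerts"]:
    labels = alert.get("labels", {})
    summary = alert.get("annotations", {}).get("summary")
    print(labels.get("severity"),     # maps to alert.currentState
          labels.get("metric"),       # maps to alert.serviceName
          labels.get("description"),  # maps to alert.description
          summary)                    # maps to alert.subject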
What to do next
View alerts in OpsRamp.
- From the Workspace drop-down in the OpsRamp Console, navigate to Alerts.
- On the Alerts page, search using Prometheus as the Source name. The related alerts are displayed.
- Click an Alert ID to view the alert details.