Metric Alerts

Alerts notify you of changes in your environments. Metric alerts let you monitor time-series metrics and receive a notification when they violate pre-defined thresholds. You can define metric-based alerts using either Form or PromQL.

To create a Metric Alert:

  1. Log in to Sysdig Monitor and open Alerts from the left navigation bar.
  2. Click Add Alert and choose Metric to begin defining a Metric Alert.

Define a Metric Alert

  • Scope: The alert is set to apply to the Entire Infrastructure of your Team Scope by default. However, you can restrict the alert scope by filtering with specific labels, such as cloud_provider_region or kube_namespace_name.

  • Metric: Select the metric you want to monitor and configure how the data is aggregated. For instance, to monitor the read latency of a Cassandra cluster, set the metric to cassandra_read_latency and choose the aggregation method that best suits your needs: use the average aggregation to understand the mean latency across the entire cluster, or the maximum aggregation to identify the nodes with the highest latency.

  • Group By: Grouping metrics by labels such as cloud_provider_availability_zone generates a unique segment for each availability zone. This allows you to quickly detect whether a particular availability zone is responsible for increased cassandra_read_latency or other performance degradation, as shown in the example query after this list.
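
Taken together, the Scope, Metric, and Group By settings map onto a PromQL-style query. The following sketch is illustrative only, combining the metric and labels mentioned above; the namespace value is a placeholder:

avg by (cloud_provider_availability_zone) (cassandra_read_latency{kube_namespace_name="cassandra"})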

Range and Duration

The range of an alert query defines the time period over which the relevant metric data is evaluated. It should not be confused with the duration of an alert rule, which can only be configured for PromQL Alerts and refers to the length of time an alert condition must persist before triggering an alert. Metric Alerts, even when translated to PromQL, will trigger as soon as the alert condition is satisfied.
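
For example, in the illustrative query below, the [5m] range selector controls how much historical data each evaluation inspects; a duration, by contrast, would additionally require the condition to hold for a set time before the alert fires. The metric is reused from the Cassandra example above, and the threshold value is arbitrary:

avg_over_time(cassandra_read_latency[5m]) > 100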

Frequency of Alert Rule Evaluation

The Alert Editor automatically displays the time window that works best with your alert rule. Every data point in the alert preview corresponds to an evaluation of the alert rule.

To view time series data older than the recommended window, click Explore Historical Data in the top right corner of Alert Editor. This will populate a PromQL Query in the Explore module with your current settings.

The frequency at which an alert rule is evaluated depends on the range specified in its query, defined by the "over the last" parameter. Using a larger time window for data aggregation leads to less frequent evaluations of the alert rule. For instance, consider monitoring a service’s error rate: occasional errors might be tolerable, but a steady stream of errors over a certain period could indicate a problem. With an alert query like min_over_time(service_error_rate[4h]), the alert rule is evaluated every 10 minutes, and each evaluation analyzes the error rate over the past 4 hours to determine whether the alert should trigger.
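
Written as a complete alert condition, this example might look like the following sketch, where a result above 0 means the error rate never dropped to zero at any point in the past 4 hours (the comparison value is illustrative):

min_over_time(service_error_rate[4h]) > 0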

Re-notifications for an alert cannot be sent more frequently than the alert rule’s evaluation interval and must be a multiple of this interval. For example, if an alert rule is evaluated every 10 minutes, re-notifications can only occur at multiples of the evaluation frequency, such as 20 minutes, 30 minutes, and so forth.

Range of Alert Query    Frequency of Alert Rule Evaluation
up to 3h                1m
up to 1d                10m
up to 7d                1h
up to 60d               1d
60d+                    Not Supported

Configure Threshold

Define the threshold and time range for assessing the alert condition.

Metric alerts can aggregate data over the time range in various ways:

Aggregation    Description
average        The average of the retrieved metric values across the time period.
sum            The sum of the retrieved metric values across the time period.
maximum        The maximum of the retrieved metric values across the time period.
minimum        The minimum of the retrieved metric values across the time period.
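
In PromQL terms, these aggregations correspond roughly to the *_over_time family of functions. The following sketch is illustrative, reusing the cassandra_read_latency metric from earlier with an arbitrary 10-minute range:

avg_over_time(cassandra_read_latency[10m])
sum_over_time(cassandra_read_latency[10m])
max_over_time(cassandra_read_latency[10m])
min_over_time(cassandra_read_latency[10m])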

For more information on thresholds, see Multiple Thresholds.

Images and Labels in Metric Alert Notifications

Metric Alert notifications forwarded to Slack or Email include a snapshot of the triggering time series data. For Slack notification channels, the snapshot can be toggled within the notification channel settings. When the channel is configured to Notify when Resolved, a snapshot of the time series data that resolves the alert is also provided in the notification.

All Metric Alert notifications are enriched by default with contextual labels, which aid in faster issue identification and resolution. When an alert rule triggers, Sysdig automatically appends contextual labels to the alert notification, such as host_hostname, cloud_provider_region, and kube_cluster_name.

Multiple Thresholds

In addition to an alert threshold, a warning threshold can be configured. Warning thresholds and alert thresholds can be associated with different notification channels. For example, you might send warning and alert notifications to Slack, but also page the on-call team on PagerDuty when the alert threshold is met.

If both warning and alert thresholds are associated with the same notification channel, a metric that immediately exceeds the alert threshold skips the warning threshold and triggers only the alert notification.

Create an Alert on No Data

With the No Data alert configuration, you can choose how to handle situations when there is no incoming data for a metric across all its time series. In the Settings section, select from the two options for No Data:

  • Ignore: Select this option if you prefer not to receive notifications when all time series of a metric stop sending data.

  • Notify: Choose this option if you want to be alerted when data stops coming in for all time series of a metric.

A No Data alert will not be triggered by an individual time series ceasing to report data; it activates only when all time series for a metric stop reporting.

Metric Alerts in Sysdig Monitor do not auto-resolve when a time series that triggered an alert rule stops reporting data, unlike PromQL alerts, which can auto-resolve under similar conditions. This means you must manually resolve an alert occurrence if the time series that triggered the alert rule ceases to report data.

Translate to PromQL

You can automatically translate from Form to PromQL in order to leverage the flexibility and power of PromQL. Use the Translate to PromQL option to create more complex queries.

This query, for example, looks at the percentage of available memory on a host:

sysdig_host_memory_available_bytes / sysdig_host_memory_total_bytes * 100

Thresholds are configured separately from the query. This means you can specify both an alert threshold and a warning threshold.
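
For example, pairing the query above with an illustrative alert threshold of 10% available memory yields a condition equivalent to:

sysdig_host_memory_available_bytes / sysdig_host_memory_total_bytes * 100 < 10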

Metric alerts translated from form to PromQL do not currently support configuring a duration.

Example: Alert When Data Transfer Over the Threshold

The following example shows an alert that triggers when the average bytes of data transferred by a container exceed 20 KiB/s for a period of 1 minute.
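
A PromQL sketch of this condition might look like the following; the metric name sysdig_container_net_total_bytes is an assumption for illustration, and the metric is assumed to report a per-second byte rate (20 KiB/s = 20480 bytes/s):

avg_over_time(sysdig_container_net_total_bytes[1m]) > 20480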

In the alert Settings, you can configure a Link to Runbook and a Link to Dashboard to speed up troubleshooting when the alert fires. When viewing the triggered alert, you can quickly access your defined Runbook and Dashboard.