Threshold Alerts

Monitor your infrastructure by comparing any metric against user-defined thresholds.

Threshold Alerts were formerly known as Metric Alerts.

Create a Threshold Alert

To create a Threshold Alert:

  1. Log in to Sysdig Monitor and open Alerts from the left navigation bar.
  2. Click Add Alert and choose Threshold to begin defining a Threshold Alert.

Define a Threshold Alert

  • Scope: By default, the alert applies to the Entire Infrastructure of your Team Scope. However, you can restrict the alert scope by filtering with specific labels, such as cloud_provider_region or kube_namespace_name.

  • Threshold: Select the metric you want to monitor, and configure how you want the data to be aggregated. For instance, if you want to monitor the read latency of a Cassandra Cluster, you can set the metric to cassandra_read_latency. From there, you can choose the aggregation method that best suits your needs. For example, if you want to understand the mean latency across the entire cluster, you can use the average aggregation. Alternatively, if you want to identify nodes with the highest latency, you can use the maximum aggregation.

  • Group By: Grouping metrics by labels such as cloud_provider_availability_zone generates a unique segment for each availability zone. This allows you to quickly detect whether a particular availability zone is responsible for increased cassandra_read_latency or other performance degradation. See the example query after this list.

  • “Over the last”/Range: The range of an alert rule determines the time window over which the selected metric is aggregated. For example, if you select the avg aggregation for the cassandra_read_latency metric with a specified range, it calculates the average value of the cassandra_read_latency metric over that time window. This range defines how far back in time the metric values are considered for time aggregation.

  • Duration: Duration defines how long an alert condition must be continuously satisfied before an alert triggers. For instance, a duration of 10m means the condition must be met for 10 continuous minutes. If the condition stops being satisfied at any point within this window, the 10-minute timer resets and the condition must again hold for a full, uninterrupted 10 minutes. Setting a longer duration reduces false positives by preventing alerts from being triggered by short-lived threshold violations.
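
As a rough illustration, the settings above map onto a PromQL-style query. The following sketch assumes the cassandra_read_latency metric from the example, an illustrative scope filter on kube_namespace_name, and a hypothetical threshold of 50; the query generated by the Alert Editor may differ.

avg by (cloud_provider_availability_zone) (avg_over_time(cassandra_read_latency{kube_namespace_name="cassandra"}[10m])) > 50

Here the [10m] range corresponds to “Over the last”, avg_over_time to the aggregation, the avg by (...) clause to Group By, and the label filter inside the braces to Scope. The Duration setting is applied on top of this query and is not part of the expression itself.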

Range and Duration

The Range/“Over the last” of an alert query defines the time period over which the relevant metric data is evaluated. It should not be confused with the Duration of an alert rule, which refers to the length of time an alert condition must be met before it triggers an alert.
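
For example, an alert rule such as avg_over_time(cassandra_read_latency[10m]) > 50, with a Duration of 5m, aggregates the metric over a 10-minute window (the Range) but fires only after the aggregated value has stayed above the threshold for 5 consecutive minutes (the Duration). The metric name and threshold are illustrative.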

Frequency of Alert Rule Evaluation

The Alert Editor automatically displays the time window that works best with your alert rule. Every data point in the alert preview corresponds to an evaluation of the alert rule.

To view time series data older than the recommended window, click Explore Historical Data in the top right corner of Alert Editor. This will populate a PromQL Query in the Explore module with your current settings.

The frequency at which an alert rule is evaluated depends on the Range specified in its query, defined by the “over the last” parameter. Using a larger Range for time aggregation can lead to less frequent evaluations of the alert rule. For example, you may wish to monitor a service’s error rate. Occasional errors are tolerable, but a steady stream of errors over a certain period might indicate a problem. In this case, you can set up an alert query with a Range of 4h, such as min_over_time(service_error_rate[4h]). Every ten minutes, this alert rule analyzes the error rate over the past four hours to determine whether the alert should trigger.
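
A complete alert condition for this scenario might look like the sketch below, where service_error_rate is the example metric from above and the 0.05 threshold is purely illustrative (it assumes the metric is expressed as a ratio):

min_over_time(service_error_rate[4h]) > 0.05

Because the minimum over the last four hours must exceed the threshold, the alert fires only when the error rate has stayed elevated for the entire window, not just during a brief spike.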

Re-notifications for an alert cannot be sent more frequently than the alert rule’s evaluation interval and must be a multiple of this interval. For example, if an alert rule is evaluated every 10 minutes, re-notifications can only occur at multiples of the evaluation frequency, such as 20 minutes, 30 minutes, and so forth.

Range of Alert Query    Frequency of Alert Rule Evaluation
up to 3h                1m
up to 1d                10m
up to 7d                1h
up to 60d               1d
60d+                    Not Supported


Configure Threshold

Define the threshold and time range for assessing the alert condition.

Threshold Alerts can aggregate data over the time range in various ways:

Aggregation    Description
average        The average of the retrieved metric values across the time period.
sum            The sum of the retrieved metric values across the time period.
maximum        The maximum of the retrieved metric values across the time period.
minimum        The minimum of the retrieved metric values across the time period.
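
When translated to PromQL, these aggregations roughly correspond to the *_over_time family of functions. The sketch below reuses the cassandra_read_latency metric from the earlier example with a 10m range; the query produced by Translate to PromQL may differ.

avg_over_time(cassandra_read_latency[10m])
sum_over_time(cassandra_read_latency[10m])
max_over_time(cassandra_read_latency[10m])
min_over_time(cassandra_read_latency[10m])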

For more information on thresholds, see Multiple Thresholds.

Time Series Visualization in Threshold Alert Notifications

Threshold Alert notifications forwarded to Slack or Email include a snapshot of the triggering time series data. For Slack notification channels, the snapshot can be toggled within the notification channel settings. When the channel is configured to Notify when Resolved, a snapshot of the time series data that resolves the alert is also provided in the notification.

Enriched Labels in Threshold Alert Notifications

All Threshold Alert notifications are enriched by default with contextual labels, which aid in faster issue identification and resolution. When an alert rule triggers, Sysdig automatically appends contextual labels to the alert notification, such as host_hostname, cloud_provider_region, and kube_cluster_name.

Multiple Thresholds

In addition to an alert threshold, you can configure a warning threshold. Warning and alert thresholds can be associated with different notification channels. For example, you might send both warning and alert notifications to Slack, but also page the on-call team through PagerDuty when the alert threshold is met.

If both warning and alert thresholds are associated with the same notification channel, a metric that immediately exceeds the alert threshold skips the warning threshold and triggers only the alert notification. For instance, with a warning threshold of 400 and an alert threshold of 500, a value that jumps directly from 300 to 600 produces a single alert notification rather than a warning followed by an alert.

Create an Alert on No Data

With the No Data alert configuration, you can choose how to handle situations when there is no incoming data for a metric across all its time series. In the Settings section, select from the two options for No Data:

  • Ignore: Select this option if you prefer not to receive notifications when all time series of a metric stop sending data.

  • Notify: Choose this if you want to be alerted when data stops coming in for all time series of a metric.

A No Data alert will not be triggered by an individual time series ceasing to report data; it activates only when all time series for a metric stop reporting.

Threshold Alerts in Sysdig Monitor do not auto-resolve when a time series that triggered an alert rule stops reporting data, unlike Prometheus alerts, which can auto-resolve under similar conditions. This means you must manually resolve an alert occurrence if the time series that triggered the alert rule ceases to report data.

Translate to PromQL

You can automatically translate an alert from Form to PromQL to leverage the flexibility and power of PromQL. Use the Translate to PromQL option to create more complex queries.

This query, for example, looks at the percentage of available memory on a host:

sysdig_host_memory_available_bytes / sysdig_host_memory_total_bytes * 100

Thresholds are configured separately from the query. This means you can specify both an alert threshold and a warning threshold.
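
A query like this combines two metrics, which goes beyond selecting a single metric in the form. You can extend it further with label filters and grouping, for example to average the available-memory percentage per availability zone within a single region. The region value and the presence of these labels on the host metrics are assumptions:

avg by (cloud_provider_availability_zone) (sysdig_host_memory_available_bytes{cloud_provider_region="us-east-1"} / sysdig_host_memory_total_bytes{cloud_provider_region="us-east-1"} * 100)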

Example: Alert When Data Transfer Exceeds the Threshold

The example below shows an alert that triggers when the average data transferred by a container exceeds 500 KiB/s for a period of 1 minute.
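
A hedged PromQL sketch of this condition follows. The metric name sysdig_container_net_total_bytes, its interpretation as a per-second byte rate, and the container_name grouping label are assumptions; substitute the metric and labels your environment reports. 512000 is 500 KiB expressed in bytes.

avg by (container_name) (avg_over_time(sysdig_container_net_total_bytes[1m])) > 512000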