Legacy Event Alerts
Monitor occurrences of specific events, and alert if the total number of occurrences violates a threshold. Useful for alerting on container, orchestration, and service events like restarts and deployments.
Alerts on events support only one segmentation label. An alert is generated for each segment.
Defining an Event Alert
Name and description: Set a unique, meaningful name and description that help recipients easily identify the alert.
Severity: Set a severity level for your alert. Severity levels are reflected in the Alert list, where you can sort by severity using the top navigation pane. You can also use severity as a criterion when creating events and alerts, for example: notify if there are more than 10 high-severity events.
Event Source: Filter by one or more event sources that should be considered by the alert. Predefined options are included for infrastructure event sources (kubernetes, docker, and containerd), but you can freely specify other values to match custom event sources.
Trigger: Specify the trigger condition in terms of the number of events for a given range.
Event alerts support only one segmentation label. If you choose Multiple Alerts, Sysdig generates a separate alert for each segment that meets the condition.
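The per-segment behavior above can be sketched as follows. This is a minimal illustration, not Sysdig's actual evaluation logic; the `kube_namespace` label and the event records are hypothetical stand-ins for the single segmentation label you choose in the alert definition.

```python
from collections import Counter

# Hypothetical event records; "kube_namespace" stands in for the one
# segmentation label selected in the alert definition.
events = [
    {"source": "kubernetes", "kube_namespace": "prod"},
    {"source": "kubernetes", "kube_namespace": "prod"},
    {"source": "kubernetes", "kube_namespace": "staging"},
]

def alerts_per_segment(events, label, threshold):
    """Count events per segment value and flag each segment whose
    count breaches the threshold (Multiple Alerts behavior)."""
    counts = Counter(e[label] for e in events)
    return {segment: n for segment, n in counts.items() if n > threshold}

# With a threshold of 1, only the "prod" segment breaches and alerts.
print(alerts_per_segment(events, "kube_namespace", 1))  # {'prod': 2}
```

Because each segment is evaluated independently, a noisy namespace alerts on its own without being masked by quiet ones.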
Scope: Filter the environment to which this alert applies. Use advanced operators to include, exclude, or pattern-match groups, tags, and entities. You can also create alerts directly from Explore and Dashboards to automatically populate this scope.
In this example, failing a liveness probe in the agent-process-whitelist-cluster cluster triggers an alert.
Define the threshold and time window for assessing the alert condition. Single Alert fires one alert for your entire scope, while Multiple Alerts fires a separate alert for each segment that breaches the threshold.
If the number of events triggered in the monitored entity is greater than 5 for the last 10 minutes, recipients will be notified through the selected channel.