Sysdig Secure
Sysdig Secure is part of Sysdig’s container intelligence platform. Sysdig uses a unified platform to deliver security, monitoring, and forensics in a cloud, container and microservices-friendly architecture integrated with Docker and Kubernetes. Sysdig Secure takes a services-aware approach to protect workloads while bringing deep cloud and container visibility, posture management (compliance, benchmarks, CIEM), vulnerability scanning, forensics and threat detection and blocking.
In the background, the Sysdig agent lives on the hosts being monitored
and collects the appropriate data and events. For more information, see
the Sysdig Agent
Documentation.
Key Features
Presents relevant performance and security data together.
Offers host and image scanning, auditing, and runtime vulnerability
management capabilities:
Filter and surface vulnerabilities against images, clusters,
namespaces, hosts or any other label
Alert on unscanned images or images whose evaluation status has
changed due to new vulnerabilities
Log user actions, container activity, and command-line arguments
Enforce security policies and block attacks
Provides posture management for a distributed environment:
Easily schedule customized benchmark tests to run across cloud, hosts,
services, or clusters
Control compliance at cloud, orchestrator and container level.
Track and optimize cloud users' permissions and entitlements.
Export results to SIEM, logging clusters, or other tools your
organization uses
Provides runtime detection and data enrichment:
Identify and block threats in real-time, based on application,
container, and network activity
Instrument Kernel to track all app, container, host, and network
system calls
View security policy violations based on orchestrated services
Manage multi-cloud events using single and multiple accounts
Supports incident response and forensics:
Protect distributed, dynamic, and ephemeral services with a
single-service policy involving no manual configuration
Create detailed system captures for any policy violation or
incident, enabling the ability to take actions against malicious
activity
Drill down from policy violations into 100% granularity captures
of pre- and post-attack activity
View SCAP files to see all system activity before, during, and
after any security event
Integrate alerting and incident response
1 - Home
The Home page offers a clean, visual representation of the most important issues in your environment and a curated list of the top tasks required. The top half encompasses the Dashboards and the bottom half the To Do Recommendations list.
With the introduction of the Home page, earlier interfaces, such as Get Started and the Sysdig Secure Overview, are no longer required.
1.1 - Secure Dashboards
The Home page offers a clean, visual representation of the most important issues in your environment and a curated list of the top tasks required. With the introduction of the Home page, earlier interfaces, such as Get Started and the Sysdig Secure Overview, are no longer required.
The top half of the page encompasses the Dashboards.

For the Home page dashboards to display data, you must have completed basic onboarding and at least one data source must be connected. Otherwise, the page will provide prompts for completing those setup tasks.
Data Source Status
At the top of the page is a status summary of data sources:
- Detected cloud accounts
- Sysdig agent status, based on nodes where agents have been or could be deployed.

Cloud Accounts
If you have installed Sysdig Secure for cloud, cloud account links are displayed per cloud provider (AWS | GCP | Azure). From here you can see:
- Detected accounts
- Any Out of Date or Almost Out of Date clusters
- Link to the Data Sources page to take action

Sysdig Agents
Similarly, here you can see:
- How many nodes are detected
- Which nodes might require attention because agents are out of date or almost out of date
- Link to the Data Sources page to take action

Dashboard Panels
Each dashboard in the top half of the Home page provides a view of the trends and most urgent issues in the areas of Compliance, Risks and Vulnerabilities, and Identity and Access.
Each dashboard:
- Links directly to the related Sysdig Secure module and task
- Provides an at-a-glance visualization of the environment status across these modules
You can expand the Dashboard for a full page experience or minimize to a collapsed view using the highlighted toggles:

Compliance
The Compliance dashboard requires the use of the new Compliance module, and gives a window into the Compliance Views landing page.

Hover over elements for tooltip descriptors, and click on a row to jump to the Compliance Views: Results page to begin remediations.
Risks and Vulnerabilities
This dashboard gives a variety of ways to quickly understand the top risks and vulnerabilities in your environment.
The top ways to use it are:
Scan for spiking runtime events

The chart collates runtime events with medium or high severity in a 7-day view, per day.
Scan for MITRE ATT&CK events, broken down per cluster

Check top workloads at risk - running workloads with critical vulnerabilities and respective packages in execution

Expand the panel to full-screen to show the Workload Risk Assessment chart, which maps your riskiest workloads, showing those with running vulnerabilities from the past 24 hours and runtime events in the past week.

Focus on the top-right bubbles first (critical vulnerability with high risk event).
Click the panel to jump into the Vulnerabilities | Runtime module for more triaging.
You can also filter by Cluster or Namespace within the Risks and Vulnerabilities panel.
Identity and Access
This dashboard shows:
- How many users and/or roles are inactive
- How many users and/or roles lack multifactor authentication (MFA).

Click on the panel to jump into the Identity and Access | Users page to triage.
You can also filter by Account ID within the panel.

1.2 - To Do
The Home page offers a clean, visual representation of the most important issues in your environment and a curated list of the top tasks required. The bottom half encompasses the To Do recommended task list.
With the introduction of the Home page, earlier interfaces, such as Get Started and the Sysdig Secure Overview, are no longer required.
To Do Task Recommendations guide users to take the most impactful actions to reduce security risks in their environments, helping to cut through the noise and focus on the most important alerts and findings.
When using the Task Recommendations panel, you can always:
Expand or Shrink the panel to focus on the top or bottom of the page

Sort your recommendations:

- By Highest Priority: Sysdig's prioritized list sorts recommendations by the actions that will have the largest impact on reducing security risk in your environments. Recommendations are sorted within a specific product area (e.g. Compliance, Identity, Setup).
- By Last Updated: This sorts recommendations by what has been updated most recently. It may be either a new recommendation or an existing recommendation with new findings or failing resources.
- Within a Recommendation: Any list of findings or failing resources within a recommendation will be automatically sorted by highest risk.
Scroll the Top 3 tasks
in each category, or click See All
to see all the recommended tasks in that category

Check details by clicking a task to open the details panel on the right
Dismiss a task (for 1 day/week/month/3 months).
NOTE: This applies only to the current user profile; it does not remove the task from other users' lists
Take action from the detail panel, depending on the task type
Setup
To Do will recommend certain Setup Tasks, which vary based on what phase of onboarding a user is in. These tasks include Connecting a Data Source, Setting Up Integrations, and Educational Product Tours.

Compliance
Compliance recommendations show the top actions you can take to effect the greatest improvement in compliance scores and exposure. Selecting the Remediate button opens an Actionable Compliance drawer detailing the failing resources and respective remediation actions.

Identity and Access
Identity recommendations focus on highlighting IAM risks based on both overly permissive Policies and risky attributes Sysdig identifies for Users and Roles.
Wherever possible, the steps to be taken are summarized directly in the panel.

Open JIRA Tickets from Identity Recommendations
If the Sysdig Secure administrator has enabled an integration with a JIRA ticketing system, then Identity recommendations include the option to assign the policy updates to another team member via a JIRA ticket.

- Project: Drop-down displays all the projects to which the user who created the API token has access. You must choose a project in order to see available assignees.
- Issue Type: The integration currently supports Task, Story, and Bug
- Description: Auto-filled. Content can also be added freely
- Assignee: Drop-down list of all possible assignees from JIRA for the selected project. If left blank, it will default to the lead for the project on JIRA. Type to quick-search the assignee list.
- Attachments: Least Permissive policy suggestions will attach a CSV summary and a JSON with the suggested policy. Other types will attach a CSV Summary.
Creation/Deletion Notes:
- If you delete a JIRA integration, it won't affect the tickets you have already opened.
- Creation and deletion of a JIRA Integration will be noted in the Sysdig platform audit.
2 - Insights
Sysdig Secure (SaaS) has introduced a powerful visualization tool for threat detection, investigation, and risk prioritization, to help identify compliance anomalies and ongoing threats to your environment.
With Insights, all findings generated by Sysdig across both workload and
cloud environments are aggregated into a visual platform that
streamlines threat detection and forensic analysis.

Highlights:
Bird's-eye view of findings across environments and timelines, with
responsive representations combined with summaries plus the linear
events feed
Instantly hone in on problem areas or block out noisy results
Share views with team members
Access the Insights Page
The Insights page is enabled automatically as the landing page for
Sysdig Secure.
Usage
The Insights tool is intuitive and easy to use. Note the following
design and usage attributes.
Navigation
Choose the resources you want to view from the top-left dropdown.
Cloud User Activity: Detects vulnerabilities and events related
to user activity in connected cloud accounts. It includes User,
Account, Region, Resource Category, Resource Type, and Resource.
Cloud Activity: Detects all findings in connected cloud
accounts. Specifically, it includes Account, Region, Resource
Category, Resource Type, and Resource.
Kubernetes Activity: Detects all findings in connected
Kubernetes clusters, namespaces, and workloads. It includes Cluster,
Namespace, Pod Owner, and Workload.
Composite View: Detects and aggregates all findings from both
the Cloud Activity and the Kubernetes Activity views. It includes
Account, Region, Resource Category, Resource Type, Resource,
Cluster, Namespace, Pod Owner, and Workload.
The default view shown will be based on the findings in your
environment. If there are events in Cloud and Kubernetes, the Composite
view is default; otherwise the Cloud or Kubernetes Activity view is
chosen.
If a particular type of resource is not connected in your environment,
that page will show no findings.
Timeline
As with many other Sysdig tools, you scope by timespan using the
timeline at the bottom of the page.

The default span is 14 days. You can choose other presets (3H, 12H, 1D, 3D, etc.) or set a span using the clickable calendar.
Insights displays up to 14 days or 999 events, whichever comes first.
Visualization Panel
The power of the Insights tool resides in the Visualization panel.
Experiment with the Visualization panel features:
Concentric rings drill down the resources to the most granular findings. Note that the header labels each level in order (Account > Region > Resource Category > ...)

Hover over a target area for details, and click to isolate in the
summary.

Change the Timeline.
Take advantage of Search | Show | Hide | Exclude.
Activity Panel: Summary
The Summary panel recapitulates the Visualization panel as an ordered list, organized by Severity level and impacted Rule Name.
Click a line item to open the details. See at a glance the
affected containers, images, rules, user names, etc.

Take advantage of Search | Show | Hide | Exclude.
Cloud Activity Summary Panel
For AWS Cloud Activity, the summary also includes a link back to view
the data in the AWS Console.

Activity Panel: Events
The Events panel replicates the Sysdig Secure
Events feed. Click an entry
in the time-based list to open its details.

Search | Show | Hide | Exclude
The Search bar works in conjunction with options in the Activity Summary.

Each line of the Activity Summary includes the Show (=), Hide (!=), and Exclude options.
Show (=): Click Show to add that finding to the Search bar, and to the page URL. The Visualization will be targeted accordingly.
Hide (!=): Click Hide to filter that finding from the Visualization, adding the filter to the Search and the URL.
Exclude: Click Exclude to refetch the data without the excluded entry. This cuts down on noisy, repetitious results (which in some cases could cause the 999-item limit to be exceeded).
Note that Show and Hide do not trigger a re-fetch of data.
Once you have excluded an entry, the Exclude icon is displayed in the Visualization header.
Insights Team-Based Views and Sharing
Note:
Your team and user role influence what Insights you have access to.
The page URL persists search and filter items, and can be shared
with team members with the same level of permissions.
See User and Team
Administration for more
detail.
3 - Vulnerability Management
This doc applies only to the Vulnerability Management engine, released April 20, 2022. Make sure you are using the correct documentation: Which Scanning Engine to Use
Understanding Vuln Management Stages
One key to designing your vulnerability management deployment and strategy is to understand the different lifecycle phases to be addressed:

Basic Concepts
- Vulnerabilities are present in the software that has been installed in the images during the build phase - when we define and assemble the image.
- A container image is immutable by definition. If we change the contents of an image, then it becomes a different image in practice (with different ImageID, etc.).
- Nevertheless, even if the image itself is immutable, Sysdig can discover new vulnerabilities contained in running container images (e.g. Kubernetes workloads) at any moment in time, given that the security feeds are constantly updated.
- For example, an image that had no known vulnerabilities at build time may be impacted by a newly discovered critical vulnerability 10 days after entering runtime. The image itself is exactly the same, but the security feeds discovered a new piece of information related to the image’s software.
Pipeline and Runtime
Although the underlying algorithm to analyze the image contents (SBOM) and match vulnerabilities to it is basically the same, Sysdig treats images differently depending on whether they are located in a pipeline or being used as the base for a running container, also known as runtime workloads.
Pipeline
Any analysis conducted prior to the runtime phase is considered pipeline. This typically means CI/CD builds (Jenkins, GitHub, etc.), but can also be just an execution of the sysdig-cli-scanner binary performed on a developer laptop or with a custom scanning script.
- Pipeline images do not have runtime context.
- The scan happens outside of the execution nodes where the agent is installed:
- CI/CD
- External instrumentation
- Custom scripts or image scanning plugins
- Pipeline scans are one-off vulnerability reports; the information is a static snapshot with its corresponding execution date.
- If you want to evaluate a newer version of the image or just reevaluate the same image with newer feed information, the analysis needs to be triggered again.
- Images analyzed using the sysdig-cli-scanner will show up in the Pipeline section of the vulnerability management interface.
Runtime
Runtime workloads are executed from an image. Accessing the Runtime section of the Vulnerabilities menu, you will be able to see those images and their vulnerability and policy evaluation.
- Runtime workloads are located in an execution node and are being monitored by a Sysdig agent/node analyzer, for example a Kubernetes node that is instrumented using the Sysdig agent bundle.
- Runtime workloads will offer a live, auto-refreshing state. This means:
- Workloads that are no longer running will be removed from the runtime view
- Vulnerability and policy evaluations will automatically refresh without any user interaction, always offering the most up-to-date information known.
- Runtime workloads have a runtime context associated with them, e.g. Kubernetes cluster and namespace.
- Workloads analyzed during runtime will show up in the Runtime section of the vulnerability management interface.
Vulnerabilities Features
Sysdig’s Vulnerabilities module addresses the top requirements for effective vulnerability management:
Provides highly accurate views of vulnerability risk at scale
Deep visibility into system calls provides high accuracy about active packages
Rich details provide precision about vulnerability risk (e.g. CVSS vector, score, fix age) and insights from multiple expert feeds (e.g. VulnDB)
Access to public exploits allows you to verify security controls and patch efficiently
Prioritized risk data focused on the vulns that are tied to the packages loaded at runtime
Accepting risks on a carefully considered basis
At this time, the Vulnerability Management engine supports: CI/CD pipeline & runtime image scanning, policies, notifications, and reporting for runtime. Registry scanning is not yet supported.
Getting Started with Vulnerabilities
Ensure you have completed the Sysdig Secure onboarding steps. Then:
Log in to Sysdig Secure with Advanced User+ permissions and select Vulnerabilities.

The out-of-the-box policies for Pipeline and Runtime vulnerabilities will work without further setup.
Choose Pipeline or Runtime to see the scanning results.
Choose Reporting to configure schedules for creating downloadable reports on runtime vulnerability results.
To create or edit Pipeline or Runtime Vuln Policies and Rule Bundles, select the relevant links from the Policies tab in the navigation bar.
To accept the risk of detected vulnerabilities, configure an acceptance based on scope, justification, and length of time. See Understanding and Usage steps.
Understanding Accept Risk
As of November, 2022, users can choose to accept the risk of a detected vulnerability or asset. Accept Risk is available for both Runtime and Pipeline, and for specific CVEs or specified hosts or images.
Enablement Prerequisites
Accept Risk requires Sysdig Secure SaaS to be installed with the component versions listed below. Because Accept Risk is applied to both pipeline and runtime vuln results impartially, both components must meet the minimum required versions.
If the minimum enablement requirements are not met, the Accept Risk
button and panel will show in your interface, but will not activate. The created Acceptance will appear in Pending
status for 20 minutes, then disappear as if you had never created it.
Check Your Versions
Check sysdig-deploy Helm Chart: must be 1.5.0+
helm list -n <namespace>
(default namespace is sysdig-agent)
Example:
$ helm list -n sysdig-agent
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
sysdig-agent sysdig-agent 5 2022-11-11 17:57:54.109917081 +0100 CET deployed sysdig-deploy-1.5.0
Upgrade Helm Chart: Instructions here
Check CLI Scanner: must be 1.3.0+
./sysdig-cli-scanner --version
Upgrade CLI Scanner: Instructions here
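The two version checks can be combined into a small sketch that compares the installed versions against the Accept Risk minimums. The variable names below are illustrative; populate them from the helm and scanner output shown above.

```shell
#!/bin/sh
# Sketch: verify the Accept Risk minimum versions (Helm chart 1.5.0+,
# CLI scanner 1.3.0+). Uses sort -V for semantic version comparison.

version_ge() {
  # Succeeds when $1 >= $2.
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

chart_version=${CHART_VERSION:-1.5.0}     # e.g. parsed from `helm list -n sysdig-agent`
scanner_version=${SCANNER_VERSION:-1.3.0} # e.g. from `./sysdig-cli-scanner --version`

version_ge "$chart_version" 1.5.0 && echo "helm chart OK ($chart_version)" || echo "upgrade helm chart"
version_ge "$scanner_version" 1.3.0 && echo "cli scanner OK ($scanner_version)" || echo "upgrade cli scanner"
```

`sort -V` correctly orders multi-digit components (e.g. 1.10.0 after 1.9.0), which a plain string comparison would not.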
When to Use
When faced with a large number of reported vulnerabilities, organizations need to know which are the most relevant for their security posture. Sysdig already highlights critical vulns with a fix available, and vulns that occur in images actually in use.
An additional feature is the targeted ability to accept the risk of a vuln and not count it towards a policy violation, for example, when:
- An internal security team has analyzed the vuln and declared it a false positive
- The preconditions of the vuln don’t apply
- Deployment in production is required and it is reasonable to postpone the fix
- etc.
What Types of Risk
You can accept risk for different entities:
Accepting Risk in the context of vuln management applies an exception to the Vulnerability Policy. Adding an accept to a CVE doesn’t make the CVE disappear. It still shows in the list, but voids the policy violation associated with that CVE.
When accepting risks it is important to:
- Be careful with the accept scope or context; overly broad exceptions can create false negatives
- Sysdig offers several scoping options for the accepts created
- Remain aware of what is accepted so it doesn’t become a visibility gap
- The Sysdig UI presents clear indications of what is accepted and why
Usage
See:
Appendix: Supported Packages and Languages
Runtime
- Only Kubernetes Runtime for now, Hosts and Cloud infrastructure coming soon
- Supported container runtimes:
- Docker daemon
- ContainerD
- CRI-O
Installation Options
- Helm chart
- Plain daemonset
- Runtime scanner
- Runtime scanner + benchmark runner
CI/CD
- Docker Registry V2 - compatible
- Docker Daemon
- Podman
- Docker Archive (tar)
- OCI Archive
Supported Package Types
- Debian
- Alpine
- RHEL
- Ubuntu
- Java Maven
- Golang (built with go 1.13+)
- Pypi (Python)
- NPM (JS)
- Ruby Gems
- NuGet (.Net)
- Cargo (Rust)
- Composer (PHP)
Supported Container Image CPU Architectures
- linux/amd64
- linux/arm64
- (others coming soon)
3.1 - Pipeline
This doc applies only to the Vulnerability Management engine, released April 20, 2022. Make sure you are using the correct documentation: Which Scanning Engine to Use
Introduction
The sysdig-cli-scanner tool allows you to manually scan a container image, either locally or from a remote registry. You can also integrate the sysdig-cli-scanner as part of your CI/CD pipeline or automations to automatically scan any container image right after it is built and before pushing it to the registry.
Development / CI/CD / Pipeline / Shift-Left / …: all of these terms refer to scanning performed on container images that are not (yet) executed in a runtime workload. You can scan these images using the sysdig-cli-scanner
tool, and explore the results directly in the console or in the Sysdig UI.
Optionally, you can create additional pipeline scanning policies and rules.
The Pipeline section in Sysdig Secure will display the scan results for all images that are scanned using the sysdig-cli-scanner.
For Runtime workloads, see how they are automatically scanned by the Sysdig Runtime Scanner.
Running the CLI Scanner
The sysdig-cli-scanner is a binary you can download and execute locally on your computer or environment.
Scanning Images
- Download the latest version of sysdig-cli-scanner and, optionally, verify its sha256 checksum.
- Set the executable flag on the file:
chmod +x ./sysdig-cli-scanner
You only need to download the binary and set the executable flag once.
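As a sketch, the download-and-install steps might look like the following. The URL layout is an assumption based on Sysdig's public download host; verify it against the current docs before use, and treat VERSION/OS/ARCH as placeholders.

```shell
#!/bin/sh
# Sketch of the download-and-install steps. The URL layout is an
# ASSUMPTION (not confirmed by this doc); VERSION/OS/ARCH are placeholders.

VERSION=${VERSION:-1.3.0}
OS=${OS:-linux}
ARCH=${ARCH:-amd64}
URL="https://download.sysdig.com/scanning/bin/sysdig-cli-scanner/${VERSION}/${OS}/${ARCH}/sysdig-cli-scanner"
echo "download URL: ${URL}"

# Uncomment to actually fetch, verify, and install:
# curl -LO "${URL}"
# curl -LO "${URL}.sha256sum"          # checksum file name is an assumption
# sha256sum -c sysdig-cli-scanner.sha256sum
# chmod +x ./sysdig-cli-scanner
```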
You can scan images by running the sysdig-cli-scanner
command:
SECURE_API_TOKEN=<your-api-token> ./sysdig-cli-scanner --apiurl <sysdig-api-url> <image-name>
See Parameters for more detail.
Integrating in your CI/CD Pipelines
The sysdig-cli-scanner
can be included as a step in your CI/CD pipelines (i.e. Jenkins, Github actions or others) simply by running the sysdig-cli-scanner
command as part of your pipeline.
- Make sure that the sysdig-cli-scanner binary is available on the worker or runner where the pipeline is executing. If you are running an ephemeral environment in the pipeline, include the download and set-executable steps in your pipeline so the tool is fetched on every execution.
- Define a secret containing the API token and make it available in the pipeline (e.g. via a SECURE_API_TOKEN environment variable).
- Include a step in your pipeline to run the sysdig-cli-scanner after building the container image, providing the image name as a parameter. For example:
./sysdig-cli-scanner --apiurl <sysdig-api-url> ${IMAGE_NAME}
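The steps above can be wrapped in a small portable shell function. This is a hypothetical helper, not an official integration: the function name and the SCANNER/SYSDIG_API_URL variables are illustrative.

```shell
#!/bin/sh
# Hypothetical CI helper: run the scanner against a freshly built image and
# fail the pipeline stage when the scan fails. SCANNER and SYSDIG_API_URL
# are illustrative variable names, not official ones.

scan_image() {
  image=$1
  # Require the API token (typically injected as a CI secret).
  : "${SECURE_API_TOKEN:?SECURE_API_TOKEN must be set}"
  if "${SCANNER:-./sysdig-cli-scanner}" --apiurl "${SYSDIG_API_URL:-https://secure.sysdig.com}" "$image"; then
    echo "scan passed: $image"
  else
    echo "scan failed: $image" >&2
    return 1
  fi
}

# In a pipeline stage, after building the image:
# scan_image "${IMAGE_NAME}"
```

Because the scanner's exit code propagates through the function, any CI system that treats a non-zero step exit as failure will stop the pipeline on a failed policy evaluation.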
See some examples on how to use it on different CI/CD pipelines:
About CI/CD Policies
Policies allow you to define a set of rules that will evaluate each scan result. After the evaluation, each policy will pass or fail. A policy failure or non-compliance happens if the scan result doesn’t meet all the rules in a policy.
For CI/CD and manual image scans, you can tell the sysdig-cli-scanner tool to explicitly evaluate one or more policies using the --policy=policy1,policy2,... flag, providing a comma-separated list of policy IDs.
CI/CD policies can be configured as Always apply. If a policy has the Always apply flag, it will be evaluated on every scanned image even if you don’t specify it explicitly.
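As a sketch, the flag usage can be assembled like this. The policy IDs and image name are placeholders, not real identifiers:

```shell
#!/bin/sh
# Assemble a scan command with an explicit policy list. policy-a/policy-b
# and the image name are PLACEHOLDERS; "Always apply" policies are
# evaluated in addition to the ones listed here.
POLICIES="policy-a,policy-b"
IMAGE="myrepo/myimage:latest"
CMD="./sysdig-cli-scanner --apiurl https://secure.sysdig.com --policy=${POLICIES} ${IMAGE}"
echo "$CMD"
# Run it with the token in the environment:
# SECURE_API_TOKEN=<your-api-token> $CMD
```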
Learn more about Vulnerability Management policies, the available rules, and how to define policies in Vulnerability Policies.
Parameters
Basic usage of the sysdig-cli-scanner:
sysdig-cli-scanner [OPTIONS] <ImageName>
Required
| Option | Description |
|---|---|
| SECURE_API_TOKEN | Provide the API token as environment variable SECURE_API_TOKEN. You can retrieve this from Settings > User Profile in Sysdig Secure. |
| --apiurl=<endpoint> | Sysdig Secure endpoint. In SaaS, this value is region-dependent and is auto-completed on the Get Started page in the UI. |
| ImageName | The image that you want to scan. For example mongo-express:0.54.0. |
- The Sysdig CLI scanner will try to find a local image in Docker, ContainerD, or other container runtimes, or try to pull it from the remote registry.
- Once the scan is complete, you will see the results directly in the console, and they will be available in the Pipeline section of the UI.
Registry credentials
Registry credentials can be supplied via the following environment variables:
| Option | Description |
|---|---|
| REGISTRY_USER | Provide the registry username as environment variable REGISTRY_USER. |
| REGISTRY_PASSWORD | Provide the registry password as environment variable REGISTRY_PASSWORD. |
Example
$ REGISTRY_USER=<YOUR_REGISTRY_USERNAME> REGISTRY_PASSWORD=<YOUR_REGISTRY_PASSWORD> SECURE_API_TOKEN=<YOUR_API_TOKEN> ./sysdig-cli-scanner --apiurl https://secure.sysdig.com ${REPO_NAME}/${IMAGE_NAME}
Additional Parameters
Use the -h / --help flag to display a list of all available command line parameters:
Example
Usage:
sysdig-cli-scanner [OPTIONS] [ImageName]
Application Options:
-a, --apiurl= Secure API base URL
-t, --apitimeout= Secure API timeout (seconds) (default: 120)
--output-json= Output path of the scan result report in json format
-s, --skiptlsverify Skip TLS certificate verification (default: false)
-u, --skipupload Skip the scan results upload (default: false)
-d, --dbpath= Database full path. By default it uses main.db.gz from the same directory
--policy= Identifier of policy to apply
-p, --cachepath= Cache path
-c, --clearcache Clear the cache before to run (default: false)
-l, --loglevel= Log level (default: info)
-o, --logfile= File destination for logs, used if --console-log not passed
--console-log Force logs to console, --logfile will be ignored
--full-vulns-table Show the entire list of packages found
--detailed-policies-eval Show a detailed view of the policies evaluation
--no-cache config flag Disable the cache layer during the scan
--standalone config flag Disable communication towards the backend. This implies:
skip upload of the scan-result; offline-analyze; no
policies; no policy remediations; no risk-acceptances; no
download of the mainDB (local path for an existing one
needs to be provided with the dedicated parameter)
Help Options:
-h, --help Show this help message
Arguments:
ImageName: Image name
Image Sources
The Sysdig CLI scanner can load images from different sources. By default, it will try to automatically find the provided image name from all supported sources, in the order specified by the following list. However, you can explicitly select the image source by using the corresponding prefix for the image name:
file:// - Load the image from a .tar file
docker:// - Load the image from the Docker daemon (honoring the DOCKER_HOST environment variable or other Docker configuration files)
podman:// - Load the image from the Podman daemon
pull:// - Force pulling the image from a remote repository (ignoring local images with the same name)
containerd:// - Load the image from the Containerd daemon
crio:// - Load the image from the Containers Storage location
For example, to pull the image from the remote registry even if it is locally available:
./sysdig-cli-scanner -a https://secure.sysdig.com pull://nginx:latest
Sample Result in Terminal
It is possible to view scan results in the terminal window (see below):
$ SECURE_API_TOKEN=<YOUR_API_TOKEN> ./sysdig-cli-scanner --apiurl https://secure.sysdig.com redis
Type: dockerImage
ImageID: sha256:7614ae9453d1d87e740a2056257a6de7135c84037c367e1fffa92ae922784631
Digest: redis@sha256:db485f2e245b5b3329fdc7eff4eb00f913e09d8feb9ca720788059fdc2ed8339
BaseOS: debian 11.2
PullString: pull://redis
66 vulnerabilities found
8 Critical (0 fixable)
2 High (0 fixable)
4 Medium (0 fixable)
5 Low (0 fixable)
47 Negligible (0 fixable)
POLICIES EVALUATION
Policy: Sysdig Best Practices FAILED (9 failures)
You can use the --full-vulns-table or --detailed-policies-eval flags to include further details in the output.
For a more user-friendly scan result, find the image in the UI.
JSON Output
You can use the --output-json=/path/to/file.json flag to write a JSON report of the scan result.
Scan Logs (for troubleshooting)
The sysdig-cli-scanner automatically writes a log file on every execution. You can change the output path using the -o or --logfile flags. For troubleshooting purposes, you can change the log level by setting --loglevel=debug. This will increase the verbosity of the log messages to the debug level.
Review Pipeline Scans in the UI
In the Sysdig Secure UI, you can explore the details for every image that has been scanned by executing the sysdig-cli-scanner.
Navigate to Vulnerabilities > Pipeline.

Filter the list by Pass | Fail if desired.
- The Policy Evaluation column reflects the policy state at evaluation time for that image and the assigned policies
- Failed: If any of the policies used to evaluate the image is failing, the image is considered “Failed”
- Passed: If there is no violation of any of the rules contained in any of the policies, the image is considered “Passed”
From here you can drill down to the scan result details.
Drill into Scan Result Details
Select a result from the Pipeline list to see the details, parsed in different ways depending on your needs.
Overview Tab
Focuses on the package view and filters for those that are fixable. Clickable cells lead into the Vulnerabilities list (next).

Vulnerabilities Tab
Expanded filters and clickable list of CVEs that open the full CVE details, including source data and fix information.
The same security finding (e.g. a particular vulnerability) can be present in more than one rule violation table if it happens to violate several rules.

Content Tab
Also organized by package view, with expanded filters and clickable CVE cells.

Policies Tab
Shows CVEs organized by the policy+rule that failed. Use the toggle to show or hide policies+rules that passed. Click CVE names for the details.

Filter and Sort Results
Within the Pipeline results tabs, there are ways to further refine your view:

- Search by keyword or CVE name
- Use filters: Severity (>=); CVSS Score (>=); Vuln Type; Has Fix; Exploitable.
Accept Risk: Pipeline
As of November, 2022, users can choose to accept the risk of a detected vulnerability or asset. The process for handling Accepted Risk is the same for Pipeline as for Runtime.
Use the Runtime instructions, with the following difference:
Accept Validity - Pipeline
The pipeline scan results are point-in-time, so there is no automatic re-evaluation.
To trigger a new evaluation containing the accept:
- You must execute the pipeline process again over the same image
- The N+1 scan will contain the accept
3.2 - Runtime
Introduction
Sysdig Secure will automatically analyze and scan the container image for the workloads in your clusters, providing a list of vulnerabilities, policy evaluations, and the “In Use” spotlight, helping you focus on fixing the active, critical and exploitable vulnerabilities.
As of December, 2022, hosts can be scanned for vulnerabilities as well as containers. See Host Scanning for details.
Why Runtime Scanning?
Although shifting vulnerability management to the earliest phases (such as integrating with CI/CD) is essential, runtime vulnerability management remains important:
- Strong defense: Runtime VM provides an additional layer of defense to your arsenal
- Up-to-date: New vulnerabilities are discovered every day; new discoveries need to be checked against your running images
- Prioritized feedback: The In Use spotlight allows you to hone in on the most important vulnerabilities discovered within your running images so you can efficiently prioritize and act.
Sysdig’s runtime scanner will:
- Automatically observe and report on all the Runtime workloads, keeping a close-to-real time view of images and workloads executing on the different Kubernetes scopes of your infrastructure.
- Perform periodic re-scans, guaranteeing that the vulnerabilities associated with the Runtime workloads and images are up-to-date with the latest vulnerabilities feed databases. It will automatically match a newly reported vulnerability to your runtime workloads without requiring any additional user interaction.
Understanding the Runtime Workload and Labels
Runtime entities are associated using the concept of workload, defined by:
These workload labels are, in order: cluster > namespace > type > container.
- Kubernetes cluster name: demo-kube-eks in the example above
- Kubernetes namespace name: example-voting-app above
- Kubernetes workload type: deployment (or daemonset, etc.)
- Kubernetes container name: sysdiglabs/example-voting-app-result:metrics-3 above
This means:
- Several replicas of the same deployment are considered the same workload (single entry on the table), as the images are identical and the runtime context is the same.
- An identical image deployed on two different Kubernetes clusters will be considered two different workloads, as the runtime context is different.
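If you have kubectl access to a cluster, the four identifying labels can be read directly. A minimal sketch reusing the example names above (the deployment name result is an assumption, for illustration only):

```shell
# Sketch: gather the four labels that identify a runtime workload
kubectl config current-context                              # cluster name, e.g. demo-kube-eks
kubectl get deployments -n example-voting-app               # namespace and workload type
kubectl get deployment result -n example-voting-app \
  -o jsonpath='{.spec.template.spec.containers[*].name}'    # container name(s)
```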
About Runtime Policies
Policies allow you to define a set of rules that will evaluate each workload. After the evaluation, each policy will pass or fail. A policy failure or non-compliance happens if the scan result doesn’t meet all the rules in a policy.
Runtime policies contain a runtime scope filter, so each policy applies only to workloads in that scope, or to the Entire infrastructure, which applies globally.
NOTE: If you have enabled host scanning, then you can assign runtime policies to container image workloads, hosts, or the entire infrastructure.
Learn more about Vulnerability Management policies, the available rules, and how to define policies in Vulnerability Policies
Review Runtime Scan Results Overview
Navigate to Vulnerabilities > Runtime
.

By default, the entire infrastructure results are shown.
Results are ranked by:
- Number of actual exploits
- Severity of vulnerabilities
- Number of vulnerabilities
From here you can drill down to find and remediate the priority issues discovered.
Understanding the In Use Column
The In Use column depends upon Image Profiling and is currently in Controlled Availability status.
- To enable In Use in your account, please contact your Sysdig representative.
- You will also need to set a parameter in the Node Analyzer of the Sysdig Agent and enable Image Profiling. See Profiling | Enable for Risk Spotlight and In Use.
Data in the In Use column will appear approximately 12 hours after the feature has been deployed.
The In Use designation allows you to focus first on the packages containing vulnerabilities that are actually being executed at runtime. If an image has 180 packages and 160 have vulnerabilities, but only 45 are used at runtime, then much of the vuln notification noise can be reduced.
Click on an image entry to see the In Use panel and drill down, clicking on the vulnerabilities for details and examining the link to any known exploits that exist.

Drill into Scan Result Details
Select a workload from the Runtime results list.
Overview Tab
Focuses on the package view and top-priority running images (In Use).

Clickable cells lead into the Vulnerabilities list (next).
Vulnerabilities Tab
Provides expanded filters and clickable list of CVEs that open the full CVE details, including source data and fix information.

Content Tab
Also organized by package view, with expanded filters and clickable CVE cells.

Policies Tab
Shows CVEs organized by the policy+rule that failed. Use the toggle to show or hide policies+rules that passed. Click CVE names for the details.

Filter and Sort Results
Filter by workload labels and optionally save constructed filters as Favorite or Default from the kebab (3-dot) menu on the filter bar.
Hover over the workload labels and click = or != to add them to the filter bar, refining by cluster, namespace, type, etc.

Filter by evaluation: Pass / Fail / No Policy

Click In Use to list first the results that have been evaluated for risk.
Use further-refined filters within the image detail tabs, e.g. CVE Name; Severity (>=); CVSS Score (>=); Has Fix; Exploitable.

Accept Risk: Runtime
As of November, 2022, users can choose to accept the risk of a detected vulnerability or asset.
Review Understanding Accept Risk and the Enablement Prerequisites if needed.
If the minimum enablement requirements are not met, the Accept Risk button and panel will show in your interface, but will not activate. A created Acceptance would appear in Pending status for 20 minutes, then disappear as if you had never created it.
The process for Accept Risk is the same for Runtime and for Pipeline.
For a Failed CVE
Navigate to Vulnerabilities > Runtime.
Either:
Select a failed asset from the list and choose the Vulnerabilities panel, then hover over the far-right column to see the Accept Risk button.

Select a failed asset from the list and choose the Policies panel, then hover over the far-right column to see the Accept Risk button.

Click Accept Risk and continue to Complete the Configuration.
For a Failed Host or Image
You can accept risk for an entire host or image, based on the image name or host name.
Note:
- In this case, you are not accepting the vulnerabilities within, just the asset as a whole
- The ImageID or digests are not taken into account
Navigate to Vulnerabilities > Runtime and select a failed asset.
Choose the Policies panel and select the Accept image as a risk button.

Continue to Complete the Configuration.
Complete the Configuration
Select Accept Risk or Accept image as risk.

Enter the configuration details:
Reason: Risk Owned, Transferred, Avoided, Mitigated, Not Relevant, or Custom.
Add details in the free-text box if needed.
Context: Defines the scope, i.e., the cases to which the Accept will apply.
- Global: Every time this vulnerability is found, regardless of the asset or the package and also regardless of the phase (Pipeline, Runtime), this vulnerability will always be accepted.
- Package: You are accepting only the combination of this CVE and a particular package. There are two sub-options:
  - Package name AND package version (default). For example: rpm 4.14.4
  - Package name, any package version. For example: rpm (any version)
- This image: Select the particular image or host name from the drop-down.
Note that the context can affect multiple assets with a single configuration. For example, accepting one CVE globally would affect the policy evaluation of all the different images in which that vuln is found.
Expires In: 7/30/60/90 days, a Custom time frame, or Permanent
- Accepts should be exceptional choices; normally they should not mask a security risk forever
- When the Accept expires, the vulnerability (or asset) reappears in the violations count, potentially causing an evaluation to fail again.
Click Accept
.
A green acknowledgement message is displayed, and a greyed-out Shield icon shows the Accept is in Pending status.

Manage Accepts
Accept Validity
The creation, editing, or revocation of an Accept does not take effect immediately. The change remains in Pending status, with the grey shield icon, until the next runtime scan, which is automatically triggered within 20 minutes.
No additional changes can be made to the Accept configuration while it is pending.
Note: This differs in Pipeline; see Accept Validity - Pipeline.
Limits
There is a limit of 1000 Accept Risk items per customer account:
- This is the number of configurations created, not the number of impacted assets/CVEs
- For example, a global CVE Accept impacting 30 images counts as 1 Accept Risk item
- Both CVE accepts and Asset accepts count towards that total
Review Accepts
When no longer pending, the Accept Risk shield is not greyed out and appears in the list of assets. You can also filter by Accept Risk to see all assets where an Accept has been applied.
Click into the asset to see more, and hover over the shield icon to see all the Accept Risk configuration details.

Accepted Risk on a CVE will be shown in the:
- List of CVEs in the “Vulnerabilities” panel
- List of Policy violations under the “Policies” panel
- Policy evaluation card, showing the number of overridden violations
Passing Evaluations
A policy will pass if:
- All the rules inside the policy pass, or
- All the violations to a policy have been voided by a matching accept
A host or image will pass if:
- All the policies attached to the asset pass, or
- The asset itself is accepted

In this example, the policies are failing but the asset has been accepted, indicated by the shield icon beside the [PASSED] global evaluation.
Edit an Accept
To edit an existing risk, click on the pencil icon in the Accept details.

You can edit the:
- Reason
- Description
- Expiration
To change the affected resource or the context, you must create a new Accept configuration, and delete the old one if it is no longer valid.
3.2.1 - Host Scanning
A “host” is any runtime entity where you could execute the Sysdig agent, including virtual machines, Kubernetes nodes, bare metal, cloud-managed hosts such as EC2, etc.
Scanning for vulnerabilities on hosts is as important as scanning on containers, and certain standards such as NIST 800-190 require vulnerability reports on running hosts to pass compliance. Sysdig’s host scanning feature provides a unified flow with image scanning, for a smooth user experience.
Note: Having the agent installed on the hosts is not required, but is recommended. Metadata autocomplete in the filters and searches depends on the Sysdig agent.
Enable Host Scanning
Installation methods include Helm (recommended), Docker container, or non-containerized binaries.
Supported OSes and Host Types
- Ubuntu 22.04
- Ubuntu 20.04
- Debian 11
- Debian 10
- Red Hat Enterprise Linux 9
- Red Hat Enterprise Linux 8
- Red Hat Enterprise Linux 7
- Red Hat Enterprise Linux CoreOS
- Amazon Linux 2
- Flatcar Container Linux
- Alibaba Cloud Linux (a.k.a. Aliyun Linux)
- Google Container-Optimized OS (COS), build 89+
Currently Supported CPU Architectures
- AMD64 (x86_64)
- ARM (arm64)
Current Feature Limitations
- No Risk Spotlight/In Use integration
How Long until Host Scan Results Appear in the UI?
After installation:
- If the default parameter nodeAnalyzer.nodeAnalyzer.hostScanner.scanOnStart=true is set, then a scan will start just after the pod is ready. You can expect the results in a few minutes, ~15 min max.
- If this parameter is not set, results will be shown ~11 hours from install.
- In all cases, scans are refreshed every 12 hours.
- Helm chart and Docker container installations behave the same.
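As a sketch, the scanOnStart parameter can be set through Helm on an existing deployment (the release name, namespace, and repository alias below are assumptions; adjust them to your installation):

```shell
# Sketch: enable scan-on-start on an existing sysdig-deploy release
# (release name, namespace, and repo alias are assumptions)
helm upgrade --reuse-values sysdig-agent sysdig/sysdig-deploy \
  -n sysdig-agent \
  --set nodeAnalyzer.nodeAnalyzer.hostScanner.scanOnStart=true
```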
Helm Install
If you have Kubernetes, the Helm install is the preferred method.
Prerequisites
Host scanning requires Sysdig Secure SaaS to be installed with:
- sysdig-deploy Helm chart version 1.5.0+
- HostScanner container version 0.3.0+ (0.3.1+ for Google COS); included by default in Helm chart version 1.5.0+, unless the user pins or modifies the defaults
Host scanning is installed out of the box by default with the Helm chart; you can opt out if desired.
Check Your Versions
Check your sysdig-deploy Helm chart version (the default namespace is sysdig-agent):
helm list -n <namespace>
Example:
$ helm list -n sysdig-agent
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
sysdig-agent sysdig-agent 5 2022-11-11 17:57:54.109917081 +0100 CET deployed sysdig-deploy-1.5.0
For Helm chart upgrade instructions, see here.
Opting Out
If for some reason you don’t want to use host scanning, you can opt-out using the Helm chart flag:
--set nodeAnalyzer.nodeAnalyzer.hostScanner.deploy=false
Docker Container Install
If you have non-Kubernetes hosts but still want to use containers, you can deploy Host scanning without Helm as follows:
docker run --detach \
  -e HOST_FS_MOUNT_PATH=/host \
  -e SYSDIG_ACCESS_KEY=<access-key> \
  -e SYSDIG_API_URL=<sysdig-secure-endpoint> \
  -e SCAN_ON_START=true \
  -v /:/host:ro \
  --uts=host \
  --net=host \
  quay.io/sysdig/vuln-host-scanner:$(curl -L -s https://download.sysdig.com/scanning/sysdig-host-scanner/latest_version.txt)
Non-Containerized Install
The Helm chart is the recommended installation method, but if you want to scan a host without using containers at all, we also offer a standalone binary and an RPM package.
The configuration is passed via environment variables, specifically:
- SYSDIG_ACCESS_KEY=<your-access-key>: retrieve your access key
- SYSDIG_API_URL=https://<sysdig-url>: check your Sysdig Secure endpoint by region
RPM
Compatible with any host that supports the RPM package format, such as RHEL.
# Configure the repo
$ rpm --import https://download.sysdig.com/DRAIOS-GPG-KEY.public
$ curl -s -o /etc/yum.repos.d/draios.repo http://download.sysdig.com/stable/rpm/draios.repo
# Update index
$ yum update
# Install the package
$ yum install vuln-host-scanner
# Create your configuration file (SCAN_ON_START is optional)
$ cat > /opt/draios/etc/vuln-host-scanner/env <<EOF
SYSDIG_ACCESS_KEY=<access-key>
SYSDIG_API_URL=<api-url>
SCAN_ON_START=true
EOF
# Enable the service
$ systemctl enable vuln-host-scanner.service
# Start the service
$ systemctl start vuln-host-scanner.service
# Check logs
$ journalctl -u vuln-host-scanner.service
Raw Binary
# Download the binary (use ARCH=arm64 for ARM architectures)
$ ARCH=amd64; curl -s https://download.sysdig.com/scanning/bin/sysdig-host-scanner/$(curl -L -s https://download.sysdig.com/scanning/sysdig-host-scanner/latest_version.txt)/linux/$ARCH/sysdig-host-scanner > sysdig-host-scanner
# Give exec permission
$ chmod +x sysdig-host-scanner
# Run it
$ SYSDIG_ACCESS_KEY=<access-key> SYSDIG_API_URL=<api-url> ./sysdig-host-scanner
Kubernetes Metadata:
If your node is part of an existing Kubernetes installation and you’re not using the official Helm chart, you are responsible for setting the node name and cluster name via:
- K8S_CLUSTER_NAME
- K8S_NODE_NAME
Other environment variables for the Host Scanner are listed in the chart.
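Putting this together, a sketch of a non-Helm run on a Kubernetes node (all values are placeholders):

```shell
# Sketch: run the standalone host scanner on a Kubernetes node without the Helm chart
# (all values are placeholders)
SYSDIG_ACCESS_KEY=<access-key> \
SYSDIG_API_URL=<api-url> \
K8S_CLUSTER_NAME=<cluster-name> \
K8S_NODE_NAME=$(hostname) \
./sysdig-host-scanner
```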
Usage
Once you have deployed the host scanner in your environment, the Runtime UI will integrate the findings alongside the runtime workload results, based on an out-of-the-box Vulnerability policy.
Filter for Hosts
You can filter to find results of host scanning using the quick links in the banner at the top of the page, and/or the filter bar.

Hosts can be filtered using:
- Kubernetes cluster name
- Cloud account ID
- Cloud account region
- Host name
- Agent tags
See also, Vulnerability Policies|Runtime.
Download Reports
You can schedule and download reports for scanning done on hosts as well as containers.
See Vulnerabilities|Reporting.
3.3 - Reporting
This doc applies only to the Vulnerability Management engine, released April 20, 2022. Make sure you are using the correct documentation: Which Scanning Engine to Use
Introduction
Use the Vulnerability Reporting interface to schedule asynchronous reports about detected runtime vulnerabilities along with package and image data. You can schedule reports for runtime (container) scanning and/or host scanning.
Here you can:
- Create a report definition
- Schedule its frequency
- Define notification channel(s) in which to receive the reports (email, Slack, or webhook)
- Preview how the data will appear (optional)
- Download the resulting reports in .csv, .json, or .ndjson format
- Optionally, generate a manual (unscheduled) report
NOTE: Regardless of the schedule, reports always include the data from the past 24 hours. Therefore, most users schedule a daily report to avoid having any gaps.
Past reports are stored for two weeks. Therefore, if you scheduled a weekly report, the list would only contain two records.
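Because .ndjson stores one JSON object per line, downloaded reports in that format can be filtered line by line with standard tools. A minimal sketch using a stand-in two-line report (the vuln and severity field names are hypothetical, for illustration only; inspect a real report for the actual fields):

```shell
# Create a stand-in .ndjson report; the field names are hypothetical
printf '%s\n' \
  '{"vuln":"CVE-2024-0001","severity":"High"}' \
  '{"vuln":"CVE-2024-0002","severity":"Low"}' > /tmp/report.ndjson
# Keep only the High-severity lines
grep '"severity":"High"' /tmp/report.ndjson
# → {"vuln":"CVE-2024-0001","severity":"High"}
```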
Create a Report Definition
For Runtime Workloads
Access: Log in to Sysdig Secure with Advanced User or higher permissions, and select Vulnerabilities > Reporting.
The Vulnerabilities Reporting list page is displayed. If you have previously created report definitions, you can click one to see the details.

Create: Click Add Report. The New Report page is displayed.

Basic Info: Define the report basic info:
- Name
- Description
- Export file format: .csv, .json, or .ndjson
Select Definitions:
Conditions: (Optional) Add Conditions from the drop-down if you want to filter the items reported on.
The available conditions include:
- Image Name * (only for this Entity)
- OS Name
- In Use * (only for this Entity)
- Package Name
- Package Path
- Package Type
- Package Version
- Vulnerability ID
- CVSS Score
- CVSS Vector
- Vuln Publish Date
- Exploitable
- Fix Available
- Risk Accepted
- Severity
- Vuln Fix Date
Example 1: You want a report of all vulnerabilities with a Severity >= High, and for which a Fix is Available.

Example 2: You want a report of all vulnerabilities that are In Use with Accepted Risks.

Schedule: Define the Schedule (frequency and time of day) that the report should be run.
Note: The schedule determines when the report data collection begins. As soon as evaluation is complete, you will receive a notification in the configured notification channels.

Notification Channel: If you have configured them, email, Slack, or webhook notification channels will appear in the drop-down. Since reports are typically large, the actual data is not sent to the notification channel; you receive a link to download it. You must be a valid Sysdig Secure user (Advanced User or higher) to access the link.
Data Preview: Click Refresh to apply the configuration you’ve chosen, and pull up on the center bar of the Data Preview panel to see sample results.
Click Save.
For Runtime Hosts
All of the steps are the same as for Runtime Workload reports, except:
Basic Info: Select Runtime Hosts as the entity.
Conditions: (Optional) Add Conditions from the drop-down if you want to filter the items reported on.
The available Conditions include:
- Architecture * (only for this entity)
- OS Name
- Package Name
- Package Path
- Package Type
- Package Version
- CVSS Score
- CVSS Vector
- Vuln Publish Date
- Exploitable
- Fix Available
- Risk Accepted
- Severity
- Vuln Fix Date
Manage Reports
View and Edit Report Definition
Select an entry in the Reporting list to see the detail panel.

Click Edit to change the report definition parameters. You can also access this panel from the kebab (3-dot) menu.

Make your edits, click Refresh to see the Data Preview, and click Save.
Download Reports
From the Reporting list, the latest report download link appears in the Download column.

To see older reports, select an entry in the Reports list to open the detail panel and select from the report download list.

The report will be downloaded in the format you defined; the file is zipped (.gz) – double-click to unzip and view.
Generate Report Manually
- Select an entry in the Reporting list to see the detail panel.
- Click Generate Now. A Scheduled entry will appear. Within 15 minutes or so it will change to Completed, and you can download the manually generated report.
Duplicate a Report Definition
- Choose the kebab (3-dot) menu for a scheduled report.
- Click Duplicate.
Report Definition Retention
The scheduled and manually created reports are retained for 14 days.
Delete a Report Definition
Be sure to download any needed reports before deleting the definition.
Choose the kebab (3-dot) menu for a scheduled report.
Click Delete, then click Yes when prompted.
The report definition and all associated reports are deleted.
4 - Posture
Sysdig’s Posture module includes Compliance handling for both Kubernetes and Cloud accounts (KSPM/CSPM), as well as Identity and Access for cloud accounts.

Note that users may have a legacy version of Compliance installed.
4.1 - Compliance
Introduction
Sysdig’s Compliance feature continues to evolve and the new Compliance module represents the next phase of maturity, as well as the first to support CSPM/KSPM. The Compliance module now relies on persisting the resources in an inventory; this enhanced resource visibility and full-context prioritization drives remediation and resolution of violations.
- Compliance IBM Cloud and On-Prem Users, please see Legacy Compliance
- Compliance is not available for Managed Falco (secure light).
What’s New with Compliance
Compliance that is Actionable
- The new CSPM Compliance lets you manage your risks:
- Remediate
- Accept the risk
- Open a Pull Request in your code repository - if Git IaC integration is enabled
- Coming Soon: open a JIRA ticket for remediation
A Stream of Violations
The resources of your Zones are evaluated against compliance policies; the violations are collected into tiles in an ongoing stream and shown on the Compliance page.
The new approach relies on the common process of fetching the resources once per day into the backend and performing the relevant analysis of policies.
You can create custom policies or use Sysdig out-of-the-box policies.
- Click into the resource itself, rather than a list of violations
- A variety of terminology changes
- Downloadable reports and the APIs to support reporting
For Legacy Compliance Users:
Note that Compliance and (legacy) Unified Compliance can be run in parallel. When the benchmarks have reached End of Life (EOL), data collection will continue only in Compliance, and the Legacy Reports will remain available in the interface for one year from their creation date.
There is no plan to transfer data between compliance versions.
See also: Appendix B
Typical Use Cases
Compliance/Security Team Members
Will want to:
- Check current compliance status of their business zones against predefined policies
- Demonstrate to an auditor the compliance status of their business zone at a specific point in time (the audit)
- Create a report of the compliance status of their business zone, share it with their auditors and the management team
- Understand the magnitude of the compliance gap
DevOps Team Members
Will want to:
- Identify the compliance violations of a predefined policy applied on their business zones
- Manage the violations according to their severity
- Easily fix the violation
- Document exceptions and accept risk according to the risk management policy of their organization
Below is a quick overview of how users work through the Compliance screens to detect prioritized vulnerabilities, analyze them, and remediate.
Compliance Page: Get high-level posture performance indicators (PPIs) on each of the policies applied to your zones:

Your zones are ordered alphabetically. The results under each zone are ordered by the passing score, lowest first, to help you focus on improving your scores.
Select a Policy to see its Results and select a failing requirement to see the Controls and failing resources that comprise it.
Start Remediation to open the Remediation panel.
Begin remediation (where possible).
The remediation flow allows you to understand exactly what the issue is, to review the suggested patch that Sysdig created specifically for the problem, and choose how to apply the patch (manually or in your Git repository).
- Manually, you can copy the patch code and apply it in production.
- To remediate in the code repository, you can choose the relevant Git source and Compliance will create a pull request integrating the patch (as well as checking for code formatting cleanup). You can review all the changes in the PR before you merge.
You can Accept the Risk (temporarily or forever) and remove the violation from the failed controls.
Download a report as a CSV/spreadsheet of your compliance results for development teams, executives or auditors (optional).
The rest of the page describes the screens and actions in detail.
Installation
To connect new cloud accounts or Kubernetes clusters to Sysdig Secure, please follow our installation guide, adding the flag --set global.kspm.deploy=true to the Helm install.
For existing connected data sources, please follow the migration guide.
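A sketch of the Helm install with KSPM enabled (the release name, namespace, repository alias, and value keys other than global.kspm.deploy are assumptions based on a standard agent install; follow the installation guide for the authoritative command):

```shell
# Sketch: install the agent with KSPM collection enabled (values are placeholders)
helm install sysdig-agent sysdig/sysdig-deploy \
  --namespace sysdig-agent --create-namespace \
  --set global.sysdig.accessKey=<access-key> \
  --set global.clusterConfig.name=<cluster-name> \
  --set global.kspm.deploy=true
```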
Usage
CSPM Zones Management
On the Compliance landing page, a default Entire Infrastructure zone is automatically created. CIS policies and the Sysdig Kubernetes policy are automatically added to the Entire Infrastructure zone.
To see results from any of the other 40+ out-of-the-box policies provided with the Compliance module, or for any custom policies, you must apply them to a zone.
Navigate Compliance Overview
Select Posture > Compliance.
Review the compliance posture for each of your zones. Each row or tile shows your compliance results of a policy that is applied on your zone.

The zones are ordered alphabetically. The default Entire Infrastructure zone was created by Sysdig and some policies are applied to it. You can customize the Entire Infrastructure zone, along with your own zones, in the Zones management page.
The rest of the tiles are ordered alphabetically until custom filters are applied.
Zones / Policies:
This is the lens to evaluate your compliance results through your zones and the policies you applied to them.
Passing Score:
The number of requirements passing for this policy view, expressed as a percent. The higher the better.
Requirements Failing:
The number of requirements remaining to fix to get to 100% for a view, listed as a bar chart of the past 7 days’ results. The smaller the number, the better. Requirements are made up of one or more controls, so requirements will be the smaller number.
Controls to Fix:
The number of controls to fix to achieve a perfect score. The smaller the better. (Multiple controls make up a single requirement, so control count will be larger than requirement count).
Resources Passing:
The percent of resources passing (or accepted) out of all resources evaluated. Resources are the most granular of your results. The higher the percentage, the fewer individual resources failing, the better.
Violations by Severity:
Every control has a Severity (high/medium/low). Resource Violations are the number of resources failing, organized by severity. One resource can be counted multiple times if it is failing multiple controls. The lower, the better.
Select a tile to drill into the results of a particular policy.
Review and Filter Results
From the Compliance page, select a particular tile to see the Results page.

The failed requirements are sorted by severity and importance.
You can edit the filters to focus on the compliance results that are relevant for you. The Compliance results page presents the policy requirements for the selected zones and policies, and the controls under each requirement.
Drill Down to the Control Pane
From the Results page, open a requirement to see the individual failing controls. Click a control to open the Control pane on the right and review the resources that were evaluated by the control.

Here you can see:
- A description of the control
- An overview of all resources that have passed, failed, or had their risks temporarily accepted
Filters in the Control Pane
The Control pane shows the top 50 results. Use filters to find additional resources.
You can construct filter expressions in the Control pane on all resource fields:

For each of the following Control Types, you can refine your search in the mini-inventory using the associated attributes:
Kubernetes Identity
- Cluster
- Name
- Namespace
- Type (= Resource Type in Inventory) - ex: Group, ServiceAccount, User
Kubernetes Resource
- Cluster
- Labels
- Name
- Namespace
- Type (= Resource Type in Inventory) - ex: Deployment, Daemonset, StatefulSet, ReplicaSet, Pod, Job, CronJob
Host (K8s, Linux, Docker)
- Cluster
- Name
- OS (= Operating System in Inventory)
- OS Image (doesn’t exist in Inventory)
Managed Cloud Identity & Resource (AWS, GCP, Azure)
- Account
- Location (= Region in Inventory)
- Name
- Organization
Select a failing resource to review its remediation guidelines and take action towards its remediation.
The remediation solutions are under continued development in the product.
Some remediation flows are manual, while others offer different degrees of automation.
Sysdig can present a patch to be manually applied to production, or it can fix the resource by creating a Pull Request with the required changes directly in the Git repository that has been previously configured as an IaC integration.
At this time, risk response actions are for a single resource for a single violation. Several types of risk responses are supported:
- Manual Remediation: Playbook text to remediate the violation is presented
- Automatically generate a patch (with or without user input): Patch code is presented with an input field if new values are required, and the user downloads the patch and copy/pastes the patch application code.
- Set up an Automatic Pull Request (with or without user input): Patch code is presented, with an input field if new values are required, and the user opens a PR.
- Accept this Risk
Source Detection
When applying remediation to a resource, Sysdig tries to identify the matching source file from your configured Git integrations. If there are multiple candidates or in case it is not possible to find the matching source file, you can use the search field to manually explore and select the relevant file from the connected Git repositories.
Patching and Pull Requests (PRs)
When using Pull Request for remediation, Sysdig will create a branch directly in your Git repository, patching the offending resource with corrective changes. You can review all the suggested changes in the PR before you merge it.
Select a Resource to open the Remediation pane on the right. This pane will differ depending on the specific control and resource evaluation.
If remediation via patch is possible, and Git integration has been set up, then the full remediation pane will be displayed. If there is more than a single possible matching file for the resource, all the candidates are displayed as “Suggested Sources”. If no candidates are displayed or you want to choose a different file, you can click the “Search Source” button to manually select from the list of possible files in the connected Git repositories.

Review Issues
Here you see the impact of the remediation, review the resource attributes, and, if relevant, enter a necessary Value that will be incorporated into the patch code.
If a required value can be autodetected, it will be auto-inserted and the Value
input field will be read-only.

Check the Patch
The Patch code will be presented for review when there is a patch that can be applied manually or used in a Pull Request to remediate the IaC file. In most cases, it is recommended to download the code in the Continue Remediation section, but you can also copy/paste it.
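To build intuition for what applying such a patch does, here is a simplified, hypothetical sketch of how patch values overlay onto a manifest. Real strategic-merge patches have richer list-merge semantics, and the manifest structure below is invented for illustration; this is not Sysdig's implementation.

```python
# Simplified sketch: overlaying a remediation patch onto an IaC manifest.
# Only nested maps are handled; field names and structure are hypothetical.

def merge_patch(manifest: dict, patch: dict) -> dict:
    """Recursively overlay patch values onto a copy of the manifest."""
    result = dict(manifest)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_patch(result[key], value)
        else:
            result[key] = value
    return result

manifest = {
    "spec": {
        "containers": {
            "app": {"image": "nginx:1.25", "securityContext": {"privileged": True}}
        }
    }
}
# A patch that fixes a hypothetical "privileged container" control violation.
patch = {"spec": {"containers": {"app": {"securityContext": {"privileged": False}}}}}

patched = merge_patch(manifest, patch)
print(patched["spec"]["containers"]["app"]["securityContext"]["privileged"])  # False
```

Note that untouched fields (such as the image tag above) survive the merge, which is why you can review the resulting diff in a PR with confidence that only the offending attributes change.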

If you have not integrated your Git repository with Sysdig’s IaC Scanning, or if creating a pull request is not required for a particular resource failure, you can perform remediation manually.
Use the button to download the patch, then apply it with the provided code.

After configuring IaC Scanning in your account, Sysdig will scan and analyze the manifests and modules from your defined Git sources, and scrape resources declared in your source files. The scan process runs daily or whenever a new Git source is added.
Sysdig tries to match and identify the resources discovered from the Git IaC Scanning with the deployed and evaluated resources. The best matches are listed under “Suggested Sources” in the Remediation pane when setting up a Pull Request.
You can also search manually for sources by their full URL path.

Use the button to Open a Pull Request
.
Workflow Name Selector for Helm/Kustomize:
What it is: When you select a source of type Helm/Kustomize, you can type a selector for the workload name.
Why: In Helm, in most cases, workload names are derived from the release name, which means that they change with every new release. The selector is a regular expression that matches workloads by prefix/suffix (or a more complex pattern). With that selector in place, the remediation can be used for the workloads generated from the same chart, regardless of the release.
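As a sketch of the idea, the selector below matches workloads from the same chart across releases; the chart, release, and workload names are made up for illustration.

```python
import re

# Hypothetical example: Helm prepends the release name to the workload name,
# so a suffix-based regular expression matches the same chart's workloads
# regardless of release.
selector = re.compile(r"^.*-my-chart-web$")

workloads = [
    "release-42-my-chart-web",
    "release-43-my-chart-web",
    "release-43-my-chart-db",
]
matched = [w for w in workloads if selector.match(w)]
print(matched)  # both "web" workloads, from different releases
```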

Note that Sysdig will create a new Pull Request in your repository with the suggested fixes, and depending on your Git source configuration, Sysdig can run a Pull Request Policy Evaluation that might report other unfixed control violations.
Option: Accept Risk
In some cases, a failing control can be safely ignored for a period of time so the resource will pass and the compliance score will improve. To do so:
Click the Accept Risk
button on the remediation pane.
Fill out the required fields, to comply with audit best practices, and click Save
.
Reason: Risk Owned, Transferred, Avoided, Mitigated, Not Relevant, or Custom
Details: Explain to an auditor the reason for accepting the risk, or select the risk management action taken
Expires In: Select when you want this acceptance to expire and the resource to fail: 7/30/60/90 days, Custom time frame, or Never
Expiration Date: Enter for Custom time frame, otherwise autocompleted
Later, you can filter violations by Accepted
status to address them.
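The expiration logic can be pictured with a short sketch. This is illustrative only: the option names mirror the UI, but the code is not Sysdig's implementation.

```python
from datetime import date, timedelta

# Illustrative mapping of the "Expires In" choices to an expiration date.
EXPIRY_OPTIONS = {"7 days": 7, "30 days": 30, "60 days": 60, "90 days": 90}

def expiration_date(accepted_on: date, option: str, custom: date = None):
    """Derive when an accepted risk lapses and the resource fails again."""
    if option == "Never":
        return None  # acceptance does not lapse
    if option == "Custom":
        return custom  # user supplies the Expiration Date explicitly
    return accepted_on + timedelta(days=EXPIRY_OPTIONS[option])

print(expiration_date(date(2023, 1, 1), "30 days"))  # 2023-01-31
```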

Create and Download a Report
To meet compliance goals, an organization may need to generate output to be shared with other stakeholders, such as executives or auditors, to show point-in-time compliance/violations.
Reports can also be used for sharing compliance results with your development teams. Also consider using the CSPM APIs.
You can download ad hoc reports as CSV files from the Compliance Results page or from an individual control.
To generate a report of Compliance results:
Select Posture > Compliance
.
Select a tile of a policy under one of your zones.
Optional: Filter as desired, for example by dates, pass/fail status, or controls. You can select more than one policy for a single zone. The maximum report size is 10 MB.
Click Download Report
.
A file is downloaded in a CSV (Comma-Separated-Values) format and can be used as a spreadsheet.
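Because the export is plain CSV, it is easy to post-process, for example to count failing resources per control before handing results to a team. The column names below are hypothetical; check the header row of your actual export.

```python
import csv
import io

# Sketch: summarizing a downloaded Compliance Results CSV.
# The sample data and column names are invented for illustration.
report = io.StringIO(
    "Control,Resource,Result\n"
    "Container runs as root,payments-api,Fail\n"
    "Container runs as root,billing-api,Pass\n"
    "Privileged container,payments-api,Fail\n"
)

failures = {}
for row in csv.DictReader(report):
    if row["Result"] == "Fail":
        failures[row["Control"]] = failures.get(row["Control"], 0) + 1

print(failures)  # failing-resource count per control
```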

To generate a report from an individual control:
Select Posture > Compliance
.
Select a tile of a policy under one of your zones.
Select a control to open the control pane, filter the resources if desired, and click the “Download Report” button.
The maximum report size is 10 MB.

Using the CSPM API
When your organization uses a 3rd-party system to receive remediation reports and create tasks, consider using the CSPM APIs.
These are documented online along with the rest of the Sysdig Secure APIs.
Compliance Results API Call (Requirements)
- Please specify a zone in the request. If a zone is not specified in the request, results will be returned for policies applied on the default “Entire Infrastructure” zone.
- If no policy is applied on the default “Entire Infrastructure” zone, you will receive empty results.
- Note that URL Links to every Control Resource List API call are contained in the Compliance Results Response.
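As a sketch of these requirements, the snippet below builds a request that explicitly specifies a zone. The endpoint path and parameter names here are illustrative assumptions, not the authoritative contract; consult the online Sysdig Secure API documentation for the exact endpoints and schemas.

```python
from urllib.parse import urlencode

# Hypothetical request builder for the Compliance Results API call.
# Omitting the zone means results come from policies applied on the
# default "Entire Infrastructure" zone (empty if none are applied there).

def build_compliance_results_request(base_url, token, zone=None):
    params = {"zone": zone} if zone else {}
    url = f"{base_url}/api/cspm/v1/compliance/results"  # illustrative path
    if params:
        url += "?" + urlencode(params)
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = build_compliance_results_request(
    "https://secure.sysdig.com", "MY_API_TOKEN", zone="production"
)
print(url)
```

The response contains URL links to each Control Resource List API call, so a reporting integration can follow those links to drill down from a failed control to its resources.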
Appendix A
Terminology Changes
Previous Term | New Term
---|---
Framework, Benchmark | Policy: A group of business/security/compliance/operations requirements that can represent a compliance standard (e.g., PCI 3.2.1), a benchmark (e.g., CIS Kubernetes 1.5.1), or a business policy (e.g., ACME corp policy v1). You can review the available policies and create custom CSPM/Risk and Compliance policies under Policies.
Scopes | Zone: A business group of resources for a specific customer, defined by a collection of Scopes of various resource types, calculated with “OR” operators.
Control | Requirement (or Policy Requirement): A requirement exists in a single policy and is an integral part of it. The requirement represents a section in a policy with which compliance officers and auditors are familiar.
Family | Requirements Group: A group of requirements in a policy.
Rule | Control: A control defines the way the issue is identified (the check) and the playbook(s) to remediate the detected violation.
Vulnerability Exception | Risk Acceptance: The new module lets a user review a violation or vulnerability and acknowledge it, without yet remediating it, so that it does not fail the policy.
Policies Included
The following risk and compliance policies are included out-of-the-box:
National Institute of Standards and Technology (NIST)
- NIST SP 800-53 Rev 5
- NIST SP 800-53 Rev 5 Privacy Baseline
- NIST SP 800-53 Rev 5 Low Baseline
- NIST SP 800-53 Rev 5 Moderate Baseline
- NIST SP 800-53 Rev 5 High Baseline
- NIST SP 800-82 Rev 2
- NIST SP 800-82 Rev 2 Low Baseline
- NIST SP 800-82 Rev 2 Moderate Baseline
- NIST SP 800-82 Rev 2 High Baseline
- NIST SP 800-171 Rev 2
- NIST SP 800-190
- NIST SP 800-218 v1.
Federal Risk and Authorization Management Program (FedRAMP)
- FedRAMP Rev 4 LI-SaaS Baseline
- FedRAMP Rev 4 Low Baseline
- FedRAMP Rev 4 Moderate Baseline
- FedRAMP Rev 4 High Baseline
Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG)
- DISA Kubernetes Security Technical Implementation Guide (STIG) Ver 1 Rel 6
- DISA Kubernetes Security Technical Implementation Guide (STIG) Ver 1 Rel 6 Category II (Medium)
- DISA Kubernetes Security Technical Implementation Guide (STIG) Ver 1 Rel 6 Category I (High)
- DISA Docker Enterprise 2.x Linux/UNIX Security Technical Implementation Guide (STIG)
- DISA Docker Enterprise 2.x Linux/UNIX Security Technical Implementation Guide (STIG) v2 Category III (Low)
- DISA Docker Enterprise 2.x Linux/UNIX Security Technical Implementation Guide (STIG) v2 Category II (Medium)
- DISA Docker Enterprise 2.x Linux/UNIX Security Technical Implementation Guide (STIG) v2 Category I (High)
Center for Internet Security (CIS) Benchmarks
- CIS Distribution Independent Linux Benchmark v2.0.0
- CIS Docker Benchmark v1.3.1
- CIS Kubernetes V1.15 Benchmark v1.5.1
- CIS Kubernetes V1.18 Benchmark v1.6.0
- CIS Kubernetes V1.20 Benchmark v1.0.0
- CIS Kubernetes V1.23 Benchmark v1.0.0
- CIS Amazon Elastic Kubernetes Service (EKS) Benchmark v1.0.1
- CIS Google Kubernetes Engine (GKE) Benchmark v1.1.0
- CIS Azure Kubernetes Service (AKS) Benchmark v1.1.0
- CIS Amazon Web Services Foundations Benchmark v1.4.0
- CIS Google Cloud Platform Foundations Benchmark v1.3.0
- CIS Microsoft Azure Foundations Benchmark v1.4.0
Amazon Web Services (AWS) Best Practices
- AWS Well Architected Framework
- AWS Foundational Security Best Practices
Regulatory Compliance Standards
- System and Organization Controls (SOC) 2
- Health Insurance Portability and Accountability Act (HIPAA)
- Payment Card Industry Data Security Standard (PCI DSS) v3.2.1
- Payment Card Industry Data Security Standard (PCI DSS) v4.0
- NSA/CISA Kubernetes Hardening Guide
- General Data Protection Regulation (GDPR)
- ISO/IEC 27001:2013 v2
- Health Information Trust Common Security Framework (HITRUST CSF) v9.4.2
Risk Frameworks
- All Findings
- MITRE ATT&CK for Enterprise v10.1
- MITRE D3FEND
Sysdig Best Practices
- Sysdig Kubernetes - based on Sysdig’s security research and best practices
Coming soon:
- CIS Red Hat OpenShift Container Platform (OCP) Benchmark v1.2.0
Appendix B
Legacy Compliance Versions
Customers running older versions of Sysdig Secure may encounter different Compliance UI and features.
For OnPrem, IBM Cloud and Legacy Compliance Versions, see:
Migration Guide
For customers migrating to the new Compliance module, released in January 2023:
Starting January 17th, SaaS customers that connect new data sources for Sysdig cloud accounts or Sysdig agents will automatically have the new Compliance module (previously known as “Actionable Compliance”) enabled. Resources of the connected data sources will be evaluated according to CSPM/Risk and Compliance policies that are applied on zones. Results are displayed about 5-10 minutes after connection, varying by the scale of the resources.
If you were using Unified Compliance:
- For Existing Kubernetes clusters: please make sure that your applied helm charts are updated according to the KSPM Components guide.
- For Existing GCP cloud accounts, please be sure to enable the Cloud Asset API
- The new Compliance module will be auto-enabled on your existing Cloud accounts by January 26th.
Currently, the new CSPM Compliance module is not available for OnPrem and IBM Cloud users; they can continue using Unified Compliance.
4.1.1 - Compliance Legacy Versions
Customers running older versions of Sysdig Secure may encounter different iterations of the Compliance UI and features, as well as the Benchmarks module, which in current versions has moved behind the scenes.
The documentation appropriate for your Compliance tools depends on the software version you are running.
History of Compliance Components
4.1.1.1 - Compliance (Unified)
The Compliance module in Sysdig Secure comprises a validator tool that checks selected controls from various compliance standards, and the reports it compiles. New standards are being added regularly. The validator checks many Sysdig Secure features, including: image scanning policies, Falco runtime policies and rules, scheduled benchmark testing, Admission Controller, Network Security Policies, Node Image Analyzer, and more. Over time we will add new compliance coverage.
Disclaimer: Sysdig cannot check all controls within a framework, such as those related to physical security.
In January 2022, Sysdig Secure unified and simplified the Compliance interface.

From a single page, you can now:
Scope all types of reports
Scope across both host and cloud* platforms (workload*, Kubernetes, AWS, GCP, etc.)
Select any or all compliance frameworks (CIS AWS, CIS Azure, NIST, HIPAA, etc.)
Fine-tune selections by compliance framework version

Create/Enable/Disable reports
- Schedule a new report task for any of the available frameworks or platforms
- Enable/disable existing tasks
Review all scheduled tasks and the resulting reports
Benchmark tasks are now treated as just another compliance task, within the same interface
- No need to configure or reference the Legacy Benchmarks module once unified compliance is switched on
*Terminology note: Compliance standards are scoped to different platforms depending on the specific security rules they include. Broadly, these are divided into:
Workload types: Including any Falco rules for kernel system calls, Falco rules for Kubernetes audit logs, host benchmarks, and security features that affect hosts, containers, and kubernetes clusters
Cloud type: Falco rules for CloudTrail and Cloud Custodian rules on AWS, or for GCP, Azure, and other cloud providers as they are added.
Enable Unified Compliance UI
Prerequisites
When these two prerequisites are met, the new UI for unified compliance will be automatically deployed.
NOTE: If you are upgrading from an earlier version of Sysdig Secure, your existing compliance and benchmark records will be migrated to the new version and retained on the same schedule as before.
Use Compliance Reports
Access the Compliance Module
Click the Posture
icon in the left-hand navigation and select either All Platforms
or an individual platform under Compliance
.

Schedule New Task
Click +Schedule New
from the top-right corner of a Compliance landing page, or choose Posture > New Report
from the nav bar.
Choose the desired framework from the list presented and click Schedule
.

(Note that if a framework already has a scheduled task, you can view that report from here as well.)
Configure the report details:

- Report Name: Assign a name to the scheduled task
- Framework: Auto-filled from the selection you made, or choose a different framework
- Version: Select from the drop-down as needed
- Platform: Only applicable options will appear in the drop-down menu, based on the framework chosen
- Scope: Select Entire Infrastructure or an appropriate subscope from the drop-down menu
- Schedule: Choose Daily, Weekly, or Monthly and the time at which the task should be run and the report generated.
Click Schedule Report
. At the designated schedule, the task will run and the report will be displayed on the Compliance landing page.
Use Compliance Reports
Review a Report
Navigate to the Compliance
list from the Posture
menu.
Select a report from the list to view the Report details. The top section of the page presents the compliance report summary, with the Pass|Fail summary data.

Report Date: The most current report is displayed; select a different date/time from the drop-down to see an earlier version.
Expand relevant details: For example, click any Failing Controls in the summary at the top of the page and then expand to review the resources that are failing and find the suggested fixes.

Frameworks and Controls Implemented
At this time, Charmed Kubernetes is not supported.
AWS Foundational Security Best Practices v1 (FSBP) Compliance
The AWS Foundational Security Best Practices standard is a set of controls that detect when your deployed accounts and resources deviate from security best practices. The standard allows you to continuously evaluate all of your AWS accounts and workloads to quickly identify areas of deviation from best practices. It provides actionable and prescriptive guidance on how to improve and maintain your organization’s security posture. The controls include best practices from across multiple AWS services.
For AWS protection, Sysdig Secure will check the following sections:
AutoScaling.1, CloudTrail.1, Config.1, EC2.6, CloudTrail.2, DMS.1,
EC2.1, EC2.2, EC2.3, ES.1, IAM.1, IAM.2, IAM.4, IAM.5, IAM.6, IAM.7,
Lambda.2, GuardDuty.1
AWS Well Architected Framework Compliance
The AWS Well Architected Framework helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for a variety of applications and workloads. Built around six pillars—operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability—AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures and implement scalable designs.
For workload protection, Sysdig Secure will check the following
sections: OPS 4, OPS 5, OPS 6, OPS 7, OPS 8, SEC 1, SEC 5, SEC 6, SEC 7,
REL 2, REL 4, REL 5, REL 6, REL 10, PERF 5, PERF 6, PERF 7
For AWS protection, Sysdig Secure will check the following
sections: OPS 6, SEC 1, SEC 2, SEC 3, SEC 8, SEC 9, REL 2, REL 9, REL 10
FedRAMP Compliance
FedRAMP is a government-wide program that promotes the adoption of secure cloud services across the federal government by providing a standardized approach to security and risk assessment for cloud technologies and federal agencies. FedRAMP empowers agencies to use modern cloud technologies, with an emphasis on security and protection of federal information.
For workload protection, Sysdig Secure will check the following sections:
AC-2, AC-2(4), AC-2(12), AC-6(1), AC-6(2), AC-6(3), AU-2, AU-6,
AU-10, AU-12, CM-3(6), CM-7, CM-7(1), SA-10, SC-8, SC-8(1), SI-3, SI-4(4)
For AWS protection, Sysdig Secure will check the following sections:
AC-2, AC-2(4), AU-8, SC-8(1)
For Google Cloud protection, Sysdig Secure will check the following sections:
AC-6(1), AC-6(2), AC-6(3), AU-9(2), AU-12(1), CM-3(1), SC-7(4)
For Azure protection, Sysdig Secure will check the following sections:
AC-2, AU-8
GDPR Compliance
The General Data Protection Regulation 2016/679 (GDPR) is
a regulation for data protection and privacy in the European Union (EU)
and the European Economic Area (EEA). It also addresses the transfer of
personal data outside the EU and EEA areas.
For workload protection, Sysdig Secure will check the following
sections: 5.1, 5.2, 24.1, 24.2, 24.3, 25.1, 25.2, 25.3, 32.1, 32.2, 40.2
For AWS protection, Sysdig Secure will check the following sections:
5.1, 5.2, 24.1, 24.2, 24.3, 25.1, 25.2, 25.3, 30.1, 30.2, 30.3, 30.4,
30.5, 32.1, 32.2, 40.2
For Google Cloud protection, Sysdig Secure will check the following sections:
25.1, 25.2, 25.3, 32.1, 32.2
For Azure protection, Sysdig Secure will check the following sections:
25.1, 25.2, 25.3, 32.1, 32.2
HIPAA Compliance
The Health Insurance Portability and Accountability Act (HIPAA) sets the standard for protecting sensitive patient data. Companies dealing with Protected Health Information (PHI) must have and comply with physical, network, and technology security measures to maintain HIPAA compliance. Any entity providing health care treatment, payment, and operations, as well as any entity who has access to patient information and provides support for treatment, payment, or operations must comply with HIPAA requirements. Other organizations such as subcontractors and any other related business partners must also comply.
For workload protection, Sysdig Secure will check the following
sections: 164.308(a)(1)(ii)(D), 164.308(a)(3)(i), 164.308(a)(3)(ii)(A),
164.308(a)(3)(ii)(B), 164.308(a)(4)(i), 164.308(a)(4)(ii)(A),
164.308(a)(4)(ii)(B), 164.308(a)(4)(ii)(C), 164.308(a)(5)(ii)(B),
164.308(a)(5)(ii)(C), 164.310(a)(2)(iii), 164.310(b), 164.312(a)(1),
164.312(a)(2)(i), 164.312(a)(2)(ii), 164.312(a)(2)(iv), 164.312(b),
164.312(c)(1), 164.312(c)(2), 164.312(d), 164.312(e)(2)(i),
164.312(e)(2)(ii)
For AWS protection, Sysdig Secure will check the following sections:
164.308(a)(1)(ii)(D), 164.308(a)(3)(i), 164.308(a)(3)(ii)(A),
164.308(a)(3)(ii)(B), 164.308(a)(4)(i), 164.308(a)(4)(ii)(A),
164.308(a)(4)(ii)(B), 164.308(a)(4)(ii)(C), 164.308(a)(5)(ii)(B),
164.308(a)(5)(ii)(C), 164.308(a)(8), 164.310(b), 164.312(a)(1),
164.312(a)(2)(i), 164.312(a)(2)(ii), 164.312(b), 164.312(c)(1),
164.312(c)(2), 164.312(e)(2)(i)
For Google Cloud protection, Sysdig Secure will check the following sections:
164.310(b), 164.312(b), 164.312(d)
HITRUST CSF v9.4.2 Compliance
The HITRUST Common Security Framework (CSF) provides the structure, transparency, guidance, and cross-references to authoritative sources organizations globally need to be certain of their data protection compliance. It leverages nationally and internationally accepted security and privacy-related regulations, standards, and frameworks–including ISO, NIST, PCI, HIPAA, and GDPR–to ensure a comprehensive set of security and privacy controls and continually incorporates additional authoritative sources. The HITRUST CSF standardizes these requirements, providing clarity and consistency and reducing the burden of compliance.
For workload protection, Sysdig Secure will check the following sections:
01.b, 01.c, 01.i, 01.j, 01.k, 01.l, 01.m, 01.n, 01.o, 01.p, 01.q, 01.s, 01.v, 01.w,
01.x, 01.y, 03.d, 05.i, 06.h, 06.i, 06.j, 09.b, 09.i, 09.j, 09.k, 09.m, 09.n, 09.s,
09.v, 09.w, 09.x, 09.y, 09.z, 09.aa, 09.ab, 09.ac, 09.ad, 09.ae, 10.c, 10.d, 10.g,
10.h, 10.j, 10.k, 10.m, 11.a, 11.b
For AWS protection, Sysdig Secure will check the following sections:
01.c, 01.i, 01.p, 01.s, 01.v, 01.x, 01.y, 05.i, 06.i, 09.m,
09.v, 09.x, 09.ac, 09.af, 10.j, 11.b
For Google Cloud protection, Sysdig Secure will check the following sections:
01.c, 01.j, 01.n, 01.q, 01.y, 05.i, 06.d, 06.j, 09.m, 09.s,
10.g, 10.k
For Azure protection, Sysdig Secure will check the following sections:
01.x, 09.m, 09.ac, 09.af, 11.b
ISO 27001:2013 Compliance
The ISO/IEC 27001:2013 is an international standard on how to manage information security. It details requirements for establishing, implementing, maintaining and continually improving an information security management system (ISMS).
For workload protection, Sysdig Secure will check the following
sections: A.6.1.2, A.8.1.1, A.8.1.2, A.8.1.3, A.9.1.2, A.9.2.3, A.9.4.1,
A.9.4.4, A.10.1.1, A.12.1.2, A.12.4.1, A.12.5.1, A.12.6.1, A.12.6.2,
A.13.1.1, A.13.1.2, A.13.1.3, A.14.1.2, A.14.2.2, A.14.2.4, A.18.1.3,
A.18.1.5
For AWS protection, Sysdig Secure will check the following sections:
A.6.1.2, A.9.1.1, A.9.1.2, A.9.2.3, A.9.2.5, A.9.4.2, A.9.4.3, A.10.1.1,
A.10.1.2, A.12.1.2, A.13.1.1, A.14.1.2, A.18.1.3, A.18.1.5
For Google Cloud protection, Sysdig Secure will check the following sections:
A.6.1.2, A.9.1.2, A.9.2.3, A.10.1.2, A.18.1.3, A.18.1.5
For Azure protection, Sysdig Secure will check the following sections:
A.9.1.2, A.9.4.2, A.10.1.1, A.13.1.1, A.14.1.2, A.18.1.3, A.18.1.5
NIST 800-53 rev4 and rev5 Compliance
The National Institute of Standards and Technology (NIST) Special Publication 800-53 revision 4
describes the full range of controls required to pass a NIST 800-53 audit.
For workload protection, Sysdig Secure will check the following
sections: AC-2, AC-2(4), AC-2(12), AC-3, AC-4, AC-4(17), AC-6, AC-6(1),
AC-6(2), AC-6(3), AC-6(5), AC-6(6), AC-6(9), AC-6(10), AC-14, AC-17,
AC-17(1), AC-17(3), AC-17(4), AU-2, AU-6, AU-6(8), AU-10, AU-12, CA-9,
CM-3, CM-3(6), CM-5, CM-7, CM-7(1), CM-7(4), IA-3, SA-10, SA-15(10),
SC-2, SC-4, SC-7, SC-7(3), SC-7(10), SC-8, SC-8(1), SC-12(3), SC-17,
SC-39, SI-3, SI-3(1), SI-3(2), SI-4, SI-4(2), SI-4(4), SI-4(11),
SI-4(13), SI-4(18), SI-4(20), SI-4(22), SI-4(23), SI-4(24), SI-7,
SI-7(3), SI-7(9), SI-7(11), SI-7(12), SI-7(13), SI-7(14), SI-7(15)
For AWS protection, Sysdig Secure will check the following sections:
AC-2, AC-2(4), AC-4, AC-6, AC-6(9), AU-6(8), AU-8, CA-7, CM-6, SC-8(1),
SI-4, SI-12
For Google Cloud protection, Sysdig Secure will check the following sections:
AC-6(1), AC-6(2), AC-6(3), AC-6(5), AC-6(9), AC-6(10), AC-17(1), AC-17(2),
AC-17(3), AU-6(8), AU-9(2), AU-12(1), CM-3(1), IA-2(12), SC-7(3),
SC-7(4), SC-7(5), SC-7(8), SC-7(21), SC-12(1)
For Azure protection, Sysdig Secure will check the following sections:
AC-2, AU-8, SI-4
Special Publication 800-53 revision 5
was published in September 2020 and includes some modifications. For 12 months
both revisions will be valid, and revision 4 will be deprecated in September 2021.
For workload protection, Sysdig Secure will check the following
sections: AC-2, AC-2(4), AC-2(12), AC-3, AC-4, AC-4(17), AC-6, AC-6(1),
AC-6(2), AC-6(3), AC-6(5), AC-6(6), AC-6(9), AC-6(10), AC-14, AC-17,
AC-17(1), AC-17(3), AC-17(4), AC-17(10), AU-2, AU-6, AU-6(8), AU-10,
AU-12, CA-3(6), CA-7(4), CA-7(5), CA-9, CM-3, CM-3(6), CM-3(7), CM-3(8),
CM-4, CM-4(2), CM-5, CM-5(1), CM-7, CM-7(1), CM-7(4), CM-7(6), CM-7(7),
CM-7(8), CM-8, CM-11(3), IA-3, MA-3(5), MA-3(6), PM-5(1), RA-3(4),
RA-10, SA-10, SA-15(10), SA-23, SC-2, SC-4, SC-7, SC-7(3), SC-7(10),
SC-7(25), SC-7(26), SC-7(27), SC-7(28), SC-7(29), SC-8, SC-8(1),
SC-12(3), SC-17, SC-39, SC-50, SI-3, SI-4, SI-4(2), SI-4(4), SI-4(11),
SI-4(13), SI-4(18), SI-4(20), SI-4(22), SI-4(23), SI-4(24), SI-4(25),
SI-7, SI-7(3), SI-7(9), SI-7(12), SI-7(15)
For AWS protection, Sysdig Secure will check the following sections:
AC-2, AC-2(4), AC-4, AC-6, AC-6(9), AU-6(8), AU-8, SC-8(1), SI-4
For Google Cloud protection, Sysdig Secure will check the following sections:
AC-6(1), AC-6(2), AC-6(3), AC-6(5), AC-6(9), AC-6(10), AC-17(1), AC-17(2),
AC-17(3), AU-6(8), AU-9(2), AU-12(1), CM-3(1), IA-2(12), SC-7(3),
SC-7(4), SC-7(5), SC-7(8), SC-7(21), SC-12(1)
For Azure protection, Sysdig Secure will check the following sections:
AC-2, AU-8, SI-4
NIST 800-82 rev2 Compliance
The National Institute of Standards and Technology (NIST)
Special Publication 800-82 revision 2
provides guidance on how to secure Industrial Control Systems (ICS),
including Supervisory Control and Data Acquisition (SCADA) systems, Distributed
Control Systems (DCS), and other control system configurations such as Programmable
Logic Controllers (PLC), while addressing their unique performance, reliability,
and safety requirements.
For workload protection, Sysdig Secure will check the following
sections: AC-2, AC-2(4), AC-2(12), AC-3, AC-4, AC-6, AC-6(1),
AC-6(2), AC-6(3), AC-6(5), AC-6(9), AC-6(10), AC-17,
AC-17(1), AC-17(3), AC-17(4), AU-2, AU-6, AU-10,
AU-12, CA-9, CM-3, CM-5, CM-7, CM-7(1), IA-3, SA-10, SC-2, SC-4, SC-7, SC-7(3),
SC-8, SC-8(1), SC-17, SC-39, SI-3, SI-3(1), SI-3(2), SI-4, SI-4(2), SI-4(4),
SI-7, SI-7(14)
For AWS protection, Sysdig Secure will check the following sections:
AC-2, AC-2(4), AC-4, AC-6, AC-6(9), AU-8, SC-8(1), SI-4.
For Google Cloud protection, Sysdig Secure will check the following
sections: AC-6(1), AC-6(2), AC-6(3), AC-6(5), AC-6(9), AC-6(10),
AC-17(1), AC-17(2), AC-17(3), AU-9(2), AU-12(1),
CM-3(1), IA-2(12), SC-7(3), SC-7(4), SC-7(5), SC-7(8), SC-7(21),
SC-12(1)
For Azure protection, Sysdig Secure will check the following sections:
AC-2, AU-8, SI-4
NIST 800-171 rev2 Compliance
The National Institute of Standards and Technology (NIST)
Special Publication 800-171 revision 2
provides agencies with recommended security requirements for
protecting the confidentiality of Controlled Unclassified Information
(CUI) when the information is resident in nonfederal systems and
organizations.
For workload protection, Sysdig Secure will check the following
sections: 3.1.1, 3.1.2, 3.1.3, 3.1.5, 3.1.6, 3.1.7, 3.1.12, 3.1.13,
3.1.14, 3.1.15, 3.1.16, 3.1.17, 3.1.20, 3.3.1, 3.3.2, 3.3.5, 3.3.8,
3.3.9, 3.4.3, 3.4.5, 3.4.6, 3.4.7, 3.4.9, 3.5.1, 3.5.2, 3.11.2, 3.12.1,
3.13.1, 3.13.2, 3.13.3, 3.13.4, 3.13.5, 3.13.6, 3.13.8, 3.14.1, 3.14.2,
3.14.3, 3.14.4, 3.14.5, 3.14.6, 3.14.7
For AWS protection, Sysdig Secure will check the following
sections: 3.1.1, 3.1.2, 3.1.3, 3.3.1, 3.3.2, 3.3.7, 3.5.7, 3.5.8, 3.14.6,
3.14.7
For Azure protection, Sysdig Secure will check the following sections:
3.1.1, 3.1.2, 3.3.7, 3.14.6, 3.14.7
NIST 800-190 Compliance
The National Institute of Standards and Technology (NIST) Special
Publication 800-190
explains the potential security concerns associated with the use of
containers and provides recommendations for addressing these concerns.
For workload protection, Sysdig Secure will check the following sections:
3.1.1, 3.1.2, 3.1.3, 3.1.4, 3.1.5, 3.2.1, 3.2.2, 3.3.1, 3.3.2,
3.3.3, 3.3.4, 3.3.5, 3.4.1, 3.4.2, 3.4.3, 3.4.4, 3.4.5, 3.5.2, 3.5.5
PCI DSS v3.2.1
The PCI Data Security Standard (DSS) Quick Reference
describes the full range of controls required to pass a PCI 3.2 audit. In this
release, Sysdig Secure will check the following subset:
For workload protection: 1.1.2, 1.1.3, 1.1.5, 1.1.6.b,
2.2, 2.2.a, 2.2.1, 2.2.2, 2.4, 2.6, 4.1, 6.1, 6.2, 6.4.2, 6.5.1, 6.5.6,
6.5.8, 7.2.3, 10.1, 10.2, 10.2.1, 10.2.5, 10.2.7, 10.5.5, 11.5.1
For AWS protection, Sysdig Secure will check the following
sections: 2.2, 2.2.2, 10.1, 10.2.1, 10.2.2, 10.2.5, 10.2.6, 10.2.7,
10.5.5, 11.4
For Google Cloud protection, Sysdig Secure will check the following
sections: 1.1.5, 7.1.2, 10.1, 10.2, 10.3
For Azure protection, Sysdig Secure will check the following
sections: 2.2.2
SOC2
The American Institute of CPAs (AICPA) describes the full range of controls required to pass a SOC 2 audit.
For workload protection, Sysdig Secure will check the following
sections: CC3.2, CC5.1, CC5.2, CC6.1, CC6.2, CC6.6, CC6.8, CC7.1, CC7.2,
CC7.5, CC8.1, CC9.1
For AWS protection, Sysdig Secure will check the following sections:
CC3.2, CC5.2, CC6.2, CC6.6, CC7.1, CC7.2
For Google Cloud protection, Sysdig Secure will check the following sections:
CC5.2, CC6.1, CC6.2, CC6.6, CC7.1, CC8.1
For Azure protection, Sysdig Secure will check the following sections:
CC5.2, CC6.1, CC6.6, CC7.2, CC8.1
4.1.1.2 - Compliance (Legacy)
The Regulatory Compliance module in Sysdig Secure comprises a validator tool that checks selected controls from various compliance standards, and the reports it compiles. New standards are being added regularly. At this time, checks are provided against specific controls in:
The validator checks many Sysdig Secure features, including: image
scanning policies, Falco runtime policies and rules, scheduled benchmark
testing, Admission Controller, Network Security Policies, Node Image
Analyzer, and more. Over time we will add new compliance coverage.
Disclaimer: Sysdig cannot check all controls within a framework, such
as those related to physical security.
Terminology note: Compliance standards are scoped to different
platforms depending on the specific security rules they include.
Broadly, these are divided into:
Workload types: Including any Falco rules for kernel system
calls, Falco rules for Kubernetes audit logs, host benchmarks, and
security features that affect hosts, containers, and kubernetes
clusters
AWS/cloud type: Falco rules for CloudTrail and Cloud Custodian
rules on Amazon Web Services
Use Compliance Reports
Access the Compliance Module
Sysdig Secure admin: Enable the feature under
Settings > Sysdig Labs
.
Click the Posture
icon in the left-hand navigation and select AWS
or Workloads
under Regulatory Compliance.
Review a Report
Each of the standard's controls is checked when you visit the Compliance
page, and the page always shows the current state of your environment.

Compliance Report Summary
The top section of the page presents the compliance report summary, with
the Pass|Fail summary data.
Pass %: Total percentage of all available checks that have
passed
Passed: Total number of controls implemented that Sysdig was
able to validate
Failed: Total number of controls not implemented that Sysdig was
able to validate
Unchecked: Total number of controls that Sysdig configured to
check but unable to validate (i.e. unavailable API at the time of
validation)
Total Controls: Total number of controls Sysdig is configured to
check
Control Report and Common Fixes
The controls are grouped together under collapsible sections of “control
families.”

Open them to see each control description with a link to either the:
Proof: Link to the implemented Sysdig feature that permitted the
control to pass, or the
Remediation: Link to the Sysdig feature that must be implemented
to pass a check within the control
The Rationale is the reason an implemented Sysdig feature will pass
a check within the control.
The Common Fixes section on the left consolidates the links for
enabling Sysdig features in order to pass the control checks.
Control Details
Terminology note: Compliance standards are scoped to different
platforms depending on the specific security rules they include.
Broadly, these are divided into:
- Workload types: Including any Falco rules for kernel system
calls, Falco rules for Kubernetes audit logs, host benchmarks, and
security features that affect hosts, containers, and kubernetes
clusters
- AWS/cloud type: Falco rules for CloudTrail and Cloud Custodian
rules on Amazon Web Services
PCI Controls Implemented
The PCI Quick
Reference describes
the full range of controls required to pass a PCI 3.2 audit. In this
release, Sysdig Secure will check the following subset:
For PCI 3.2.1 workload protection: 1.1.2, 1.1.3, 1.1.5, 1.1.6.b,
2.2, 2.2.a, 2.2.1, 2.2.2, 2.4, 2.6, 4.1, 6.1, 6.2, 6.4.2, 6.5.1, 6.5.6,
6.5.8, 7.2.3, 10.1, 10.2, 10.2.1, 10.2.5, 10.2.7, 10.5.5, 11.5.1
For PCI DSS v3.2.1 for AWS, Sysdig Secure will check the following
sections: 2.2, 2.2.2, 10.1, 10.2.1, 10.2.2, 10.2.5, 10.2.6, 10.2.7,
10.5.5, 11.4
SOC2 Controls Implemented
The American Institute of CPAs (AICPA)
describes the full range of controls required to pass a SOC 2 audit.
For workload protection, Sysdig Secure will check the following
sections: CC3.2, CC5.1, CC5.2, CC6.1, CC6.2, CC6.6, CC6.8, CC7.1, CC7.2,
CC7.5, CC8.1, CC9.1
For AWS protection, Sysdig Secure will check the following sections:
CC3.2, CC5.2, CC6.2, CC6.6, CC7.1, CC7.2.
NIST 800-53 rev4 and rev5 Controls Implemented
The National Institute of Standards and Technology (NIST) Special
Publication 800-53 revision 4 describes
the full range of controls required to pass a NIST 800-53 audit.
For workload protection, Sysdig Secure will check the following
sections: AC-2, AC-2(4), AC-2(12), AC-3, AC-4, AC-4(17), AC-6, AC-6(1),
AC-6(2), AC-6(3), AC-6(5), AC-6(6), AC-6(9), AC-6(10), AC-14, AC-17,
AC-17(1), AC-17(3), AC-17(4), AU-2, AU-6, AU-6(8), AU-10, AU-12, CA-9,
CM-3, CM-3(6), CM-5, CM-7, CM-7(1), CM-7(4), IA-3, SA-10, SA-15(10),
SC-2, SC-4, SC-7, SC-7(3), SC-7(10), SC-8, SC-8(1), SC-12(3), SC-17,
SC-39, SI-3, SI-3(1), SI-3(2), SI-4, SI-4(2), SI-4(4), SI-4(11),
SI-4(13), SI-4(18), SI-4(20), SI-4(22), SI-4(23), SI-4(24), SI-7,
SI-7(3), SI-7(9), SI-7(11), SI-7(12), SI-7(13), SI-7(14), SI-7(15)
For AWS protection, Sysdig Secure will check the following sections:
AC-2, AC-2(4), AC-4, AC-6, AC-6(9), AU-6(8), AU-8, CA-7, CM-6, SC-8(1),
SI-4, SI-12.
Special Publication 800-53 revision 5
was
published in September 2020 and includes some modifications. For 12
months both revisions will be valid, and revision 4 will be deprecated
in September 2021.
For workload protection, Sysdig Secure will check the following
sections: AC-2, AC-2(4), AC-2(12), AC-3, AC-4, AC-4(17), AC-6, AC-6(1),
AC-6(2), AC-6(3), AC-6(5), AC-6(6), AC-6(9), AC-6(10), AC-14, AC-17,
AC-17(1), AC-17(3), AC-17(4), AC-17(10), AU-2, AU-6, AU-6(8), AU-10,
AU-12, CA-3(6), CA-7(4), CA-7(5), CA-9, CM-3, CM-3(6), CM-3(7), CM-3(8),
CM-4, CM-4(2), CM-5, CM-5(1), CM-7, CM-7(1), CM-7(4), CM-7(6), CM-7(7),
CM-7(8), CM-8, CM-11(3), IA-3, MA-3(5), MA-3(6), PM-5(1), RA-3(4),
RA-10, SA-10, SA-15(10), SA-23, SC-2, SC-4, SC-7, SC-7(3), SC-7(10),
SC-7(25), SC-7(26), SC-7(27), SC-7(28), SC-7(29), SC-8, SC-8(1),
SC-12(3), SC-17, SC-39, SC-50, SI-3, SI-4, SI-4(2), SI-4(4), SI-4(11),
SI-4(13), SI-4(18), SI-4(20), SI-4(22), SI-4(23), SI-4(24), SI-4(25),
SI-7, SI-7(3), SI-7(9), SI-7(12), SI-7(15)
For AWS protection, Sysdig Secure will check the following sections:
AC-2, AC-2(4), AC-4, AC-6, AC-6(9), AU-6(8), AU-8, SC-8(1), SI-4.
NIST 800-171 rev2 Compliance
The National Institute of Standards and Technology (NIST) Special
Publication 800-171
rev2
describes the full range of controls required to pass a NIST 800-171
audit. It provides agencies with recommended security requirements for
protecting the confidentiality of Controlled Unclassified Information
(CUI) when the information is resident in nonfederal systems and
organizations.
For workload protection, Sysdig Secure will check the following
sections: 3.1.1, 3.1.2, 3.1.3, 3.1.5, 3.1.6, 3.1.7, 3.1.12, 3.1.13,
3.1.14, 3.1.15, 3.1.16, 3.1.17, 3.1.20, 3.3.1, 3.3.2, 3.3.5, 3.3.8,
3.3.9, 3.4.3, 3.4.5, 3.4.6, 3.4.7, 3.4.9, 3.5.1, 3.5.2, 3.11.2, 3.12.1,
3.13.1, 3.13.2, 3.13.3, 3.13.4, 3.13.5, 3.13.6, 3.13.8, 3.14.1, 3.14.2,
3.14.3, 3.14.4, 3.14.5, 3.14.6, 3.14.7
For AWS protection, Sysdig Secure will check the following
sections: 3.1.1, 3.1.2, 3.1.3, 3.3.1, 3.3.2, 3.3.7, 3.5.7, 3.5.8, 3.14.6,
3.14.7
ISO 27001:2013 Controls Implemented
The ISO27001:2013 standard
describes the full range of controls required to pass an ISO27001:2013
audit.
For workload protection, Sysdig Secure will check the following
sections: A.6.1.2, A.8.1.1, A.8.1.2, A.8.1.3, A.9.1.2, A.9.2.3, A.9.4.1,
A.9.4.4, A.10.1.1, A.12.1.2, A.12.4.1, A.12.5.1, A.12.6.1, A.12.6.2,
A.13.1.1, A.13.1.2, A.13.1.3, A.14.1.2, A.14.2.2, A.14.2.4, A.18.1.3,
A.18.1.5
For AWS protection, Sysdig Secure will check the following sections:
A.6.1.2, A.9.1.1, A.9.1.2, A.9.2.3, A.9.2.5, A.9.4.2, A.9.4.3, A.10.1.1,
A.10.1.2, A.12.1.2, A.13.1.1, A.14.1.2, A.18.1.3, A.18.1.5.
HIPAA Controls Implemented
The HIPAA (Health Insurance Portability and Accountability
Act) standard describes the full
range of controls required to pass a HIPAA audit.
For workload protection, Sysdig Secure will check the following
sections: 164.308(a)(1)(ii)(D), 164.308(a)(3)(i), 164.308(a)(3)(ii)(A),
164.308(a)(3)(ii)(B), 164.308(a)(4)(i), 164.308(a)(4)(ii)(A),
164.308(a)(4)(ii)(B), 164.308(a)(4)(ii)(C), 164.308(a)(5)(ii)(B),
164.308(a)(5)(ii)(C), 164.310(a)(2)(iii), 164.310(b), 164.312(a)(1),
164.312(a)(2)(i), 164.312(a)(2)(ii), 164.312(a)(2)(iv), 164.312(b),
164.312(c)(1), 164.312(c)(2), 164.312(d), 164.312(e)(2)(i),
164.312(e)(2)(ii)
For AWS protection, Sysdig Secure will check the following sections:
164.308(a)(1)(ii)(D), 164.308(a)(3)(i), 164.308(a)(3)(ii)(A),
164.308(a)(3)(ii)(B), 164.308(a)(4)(i), 164.308(a)(4)(ii)(A),
164.308(a)(4)(ii)(B), 164.308(a)(4)(ii)(C), 164.308(a)(5)(ii)(B),
164.308(a)(5)(ii)(C), 164.308(a)(8), 164.310(b), 164.312(a)(1),
164.312(a)(2)(i), 164.312(a)(2)(ii), 164.312(b), 164.312(c)(1),
164.312(c)(2), 164.312(e)(2)(i).
GDPR Controls Implemented
The General Data Protection Regulation 2016/679 (GDPR)
is
a regulation for data protection and privacy in the European Union (EU)
and the European Economic Area (EEA). It also addresses the transfer of
personal data outside the EU and EEA areas.
For workload protection, Sysdig Secure will check the following
sections: 5.1, 5.2, 24.1, 24.2, 24.3, 25.1, 25.2, 25.3, 32.1, 32.2, 40.2
For AWS protection, Sysdig Secure will check the following sections:
5.1, 5.2, 24.1, 24.2, 24.3, 25.1, 25.2, 25.3, 30.1, 30.2, 30.3, 30.4,
30.5, 32.1, 32.2, 40.2
AWS Well-Architected Framework Compliance
The AWS Well-Architected Framework whitepaper
defines best practices to build secure, high-performing, resilient, and
efficient infrastructure for applications and workloads.
For workload protection, Sysdig Secure will check the following
sections: OPS 4, OPS 5, OPS 6, OPS 7, OPS 8, SEC 1, SEC 5, SEC 6, SEC 7,
REL 2, REL 4, REL 5, REL 6, REL 10, PERF 5, PERF 6, PERF 7
For AWS protection, Sysdig Secure will check the following
sections: OPS 6, SEC 1, SEC 2, SEC 3, SEC 8, SEC 9, REL 2, REL 9, REL 10
AWS Foundational Security Best Practices v1 (FSBP) Compliance
AWS Foundational Security Best Practices v1
(FSBP)
describes the full range of controls to detect when your deployed
accounts and resources deviate from security best practices.
For AWS protection, Sysdig Secure will check the following sections:
AutoScaling.1, CloudTrail.1, Config.1, EC2.6, CloudTrail.2, DMS.1,
EC2.1, EC2.2, EC2.3, ES.1, IAM.1, IAM.2, IAM.4, IAM.5, IAM.6, IAM.7,
Lambda.2, GuardDuty.1
4.1.1.3 - Benchmarks (Legacy)
Navigate the Benchmark Tasks Landing Page
Select Posture > Benchmark|Tasks
. The Tasks landing page is
displayed.
A “task” is the combination of a benchmark test (schema), the scope it
runs against, and its schedule. Once a task is configured, it is listed
on the landing page and is linked to the full benchmark report.

For new users: If no tasks have been created yet, you will be
prompted to create some.
For users who had Benchmark v1 tasks configured:
v1 tasks will be migrated to v2.
You can still view all v1 schedules and reports from the
View Legacy Benchmarks
button, if desired. Modifications to v1
after this point will not be propagated.
On this page you can:
Enable/disable a task. Note that if you have Sysdig Secure for
cloud installed then the AWS Foundations
Benchmark task is
listed for information but is handled differently than the other
task types.
Filter the list by scope or task type to find the task more
easily
Click a task to access the full benchmark
report
Benchmark Components details
Types of Benchmark Schemas
The Center for Internet Security (CIS)
issues standardized benchmarks, guidelines, and best practices for
securing IT systems and environments. Additionally, Red Hat publishes a
Container Security Guide that outlines best practices for running
OpenShift 3.10/3.11 clusters.
With v2, Sysdig supports the following types of benchmark
tests/schemas:

Understanding Benchmark Scopes
When you Configure Benchmark Tasks
, the available scope
depends on the schema you choose.
Scope Label | Description | Source | Applicable Schemas |
---|---|---|---
host.hostName | The local hostname of the machine running the benchmark container. | Retrieved from the machine running the benchmark container. | All |
host.mac | The MAC address of the machine running the benchmark container. | Retrieved from the machine running the benchmark container. | All |
aws.accountId | The AWS account ID containing the EC2 instance running the benchmark container. | Retrieved from the AWS EC2 Instance Metadata Service | CIS Amazon Elastic Kubernetes Service (EKS) Benchmark v1.0.0 |
aws.region | The Region containing the EC2 instance running the benchmark container. | Retrieved from the AWS EC2 Instance Metadata Service | CIS Amazon Elastic Kubernetes Service (EKS) Benchmark v1.0.0 |
aws.instanceId | The AWS instance ID of the EC2 instance running the benchmark container. | Retrieved from the AWS EC2 Instance Metadata Service | CIS Amazon Elastic Kubernetes Service (EKS) Benchmark v1.0.0 |
gcp.projectId | The Project ID used to create the instance. | Retrieved from the GCP Compute Engine Metadata endpoint | CIS Google Kubernetes Engine (GKE) Benchmark v1.0.0 |
gcp.instanceId | The ID of the VM. | Retrieved from the GCP Compute Engine Metadata endpoint | CIS Google Kubernetes Engine (GKE) Benchmark v1.0.0 |
gcp.instanceZone | The Zone that the VM is running in. | Retrieved from the GCP Compute Engine Metadata endpoint | CIS Google Kubernetes Engine (GKE) Benchmark v1.0.0 |
kubernetes.cluster.name | The configured Cluster name. | Set in the sysdig-agent configmap under the key: k8s_cluster_name | All |
kubernetes.node.name | The name of the node in Kubernetes. | Supplied by Kubernetes Downwards API | All |
agent.tag.* | A set of customizable tags set in the agent configmap. Same as tags for the standard agent | Set in the sysdig-agent configmap under the key: tags | All |
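As the table notes, the Kubernetes-related scope values are read from the sysdig-agent configmap. A minimal fragment of the agent configuration showing those keys, with placeholder values (the cluster name and tags below are examples only):

```yaml
# Fragment of the sysdig-agent configuration (dragent.yaml).
# Values are placeholders for illustration.
k8s_cluster_name: prod-cluster        # surfaces as kubernetes.cluster.name
tags: role:webserver,location:us-east # surfaces as agent.tag.role, agent.tag.location
```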
4.1.1.3.1 - Configure Benchmark Tasks
Use a Benchmark Task to define:
the type of benchmark test to be run
the scope of the environment to be checked
the scheduled test frequency
the list of controls to be included/excluded. Use this to silence
noisy or unfixable controls that you’ve determined are not useful.
Once a task has been set up, it will run tests automatically on the
scheduled timeline. You can also trigger the task manually.
Create a Task
Select Compliance > Benchmark|Tasks
.
The Task
benchmark landing page is displayed.

Click+Add Task
and define the task parameters on
the New Task
page:

Name:
Create a meaningful name.
Schema:
Select the appropriate schema type from the drop-down
menu. See Types of Benchmark Schemas
for details.
Schedule:
Choose a frequency and time to run the test.
Benchmarks can be scheduled Daily, Weekly or Monthly, on
designated days at a specific time. A single task cannot be
scheduled more frequently than once per day.
Scope:
Choose from the available scoping options, which are
auto-filtered based on the chosen schema. See also:
Understanding Benchmark Scopes.
Custom Report:
De-select any of the controls you don’t want
run in the test or view in the report.
Click Save
.
The task will appear on the Tasks landing page along with the date and
time it was last run. Click the task to review the report.
Tasks are immutable once created. You cannot change the scope, schedule,
schema or filtered controls for an existing task.
Trigger a Task Manually
Rather than wait for the next scheduled time for a task to run, users
can choose to run a benchmark test manually.
Select Compliance > Benchmark|Tasks
.
On the relevant task, click the Run Now
(arrow) icon.

A notification will state that the test was successfully run.
4.1.1.3.2 - AWS Foundations Benchmarks
Overview
The CIS Amazon Web Services Foundations Benchmark v1.3.0 forms
one part of Sysdig’s comprehensive Cloud Security Posture Management
(CSPM) and Compliance tools. The AWS CIS Benchmarks assessment evaluates
your AWS services against the benchmark requirements and returns the
results and remediation activities you need to fix misconfigurations in
your cloud environment.
We’ve included several UI improvements to provide additional details
such as: control descriptions, affected resources, failing assets, and
guided remediation steps, both manual and CLI-based when available.
Enable CIS AWS Foundations Benchmarks
Prerequisites
Sysdig Secure (SaaS)
Workloads running in the AWS environment, including EKS, Fargate,
etc. for which you want to verify best security practices and
compliance
Deploy: using a simple CloudFormation Template in the AWS Console.
See Deploy Sysdig Secure for cloud on
AWS
Using AWS Foundations Benchmarks
The checks and reports for AWS Benchmarks differ from Host
Benchmarks in the following
ways:
No scheduling: The check is automatically deployed daily; the
user does not choose a particular schedule, nor to “run now.”
Tasks and Reports combined:
There is a single page displaying:
The chosen AWS account, region, and the date the report was run
The curated list of controls that are run (left panel)
The daily report, with its pass/fail details and any recommended
remediation steps
Reviewing an AWS CIS Report
Log in to Sysdig Secure and select
Compliance > AWS Foundations Benchmark
.

Select the relevant report:
Account id: From the drop-down menu, choose one of the accounts
where you deployed the CFT and enabled the AWS Benchmarks feature.
Region: Choose the AWS region of the account you want to check
(not necessarily the region where your Sysdig Secure is installed)
Date: Choose a report date. Checks are run once per 24 hours.
Review the daily report (right panel).
Note the following:
% of Resources Passed: Of the controls implemented by
Sysdig, this is the percentage that passed.
Resources Passing: Every control checks multiple resources
(e.g., hundreds of S3 buckets, etc.). This figure displays an
aggregated count of all the resources over all the controls.
Resources Failing: Choose this figure to review a
consolidated list of all failed controls with their remediation
recommendations.
4.1.1.3.3 - Review Benchmark Results
Click a listed task to review the full report, check Pass|Fail
status,
discover remediation
steps, and/or download the report as a CSV file.
- Log in to Sysdig Secure and select
Compliance > Benchmark|Tasks
and select one of the task line items.
If you have installed Sysdig Secure for cloud, AWS Foundations
Benchmarks are listed on the Tasks page, but are handled differently
from the rest of the Host Benchmark results.
A benchmark report is displayed.

From the report page, you can do the following:
Summary: Review the Summary (left panel) to see every
control and its result
Date: Choose the test run from a different date. Use the
date drop-down to see historical results of this report.
Sort and list: by which resources passed/failed the test.
Click the Resources Passed/ Resources Failed
links to filter
the results accordingly.
Drill down to review details and remediate.

After sorting, e.g., by Resources Failed
, you can review the
control details including the recommended Remediation Procedure
.
Optional: Download as CSV using the button at the top of the page.
4.2 - Identity and Access
As cloud services proliferate, so do user access policies, and a majority of enterprises use overly permissive policies that create large attack surfaces and significant security risks. With Sysdig’s Identity and Access module (I&A) for cloud accounts, you can review and mitigate these risks in minutes.
This topic includes the following high-level sections:
Prerequisites
- Sysdig Secure for cloud for AWS
- Can be installed with Terraform or a CloudFormation Template
- Either will enable Threat Detection for CloudTrail, which is required for CIEM to work
- Either installation automatically creates a required IAM role which gives Sysdig read-only access to your AWS resources.
- Terraform role name: sfc-cloudbench
- CFT role name: SysdigComplianceAgentlessRole
- Adequate AWS permissions to read policies related to users, roles, and access
Introduction
Understanding Identity and Access
In Sysdig Secure for cloud, Identity and Access work together with Compliance and Benchmark tools under the Posture navigation tab in the Sysdig Secure menu.

Analysis: From this interface you can quickly ascertain risks from two different angles:
User-Focused Risks
- Users and roles with excessive permissions
- Inactive users that can be removed
- Unnecessary permissions
Resource-focused Risks
- Who can access a resource
- Any suspicious cloud resource activity from a user with excessive permissions
- Recent permissions changes
Remediation: From there, the tool can suggest an improved policy, based on users’ actual activity, which you can immediately paste into your AWS policy in the linked AWS console.
Understanding the Suggested Policy Changes
When you find a user or a policy with excessive permissions, there are two suggested types of remediations:
Global Policy Change: In this case, you click a targeted policy (e.g. AdministratorAccess) from either:
- The policy link on a user’s panel, or
- The Optimize IAM Policy button on a policy panel
A revised policy is suggested based on the activities of all users in the system that have been granted this entitlement.

You would copy the suggested code into your existing policy in the AWS console.
User-Specific or Role-Specific Policy: In this case, when investigating an individual user or role entry, you click Optimize IAM Policy
and a policy is suggested based on a combination of all policies and activities detected for that user or role.

You would copy the suggested code into a new user policy in your AWS console.
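In either case, the suggested code takes the form of a standard AWS IAM policy document. A hypothetical example of the shape you would paste into the console (the actions and resource below are placeholders, not a Sysdig-generated suggestion):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "*"
    }
  ]
}
```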
Understanding Risk Scoring
Risk in Identity and Access Management (IAM) is primarily determined by IAM permissions. The Sysdig Threat Detection team mapped all possible IAM actions to a risk score.
- Risk Score is determined by the worst permission given by a policy. For example, a policy with a Critical Risk Score has at least one permission allowing a Critical action
- Actionable Risk is designed to help you achieve Least Permissive by focusing on Unused permissions, instead of all permissions
- Note: It’s possible for the Actionable Risk and Risk scores to differ if there are Used permissions with a higher risk than Unused ones
- For Users and Roles, Sysdig looks at all attached policies to understand what permissions the users/roles can access. Certain risky attributes such as Admin or Inactive are also taken into account when determining user/role Risk Scores.
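The scoring rules above can be sketched as follows. The severity ranking, data shape, and permission names are illustrative assumptions, not Sysdig's implementation; the example also shows how the two scores can differ when a Used permission outranks all Unused ones:

```python
# Illustrative sketch of the scoring rules described above. The severity
# ranking, data shape, and permission names are assumptions.
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def risk_score(permissions: dict) -> str:
    """Worst severity across all granted permissions."""
    return max(permissions.values(), key=lambda p: SEVERITY[p[0]])[0]

def actionable_risk(permissions: dict) -> str:
    """Worst severity across Unused permissions only."""
    unused = {a: p for a, p in permissions.items() if not p[1]}
    return risk_score(unused) if unused else "low"

# Each entry maps an action to (severity, used?).
perms = {
    "iam:CreateUser": ("critical", True),   # used
    "s3:PutObject":   ("medium", False),    # unused
}
print(risk_score(perms))       # critical: worst of all permissions
print(actionable_risk(perms))  # medium: worst of unused permissions only
```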
Understanding Learning Mode and Disconnected States
Sysdig’s IAM page shows helpful information about cloud accounts and indicates several states for each registered account:
Learning Mode: A cloud account is in learning mode when the account was connected less than 90 days prior. This ensures that the user activity has been profiled for a meaningful amount of time.
Disconnected: A cloud account is in a disconnected state if either of these events occurs:
- Cloud-Connector stops sending events. The timestamp shows the time the last events were received
- The role provisioned on the customer’s AWS account cannot be impersonated
Overview
Access the Overview
- Log in to Sysdig Secure.
- Select Posture > Identity and Access|Overview.
- Review the global Permissions posture from the various panels and use the filtered links to access the Users and Policies subpages as needed.
Filter by Account
On each page in the Identity and Access section, all users and resources are listed by default. If desired, you can focus on a single cloud account, using the Accounts drop-down at the top of the page.
Review Unused Permissions

Total Permissions Usage
See at a glance the number of permissions that have been granted vs those that have actually been used. Click on the Used and Given links to see the related Policies list and remediate those with the highest number of unused permissions.
Users
See at a glance the number of active vs inactive users. Click the Active and Inactive links to see the related Users and Roles lists and to remediate.
Average Permissions Per Policy
See at a glance the average number of permissions granted per policy, per account, and click into the Policies list to remediate.
Average Policies Per User
See at a glance the average number of policies a user is associated with, per account, and click into the Users and Roles list to remediate.
Policies with Unused Permissions
The Inventory section orders the Policies with the greatest number of unused permissions at the top of the list. Click to expand the list and remediate.
Users and Roles with Unused Permissions
The Inventory section orders the Users and Roles with the greatest number of unused permissions at the top of the list. Click to expand the lists and remediate.
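The ordering described above amounts to ranking items by their count of unused permissions. A minimal sketch with made-up data:

```python
# Illustrative sketch of the Inventory ordering: items with the most
# unused permissions come first. All figures below are made up.
policies = [
    {"name": "AdministratorAccess", "given": 5000, "used": 40},
    {"name": "ReadOnlyAccess", "given": 900, "used": 850},
    {"name": "CustomAppPolicy", "given": 60, "used": 1},
]

for p in policies:
    p["unused"] = p["given"] - p["used"]

# Greatest number of unused permissions at the top of the list.
ranked = sorted(policies, key=lambda p: p["unused"], reverse=True)
print([p["name"] for p in ranked])
```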
Users
The AWS IAM Users page provides numerous ways to sort, filter, and rank the detected user information to quickly remediate identity risks associated with users and their policies.

Filter and Sort
Available filters:
- By Actionable Risk
- By Account
- By User Attributes
- Root User
- No MFA
- Inactive
- Admin
- Multiple Access Keys Active
- Access Key Not Rotated
Each column in the table can be sorted to help target, for example, the users with the highest number of granted permissions or the highest percentage of unused permissions.
To reduce the entitlements for a particular user or role:
Click on a user or role to open the detail pane.

In the screenshot example above, the user has not triggered all of the permissions issued, and is associated with two different policies. Full AdministratorAccess
has not been needed for the job the user has been performing.
Decide whether to Optimize IAM Policy
, taking into account all the policies and permissions this user has employed, or whether to use the Suggested Policy
for e.g., AdministratorAccess
, globally. See: Understanding the Suggested Policy Changes.
Copy the generated policy and paste it into a policy in your AWS console.
Roles
The AWS IAM Roles page provides numerous ways to sort, filter, and rank the detected role information to quickly remediate identity risks associated with roles and their policies.

Filter and Sort
Available Filters:
- By Actionable Risk
- By Account
- By Role Attributes
To reduce the entitlements for a particular role:
Click on a role to open the detail pane:

In the screenshot example above, the role has actually triggered only 1 of the 60 permissions issued, and is associated with two different policies.
Decide whether to Optimize IAM Policy
, taking into account all the policies and permissions this role has employed, or whether to use the Suggested Policy for e.g., AmazonEKSClusterPolicy
, globally. See: Understanding the Suggested Policy Changes.
Copy the generated policy and paste it into a policy in your AWS console.
AWS IAM Policies
The Identity and Access|AWS IAM Policies page currently displays AWS policies only. Other cloud vendors will be added over time.

Filter and Sort
As with the Users and Roles page, you can filter by account, and each column in the table is sortable.
Available filters:
- By Actionable Risk
- By Account
- By Policy Attributes (Unused)
- By Policy Type
- AWS Managed
- Customer
- Inline
Each column in the table can be sorted. The most common sorting priorities are:
- By Unused % or Unused Permissions: Immediately target the policies with the greatest exposure and refine them according to the suggestions
- By Shared Policy (# of Users): Focus on the policies affecting the greatest number of users and make a global policy change
To reduce the entitlements globally for a particular policy:
Click on a policy name to open the detail pane.

Click Optimize IAM Policy
and review the proposed code.

You can copy (then paste), download (then upload), or open the adjusted policy directly in the AWS console and save.
Posture Resources
The Resources page will be further developed in future releases.

At this time, you can use the S3 Bucket information to see all the S3 buckets currently set to Public
and switch them to Private
in the AWS console as needed. Similarly, the Lambdas are displayed with their public/private settings.
Download CSV
Each page of the Identity and Access module has a Download CSV
button for retrieving the page data in a spreadsheet.
Note: If your Chrome browser is set to disallow downloading multiple files from a site, you may only get one CSV download and then a “blocked” message in the Chrome address bar. You can click the message to access and change that browser setting, if desired.

Troubleshooting
Check read Access
Sysdig’s Identity and Access feature needs read access for specific resources such as IAM, S3 buckets, and Lambda functions in order to function. Certain AWS policies can block Sysdig from reading this data.
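To illustrate what "blocked read access" means here, the sketch below scans a policy document for explicit Deny statements matching a sample of read-only actions. The action list and the simplified matching logic are assumptions for illustration, not how Sysdig checks access:

```python
import fnmatch
import json

# Illustrative only: find explicit Deny statements in a policy document
# that would block read-only actions a tool relies on. The action list
# and matching logic are simplified assumptions.
NEEDED_READS = ["iam:GetAccountAuthorizationDetails", "s3:GetBucketAcl"]

def blocked_actions(policy: dict) -> list:
    blocked = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Deny":
            continue
        patterns = stmt.get("Action", [])
        if isinstance(patterns, str):
            patterns = [patterns]
        for action in NEEDED_READS:
            if any(fnmatch.fnmatchcase(action, p) for p in patterns):
                blocked.append(action)
    return blocked

policy = json.loads('''{
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Deny", "Action": "iam:*", "Resource": "*"}]
}''')
print(blocked_actions(policy))  # ['iam:GetAccountAuthorizationDetails']
```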
Check Role Provisioning
Verify the role provisioned for Sysdig is correct with this API.
Check Health of cloud-connector
Verify the cloud-connector component is healthy by following these steps:
Within the Sysdig Secure UI, navigate to Posture → Identity and Access → Overview
Click on Learning in the top right corner.
Your connected cloud accounts will be shown with the status of the cloud-connector and the last time Sysdig received an event listed.
If cloud-connector is disconnected and the Last Event Sent timestamp is older than a few hours, user activity will not be monitored. Please check the logs and contact Sysdig support to help resolve the issue.
Limitations
Currently, only identity-based policies (managed, inline, and group policies) are considered for permission calculation. Resource-based policies, permission boundaries, organization SCPs, ACLs, and session policies are not yet accounted for during permission calculations.
More details on these policies here.
Two notes about the data displayed:
- AWS Last seen time is based on GetServiceLastAccessedDetails. For more information, see Amazon’s documentation.
- The AWS permissions used by IAM identities are based on user activity observed in CloudTrail logs. Currently, permissions used after assuming roles are not taken into account.
5 - Policies
Sysdig Secure deploys different types of policies.
Those described in this module include:
Vulnerability Management Policies and Rules for scanning pipeline and runtime images for vulnerabilities (only available after April 20, 2022)
Threat Detection Policies and Rules for all types of security threats such as disallowed actions, excessive permissions, suspicious changes, etc.
There are other optional tools to help automate the creation of policies, such as:
5.1 - CSPM Policies
Overview
CSPM/Risk and Compliance Policies allow you to:
- Search for the policies that match your organizations’ needs
- Configure what is being evaluated by the Compliance feature in the context of compliance standards (CIS, NIST, etc.)
- Create your own custom policies, configure controls that are linked to each requirement
- Review the policy structure and the controls connected to it
- Enable/disable controls on all policies
- Filter controls by enablement status, violation severity, name, and control type
Prerequisites
This feature requires the new Compliance component.
See also:
Navigate Policies List
Select Policies > Compliance | Risk and Compliance Policies
.
Review the Policy list. The included policies are listed alphabetically.

Policy Name/Description: The full policy name and description, in accordance with naming used by, e.g., the Center for Internet Security (CIS). Click the arrow to link directly to the relevant standards website.
Zones: Zones where this policy has been applied. Apply a policy to a zone to show compliance results against the policy in the compliance page.
Version: This column lists the version of the standard published. Not to be confused with the version, e.g., of Kubernetes, listed in the policy name.

Date Published: Date the policy was published
Author: Sysdig
for default policies; creator name for custom policies
Click a row to open the individual policy page.
Create a Custom Policy
Select New Policy
on the top right, or
Select an existing policy to duplicate
Add/edit the Name and Description and click Save
.

Edit the requirement groups and the requirements of your policy.

To edit the controls to each leaf requirement: Select the Link Controls
button, filter for the controls you want in the right-most Not Linked column, and select Link
on them.

Changes are automatically saved.
Navigate a Policy Page
Select a policy from the Policies list to review requirements and controls, enable/disable controls, and filter/search.

Requirement Groups and Requirements: Open the rows in the left pane to view requirement groups and the nested requirements to which the controls are linked. Hover to get the full description text.
Enable/Disable: Toggle to enable/disable an individual control within a policy. The control will be enabled/disabled for ONLY the targeted policy.
Filter: See below.
Filter

Filter Details
Note that any filters can be combined. For example, you could filter to find:
How many high-severity disabled
controls are linked to the policies I care about?
Enabled/Disabled
Click in the Filter box and select Enabled = [True | False].
Optional: Add more filters, such as Severity = High.
Name
Severity
- Click in the Filter box and select Severity in [High | Medium | Low].
Type
5.1.1 - Using CSPM Policies and Requirements (Preview)
This Preview release introduces custom policy handling for CSPM/Compliance policies. This includes the ability to:
- Clone an existing policy and edit its metadata
- Create, edit, and delete a custom policy
- Create, edit, and delete requirements in a custom policy
- Link and unlink available controls to policy requirements
If necessary, review the basics of CSPM Policies to begin.
In most cases, users will want to:
- Start from an existing policy
- Create or edit some requirements
- Link or unlink some controls, and
- Save under a new name.
The process of policy creation is separate from activation, so you can take time to design your policy as needed.
It’s also possible to create a policy entirely from scratch.
Create a Custom Policy
Create a Policy from a Duplicate
Select Policies > Compliance | CSPM Policies
and either:
- Click the New Policy button at the top of the page and select an existing policy name from the resulting drop-down menu, OR
- From the three-dot menu next to a listed policy, select Duplicate.
Edit the Name
and Description
and click Save
.
The duplicated, inactive policy draft is displayed, with the inherited requirements and controls listed.

From here you can add, delete, or edit requirement groups and requirements, link or unlink existing controls, and activate, as described in the following sections.
Create Requirement Groups and Requirements
In a custom policy, requirement groups and requirements can be removed or edited and new ones can be created and added. Requirements and groups are not shared between policies; to reuse a requirement from another policy, you must create a new group and requirement and then link the controls desired.
On the policy page, click +New Group
.
Enter the requirement group name and description and click Save
. The group name is displayed in the left panel.
Optional: Add a subgroup.
Select a requirement group, click the 3-dot menu, and select +New Subgroup
.
Enter the Subgroup name and description and click Save
.

Add a requirement:
Select a group or subgroup, click the three-dot menu, and select +New Requirement.
Enter the requirement name and description and click Save.
You can now link controls to your requirements.
Link and Unlink Controls
Once you have a requirement group and requirement, the Link Controls button is active.
Select a requirement within a requirement group in your policy.
Click Link Controls in the right panel. All available controls are displayed, with the top 20 listed first.
Filter for the desired controls by Name, Severity, and/or Type.

Select the desired control and click Link. Repeat as needed.
Optional: Unlink a control.
From the list of linked controls, hover over a control to reveal the Unlink option.

Click Unlink.
If the policy has already been activated, confirm that you want this control to no longer be evaluated by clicking Yes, Unlink. This action will trigger a policy re-evaluation.
Activate/Deactivate the Policy
When your custom policy is complete:
Select the 3-dot menu beside the policy name and click Activate Policy,

OR open the policy and click the Activate button at the top of the page.
Click Yes, Activate to confirm that the policy should be evaluated and the results added to Compliance Views.
The Date Published will be displayed from the moment of activation.
After activation, any policy edit (e.g., a name change, or controls linked or unlinked) will trigger a re-evaluation, and fresh results will be listed in the Compliance Views after a couple of minutes.
Option: Create a Policy from Scratch
When creating a policy from scratch, you must create all the requirement groups and requirements you want to use and manually link controls to them.
Edit
For custom policies, you can edit:
- Policy name and description
- Requirement group and requirement names, descriptions
- Add/remove requirement groups and/or requirements
- Link/unlink controls
- Activated/deactivated status
All such changes trigger a policy re-evaluation if the policy is active.
Delete
Delete Requirements
Deleting a requirement group or requirement from a policy will delete all associations with linked controls as well.
- Select a requirement group, subgroup, or requirement in a custom policy.
- From the three-dot menu, choose Delete and confirm with Yes, Delete after the warning.
A policy re-evaluation is triggered if the policy is active. Refresh Compliance Views to see the results.
Delete Custom Policies
Deleting an active policy will delete its history of policy evaluations as well.
- Select a custom policy.
- Click the Delete Policy button at the top right.
- Confirm with Yes, Delete after the warning.
A re-evaluation is triggered if the policy is active. Refresh Compliance Views to see the results.
5.1.2 - CSPM Controls (Preview)
Overview
With the CSPM Controls library, you can see the logic behind the compliance results by drilling into the control details:
- To ensure that this compliance product is fit for your organization’s needs
- To know precisely what has been or will be evaluated
- To review a specific control to see its logic and remediation
The features are under development.
Prerequisites
This feature requires the new Compliance component.
If necessary, review the basics of CSPM Policies.
How Controls are Structured
Sysdig controls are built on the Open Policy Agent (OPA) engine, using OPA’s policy language, Rego. The CSPM Controls library exposes the code used to create the controls and the inputs they evaluate, providing full visibility into their logic. You can download the code as a JSON file.
Navigate the CSPM Controls List
Select Policies > Compliance | CSPM Controls.
Select a specific control to open it in the right panel and work with it.
Filter the List
Use the unified filter bar on the left side to limit the control list by:
- Name: Use Contains to enter free text on any word or part of a word in the name
- Severity: Choose the severity level(s) assigned to the control(s) from the drop-down list
- Type: Choose an infrastructure type from the drop-down list
Add multiple parameters to create more specific filter expressions.
Select a specific control.

Review basic attributes.
At the top of the right panel you can see:
- Control title
- Severity
- Type (e.g., Host)
- Author (e.g., Sysdig for out-of-the-box controls)
- Description
- The policies to which the control is linked
Hover over the policy names to get full details, such as the exact requirement number for the particular compliance standard.

Code: Use the provided code snippets.
At this time, the code provides visibility into the precise objects that are evaluated and how the evaluation rules are structured. The display includes Inputs (where applicable) and the evaluation code written in Rego.
You can copy and/or download the input as a .json file.
Remediation Playbook: Follow the recommended steps in the Remediation Playbook to resolve failing controls.

In some cases, you will need to provide the applicable input in the provided remediation code.

5.1.3 - Zones
A zone, in Sysdig, is a collection of scopes that represent important areas of your business. For example, create a zone for your production environment, a staging environment, or a region.
By default, Sysdig creates the Entire Infrastructure zone. For Risk and Compliance evaluation, CIS policies and the Sysdig Kubernetes policy are automatically applied to the Entire Infrastructure zone, and the findings are reported on the Compliance landing page.
To use other policies, you must apply them to zones.
A completed Zone includes:
- Zone name and description
- Zone scope (the area of business to be included)
- Applied policies
Navigate to Policies > Risk and Compliance > Zones.

Click New Zone, enter a zone Name and Description, and click Save.
Define the Scope
Define the Scope by Platform and Scope Attributes.

Supported scope rules for each platform:
Kubernetes
- Distribution (AKS, GKE, EKS, Vanilla Kubernetes)
- Cluster name
- Namespace
- Labels
AWS
- Organization
- Account
- Region
- Labels
Azure
- Organization
- Subscription
- Region
- Labels
GCP
- Organization
- Project
- Region
- Labels
Host (for Docker, Linux hosts)
- Cluster
Apply Policies
Select one or more policies from the drop-down list.
Click Save
. The zone will be listed with the Platform and number of applied policies on the Zones list page.
Note that if a policy is applied on zones that have no relevant resources to evaluate for that policy, results will not appear on the Compliance page.
5.2 - Vulnerability Policies
Overview
Sysdig includes scanning policies for both Pipeline and Runtime vulnerabilities that work out of the box, along with relevant rule bundles. The process of editing or creating new policies and rules is similar for both.
Available Rules
Vulnerability Rules
Severities and Threats
Scanning software for vulnerabilities is a primary concern; at the same time, reported vulnerabilities may not be relevant to the particular production environment being analyzed, and it is usually unrealistic to achieve an environment with no vulnerabilities at all for a particular software package. Each organization sets an acceptable risk threshold for a vulnerability in order to decide whether the evaluated asset is within acceptable boundaries or should be considered non-compliant.
CVE DenyList
If any vulnerability listed in this rule is detected, the rule will fail, regardless of severity or any other vulnerability attribute.
ImageConfig Rules
An OCI Image Configuration is a JSON document describing images for use with a container runtime and execution tool, and their relationship to filesystem changesets.
In short, it comprises the image configuration and metadata.
For example:
- Entrypoint / CMD
- Configured user
- Environment variables
- Labels
- Author
- Creation time
- Build history
- … (many other config keys, some mandatory, some optional)
Dockerfiles vs. Image Configuration:
A Dockerfile is written in a language used to generate the resulting image, which contains the ImageConfiguration file inside. Although Dockerfiles and Image Configuration files are closely related, they are not the same concept. Compliant ImageConfiguration files can be generated using development tools other than Docker/Dockerfiles.
Default User
The default user configured to run the entrypoint or CMD.
Defaulting to root is discouraged, as it can confer unnecessary privileges and allow an attacker easier privilege escalation or lateral movements if successfully exploited.
Apart from avoiding root, this rule also allows specifying a particular user (e.g., jenkins) that must be set; otherwise the rule fails.
Recommended Instructions
The use of the ADD instruction is discouraged, as COPY is more predictable and less error prone.
Package Manager Instructions
This rule forbids the use of package manager instructions, per recommended security practices. (Directly fetching the latest available version of a package(s) using a package manager during image build can lead to non-reproducible builds, so may be discouraged.)
The following package managers / update subcommands are currently detected from the image’s build history:
- apk: `.*apk upgrade.*`
- apt: `.*apt-get upgrade.*`, `.*apt upgrade.*`
- yum: `.*yum upgrade.*`
- rpm: `.*rpm (--upgrade|-U).*`
- pip: `.*pip3* install (--upgrade|-U).*`
- pipenv: `.*pipenv update.*`
- poetry: `.*poetry update.*`
- npm: `.*npm update.*`
- yarn: `.*yarn update.*`
- composer: `.*composer update.*`
- cargo: `.*cargo update.*`
- bundle: `.*bundle update.*`
- gem: `.*gem update.*`
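As a sketch of how this detection could work, the following Python snippet applies a few of the patterns above to image build-history lines. The function name and the (abbreviated) pattern list are illustrative, not Sysdig's actual implementation:

```python
import re

# Abbreviated pattern list taken from the detection list above
# (illustrative only; not Sysdig's scanner code).
UPGRADE_PATTERNS = [
    r".*apk upgrade.*",
    r".*apt-get upgrade.*",
    r".*apt upgrade.*",
    r".*yum upgrade.*",
    r".*rpm (--upgrade|-U).*",
    r".*pip3* install (--upgrade|-U).*",
]

def first_violation(history_lines):
    """Return the first build-history line matching an upgrade pattern, if any."""
    for line in history_lines:
        for pattern in UPGRADE_PATTERNS:
            if re.match(pattern, line):
                return line
    return None

history = [
    "COPY requirements.txt /app/",
    "RUN apt-get update && apt-get upgrade -y",
]
print(first_violation(history))  # RUN apt-get update && apt-get upgrade -y
```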
Image Creation Date
The creation date of an image can be used to indicate that the image has become stale.
NOTE: Image creation date is an optional attribute, so this rule will also fail if the date has not been declared.
Sensitive Information
Leakage of sensitive information is one of the most severe security issues and has often led to actual security breaches. By enabling this rule, the ImageConfig metadata will be parsed for sensitive strings.
Example violation of an AWS secret found in the image label AWS_TOKEN:

The currently available detections for this rule are:
- AWS secrets:
  - AKIA keys: `AKIA[0-9A-Z]{16}`
  - Any other key: `aws.{0,20}?(?:key|pwd|pw|password|pass|token).{0,20}?`
- Azure storage account key
- Basic Auth: detects `[http,ssh]://user@pass:domain.com`
- JWT token
- Private keys: checks whether strings contain "BEGIN DSA PRIVATE KEY", "BEGIN EC PRIVATE KEY", "BEGIN OPENSSH PRIVATE KEY", "BEGIN PGP PRIVATE KEY BLOCK", "BEGIN PRIVATE KEY", "BEGIN RSA PRIVATE KEY", "BEGIN SSH2 ENCRYPTED PRIVATE KEY", or "PuTTY-User-Key-File-2"
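For instance, the AKIA-key pattern above can be exercised with a short Python sketch; the label-scanning helper is hypothetical, not Sysdig's code:

```python
import re

# AKIA access-key pattern from the detection list above.
AKIA_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

def labels_leaking_aws_keys(labels):
    """Return the names of image labels whose values look like AWS key IDs."""
    return [name for name, value in labels.items() if AKIA_KEY.search(value)]

labels = {
    "maintainer": "team@example.com",
    "AWS_TOKEN": "AKIAIOSFODNN7EXAMPLE",  # AWS's documented example key ID
}
print(labels_leaking_aws_keys(labels))  # ['AWS_TOKEN']
```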
Create Rule Bundles
A rule bundle is a set of scanning rules that are grouped together.
Note:
- Default Sysdig rule bundles (identified with the Sysdig shovel icon) cannot be deleted, but they can be duplicated if you want to use them as a template for a new rule bundle
- The same rule bundle can be used for several different policies
- Rule order is irrelevant from the evaluation perspective, but you can organize rules to your liking for easier visualization
Creation Steps
Navigate to Policies > Rule Bundles
and click +Add Bundle
.

Enter the parameters:
- Name: User-assigned name for this rule bundle
- Description: User-assigned rule bundle description
- Rules: A rule bundle is composed of 1..N scanning rules; you can use the visual editor to create and configure new rules (represented as “cards” in the interface).
Click Save
. You can now attach this rule bundle to policies.
Example
In the example below, a particular vulnerability will fail the check if:
- The severity is High or Critical AND
- It was discovered 60 days ago or more AND
- It has a published fix AND
- There is a public exploit available

Notes:
- You can create multiple versions of the same rule template in the same rule bundle; i.e., you can have two or more cards like the one above of type Vulnerabilities: Severities and Threats
- Conditions within the same rule are evaluated with AND logic; as in the example above, a vulnerability needs to meet all the conditions in order to be considered a violation
- The rules in a rule bundle are evaluated using OR logic: if any rule is in violation, the rule bundle is in violation
- If any rule bundle is in violation, the policy containing it is also in violation and is considered “failed”
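The AND/OR semantics above can be sketched in a few lines of Python (the names and data shapes are illustrative; this is not Sysdig's evaluation engine):

```python
def rule_violated(vuln, conditions):
    """Conditions within a single rule are combined with AND."""
    return all(cond(vuln) for cond in conditions)

def bundle_violated(vulns, rules):
    """Rules within a bundle are combined with OR."""
    return any(rule_violated(v, rule) for rule in rules for v in vulns)

def policy_failed(vulns, bundles):
    """A policy fails if any of its rule bundles is in violation."""
    return any(bundle_violated(vulns, bundle) for bundle in bundles)

# A vulnerability matching the example rule above: High/Critical severity,
# at least 60 days old, with a published fix and a public exploit.
vuln = {"severity": "Critical", "days_old": 90, "has_fix": True, "exploit": True}
rule = [
    lambda v: v["severity"] in ("High", "Critical"),
    lambda v: v["days_old"] >= 60,
    lambda v: v["has_fix"],
    lambda v: v["exploit"],
]
print(policy_failed([vuln], [[rule]]))  # True
```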
Create Scanning Policies
You can create custom scanning policies and rule bundles as needed to meet your organization’s vulnerability management guidelines. The basic concepts of scanning policies and rules are:
- An image can be evaluated with 1..N policies at the same time
- A policy can contain 1..N rule bundles to be evaluated
- A rule bundle is composed of any number of rules to be evaluated
Pipeline
Navigate to Policies | Vulnerabilities > Pipeline
. The Pipeline scanning policy list is displayed.
Click +Add Policy|Pipeline
.
Enter the parameters:
Name: User-assigned name for this policy
Description: User-assigned policy description
Always apply toggle: Mapping strategy to use:
- If Always Apply is enabled, every execution of the scanner will apply this policy. This cannot be overridden by CLI parameters.
- If Always Apply is disabled, this policy must be explicitly requested when executing the scanner in order to apply it to the evaluation.
Rule Bundles: A policy contains rule bundles to be evaluated. Using this widget you can add, remove, or modify the bundles used for this policy.
How to Scan Images with this policy: Helper widget that previews the command line to be used in order to apply the policy to the scanner run. See also: Getting Started with Sysdig Secure.
Click Create
.
Runtime
Navigate to Policies | Vulnerabilities > Runtime
. The Runtime scanning policy list is displayed.
Click +Add Policy|Runtime
.

Enter the parameters:
Name: User-assigned name for this policy
Description: User-assigned policy description
Scope: Consists of asset types (container image workloads and hosts) and subsets of scope values.
- Asset Type: Default is Any Asset. Select Host or Workload to narrow.
- Scope: Use Entire Infrastructure or build out a desired scope. Scope values applicable to the chosen asset type(s) are displayed. Click See Workloads in this Scope to check that the scope is valid and working as expected.
- NOTE: If you use asset type Any Asset, the only scopes that apply to both hosts and workloads are Entire Infrastructure and kubernetes.cluster.name.
Rule Bundles: A policy contains rule bundles to be evaluated. Using this widget you can add, remove, or modify the bundles used for this policy.
Risk Acceptance Summary
If a user chooses to accept the risk of a known vulnerability, the list of those acceptances is compiled on this summary page.

Reminder: The risks accepted in Compliance are viewed on the Compliance pages.
Usage
Use the summary page to search for acceptances that are expired or close to expiry and manage them. You can also:
- Use the Search field to search by CVE name or image tag
- Filter by Entity or Acceptance Reason
- Filter by Active acceptances
To manage expired acceptances:
Navigate to Policies > Vulnerability|Risk Acceptance
.
Find the relevant entries by:
- Clicking the Expired button in the top filter
- Sorting by Expiration date in the table header
- Using the search bar for a targeted search
Click the row for each expired entry and either extend the time limit or revoke the Acceptance.

Note: When an acceptance expires, it no longer excludes the vulnerability from the vuln count.
5.3 - Threat Detection Policies and Rules
This page introduces Sysdig threat detection policies and the rules that comprise them,
providing the conceptual background needed to create, edit, and apply
security policies in your own environment.
Understanding Threat Detection Policies
A Sysdig Secure policy is a combination of rules about activities an enterprise wants to detect in an environment, the actions that should be taken if the policy rule is breached, and, potentially, the notifications that should be sent. A number of policies are delivered out of the box and can be used as-is, duplicated, or edited as needed. You can also create policies from scratch, using either predefined rules or creating custom rules.
Managed Policies, Managed Rulesets, and Custom
As of July 2022, threat detection policies have three “flavors”:
Default/Managed Policies: These are the default policies provided and managed by Sysdig. The Sysdig Threat Research team may update them at any time.
Default policies exist across all accounts; their names cannot be changed, and they cannot be deleted.
They are loaded with a pre-defined enabled/disabled status based on most common usage, but you can enable or disable them at will.
Only the Scope and Action (such as notification channel) can be edited.

Note: Earlier versions of Sysdig Secure had “default” policies that were not managed by Sysdig and used different naming. See the release notes for information about that transition.
If you want to edit other attributes, you can Duplicate policies to create:
Managed Ruleset Policies: Name, Description, and Severity can also be edited.
As with the default Managed policies, Managed Ruleset policies may be updated by the Sysdig Threat Research team.
Use case example: You need different scopes or actions (such as notification channels) for the same set of rules within a Managed Policy.

If you want to change the rules in a policy, then you need:
Custom Policies: These can be created three ways:
- Converting a Default policy to Custom
- Converting a Managed Ruleset policy to Custom
- Creating a policy from scratch
- (Any policies from before July, 2022 are auto-converted to Custom policies and continue to work as they did before.)
Custom policies cannot be updated by the Sysdig Threat Research team. If/when Sysdig creates new rules, the user must apply them to custom policies themselves.
Reviewing the Runtime Policies List
Select Policies > Runtime Policies to see the default policies loaded into Sysdig Secure, as well as any custom policies you have created.

From this overview, you can:
See at a Glance
- Severity Level: Default policies are assigned High, Medium, Low, or Info severity, which can be edited.
- Enabled/Not Enabled: Shown by toggle position.
- Policy Summary: Includes Update status, the number of Rules, assigned Actions to take on affected containers (Stop | Pause | Notify), and Capture details, if any.
- Policy Status: Default policies are managed policies, Ruleset policies are managed ruleset policies, and Custom policies may be user-designed from scratch or converted from default policies with changes to their rules.
- Policy Type icons
Take Action
From this panel you can also:
- Drill down to policy details (and potentially edit them)
- Search and filter policies by name, severity level, policy type, or whether captures are enabled
- Enable/Disable a policy using the toggle
- Create a new policy using the +Add Policy button
Review Policy Types
Additional types are added periodically.

Runtime Policies
Workload Policy
Powered by the Falco engine, these provide a way to filter system calls using flexible
condition expressions. See Using Falco within Sysdig Secure for more context.
List-Matching Policy
Policies using a simple matching or not-matching for containers, syscalls, processes, etc. See
Understanding List Matching Rules for more context.
Drift Policy
Policy with a single rule that provides default drift detection and prevention.
See also: Understanding DriftControl and Additional Parameters for Drift Policy Type.
Machine Learning
Policy leveraging Machine Learning to provide advanced detection capabilities.
See also: Understanding Machine Learning and Additional Parameters for Machine Learning Policy Type.
Log-Detection Policies
Kubernetes Audit Policy
Powered by the Falco engine, these provide a way to filter Kubernetes audit logs using flexible condition expressions. See also Kubernetes Audit Logging.
AWS CloudTrail Policy
Provides a way to filter AWS CloudTrail events using Falco-compatible condition expressions. You need to have Sysdig Secure for cloud installed to transmit your AWS CloudTrail events.
GCP Audit Log Policy
Provides a way to filter GCP audit logs using Falco-compatible condition expressions.
Azure Platform Logs Policy
Provides a way to filter Azure platform logs using Falco-compatible condition expressions.
Scopes and Actions for Policy Types
The scopes and actions available differ by type:
| Policy Type | Scope Options | Action Options |
|---|---|---|
| RUNTIME | | |
| Workload | Custom, Hosts only, Container only | Stop/pause/kill, Capture, Notification channel |
| List-Matching | Custom, Hosts only, Container only | Stop/pause/kill, Capture, Notification channel |
| Drift | Custom only | Prevent, Notification channel |
| LOG DETECTION | | |
| Kubernetes | kubernetes.cluster.name, kubernetes.namespace.name | Notification channel |
| AWS Cloud | aws.accountId, aws.region | Notification channel |
| GCP | gcp.projectid, gcp.location | Notification channel |
| Azure | azure.subscriptionId, azure.tenantId, azure.location, azure.resourceGroup | Notification channel |
Understanding DriftControl
Drift is the change in an environment that differs from the expected state checked into a version control system, e.g. software that was introduced, updated, or upgraded into a live environment.
Sysdig’s DriftControl feature uses various detection techniques, such as watching the system for when new executables are downloaded, updated, or modified inside a container which was not part of the container image before the container started up.
With the default agent configuration, a Drift policy/rule will stop such a detected process after it has begun.
If it is necessary to ensure that a particular task should be blocked from ever starting, you can enable the following configuration in the agent config file:
drift_killer:
    enabled: true
Or, if using Helm, add the --set agent.sysdig.settings.drift_killer.enabled=true flag.
Be aware that this option uses ptrace
, which is more resource-intensive than the default mode.
Understanding Machine Learning Policies
Prerequisite: Machine Learning policies require enabling the Profiling Sysdig Labs toggle to activate the underlying fingerprint collection mechanism.
Machine Learning collects low-level activities from your infrastructure, aggregating them over time and applying algorithms.
With machine learning policies you can configure the detections you want to use and their thresholds.
Machine Learning detection algorithms work by estimating the probability that those activities are related to the detection subjects, e.g., miners. Sysdig Machine Learning detections don’t rely on mere program names or executable checksum matching. Instead, they are based on actual runtime behaviors, collected in the form of fingerprints by the Profiling feature.
Understanding How Policy Actions Are Triggered
Policy actions occur asynchronously. If a policy has a container action and matched activity, the agent asks the Docker/CRI-O daemon to perform the stop/kill/pause action. This process takes a few minutes, during which the container still runs and the connect/accept etc. still occur.
Understanding Threat Detection Rules
Rules are the fundamental building blocks you will use to compose your
security policies. A rule is any type of activity that an enterprise
would want to detect in its environment.
Rules can be expressed in two formats:
Falco rules
syntax, which can
be complex and layered. All the default rules delivered by Sysdig
are Falco rules, and users can also create their own Falco rules.
List-matching rules syntax, which is simply a list against which
a match/not match
condition is applied. All these rules are
user-defined. They are grouped into five types: Container Image,
File System, Network, Process, and Syscall.
Understanding the Rules Library
The Rules Library includes all created rules which can be referenced in
policies. Out of the box, it provides a comprehensive runtime security
library with container-specific rules (and predefined policies)
developed by Sysdig’s threat-research teams, Falco’s open-source
community rules, and international security benchmarks such as
CIS or MITRE
ATT&CK.

Audit-Friendly Features
In the Rules Library interface, you can see Published By and Last Updated at a glance, for enhanced traceability and audit.
- Default rules appear in the UI as Published By: Sysdig
- User-defined rules appear as Published By: Secure UI
Rules are categorized by tags, so you can group them by functionality,
security standard, target, or whatever schema makes sense for your
organization.
Various tags are predefined and can help you organize rules into logical
groups when creating or editing policies.
Search
Use the search boxes at the top to search by rule name or by tag.
Using Falco within Sysdig Secure
What is Falco
Falco is an open-source intrusion detection and activity monitoring
project. Designed by Sysdig, the project has been donated to the Cloud
Native Computing Foundation, where it continues to be developed and
enhanced by the community. Sysdig Secure incorporates the Falco Rules
Engine as part of its Policy and Compliance modules.
Within the context of Sysdig Secure, most users will interact with Falco
primarily through writing or customizing the rules deployed in the
policies for their environment.
Falco rules consist of a condition under which an alert should be
generated and an output string to send with the alert.
Conditions
Falco rules use the Sysdig filtering
syntax.
(Note that much of the rest of the Falco documentation describes
installing and using it as a free-standing tool, which is not
applicable to most Sysdig Secure users.)
Rule conditions are typically made up of macros and lists.
Macros are simply rule condition snippets that can be
re-used inside rules and other macros, providing a way to factor
out and name common patterns.
Lists are (surprise!) lists of items that can be included in
rules, macros, or other lists. Unlike rules/macros, they can not
be parsed as Sysdig filtering expressions.
Behind the scenes, the falco_rules.yaml
file contains the raw code for
all the Falco rules in the environment, including Falco macros and
lists.
Anatomy of a Falco Rule
All Falco rules include the following base parameters:
- rule name: default or user-assigned
- condition: the collection of fields and arguments used to create the rule
- output
- source
- description
- tags: for searching and sorting
- priority
Select a rule from the Rules Library
to see or edit its underlying
structure. The same structure applies when creating a new Falco rule and
adding it to the library.
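For illustration, here is a minimal Falco-style rule showing these base parameters. The values are a simplified sketch modeled on the community "Terminal shell in container" rule, not necessarily a rule shipped by Sysdig:

```yaml
- rule: Terminal shell in container          # rule name
  desc: A shell was spawned interactively inside a container.
  condition: container.id != host and proc.name = bash and evt.type = execve
  output: Shell spawned in a container (user=%user.name container=%container.name)
  priority: WARNING
  source: syscall
  tags: [container, shell, mitre_execution]
```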
(Screenshots: an existing rule's structure, and the Create a Rule view.)
About Falco Macros
Many of the Falco rules in the Rules Library contain Falco macros in
their condition
code.
You can browse the Falco Macros list, examine a macro’s underlying code,
or create your own macro. The default Falco rule set defines a number of
macros that make it easier to start writing rules. These macros provide
shortcuts for a number of common scenarios and can be used in any
user-defined rule sets.

About Falco Lists
Default Falco lists are added to improve the user experience around
writing custom rules for the environment.
For example, the list allow.inbound.source.domains
can be customized
and easily referenced within any rule.
(On-Prem Only) Upgrading Falco Rules with the Rules Installer
Sysdig Secure SaaS is always using the most up-to-date Falco rules set.
Sysdig Secure On-Prem accounts should upgrade their Falco rules set
regularly.
This can be achieved through our Rules Installer.
Understanding List-Matching Rules
List-matching rules (formerly known as “fast” rules) are used for matching against lists of items (when matchItems=true) or matching everything other than the listed items (when matchItems=false). They provide for simple detections of processes, network connections, and other operations. For example:
If this process is detected, trigger an action when this rule is in
a policy (such as send notification).
Or
If a network connection on x port is detected, trigger an action
when this rule is in a policy (such as send notification)
Unlike Falco rules, the list-matching rule types do not permit complex
rule combinations, such as “If a connection on x port from y IP
address is detected…”
The five list-matching Rule Types are described below.
Container Rules
These rules are used to notify if a specific image name is running in an
environment. The rule is evaluated when the container is started. The
items in the list are image pattern names, which have the syntax
<host.name>:<port>/<name>/<name2>:<tag>@<digest>
.
Only <name2> is required; everything else is optional and inferred from the name.
See also: How Matching Works: Container
Example
and Create a List-Matching Rule: Container Type
Example.
File System Rules
These rules are used to notify if there is write activity to a specific
directory/file. The rule is evaluated when a file is opened. The items
in the list are path prefixes.
For example: /one/two/three would match /one/two/three and /one/two/three/four, but not /one/two/three-four.
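That prefix semantics can be sketched as follows (an illustrative helper, not Sysdig's implementation):

```python
def path_matches(prefix, path):
    """An item matches when it equals the path or is a parent directory of it."""
    return path == prefix or path.startswith(prefix + "/")

print(path_matches("/one/two/three", "/one/two/three/four"))  # True
print(path_matches("/one/two/three", "/one/two/three-four"))  # False
```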
Network Rules
These rules are used to:
Note that the current Sysdig UI talks about “Allowing” or “Denying”
connections with network rules, but this can introduce some confusion.
For both Inbound and Outbound connections:
You would still need to add the rule to a policy and attach actions to
respond to a connection attempt by stopping/pausing/killing
the
container where the connection occurred. See also: Understanding How
Policy Actions Are
Triggered.
Process Rules
These rules are used to detect if a specific process, such as SSH, is
running in a particular area of the environment.
The rule is evaluated when a process is launched. The items in the list
are process names, subject to the 16-character limit enforced by the
Linux kernel. (See also: Process Name Length
information.)
Syscall Rules
The syscall
rule type is almost never deployed in user-created
policies; the definitions below are for information only.
These rules are used (internally) to:
The rule is evaluated on syscalls that create inbound
(accept, recvfrom, recvmsg, listen
) and/or outbound
(connect, sendto, sendmsg
) connections. The items in the list are port
numbers.
How Matching Works: Container Example
A Container Image consists of the following components:
<registry host>:<registry port>/<image>:<tag>@<digest>
.
Note that <image>
might consist of multiple path components such as
<project>/<image>
or <project>/<subproject>/<image>.
Complete example:
docker.io:1234/sysdig/agent:1.0@sha256:da39a3ee5e6b4b0d3255bfef95601890afd80709
Where:
<registry host>
= docker.io
<registry port>
= 1234
<image>
= sysdig/agent
<tag>
= 1.0
<digest>
= sha256:da39a3ee5e6b4b0d3255bfef95601890afd80709
Each item in the containers list is first broken into the above
components, using the following rules:
If the string ends in /
, it is interpreted as a registry host and
optional registry port, with no image/tag/digest
provided.
Otherwise, it is interpreted as an image. The registry host and port
may precede the image and are optional, and the tag and digest may
follow the image, and are optional.
Once the item has been broken into components, they are considered a
prefix match against candidate image names.
Examples:
- docker.io:1234/sysdig/agent:1.0@sha256:da39a3ee5e6b4b0d3255bfef95601890afd80709: must match all components exactly
- docker.io:1234/sysdig/agent:1.0: must match the registry host, port, image, and tag, with any digest
- docker.io:1234/sysdig/agent: must match the registry host, port, and image, with any tag or digest
- sysdig/agent: must match the image, with any tag or digest. Would not match an image docker.io:1234/sysdig/agent, as that image provides additional information not in the match expression.
- docker.io:1234/: matches all images for that registry host and port
- docker.io/: matches all images for that registry host
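The component-wise matching above can be approximated with a short Python sketch. It is simplified and illustrative; the real matcher may differ in edge cases (for example, registry-only items without ports):

```python
def parse(item):
    """Split an image reference into host, port, image, tag, and digest."""
    host = port = tag = digest = None
    rest = item
    if "@" in rest:
        rest, digest = rest.split("@", 1)
    # A registry host is present when the first path component has a '.' or ':'.
    parts = rest.split("/", 1)
    if len(parts) == 2 and ("." in parts[0] or ":" in parts[0]):
        hostport, rest = parts
        host, _, port = hostport.partition(":")
        port = port or None
    if ":" in rest:
        rest, tag = rest.rsplit(":", 1)
    return {"host": host, "port": port, "image": rest or None,
            "tag": tag, "digest": digest}

def matches(item, candidate):
    want, have = parse(item), parse(candidate)
    # Registry host and port must agree exactly (including their absence);
    # an omitted image, tag, or digest in the item matches anything.
    if (want["host"], want["port"]) != (have["host"], have["port"]):
        return False
    return all(want[k] is None or want[k] == have[k]
               for k in ("image", "tag", "digest"))

print(matches("sysdig/agent", "sysdig/agent:1.0"))                    # True
print(matches("sysdig/agent", "docker.io:1234/sysdig/agent"))         # False
print(matches("docker.io:1234/", "docker.io:1234/sysdig/agent:1.0"))  # True
```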
Getting Started
There are optional tools to help automate the creation of
policies. See also: Network Security Policy Tool to author and fine-tune Kubernetes network policies
5.3.1 - Manage Threat Detection Policies
Overview
Review Threat Detection Policies, if needed.
In general, users will:
- Use Default Managed policies out of the box, defining only the Scope, actions such as Notification Channels, and enabling/disabling the policy
- Duplicate a policy to create a Managed Ruleset, editing additional parameters such as Name, Description, and Severity
- Require custom rules, and either convert an existing policy or build the policy parameters and ruleset from scratch
Steps to Create a Custom Policy from Scratch
Log in to Sysdig Secure and select Policies > Runtime Policies.
On the Runtime Policies list page, select +Add Policy.
Select Type: Select the policy type and define the policy parameters.
Note: The Scope available will differ by policy type.
Define parameters: E.g., Name, Description, Severity, etc. Most policy types have the same parameters; Drift and Machine learning have some differences.
Add rules: Add or edit the rules to be used.
Define actions: to be taken if the policy rules are breached.
Enable and Save the policy.
Details in the following sections.
Policy Details
Select the Policy Type
When you click +Add Policy, you are prompted to choose the desired Policy Type. See also: Review Policy Types

Define the Basic Parameters
The policy parameters differ mainly by the Scope and Actions available for the selected type.

Name and Description: Provide meaningful, searchable descriptors.
Enabled/Disabled: Once enabled, the policy will begin to generate events.
Severity: Choose the appropriate severity level as you would like to see it in the Runtime Policies UI. Policy severity is subjective and is used to group policies within a Sysdig Secure instance.
NOTE: There is no inheritance between the underlying rule priorities and the severity you assign to the policy.
Scope: Define the scope to which the policy will apply, based on the type-dependent options listed.
Link to Runbook: (Optional) Enter the URL of a company procedure that should be followed for events resulting from this policy, e.g. https://www.mycompany.com/our-runbook-link. If you enter a value here, a View Runbook option will be displayed in any corresponding event.
Additional Parameters for Drift Policy Type
The Drift policy differs from the other policy types in a few ways:

- 1:1 Policy:Rule: Drift includes only one rule.
- Prevent: You can toggle the Prevent action to stop the binary from ever starting.
- Dynamic Deny List: When enabled, the policy evaluates and tracks any downloaded executable on the container. If that executable attempts to run, Sysdig creates an alert, or denies the executable from running if Prevent is enabled.
- Exceptions: A user-defined list that allows a downloaded executable to run without triggering an alert.
- Always Deny: A user-defined list of executables that are always blocked from running, even if they were built with the image.
Additional Parameters for Machine Learning Policy Type
The Machine Learning policy differs from the other policy types in a few ways:

- Detection types: Choose which machine learning-based detections to enable in your policy. Only Crypto Mining Detection is supported at this time.
- Confidence level: Fine-tune the policy to choose the certainty level at which a detection should trigger an event.
- Severity: Defined at the detection level, so that you can set a different severity for each detection type.
Add Rules
You can select existing rules from the Library or create new rules on
the fly and add them to a policy.
The Policy Editor interface provides many flexible ways to add rules to
or remove rules from a Policy; the instructions below demonstrate one
way.
See also: Manage Rules
Import from Library
From the New Policy (or Edit Policy) page, click Import from Library.
The Import from Rules Library page is displayed.

Select the checkboxes by the rules to import.
You can pre-sort a collection of rules by searching for particular keywords or tags, or by clicking a colored Tag icon.
Click Mark for Import.

A blue Import icon appears to the right of the selected rules and the Import Rules button is activated.

Click Import Rules.
The Policy page is displayed with the selected rules listed.

You can remove a rule from a Policy by clicking the X next to the rule in the list.
Create a Rule from the Policy Editor
If you click New Rule instead of Import from Library, you will be linked
to the procedure described in Create a
Rule.
Define Actions
Determine what should be done if a Policy is violated. See also: Understanding How Policy Actions Are Triggered.
Containers
Select what should happen to affected containers if the policy rules are breached:
Nothing (alert only): Do not change the container behavior; send a notification according to the Notification Channel settings.
Kill: Kills one or more running containers immediately.
Stop: Allows a graceful shutdown (10 seconds) before killing the container.
Pause: Suspends all processes in the specified containers.
For more information about the stop and kill commands, see Docker's documentation.
If you have agent 12.10.0+, the agent can be configured to ignore kill/pause/stop actions, regardless of the policy. To enable this, edit the following parameter in dragent.yaml (the default is false):
security:
  ignore_container_action: true
See also: Understanding Agent Configuration.
Capture
Toggle Capture ON if you want to create a capture in case of an event,
and define the number of seconds before and after the event that should
be in the snapshot.
As of June 2021, you can add the Capture option to policies affecting events from both the Sysdig agent and Fargate serverless agents.
Note that for serverless agents, manual captures are not supported; you must toggle on the Capture option in the policy definition.
See also: Captures.
Notification Channels
Select a notification channel from the drop-down list, for sending
notification of events to appropriate personnel.
See also: Set Up Notification Channels.
Duplicate or Convert a Managed Policy
Select a row in the Runtime Policies list to expand the policy details and access the icons to Edit, Copy, or Delete the policy.

Duplicate to Create a Managed Ruleset
Select a Managed Policy in the Runtime Policies list and click the Duplicate icon in the details panel.
Optionally edit any of the parameters except the rules.
Click Save.
The new policy will appear in the Runtime Policies list tagged Ruleset.
Note you can also duplicate a Ruleset, if desired.
If the Sysdig Threat Research team updates the underlying ruleset in the Default policy on which it was based, the Managed Ruleset policy will be updated accordingly.
Convert to Create a Custom Policy
Select a Default or a Ruleset policy from the Runtime Policies list and click the Edit icon in the details panel.
Click the Convert to Custom button in the middle of the page.
You can now edit everything about this policy, including the rules. It will not be managed/updated by the Sysdig team; if new rules are offered, the user is responsible for adding them to the custom policies as desired.
Click Save.
Duplicating a custom policy simply creates another unmanaged custom policy.
Edit a Policy
Only certain changes can be made to a managed policy:
- Enable/disable the policy
- Set policy scope
- Set notifications
- New: Disable (or re-enable) individual rules (also available for custom policies)
Disable Individual Rules
As of September, 2022, you can disable individual rules within any policy or managed ruleset.
The primary use cases for this feature are:
- Using a subset of rules in a policy while retaining the “managed” status of the policy/ruleset and continuing to receive any updates that are pushed from Sysdig
- Temporarily disabling a rule that is generating many events, until the cause is investigated or an appropriate exception is put in place.
To disable a rule:
Select a threat detection policy from the Policies list and click the Edit (pencil) icon in the slide-out panel.
The Policy details page is displayed.

Slide the toggle left for the rule(s) you want to disable.
Click Save.
5.3.2 - Manage Threat Detection Rules
Review Understanding Threat Detection Rules to get started.
Access the Rules Library
Select Policies > Rules | Rules Library.
The Rules Library is displayed.

Tips:
Rules are listed alphabetically by name.
Search: Click the magnifying glass if the Search field is not
automatically opened. Search by words in the rule name.
Published by: Remember that default (Falco) rules show up as Published by: Sysdig; user-created rules show as Published by: Secure UI. See also: Edit a Rule.
Usage: Shows the number of policies where the rule is used, and whether those policies are enabled. Click the rule to see the policy names in the Rule Detail panel.
Create a Rule
There are different interfaces for creating Falco rules vs.
list-matching rules.
Create a Falco Rule
From the Rules Library page, click +Add Rule and select Falco from the drop-down.
The New Rule page for the Falco rule type is displayed.

Enter the parameters:
Name and Description: Create a name and a meaningful description for the rule.
Condition and Output: Write the condition code and outputs required. See Supported Fields for more information.
Priority: This is a required field, to meet the Falco rule syntax.
Source: Define whether the rule detects events using the Kubernetes audit data source or the standard syscall mechanisms.
Tags: Select relevant tags from the drop-down or add your own custom tag.
Click Save.
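The fields above correspond directly to Falco's YAML rule syntax. A minimal sketch follows; the rule name, condition, and output are illustrative examples, not a shipped Sysdig rule:

```yaml
- rule: Example Write Below Etc
  desc: Detect writes below /etc (illustrative example only)
  condition: evt.type in (open, openat) and evt.is_open_write=true and fd.name startswith /etc
  output: "File opened for writing below /etc (user=%user.name file=%fd.name proc=%proc.name)"
  priority: WARNING
  source: syscall
  tags: [filesystem, custom]
```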
Create a List-Matching Rule: Container Type Example
Suppose you want to detect whenever someone uses a specific container image that has known problems. In this case, a Container rule would be appropriate. (The other list-matching rule types have similar entry fields, as appropriate to their type.)
From the Rules Library page, click +Add Rule and select Container from the drop-down.
The New Rule page for the Container rule type is displayed.

Enter the parameters:
Name: Enter a name, e.g. Problematic Images.
Description: Enter a description, e.g. Images that shouldn't be used.
If Matching / If Not Matching: Select If Matching. When added to a policy, if the rule conditions match, then the policy action you define (such as "send notification") will be triggered.
Containers: Add the container name(s) that are problematic, e.g. cassandra:3.0.23.
Tags: Select relevant tags from the drop-down, e.g. database and container.
Click Save.
Review a Rule Detail Panel
From the Rules Library list, select a rule to see its details.

From here you can:
Review the rule definition, including clicking embedded macros
to open their details in a pop-up window
See all the tags associated with the rule (colored boxes)
Check all policies in which the rule is used and see whether
those policies are enabled or disabled.
Edit a Rule
Any rules published by Sysdig are default and read-only. You can append to their lists and macros, but cannot change the core parameters. Default rules cannot be deleted.
Self-created rules can be freely edited. You can also override the behavior of default Falco rules and macros using a placeholder mechanism in the Rules Editor.
To display existing rules:
Select Policies > Rules | Rules Library and select a rule.
The Rule Details panel opens on the right. You can review the parameters and append to macros and lists inline, if desired.

Append to Falco Macros and Lists
Default Falco rules have a variety of macros and lists embedded in them.
While these cannot be deleted from a default rule, you can append
additional information onto them.
For example, consider the policy DB Program Spawned Process in the screenshot above. The embedded rule is used to check that databases have not spawned illicit processes. You can see the Falco list db_server_binaries in the rule condition.
To append items in a default list:
Click the blue list text in the rule condition, or go to Policies > Falco Lists and search for it by name.

The list content is displayed. Click Append.

Enter the additional items (i.e. databases) you want to include in the rule and click Save.
The same process applies to macros.
How to Use the Rules Editor
The Rules Editor allows you to freely create custom Falco rules, lists, and macros, and to override the behavior of the defaults.
Understand the Interface
To access the interface, select Policies > Rules Editor:

The Right Panel (Default)
Displays the rules_yamls provided by Sysdig.
The Left Panel (Custom)
Displays the custom rules and overrides you want to add to the selected rules_yaml.
Note that many default Falco rules and macros have a parallel placeholder entry (commented out) in the yaml file. These have the prefix user_known. To change the behavior of a default rule, it is recommended to copy the placeholder equivalent into the custom rules panel and edit it there, rather than editing the default rule directly.
To search the rules YAML files, click inside the Rules Editor right panel and use Ctrl+F to open an internal search field.
See also: Runtime Policy Tuning.
Use Cases: List-Matching Rules
It is more helpful to think of the rules as matching the activity,
rather than using concepts of allowing or denying. (The Network types
can be a little confusing in this regard; see the last two use cases for
more detail on that type). Thus, the use cases are based on answering
the question: What do I want to know?
I WANT TO KNOW…
when any process other than web server programs are run:
if any of the following crypto-mining processes are run:
if any program reads any file containing password-related
information:
Rule Type: Filesystem
Read Operations: If Matching
Entries:
/etc/shadow, /etc/sudoers, /etc/pam.conf, /etc/security/pwquality.conf
if any program writes anywhere below binary directories:
Rule Type: Filesystem
Read/Write Operations: If Matching
Entries: /usr, /usr/bin, /bin
if a program writes to anywhere other than /var/tmp:
if any container with an image from docker.io is started:
Rule Type: Container
If Matching
Entries: [docker.io/]
if any container runs an Apache web server:
if any container with a non-database image is started:
if any program accepts an inbound ssh connection:
Rule Type: Network
Tcp, "If Matching"
Entries: [22]
if any program receives a DNS datagram:
Rule Type: Network
UDP, "If Matching"
Entries: [53]
if any program accepts a connection on a port other than http/https
Rule Type: Network
TCP, "If Not Matching"
Entries: [80, 443]
if any program accepts any inbound connection:
Rule Type: Network
Inbound Connection: Deny
if any program makes any outbound connection
5.3.3 - Runtime Threat Detection Policy Tuning
The Runtime Policy Tuning feature assists in reducing noisy false
positives in the Sysdig Secure Events feed. Built on top of the Falco
Rules Tuner, it automatically adds
Exceptions to rules, thereby
removing particularly noisy sets of policy events and leaving the
lower-volume events for later analysis.
The tuner may be especially helpful when deploying Sysdig Secure runtime
policies in a new environment. Your environment may include applications
that legitimately perform actions such as running Docker clients in
containers, changing namespaces, or writing below binary directories,
but which trigger unwanted floods of related policy events in the
default policies and rules provided by Sysdig.

Using Runtime Policy Tuner
Prerequisites
Enable, View, Edit Exceptions, Disable
The tuner is enabled and disabled as needed to tame false positives and
optimize the use of the Events feed. By default, it is disabled.
Log in to Sysdig Secure as Admin and choose Policies > Threat Detection | Runtime Policy Tuning.
Enable the feature with the Tuning Engine toggle.
It may take up to 24 hours to see the initial Applied Tuning Exceptions listed in the left panel.

In the background, the tuner will evaluate policy events as they are received by the Sysdig backend, find applicable exception values, and add them. The Applied Tuning Exceptions file is passed along to all Sysdig agents, along with the rules and policies.
If needed, you can edit the exceptions created directly in the left-hand panel. Any changes will be retained as the tuner evaluates additional events.
NOTE: Do not add custom exceptions, macros, or lists definitions here. Please use the Rules Editor (Custom Rules) for such elements.
Toggle the Tuning Engine off when you feel the feature has addressed the most commonly occurring (unwanted) policy events.
NOTE: Any exceptions in the Applied Tuning Exceptions panel will still be passed along to agents.
To start over from scratch, clear the Applied Tuning Exceptions text and re-enable with the Tuning Engine toggle.
Understanding How the Tuning Engine Works
When Does the Tuner Add Exceptions?
The Policy Tuning feature is conservative, only adding exceptions for
commonly occurring events for a single rule with similar
attributes.
All the conditions must be met:
This ensures the tuning feature only adds exceptions for high-volume
sets of events that can be easily addressed with a single set of
exception values.
Exceptions Behind the Scenes
If you want to understand the process of exception insertion by the
tuner, consider a sample rule:
- rule: Write below root
  desc: an attempt to write to any file directly below / or /root
  condition: root_dir and evt.dir = < and open_write
  exceptions:
    - name: proc_writer
      fields: [proc.name, fd.filename]
And a stream of policy events with outputs such as:
File below / or /root opened for writing (user=root user_loginuid=-1 command=/usr/local/bin/my-app-server parent=java file=/state.txt program=my-app-server container_id=a97d44bbe437 image=my-registry/app-server:latest)
File below / or /root opened for writing (user=root user_loginuid=-1 command=/usr/local/bin/my-app-server parent=java file=/state.txt program=my-app-server container_id=a97d44bbe437 image=my-registry/app-server:latest)
File below / or /root opened for writing (user=root user_loginuid=-1 command=/usr/local/bin/my-app-server parent=java file=/state.txt program=my-app-server container_id=a97d44bbe437 image=my-registry/app-server:latest)
File below / or /root opened for writing (user=root user_loginuid=-1 command=/usr/local/bin/my-app-server parent=java file=/state.txt program=my-app-server container_id=a97d44bbe437 image=my-registry/app-server:latest)
Then the tuner would add the following exception values to address the
false positives:
- rule: Write below root
  exceptions:
    - name: proc_writer
      values:
        - [my-app-server, /state.txt]
  append: true
See the Falco proposal for more background information on using exceptions.
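Conceptually, an exception suppresses any event whose values for the named fields match one of the listed value tuples. The following simplified Python illustration sketches that matching step; it is not Falco's actual implementation, and the event/exception dictionaries are assumptions for demonstration:

```python
def is_excepted(event, exception):
    """True when the event's values for the exception's fields equal one
    of the listed value tuples (simplified Falco exception semantics)."""
    observed = [event.get(field) for field in exception["fields"]]
    return observed in exception["values"]

# The proc_writer exception inserted by the tuner in the example above
proc_writer = {
    "name": "proc_writer",
    "fields": ["proc.name", "fd.filename"],
    "values": [["my-app-server", "/state.txt"]],
}
```

With this exception in place, the repeated `my-app-server` events above would be suppressed, while a different process writing the same file would still trigger the rule.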
5.3.3.1 - The Falco Rules Tuner (Legacy)
This version of the tuner has been updated for Sysdig SaaS; this content is preserved for older on-prem Sysdig environments.
Sysdig policies are built on rules, including Falco rules and macros.
(For review: Understanding Sysdig Secure
Rules
and Using Falco within Sysdig
Secure.)
Sysdig is always working to improve its out-of-the-box policies based on
activity captured about well-known containers and OSS applications.
Nevertheless, proprietary software running in unique user environments
can require a customized approach.
The Falco Rule Tuner was created to simplify the process of updating the
existing ruleset to reduce false positives.
The tool fetches policy events generated during a configurable time window (EVENT_LOOKBACK_MINUTES), and based on an occurrence threshold (EVENT_COUNT_THRESHOLD), it suggests updates to rules. It is up to the user to evaluate the suggestions and selectively apply the changes.
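The threshold logic can be sketched as follows. The grouping keys and output shape here are illustrative assumptions, not the tuner's actual internals:

```python
from collections import Counter

def suggest_exceptions(events, threshold=5):
    """Group recent policy events by rule plus selected attributes and
    suggest an exception for any combination occurring at least
    `threshold` times (EVENT_COUNT_THRESHOLD). Conceptual sketch only."""
    counts = Counter(
        (e["rule"], e["proc.name"], e["fd.filename"]) for e in events
    )
    return [
        {"rule": rule, "values": [[proc, fname]]}
        for (rule, proc, fname), n in counts.items()
        if n >= threshold
    ]
```

With the default threshold of 5, six identical "Write below root" events from the same process and file would yield one suggested exception, while a one-off event would not.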
To use the Rule Tuner, you will provide some environment variables, run
as a Docker container, review the output in a Slack channel or the
terminal window, and then apply the recommended tuning adjustments as
desired, in the Sysdig Secure Rules
Editor.
Requirements
Sysdig Secure SaaS or On-Prem version 3.5.0+
An available Slack channel (optional, for receiving output
information)
Environment variable values listed in the table below
Set Variables and Run the Container
Gather the values needed for the following environment variables.
Required Environment Variables for Falco Rule Tuner

Variable | Description |
---|---|
SECURE_CUSTOMER | Optional: Name of the business entity. Default: test |
SECURE_ENDPOINT | The endpoint for the tuning engine to query. For SaaS, see SaaS Regions and IP Ranges. For On-Prem, the endpoint has been user-defined. |
SECURE_TOKEN | The Sysdig Secure API token used to access the Secure backend. See Find Sysdig API Token. |
SLACK_WEBHOOK | Optional: The Slack webhook URL to receive the events summary and rule tuning recommendations. For example: https://hooks.slack.com/services/... |
EVENT_LOOKBACK_MINUTES | The number of minutes the Falco Rule Tuner should look back to gather the events. Default: 60 |
EVENT_COUNT_THRESHOLD | The threshold number of events over which a tuning is recommended. Default: 5. Setting the threshold to 1 would mean that every policy event should be considered a false positive. |
Run as a Docker container:
docker run -e SECURE_ENDPOINT=${SECURE_ENDPOINT} -e SECURE_TOKEN=${SECURE_TOKEN} quay.io/sysdig/falco_rules_tuner
The output in the terminal window will show the rules recommended for adjustment and the generated macros and their conditions, e.g.:
... <etc.>
# Change for rule: Write below root
- macro: elasticsearch-scripts_python_access_fileshost_exe_access_files
  condition: (container.image.repository endswith locationservices/elasticsearch-scripts and proc.name=python and fd.name startswith /root/app/)
Check Output in Slack Channel (Optional)
The output provided in the terminal window includes only the recommended
rule changes. If you provide a Slack channel URL in the environment
variables, the Tuner gives both an event summary and the recommended
rule changes.


Apply Recommended Tuning to Rules
For review: How to Use the Rules Editor.
The Tuner detects rules that may be triggering excess alert “noise” and
proposes content relevant macros and macro conditions that would reduce
the noise.
To implement the suggestions, you can 1) copy the rule contents directly into the left panel of the Rules Editor and edit them, or 2) find the existing placeholder macro that was created for that rule (usual format: user_known_<rule_name>) and add the suggested macros and conditions there.
Note that editing the definition of a rule directly could cause overwrite issues when upgrading Sysdig versions. Creating custom rules or using the user_known placeholders is a safer procedure.
For example, suppose you decide to implement the Tuner prompt 4 in the
image above, which suggests changing the configuration of the rule
Write below root. One way to proceed:
Search [Ctrl+F] the falco_rules.yaml for Write below root.
You will find both the rule itself

and the placeholder macros user_known_write_below_root_activities and user_known_write_below_root_conditions. Either one can be used.

Copy one placeholder, user_known_write_below_root_activities, to the left-hand Custom Rules panel of the Editor.
Copy the tuner-generated macro (elasticsearch-scripts_python_access_files in this case) and its conditions into the Custom Rules panel, overwriting the never_true default condition. The result is something like:
# generated by the tuner and copied here (Custom Rules panel in the Rules Editor)
- macro: elasticsearch-xxx
  condition: (...)
- macro: user_known_write_below_root_activities
  condition: (elasticsearch-xxx) # updated from "never_true" with the generated macro name
Click Save.
The tuning adjustment will apply when the Write below root rule is invoked in a policy.
These changes will apply anywhere that the edited macro (user_known_write_below_root_activities) is used. Some macros are embedded in multiple rules and/or other macros. Edit at your discretion.
5.4 - Install Falco Rules On-Premises
Periodically, Sysdig releases new Falco
Rules that provide
additional coverage for new behaviors and adds exceptions for known good
behaviors. This topic helps you install Falco Rules as a container in an
on-prem deployment. For air-gapped deployments, the instructions
slightly differ given the security measures employed in the isolated
setup.
Sysdig provides a container image on the Docker
hub to install
Falco Rules on the Sysdig Platform.
This container image allows easy installation and upgrades of the Falco
rules files for Sysdig Secure. The file contains the following:
The image is tagged with new versions as new sets of rules files are
released, and the latest
tag is always pointed to the latest version.
When a container is run with this image, it does the following:
The Falco Rules Updater can be run from ANY machine on the same network
as the backend that has Docker installed. It does not have to be the
backend server.
Example
Non-Airgapped Environment
This section assumes that the installation machine has network access to
pull the image from the Docker hub.
Download the container image:
# docker pull sysdig/falco_rules_installer:latest
Use the docker run command to install the Falco Rules. For example:
# docker run --rm --name falco-rules-installer --network host -it -e DEPLOY_HOSTNAME=https://my-sysdig-backend.com -e DEPLOY_USER_NAME=test@sysdig.com -e DEPLOY_USER_PASSWORD=<my password> -e VALIDATE_RULES=yes -e DEPLOY_RULES=yes -e CREATE_NEW_POLICIES=no -e SDC_SSL_VERIFY=True sysdig/falco_rules_installer:latest
Airgapped Environment
This section assumes that the installation machine does not have the
network access to pull the image from the Docker hub.
Download the container image on a machine that is connected to the
network:
# docker pull sysdig/falco_rules_installer:latest
Create an archive file for the image:
# docker save sysdig/falco_rules_installer:latest -o falco_rules_installer.tar
Transfer the tar file to the air-gapped machine.
Load the image file:
# docker load -i falco_rules_installer.tar
This restores both the image and its tags.
Use the docker run command to install the Falco Rules. For example:
# docker run --rm --name falco-rules-installer --network host -it -e DEPLOY_HOSTNAME=https://my-sysdig-backend.com -e DEPLOY_USER_NAME=test@sysdig.com -e DEPLOY_USER_PASSWORD=<my password> -e VALIDATE_RULES=yes -e DEPLOY_RULES=yes -e CREATE_NEW_POLICIES=no -e SDC_SSL_VERIFY=True sysdig/falco_rules_installer:latest
Usage
You can run this container from any host that has access to the server hosting the Sysdig backend API endpoint, specified in the DEPLOY_HOSTNAME variable. The container need not run on the hosts where the Sysdig Platform backend components are running.
To run, the container depends on the following environment variables:
Variables | Description |
---|---|
DEPLOY_HOSTNAME | The server that hosts the Sysdig API endpoints. The default is https://secure.sysdig.com. |
DEPLOY_USER_NAME | The username for the account that has the admin-level access to the Sysdig API endpoints. The value defaults to a meaningless user, nobody@nobody.com. |
DEPLOY_USER_PASSWORD | The password for the admin user. The value defaults to a meaningless password nopassword . |
VALIDATE_RULES | If set to yes, ensure that the rules file is compatible with your user rules file. Otherwise, skip this validation step. The value defaults to yes . |
DEPLOY_RULES | If set to yes, the falco rules file is deployed. Otherwise, skip deploying the falco rules file. The value defaults to yes . |
CREATE_NEW_POLICIES | If set to yes, will fetch new DEFAULT runtime policies, and restore any missing/deleted DEFAULT runtime policies. This will NOT overwrite any of your existing runtime policies. The value default is no . |
SDC_SSL_VERIFY | If set to false, allow certificate validation failures when deploying the rules. The value defaults to true . |
SKIP_FALCO_VERSION_0 | If set to yes, will not deploy falco rules file version 0, only deploys version 8. (Recommended for on-prem customers with version 5.x.) Default value is yes. |
SKIP_K8_VERSION_2 | If set to yes, will not deploy k8s audit rules file version 2, only deploys version 8. (Recommended for on-prem customers with version 5.x.) Default value is yes. |
See Docker hub
for the latest information about the image and usage.
5.5 - Profiling
What is Image Profiling in Sysdig
Image profiling in Sysdig enhances the data collection capabilities of the agent, and is a building block for several other Sysdig features:
- Creating Machine Learning policies
- Viewing prioritized vulnerabilities in an “In Use” column in Vulnerability Runtime results
- Allowing third-party vulnerability management software to consume and display the prioritized runtime vulnerabilities from Sysdig, as described in Risk Spotlight Integrations
Availability and Enablement
Some features are still under Controlled Availability and require enablement from Sysdig support, as noted.
Enabling Profiling triggers a feature on the agent that will increase its resource demand, both in memory and CPU. Note that if the agent starts using too many resources, it will automatically and temporarily disable this feature, to avoid impacting its basic functionality.
Enable for Machine Learning
To use machine learning policies:
Log in to Sysdig Secure as Admin and navigate to Settings > User Profile.
Toggle the Profiling switch in the Sysdig Labs section.

Select Policies > Runtime Policies and create a new policy of the type Machine Learning.
Enable for Risk Spotlight Integrations or for the In Use Column
Prerequisite: Have the new Vulnerability Management engine enabled in Sysdig Secure SaaS.
Then:
Contact Sysdig support and ask to have the feature enabled in the backend. (This step is required during Controlled Availability.)
Enable a parameter to the Node Analyzer of your Sysdig agents, e.g., using the sysdig-deploy Helm chart. The parameter is:
nodeAnalyzer.nodeAnalyzer.runtimeScanner.settings.eveEnabled=true
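For example, the parameter can be set at Helm upgrade time. The release name, namespace, and repo alias below are assumptions; adjust them to match your own sysdig-deploy installation:

```shell
# Hypothetical release name/namespace -- adjust to your deployment
helm upgrade sysdig-agent sysdig/sysdig-deploy \
  --namespace sysdig-agent \
  --reuse-values \
  --set nodeAnalyzer.nodeAnalyzer.runtimeScanner.settings.eveEnabled=true
```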
Toggle the Profiling switch in the Sysdig Labs section.

After 12 hours, check Vulnerabilities > Runtime
. The runtime scanner will gather information against this policy every 12 hours, displaying results in the Vulnerabilities Runtime scan results.
You should see the In Use column populated.
If you also want to export these results to third-party software, follow the instructions in Risk Spotlight Integrations.
(Note: If the third-party software is Snyk, the instructions are slightly different.)
How Image Profiles Work
With image profiling enabled, the agents start sending “fingerprints” of what happened on the containers – network activity, files and directories accessed, processes run, and system calls used – and Sysdig Secure aggregates this information per image. Thus, for multiple containers based off of the same image, running on different nodes, the profiler will collect and combine system activity into an image profile.
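That aggregation step can be sketched as a simple union of per-container fingerprints into one per-image profile. This is a conceptual illustration only; the category names and data shapes are assumptions, not Sysdig's internal format:

```python
def merge_fingerprints(fingerprints):
    """Union per-container activity fingerprints (network activity, files,
    processes, syscalls) into a single per-image profile. Conceptual sketch."""
    profile = {}
    for fp in fingerprints:
        for category, items in fp.items():
            profile.setdefault(category, set()).update(items)
    return profile
```

Two containers running the same image on different nodes would thus contribute their observed activity to one combined profile for that image.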
Internal algorithms determine two aspects of behavior:
Profile Contents
A container image profile is a collection of data points related to:
6 - Network
Sysdig Network Security tracks ingress and egress communication from every pod. The Network Security Policy tool allows you to generate
Kubernetes Network Policies based on the traffic allowed or denied as defined in the Ingress and Egress tabs. The UI also allows you to view which policies are being applied in real time.
Prerequisites
Sysdig agent: 10.7.0+
If necessary, install or upgrade your agents.
Note: If you are upgrading and not using Helm, you will need to update the clusterrole.yaml manually.
Supported CNI Plugins:
Coverage Limits
- Communications to/from k8s nodes are not recorded
- Workloads with no recorded communications are not present in workloads list
By default, all pods within a Kubernetes cluster can communicate with each other without any restrictions. Kubernetes Network Policies help you isolate the microservice applications from each other, to limit the blast radius and improve the overall security posture.
With the Network Security Policy tool, you can generate and fine-tune Kubernetes network policies within Sysdig Secure. Use it to generate a “least-privilege” policy to protect your workloads, or view existing network policies that have been applied to your workloads. Sysdig leverages native Kubernetes features and doesn’t impose any additional networking requirements beyond the CNIs already supported.
Benefits
Key features include:
- Out-of-the-box visibility into network traffic between applications
and services, with a visual topology map to help identify
communications.
- A baseline network policy that you can directly refine and modify to
match your desired declarative state.
- Automated KNP generation based on the network communication
baseline + user-defined adjustments.
- Least-privilege: KNPs follow an allow-only model, any communication
that is not explicitly allowed will be forbidden
- Enforcement delegated to the Kubernetes control plane, avoiding
additional instrumentation or directly tampering with the host’s
network configuration
- Map workloads to network policies applied to your cluster, helping operators and developers understand why a pod’s communication may or may not be blocked
- The ability to view the network policies applied to a cluster for a particular workload or workloads, with drill-down details to the raw yaml
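To make the allow-only model concrete, here is a minimal sketch of the kind of Kubernetes NetworkPolicy such a tool can generate. All names, namespaces, and ports here are hypothetical placeholders, not output from Sysdig itself:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-least-privilege   # hypothetical name
  namespace: example-ns           # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: example-app            # the workload being protected
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Only this explicitly allowed communication is permitted;
    # anything not listed is forbidden once the policy applies.
    - from:
        - podSelector:
            matchLabels:
              app: example-client
      ports:
        - protocol: TCP
          port: 8080
  # No egress rules are listed, so all egress from the selected
  # pods is denied under the allow-only model.
```

Enforcement is carried out by the cluster’s CNI plugin via the Kubernetes control plane, which is why no extra instrumentation on the host is needed.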
Ensure your environment meets the Prerequisites.
Log in to Sysdig Secure and select Network. You will be prompted to select a cluster and namespace, then taken to the Network Security Policies page.


Next Steps
You can now generate policies, review and tune them, and finesse configurations or troubleshoot.
6.1 - Netsec Policy Generation
Generating KNPs in the Sysdig Network Security Policy Tool involves four steps, as described in the following sections:
- Set the scope
- Review ingress/egress and edit the detected communications as desired
- Review the topology map
- Click Generated Policy and download the resulting file.
Subsequently, you can check the topology map to:
- Review applied policies
- Click into details for remediation if needed.
Set the Scope
You first define the Kubernetes entity and timeframe for which you want
to aggregate communications.
Understanding the aggregation: Communications are aggregated using
Kubernetes metadata to avoid having additional entries that are not
relevant for the policy creation. For example, if pod A under deployment
A communicates several times with pod B under deployment B, only one
entry appears in the interface. Or: If pod A1 and pod A2, both under
deployment A, both communicate with pod B, deployment A will represent
all its pods.
In the Sysdig Secure UI, select Network from the left menu.
Choose Cluster and Namespace from the drop-down menus.
Select the type of Kubernetes entity for which you want to create a
policy:
Service
Deployment
Daemonset
Stateful Set
CronJob
Choose CronJob to see communication aggregated at the
CronJob (scheduler) level, rather than the Job level, which may
generate an excessive number of entries.
Job
Choose Job to see entries where a Job has no CronJob
parent.
Select the timespan, i.e. how far back in time to aggregate the
observed communications for the entity. The interface will display
the Ingress / Egress tables for that Kubernetes entity and
timeframe.
Manage Ingress and Egress
The ingress/egress tables detail the observed communications for the
selected entity (pod owner) and time period.
Granular and global assignments: You can then cherry-pick rows to
include/exclude from the policy granularly, or establish general rules
using the drop-down global rule options.
Understanding unresolved IPs: For some communications, it may not be
possible to resolve one of the endpoints to Kubernetes metadata and
classify it as a Service, Deployment, and so on. For example, if a
microservice is communicating with an external web server, that external
IP is not associated with any Kubernetes metadata in your cluster. The
UI will still display these entities as “unresolved IPs.” Unresolved IPs
are excluded by default from the Kubernetes network policy, but can be
added manually via the ingress/egress interface.

Choose Ingress or Egress to review and edit the detected
communications:
Select the scope as described above.
For in-cluster entities: Edit the permitted communications as
desired, by either:
Selecting/deselecting rows of allowed communication, or
Choosing General Ingress/Egress Rules:
Block All, Allow All Inside Namespace, or Allow All.
For unresolved IPs (if applicable): If the tool detects many
unresolved IPs, you can:
Search results by any text to locate particular listings
Filter results by
Internal: found within the cluster
External: found outside the cluster
Aliased: displays any given alias
Unknown: unable to tell if internal or external.
Fine-tune the handling of unknown IPs (admins only).
You can assign an alias, set the IP to “allowed” status, or add a CIDR configuration so the IP is correctly categorized and labelled.
Repeat on the other table, then proceed to check the topology and/or
generate the policy.
Use Topology Visualization
Use the Topology view to visually validate if this is the policy you
want, or if something should be changed. The topology view is a
high-level Kubernetes metadata view: pod owners, listening ports,
services, and labels.
Communications that will not be allowed if you decide to apply this
policy are color-coded red.

Pop-up detail panes: Hover over elements in the topology to see all
the relevant details for both entities and communications.
Review Applied Policies
Once policies have been generated, you can view the network policies applied to a cluster for a particular workload or workloads.

You can:

Topology Legend
When glancing at the topology, the color codes indicate:
Lines:
Black = resolved connection
Red = connection not resolved; communication not included in the generated policy. (Go to Ingress/Egress panels and select the relevant rows to allow the communication.)
Entities:
Blue = the selected workload
Black = other services and deployments the selected workload communicates with
Review and Download Generated Policy
When you are satisfied with the rules and communication lines, simply
click the Generated Policy tab to get an instantaneously generated
file.
Review the resulting YAML file and download it to your browser.

Sample Use Cases
In all cases, you begin by leaving the application running for at least 12 hours, to allow the agent to collect information.
Case 1: Only Allow Specified Ingress/Egress Communications
As a developer, you want to create a Kubernetes network policy that only
allows your service/deployment to establish ingress and egress network
communications that you explicitly allow.
Select the cluster namespace and deployment for your
application.
You should see pre-computed ingress and egress tables. You know the
application does not communicate with any external IP for ingress or
egress, so should not see any unresolved IPs. The topology map shows
the same information.
Change a rule: You decide one service your application is
communicating with is obsolete. You uncheck that row in the egress
table.
Check the topology map. You will see the communication still
exists, but is now drawn in red, meaning that it is forbidden using
the current Kubernetes network policy (KNP).
Check the generated policy code. Verify that it follows your
plan:
Download the generated policy and upload it to your Kubernetes
environment.
Verify that your application can only communicate with the
services that were marked in black in the topology and checked in
the tables. Then generate and download the policy to apply it.
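The generated policy for this case might resemble the following sketch (the workload and service names are hypothetical, not Sysdig output). The unchecked, obsolete service simply does not appear among the egress rules:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-app-policy        # hypothetical
  namespace: my-namespace    # hypothetical
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: allowed-client    # row left checked in the ingress table
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: allowed-backend   # row left checked in the egress table
  # The obsolete service is absent from the egress rules, so its
  # communication is forbidden (the red line in the topology map).
```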
Case 2: Allow Access to Proxy Static IPs
As a developer, you know your application uses proxies with a static IP
and you want to configure a policy that allows your application to
access them.
See the proxy IPs in the egress section of the interface
Use the Allow Egress to IP
mask to create a manual rule to
allow those IPs in particular
De-select all the other entries in the ingress and egress tables
Looking at the topology map, verify that only the communications
to these external IPs are marked in black, the other communications
with the other services/deployments are marked in red
Download the generated Kubernetes network policy and apply it.
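The resulting egress rules would use ipBlock entries for the proxy addresses. A sketch, with placeholder CIDRs and port rather than real values:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-proxy-egress   # hypothetical
  namespace: my-namespace    # hypothetical
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32   # first proxy static IP (placeholder)
        - ipBlock:
            cidr: 203.0.113.11/32   # second proxy static IP (placeholder)
      ports:
        - protocol: TCP
          port: 3128                # proxy port (placeholder)
```

Because policyTypes lists Egress, only the egress traffic to these IPs is allowed; all other egress from the selected pods is denied.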
Case 3: Allow Communication Only Inside the Namespace
You know that your application should only communicate inside the
namespace, both for ingress and for egress.
Allow ingress inside the namespace using the general rules
Allow egress inside the namespace using the general rules
Generate the policy and confirm: everything inside the namespace
is allowed, without nominating a particular service/deployment, then
apply it.
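A policy restricting traffic to the namespace, in both directions, would look roughly like this sketch (names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace   # hypothetical
  namespace: my-namespace      # hypothetical
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}      # any pod in the same namespace
  egress:
    - to:
        - podSelector: {}      # any pod in the same namespace
```

An empty podSelector in a from/to clause matches all pods in the policy’s own namespace, which is how “everything inside the namespace is allowed, without nominating a particular service/deployment” is expressed.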
Case 4: Allow Access to a Specified Namespace, Egress Only
Your application deployment A only communicates with applications in
deployment B, which lives in a different namespace. You only need that
egress traffic; there is no ingress traffic required for that
communication.
Verify that the ingress table is empty, both for Kubernetes
entities and for raw IPs
Verify that the only communication listed on the Egress table is
communication with deployment B
Download the autogenerated policy, apply it, and verify:
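A sketch of the kind of policy generated for this case, using a namespaceSelector for the egress rule (all names and labels are hypothetical; namespace B must carry the matching label):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-to-namespace-b   # hypothetical
  namespace: namespace-a        # hypothetical
spec:
  podSelector:
    matchLabels:
      app: deployment-a
  policyTypes:
    - Ingress
    - Egress
  # No ingress rules: all ingress to deployment A is denied.
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: namespace-b    # label attached to namespace B
          podSelector:
            matchLabels:
              app: deployment-b
```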
Case 5: Allow Access When a Deployment Has Been Relabeled
As a developer, you want to create a policy that only allows your
service/deployment to establish ingress and egress network
communications that you explicitly allow, and you need to make a change.
After leaving the application running for a few hours, you realize
you didn’t tag all the namespaces involved in this policy
A message at the top of the view will state “you need to assign
labels to this namespace”.
Confirm the situation in the different views:
Attach a label to the namespace that was missing it. After some
minutes, a row shows the updated information.
Whitelist the connection appropriately.
Generate and download the policy and apply it.
6.2 - Configuration and Troubleshooting
Kubernetes Network Configuration
Sysdig provides a Configuration page for Administrators who
want to fine-tune the way the agent processes the network data.
It contains three areas, described below:
Workload Labels
The Sysdig agent automatically detects the labels used for the Kubernetes objects in a
cluster. Sometimes, there are many more labels than are required for
network security purposes. In such cases, you can select the two or
three most meaningful labels and include or exclude namespace or
workload labels to avoid clutter in both the UI and your network
security policies. For example, you can exclude labels added by Helm,
and only include the labels that are required for each object, such as
app and name.

Unresolved IP Configuration
If the Sysdig agent cannot resolve an IP to a higher-level structure
(Service, Deployment, Daemonset, etc.), it is displayed as
“unresolved” in the ingress/egress tables. Additionally, you can add
unresolved IPs from the ingress or egress tabs by clicking the @
and creating a new alias or assigning the IP to an existing alias.

You can manually enter such IPs or CIDRs in the configuration panel,
label them with an alias, and optionally set them to “allowed” status.
Note that grouping IPs under a single alias helps declutter the
Topology view.
Pod communication without an alias

Pod communication with IP aliases

Cluster CIDR Configuration
Unresolved IPs are listed and categorized as “internal” (inside the
cluster), “external” (outside the cluster), or “unknown” (subnet
information incomplete). For unknowns, Sysdig will prompt with an error
message to help you resolve it.
The simplest resolution is to manually specify cluster and service CIDRs
for the clusters.

Troubleshooting
Tips to resolve common error messages:
Error message: Namespaces without labels
Problem: Namespaces must be labeled for the KNPs to define
ingress/egress rules. If non-labeled namespaces are detected in the
targeted communications, the “Namespaces without labels” error message
is displayed in the UI:

Resolution: Simply assign a label to the relevant namespace and wait
a few minutes for the system’s auto-detection to catch up.
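A label can be attached either imperatively with kubectl or declaratively in the Namespace manifest. A sketch, with a placeholder namespace name and label:

```yaml
# kubectl equivalent: kubectl label namespace my-namespace name=my-namespace
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace     # placeholder namespace
  labels:
    name: my-namespace   # label that generated KNPs can match via namespaceSelector
```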
Error Message: Cluster subnet is incomplete
Problem: To categorize unresolved IPs as inside or outside the
cluster, the agent must know which CIDR ranges belong to the cluster. By
default, the agent tries to discover the ranges by examining the command
line arguments of the kube-apiserver
and kube-controller-manager
processes.
If it cannot auto-discover the cluster subnets, the “cluster subnet is
incomplete” error message is displayed in the UI:

Resolution:
Preferred: Use the Configuration panel to add the
CIDR entries.
In rare cases, you may need to configure the agent to look for the
CIDR ranges in other processes than the default
kube-apiserver, kube-controller-manager
processes. In that case,
append the following to the agent configmap:
network_topology:
  pod_prefix_for_cidr_retrieval: [<PROCESS_NAME>, <PROCESS_NAME>]
7 - Secure Events
The Events page in Sysdig Secure provides an overview of the entire infrastructure, and the ability to
deep-dive into specific security events, identify false positives, and configure policies to optimize performance.

It provides a navigable interface to:
Find and surface insights around the most relevant security events
in your infrastructure
Slice and dice your event data using multiple filters and scopes to
hone into the events that will require further inspection or
remediation actions
Inspect any items using an advanced event detail panel
Follow up on forensics, activity audits, etc., by directly linking
to other sections of the product for additional event information
Without filters or scope defined, the event list comprises all events
within the timeline, in chronological order. Clicking on an event opens
the event detail panel on the right.
Review Summary and Filter Secure Events
The panel at the top of the page, together with the time-span selector at the bottom, provides a high-level summary of the events during the chosen timeframe, anywhere from 10 minutes to 3 days or more.

The summary shows the:
Top number of events per Cluster, Node, Namespace, Workload, and Image, plus by Rule name or MITRE attack
Number of events by severity (High/Med/Low/Info)
By default, High, Med, and Low are selected. Deselect to see, for example, just High severity events.
Group-by selector: currently, you can choose to group the events list by Policy.
Click elements to add them to filter expressions (see below).
Using the Filter Bar
Building expressions in the improved filter bar is simpler and cleaner than in the original filter UI. Both use the Filter Expression Elements described below.
Build expressions from the drop-down options: Click Add Filter
for an initial drop-down list of valid scope elements. Keep clicking in the filter bar to be presented with the next logical operand, value, etc. to add to your expression.

Build expressions using elements from the Events list: Click the operand after an element in an event to add it directly to the filter expression.

Add priority or type filters, and save a constructed expression as a Favorite or set it as the Default filter.
Understanding Filter Expression Elements
Note that the filters are additive. For example, if you set the Type to
Image Scanning events and don’t see what you expected, make sure the
scope and time span have also been set appropriately.
You construct a filtering expression from the following elements:
Scope
By default, the Event scope encompasses Everywhere, but you can define
the environment scope (containers, namespaces, etc.) to limit the
range. Those environment limits are assigned to the team active
during the scope definition.
See also: Team Scope and the Event Feed, below.
Free-Text Search
You can search by the event title and scope label values, such as
“my-cluster-name,” visible in the events lists.
Type
Events include both Runtime and Image Scanning events.
Runtime events correspond to the rules and violations defined in Policies.
Image Scanning events correspond to the runtime scanning alerts.
Severity
Use the appropriate buttons to filter events by High, Medium,
Low, and Info level of severity, corresponding to the levels
defined in the relevant runtime Policies or runtime scanning alerts.
Group by
When a particular policy is generating many events, use the Group by: Policy option to sort the event feed into a more usable list.

No group | Grouped by policy
Time Span
As in the rest of the Sysdig Platform interface, the time span can be
set by date ranges using the calendar pop-up, and in increments from 10
minutes to 3 days. You can additionally use the calendar picker to
select other time ranges that are not available as fast buttons.
Attributes
Under Details, hover over an attribute to reveal the =/!=
filter button and click to add to the Attribute filter.

Event Detail Panel
The Event Detail contents vary depending on the selected event. In general, the following are always present:
Attributes on which you can filter directly:
See the Attributes, above.
Action Buttons, sometimes grouped under “Respond”:
Only relevant activity links are displayed for each event detail.
If relevant, the Captures button links to
Captures. See also: Quick Link to Captures from Runtime Events.
If set up in the associated policy, a View Runbook link or button connects your company’s procedure documents.
The image below shows how it may appear as a single button or under Respond, depending on how many actions have been enabled.

For Runtime events, the Activity shortcut button is available and links to Activity Audit.
For Image Scanning, the Scan Results shortcut links to the Scan Results page.
For a bird’s-eye view of the related network activity and the ability to create a netsec policy, the Network Activity shortcut links to the Netsec page. See also: Quick Link to Netsec Topology.
For auto-tuning policies to reduce noisy false positives, the Tunable Events shortcut provides a link to the Runtime Policy Tuner. Note that the tuner only detects and alerts on rules that have exception definitions, so the link does not necessarily appear on every event. See also: Quick Link to Policy Tuner.
Edit Policy Shortcut:
For image scanning: Links to the runtime alert that generated the event.
For policy (runtime) events: Links to the runtime rule that
created the event, as well as the rule type (i.e. Falco - Syscall)
and the labels associated with that rule.
All three elements are filterable using the attribute filter widgets
(see above).
View Rule
Click the View Rule button to slide out the rule detail panel for review.

Output (For Policy events):
The Falco rule output as configured in the rule is listed.
Scope
The new scope selector allows for additional selector logic (in, not
in, contains, starts-with, etc.), improving the scoping flexibility
over previous versions. This scope selector also provides scope
variables, allowing you to quickly switch between, for example,
Kubernetes namespaces without having to edit the panel scope. See
also: Team Scope and the Event Feed, below.
Note that the scope details listed can be entered in the free-text
search field if desired.
Live/Pause Button

When live, events continually update. Use Pause to focus on a
section of the screen and not continue scrolling away in a noisy
environment.
Portable URLs
The Event Feed URL maintains the current filters, scope, and
selected elements. You can share this URL with other users to allow
them to display the same data.
Quick Link to Captures from Runtime Events
For runtime policy events that have an associated capture, we now offer
a contextual menu for performing quick actions over the event capture,
rather than a simple link to the Captures interface. You can:
Additionally, if the event is scoped to a particular container, Sysdig
Inspect will automatically filter the displayed information to the scope
of that Container ID.

Quick Link to Netsec Topology
As part of triaging an event, it may be useful to get a bird’s-eye view of the network activity, e.g., to establish what is connected to what, who else a service communicates with, and whether the connection is expected or an outlier.
When relevant, the event detail Respond button provides a quick link to the Network Activity topology, visible to users with Advanced User privileges or above, as well as the ability for administrators to craft a unique netsec policy as needed.
The event should include cluster/namespace/workload details (one of deployment, daemonset, statefulset, job, cronjob), and actual network activity on the workload for the Network Activity link to be offered.
Quick Link to Policy Tuner
Sysdig’s Runtime Policy Tuner helps reduce noisy false positives using rule exceptions. If you have not enabled the tuner, the Events overview will include a link for enabling it.
Once enabled, the Event detail will show a # Tunable Exceptions
link, sometimes grouped under the Respond button. Click the link to get the Tuner suggestions and apply as desired.

Team Scope and the Event Feed
Not every label available in the Sysdig Platform is compatible with the
set of labels used to define the scope of a security event in the Event
Feed.
Practically, this means that in order to correctly determine if a set of
events is visible for a certain Sysdig Secure team, the team scope must
not use any label outside the following list.
Permitted Labels
agent.tag.* (any label starting with agent.tag is valid)
host.hostName
host.mac
kubernetes.cluster.name
kubernetes.namespace.name
kubernetes.node.name
kubernetes.namespace.label.field.cattle.io/projectId
kubernetes.namespace.label.project
kubernetes.pod.name
kubernetes.daemonSet.name
kubernetes.deployment.name
kubernetes.replicaSet.name
kubernetes.statefulSet.name
kubernetes.job.name
kubernetes.cronJob.name
kubernetes.service.name
container.name
container.image.id
container.image.repo
container.image.tag
container.image.digest
container.label.io.kubernetes.container.name
container.label.io.kubernetes.pod.name
container.label.io.kubernetes.pod.namespace
container.label.maintainer
Not using any label to define team scope (Everywhere) is also
supported.
If the Secure team scope is defined using a label outside of the list
above, the Event Feed will be empty for that particular team.
7.1 - Event Forwarding
Sysdig supports sending different types of security data to third-party
SIEM (security information and event management) platforms and logging
tools, such as Splunk, Elastic Stack, QRadar, ArcSight, and LogDNA. Use
Event Forwarding to perform these integrations so you can view security
events and correlate Sysdig findings with the tool that you are already
using for analysis.
Review the Types of Secure
Integrations table for more
context. The Event Forwarding column lists the various options and their
levels of support.
You must be logged in to Sysdig Secure as Administrator to access the event forwarding options.
Supported Event Forwarding Data Sources
At this time, Sysdig Secure can forward the following types of data:
If Sysdig Monitor is installed, Monitor events are also supported.
Informational; in most cases, there is no need to change the default
format.
Policy Event Payload
Policy Event Severity
The severity field in the payload is an integer. The following table shows the values corresponding to each event severity.
Event Severity | JSON severity value
---|---
High | 0, 1, 2, 3
Medium | 4, 5
Low | 6
Info | 7
There are now two formats supported. See also this Release
Note.
New Runtime Policy Events Payload
{
"id": "164ace360cc3cfbc26ec22d61b439500",
"type": "policy",
"timestamp": 1606322948648718268,
"timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
"originator": "policy",
"category": "runtime",
"source": "syscall",
"name": "Notable Filesystem Changes",
"description": "Identified notable filesystem activity that might change sensitive/important files. This differs from Suspicious Filesystem Changes in that it looks more broadly at filesystem activity, and might have more false positives as a result.",
"severity": 0,
"agentId": 13530,
"containerId": "",
"machineId": "08:00:27:54:f3:9d",
"actions": [
{
"type": "POLICY_ACTION_CAPTURE",
"successful": true,
"token": "abffffdd-fba8-42c7-b922-85364b00eeeb",
"afterEventNs": 5000000000,
"beforeEventNs": 5000000000
}
],
"content": {
"policyId": 544,
"baselineId": "",
"ruleName": "Write below etc",
"ruleType": "RULE_TYPE_FALCO",
"ruleTags": [
"NIST_800-190",
"NIST_800-53",
"ISO",
"NIST_800-53_CA-9",
"NIST_800-53_SC-4",
"NIST",
"ISO_27001",
"MITRE_T1552_unsecured_credentials",
"MITRE_T1552.001_credentials_in_files"
],
"output": "File below /etc opened for writing (user=root command=touch /etc/ard parent=bash pcmdline=bash file=/etc/ard program=touch gparent=su ggparent=sudo gggparent=bash container_id=host image=<NA>)",
"fields": {
"container.id": "host",
"container.image.repository": "<NA>",
"falco.rule": "Write below etc",
"fd.directory": "/etc/pam.d",
"fd.name": "/etc/ard",
"group.gid": "8589935592",
"group.name": "sysdig",
"proc.aname[2]": "su",
"proc.aname[3]": "sudo",
"proc.aname[4]": "bash",
"proc.cmdline": "touch /etc/ard",
"proc.name": "touch",
"proc.pcmdline": "bash",
"proc.pname": "bash",
"user.name": "root"
},
"falsePositive": false,
"matchedOnDefault": false,
"policyVersion": 2,
"policyOrigin": "Sysdig"
},
"labels": {
"host.hostName": "ardbox",
"process.name": "touch /etc/ard"
}
}
Legacy Secure Policy Event Payload
{
"id": "164ace360cc3cfbc26ec22d61b439500",
"containerId": "",
"name": "Notable Filesystem Changes",
"description": "Identified notable filesystem activity that might change sensitive/important files. This differs from Suspicious Filesystem Changes in that it looks more broadly at filesystem activity, and might have more false positives as a result.",
"severity": 0,
"policyId": 544,
"actionResults": [
{
"type": "POLICY_ACTION_CAPTURE",
"successful": true,
"token": "15c6b9cc-59f9-4573-82bb-a1dbab2c4737",
"beforeEventNs": 5000000000,
"afterEventNs": 5000000000
}
],
"output": "File below /etc opened for writing (user=root command=touch /etc/ard parent=bash pcmdline=bash file=/etc/ard program=touch gparent=su ggparent=sudo gggparent=bash container_id=host image=<NA>)",
"ruleType": "RULE_TYPE_FALCO",
"matchedOnDefault": false,
"fields": [
{
"key": "container.image.repository",
"value": "<NA>"
},
{
"key": "proc.aname[3]",
"value": "sudo"
},
{
"key": "proc.aname[4]",
"value": "bash"
},
{
"key": "proc.cmdline",
"value": "touch /etc/ard"
},
{
"key": "proc.pname",
"value": "bash"
},
{
"key": "falco.rule",
"value": "Write below etc"
},
{
"key": "proc.name",
"value": "touch"
},
{
"key": "fd.name",
"value": "/etc/ard"
},
{
"key": "proc.aname[2]",
"value": "su"
},
{
"key": "proc.pcmdline",
"value": "bash"
},
{
"key": "container.id",
"value": "host"
},
{
"key": "user.name",
"value": "root"
}
],
"eventLabels": [
{
"key": "container.image.repo",
"value": "alpine"
},
{
"key": "container.image.tag",
"value": "latest"
},
{
"key": "container.name",
"value": "large-label-container-7"
},
{
"key": "host.hostName",
"value": "ardbox"
},
{
"key": "process.name",
"value": "touch /etc/ard"
}
],
"falsePositive": false,
"baselineId": "",
"policyVersion": 2,
"origin": "Sysdig",
"timestamp": 1606322948648718,
"timestampNs": 1606322948648718268,
"timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
"hostMac": "08:00:27:54:f3:9d",
"isAggregated": false
}
Activity Audit Forwarding Payloads
Each of the activity audit types has its own JSON format.
Command (cmd) Payload
{
"id": "164806c17885b5615ba513135ea13d79",
"agentId": 32212,
"cmdline": "calico-node -felix-ready -bird-ready",
"comm": "calico-node",
"pcomm": "apt-get",
"containerId": "a407fb17332b",
"count": 1,
"customerId": 1,
"cwd": "/",
"hostname": "qa-k8smetrics",
"loginShellDistance": 0,
"loginShellId": 0,
"pid": 29278,
"ppid": 29275,
"rxTimestamp": 1606322949537513500,
"timestamp": 1606322948648718268,
"timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
"tty": 34816,
"type": "command",
"uid": 0,
"labels": {
"aws.accountId": "059797578166",
"aws.instanceId": "i-053b1f0509fdbc15a",
"aws.region": "us-east-1",
"container.image.digest": "sha256:26c68657ccce2cb0a31b330cb0be2b5e108d467f641c62e13ab40cbec258c68d",
"container.image.id": "d2e4e1f51132",
"container.label.io.kubernetes.pod.namespace": "default",
"container.name": "bash",
"host.hostName": "ip-172-20-46-221",
"host.mac": "12:9f:a1:c9:76:87",
"kubernetes.node.name": "ip-172-20-46-221.ec2.internal",
"kubernetes.pod.name": "bash"
}
}
Network (net) Payload
{
"id": "164806f43b4d7e8c6708f40cdbb47838",
"agentId": 32212,
"clientIpv4": 2886795285,
"clientPort": 60720,
"containerId": "da3abd373c7a",
"customerId": 1,
"direction": "out",
"hostname": "qa-k8smetrics",
"l4protocol": 6,
"pid": 2452,
"processName": "kubectl",
"rxTimestamp": 0,
"serverIpv4": 174063617,
"serverPort": 443,
"timestamp": 1606322948648718268,
"timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
"type": "connection",
"tty": 34816,
"labels": {
"aws.accountId": "059797578166",
"aws.instanceId": "i-053b1f0509fdbc15a",
"aws.region": "us-east-1",
"container.image.digest": "sha256:26c68657ccce2cb0a31b330cb0be2b5e108d467f641c62e13ab40cbec258c68d",
"container.image.id": "d2e4e1f51132",
"host.hostName": "ip-172-20-46-221",
"host.mac": "12:9f:a1:c9:76:87",
"kubernetes.cluster.name": "k8s-onprem",
"kubernetes.namespace.name": "default",
"kubernetes.node.name": "ip-172-20-46-221.ec2.internal",
"kubernetes.pod.name": "bash"
}
}
File (file) Payload
{
"id": "164806c161a5dd221c4ee79d6b5dd1ce",
"agentId": 32212,
"containerId": "a407fb17332b",
"customerId": 1,
"directory": "/var/lib/dpkg/updates/",
"filename": "tmp.i",
"hostname": "qa-k8smetrics",
"permissions": "w",
"pid": 414661,
"comm": "dpkg",
"timestamp": 1606322948648718268,
"timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
"type": "fileaccess",
"tty": 34817,
"metrics": [
"default",
"",
"k8s-onprem",
"bash",
"",
"ip-172-20-46-221",
"12:9f:a1:c9:76:87"
],
"labels": {
"aws.accountId": "059797578166",
"aws.instanceId": "i-053b1f0509fdbc15a",
"aws.region": "us-east-1",
"container.image.digest": "sha256:26c68657ccce2cb0a31b330cb0be2b5e108d467f641c62e13ab40cbec258c68d",
"container.image.id": "d2e4e1f51132",
"container.image.repo": "docker.io/library/ubuntu",
"container.name": "bash",
"host.hostName": "ip-172-20-46-221",
"host.mac": "12:9f:a1:c9:76:87",
"kubernetes.cluster.name": "k8s-onprem",
"kubernetes.namespace.name": "default",
"kubernetes.node.name": "ip-172-20-46-221.ec2.internal",
"kubernetes.pod.name": "bash"
}
}
Kubernetes (kube exec) Payload
{
"id": "164806f4c47ad9101117d87f8b574ecf",
"agentId": 32212,
"args": {
"command": "bash",
"container": "nginx"
},
"auditId": "c474d1de-c764-445a-8142-a0142505868e",
"containerId": "397be1762fba",
"hostname": "qa-k8smetrics",
"name": "nginx-76f9cf7469-k5kf7",
"namespace": "nginx",
"resource": "pods",
"sourceAddresses": [
"172.17.0.21"
],
"stages": {
"started": 1605540915526159000,
"completed": 1605540915660084000
},
"subResource": "exec",
"timestamp": 1606322948648718268,
"timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
"type": "kubernetes",
"user": {
"username": "system:serviceaccount:default:default-kubectl-trigger",
"groups": [
"system:serviceaccounts",
"system:serviceaccounts:default",
"system:authenticated"
]
},
"userAgent": "kubectl/v1.16.2 (linux/amd64) kubernetes/c97fe50",
"labels": {
"agent.tag.cluster": "k8s-onprem",
"agent.tag.sysdig_secure.enabled": "true",
"container.image.repo": "docker.io/library/nginx",
"container.image.tag": "1.21.6",
"container.label.io.kubernetes.container.name": "nginx",
"container.label.io.kubernetes.pod.name": "nginx-76f9cf7469-k5kf7",
"container.label.io.kubernetes.pod.namespace": "nginx",
"container.name": "nginx",
"host.hostName": "qa-k8smetrics",
"host.mac": "12:09:c7:7d:8b:25",
"kubernetes.cluster.name": "demo-env-prom",
"kubernetes.deployment.name": "nginx-deployment",
"kubernetes.namespace.name": "nginx",
"kubernetes.pod.name": "nginx-76f9cf7469-k5kf7",
"kubernetes.replicaSet.name": "nginx-deployment-5677bff5b7"
}
}
Benchmark Result Payloads
To forward benchmark events, you must have Benchmarks v2
installed and configured,
using the Node Analyzer.
A Benchmark Control payload is emitted for each control on each host on
every Benchmark Run. A Benchmark Run payload containing a summary of the
results is emitted for each host on every Benchmark Run.
Benchmark Control Payload
{
"id": "16ee684c65c356616381cbcbfed06eb6",
"type": "benchmark",
"timestamp": 1606322948648718268,
"timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
"originator": "benchmarks",
"category": "runtime",
"source": "host",
"name": "Kubernetes Benchmark Control Reported",
"description": "Kubernetes benchmark kube_bench_cis-1.6.0 control 4.1.8 completed.",
"severity": 7,
"agentId": 0,
"containerId": "",
"machineId": "0a:e2:ce:65:f5:b7",
"content": {
"taskId": "9",
"runId": "535de4fb-3fac-4716-b5c6-9c906226ed01",
"source": "host",
"schema": "kube_bench_cis-1.6.0",
"subType": "control",
"control": {
"id": "4.1.8",
"title": "Ensure that the client certificate authorities file ownership is set to root:root (Manual)",
"description": "The certificate authorities file controls the authorities used to validate API requests. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`.",
"rationale": "The certificate authorities file controls the authorities used to validate API requests. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`.",
"remediation": "Run the following command to modify the ownership of the --client-ca-file.\nchown root:root <filename>\n",
"auditCommand": "CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')\nif test -z $CAFILE; then CAFILE=/etc/kubernetes/pki/ca.crt; fi\nif test -e $CAFILE; then stat -c %U:%G $CAFILE; fi\n",
"auditOutput": "root:root",
"expectedOutput": "'root:root' is equal to 'root:root'",
"familyName": "Worker Node Configuration Files",
"level": "Level 1",
"type": "manual",
"result": "Pass",
"resourceType": "Hosts",
"resourceCount": 0
}
},
"labels": {
"aws.accountId": "845151661675",
"aws.instanceId": "i-0cafe61565a04c866",
"aws.region": "eu-west-1",
"host.hostName": "ip-172-20-57-8",
"host.mac": "0a:e2:ce:65:f5:b7",
"kubernetes.cluster.name": "demo-env-prom",
"kubernetes.node.name": "ip-172-20-57-8.eu-west-1.compute.internal"
}
}
Benchmark Run Payload
{
"id": "16ee684c65c356617457f59f07b11210",
"type": "benchmark",
"timestamp": 1606322948648718268,
"timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
"originator": "benchmarks",
"category": "runtime",
"source": "host",
"name": "Kubernetes Benchmark Run Passed (with warnings)",
"description": "Kubernetes benchmark kube_bench_cis-1.6.0 completed.",
"severity": 4,
"agentId": 0,
"containerId": "",
"machineId": "0a:28:16:38:93:39",
"content": {
"taskId": "9",
"runId": "535de4fb-3fac-4716-b5c6-9c906226ed01",
"source": "host",
"schema": "kube_bench_cis-1.6.0",
"subType": "run",
"run": {
"passCount": 20,
"failCount": 0,
"warnCount": 27
}
},
"labels": {
"aws.accountId": "845151661675",
"aws.instanceId": "i-00280f61718cc25ba",
"aws.region": "eu-west-1",
"host.hostName": "ip-172-20-40-177",
"host.mac": "0a:28:16:38:93:39",
"kubernetes.cluster.name": "demo-env-prom",
"kubernetes.node.name": "ip-172-20-40-177.eu-west-1.compute.internal"
}
}
Host Scanning Payload
Incremental Report
This is the “vuln diff” report; it contains the list of added, removed,
or updated vulnerabilities that the host presents compared to the
previous scan.
[
{
"id": "167fddc1197bcc776d72f0f299e83530",
"type": "hostscanning",
"timestamp": 1621258212302,
"originator": "hostscanning",
"category": "hostscanning_incremental_report",
"source": "hostscanning",
"name": "Vulnerability updates - Host dev-vm",
"description": "",
"severity": 4,
"agentId": 0,
"containerId": "",
"machineId": "00:0c:29:e5:9e:51",
"content": {
"hostname": "dev-vm",
"mac": "00:0c:29:e5:9e:51",
"reportType": "incremental",
"added": [
{
"cve": "CVE-2020-27170",
"fixAvailable": "5.4.0-70.78",
"packageName": "linux-headers-5.4.0-67",
"packageType": "dpkg",
"packageVersion": "5.4.0-67.75",
"severity": "High",
"url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2020-27170",
"vulnerablePackage": "linux-headers-5.4.0-67:5.4.0-67.75"
},
{
"cve": "CVE-2019-9515",
"fixAvailable": "None",
"packageName": "libgrpc6",
"packageType": "dpkg",
"packageVersion": "1.16.1-1ubuntu5",
"severity": "Medium",
"url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2019-9515",
"vulnerablePackage": "libgrpc6:1.16.1-1ubuntu5"
}
],
"updated": [
{
"cve": "CVE-2018-17977",
"fixAvailable": "None",
"packageName": "linux-modules-5.4.0-72-generic",
"packageType": "dpkg",
"packageVersion": "5.4.0-72.80",
"severity": "Medium",
"url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2018-17977",
"vulnerablePackage": "linux-modules-5.4.0-72-generic:5.4.0-72.80"
},
{
"cve": "CVE-2021-3348",
"fixAvailable": "5.4.0-71.79",
"packageName": "linux-modules-extra-5.4.0-67-generic",
"packageType": "dpkg",
"packageVersion": "5.4.0-67.75",
"severity": "Medium",
"url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2021-3348",
"vulnerablePackage": "linux-modules-extra-5.4.0-67-generic:5.4.0-67.75"
},
{
"cve": "CVE-2021-29265",
"fixAvailable": "5.4.0-73.82",
"packageName": "linux-headers-5.4.0-67-generic",
"packageType": "dpkg",
"packageVersion": "5.4.0-67.75",
"severity": "Medium",
"url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2021-29265",
"vulnerablePackage": "linux-headers-5.4.0-67-generic:5.4.0-67.75"
},
{
"cve": "CVE-2021-29921",
"fixAvailable": "None",
"packageName": "python3.8-dev",
"packageType": "dpkg",
"packageVersion": "3.8.5-1~20.04.2",
"severity": "Medium",
"url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2021-29921",
"vulnerablePackage": "python3.8-dev:3.8.5-1~20.04.2"
}
],
"removed": [
{
"cve": "CVE-2021-26932",
"fixAvailable": "None",
"packageName": "linux-modules-5.4.0-67-generic",
"packageType": "dpkg",
"packageVersion": "5.4.0-67.75",
"severity": "Medium",
"url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2021-26932",
"vulnerablePackage": "linux-modules-5.4.0-67-generic:5.4.0-67.75"
},
{
"cve": "CVE-2020-26541",
"fixAvailable": "None",
"packageName": "linux-modules-extra-5.4.0-67-generic",
"packageType": "dpkg",
"packageVersion": "5.4.0-67.75",
"severity": "Medium",
"url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2020-26541",
"vulnerablePackage": "linux-modules-extra-5.4.0-67-generic:5.4.0-67.75"
},
{
"cve": "CVE-2014-4607",
"fixAvailable": "2.04-1ubuntu26.8",
"packageName": "grub-pc",
"packageType": "dpkg",
"packageVersion": "2.04-1ubuntu26.7",
"severity": "Medium",
"url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2014-4607",
"vulnerablePackage": "grub-pc:2.04-1ubuntu26.7"
}
]
},
"labels": {
"host.hostName": "dev-vm",
"cloudProvider.account.id": "",
"cloudProvider.host.name": "",
"cloudProvider.region": "",
"host.hostName": "ip-172-20-40-177",
"host.id": "d82e5bde1d992bedd10a640bdb2f052493ff4b3e03f5e96d1077bf208f32ea96",
"host.mac": "00:0c:29:e5:9e:51",
"host.os.name": "ubuntu",
"host.os.version": "20.04"
"kubernetes.cluster.name": "",
"kubernetes.node.name": ""
}
}
]
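The added, updated, and removed arrays above behave like a diff between two consecutive scans, keyed by CVE and vulnerable package. A minimal sketch of that logic (a hypothetical helper mirroring the payload fields, not Sysdig's implementation):

```python
def diff_vulns(previous, current):
    """Compute added/removed/updated sets between two scans.

    Each scan is a list of dicts carrying at least the 'cve',
    'vulnerablePackage', and 'fixAvailable' fields seen in the payload.
    """
    def key(v):
        return (v["cve"], v["vulnerablePackage"])

    prev = {key(v): v for v in previous}
    curr = {key(v): v for v in current}
    added = [v for k, v in curr.items() if k not in prev]
    removed = [v for k, v in prev.items() if k not in curr]
    # "updated" = still present, but some attribute (e.g. the fix) changed
    updated = [v for k, v in curr.items() if k in prev and v != prev[k]]
    return added, removed, updated

prev_scan = [
    {"cve": "CVE-2021-26932", "vulnerablePackage": "linux-modules:5.4.0-67.75", "fixAvailable": "None"},
    {"cve": "CVE-2021-3348", "vulnerablePackage": "linux-extra:5.4.0-67.75", "fixAvailable": "None"},
]
curr_scan = [
    {"cve": "CVE-2021-3348", "vulnerablePackage": "linux-extra:5.4.0-67.75", "fixAvailable": "5.4.0-71.79"},
    {"cve": "CVE-2020-27170", "vulnerablePackage": "linux-headers:5.4.0-67.75", "fixAvailable": "5.4.0-70.78"},
]
added, removed, updated = diff_vulns(prev_scan, curr_scan)
```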
Full Report
The full report contains all the vulnerabilities found during the first
host scan.
[
{
"id": "1680c8462f368eaf38d2f269d9de1637",
"type": "hostscanning",
"timestamp": 1621516069618,
"originator": "hostscanning",
"category": "hostscanning_full_report",
"source": "hostscanning",
"name": "Host ip-172-31-94-81 scanned",
"description": "",
"severity": 4,
"agentId": 0,
"containerId": "",
"machineId": "16:1f:b4:f5:02:03",
"content": {
"hostname": "ip-172-31-94-81",
"mac": "16:1f:b4:f5:02:03",
"reportType": "full",
"added": [
{
"cve": "CVE-2015-0207",
"fixAvailable": "None",
"packageName": "libssl1.1",
"packageType": "dpkg",
"packageVersion": "1.1.0l-1~deb9u3",
"severity": "Negligible",
"url": "https://security-tracker.debian.org/tracker/CVE-2015-0207",
"vulnerablePackage": "libssl1.1:1.1.0l-1~deb9u3"
},
{
"cve": "CVE-2016-2088",
"fixAvailable": "None",
"packageName": "libdns162",
"packageType": "dpkg",
"packageVersion": "1:9.10.3.dfsg.P4-12.3+deb9u8",
"severity": "Negligible",
"url": "https://security-tracker.debian.org/tracker/CVE-2016-2088",
"vulnerablePackage": "libdns162:1:9.10.3.dfsg.P4-12.3+deb9u8"
},
{
"cve": "CVE-2017-5123",
"fixAvailable": "None",
"packageName": "linux-headers-4.9.0-15-amd64",
"packageType": "dpkg",
"packageVersion": "4.9.258-1",
"severity": "Negligible",
"url": "https://security-tracker.debian.org/tracker/CVE-2017-5123",
"vulnerablePackage": "linux-headers-4.9.0-15-amd64:4.9.258-1"
},
{
"cve": "CVE-2014-2739",
"fixAvailable": "None",
"packageName": "linux-headers-4.9.0-15-common",
"packageType": "dpkg",
"packageVersion": "4.9.258-1",
"severity": "Negligible",
"url": "https://security-tracker.debian.org/tracker/CVE-2014-2739",
"vulnerablePackage": "linux-headers-4.9.0-15-common:4.9.258-1"
},
{
"cve": "CVE-2014-9781",
"fixAvailable": "None",
"packageName": "linux-kbuild-4.9",
"packageType": "dpkg",
"packageVersion": "4.9.258-1",
"severity": "Negligible",
"url": "https://security-tracker.debian.org/tracker/CVE-2014-9781",
"vulnerablePackage": "linux-kbuild-4.9:4.9.258-1"
},
{
"cve": "CVE-2015-8705",
"fixAvailable": "None",
"packageName": "libisc-export160",
"packageType": "dpkg",
"packageVersion": "1:9.10.3.dfsg.P4-12.3+deb9u8",
"severity": "Negligible",
"url": "https://security-tracker.debian.org/tracker/CVE-2015-8705",
"vulnerablePackage": "libisc-export160:1:9.10.3.dfsg.P4-12.3+deb9u8"
}
]
},
"labels": {
"agent.tag.distribution": "Debian",
"agent.tag.fqdn": "ec2-3-231-219-145.compute-1.amazonaws.com",
"agent.tag.test-type": "qa-hs",
"agent.tag.version": "9.13",
"host.hostName": "ip-172-31-94-81",
"host.id": "cbd8fc14e9116a33770453e0755cbd1e72e4790e16876327607c50ce9de25a4b",
"host.mac": "16:1f:b4:f5:02:03",
"host.os.name": "debian",
"host.os.version": "9.13"
"kubernetes.cluster.name": "",
"kubernetes.node.name": ""
}
}
]
Sysdig Platform Audit Payload
{
"id": "16f43920a0d70f005f136173fcec3375",
"type": "audittrail",
"timestamp": 1606322948648718268,
"timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
"originator": "ingestion",
"category": "",
"source": "auditTrail",
"name": "",
"description": "",
"severity": 0,
"agentId": 0,
"containerId": "",
"machineId": "",
"content": {
"timestampNs": 1654009775452000000,
"customerId": 1,
"userId": 454926,
"teamId": 46902,
"requestMethod": "GET",
"requestUri": "/api/integrations/discovery/",
"userOriginIP": "187.188.243.122",
"queryString": "cluster=demo-env-prom&namespace=sysdig-agent",
"responseStatusCode": 200,
"entityType": "integration",
"entityPayload": ""
},
"labels": {
"entityType": "integration"
}
}
Delete an Event Forwarding Integration
To delete an existing integration:
From the Settings module of the Sysdig Secure UI, navigate to the Events Forwarding tab.
Click the More Options (three dots) icon.
Click the Delete Integration button.
Click the Yes, delete button to confirm the change.
7.1.1 - Forwarding to Splunk
Prerequisites
Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses
to enable Sysdig to handle Splunk event forwarding.
To forward event data to Splunk:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Select Splunk from the drop-down menu.
Configure the required options:

Integration Name: Define an integration name.
URL: Define the URL of the Splunk service. This is the HTTP Event Collector that forwards the events to a Splunk deployment. Be sure to use the format scheme://host:port.
Token: This is the token that Sysdig uses to authenticate the connection to the HTTP Event Collector. This token is created when you create the Splunk Event Collector.
Optional: Configure additional Splunk parameters (Index, Source, Source Type) as desired.
Certificate: If you have configured the Certificates Management tool, you can select one of your uploaded certs here.
Index: The index where events are stored. Specify the index if you have selected one while configuring the HTTP Event Collector.
Source Type: Identifies the data structure of the event. For more information, see Source Type in the Splunk documentation. If left empty, each data type will have its own source type; see Appendix: Data Categories Mapped to Source Types for more details.
Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
Select whether or not you want to allow insecure connections (i.e. an invalid or self-signed certificate on the receiving side).
Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
Click the Save button to save the integration.

Here is an example of how policy events forwarded from Sysdig Secure are
displayed on Splunk:
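Under the hood, the URL and token configured above correspond to Splunk's HTTP Event Collector endpoint and its `Authorization: Splunk <token>` scheme. A sketch of the request that gets built (illustrative only; the host and token below are made up, and this is not Sysdig's code):

```python
import json

def build_hec_request(base_url, token, event, sourcetype="SysdigSecureEvents", index=None):
    """Assemble URL, headers, and body for a Splunk HEC event POST."""
    url = f"{base_url.rstrip('/')}/services/collector/event"
    headers = {
        "Authorization": f"Splunk {token}",  # HEC token auth scheme
        "Content-Type": "application/json",
    }
    payload = {"event": event, "sourcetype": sourcetype}
    if index:
        payload["index"] = index  # only if an index was configured
    return url, headers, json.dumps(payload)

url, headers, body = build_hec_request(
    "https://splunk.example.com:8088",              # hypothetical HEC host
    "11111111-2222-3333-4444-555555555555",         # hypothetical token
    {"type": "policy", "name": "Terminal shell in container"},
)
```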

Appendix: Data Categories Mapped to Source Types
Sysdig Data Type | Splunk Source Type
---|---
Monitor Events | SysdigMonitor
Policy Events (Legacy) | SysdigPolicy
Sysdig Platform Audit | SysdigSecureEvents
Benchmark Events | SysdigSecureEvents
Secure events compliance | SysdigSecureEvents
Host Vulnerabilities | SysdigSecureEvents
Runtime Policy Events | SysdigSecureEvents
Activity Audit | SysdigActivityAudit
7.1.2 - Forwarding to Syslog
Syslog stands for System Logging Protocol. It is a standard chiefly used by
network devices to send events and logs in a particular format to a
centralized system for storage and analysis. A Syslog event includes the
severity level, host IP, timestamps, diagnostic information, and so on.
Sysdig Event Forwarding allows you to send events gathered by Sysdig
Secure to a Syslog server.
Prerequisites
Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses
to enable Sysdig to handle event forwarding.
To forward event data to a Syslog server:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Click the Add Integration button.
Select Syslog from the drop-down menu.

Configure the required options:
Integration Name: Define an integration name.
Address: Specify the Syslog server where the events are forwarded. Enter a domain name or IP address. If a domain name resolves to several IP addresses, the first resolved address is used.
Port: Specify the port number.
Protocol: Choose the protocol depending on the server you are sending the logs to:
RFC 3164: The older version of the protocol; the default port and transport is 514/UDP.
RFC 5424: The current version of the protocol; the default port and transport is 514/UDP.
RFC 5425 (TLS): An extension of RFC 5424 that uses an encrypted channel; the default port and transport is 6514/TCP. Select this option if you want to use a certificate uploaded via Sysdig’s Certificates Management tool.
UDP/TCP: Define the transport layer protocol (UDP or TCP). Use TCP for security incidents, as it is far more reliable than UDP for handling network congestion and preventing packet loss. Note: RFC 5425 (TLS) only supports TCP.
Certificate: (Optional) Select a certificate you’ve uploaded via Sysdig’s Certificates Management tool. The RFC 5425 (TLS) protocol must be selected for this field to appear.
Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
Allow insecure connections: Toggle on if you want to allow insecure connections (i.e. an invalid or self-signed certificate on the receiving side).
Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
Click the Save button to save the integration.
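For reference, an RFC 5424 line carries a PRI field computed as facility × 8 + severity, followed by the protocol version, timestamp, hostname, app name, and message body. A minimal builder (illustrative; the app name, facility, and message values are made-up examples, not what Sysdig emits):

```python
def rfc5424_message(msg, hostname, app="sysdig-secure", facility=16, severity=4,
                    timestamp="2020-11-25T16:49:08.648718Z"):
    """Build an RFC 5424 syslog line.

    PRI = facility * 8 + severity; PROCID, MSGID, and STRUCTURED-DATA
    are left as the nil value '-'.
    """
    pri = facility * 8 + severity
    return f"<{pri}>1 {timestamp} {hostname} {app} - - - {msg}"

# facility 16 (local0), severity 4 (warning) -> PRI 132
line = rfc5424_message('{"policy": "Terminal shell in container"}', "qa-k8smetrics")
```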
7.1.3 - Forwarding to IBM Cloud Pak for Multicloud Management
Prerequisite: A grafeas-service-admin-id API key in IBM Cloud Pak for Multicloud Management.
To forward event data to IBM Cloud Pak for Multicloud Management:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Click the Add Integration button.
Select IBM MCM from the drop-down menu.
Configure the required options:

Integration Name: Define an integration name.
URL: This is the URL for your MCM API endpoint. It should be the same one you use to connect to the IBM MCM Cloud Pak console. Be sure to use the format scheme://host:port.
Grafeas API Key: Create a Grafeas API key that Sysdig will use to authorize and authenticate.
Account ID: (Optional) Leave blank to use the default value of id-mycluster-account, or provide a different account name here.
Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
Select whether or not you want to allow insecure connections (i.e. an invalid or self-signed certificate on the receiving side).
Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
Click the Save button to save the integration.
Here is an example of how events forwarded from Sysdig Secure are
displayed on the IBM Multicloud Management Console:

7.1.4 - Forwarding to IBM QRadar
Prerequisites
Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses
to enable Sysdig to handle event forwarding.
To forward event data to IBM QRadar:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Click the Add Integration button.
Select IBM QRadar from the drop-down menu.
Configure the required options:

Integration Name: Define an integration name.
Address: Specify the DNS address of the QRadar installation endpoint.
Port: The port to which data is sent; the transport protocol is hardcoded to TCP. 514/TCP is the default.
Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
Allow insecure connections: Toggle on if you want to allow insecure connections (i.e. an invalid or self-signed certificate on the receiving side).
Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
Click the Save button to save the integration.
See also: Installing Extensions from IBM’s Knowledge Center.
7.1.5 - Forwarding to Kafka Topic
Kafka is a distributed system consisting of servers and clients that
communicate via a high-performance TCP network protocol. It can be
deployed on bare-metal hardware, virtual machines, or containers in
on-premise as well as cloud environments.
Events are organized and durably stored in topics. In simplified terms, a topic is similar to a folder in a filesystem, and the events are the files in that folder.
Prerequisites
This integration is only for Sysdig On-Premises.
To forward secure data to Kafka:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Click the Add Integration button.
Select Kafka topic from the drop-down menu.
Configure the required options:

Integration Name: Define an integration name.
Brokers: Kafka server endpoints. A Kafka cluster may provide several brokers; each entry follows the hostname:port format (without a protocol scheme). You can list several using a comma-separated list.
Topic: The Kafka topic where you want to store the forwarded data.
Partitioner/Balancer: The algorithm that the client uses to multiplex data between the multiple brokers. For compatibility with the Java client, Murmur2 is used as the default partitioner. Supported algorithms are:
Murmur2
Round robin
Least bytes
Hash
CRC32
Compression: The compression standard used for the data. Supported standards include Gzip, Snappy, LZ4, and Zstd.
Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
Select whether or not you want to allow insecure connections (i.e. an invalid or self-signed certificate on the receiving side).
Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
Click the Save button to save the integration.
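The Murmur2 default mirrors the Kafka Java client's partitioner: the record key is hashed with Murmur2 and the result, with the sign bit cleared, is taken modulo the partition count, so a given key always lands on the same partition. A sketch of that scheme (a from-scratch reimplementation for illustration, not Sysdig's or Kafka's actual code):

```python
def murmur2(data: bytes) -> int:
    """32-bit Murmur2 hash, following the Kafka Java client's variant."""
    length = len(data)
    seed = 0x9747B28C
    m = 0x5BD1E995
    r = 24
    h = (seed ^ length) & 0xFFFFFFFF
    i = 0
    # Mix four bytes at a time into the hash
    while length - i >= 4:
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * m) & 0xFFFFFFFF
        k ^= k >> r
        k = (k * m) & 0xFFFFFFFF
        h = ((h * m) & 0xFFFFFFFF) ^ k
        i += 4
    # Handle the last few bytes of the input
    left = length - i
    if left == 3:
        h ^= data[i + 2] << 16
    if left >= 2:
        h ^= data[i + 1] << 8
    if left >= 1:
        h ^= data[i]
        h = (h * m) & 0xFFFFFFFF
    # Final avalanche
    h ^= h >> 13
    h = (h * m) & 0xFFFFFFFF
    h ^= h >> 15
    return h

def choose_partition(key: bytes, num_partitions: int) -> int:
    # Same formula as the Java client: clear the sign bit, then modulo
    return (murmur2(key) & 0x7FFFFFFF) % num_partitions
```

Because the choice depends only on the key bytes, records sharing a key keep their relative order within one partition.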
7.1.6 - Forwarding to Amazon SQS
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
Prerequisites
Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses
to enable Sysdig to handle event forwarding.
To forward event data to Amazon SQS:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Click the Add Integration button.
Select Amazon SQS from the drop-down menu.
Configure the required options:

- Integration Name: Define an integration name.
- Access Key and Access Secret: Enter your AWS access key and secret.
- Token: Enter the AWS token used.
- Region: Enter the AWS region where you created your Amazon SQS queue.
- Delay (Optional): Enter a value (in seconds) between 0 and 900 by which message delivery should be delayed.
- Metadata (Optional): Set up to 10 key-value headers with which the messages should be tagged. Entries can be string values.
- Queue: Enter your Amazon SQS queue.
- Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
- Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
Click the Save button to save the integration.
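The Delay and Metadata fields map onto SQS SendMessage parameters, which cap DelaySeconds at 900 and message attributes at 10 per message. A sketch of assembling those parameters (a hypothetical helper; the queue URL is made up, and the dict shape follows what an SDK such as boto3's send_message expects):

```python
def sqs_send_params(queue_url, body, delay=0, metadata=None):
    """Assemble SendMessage parameters, enforcing the SQS limits
    that the Delay and Metadata options above correspond to."""
    if not 0 <= delay <= 900:
        raise ValueError("DelaySeconds must be between 0 and 900")
    metadata = metadata or {}
    if len(metadata) > 10:
        raise ValueError("SQS allows at most 10 message attributes")
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "DelaySeconds": delay,
        "MessageAttributes": {
            k: {"DataType": "String", "StringValue": v}
            for k, v in metadata.items()
        },
    }

params = sqs_send_params(
    "https://sqs.us-east-1.amazonaws.com/123456789012/sysdig-events",  # hypothetical queue
    '{"policy": "Terminal shell in container"}',
    delay=30,
    metadata={"team": "secops"},
)
```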
7.1.7 - Forwarding to Google Chronicle
Google Chronicle is a cloud service, built as a specialized layer on top of core Google infrastructure, designed for enterprises to privately retain, analyze, and search the massive amounts of security and network telemetry they generate. Chronicle normalizes, indexes, correlates, and analyzes the data to provide instant analysis and context on risky activity.
Prerequisites
Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses
to enable Sysdig to handle event forwarding.
Google Chronicle v2 uses JSON format, which Sysdig does not currently support. Contact Google Chronicle customer support to request a v1 API key.
To forward event data to Chronicle:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Click the Add Integration button.
Select Chronicle from the drop-down menu.
Configure the required options:
- Integration Name: Define an integration name.
- API Key: Enter the API key. JSON format is currently not supported; contact Google Chronicle customer support to request a v1 API key.
- Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
- Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
Click the Save button to save the integration.
7.1.8 - Forwarding to Google PubSub
Google Pub/Sub allows services to communicate asynchronously and is used for streaming analytics and data integration pipelines to ingest and distribute data. It is equally effective as messaging-oriented middleware for service integration or as a queue to parallelize tasks. See Common Use Cases for more background detail.
Prerequisites
Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses
to enable Sysdig to handle event forwarding.
NOTE: The permissions for the service account must be either Editor or Admin. Publisher is not sufficient.
To forward event data to Google Pub/Sub:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Click the Add Integration button.
Select Google Pub/Sub from the drop-down menu.
Configure the required options:

Integration Name: Define an integration name.
Project: Enter the Cloud Console project name you created in Google Pub/Sub.
Topic: Enter the topic name you created.
JSON Credentials: Enter the service account credentials you created.
Attributes: If you have chosen to embed custom attributes as metadata in Pub/Sub messages, enter them here.
Ordering Key: If you chose to have subscribers receive messages in order, enter the ordering key information you set up.
Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
Click the Save button to save the integration.
7.1.9 - Forwarding to Google Security Command Center
Google Security Command Center or SCC is a centralized vulnerability and threat reporting service that helps you strengthen your security posture and provide asset inventory and discovery.
Supported data
Currently, only GCP Audit Log events can be forwarded to this integration.
Prerequisites
Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.
Enable the integration from the GCP console: select Enable APIs and Services and enable the following APIs:
- Security Command Center API
- Identity and Access Management (IAM) API
Service Account: A service account with the right permissions is required. The following example illustrates how to create one automatically from the terminal. The values PROJECT_ID and ORG_ID have to be provided. SERVICE_ACCOUNT refers to the desired name for the account. KEY_LOCATION refers to the desired name for the JSON output file that will need to be uploaded into the Sysdig UI in the next step.
export SERVICE_ACCOUNT=scc-servaccount
export PROJECT_ID=elevated-web-872901
export KEY_LOCATION=scckey.json
export ORG_ID=494436833222
gcloud iam service-accounts create $SERVICE_ACCOUNT \
--display-name "Service Account for USER" \
--project $PROJECT_ID
gcloud iam service-accounts keys create $KEY_LOCATION \
--iam-account $SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com
gcloud beta organizations add-iam-policy-binding $ORG_ID \
--member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com" \
--role='roles/securitycenter.admin'
This action can be performed only by an Administrator.
To forward event data to Google SCC:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Click the Add Integration button.
Select Google SCC from the drop-down menu.
Configure the required options:
- Integration Name: Define an integration name.
- Organization: Set the ID of your GCP organization.
- JSON Credentials: Upload the JSON credentials that you previously generated from a service account or user.
- Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. Note that since only GCP Audit Log events can be forwarded, only Runtime Policy events are shown.
- Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
- Click the Save button to save the integration.
7.1.10 - Forwarding to Sentinel
Microsoft Sentinel (formerly Azure Sentinel) is a security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution built on Azure services. See Microsoft’s Sentinel documentation for more detail.
Prerequisites
Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses
to enable Sysdig to handle event forwarding.
To successfully integrate Sentinel with Sysdig’s event forwarding, you must have access to a configured Log Analytics Workspace. Go there to retrieve the workspace ID and secret you will need for the integration:
- Open your Log Analytics Workspace.
- Navigate to Agents management and select Linux servers.
- Copy the workspace ID and primary key.
To forward event data to Sentinel:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Click the Add Integration button.
Select Microsoft Sentinel from the drop-down menu.
Configure the required options:

- Integration Name: Define an integration name.
- Workspace ID: Enter the workspace ID you copied from the Log Analytics Workspace.
- Secret: Enter the primary key you copied from the Log Analytics Workspace.
- Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded to Sentinel. The available list depends on the Sysdig features and products you have enabled.
- Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
- Click the Save button to save the integration.
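Behind the scenes, the workspace ID and primary key drive the Azure Log Analytics HTTP Data Collector API's SharedKey scheme: an HMAC-SHA256 over a canonical string, keyed with the base64-decoded primary key. A sketch of that signing step (illustrative only; the workspace ID, key, and body below are made up):

```python
import base64
import hashlib
import hmac

def build_auth_header(workspace_id, primary_key_b64, body, date_rfc1123):
    """Build the SharedKey Authorization header used by the
    Log Analytics HTTP Data Collector API."""
    # Canonical string: verb, content length, content type, x-ms-date, resource
    string_to_sign = (
        f"POST\n{len(body)}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    digest = hmac.new(base64.b64decode(primary_key_b64),
                      string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

# Hypothetical workspace ID and primary key, for illustration only
header = build_auth_header(
    "00000000-0000-0000-0000-000000000000",
    base64.b64encode(b"not-a-real-primary-key").decode(),
    b'{"event": "example"}',
    "Wed, 25 Nov 2020 16:49:08 GMT",
)
```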
7.1.11 - Forwarding to Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine at the heart of the Elastic Stack. Sysdig provides event forwarding to versions greater than or equal to:
- Elasticsearch 7
- Opensearch 1.2
For more information, see How to Ingest Data Into Elasticsearch Service.
Prerequisites
Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses
to enable Sysdig to handle event forwarding.
You must have an instance of Elasticsearch running and permissions to access it.
Log in to Sysdig Secure as admin
.
From the Settings
module, navigate to the
Events Forwarding
tab.
Click the Add Integration
button.
Select Elasticsearch
from the drop-down menu.
Configure the required options:

Integration Name: Define an integration name.
Endpoint: Enter the specific Elasticsearch instance where the data will be saved. For ES Cloud and ES Cloud Enterprise, the endpoint can be found under the Deployments page:

Index Name: Name of the index under which the data will be stored. See also: https://www.elastic.co/blog/what-is-an-elasticsearch-index
Datastreams are currently not supported. Make sure to configure your Elasticsearch index template with the “datastream” option set to off. That way, data will be stored on indices.
Authentication: Basic authentication is the most common format (username:password). The given user must have write privileges in Elasticsearch; you can query the available users.
Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
Allow insecure connections: Used to skip certificate validations when using HTTPS
Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
- Click the Save button to save the integration.
Timestamp Mapping
To handle timestamps directly in Elasticsearch, you might want to map them to the appropriate field type. Timestamps have nanosecond resolution in Sysdig and they are available both in epoch timestamp and in RFC 3339 format.
The best approach is to use the date_nanos field type and define an explicit mapping in your Elasticsearch instance. Perform a PUT /<index>/_mapping API call against the index you are storing data into, using the following payload:
{
  "properties": {
    "timestampRFC3339Nano": {
      "type": "date_nanos",
      "format": "strict_date_optional_time_nanos"
    }
  }
}
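As a sketch, the mapping call above can be scripted as follows. The endpoint, index name, and credentials here are placeholders (assumptions), not values from this document; adjust them to your Elasticsearch deployment.

```shell
# Placeholders: adjust to your deployment and credentials.
ES_URL="https://localhost:9200"
INDEX="sysdig-events"

# The date_nanos mapping payload from the step above:
MAPPING=$(cat <<'EOF'
{
  "properties": {
    "timestampRFC3339Nano": {
      "type": "date_nanos",
      "format": "strict_date_optional_time_nanos"
    }
  }
}
EOF
)

# Validate the payload locally before sending it:
echo "$MAPPING" | python3 -m json.tool > /dev/null && echo "payload OK"

# Then apply it with a user that has write privileges on the index:
# curl -u "user:password" -X PUT "$ES_URL/$INDEX/_mapping" \
#      -H 'Content-Type: application/json' -d "$MAPPING"
```

The curl call is left commented out because it requires a reachable cluster and valid credentials.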
Alternatively, you can define the mapping through the Kibana interface, if you use it.
7.1.12 - Forwarding to Webhook
Webhooks are “user-defined HTTP callbacks.” They are usually triggered
by some event. When that event occurs, the source site makes an HTTP
request to the URL configured for the webhook. Users can configure them
to cause events on one site to invoke behavior on another.
Sysdig Secure leverages webhooks to support integrations that are not
covered by any other particular integration/protocol present in the
Event Forwarder list.
Prerequisites
Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses
to enable Sysdig to handle event forwarding.
To forward secure data to a Webhook:
Log in to Sysdig Secure as admin.
From the Settings module, navigate to the Events Forwarding tab.
Click the Add Integration button.
Select Webhook from the drop-down menu.
Configure the required options:

Integration Name: Define an integration name.
Endpoint: Webhook endpoint following the schema protocol://hostname:port (for example, https://hostname:port).
Authentication: Four different methods are supported:
Basic authentication: If you select this method, you must fill the Secret field with the desired username:password. No whitespace; use the colon character as separation.
Bearer token: If you select this method, you must fill the Secret field with the token provided by the receiving application.
Signature header: If you select this method, you must fill the Secret field with the cryptographic key provided by the software on the other end.
Certificate: Select this option if you want to use a certificate uploaded via Sysdig’s Certificates Management tool. The Certificate field will then appear; select the appropriate cert from the drop-down menu.
Secret: Authorization/Authentication data. This field depends on the authentication method selected above.
Custom Headers: Any number of custom headers defined by the user to accommodate additional parameters required on the receiving end. To avoid interfering with the regular webhook protocol and expected headers, certain reserved headers cannot be set using this form.
Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
Due to the heavy connection-establishment overhead imposed by the HTTP protocol, the Secure policy events are grouped by time proximity into batches and sent together in a single request as a JSON array. In other words, every HTTP request will contain a JSON array of one or more policy runtime events.
Select whether or not you want to allow insecure connections (i.e. an invalid or self-signed certificate on the receiving side).
Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
Click the Save button to save the integration.
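The batching behavior described above means the receiving endpoint should always parse the request body as a JSON array, even for a single event. The two-event batch below is illustrative (simplified field names, not the full Sysdig payload schema):

```shell
# Illustrative batch; field names are simplified, not the full Sysdig schema.
BATCH='[{"id":"evt-1","severity":4},{"id":"evt-2","severity":7}]'

# A receiver parses the body as an array, even when it holds a single event:
COUNT=$(echo "$BATCH" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)))')
echo "events in this delivery: $COUNT"
```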
7.1.13 - Event Enrichment with Agent Labels
Labels are default fields collected by the Sysdig Agent, on top of those specified in rule output. The agent collects them by default, and they are shown in the events feed and in your event forwarder destination.
Enable/disable
You can decide to enable or disable this feature. It is enabled by default.
event_labels:
  enabled: true/false
Adding Custom Labels
The agent ships with a set of default labels. You can both include additional labels and exclude labels from the default set.
event_labels:
  exclude:
    - custom.label.to.exclude

event_labels:
  include:
    - custom.label.to.include
Example of an enriched event being sent to Splunk:
{
  agentId: 1658033
  category: runtime
  containerId: d9f5e4a9aedd
  content: {
    baselineId:
    falsePositive: false
    fields: {
      container.id: d9f5e4a9aedd
      container.image.repository: sysdiglabs/example-voting-app-voter
      container.name: k8s_voter_voter-77d98548bc-hmkpc_example-voting-app_d27f532a-41f5-49f3-a140-99afccbac5e4_63603
      evt.category: process
      falco.rule: Launch Root User Container
      fd.rip: <NA>
      fd.rport: <NA>
      proc.cmdline: container:d9f5e4a9aedd
      proc.name: container:d9f5e4a9aedd
      proc.pid: -1
      proc.pname: <NA>
      proc.ppid: -1
    }
    matchedOnDefault: false
    output: Outbound connection to IP/Port flagged by container:d9f5e4a9aedd (command=container:d9f5e4a9aedd port=<NA> ip=<NA> container=k8s_voter_voter-77d98548bc-hmkpc_example-voting-app_d27f532a-41f5-49f3-a140-99afccbac5e4_63603 (id=d9f5e4a9aedd) image=sysdiglabs/example-voting-app-voter) extra fields = (<NA> -1 process -1 container:d9f5e4a9aeddproc.aname container:d9f5e4a9aedd -1proc.apid )
    policyId: 10009837
    policyOrigin: Sysdig
    policyVersion: 37
    ruleName: Launch Root User Container
    ruleTags: [
      network
      mitre_execution
    ]
    ruleType: RULE_TYPE_FALCO
  }
  description: This Notable Events policy contains rules which may indicate undesired behavior including security threats. The rules are more generalized than Threat Detection policies and may result in more noise. Tuning will likely be required for the events generated from this policy.
  id: 1726f87daaaee3960301e17f9b06c3cf
  labels: {
    agent.tag.role: demo-kube-eks
    aws.accountId: 845151661675
    aws.instanceId: i-0b767c5bc9b2f89aa
    aws.region: us-east-1
    container.image.digest: sha256:4cde188c9b43d02197662b5d5323ea0ba8f40efdacf672fe9bd1eb010ad207de
    container.image.id: 27f385e91e79
    container.image.repo: sysdiglabs/example-voting-app-voter
    container.image.tag: 0.1
    container.label.io.kubernetes.container.name: voter
    container.label.io.kubernetes.pod.name: voter-77d98548bc-hmkpc
    container.label.io.kubernetes.pod.namespace: example-voting-app
    container.name: k8s_voter_voter-77d98548bc-hmkpc_example-voting-app_d27f532a-41f5-49f3-a140-99afccbac5e4_63603
    host.hostName: ip-192-168-22-221.ec2.internal
    host.mac: 0a:a2:c4:d3:fd:ef
    kubernetes.cluster.name: demo-kube-eks
    kubernetes.deployment.name: voter
    kubernetes.namespace.name: example-voting-app
    kubernetes.node.name: ip-192-168-22-221.ec2.internal
    kubernetes.pod.name: voter-77d98548bc-hmkpc
    kubernetes.replicaSet.name: voter-77d98548bc
  }
  machineId: 0a:a2:c4:d3:fd:ef
  name: Sysdig Runtime Notable Events
  originator: policy
  severity: 4
  source: syscall
  timestamp: 1668293930605536300
  timestampRFC3339Nano: 2022-11-12T22:58:50.60553615Z
  type: policy
}
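As the section notes, timestamps carry nanosecond resolution in both epoch and RFC 3339 form. As a sanity check, the sample event's two timestamp fields encode the same instant; a quick sketch converting the epoch-nanosecond value to a UTC time (microsecond precision is enough for the comparison):

```shell
# Epoch-nanosecond timestamp from the sample event above:
TS_NANOS=1668293930605536300

# Convert to an RFC 3339-style UTC time with Python's stdlib:
OUT=$(python3 -c "
import datetime
dt = datetime.datetime.fromtimestamp($TS_NANOS / 1e9, tz=datetime.timezone.utc)
print(dt.isoformat())
")
echo "$OUT"   # matches the sample's 2022-11-12T22:58:50.6055...Z up to sub-microsecond digits
```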
7.2 - Kubernetes Audit Logging
Kubernetes log integration enables Sysdig Secure to use Kubernetes audit
log data for Falco Rules and activity audit.
We now provide examples for the distributions and platforms listed
below.
The integration allows auditing of:
Creation and destruction of pods, services, deployments, daemon
sets, etc.
Creating/updating/removing config maps or secrets
Attempts to subscribe to changes to any endpoint
Review the Types of Secure
Integrations table for more
context. The Audit Logging (Kubernetes) column lists the various options
and their levels of support.
To enable this feature in Sysdig Secure SaaS, install the Sysdig Admission Controller and set features.k8sAuditDetections to true. After installing, create Kubernetes Audit policies; you can then view results in the UI.
View Results in the UI
Policies will need to be created to use the new Falco rules for Kubernetes audit logging. For information on creating policies, refer to the Policies documentation.
View Audit Logging Rules
The Kubernetes audit logging rules can be viewed in the Sysdig Policies
Rules Editor, found in the Policies module. To view the audit rules:
From the Policies module, navigate to the Rules Editor tab.
Open the drop-down menu for the default rules, and select k8s_audit_rules.yaml:

View Audit Events
Kubernetes audit events will now be routed to the Sysdig agent daemon
set within the cluster.
Once the policies are created, the audit events can be observed via the Sysdig Secure Events module.
LEGACY INSTALLATION INSTRUCTIONS
These methods of enabling Kubernetes audit logging on Sysdig Secure SaaS have been replaced by simply installing the Sysdig Admission Controller. See also the release note of July 27, 2021.
If your cluster already has Kubernetes audit logging enabled, there is no need to change to the Admission Controller method.
Prerequisites
Install Sysdig Agent and Apply the Agent Service
These instructions assume that the Sysdig agent has already been
deployed to the Kubernetes cluster. See Agent
Installation for details.
When the agent(s) are installed, have the Sysdig agent service account,
secret, configmap, and daemonset information on hand.
If the sysdig-agent-service.yaml was not explicitly deployed during agent installation, apply it now:
kubectl apply -f https://raw.githubusercontent.com/draios/sysdig-cloud-scripts/master/agent_deploy/kubernetes/sysdig-agent-service.yaml -n sysdig-agent
Note: It is also assumed that the agent has been deployed in the sysdig-agent namespace; if it is not, you might need to adjust the commands.
If your agent version is less than 11.2.0: You must add a variable, k8s_audit_server_url, to your agent configmap, and set it to 0.0.0.0.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sysdig-agent
data:
  dragent.yaml: |
    configmap: true
    ...
    security:
      k8s_audit_server_url: 0.0.0.0
For agent versions 11.2.0+, this step is already configured and no
action is needed.
Choose Enablement Steps
Sysdig has tested Kubernetes audit log integration on a variety of
platforms and distributions. Each requires different steps, as detailed
in the sections below.
The routing of Kubernetes audit events has changed rapidly between
Kubernetes versions. For more information, review the Kubernetes
documentation.
Routing is accomplished via either:
Webhook backend: Kubernetes version >= 1.11, or
Dynamic backend with Audit Sink: Kubernetes version >= 1.13
and <1.19 (deprecated since 1.19)
The table below summarizes the tested options:
Enable Kubernetes Audit Logging
These instructions assume that the Kubernetes cluster has NO audit
configuration or logging in place. The steps add configuration only to
route audit log messages to the Sysdig agent.
There is a beta script automating many of these steps, which is suitable for proof-of-concept/non-production environments. In any case, we recommend reading the step-by-step instructions carefully before continuing.
OpenShift 3.11
Openshift 3.11 only supports webhook backends (described as “Advanced Audit” in the Openshift Documentation).
Follow the steps below on the Kubernetes API master node:
Copy the provided audit-policy.yaml file to the Kubernetes API master node in the /etc/origin/master directory. (The file will be picked up by OpenShift services running in containers because this directory is mounted into the Kube API server container at /etc/origin/master.)
Create a Webhook Configuration File and copy it to the Kubernetes API master node, in the /etc/origin/master directory.
Modify the master configuration by adding the following to your /etc/origin/master/master-config.yaml file, replacing any existing auditConfig: entry.
auditConfig:
  enabled: true
  maximumFileSizeMegabytes: 10
  maximumRetainedFiles: 1
  auditFilePath: "/etc/origin/master/k8s_audit_events.log"
  logFormat: json
  webHookMode: "batch"
  webHookKubeConfig: /etc/origin/master/webhook-config.yaml
  policyFile: /etc/origin/master/audit-policy.yaml
One way to do this is to use oc ex config patch. Assuming the above content were in a file audit-patch.yaml, and you had copied /etc/origin/master/master-config.yaml to /tmp/master-config.yaml.original, you could run:
Restart the API server by running the following:
# sudo /usr/local/bin/master-restart api
# sudo /usr/local/bin/master-restart controllers
Once restarted, the server will route Kubernetes audit events to the
Sysdig agent service.
MiniShift 3.11
Like OpenShift 3.11, Minishift 3.11 supports webhook backends, but the
way Minishift launches the Kubernetes API server is different.
Therefore, the command line arguments are somewhat different than in the
instructions above.
Copy the provided audit-policy.yaml file to the Minishift VM into the directory /var/lib/minishift/base/kube-apiserver/. (The file will be picked up by Minishift services running in containers because this directory is mounted into the kube API server container at /etc/origin/master.)
Create a Webhook Configuration File and copy it to the Minishift VM into the directory /var/lib/minishift/base/kube-apiserver/.
Modify the master configuration by adding the following to /var/lib/minishift/base/kube-apiserver/master-config.yaml on the Minishift VM, merging/updating as required.
Note: master-config.yaml also exists in other directories such as /var/lib/minishift/base/openshift-apiserver and /var/lib/minishift/base/openshift-controller-manager/. You should modify the one in kube-apiserver:
kubernetesMasterConfig:
  apiServerArguments:
    audit-log-maxbackup:
    - "1"
    audit-log-maxsize:
    - "10"
    audit-log-path:
    - /etc/origin/master/k8s_audit_events.log
    audit-policy-file:
    - /etc/origin/master/audit-policy.yaml
    audit-webhook-batch-max-wait:
    - 5s
    audit-webhook-config-file:
    - /etc/origin/master/webhook-config.yaml
    audit-webhook-mode:
    - batch
Restart the API server by running the following (for Minishift):
# minishift openshift restart
Once restarted, the server will route Kubernetes audit events to the
Sysdig agent service.
OpenShift 4.2, 4.3
By default, Openshift 4.2/4.3 enables Kubernetes API server logs and
makes them available on each master node, at the
path /var/log/kube-apiserver/audit.log
. However, the API server is not
configured by default with the ability to create dynamic backends.
You must first enable the creation of dynamic backends by changing the
API server configuration. You then create audit sinks to route audit
events to the Sysdig agent.
Run the following to update the API server configuration:
oc patch kubeapiserver cluster --type=merge -p '{"spec":{"unsupportedConfigOverrides":{"apiServerArguments":{"audit-dynamic-configuration":["true"],"feature-gates":["DynamicAuditing=true"],"runtime-config":["auditregistration.k8s.io/v1alpha1=true"]}}}}'
Wait for the API server to restart with the updated configuration.
Create a Dynamic Audit Sink. Once the dynamic audit sink is created, it will route Kubernetes audit events to the Sysdig agent service.
Kops
You will modify the cluster configuration using kops set, update the configuration using kops update, and then perform a rolling update using kops rolling-update.
Create a Webhook Configuration File and save it locally.
Get the current cluster configuration and save it to a file:
kops get cluster <your cluster name> -o yaml > cluster-current.yaml
To ensure that webhook-config.yaml is available on each master node at /var/lib/k8s_audit, and that the kube-apiserver process is run with the required arguments to enable the webhook backend, edit cluster.yaml to add/modify the fileAssets and kubeAPIServer sections as follows:
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
spec:
  ...
  fileAssets:
  - name: webhook-config
    path: /var/lib/k8s_audit/webhook-config.yaml
    roles: [Master]
    content: |
      <contents of webhook-config.yaml go here>
  - name: audit-policy
    path: /var/lib/k8s_audit/audit-policy.yaml
    roles: [Master]
    content: |
      <contents of audit-policy.yaml go here>
  ...
  kubeAPIServer:
    auditLogPath: /var/lib/k8s_audit/audit.log
    auditLogMaxBackups: 1
    auditLogMaxSize: 10
    auditWebhookBatchMaxWait: 5s
    auditPolicyFile: /var/lib/k8s_audit/audit-policy.yaml
    auditWebhookConfigFile: /var/lib/k8s_audit/webhook-config.yaml
  ...
A simple way to do this using yq would be with the following script:
cat <<EOF > merge.yaml
spec:
  fileAssets:
  - name: webhook-config
    path: /var/lib/k8s_audit/webhook-config.yaml
    roles: [Master]
    content: |
$(cat webhook-config.yaml | sed -e 's/^/      /')
  - name: audit-policy
    path: /var/lib/k8s_audit/audit-policy.yaml
    roles: [Master]
    content: |
$(cat audit-policy.yaml | sed -e 's/^/      /')
  kubeAPIServer:
    auditLogPath: /var/lib/k8s_audit/audit.log
    auditLogMaxBackups: 1
    auditLogMaxSize: 10
    auditWebhookBatchMaxWait: 5s
    auditPolicyFile: /var/lib/k8s_audit/audit-policy.yaml
    auditWebhookConfigFile: /var/lib/k8s_audit/webhook-config.yaml
EOF
yq m -a cluster-current.yaml merge.yaml > cluster.yaml
Configure Kops with the new cluster configuration:
kops replace -f cluster.yaml
Update the cluster configuration to prepare changes to the cluster:
kops update cluster <your cluster name> --yes
Perform a rolling update to redeploy the master nodes with the new
files and API server configuration:
kops rolling-update cluster --yes
GKE (Google)
These instructions assume you have already created a cluster and configured the gcloud and kubectl command-line programs to interact with the cluster. Note the known limitations, below.
GKE already provides Kubernetes audit logs, but the logs are exposed
using Stackdriver and are in a different format than the native format
used by Kubernetes.
To simplify things, we have written a bridge
program that
reads audit logs from Stackdriver, reformats them to match the
Kubernetes-native format, and sends the logs to a configurable webhook
and to the Sysdig agent service.
Create a Google Cloud (not Kubernetes) service account and key that
has the ability to read logs:
$ gcloud iam service-accounts create swb-logs-reader --description "Service account used by stackdriver-webhook-bridge" --display-name "stackdriver-webhook-bridge logs reader"
$ gcloud projects add-iam-policy-binding <your gce project id> --member serviceAccount:swb-logs-reader@<your gce project id>.iam.gserviceaccount.com --role 'roles/logging.viewer'
$ gcloud iam service-accounts keys create $PWD/swb-logs-reader-key.json --iam-account swb-logs-reader@<your gce project id>.iam.gserviceaccount.com
Create a Kubernetes secret containing the service account keys:
kubectl create secret generic stackdriver-webhook-bridge --from-file=key.json=$PWD/swb-logs-reader-key.json -n sysdig-agent
Deploy the bridge program to your cluster using the
provided stackdriver-webhook-bridge.yaml file:
kubectl apply -f stackdriver-webhook-bridge.yaml -n sysdig-agent
The bridge program routes audit events to the domain name sysdig-agent.sysdig-agent.svc.cluster.local, which corresponds to the sysdig-agent service you created either when deploying the agent or as a prerequisite step.
GKE Limitations
GKE uses a Kubernetes audit policy that emits a more limited set of information than the one recommended by Sysdig. As a result, there are several limitations when retrieving Kubernetes audit information for the Events feed and Activity Audit features in Sysdig Secure.
Request Object
In particular, audit events for config maps in GKE generally do not contain a requestObject field with the object being created/modified.
Pod exec does not include command/container
For many Kubernetes distributions, an audit event representing a pod exec includes the command and specific container as arguments to the requestURI. For example:
"requestURI": "/api/v1/namespaces/default/pods/nginx-deployment-7998647bdf-phvq7/exec?command=bash&container=nginx1&container=nginx1&stdin=true&stdout=true&tty=true"
In GKE, the audit event is missing those request parameters.
Implications for the Event Feed
If the rule condition trigger includes a field that is not available in
the Kubernetes audit log provided by GKE, the rule will not trigger.
As a result, the following rule from k8s_audit_rules.yaml will not trigger: Create/Modify Configmap With Private Credentials. (The contents of configmaps are not included in audit logs, so the contents cannot be examined for sensitive information.)
This will limit the information that can be displayed in the outputs of rules. For example, the command=%ka.uri.param[command] output variable in the Attach/Exec Pod rule will always return N/A.
Implications for Activity Audit
kubectl exec elements will not be scoped to the cluster name; they will only be visible when scoping by entire infrastructure.
A kubectl exec item in Activity Audit will not display command or container information.
Drilling down into a kubectl exec will not provide the container activity, as there is no information that allows Sysdig to correlate the kubectl exec action with an individual container.
EKS (Amazon)
These instructions were verified with eks.5 on Kubernetes v1.14 for both
AWS public cloud and AWS Outposts.
Amazon EKS does not provide webhooks for audit logs, but it allows audit
logs to be forwarded
to CloudWatch. To access
CloudWatch logs from the Sysdig agent, proceed as follows:
Enable CloudWatch logs for your EKS cluster.
Allow access to CloudWatch from the worker nodes.
Add a new deployment that polls CloudWatch and forwards events to
the Sysdig agent.
You can find an example
configuration that can be
implemented with the AWS UI, along with the code and the image for an
example audit log forwarder. (In a production system this would be
implemented as IaC scripts.)
Please note that CloudWatch is an additional AWS paid offering. In
addition, with this solution, all the pods running on the worker nodes
will be allowed to read CloudWatch logs through AWS APIs.
AKS (Azure)
Requirements
The installation script (below) has the following command-line tool
requirements:
Installation
Execute the following script:
curl -s https://raw.githubusercontent.com/sysdiglabs/aks-audit-log/master/install-aks-audit-log.sh | bash -s -- -g YOUR_RESOURCE_GROUP_NAME -c YOUR_AKS_CLUSTER_NAME
Some resources will be created in the same resource group as your
cluster:
Storage Account, to coordinate event consumers
Event Hubs, to receive audit log events
Diagnostic setting in the cluster, to send audit log to Event Hubs
Kubernetes deployment aks-audit-log-forwarder, to forward the log to
Sysdig agent
If everything worked as expected, you can verify that the audit logs are being forwarded by executing:
kubectl get pods -n sysdig-agent
# take note of the pod name for aks-audit-log-forwarder
kubectl logs aks-audit-log-forwarder-XXXX -f
For additional information, optional parameters, and architecture
details, see the
repository.
To Uninstall
Use the same parameters as for installation. The script will delete all
created resources and configurations.
curl -s https://raw.githubusercontent.com/sysdiglabs/aks-audit-log/master/uninstall-aks-audit-log.sh | bash -s -- -g YOUR_RESOURCE_GROUP_NAME -c YOUR_AKS_CLUSTER_NAME
RKE (Rancher) with Kubernetes 1.13+
These instructions were verified with RKE v1.0.0 and Kubernetes v1.16.3.
It should work with versions as old as Kubernetes v1.13.
Audit support is already enabled by default, but the audit policy must
be updated to provide additional granularity. These instructions enable
a webhook backend pointing to the agent’s service. Dynamic audit
backends are not supported as there isn’t a way to enable the audit
feature flag.
On each Kubernetes API master node, create the directory /var/lib/k8s_audit.
On each Kubernetes API master node, copy the provided audit-policy.yaml file into the directory /var/lib/k8s_audit. (This directory will be mounted into the API server, giving it access to the audit/webhook files.)
Create a Webhook Configuration File and copy it to each Kubernetes API master node, into the directory /var/lib/k8s_audit.
Modify your RKE cluster configuration cluster.yml to add extra_args and extra_binds sections to the kube-api section. Here’s an example:
kube-api:
  ...
  extra_args:
    audit-policy-file: /var/lib/k8s_audit/audit-policy.yaml
    audit-webhook-config-file: /var/lib/k8s_audit/webhook-config.yaml
    audit-webhook-batch-max-wait: 5s
  extra_binds:
  - /var/lib/k8s_audit:/var/lib/k8s_audit
  ...
This changes the command-line arguments for the API server to use an
alternate audit policy and to use the webhook backend you created.
Restart the RKE cluster via rke up.
IKS (IBM)
IKS supports routing Kubernetes audit events to a single configurable
webhook backend URL. It does not support dynamic audit sinks and does
not support the ability to change the audit policy that controls which
Kubernetes audit events are sent.
The instructions below were adapted from the IBM-provided
documentation
on how to integrate with Fluentd. It is expected that you are familiar
with (or will review) the IKS tools for forwarding cluster and app logs
described there.
Limitation: The Kubernetes default audit policy generally does not include events at the Request or RequestResponse levels, meaning that any rules that look in detail at the objects being created/modified (e.g. rules using the ka.req.* and ka.resp.* fields) will not trigger. This includes the following rules:
Create Disallowed Pod
Create Privileged Pod
Create Sensitive Mount Pod
Create HostNetwork Pod
Pod Created in Kube Namespace
Create NodePort Service
Create/Modify Configmap With Private Credentials
Attach to cluster-admin Role
ClusterRole With Wildcard Created
ClusterRole With Write Privileges Created
ClusterRole With Pod Exec Created
These instructions describe how to redirect from Fluentd to the Sysdig
agent service.
Set the webhook backend URL to the IP address of the sysdig-agent
service:
http://$(kubectl get service sysdig-agent -o=jsonpath={.spec.clusterIP} -n sysdig-agent):7765/k8s_audit
Verify that the webhook backend URL has been set:
ibmcloud ks cluster master audit-webhook get --cluster <cluster_name_or_ID>
Apply the webhook to your Kubernetes API server by refreshing the
cluster master. It may take several minutes for the master to
refresh.
ibmcloud ks cluster master refresh --cluster <cluster_name_or_ID>
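The webhook backend URL set in step 1 is simply the agent service's ClusterIP plus the agent's audit port (7765) and path (/k8s_audit). A sketch with a placeholder IP, to make the URL's shape explicit:

```shell
# CLUSTER_IP is a placeholder; in a live cluster you would obtain it with:
#   kubectl get service sysdig-agent -o=jsonpath={.spec.clusterIP} -n sysdig-agent
CLUSTER_IP="10.96.0.12"
WEBHOOK_URL="http://${CLUSTER_IP}:7765/k8s_audit"
echo "$WEBHOOK_URL"
```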
Minikube 1.11+
These instructions were verified using Minikube 1.19.0. Other Minikube versions should also work as long as they run Kubernetes version 1.11 or later.
In all cases below, “the Minikube VM” refers to the VM created by
Minikube. In cases where you’re using --vm-driver=none
, this means the
local machine.
Create the directory /var/lib/k8s_audit on the master node. (On Minikube, it must be on the Minikube VM.)
For Kubernetes 1.11 to 1.18: Copy the provided audit-policy.yaml file into the directory /var/lib/k8s_audit. (This directory will be mounted into the API server, giving it access to the audit/webhook files. On Minikube, it must be on the Minikube VM.)
For Kubernetes 1.19: Use this audit-policy.yaml file instead.
Create a Webhook Configuration File and copy it to each Kubernetes API master node, into the directory /var/lib/k8s_audit.
Modify the Kubernetes API server manifest at /etc/kubernetes/manifests/kube-apiserver.yaml, adding the following command-line arguments:
--audit-log-path=/var/lib/k8s_audit/k8s_audit_events.log
--audit-policy-file=/var/lib/k8s_audit/audit-policy.yaml
--audit-log-maxbackup=1
--audit-log-maxsize=10
--audit-webhook-config-file=/var/lib/k8s_audit/webhook-config.yaml
--audit-webhook-batch-max-wait=5s
Command-line arguments are provided in the container spec as arguments to the program /usr/local/bin/kube-apiserver. The relevant section of the manifest will look like this:
spec:
  containers:
  - command:
    - kube-apiserver --allow-privileged=true --anonymous-auth=false
      --audit-log-path=/var/lib/k8s_audit/audit.log
      --audit-policy-file=/var/lib/k8s_audit/audit-policy.yaml
      --audit-log-maxbackup=1
      --audit-log-maxsize=10
      --audit-webhook-config-file=/var/lib/k8s_audit/webhook-config.yaml
      --audit-webhook-batch-max-wait=5s
    ...
Modify the Kubernetes API server manifest at /etc/kubernetes/manifests/kube-apiserver.yaml to add a mount of /var/lib/k8s_audit into the kube-apiserver container. The relevant sections look like this:
volumeMounts:
- mountPath: /var/lib/k8s_audit/
  name: k8s-audit
  readOnly: true
...
volumes:
- hostPath:
    path: /var/lib/k8s_audit
    type: DirectoryOrCreate
  name: k8s-audit
...
Modifying the manifest will cause the Kubernetes API server
automatically to restart. Once restarted, it will route Kubernetes
audit events to the Sysdig agent’s service.
Prepare Webhook or (Legacy) Dynamic Backend
Most of the platform-specific instructions will use one of these
methods.
Create a Webhook Configuration File
Sysdig provides a templated resource file that sends audit events to an
IP associated with the Sysdig agent service, via port 7765.
It is “templated” in that the actual IP is defined in an environment variable, AGENT_SERVICE_CLUSTERIP, which can be plugged in using a program like envsubst.
Download webhook-config.yaml.in.
Run the following to fill in the template file with the ClusterIP address associated with the sysdig-agent service you created, either when installing the agent or in the prerequisite step:
AGENT_SERVICE_CLUSTERIP=$(kubectl get service sysdig-agent -o=jsonpath={.spec.clusterIP} -n sysdig-agent) envsubst < webhook-config.yaml.in > webhook-config.yaml
Note: Although service domain names like sysdig-agent.sysdig-agent.svc.cluster.local cannot be resolved from the Kubernetes API server (API servers are typically run as pods but are not really a part of the cluster), the ClusterIPs associated with those services are routable.
Using a webhook backend to route audit events is a feature available
from Kubernetes v1.11+. See Kubernetes’ documentation
for
background info.
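The templating step above can be sketched as follows. This uses sed in place of envsubst (in case GNU gettext's envsubst is not installed), and the one-line template body is illustrative, not the full webhook-config.yaml.in from the Sysdig repository:

```shell
# Work in a temporary directory so no files are left behind.
TMP=$(mktemp -d)

# Illustrative template; the real webhook-config.yaml.in has more fields.
# The single-quoted heredoc keeps $AGENT_SERVICE_CLUSTERIP literal.
cat > "$TMP/webhook-config.yaml.in" <<'EOF'
server: http://$AGENT_SERVICE_CLUSTERIP:7765/k8s_audit
EOF

# Placeholder ClusterIP; in a real cluster this comes from kubectl, as above.
CLUSTER_IP="10.96.0.12"
sed "s|\$AGENT_SERVICE_CLUSTERIP|$CLUSTER_IP|" \
    "$TMP/webhook-config.yaml.in" > "$TMP/webhook-config.yaml"
cat "$TMP/webhook-config.yaml"
```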
Create a Dynamic Audit Sink
When using dynamic audit sinks, you must create an AuditSink object that directs audit events to the sysdig-agent service.
Sysdig provides a template file that can be used to create the sink.
Download audit-sink.yaml.in.
Run the following to fill in the template file with the ClusterIP address associated with the sysdig-agent service you created, either when installing the agent or in the prerequisite step:
AGENT_SERVICE_CLUSTERIP=$(kubectl get service sysdig-agent -o=jsonpath={.spec.clusterIP} -n sysdig-agent) envsubst < audit-sink.yaml.in > audit-sink.yaml
Apply the following:
kubectl apply -f audit-sink.yaml -n sysdig-agent
Test the Integration
To test that Kubernetes audit events are being properly passed to the
agent, you can do any of the following:
Enable the All K8s Object Modifications policy and create a deployment, service, configmap, or namespace to see if the events are recorded and forwarded.
Enable other policies, such as Suspicious K8s Activity, and test them.
You can use the falco-event-generator Docker image to generate activity that maps to many of the default rules/policies provided in Sysdig Secure. You can run the image via a command line like the following:
docker run -v $HOME/.kube:/root/.kube -it falcosecurity/falco-event-generator k8s_audit
This will create resources in a namespace falco-event-generator.
See also: Using Falco within Sysdig
Secure
and the native Falco
documentation
for more information about this tool.
(BETA) Script to Automate Configuration Changes
As a convenience, Sysdig has created a
script: enable-k8s-audit.sh,
which performs the necessary steps for enabling audit log support for
all Kubernetes distributions described above, except EKS.
You can run it via: bash enable-k8s-audit.sh <distribution>
where
<distribution>
is one of the following:
minishift-3.11
openshift-3.11
openshift-4.2, openshift-4.3
gke
iks
rke-1.13 (implies Kubernetes 1.13)
kops
minikube-1.13 (implies Kubernetes 1.13)
minikube-1.12 (implies Kubernetes 1.11/1.12)
It should be run from the
sysdig-cloud-scripts/k8s_audit_config
directory.
In some cases, it may prompt for the GCE project ID, IKS cluster name,
etc.
For Minikube/Openshift-3.11/Minishift 3.11, it will use ssh/scp
to
copy files to and run scripts on the API master node. Otherwise, it
should be fully automated.
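A typical invocation, run from the directory named above, might look like the following sketch. The repository URL is an assumption (the script and directory names come from this documentation); verify the clone location before use:

```shell
# Fetch the scripts repository (URL assumed; adjust if your copy lives elsewhere)
git clone https://github.com/draios/sysdig-cloud-scripts.git
cd sysdig-cloud-scripts/k8s_audit_config

# Enable audit log support for a GKE cluster
bash enable-k8s-audit.sh gke
```

Substitute gke with any of the supported distribution values listed above.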
7.3 - Threat Detection with AWS CloudTrail
Threat Detection leverages audit logs from AWS CloudTrail plus Falco
rules to detect threats as soon as they occur and bring governance,
compliance, and risk auditing for your cloud accounts.
Deploy Sysdig Secure for cloud on
AWS and choose the
Threat Detection
module to track abnormal and suspicious activities in
your AWS environment. (In the future, cloud Threat Detection will extend
into other environments such as Google and Azure.)
With out-of-the-box Falco
rules, this feature can detect
events such as:
Add an AWS user to a group
Allocate a new elastic IP address to AWS account
Associate an elastic IP Address to an AWS network interface
Attach an Administrator Policy
CloudTrail logging disabled
Create an HTTP target group without SSL
Create an AWS user
Create an internet-facing AWS load balancer
Deactivate MFA for user access
Delete bucket encryption
Put inline policy in a group to allow access to all resources
Usage Steps
Deploy: Deploy Sysdig Secure for cloud on
AWS and choose the
Threat Detection with CloudTrail
option.
Insights becomes
your default landing page in Sysdig Secure.
Review the Events feed for detected activity.
Policies: Check Policies > Runtime Policies
and confirm
that the AWS Best Practices
policy is enabled. This consists
of the most-frequently-recommended rules for AWS and CloudTrail.
You can customize it by creating a new policy of the AWS
CloudTrail
type.

Events: In the Events
feed, search ‘cloud’ to show events
from AWS CloudTrail.

8 - Investigate
With Sysdig Secure On-Premises v4.0, an optional feature has been
introduced called Rapid Response. It enables designated users to remote
connect into a host from within the Sysdig Secure interface. For on-prem
users who enable this functionality, their menu options will differ from
earlier versions and from the SaaS version. This section describes those
options and changes.
With Sysdig Secure SaaS (June, 2021), the Activity Audit and Capture
modules have been moved into Investigate.
On-Prem Overview
If Sysdig Secure On-Prem v.4.0.0 is installed and the Rapid Response
feature flag has been enabled by Sysdig Support, the following
differences will appear in the Sysdig Secure UI for designated users:
Left navigation: Captures
is replaced by Investigate
The Captures feature
is now a subset of the Investigate module, along with the new Rapid
Response feature.

Rapid Response pages: Accessed from the Investigate module, the
Start Session
and Session Log
pages have been added. See Rapid
Response for details.
SaaS Overview
Activity Audit and
Captures features are now
both subsets of the Investigate module. See also: June 9, 2021.
8.1 - Activity Audit
Activity Audit takes the high-value data from captures and makes it
always-on, searchable, and indexed against your cloud-native assets.
This stream includes executed commands, network activity, file activity,
and kube exec
requests to the Kubernetes API.

Understanding How Activity Audit is Used
Activity Audit allows users to view different data sources in-depth for
monitoring, troubleshooting, diagnostics, or to meet regulatory
controls.
Using Activity Audit to Investigate Events
A system investigation may be triggered by an event generated by Sysdig,
or by an alert from another tool or person.
Find contextualized, relevant data: Activity Audit allows easy
access to the underlying data to help trace the event, evaluate its
impact, and resolve the issue.
From Policy Events in Sysdig Secure, jump directly to the
relevant Activity Audit to investigate details.

Trace commands and connections back to users: Activity Audit can
correlate the interactive requests from a Kubernetes user with the
commands and network connections performed inside the container,
allowing an operator to trace this activity back to a user identity.
Using Activity Audit for Regulatory Audits
The Activity Audit can also provide data about the infrastructure to
help prove to auditors that proper data visibility and security measures
are in place. Activity Audit is a critical requisite for many compliance
standards.
Navigate the Audit Interface
Activity Audit displays a continuously updated list of activities. Use
the UI features to find and filter the information you need.
Select Investigate > Activity Audit
to access.

Filtering
Filtering is the heart of Activity Audit’s power. Filters allow you to
search, sort, parse, and surface meaningful data and connections as they
are needed.
Ways to filter Activity Data:
Scope: Reduce the scope of your investigation by focusing on a more specific area of your infrastructure.
By default, your scope will be set to Everywhere, unless a team scope is defined for your currently selected team.
Data Source: Choose a data source from the right side of the graph.
Currently available data sources are:
network activity
, commands
, kubectl exec
, or file
.
You can also select more than one source.
Attribute (=/!=): Choose = or != next to an attribute, either from the list or from the detail view,
to include or exclude that attribute from the filter.
Attribute (manual): If you know the attribute, you can type it
into the filter box manually, with the following syntax:
Include an attribute
attribute_name="attribute_value"
e.g. comm="grep"
Exclude an attribute
attribute_name!="attribute_value"
e.g. comm!="grep"
Trace: Trace entries to see all relevant activity in
that session from that user.
See the Trace Button paragraph for more detailed information.
Frequency graph: Select a section of the graph to zoom in on a time
frame and see detailed activity. More information is available in the Frequency Graph paragraph.
Combine: These methods can be combined as needed.
For example, the filter below surfaces activity on a particular pod,
while excluding activity from one IP address that is known to be
normal.
resource="pods" name="woocommerce-6877958" sourceaddresses!="172.20.41.2"
Frequency Graph
The graph shows the activity frequency for each data source, allowing
users to easily zero in on anomalies.

The image above shows a spike in network activity (purple line) between
12:00 - 3:00 pm.
Drag the mouse over the peak to auto-zoom on the time frame and see more
detail.

Data Sources
Use the legend at the right side of the graph to filter information from one particular
data set. The currently available data sources are:
User commands
Network connections
Kube exec commands
File activities
The Activity Audit feature captures only interactive commands and the
network connections and file modifications related to those commands.
Kube exec
commands, on the other hand, are extracted from the
Kubernetes/OpenShift audit log.
Use the time window navigation bar to show only activities run within
that window.
Activity Row and Details
Select an activity row to see its details on the right panel, including all the collected attributes.
See Review Activity
Details
for the attributes of each data source.
Some attributes allow you to quickly add filters including =
or excluding !=
such a value.

You can also perform quick filtering by selecting attribute and filter type directly from the activity row.

Beside each activity that originates a trace, there is a Trace button.

Such a button is available only for the following events:
- kube exec
- kube attach
- ssh
- dropbear
- shells (*sh)
This feature allows you to correlate activities from the originating session with each individual operation performed.
See Follow a kubectl exec Trace use case to see it in action.
This button does not appear if you are running on a GKE cluster.
kubectl run
The Kubernetes event in the activity audit list labeled kube run is received from the AC in the following cases:
- A kubectl run is performed
- A kubectl attach is performed
Review Activity Details
Command Details
Time | The date and time the command was executed. |
Command | The command executed. |
Full Command Line | The complete command, including all variables/options. |
Working Directory | The directory the command was executed in. |
Scope | The entities within the infrastructure impacted by the command, including |
Host | The hostname and MAC address of the host the command was executed on. |
Additional Details | Detailed user/host information: The Process ID (PID) of the command. The Parent Process ID (PPID) of the command. The user ID of the user that executed the command. The Shell ID.
|
Network Connection Details
Only TCP
or UDP
connections are currently captured in activity
audit.
Time | The date and time of the network connection |
Connection Direction | Incoming or outgoing connection |
Connection Details | Including: |
Scope | The entities impacted by the network connection, including |
Host | The host name and MAC address of the host where the connection was made |
Additional Details | The process name and ID (Parent Process ID/PID) that launched or received the network connection |
Kubectl Exec Details
Time | The date and time of the kubectl command |
Kubernetes resource | Including: resource: The kind of Kubernetes resource affected (currently only pods) name: name of the resource (pod name) subresource: currently exec command: the command executed container: the high-level name in the Kubernetes definition
|
Kubernetes user and group | Including: user: user name performing the kubectl command. Can be either a service account or a human user. groups: groups the user belongs to userAgent: client userAgent
|
Sources addresses | External IP address that initiated the connection |
Scope | Including |
Host | Host name and MAC address of the host where the kubectl exec was made |
File Activity Details
Time | Date and time the file was modified |
File access details | - File name - File directory - Command used to access the file - Access mode |
Scope | Entities impacted by the file activity, including |
Host | The host name and MAC address of the host where the file activity occurred |
Sample Use Cases
Look for Suspicious Commands
During an investigation you usually need to find suspicious activities
among the most normal and recurring ones.
In this example, we have a ps
command being executed very frequently,
being noisy and making investigations harder.
To make it easier to focus on other, more suspicious commands, filters
can be used to reduce noise and simplify the investigation.

In this case, quick filters can be used to exclude the ps
command,
so that other more interesting commands emerge instantly.

Filtering for Incident Response
A Policy Event reports a dangerous peak in network connections coming
from a specific pod. This example describes one way to search for the
root cause.
What user and what activity triggered this issue?
Use the Respond button next to the policy event to jump
directly to the relevant data.

Here one can determine at a glance:
The pod/namespace on which the heightened activity is occurring
The process related to the activity (in this case, ab
, or the
Apache Benchmark tool)
Related activities in the graph (cmd
and kube exec
lines)
Repetitive entries that can be screened out
Refine the view through filtering:
Switch from the network
data source to cmd
and kube exec
.
Filter out noisy, repetitive entries (e.g. comm!="bash"
)
Investigate details of a kube exec item for user information.

After filtering, you have a focused incident report detailing:
The Kubernetes user “johndoe”
The external IP he used to connect
The set of commands he used to install and launch the Apache
Benchmark stress-testing tool.
Follow a kubectl exec Trace
In a production environment, kubectl exec
commands are typically
suspicious. Also, because such commands are interactive sessions, it can
be difficult to pinpoint which individual has issued the command(s) and
what other activities the individual performed. This is where Sysdig’s
Trace functionality comes in, correlating kubectl exec commands with a
specific user and the network and command activities performed in that
user’s session.
In this example, suspicious activity has been detected and you want to
determine whether someone has downloaded and executed a Trojan horse.
Use the Groups to display your Kubernetes hierarchy by namespace and
deployment. Focus on the pod displaying unexpected high levels of
activity (based on the number in parentheses).
Checking the corresponding activity graph, you zero in on a time
frame and see kube exec
activity among the hundreds of commands
and network events.

Select the kube exec
item and click the Trace button on left.
This session trace will display a formatted report of any container
activity (network, commands) that the user performed inside the
container.

This button does not appear if you are running on a GKE cluster.
8.2 - Captures
Sysdig capture files contain system calls and other OS events that can
be analyzed with either the open-source sysdig
or csysdig
(curses-based) utilities, and are displayed in the Captures module.
The Captures module contains a table listing the capture file name, the
host it was retrieved from, the time frame, and the size of the capture.
When the capture file status is uploaded, the file has been successfully
transmitted from the Sysdig agent to the storage bucket, and is
available for download and analysis.
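Once downloaded, a capture can be opened with the open-source tools mentioned above. The following is a hedged sketch; the capture file name is illustrative:

```shell
# Replay all events from a downloaded capture file
sysdig -r mycapture.scap

# Replay only events from a specific process (sysdig filter syntax)
sysdig -r mycapture.scap proc.name=bash

# Browse the capture interactively with the curses-based UI
csysdig -r mycapture.scap
```

The same files can also be opened in Sysdig Inspect, as described under Review Capture Files below.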
Due to the nature and quantity of data collected, the Sysdig agent is
limited to recording one capture at a time per host. If
multiple policies, each configured to create a capture, are triggered at
the same time on the same host, only the first event will be able to
store the captures. Additional attempts to create captures will result
in the error “Maximum number of outstanding captures (1) reached”. This
is also true of overlapping captures, often caused by long capture
settings.
This section describes how to create capture files in Sysdig Secure.
This feature is available in the Enterprise tier of the Sysdig product. See https://sysdig.com/pricing for details, or contact sales@sysdig.com.
If upgrading from Essentials to Enterprise, users must go to Settings > Teams > <Your Team>
and check the Enable Captures
box. They must then log out and log in again. See also: User and Team
Administration.
From June, 2021, the Captures module has moved under the
Investigate menu in the nav bar.
Store Capture Files
Sysdig capture files are stored in Sysdig’s AWS S3 storage (for SaaS
environments), or in the Cassandra DB (for on-premises environments) by
default.
Create a Capture File
Capture files can be created in Sysdig Secure either by configuring them
as part of a policy, or by manually creating them from the Captures
module.
For more information on creating a capture as part of a policy, see
Manage Policies.
To create a capture file manually:
From the Captures
module, click the Take Capture
button to open
the capture creation window.

Define the name of the capture.
Configure the host and container the capture file should record
system calls from.
Define the duration of the capture. The maximum length is 300
seconds (five minutes).
Click the Start
button.
The Sysdig agent will be signaled to start a capture and send back the
resulting trace file. The file will then be displayed in the Captures
module.
Delete a Capture File
From the Captures
module, select the capture file(s) to be
deleted.
Click the Delete
(trash can) icon:

Click the Yes
(tick) icon to confirm deleting the capture, or the
No
(cross) icon to cancel.
Review Capture Files
Review the Capture File with Sysdig Inspect
To review the capture file in Sysdig Inspect:
From the Captures
module, select the capture file to be reviewed.
Click the Inspect (Sysdig logo) icon to open Sysdig
Inspect in a new browser
tab:

See also: Quick Menu to Captures from Runtime
Events.
Download a Capture File
To download a capture file:
From the Captures
module, select the target capture file.
Click the Download
icon to download the capture file.

The capture file will now be downloaded to the local machine.
Disable Capture Functionality
Sometimes, security requirements dictate that capture functionality
should NOT be triggered at all (for example, PCI compliance for payment
information).
To disable Captures altogether, edit the agent configuration file as
described in Disable
Captures.
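As a reference, the agent-side change is a single setting in the agent configuration file. The key name below reflects the documented Sysdig agent option; verify it against the Disable Captures documentation for your agent version:

```
# dragent.yaml fragment - disable capture functionality entirely
sysdig_capture_enabled: false
```

After editing the file, restart the agent for the change to take effect.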
8.3 - Rapid Response
Overview
With Rapid Response, Sysdig has introduced a way to grant designated
Advanced Users in Sysdig Secure the ability to remote connect into a
host directly from the Event stream and execute desired commands there.
Finding the team or developer responsible for an application in a cloud
or Kubernetes environment can take hours or days. Troubleshooting a live
issue or security event may require faster investigation, to lower the
MTTR of these events.
Rapid Response allows security teams to connect to a remote shell within
your environment to start troubleshooting and investigating an event
using the commands they are already accustomed to, with the flexibility
they need to run the security tools at their disposal, directly from the
event alert.
Process Overview
Install: Install and configure the Rapid Response container on
- Sysdig Secure on-premises v4.0+ or
- Sysdig Secure SaaS
Configure Teams: Create or configure team(s) of Sysdig Secure Advanced
Users who should have Rapid Response privileges
Use: Team members log in and manage workloads using Rapid Response
shell.
Check logs Review session logs to keep track of what has been done using Rapid Response.
Rapid Response team members have access to a full shell from within the
Sysdig Secure UI. Responsibility for the security of this powerful
feature rests with you: your enterprise and your designated employees.
Suppose you have an existing team called CustomerResponse
with 40
members and you’d like five of those users to be granted Rapid Response
capabilities. You could create a team called, e.g., CustomerResponse_RR
and add the five designated Advanced Users to it.
Create a team or teams, as described
here
Add
users,
assigning them the Advanced User
role.
Check the Rapid Response additional permission checkbox
Alternatively, to enable Rapid Response on an existing team, go to Settings > Teams
and choose the applicable team. On the resulting Edit Teams
page, select the
Rapid Response
checkbox in the additional permissions section and click Save
.

Usage
There are two points of entry to the Rapid Response feature:
In either case, the user will be prompted to enter a 2FA authentication
code generated by Sysdig and sent to the user by email. After entering both that and the password configured for the host, a shell will be spawned.
Launch Session from Investigate
Log in to the Sysdig Secure UI as a Rapid Response team member.
Select Investigate > Start Rapid Response
.

Select the host as prompted and click Start Session.

Enter the password for that host.
Enter the 2FA code that was emailed to your user address and click
Confirm
.

Begin your session. You can dock the terminal window at the bottom
or right panel of your page, or as a separate screen.
Launch Session from Events Detail
Log in to the Sysdig Secure UI as a Rapid Response team member.
Select Events
and choose an event from the list to open the detail
pane. Click Respond: Launch Rapid Response.

Enter the 2FA code that was emailed to your user address and click
Confirm
.

Begin your session. You can dock the terminal window at the bottom
or right panel of your page, or as a separate screen.
Manage Rapid Response Logs
When reviewing the logs, you can download log sessions that have been
completed, or close sessions that are live, if needed.
The logs visible to the user depend on the team and role under which
they are logged in. Administrators will see the entire log list.
Review Session Log Info
The Session Log list includes the session initiator, the timestamp, and
the host name accessed.

If the session has been closed, the content of the session (input and
output) can be downloaded from the UI as an OpenSSL-compatible gzip
encrypted file.
To open the file, use the following command, where session-file is the
name of the downloaded file and <password> is the password you set up
for that host during the installation.
Note: OpenSSL >= 1.1.1 is required to run this command.
gzip -dc session-file | openssl enc -d -aes-256-ctr -pbkdf2 -k <password>
Note: this feature requires that a custom S3 storage bucket has been properly configured.
Close an Active Session
Any Rapid Response team member can review the Session Log list and close
any active session by clicking the Close
link.
9 - Integrations for Sysdig Secure
The Integrations menu option in Sysdig Secure provides quick-link access to both inbound data sources and outbound integrations such as notification channels and S3 captures.

Inbound
Data Sources: Cloud Accounts and Kubernetes Clusters
Log in to Sysdig Secure and choose Integrations > Cloud Accounts
or Integrations > Kubernetes Cluster
to review the status of your cloud accounts.
Outbound
S3 Capture Storage Use Integrations > Outbound | S3 Capture Storage
as a quick link to that page in Settings.
Notification Channels Integrations > Outbound | Notification Channels
gives a quick link to configure the notification channels in Sysdig Secure. (Sysdig Monitor notification channels must be configured separately and are accessed from the Monitor UI.)
Extensions and Levels of Support
“Integrations” for Sysdig Secure can include a wide range of tools and
software designed to connect Secure functionality (e.g., image scanning,
event handling, audit logging, and risk analysis) with other systems.
Some such tools are installed with the backend. Others are not, because
they exist to accommodate specific use cases, infrastructure details, or
additional customizations.
These added tools are called “extensions,” and it is up to the user to
decide which extensions to install on top of the core backend
functionality.
There are two different categories of extensions depending on the
support level and backward-compatibility guarantees:
Preview features - These are pre-release features for which
Sysdig is seeking early feedback from users. If you’re interested in
trying these items, we will connect you directly with our
product/engineering teams. Depending on the level of engagement with
a preview, Sysdig will decide whether to deprecate it or promote it
to an officially supported extension or feature.
Fully supported extension features - These extensions are
installed outside the core Sysdig product and leverage Sysdig APIs,
but they are fully supported at the same level as any other core
product feature.
Features that are delivered with the core product are designated as
“built-in” and always receive full support.
Sysdig delivers many other code examples and integrations as blog
content, webinars, whitepapers, etc. Any code snippet or integration
that is not explicitly listed in the tables above is not officially
supported and is merely illustrative of a particular feature or
capability.
Types of Secure Integrations
Image scanning functionality can be integrated into the CI/CD pipeline
and with container registries. Kubernetes logs can be integrated from a
variety of platforms and distributions. Events can be forwarded to
various external processing systems.
Fully supported Extensions are marked with E. Preview
features are marked with P.
Developer Tools:
Admission Controller P - for image scanning
IBM Cloud Pak for Multicloud Management E - full integration guide
9.1 - Data Sources
Data sources, grouped under Integrations
in Sysdig Secure, provide an overview of inbound, outbound, and third-party data integrations.

9.1.1 - Cloud Accounts
If you connect a cloud account using Sysdig Secure for cloud, you can review the details on this page and connect additional accounts as needed.
Review Data Sources
Access the Page
Log in to Sysdig Secure and select Integrations > Data Sources | Cloud Accounts
from the navigation bar.
The Cloud Accounts overview is displayed.
Review Cloud Accounts

Use the Cloud Accounts overview to:
- Confirm that the incoming data sources you expected are present
- Get an overview of the status
- Check whether managed clusters in the accounts were detected and whether an agent was installed with them.
The page lists:
Platform:
AWS, GCP, Azure
Account ID:
The AWS Account ID, GCP Project ID, or Azure Subscription ID
Alias:
As defined when connected
Region(#):
Each account may be deployed in multiple regions; click on a numbered entry to expand and view all the regions.
Date Added:
Date the account was added to Sysdig Secure
Date Last Seen:
Date of last observed activity on the account/region.
Clusters Connected (x/y):
This displays the number of managed clusters detected in the account/region (y)
and the number of clusters with at least one agent installed (x)
.
For example:
0/0 = no clusters contain an agent, no clusters detected
1/17 = 1 cluster contains an agent, 17 total clusters detected
Connect Account
To connect a cloud account, click Connect Account
and select the appropriate cloud provider (AWS | GCP | Azure
), then follow the installation pop-up wizard.

See also: Installation | Sysdig Secure for Cloud
9.1.2 - Managed Kubernetes
Review Managed Kubernetes
From the Managed Kubernetes tab you can review cluster details of detected cloud accounts and instrument a cluster if needed.

Filtering Actions
You can:
- Search by keyword
- Filter by platform or account number
- Sort by Status, Cluster Name, Account ID, or Region
Use Instrumentation Modal
For un-instrumented clusters detected on an account, the modal under More
helps speed the instrumentation process.
Click Instructions to Instrument
. The instrumentation popup is displayed, with your access key and cluster-specific data prefilled.

Follow the two-step procedure to generate the kubeconfig and install the agent.
OR
Click Copy Script to Instrument
to get both parts in a single script you can deploy.
9.1.3 - Sysdig Agents
This page shows all of the Sysdig Agents that have reported into the Sysdig backend, and enables the user to quickly determine:
- Which agents are up-to-date, out of date, or approaching being out of date
- Which managed clusters have been detected in your cloud environment, but have not yet been instrumented with the Sysdig agent
The feature is in Technology Preview status; additional functionality and refined workflows will continue to be added.
Review Environment
Select Integrations > Data Sources | Sysdig Agents
.

The resulting page shows all detected nodes in your environment and the status of any agents installed on them. The view shows nodes detected from previously installed agents on hosts and from connected cloud accounts.
You can:
- See at a Glance: Quickly identify where agents are installed: by node, cluster name, and/or cloud account ID
- Know the Status: Check agent connection status and age
- Search or Filter: Narrow the view by searching or filtering on node name, cluster name, Account ID, agent version, or agent Status
- Agent Count: View your total connected agent count over time
- Install or Troubleshoot: Link to quick steps for adding an agent or troubleshooting disconnected nodes
Understand Agent Status
Status | Description | Notes |
---|
Never Connected | Cloud Accounts only. Detects nodes in a managed cluster in a cloud account connected to Sysdig, where an agent has not been deployed | Hover over the status to link to the Helm-based agent install instructions. |
Up to date | Your agent version is up to date. | |
Out of date | Deprecated agent version. Agent support is provided for the last three minor version releases. | Hover over the status for information on upgrading the agent. |
Almost out of date | On the next agent release, this agent will be deprecated. Agent support is provided for the last three minor version releases. | Hover over the status for information on upgrading the agent. |
Disconnected | A Sysdig agent on a registered Kubernetes node lost connection to Sysdig. | Hover over the status for information on how to troubleshoot an agent installation |
Options to Add Agent
Integrations > Data Sources | Sysdig Agents
and select Add Agent
.

Select whether to connect to a Kubernetes
cluster, Linux
, or Docker
, and follow the installation pop-up instructions.

See also: Agent Installation.
9.2 - Risk Spotlight Integrations (Controlled Availability)
Sysdig is developing a simplified way to integrate third-party tools with Effective Vulnerability Exposure (EVE), the technology behind Sysdig’s Risk Spotlight feature.
About Risk Spotlight
Risk Spotlight is based on Effective Vulnerability Exposure (EVE for short), a new technology developed by Sysdig that combines the observed runtime behaviour of a particular container image with vulnerabilities detected in its software packages. This combination is used to determine which packages are effectively loaded during execution and thus are a more direct security threat for your infrastructure.
Prioritizing the vulnerabilities which represent an actual risk to the organization is one of the most critical aspects of a successful vulnerability management program. Images often contain hundreds of vulnerabilities. Multiply this by the number of workloads running in any non-trivial infrastructure deployment and it is easy to see that the total number of potential vulnerabilities to fix is very large.
There are many prioritization criteria that are commonly used and accepted to start filtering the list (Severity and CVSS scoring, Exploitability metrics, Runtime scope and other environment considerations, etc). EVE is a new criterion, completely supported by observed runtime behaviour, to add to the vulnerability management tool belt that can considerably reduce the working set of vulnerabilities that need to be addressed as a priority.
Technology Overview
The Sysdig Agent components deployed for every instrumented node (host) continuously observe the behaviour of runtime workloads. Some of the information collected includes:
- Image runtime behavior profile: accessed files, processes in execution, system calls, etc. See Profiling for details.
- The ‘Bill Of Materials’ associated with container images used by runtime containers, including used packages and versions and the vulnerabilities matched by those.
By correlating these two pieces of information, Sysdig can differentiate between packages merely installed in the image vs the ones that are loaded at execution time. This information is then propagated to vulnerabilities information.
Enabling the Feature
Package Types Currently Supported
- Debian (except Distroless) (deb)
- Alpine (apk)
- RHEL (rpm)
- Ubuntu (deb)
- Amazon Linux
- Java (Maven)
- Python (PyPi)
- NPM (JS)
- Golang (built with Go 1.13+)
Package Types Currently NOT Supported
- Composer (PHP)
- Cargo (Rust)
- Ruby Gems
- NuGet
Currently supported Kubernetes container runtimes:
How to Integrate
At this time, Snyk is using an “in-cluster” integration model that will be deprecated and migrated to the new API-based integration. For now, the token mechanism does not apply to the Snyk integration process.
Generate a Token for the Integration
Select Integrations > 3rd Party | Risk Spotlight Integration.
The Risk Spotlight Integration page is displayed, with a list of existing tokens and their expiry dates.
Click +Add Token.

Fill in the attributes and click Create Token.
- Name: Choose a name that indicates the integration with which the token is associated
- Expiration: Select an expiration date (1/3/6 months; 1 year)
Copy the new token as it is displayed in the list.
Store the token in a safe place; it will not be visible or recoverable again.
To renew a token at any time, click the Renew button, reset the expiry, and confirm.
To delete a token, click the X beside the token name and confirm. This action will sever the integration between Sysdig and the third-party tool.
9.2.1 - Integrate Effective Vulnerability Exposure with Snyk
Integration with Snyk Overview
The Snyk vulnerability management workflow can consume runtime EVE information to filter and prioritize detected vulnerabilities, following an approach similar to the one described in Risk Spotlight Integrations.
To integrate Sysdig EVE information with Snyk vulnerability management workflows:
- Have an account and working license to use both products: Snyk, Sysdig Secure
- Instrument the target runtime nodes using both products: Snyk, Sysdig Secure
- Have your Sysdig commercial contact explicitly enable Sysdig EVE for your Sysdig account. In particular, your account needs the feature flags for:
- Image Profiling
- Scanning v2 EVE
- Scanning v2 EVE integration
Both Snyk and Sysdig instrumentation must be in place. Choose the installation path below that corresponds to the components already installed on your infrastructure.
Installation Instructions
Snyk Installed, Sysdig Not Installed
Note the namespace you are currently using to run the Snyk instrumentation (default: snyk-monitor). You will need it to copy the secret in the last step.
Use the sysdig-deploy helm chart to install the Sysdig agent bundle. Provide the mandatory parameters and enable the eve and eveConnector parameters.
Example:
helm install --namespace sysdig-agent sysdig-agent \
....other parameters...
--set nodeAnalyzer.nodeAnalyzer.runtimeScanner.deploy=true \
--set nodeAnalyzer.nodeAnalyzer.runtimeScanner.eveConnector.deploy=true \
--set nodeAnalyzer.nodeAnalyzer.runtimeScanner.settings.eveEnabled=true \
sysdig/sysdig-deploy
Make sure the Sysdig agent, RuntimeScanner, and EveConnector pods are running and healthy:
kubectl -n sysdig-agent get po
NAME READY STATUS RESTARTS AGE
sysdig-agent-8rmkt 1/1 Running 0 24s
sysdig-agent-eveconnector-api-74767bbf54-lw97g 1/1 Running 0 23s
sysdig-agent-hprw7 1/1 Running 0 24s
sysdig-agent-jrx2q 1/1 Running 0 24s
sysdig-agent-node-analyzer-5hltb 4/4 Running 0 24s
sysdig-agent-node-analyzer-b5ftm 4/4 Running 0 24s
sysdig-agent-node-analyzer-cd8rc 4/4 Running 0 24s
Copy the Sysdig Secret into the Snyk namespace.
It can take up to an hour for the data to initialize and the initial profiles to be sent; after that, you should be able to leverage EVE data in Snyk vulnerability management workflows.
Sysdig Installed without EVE, Snyk Not Installed
If you already installed the Sysdig agent using the helm chart without enabling the eve and eveConnector parameters, do the following:
Install Snyk instrumentation following its documentation.
Upgrade the sysdig-deploy helm chart with the required eve settings:
helm upgrade sysdig-agent \
--namespace sysdig-agent \
--reuse-values \
--set nodeAnalyzer.nodeAnalyzer.runtimeScanner.deploy=true \
--set nodeAnalyzer.nodeAnalyzer.runtimeScanner.eveConnector.deploy=true \
--set nodeAnalyzer.nodeAnalyzer.runtimeScanner.settings.eveEnabled=true \
sysdig/sysdig-deploy
No Sysdig, No Snyk
- Install the Sysdig agent bundle using the official helm chart, and including the steps and parameters from the first installation scenario.
- Install Snyk instrumentation following its documentation.
- Copy the Sysdig Secret into the Snyk namespace.
Copy the Sysdig Secret
Once both Sysdig and Snyk instrumentation are deployed and healthy, you need to copy the secret that was automatically generated in the Sysdig namespace to the Snyk namespace:
Assuming the default namespace names for Sysdig (sysdig-agent) and Snyk (snyk-monitor), replace with your specific values:
kubectl get secret -n sysdig-agent sysdig-eve-secret -o json | jq '{ "apiVersion": .apiVersion, "kind": .kind, "type": .type, "metadata": { "name": .metadata.name }, "data": .data }' | kubectl apply -n snyk-monitor -f -
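The jq filter above simply strips the namespace-specific metadata (namespace, resourceVersion, uid) so the Secret can be re-applied cleanly in the Snyk namespace. The same transformation, sketched in Python for clarity:

```python
# Sketch of what the jq filter above does: keep only the portable fields of
# the Secret so it can be applied in another namespace. Field values below
# are made up for illustration.

def portable_secret(secret):
    return {
        "apiVersion": secret["apiVersion"],
        "kind": secret["kind"],
        "type": secret["type"],
        "metadata": {"name": secret["metadata"]["name"]},
        "data": secret["data"],
    }

original = {
    "apiVersion": "v1",
    "kind": "Secret",
    "type": "Opaque",
    "metadata": {
        "name": "sysdig-eve-secret",
        "namespace": "sysdig-agent",   # namespace-specific, dropped
        "resourceVersion": "12345",    # cluster-specific, dropped
        "uid": "0000-0000",            # cluster-specific, dropped
    },
    "data": {"token": "c2VjcmV0"},
}

print(portable_secret(original)["metadata"])  # only the name survives
```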
Check Integration in Snyk UI
Check to confirm that runtime vulnerabilities are detected and prioritized in the Snyk UI:

9.3 - IBM Cloud Pak for Multicloud Management
IBM Cloud Pak for Multicloud Management centralizes visibility,
governance, and automation for containerized workloads across clusters
and clouds into a single dashboard. One of the key capabilities of the
product is the centralization of security findings to help cloud team
administrators understand, prioritize, manage and resolve security
issues that are related to their cloud applications and workloads.
The integration of Sysdig Secure with IBM Cloud Pak for Multicloud
Management extends the depth of security intelligence available with:
Container image vulnerability management and configuration
validation
Runtime security with prevention, threat detection, and mitigation
Incident response and forensics
Compliance and audit
Sysdig Secure increases IBM Cloud Pak for Multicloud Management
compliance capabilities to help meet regulatory requirements like NIST,
PCI, GDPR, or HIPAA. By deploying the products together, users can
extend container security to prevent vulnerabilities, stop threats,
accelerate incident response, and enable forensics.
The integration involves several components, each of which is installed
and configured separately.
Users of IBM Cloud Pak for Multicloud Management can follow the
Installation Integration Guide
to install and configure:
The Sysdig agent
Event forwarding integration
Single sign-on (SSO) integration via OpenID Connect
Navigation menu shortcut integration
10 - Sysdig Secure for cloud
Sysdig Secure for cloud is the software that connects Sysdig Secure features to your cloud environments to provide unified threat detection, compliance, forensics, and analysis.
Because modern cloud applications are no longer just virtualized compute
resources, but a superset of cloud services on which businesses depend,
controlling the security of your cloud accounts is essential. Errors can
expose an organization to risks that could bring resources down,
infiltrate workloads, exfiltrate secrets, create unseen assets, or
otherwise compromise the business or reputation. As the number of cloud
services and configurations available grows exponentially, using a cloud
security platform protects against having an unseen misconfiguration
turn into a serious security issue.
Supported Clouds
Features
Installation
Setup options, details, troubleshooting, and validation steps for the various cloud vendors under Installations
10.1 - AWS
This section describes the Sysdig Secure for cloud offering for AWS.
Check setup options, details, troubleshooting, and validation steps under Installations - Cloud - AWS
Available Features
- Threat detection based on auditing CloudTrail events
- Compliance Security Posture Management (CSPM), including CIS AWS
Benchmark compliance assessments
- Container registry scanning for ECR
- Image scanning for Fargate on ECS
- Permissions and Entitlements management (CIEM)

Threat Detection Based on CloudTrail
Threat Detection leverages audit logs from AWS CloudTrail plus Falco
rules to detect threats as soon as they occur and bring governance,
compliance, and risk auditing for your cloud accounts.
A rich set of Falco rules, an AWS Best Practices default policy, and
an AWS CloudTrail policy type for creating customized policies are
included. These correspond to security standards and benchmarks such as:
NIST 800-53, PCI DSS, SOC 2, MITRE ATT&CK®, CIS AWS, and AWS
Foundational Security Best Practices
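At its core, this kind of detection matches fields of a CloudTrail audit record against a rule condition. A simplified sketch (plain Python, not actual Falco rule syntax or Sysdig's rule set) for the "CloudTrail Logging Disabled" case:

```python
# Simplified sketch of CloudTrail-based detection (not real Falco syntax or
# Sysdig's rule set): match fields of an audit record against a condition.

def cloudtrail_logging_disabled(event):
    """Fires on the StopLogging API call, which turns off CloudTrail logging."""
    return (event.get("eventSource") == "cloudtrail.amazonaws.com"
            and event.get("eventName") == "StopLogging")

event = {
    "eventSource": "cloudtrail.amazonaws.com",
    "eventName": "StopLogging",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
}

print(cloudtrail_logging_disabled(event))  # True -> raise a policy event
```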

CSPM/Compliance with CIS AWS Benchmarks
A new cloud compliance standard has been added to the Sysdig compliance
feature - CIS AWS Benchmark. This assessment is based on an
open-source engine - Cloud Custodian - and is an initial release of
Sysdig Cloud Security Posture Management (CSPM) engine. This first
Sysdig cloud compliance standard will be followed by additional security
compliance and regulatory standards for GCP, IBM Cloud and Azure.
The CIS AWS Benchmarks assessment evaluates your AWS services against
the benchmark requirements and returns the results and remediation
activities you need to fix misconfigurations in your cloud environment.
We’ve also included several UI improvements to provide additional
details such as: control descriptions, affected resources, failing
assets, and guided remediation steps, both manual and CLI-based when
available.

ECR Registry Scanning
ECR Registry Scanning automatically scans all container images pushed to
all your Elastic Container Registries, so you have a vulnerability
report available in your Sysdig Secure dashboard at all times, without
having to set up any additional pipeline.
An ephemeral CodeBuild pipeline is created each time a new image is
pushed, which executes an inline scan based on your defined scan
policies. Default policies cover vulnerabilities and dockerfile best
practices, and you can define advanced rules yourself.
Fargate Image Scanning on ECS
Fargate Image Scanning automatically scans any container image deployed
on a serverless Fargate task that runs on Elastic Container Service. This
includes public images that live in registries other than ECR, as well
as private ones for which you set the credentials.
An ephemeral CodeBuild pipeline is automatically created when a
container is deployed on ECS Fargate to execute the inline scan.
Identity and Access Management
As cloud accounts proliferate, excessive permissions can become a security risk and a management headache. Sysdig Secure for cloud provides a Permissions and Entitlements module under Posture, that allows you to:
- Gain visibility into all cloud identities and their privileges: get a comprehensive view into access permissions across all AWS users and services
- Enforce least privilege: eliminate excessive permissions by applying least-privilege policies to users and services with automatically generated IAM policies. Sysdig proposes policies based on analyzing which entitlements are granted versus which are actually used.
- Simplify audit of access controls to meet compliance requirements: use reports for regular access reviews to evaluate active and inactive user permissions and activity.
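The granted-versus-used analysis behind least-privilege suggestions can be sketched as a set difference (hypothetical data, not Sysdig's CIEM engine):

```python
# Sketch of the granted-vs-used analysis behind least-privilege policy
# suggestions (hypothetical data, not Sysdig's CIEM engine).

granted = {"s3:GetObject", "s3:PutObject", "s3:DeleteBucket", "iam:CreateUser"}
used    = {"s3:GetObject", "s3:PutObject"}   # observed in account activity

unused = granted - used   # entitlements granted but never exercised

least_privilege_policy = {
    "Effect": "Allow",
    "Action": sorted(used),   # proposed policy keeps only what was used
    "Resource": "*",
}

print(sorted(unused))  # candidates for removal
```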

10.1.1 - CloudTrail Falco rules
APPRUNNER: 4 rules
AUTOSCALING: 2 rules
CLOUDSHELL: 1 rule
CLOUDTRAIL: 7 rules
CLOUDWATCH: 3 rules
CONFIG: 19 rules
CONSOLE: 3 rules
DMS: 1 rule
EBS: 1 rule
EC2: 20 rules
ECR: 1 rule
ECS: 8 rules
ECS EXEC: 3 rules
EFS: 1 rule
ELASTICSEARCH: 2 rules
ELB: 4 rules
FARGATE: 8 rules
GUARDDUTY: 6 rules
IAM: 39 rules
KMS: 5 rules
LAMBDA: 6 rules
RDS: 13 rules
ROUTE53: 3 rules
S3: 14 rules
SAGEMAKER: 1 rule
SECRETSMANAGER: 1 rule
SECURITYHUB: 9 rules
VPC: 14 rules
WAF: 2 rules
OTHER: 2 rules
Total: 189 rules.
APPRUNNER
Create App Runner Service from Code Repository
Detect the building and deployment of an App Runner service from a code repository.
cloud aws aws_apprunner
Create App Runner Service from Image Repository
Detect the deployment of an App Runner service from an image repository.
cloud aws aws_apprunner
Delete App Runner Service
Detect the deletion of an App Runner service.
cloud aws aws_apprunner
Deploy App Runner Service
Detect the deployment of an App Runner service.
cloud aws aws_apprunner
AUTOSCALING
Create Autoscaling Group without ELB Health Checks
Detect the creation of an autoscaling group associated with a load balancer which is not using health checks.
cloud aws aws_autoscaling
Update Autoscaling Group without ELB Health Checks
Detect the update of an autoscaling group associated with a load balancer which is not using health checks.
cloud aws aws_autoscaling
CLOUDSHELL
CloudShell Environment Created
Detect creation of a new CloudShell environment.
cloud aws aws_cloudshell
CLOUDTRAIL
CloudTrail Trail Created
Detect creation of a new trail.
cloud aws aws_cloudtrail mitre_TA0009-collection mitre_T1530-data-from-cloud-storage-object
CloudTrail Trail Deleted
Detect deletion of an existing trail.
cloud aws aws_cloudtrail mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
CloudTrail Logfile Encryption Disabled
Detect disabling the CloudTrail logfile encryption.
cloud aws aws_cloudtrail
CloudTrail Logfile Validation Disabled
Detect disabling the CloudTrail logfile validation.
cloud aws aws_cloudtrail
CloudTrail Logging Disabled
CloudTrail logging has been disabled; this could be potentially malicious.
cloud aws aws_cloudtrail mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
CloudTrail Multi-region Disabled
Detect disabling CloudTrail multi-region.
cloud aws aws_cloudtrail
CloudTrail Trail Updated
Detect update of an existing trail.
cloud aws aws_cloudtrail mitre_TA0009-collection mitre_TA0040-impact mitre_T1492-store-data-manipulation mitre_T1530-data-from-cloud-storage-object
CLOUDWATCH
CloudWatch Delete Alarms
Detect deletion of an alarm.
cloud aws aws_cloudwatch mitre_TA0005-defense-evasion mitre_T1066-indicator-removal-from-tools
CloudWatch Delete Log Group
Detect deletion of a CloudWatch log group.
cloud aws aws_cloudwatch mitre_TA0040-impact mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools mitre_T1485-data-destruction
CloudWatch Delete Log Stream
Detect deletion of a CloudWatch log stream.
cloud aws aws_cloudwatch mitre_TA0040-impact mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools mitre_T1485-data-destruction
CONFIG
Delete Config Rule
Detect deletion of a configuration rule.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Delete Configuration Aggregator
Detect deletion of the configuration aggregator.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Delete Configuration Recorder
Detect deletion of the configuration recorder.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Delete Conformance Pack
Detect deletion of a conformance pack.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Delete Delivery Channel
Detect deletion of the delivery channel.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Delete Organization Config Rule
Detect deletion of an organization config rule.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Delete Organization Conformance Pack
Detect deletion of an organization conformance pack.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Delete Remediation Configuration
Detect deletion of a remediation configuration.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Delete Retention Configuration
Detect deletion of the retention configuration, with details about the retention period (number of days) for which AWS Config stores historical information.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Put Config Rule
Detect addition or update of an AWS Config rule.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Put Configuration Aggregator
Detect creation and update of the configuration aggregator with the selected source accounts and regions.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Put Conformance Pack
Detect creation or update of a conformance pack.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Put Delivery Channel
Detect creation of a delivery channel.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Put Organization Config Rule
Detect addition or update of an AWS Organization Config rule.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Put Organization Conformance Pack
Detect deployment of conformance packs across member accounts in an AWS Organization.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Put Remediation Configurations
Detect addition or update of the remediation configuration associated with a specific AWS Config rule and the selected target or action.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Put Remediation Exceptions
Detect addition of a new exception or update of an existing exception for a specific resource with a specific AWS Config rule.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Put Retention Configuration
Detect creation or update of the retention configuration, with details about the retention period (number of days) for which AWS Config stores historical information.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Stop Configuration Recorder
Detect stopping the configuration recorder.
cloud aws aws_config mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
CONSOLE
Console Login Through Assume Role
Detect a console login through Assume Role.
cloud aws aws_console aws_iam
Console Login Without MFA
Detect a console login without MFA.
cloud aws aws_console aws_iam
Console Root Login Without MFA
Detect root console login without MFA.
cloud aws aws_console aws_iam mitre_TA0040-impact mitre_T1531-account-access-removal
DMS
Create Public DMS Replication Instance
Detect creation of a public DMS replication instance.
cloud aws aws_dms
EBS
EBS Volume Creation without Encryption at Rest
Detect creation of an EBS volume without encryption at rest enabled.
cloud aws aws_ebs
EC2
Allocate New Elastic IP Address to AWS Account
Detect that a public IP address has been allocated to the account.
cloud aws aws_ec2
Associate Elastic IP Address to AWS Network Interface
Detect that a public IP address has been associated with a network interface.
cloud aws aws_ec2
Authorize Security Group Egress
Detect addition of the specified egress rules to a security group.
cloud aws aws_ec2 mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-tools
Authorize Security Group Ingress
Detect addition of the specified ingress rules to a security group.
cloud aws aws_ec2 mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-tools
Create Snapshot
Detect creation of an EBS volume snapshot, which is stored in Amazon S3.
cloud aws aws_ec2
Delete Subnet
Detect deletion of the specified subnet.
cloud aws aws_ec2 mitre_TA0040-impact mitre_T1485-data-destruction
Describe Instances
Detect description of the specified EC2 instances or all EC2 instances.
cloud aws aws_ec2
Disable EBS Encryption by Default
Detect disabling EBS encryption by default for an account in the current region.
cloud aws aws_ec2 mitre_TA0040-impact mitre_T1492-store-data-manipulation
Make EBS Snapshot Public
Detect making public an EBS snapshot.
cloud aws aws_ec2
EC2 Serial Console Access Enabled
Detect EC2 Serial Console access enabled in the account for a specific region.
cloud aws aws_ec2
Get Password Data
Detect retrieval of the encrypted administrator password for a running Windows instance.
cloud aws aws_ec2 mitre_TA0003-persistence mitre_T1108-redundant-access
Modify Image Attribute
Detect modification of the specified attribute of the specified AMI.
cloud aws aws_ec2 mitre_TA0010-exfiltration
Modify Snapshot Attribute
Detect addition or removal of permission settings for the specified EC2 snapshot.
cloud aws aws_ec2 mitre_TA0010-exfiltration mitre_T1537-transfer-data-to-cloud-account
Replace Route
Detect replacing an existing route within a route table in a VPC.
cloud aws aws_ec2 mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-tools
Revoke Security Group Egress
Detect removal of the specified egress rules from a security group.
cloud aws aws_ec2 mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-tools
Revoke Security Group Ingress
Detect removal of the specified ingress rules from a security group.
cloud aws aws_ec2 mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-tools
Run Instances in Non-approved Region
Detect launching of a specified number of instances in a non-approved region.
cloud aws aws_ec2
Run Instances with Non-standard Image
Detect launching of a specified number of instances with a non-standard image.
cloud aws aws_ec2
Run Instances
Detect launching of a specified number of instances.
cloud aws aws_ec2
Delete Cluster
Detect deletion of the specified cluster.
cloud aws aws_ec2 mitre_TA0040-impact mitre_T1485-data-destruction
ECR
ECR Image Pushed
Detect a new image has been pushed to an ECR registry.
cloud aws aws_ecr
ECS
ECS Service Created
Detect a new service is created in ECS.
cloud aws aws_ecs aws_fargate
ECS Service Deleted
Detect a service is deleted in ECS.
cloud aws aws_ecs aws_fargate
Execute Interactive Command inside an ECS Container
Detect execution of an interactive command inside an ECS container.
cloud aws aws_ecs aws_ecs_exec aws_fargate soc2_CC6.1 mitre_TA0002-execution mitre_T1059-command-and-scripting-interpreter
Execute Command inside an ECS Container
Detect execution of a command inside an ECS container.
cloud aws aws_ecs aws_ecs_exec aws_fargate soc2_CC6.1 mitre_TA0002-execution
ECS Task Run or Started
Detect a new task is started in ECS.
cloud aws aws_ecs aws_fargate
ECS Task Stopped
Detect a task is stopped in ECS.
cloud aws aws_ecs aws_fargate
Terminal Shell in ECS Container
A terminal shell has been executed inside an ECS container.
cloud aws aws_ecs aws_ecs_exec aws_fargate soc2_CC6.1 mitre_TA0002-execution mitre_T1059-command-and-scripting-interpreter mitre_T1059.004-unix-shell
ECS Service Task Definition Updated
Detect a service task definition is updated in ECS.
cloud aws aws_ecs aws_fargate
ECS EXEC
Execute Interactive Command inside an ECS Container
Detect execution of an interactive command inside an ECS container.
cloud aws aws_ecs aws_ecs_exec aws_fargate soc2_CC6.1 mitre_TA0002-execution mitre_T1059-command-and-scripting-interpreter
Execute Command inside an ECS Container
Detect execution of a command inside an ECS container.
cloud aws aws_ecs aws_ecs_exec aws_fargate soc2_CC6.1 mitre_TA0002-execution
Terminal Shell in ECS Container
A terminal shell has been executed inside an ECS container.
cloud aws aws_ecs aws_ecs_exec aws_fargate soc2_CC6.1 mitre_TA0002-execution mitre_T1059-command-and-scripting-interpreter mitre_T1059.004-unix-shell
EFS
Create Unencrypted EFS
Detect creation of an unencrypted elastic file system.
cloud aws aws_efs
ELASTICSEARCH
Elasticsearch Domain Creation without Encryption at Rest
Detect creation of an Elasticsearch domain without encryption at rest enabled.
cloud aws aws_elasticsearch
Elasticsearch Domain Creation without VPC
Detect creation of an Elasticsearch domain without a VPC.
cloud aws aws_elasticsearch
ELB
Create HTTP Target Group without SSL
Detect creation of HTTP target group not using SSL.
cloud aws aws_elb
Create Internet-facing AWS Public Facing Load Balancer
Detect creation of an AWS internet-facing load balancer.
cloud aws aws_elb
Delete Listener
Detect deletion of the specified listener.
cloud aws aws_elb mitre_TA0001-initial-access mitre_T1190-exploit-public-facing-application
Modify Listener
Detect replacing the specified properties of the specified listener.
cloud aws aws_elb mitre_TA0001-initial-access mitre_T1190-exploit-public-facing-application
FARGATE
ECS Service Created
Detect a new service is created in ECS.
cloud aws aws_ecs aws_fargate
ECS Service Deleted
Detect a service is deleted in ECS.
cloud aws aws_ecs aws_fargate
Execute Interactive Command inside an ECS Container
Detect execution of an interactive command inside an ECS container.
cloud aws aws_ecs aws_ecs_exec aws_fargate soc2_CC6.1 mitre_TA0002-execution mitre_T1059-command-and-scripting-interpreter
Execute Command inside an ECS Container
Detect execution of a command inside an ECS container.
cloud aws aws_ecs aws_ecs_exec aws_fargate soc2_CC6.1 mitre_TA0002-execution
ECS Task Run or Started
Detect a new task is started in ECS.
cloud aws aws_ecs aws_fargate
ECS Task Stopped
Detect a task is stopped in ECS.
cloud aws aws_ecs aws_fargate
Terminal Shell in ECS Container
A terminal shell has been executed inside an ECS container.
cloud aws aws_ecs aws_ecs_exec aws_fargate soc2_CC6.1 mitre_TA0002-execution mitre_T1059-command-and-scripting-interpreter mitre_T1059.004-unix-shell
ECS Service Task Definition Updated
Detect a service task definition is updated in ECS.
cloud aws aws_ecs aws_fargate
GUARDDUTY
Delete Detector
Detect deletion of an Amazon GuardDuty detector.
cloud aws aws_guardduty mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Guard Duty Delete Members
Detect deletion of GuardDuty member accounts (to the current GuardDuty administrator account) specified by the account IDs.
cloud aws aws_guardduty mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Disable GuardDuty
Detect disabling of GuardDuty.
cloud aws aws_guardduty
Guard Duty Disassociate from Master Account
Detect disassociation of the current GuardDuty member account from its administrator account.
cloud aws aws_guardduty mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Guard Duty Disassociate Members
Detect disassociation of GuardDuty member accounts (to the current GuardDuty administrator account) specified by the account IDs.
cloud aws aws_guardduty mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
Stop Monitoring Members
Detect stopping GuardDuty monitoring for the specified member accounts.
cloud aws aws_guardduty mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
IAM
Console Login Failure
Detect a console login failure.
cloud aws aws_iam
Console Login Success From Untrusted IP
Detect a console login success from an untrusted IP address.
cloud aws aws_iam
Console Login Success
Detect a console login success.
cloud aws aws_iam
Console Login Through Assume Role
Detect a console login through Assume Role.
cloud aws aws_console aws_iam
Console Login Without MFA
Detect a console login without MFA.
cloud aws aws_console aws_iam
Console Root Login Without MFA
Detect root console login without MFA.
cloud aws aws_console aws_iam mitre_TA0040-impact mitre_T1531-account-access-removal
Logged in without Using MFA
(DEPRECATED) Detect user login without using MFA (multi-factor authentication). Use "Console Login Without MFA" instead.
cloud aws aws_iam
Password Recovery Requested
Detect AWS IAM password recovery requests.
cloud aws aws_iam mitre_TA0001-initial-access mitre_T1078-valid-accounts
Put Inline Policy in Group to Allow Access to All Resources
Detect putting an inline policy in a group that allows access to all resources.
cloud aws aws_iam
Create Access Key for Root User
Detect creation of an access key for root.
cloud aws aws_iam mitre_TA0001-initial-access mitre_T1078-valid-accounts
Deactivate Hardware MFA for Root User
Detect deactivating hardware MFA configuration for root.
cloud aws aws_iam
Deactivate MFA for Root User
Detect deactivating MFA configuration for root.
cloud aws aws_iam
Deactivate Virtual MFA for Root User
Detect deactivating virtual MFA configuration for root.
cloud aws aws_iam
Delete Virtual MFA for Root User
Detect deleting MFA configuration for root.
cloud aws aws_iam pcs_dss_iam.5
Root User Executing AWS Command
Detect root user executing AWS command.
cloud aws aws_iam
Add AWS User to Group
Detect adding a user to a group.
cloud aws aws_iam
Attach Administrator Policy
Detect attaching an administrator policy to a user.
cloud aws aws_iam
Attach IAM Policy to User
Detect attaching an IAM policy to a user.
cloud aws aws_iam
Create Group
Detect creation of a new user group.
cloud aws aws_iam mitre_TA0003-persistence mitre_T1108-redundant-access
Create Security Group Rule Allowing SSH Ingress
Detect creation of a security group rule allowing SSH ingress.
cloud aws aws_iam
Create Security Group Rule Allowing Ingress Open to the World
Detect creation of a security group rule allowing ingress open to the world.
cloud aws aws_iam
Create AWS user
Detect creation of a new AWS user.
cloud aws aws_iam mitre_TA0003-persistence mitre_T1136-create-account
Create IAM Policy that Allows All
Detect creation of an IAM policy that allows all.
cloud aws aws_iam
Deactivate MFA for User Access
Detect deactivating MFA configuration for user access.
cloud aws aws_iam
Delete Group
Detect deletion of a user group.
cloud aws aws_iam mitre_TA0040-impact mitre_T1531-account-access-removal
Delete AWS user
Detect deletion of an AWS user.
cloud aws aws_iam
Put IAM Inline Policy to User
Detect putting an IAM inline policy to a user.
cloud aws aws_iam
Remove AWS User from Group
Detect removing a user from a group.
cloud aws aws_iam
Update Account Password Policy Not Expiring
Detect updating the password policy so that passwords do not expire at all.
cloud aws aws_iam
Update Account Password Policy Expiring in More Than 90 Days
Detect updating the password policy to expire passwords in more than 90 days.
cloud aws aws_iam
Update Account Password Policy Not Preventing Reuse of Last 24 Passwords
Detect updating the password policy to not prevent reuse of the last 24 passwords.
cloud aws aws_iam
Update Account Password Policy Not Preventing Reuse of Last 4 Passwords
Detect updating the password policy to not prevent reuse of the last 4 passwords.
cloud aws aws_iam
Update Account Password Policy Not Requiring 14 Characters
Detect updating the password policy to not require a minimum length of 14 characters.
cloud aws aws_iam
Update Account Password Policy Not Requiring 7 Characters
Detect updating the password policy to not require a minimum length of 7 characters.
cloud aws aws_iam
Update Account Password Policy Not Requiring Lowercase
Detect updating the password policy to not require the use of a lowercase letter.
cloud aws aws_iam
Update Account Password Policy Not Requiring Number
Detect updating the password policy to not require the use of a number.
cloud aws aws_iam
Update Account Password Policy Not Requiring Symbol
Detect updating the password policy to not require the use of a symbol.
cloud aws aws_iam
Update Account Password Policy Not Requiring Uppercase
Detect updating the password policy to not require the use of an uppercase letter.
cloud aws aws_iam
Update Assume Role Policy
Detect modifying a role.
cloud aws aws_iam mitre_TA0006-credential-access mitre_T1110-brute-force
KMS
Create Customer Master Key
Detect creation of a new CMK (with rotation disabled).
cloud aws aws_kmsDisable CMK Rotation
Detect disabling of a customer master key's rotation.
cloud aws aws_kmsDisable Key
Detect disabling a customer master key (CMK), thereby preventing its use for cryptographic operations.
cloud aws aws_kmsRemove KMS Key Rotation
Detect removal of KMS key rotation.
cloud aws aws_kmsSchedule Key Deletion
Detect scheduling of the deletion of a customer master key.
cloud aws aws_kmsLAMBDA
Create Lambda Function Not Using Latest Runtime
Detect creation of a Lambda function not using the latest runtime.
cloud aws aws_lambda mitre_T1190-exploit-public-facing-applicationCreate Lambda Function Using Unsupported Runtime
Detect creation of a Lambda function using an unsupported runtime.
cloud aws aws_lambda mitre_T1190-exploit-public-facing-applicationCreate Lambda Function
Detect creation of a Lambda function.
cloud aws aws_lambda mitre_TA0003-persistenceDissociate Lambda Function from VPC
Detect dissociation of a Lambda function from a VPC.
cloud aws aws_lambdaUpdate Lambda Function Code
Detect updates to a Lambda function code.
cloud aws aws_lambda mitre_TA0003-persistence mitre_T1496-resource-hijackingUpdate Lambda Function Configuration
Detect updates to a Lambda function configuration.
cloud aws aws_lambda mitre_TA0003-persistence mitre_T1496-resource-hijacking
RDS
Authorize DB Security Group Ingress
Detect enabling ingress to a DBSecurityGroup using one of two forms of authorization.
cloud aws aws_rdsCreate DB Cluster
Detect creation of a database cluster.
cloud aws aws_rds mitre_TA0003-persistence mitre_T1108-redundant-accessCreate DB Security Group
Detect creation of a database security group.
cloud aws aws_rdsCreate Global Cluster
Detect creation of a global cluster.
cloud aws aws_rds mitre_TA0003-persistence mitre_T1108-redundant-accessDelete DB Cluster
Detect deletion of a database cluster.
cloud aws aws_rds mitre_TA0040-impact mitre_T1485-data-destructionDelete DB Security Group
Detect deletion of a database security group.
cloud aws aws_rdsDelete DB Snapshot
Detect deletion of a database snapshot.
cloud aws aws_rds mitre_TA0040-impact mitre_T1485-data-destruction
Make RDS DB Instance Public
Detect making an RDS DB instance public.
cloud aws aws_rds
Make RDS Snapshot Public
Detect making an RDS snapshot public.
cloud aws aws_rds
Modify RDS Snapshot Attribute
Detect modification of an RDS snapshot attribute.
cloud aws aws_rds mitre_TA0010-exfitration mitre_T1537-transfer-data-to-cloud-accountRevoke DB Security Group Ingress
Detect revocation of ingress from a DBSecurityGroup for previously authorized IP ranges or EC2 or VPC Security Groups.
cloud aws aws_rdsStop DB Cluster
Detect stopping of a database cluster.
cloud aws aws_rds mitre_TA0040-impact mitre_T1489-service-stopStop DB Instance
Detect stopping of a database instance.
cloud aws aws_rds mitre_TA0040-impact mitre_T1489-service-stop
ROUTE53
Associate VPC with Hosted Zone
Detect association of an Amazon VPC with a private hosted zone.
cloud aws aws_route53Change Resource Record Sets
Detect creation, changes, or deletion of a resource record set.
cloud aws aws_route53Register Domain
Detect registration of a new domain.
cloud aws aws_route53
S3
Delete Bucket CORS
Detect deletion of the cors configuration for a bucket.
cloud aws aws_s3 mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-hostDelete Bucket Encryption
Detect deletion of the encryption configuration for bucket storage.
cloud aws aws_s3 mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-hostDelete Bucket Lifecycle
Detect deletion of the lifecycle configuration from the specified bucket.
cloud aws aws_s3 mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-hostDelete Bucket Policy
Detect deletion of the policy of a specified bucket.
cloud aws aws_s3 mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-hostDelete Bucket Public Access Block
Detect deletion of the public access block configuration of a bucket.
cloud aws aws_s3Delete Bucket Replication
Detect deletion of the replication configuration from the bucket.
cloud aws aws_s3 mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-hostRead Object in Watched Bucket
Detect a Read operation on objects in watched buckets.
cloud aws aws_s3List Buckets
Detect listing of all S3 buckets.
cloud aws aws_s3 mitre_TA0007-discovery mitre_T1083-file-and-directory-discoveryPut Bucket ACL
Detect setting the permissions on an existing bucket using access control lists.
cloud aws aws_s3 mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-hostPut Bucket CORS
Detect setting the cors configuration for a bucket.
cloud aws aws_s3 mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-hostPut Bucket Lifecycle
Detect creation or modification of a lifecycle configuration for the bucket [DEPRECATED use `Put Bucket Lifecycle Configuration` instead].
cloud aws aws_s3 mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-hostPut Bucket Policy
Detect applying an Amazon S3 bucket policy to an Amazon S3 bucket.
cloud aws aws_s3 mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-hostPut Bucket Replication
Detect creation of a replication configuration or the replacement of an existing one.
cloud aws aws_s3 mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-hostPut Object in Watched Bucket
Detect a Put operation on objects in watched buckets.
cloud aws aws_s3
SAGEMAKER
Create SageMaker Notebook Instance with Direct Internet Access
Detect creation of a SageMaker notebook instance with direct internet access.
cloud aws aws_sagemaker
SECRETSMANAGER
Get Secret Value
Detect retrieval of the contents of the encrypted fields SecretString or SecretBinary from the specified version of a secret, whichever contains content.
cloud aws aws_secretsmanager mitre_TA0006-credential-access mitre_T1528-steal-application-access-token
SECURITYHUB
Batch Disable Standards
Detect disabling of the standards specified by the provided StandardsSubscriptionArns.
cloud aws aws_securityhubDelete Action Target
Detect deletion of a custom action target from Security Hub.
cloud aws aws_securityhub mitre_TA0005-defense-evasion mitre_T1089-disabling-security-toolsSecurity Hub Delete Members
Detect deletion of the specified member accounts from Security Hub.
cloud aws aws_securityhub mitre_TA0005-defense-evasion mitre_T1089-disabling-security-toolsDisable Import Findings for Product
Detect disabling of the integration of the specified product with Security Hub.
cloud aws aws_securityhub mitre_TA0005-defense-evasion mitre_T1089-disabling-security-toolsDisable Security Hub
Detect disabling the Security Hub in the current region.
cloud aws aws_securityhub mitre_TA0005-defense-evasion mitre_T1089-disabling-security-toolsSecurity Hub Disassociate From Master Account
Detect disassociation of the current Security Hub member account from the associated master account.
cloud aws aws_securityhub mitre_TA0005-defense-evasion mitre_T1089-disabling-security-toolsSecurity Hub Disassociate Members
Detect disassociation of the specified member accounts from the associated master account.
cloud aws aws_securityhub mitre_TA0005-defense-evasion mitre_T1089-disabling-security-toolsUpdate Action Target
Detect updating the name and description of a custom action target in Security Hub.
cloud aws aws_securityhub mitre_TA0005-defense-evasion mitre_T1089-disabling-security-toolsUpdate Standards Control
Detect enabling or disabling of a standard control.
cloud aws aws_securityhub mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
VPC
Accept VPC Peering Connection
Detect accepting a VPC peering connection.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-toolsAttach Internet Gateway
Detect attaching an internet gateway.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-toolsCreate a Network ACL Entry Allowing Ingress Open to the World
Detect creation of access control list entry allowing ingress open to the world.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-toolsCreate a Network ACL Entry
Detect creating a network ACL entry.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-toolsCreate a Network ACL
Detect creating a network ACL.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-toolsCreate VPC Route
Detect creating a VPC route.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-toolsCreate VPC Peering Connection
Detect creating a VPC peering connection.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-toolsCreate VPC with Default Security Group
Detect creation of a new VPC with default security group.
cloud aws aws_vpcCreate VPC with No Flow Log
Detect creation of a new VPC with no flow log.
cloud aws aws_vpcDelete VPC Flow Log
Detect deleting VPC flow log.
cloud aws aws_vpc mitre_TA0005-defense-evasion mitre_T1066-indicator-removal-from-toolsDelete a Network ACL Entry
Detect deletion of a network ACL entry.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-toolsDelete a Network ACL
Detect deleting a network ACL.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-toolsReplace a Network ACL Association
Detect replacement of a network ACL association.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-toolsReplace a Network ACL Entry
Detect replacement of a network ACL entry.
cloud aws aws_vpc mitre_TA0003-persistence mitre_TA0005-defense-evasion mitre_T1108-redundant-access mitre_T1089-disabling-security-tools
WAF
Delete WAF Rule Group
Detect deleting a WAF rule group.
cloud aws aws_waf mitre_TA0005-defense-evasion mitre_T1089-disabling-security-toolsDelete Web ACL
Detect deleting a web ACL.
cloud aws aws_waf mitre_TA0005-defense-evasion mitre_T1089-disabling-security-tools
OTHER
AWS Command Executed by Untrusted User
Detect AWS command execution by an untrusted user.
cloud aws
AWS Command Executed on Unused Region
Detect AWS command execution on unused regions.
cloud aws mitre_T1526-cloud-service-discovery mitre_T1535-unused-unsupported-cloud-regions
10.2 - GCP
This section describes the offering.
For setup options, details, troubleshooting, and validation steps, see Installations - Cloud - GCP.
Available Features
- Threat detection based on GCP Cloud Audit Logs integration
- Compliance Security Posture Management (CSPM), including CIS GCP and CIS GKE
Benchmark compliance assessments
- GCP Cloud Container scanning
- Image scanning on GCP
Threat Detection Based on GCP Cloud Audit Logs
Threat Detection leverages audit logs from GCP Cloud Audit logs plus Falco
rules to detect threats as soon as they occur and bring governance,
compliance, and risk auditing for your cloud accounts.
A rich set of Falco rules, a GCP Best Practices default policy, and
a GCP policy type for creating customized policies are
included. These correspond to security standards and benchmarks such as:
NIST 800-53, PCI DSS, SOC 2, MITRE ATT&CK®, and Google Cloud Security best practices.
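For orientation, a Falco-style rule for GCP audit logs looks roughly like the sketch below. The field paths, source name, and exact syntax are assumptions for illustration only; refer to the shipped GCP rule set for the authoritative definitions (the methodName shown is the real GKE audit-log method).

```yaml
# Sketch of a Falco-style rule over GCP Cloud Audit Logs (illustrative only).
# Condition/output field paths are assumed for illustration.
- rule: GCP Delete GKE Cluster (sketch)
  desc: Detect the deletion of a GKE cluster via a Cloud Audit Log entry
  condition: >
    jevt.value[/protoPayload/methodName] =
    "google.container.v1.ClusterManager.DeleteCluster"
  output: >
    GKE cluster deleted
    (user=%jevt.value[/protoPayload/authenticationInfo/principalEmail])
  priority: WARNING
  source: gcp_auditlog
  tags: [cloud, gcp, gcp_gke]
```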
CSPM/Compliance with CIS GKE and CIS GCP Benchmarks
A new cloud compliance standard, CIS GCP benchmarks, has been added to the Sysdig compliance feature. These assessments are powered by an open-source engine, Cloud Custodian, within Sysdig's Cloud Security Posture Management (CSPM) engine.
The assessments evaluate your Google Cloud services against the benchmark requirements and return the results and remediation activities you need to fix misconfigurations in your cloud environment.
GCP Cloud Container Scanning
GCP Cloud Container Scanning uses a PubSub topic to automatically detect any container image pushed to registries on Google Container Registry or Google Artifact Registry, as well as images deployed to Google Cloud Run. An ephemeral Google Cloud Build pipeline is then created to scan that image so a vulnerability report is available in your Sysdig backend.
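As a rough illustration, the Pub/Sub notification published on an image push looks like the following (structure based on Google's documented Container Registry notification format; the project and image names are placeholders):

```yaml
# Example Container Registry Pub/Sub message body (placeholders only).
# "action" is INSERT for a pushed image; "digest" and "tag" identify it.
{
  "action": "INSERT",
  "digest": "gcr.io/my-project/my-image@sha256:<digest>",
  "tag": "gcr.io/my-project/my-image:latest"
}
```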
10.2.1 - Auditlog Falco rules
APIKEYS 1 rule
CLOUDFUNCTIONS 3 rules
CLOUDKMS 2 rules
CLOUDRESOURCEMANAGER 1 rule
CLOUDRUN 2 rules
DNS 1 rule
GCE 1 rule
GKE 4 rules
IAM 5 rules
LOGGING 1 rule
MONITORING 2 rules
SQL 3 rules
STORAGE BUCKETS 7 rules
VM 5 rules
VPC 2 rules
VPC NETWORKS 2 rules
OTHER 2 rules
Total 44 rules.
APIKEYS
GCP Create API Keys for a Project
Detect creation of API keys for a project.
cloud gcp gcp_apikeys cis_controls_16 cis_gcp_1.12
CLOUDFUNCTIONS
GCP Create Cloud Function Not Using Latest Runtime
Detect creation of a Cloud Function using an old or deprecated runtime.
cloud gcp gcp_cloudfunctions soc2 soc2_CC7.1 mitre_T1190-exploit-public-facing-applicationGCP Create Cloud Function
Detect creation of a Cloud function.
cloud gcp gcp_cloudfunctions mitre_TA0003-persistenceGCP Update Cloud Function
Detect updates to a Cloud Function.
cloud gcp gcp_cloudfunctions mitre_TA0003-persistence mitre_T1496-resource-hijacking
CLOUDKMS
GCP Create KMS Key Without Rotation
Detect creation of a new KMS with rotation disabled.
cloud gcp gcp_cloudkms soc2 soc2_CC5.2 soc2_CC6.6 ISO_27001 ISO_27001_A.10.1.2GCP Remove KMS Key Rotation
Detect removal of KMS key rotation.
cloud gcp gcp_cloudkms soc2 soc2_CC6.1 soc2_CC8.1 ISO_27001 ISO_27001_A.10.1.2 ISO_27001_A.18.1.5 GDPR GDPR_32.1 GDPR_32.2
CLOUDRESOURCEMANAGER
GCP Invitation Sent to Non-corporate Account
Detect sending of invitations to a non-corporate account.
cloud gcp gcp_cloudresourcemanager HIPAA HIPAA_164.308(a) HIPAA_164.312(a) HIPAA_164.312(d) HITRUST HITRUST_CSF_01.q cis_controls_16.2 cis_gcp_1.1 mitre_T1136-create-account
CLOUDRUN
CloudRun Create Service
Detect creation of a CloudRun Service.
cloud gcp gcp_cloudrun
CloudRun Replace Service
Detect the replacement of a CloudRun Service.
cloud gcp gcp_cloudrun
DNS
GCP Create or Patch DNS Zone without DNSSEC
Detect creation of a DNS zone with DNSSEC disabled or a modification of a DNS zone to disable DNSSEC.
cloud gcp gcp_dns cis_controls_11.1 cis_gcp_3.3
GCE
GCP Describe Instance
Detect description of the specified GCE instance.
cloud gcp gcp_gce
GKE
GCP Delete DNS Zone
Detect the deletion of a DNS zone.
cloud gcp gcp_gke
GCP Delete GKE Cluster
Detect the deletion of a GKE cluster.
cloud gcp gcp_gke
GCP Delete GKE Node Pool
Detect the deletion of a GKE node pool.
cloud gcp gcp_gke
GCP Delete Router
Detect the deletion of a router.
cloud gcp gcp_gke
IAM
GCP Create GCP-managed Service Account Key
Detect creating an access key for a GCP-managed service account.
cloud gcp gcp_iam soc2 soc2_CC5.2 soc2_CC6.6 ISO_27001 ISO_27001_A.10.1.2 HIPAA HIPAA_164.312(e) HITRUST HITRUST_CSF_06.d HITRUST_CSF_10.g cis_controls_16 mitre_T1550-use-alternate-authentication-materialGCP Create User-managed Service Account Key
Detect creating an access key for a user-managed service account.
cloud gcp gcp_iam soc2 soc2_CC5.2 soc2_CC6.6 ISO_27001 ISO_27001_A.10.1.2 HIPAA HIPAA_164.312(e) HITRUST HITRUST_CSF_06.d HITRUST_CSF_10.g cis_controls_16 cis_gcp_1.4 mitre_T1550-use-alternate-authentication-materialGCP Delete IAM Role
Detect the deletion of an IAM role.
cloud gcp gcp_iamGCP Operation by a Non-corporate Account
Detect executing an operation by a non-corporate account.
cloud gcp gcp_iam HIPAA HIPAA_164.308(a) HIPAA_164.312(a) HIPAA_164.312(d) HITRUST HITRUST_CSF_01.q cis_controls_16.2 cis_gcp_1.1GCP Super Admin Executing Command
Detect a super admin executing a GCP command.
cloud gcp gcp_iam soc2 soc2_CC6.2 soc2_CC6.6 FedRAMP FedRAMP_AC-2(12) ISO_27001 ISO_27001_A.6.1.2 ISO_27001_A.9.2.3 HIPAA HIPAA_164.308(a) HIPAA_164.312(a) HIPAA_164.312(b) HITRUST_CSF HITRUST_CSF_01.c HITRUST_CSF_09.aa GDPR GDPR_25.1 GDPR_25.2 GDPR_25.3
LOGGING
GCP Update, Disable or Delete Sink
Detect the updating, disabling or deletion of a sink.
cloud gcp gcp_logging FedRAMP FedRAMP_AU-12(1) FedRAMP_AU-3(1) FedRAMP_AU-9(2) FedRAMP_CM-3(1) ISO_27001 ISO_27001_A.16.1.7 ISO_27001_A.18.1.3 HIPAA HIPAA_164.312(b) HITRUST HITRUST_CSF_09.aa HITRUST_CSF_10.k cis_controls_6.2 cis_controls_6.4 cis_gcp_2.2
MONITORING
GCP Monitoring Alert Deleted
Detect deletion of an alert.
cloud gcp gcp_monitoring FedRAMP FedRAMP_AU-12(1) FedRAMP_AU-3(1) FedRAMP_AU-9(2) FedRAMP_CM-3(1) ISO_27001 ISO_27001_A.16.1.7 ISO_27001_A.18.1.3 HIPAA HIPAA_164.312(b) HITRUST HITRUST_CSF_09.aa HITRUST_CSF_10.k mitre_TA0005-defense-evasion mitre_T1066-indicator-removal-from-tools mitre_T1562-impair-defenses mitre_T1562.008-disable-cloud-logsGCP Monitoring Alert Updated
Detect updating of an alert.
cloud gcp gcp_monitoring FedRAMP FedRAMP_AU-12(1) FedRAMP_AU-3(1) FedRAMP_AU-9(2) FedRAMP_CM-3(1) ISO_27001 ISO_27001_A.16.1.7 ISO_27001_A.18.1.3 HIPAA HIPAA_164.312(b) HITRUST HITRUST_CSF_09.aa HITRUST_CSF_10.k mitre_TA0005-defense-evasion mitre_T1066-indicator-removal-from-tools
SQL
GCP Disable Automatic Backups for a Cloud SQL Instance
Detect that automatic backups have been disabled for a Cloud SQL instance.
cloud gcp gcp_sql cis_controls_10.1 cis_gcp_6.7GCP Disable the Requirement for All Incoming Connections to Use SSL for a Cloud SQL Instance
Detect that the requirement for all incoming connections to use SSL for a Cloud SQL instance has been disabled.
cloud gcp gcp_sql FedRAMP FedRAMP_CM-3(1) FedRAMP_SC-7(4) HIPAA HIPAA_164.310(b) HITRUST_CSF HITRUST_CSF_01.j HITRUST_CSF_01.n HITRUST_CSF_01.y HITRUST_CSF_05.i HITRUST_CSF_09.s HITRUST_CSF_10.k cis_controls_13 cis_controls_14.4 cis_controls_16.5 cis_gcp_6.4GCP Set a Public IP for a Cloud SQL Instance
Detect that a public IP address has been set for a Cloud SQL instance.
cloud gcp gcp_sql FedRAMP FedRAMP_SC-7(4) HITRUST_CSF HITRUST_CSF_01.n HITRUST_CSF_09.m cis_controls_13 cis_gcp_6.6
STORAGE BUCKETS
GCP Create Bucket
Detect creation of a bucket.
cloud gcp gcp_storage_buckets mitre_T1074-data-stagedGCP Delete Bucket
Detect deletion of a bucket.
cloud gcp gcp_storage_bucketsGCP List Buckets
Detect listing of all storage buckets.
cloud gcp gcp_storage_buckets mitre_TA0007-discovery mitre_T1083-file-and-directory-discoveryGCP List Bucket Objects
Detect listing of all objects in a bucket.
cloud gcp gcp_storage_buckets mitre_TA0007-discovery mitre_T1083-file-and-directory-discoveryGCP Put Bucket ACL
Detect setting the permissions on an existing bucket using access control lists.
cloud gcp gcp_storage_buckets FedRAMP FedRAMP_AC-6(1) FedRAMP_AC-6(2) FedRAMP_AC-6(3) ISO_27001 ISO_27001_A.9.1.2 HIPAA HIPAA_164.308(a) HIPAA_164.312(a) HITRUST_CSF HITRUST_CSF_01.c HITRUST_CSF_01.q HITRUST_CSF_06.j mitre_TA0005-defense-evasion mitre_T1070-indicator-removal-on-host mitre_T1530-data-from-cloud-storage-objectGCP Set Bucket IAM Policy
Detect setting the permissions on an existing bucket using IAM policies.
cloud gcp gcp_storage_buckets FedRAMP FedRAMP_AC-6(1) FedRAMP_AC-6(2) FedRAMP_AC-6(3) ISO_27001 ISO_27001_A.9.1.2 HIPAA HIPAA_164.308(a) HIPAA_164.312(a) HITRUST_CSF HITRUST_CSF_01.c HITRUST_CSF_01.q HITRUST_CSF_06.j mitre_T1530-data-from-cloud-storage-objectGCP Update Bucket
Detect the update of a bucket.
cloud gcp gcp_storage_buckets
VM
GCP Enable Connecting to Serial Ports for a VM Instance
Detect enabling of connection to serial ports for a VM instance.
cloud gcp gcp_vm FedRAMP FedRAMP_CM-3(1) HITRUST_CSF HITRUST_CSF_10.k cis_controls_9.2 cis_gcp_4.5GCP Creation of a VM Instance with IP Forwarding Enabled
Detect creating a VM instance with IP forwarding enabled.
cloud gcp gcp_vm cis_controls_11.1 cis_controls_11.2 cis_gcp_4.6GCP Suspected Disable of OS Login in a VM Instance
Detect modification of the enable-oslogin metadata in an instance.
cloud gcp gcp_vm cis_controls_16 cis_gcp_4.4GCP Enable Project-wide SSH keys for a VM Instance
Detect enabling of project-wide SSH keys for a VM instance.
cloud gcp gcp_vm HIPAA HIPAA_164.310(b) HITRUST_CSF HITRUST_CSF_01.j HITRUST_CSF_01.n HITRUST_CSF_01.y HITRUST_CSF_05.i HITRUST_CSF_09.s cis_controls_16 cis_gcp_4.3GCP Shield Disabled for a VM Instance
Detect disabling of the Shielded VM parameter(s) of a VM instance.
cloud gcp gcp_vm cis_controls_13 cis_gcp_4.8
VPC
GCP Delete VPC Network
Detect the deletion of a VPC network.
cloud gcp gcp_vpc
GCP Delete VPC Subnetwork
Detect the deletion of a VPC subnetwork.
cloud gcp gcp_vpc
VPC NETWORKS
GCP Create a Default VPC Network
Detect creation of a default network in a project.
cloud gcp gcp_vpc_networks FedRAMP FedRAMP_CM-3(1) FedRAMP_SC-7(4) HITRUST_CSF HITRUST_CSF_01.n HITRUST_CSF_10.k cis_controls_11.1 cis_gcp_3.1GCP Disable Subnet Flow Logs
Detect disabling the flow logs of a subnet.
cloud gcp gcp_vpc_networks soc2 soc2_CC6.6 FedRAMP FedRAMP_AU-12(1) FedRAMP_AU-3(1) FedRAMP_AU-9(2) FedRAMP_CM-3(1) ISO_27001 ISO_27001_A.16.1.7 ISO_27001_A.18.1.3 HIPAA HIPAA_164.312(b) HITRUST_CSF HITRUST_CSF_09.aa HITRUST_CSF_10.k cis_controls_6.2 cis_controls_12.8 cis_gcp_3.8
OTHER
GCP Delete Resources from the PCI Blueprint Environment
Detect the deletion of resources from the blueprint environment.
cloud gcp
GCP Command Executed on Unused Region
Detect GCP command execution on unused regions.
cloud gcp FedRAMP FedRAMP_AC-2(12) HIPAA HIPAA_164.308(a) HIPAA_164.312(a) mitre_T1526-cloud-service-discovery mitre_T1535-unused-unsupported-cloud-regions
10.3 - Azure
This section describes the offering.
For setup options, details, troubleshooting, and validation steps, see Installations - Cloud - Azure.
Available Features
- Cloud Security Posture Management (CSPM): Based on CIS benchmarks tailored for your assets
- Cloud Threat Detection: Identify threats in your Azure environment using Falco rules for Azure
- Image Vulnerability Scanning: Automatic vulnerability scanning of images pushed to Azure Container Registry and images executed on Azure Container Instances
10.3.1 - Platformlogs Falco rules
DATABASE SERVICES 2 rules
FUNCTION APPS 5 rules
LOGGING AND MONITORING 1 rule
NETWORKING 2 rules
SQL SERVER 2 rules
STORAGE ACCOUNTS 11 rules
Total 21 rules.
DATABASE SERVICES
Azure Auditing on SQL Server Has Been Disabled
The Azure platform allows a SQL server to be created as a service. Enabling auditing at the server level ensures that all existing and newly created databases on the SQL server instance are audited. Auditing policy applied on the SQL database does not override auditing policy and settings applied on the particular SQL server where the database is hosted.
Auditing tracks database events and writes them to an audit log in the Azure storage account. It also helps to maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations.
cloud azure azure_database_services azure_sql_server cis_azure_4.1.1 cis_controls_6.3
Azure Server Vulnerability Assessment on SQL Server Has Been Removed
Vulnerability Assessment setting 'Periodic recurring scans' schedules periodic (weekly) vulnerability scanning for the SQL server and corresponding Databases. Periodic and regular vulnerability scanning provides risk visibility based on updated known vulnerability signatures and best practices.
cloud azure azure_database_services azure_sql_server cis_azure_4.2.2 cis_azure_4.2.3 cis_controls_3.1
FUNCTION APPS
Azure Function App Deleted
A function app has been deleted.
cloud azure azure_function_apps
Azure Function App Deployment Slot Deleted
A function app deployment slot has been deleted.
cloud azure azure_function_apps
Azure Function App Host Key Deleted
A function app host key has been deleted.
cloud azure azure_function_apps
Azure Function App Host Master Key Modified
A function app host master key has been renewed.
cloud azure azure_function_apps
Azure Function Key Deleted
A function key has been deleted.
cloud azure azure_function_apps
LOGGING AND MONITORING
Azure Diagnostic Setting Has Been Disabled
A diagnostic setting controls how a diagnostic log is exported. By default, logs are retained only for 90 days. Diagnostic settings should be defined so that logs can be exported and stored for a longer duration in order to analyze security activities within an Azure subscription.
cloud azure azure_logging_and_monitoring cis_azure_5.1.1 cis_controls_6.5
NETWORKING
Azure RDP Access Is Allowed from The Internet
The potential security problem with using RDP over the Internet is that attackers can use various brute force techniques to gain access to Azure Virtual Machines. Once the attackers gain access, they can use a virtual machine as a launch point for compromising other machines on an Azure Virtual Network or even attack networked devices outside of Azure.
cloud azure azure_networking cis_azure_6.1 cis_controls_9.2Azure SSH Access Is Allowed from The Internet
The potential security problem with using SSH over the Internet is that attackers can use various brute force techniques to gain access to Azure Virtual Machines. Once the attackers gain access, they can use a virtual machine as a launch point for compromising other machines on the Azure Virtual Network or even attack networked devices outside of Azure.
cloud azure azure_networking cis_azure_6.2 cis_controls_9.2
SQL SERVER
Azure Auditing on SQL Server Has Been Disabled
The Azure platform allows a SQL server to be created as a service. Enabling auditing at the server level ensures that all existing and newly created databases on the SQL server instance are audited. Auditing policy applied on the SQL database does not override auditing policy and settings applied on the particular SQL server where the database is hosted.
Auditing tracks database events and writes them to an audit log in the Azure storage account. It also helps to maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations.
cloud azure azure_database_services azure_sql_server cis_azure_4.1.1 cis_controls_6.3
Azure Server Vulnerability Assessment on SQL Server Has Been Removed
Vulnerability Assessment setting 'Periodic recurring scans' schedules periodic (weekly) vulnerability scanning for the SQL server and corresponding Databases. Periodic and regular vulnerability scanning provides risk visibility based on updated known vulnerability signatures and best practices.
cloud azure azure_database_services azure_sql_server cis_azure_4.2.2 cis_azure_4.2.3 cis_controls_3.1
STORAGE ACCOUNTS
Azure Access Level creation attempt for Blob Container Set to Public
Anonymous, public read access to a container and its blobs can be enabled in Azure Blob storage. It grants read-only access to these resources without sharing the account key, and without requiring a shared access signature. It is recommended not to provide anonymous access to blob containers until, and unless, it is strongly desired. A shared access signature token should be used for providing controlled and timed access to blob containers. If no anonymous access is needed on the storage account, it's recommended to set allowBlobPublicAccess false.
cloud azure azure_storage_accounts cis_azure_3.5 cis_controls_16
Creation attempt Azure Secure Transfer Required Set to Disabled
The secure transfer option enhances the security of a storage account by only allowing requests to the storage account by a secure connection. For example, when calling REST APIs to access storage accounts, the connection must use HTTPS. Any requests using HTTP will be rejected when 'secure transfer required' is enabled. When using the Azure files service, connection without encryption will fail, including scenarios using SMB 2.1, SMB 3.0 without encryption, and some flavors of the Linux SMB client. Because Azure storage doesn't support HTTPS for custom domain names, this option is not applied when using a custom domain name.
cloud azure azure_storage_accounts cis_azure_3.5 cis_controls_16
Creation attempt Azure Default Network Access Rule for Storage Account Set to Allow
Storage accounts should be configured to deny access to traffic from all networks (including internet traffic). Access can be granted to traffic from specific Azure Virtual networks, allowing a secure network boundary for specific applications to be built. Access can also be granted to public internet IP address ranges, to enable connections from specific internet or on-premises clients. When network rules are configured, only applications from allowed networks can access a storage account. When calling from an allowed network, applications continue to require proper authorization (a valid access key or SAS token) to access the storage account.
cloud azure azure_storage_accounts cis_azure_3.6 cis_controls_16
Azure Access Level for Blob Container Set to Public
Anonymous, public read access to a container and its blobs can be enabled in Azure Blob storage. It grants read-only access to these resources without sharing the account key, and without requiring a shared access signature. It is recommended not to provide anonymous access to blob containers until, and unless, it is strongly desired. A shared access signature token should be used for providing controlled and timed access to blob containers. If no anonymous access is needed on the storage account, it's recommended to set allowBlobPublicAccess false.
cloud azure azure_storage_accounts cis_azure_3.5 cis_controls_16
Azure Default Network Access Rule for Storage Account Set to Allow
Storage accounts should be configured to deny access to traffic from all networks (including internet traffic). Access can be granted to traffic from specific Azure Virtual networks, allowing a secure network boundary for specific applications to be built. Access can also be granted to public internet IP address ranges, to enable connections from specific internet or on-premises clients. When network rules are configured, only applications from allowed networks can access a storage account. When calling from an allowed network, applications continue to require proper authorization (a valid access key or SAS token) to access the storage account.
cloud azure azure_storage_accounts cis_azure_3.6 cis_controls_16
Azure Secure Transfer Required Set to Disabled
The secure transfer option enhances the security of a storage account by only allowing requests to the storage account by a secure connection. For example, when calling REST APIs to access storage accounts, the connection must use HTTPS. Any requests using HTTP will be rejected when 'secure transfer required' is enabled. When using the Azure files service, connection without encryption will fail, including scenarios using SMB 2.1, SMB 3.0 without encryption, and some flavors of the Linux SMB client. Because Azure storage doesn't support HTTPS for custom domain names, this option is not applied when using a custom domain name.
cloud azure azure_storage_accounts cis_azure_3.1 cis_controls_14.4Azure Blob Created
A blob has been created in a storage container.
cloud azure azure_storage_accountsAzure Blob Deleted
A blob has been deleted from a storage container.
cloud azure azure_storage_accountsAzure Container Created
A Container has been created.
cloud azure azure_storage_accountsAzure Container Deleted
A Container has been deleted.
cloud azure azure_storage_accountsAzure Container ACL Modified
A container ACL has been modified.
cloud azure azure_storage_accounts11 - IaC Security
Introduction
Benefits and Use Cases
Infrastructure as Code helps move security protocols and standards down into the development pipeline, highlighting and resolving potential issues as early as possible in the development process. This benefits many players within the organization:
- Security and compliance personnel see reductions in violations and security risks
- DevOps managers can streamline processes and secure the pipeline
- Developers can detect issues early and have clear guidance on how to remediate them with minimal effort.
11.1 - Git Iac Scanning
Introduction
Sysdig has introduced Git Integrations as part of its Infrastructure as Code (IaC) solution. At this time, the integrations can be used to scan incoming Pull Requests (PRs) for security violations based on predefined policies. The results of the scanning evaluation are presented in the PR itself. If passed, the user can merge; if failed the user cannot merge. Information provided in the PR also targets the problem area to assist the user in remediation.
See the IaC Supportability Matrix to review the resources and file types currently supported.
Benefits and Use Cases
Infrastructure as Code helps move security protocols and standards down into the development pipeline, highlighting and resolving potential issues as early as possible in the development process. This benefits many players within the organization:
- Security and compliance personnel see reductions in violations and security risks
- DevOps managers can streamline processes and secure the pipeline
- Developers can detect issues early and have clear guidance on how remediate them with minimal effort.
Process Overview
Sysdig currently supports Github, Bitbucket, GitLab, and Azure DevOps integrations.
In each case, you log in as admin, select Git Integrations, choose your flavor, configure it, and define which parts of the source to protect:
- The repositories (selected from the list)
- The folders within each repo (or all folders, using /)
- The branches (for pull request evaluations only)
Launching an Integration
Log in to Sysdig Secure as admin and choose the Settings button in the navigation bar.
Select Git Integrations.
If no integrations have ever been added, the page is empty. Click Add Git Integration.
If some integrations already exist, the Git Integrations List page is displayed, showing the integration name, status, and number of configured sources.

Click Add Git Integration.
Select the relevant integration type from the drop-down list and begin the configuration.
Configuration Steps
Github
This configuration toggles between the Sysdig Secure interface and the Github interface.
From the Git Integrations List page, choose Github and:
Enter an Integration Name and click Complete in Github.
The Github interface opens in a new tab.
Sign in to Github and select where to install the Sysdig Github app. Click Configure.
Select All Repositories or define chosen repos and click Install.
You will be redirected to the Integration page in Sysdig Secure when installation is complete. The Integration Status should show Active.
Click Add Sources on the new integration listing.

Note: It’s possible to stop here; when you come back to the List page, you can click Configure Sources to resume.
Add Repos one at a time, defining the Folder(s) to be scanned.
Choose Branches where Sysdig should run a Pull Request evaluation check. Define the branch using a regular expression. You can use .* to check PRs on all branches, or main to check only the main branch.
Click Add Source. Repeat as needed and click Save. The system automatically checks that valid folder names have been entered.
Review the Status on the Integrations List page, which shows any issues in the connection between Sysdig Secure and the Sysdig Github application:
- Active: Everything is working as expected.
- Last Scanned: As soon as the integration is fully configured and active, a scan will be run. The Last Scanned field is updated after every scan (every 24 hours by default).
- Not Installed: The Sysdig Github App is not installed.
- Suspended: The Sysdig Github App is suspended and needs to be resumed.
See also the Additional Options.
Bitbucket
Prerequisites
- Open your Bitbucket organization and create a designated account for Sysdig.
- Configure the account’s access for the relevant workspace.
- Create a new app password for the account:
  - Navigate to Personal Settings > App passwords, then click Create app password.
  - Assign the following permissions:
    - Account: Read
    - Repositories: Read, Write, Admin
    - Pull requests: Read, Write
    - Webhooks: Read and write
  - Click Create.
Add Bitbucket Integration
In Sysdig, navigate to the Git Integration screen.
Click Add Git Integration and choose Bitbucket.

Fill in the details, including the app password created in the prerequisites step.
Click Add to complete. You will be redirected to the Integration page in Sysdig Secure when installation is complete. The Integration Status should show Active.
Click Add Sources on the new integration listing.
Add Repos one at a time, defining the Folder(s) to be scanned.
Choose Branches where Sysdig should run a Pull Request evaluation check. Define the branch using a regular expression. You can use .* to check PRs on all branches, or main to check only the main branch.
Repeat as needed and click Save. The system automatically checks that valid folder names have been entered.
Review the Status on the Integrations List page, which shows any issues in the connection between Sysdig Secure and the Sysdig Bitbucket application:
- Active: Everything is working as expected.
- Not Installed: The Sysdig Bitbucket App is not installed.
- Suspended: The Sysdig Bitbucket App is suspended and needs to be resumed.
See also the Additional Options.
GitLab
Prerequisites in GitLab UI:
- Log in to your GitLab organization and create a designated account for Sysdig Secure.
- Configure the account’s access for Projects.
- Create a unique personal access token, setting:
  - A unique name for the token
  - A token expiration date
  - The following scopes for the token: api, read_repository, write_repository
- Copy the token value.
Add the Integration
From the Git Integrations List page, choose GitLab and:
Enter an Integration Name and the Token from the prerequisite step.

Click Test Connection, then click Add.
The Manage Integration page is displayed.
Click Add Sources on the new integration listing.
Add Repos one at a time, defining the Folder(s) to be scanned.
Choose Branches where Sysdig should run a Pull Request evaluation check. Define the branch using a regular expression. You can use .* to check PRs on all branches, or main to check only the main branch.
The system automatically checks that valid folder names have been entered.
Review the Status on the Integrations List page.
See also the Additional Options.
Azure DevOps
Prerequisites in Azure DevOps UI
Log in to your Azure DevOps organization and create a designated account for Sysdig Secure for cloud.
Account Access: Configure the account’s access for Repositories and Projects.
Account Subscription Permissions: Assign View, Edit, and Delete subscriptions permissions to the account.
HINT: To grant the required subscription access using the Azure CLI:
- ServiceHooks Namespace: Run az devops security permission namespace list --output table and record the ServiceHooks namespace ID.
- PublisherSecurity Token: Run az devops security permission update --allow-bit 7 --namespace-id {{ServiceHooks namespace Id}} --subject {{accountUserEmail}} --token PublisherSecurity --output table
- Personal Access Token: Retrieve a unique personal access token and record the token value.
  - Token Scope: Set to Custom Defined.
  - Code Scope: Choose Read, Write, and Status permissions.
  - Extensions Scope: Choose Read permission.
- For additional help, see the Azure DevOps documentation.
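The two az commands in the hint can be chained in a short script. This is a sketch only: the --query JMESPath filter and the name/namespaceId field names are assumptions about the az devops output shape, and the account email is a placeholder; verify both against your CLI version before use.

```shell
# Sketch: look up the ServiceHooks namespace ID, then grant the publisher permission.
# ASSUMPTION: the namespace list exposes 'name' and 'namespaceId' fields.
ACCOUNT_EMAIL="sysdig-bot@example.com"   # placeholder for the designated account

NAMESPACE_ID=$(az devops security permission namespace list \
  --query "[?name=='ServiceHooks'].namespaceId | [0]" --output tsv)

az devops security permission update \
  --allow-bit 7 \
  --namespace-id "$NAMESPACE_ID" \
  --subject "$ACCOUNT_EMAIL" \
  --token PublisherSecurity \
  --output table
```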
Add the Integration
From the Git Integrations List page, choose Azure DevOps and:
Enter an Integration Name, Organization Name, and the Personal Access Token from the prerequisite step.

Click Test Connection, then click Add.
The Manage Integration page is displayed.
Click Add Sources on the new integration listing.
Add Repos one at a time, defining the Folder(s) to be scanned.
Choose Branches where Sysdig should run a Pull Request evaluation check (below). Define the branch using a regular expression. You can use .* to check PRs on all branches, or main to check only the main branch.
The system automatically checks that valid folder names have been entered.
Review the Status on the Integrations List page.
See also the Additional Options.
Additional Options
From the Integrations List page, you can use the three-dot menu for additional options on an integration.

Start Code Scan Manually
Use this option to trigger a scan before the default 24-hour time is reached.
Delete an Integration
This action deletes associated sources as well.
Pull Request Policy Evaluation
For the branches defined in Git Sources, Sysdig will run a Pull Request Policy Evaluation check. The check scans the Infrastructure-as-Code files in the pull request and identifies violations against the predefined policies.
The result of the check contains the list of violations, their severity, and the list of failed resources per file.
Example output for GitHub:

11.2 - IaC Policy Controls
Introduction
When running a Github integration to check the compliance of a pull request during development, Sysdig will run the controls from the following policies, depending on the resource type.
You can navigate in the product to Policies > CSPM Policies to find the list of requirements and controls for each policy.
Kubernetes Workloads
Amazon Web Services
11.3 - IaC Supportability Matrix
At this time, Sysdig’s Infrastructure as Code (IaC) Git-integrated scanning supports the following resource and source types:
12 - Scanning (Legacy)
Two Types of Scanning
As of May 2021, Sysdig Secure includes two different types of scanning for vulnerabilities:
Image scanning: This includes all prior scanning tools, policies, alerts, etc. in Sysdig Secure and focuses on scanning the container images in an environment.
Host scanning (New): This feature, deployed via the Node Analyzer, scans the host operating system packages, whether OS (e.g., rpm, dpkg) or non-OS (e.g., Java packages, Ruby gems).
Host scanning documentation is self-contained; the rest of the topics in
this Scanning module concern image scanning.
How Sysdig Image Scanning Works
Image scanning allows you to scan container images for vulnerabilities,
secrets, license violations, and more. It can be used as part of a
development build process, can validate images added to your container
registry, and can scan the images used by running containers on your
infrastructure.
The basic set up for image scanning is simple: provide registry
information where your images are stored, trigger a scan, and review the
results.
Behind the scenes:
Image contents are analyzed.
The contents report is evaluated against multiple vulnerability
databases.
It is then compared against default or user-defined policies.
Results are reported, both in Sysdig Secure and (if applicable) in a
developer’s external CI tool.
Prerequisites
Network and port requirements
Image Scanning requires access to an external vulnerability feed. To
ensure proper access to the latest definitions, refer to the
Network and Port
requirements.
Whitelisted IP for image scanning requests
Image scanning requests and Splunk event forwards both originate
from 18.209.200.129. To enable Sysdig to scan private
repositories, your firewall will need to allow inbound requests from
this IP address.
Image Contents Reported
The analysis generates a detailed report of the image contents,
including:
Vulnerability Databases Used
Sysdig Secure continuously checks against a wide range of vulnerability
databases, updating the Runtime scan results with any newly detected
CVEs.
The current database list includes:
Use Cases
As an organization, you define what is an acceptable, secure, reliable
image running in your environment. Image scanning for the development
pipeline follows a somewhat different flow than for security personnel.
Scanning During Container Development (DevOps)
Use image scanning as part of your development
pipeline, to check for best
practices, vulnerabilities, and sensitive content.
To begin:
Add Registry: Add a registry where your images are stored, along
with the credentials necessary to access them.
Integrate CI Tool: Integrate image scanning with an external CI
tool, using the Jenkins plugin or building your own integration from
a SysdigLabs solution.
Scan Image(s): The plugin or CLI integration triggers the image
scanning process. Failed builds will be stopped, if so configured.
Review Results (in CI tool): Developers can analyze the results
in the integrated CI tool (Jenkins).
(Optionally: add policies or refine the default policies to suit
your needs, assign policies to particular images or tags, and
configure alerts and notifications.)
Scanning Running Containers (Security Personnel)
Security personnel use image scanning to monitor which containers are
running, what their scan status is, and whether new vulnerabilities are
present in their images.
Add Registry: Add a registry where your images are stored, along
with the credentials necessary to access them.
Scan Image(s): Trigger an image scan with the node image
analyzer or manually (one-by-one).
Review Results (in Sysdig Secure): Security personnel can
analyze scan results in the Sysdig Secure image scanning UI.
(Optionally: add policies or refine the default policies to suit
your needs, assign policies to particular images or tags, and
configure alerts and notifications.)
Image Scanning requires access to an external vulnerability feed. To
ensure proper access to the latest definitions, refer to the Network
and Port requirements.
Add Scanning to Container Registries
In some cases, it is possible to integrate image scanning directly into
a container registry and automatically trigger an event or action every
time a new container is pushed into the registry. This feature is
currently supported for the following container registry:
12.1 - Integrate with CI/CD Tools
You have the option to use image scanning as part of your development
pipeline, to check for best practices, vulnerabilities, and sensitive
content.
Review the Types of Secure
Integrations table for more
context. The CI/CD Tools column lists the various options and their
levels of support.
Inline Scanning
Sysdig provides a stand-alone inline scanner, a containerized
application that can perform local analysis on container images (both
pulling from registries or locally built) and post the result of the
analysis to Sysdig Secure.
Other scanning integrations (i.e. the Jenkins CI/CD
plugin) make use of this
component under the hood to provide local image analysis capabilities,
but it can also be used as a stand-alone component for custom pipelines,
or simply as a way to one-shot scan a container from any host.
The Sysdig inline scanner works as an independent container, without any
Docker dependency (it can be run using other container runtimes), and
can analyze images using different image formats and sources.
This feature has a variety of use cases and benefits:
- Images don’t leave their own environment
- SaaS users don’t send images and proprietary code to Sysdig’s SaaS service
- Registries don’t have to be exposed
- Images can be scanned in parallel more easily
- Images can be scanned before they hit the registry
Prerequisites
At a minimum, the inline scanner requires:
Sysdig Secure v2.5.0+ (with API token)
Internet access to post results to Sysdig Secure (SaaS or On-Prem)
Ability to run a container
Note: Using the inline_script.sh is deprecated. This script uses a different set of parameters; for more information about porting the parameters to the inline scanner container, see changes-from-v1xx.
Implement Inline Scanning
Quick Start
You can scan an image from any host by executing:
docker run --rm quay.io/sysdig/secure-inline-scan:2 <image_name> --sysdig-token <my_API_token> --sysdig-url <secure_backend_endpoint>
…
…
Status is pass
View the full result @ https://secure.sysdig.com/#/scanning/scan-results/docker.io%2Falpine%3A3.12.1/sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a/summaries
A PDF report of the scan results can be generated with the -r option.
Upgrading
You can rerun this Docker command to upgrade to the latest inline scanning component at any time.
Common Parameters
Image name (mandatory): Container image to be analyzed, following the usual registry/repo:tag format, e.g. docker.io/alpine:3.12.1. If no tag is specified, latest is used. Digest format is also supported, e.g. docker.io/alpine@sha256:c0e9560cda118f9ec6...
--sysdig-token (mandatory): Sysdig API token, visible from the User Profile page.
--sysdig-url: Not required for Sysdig Secure SaaS in the us-east region. For any other case, you must adjust this parameter; e.g. for SaaS us-west it is --sysdig-url https://us2.app.sysdig.com. See also SaaS Regions and IP Ranges.
Quick Help and Parameter List from -h
Display a quick help and parameter description from the image itself by executing:
docker run --rm quay.io/sysdig/secure-inline-scan:2 -h
Sample output:
$ docker run quay.io/sysdig/secure-inline-scan:2 -h
Sysdig Inline Analyzer -- USAGE
Container for performing analysis on local container images, utilizing the Sysdig analyzer subsystem.
After image is analyzed, the resulting image archive is sent to a remote Sysdig installation
using the -s <URL> option. This allows inline analysis data to be persisted & utilized for reporting.
Usage: sysdig-inline-scan.sh -k <API Token> [ OPTIONS ] <FULL_IMAGE_TAG>
== GLOBAL OPTIONS ==
-k <TEXT> [required] API token for Sysdig Scanning auth
(ex: -k 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx')
Alternatively, set environment variable SYSDIG_API_TOKEN
Alias: --sysdig-token
-s <URL> [optional] Sysdig Secure URL (ex: -s 'https://secure-sysdig.svc.cluster.local').
If not specified, it will default to Sysdig Secure SaaS URL (https://secure.sysdig.com).
Alias: --sysdig-url
--sysdig-skip-tls
[optional] skip tls verification when calling secure endpoints
-o [optional] Use this flag if targeting onprem sysdig installation
Alias: --on-prem
-a <TEXT> [optional] Add annotations (ex: -a 'key=value,key=value')
Alias: --annotations
-f <PATH> [optional] Path to Dockerfile (ex: -f ./Dockerfile)
Alias: --dockerfile
-m <PATH> [optional] Path to Docker image manifest (ex: -m ./manifest.json)
Alias: --manifest
-i <TEXT> [optional] Specify image ID used within Sysdig (ex: -i '<64 hex characters>')
Alias: --image-id
-d <SHA256> [optional] Specify image digest (ex: -d 'sha256:<64 hex characters>')
Alias: --digest
-c [optional] Remove the image from Sysdig Secure if the scan fails
-r <PATH> [optional] Download scan result pdf in a specified container-local directory (ex: -r /staging/reports)
This directory needs to be previously mounted from the host to persist the data
Alias: --report-folder
-v [optional] Increase verbosity
Alias: --verbose
--format <FORMAT>
[optional] The only valid format is JSON. It sets the output format to a valid JSON which
can be processed in an automated way.
--write-json <PATH>
Write the final JSON report to <PATH>.
--time-profile
Output information about the time elapsed in the different stages of the scan process
--malware-scan-enable
Enables malware scan on container.
WARNING: it's generally a very slow process.
--malware-scan-db-path <PATH>
Local container path with updated ClamAV database.
Will be used to call clamscan command as "clamscan --database=<PATH> ..."
--malware-scan-output <DIR-PATH>
Save JSON output of scan to path. Will be saved to <PATH>/malware_findings.json.
Output is a JSON array of {"path": "...", "signature": "..."} objects.
Note: path should exists and should be a directory.
--malware-fail-fast true|false
Fails immediately when a malware is found, skipping sending analysis
results to Secure Backend.
Default: true
--malware-exclude REGEX
Exclude dirs (and its content) and files which match the given regex.
Arguments are passed to ClamAV --exclude AND --exclude-dir options, please
refer to its official documentation.
(https://www.clamav.net/documents/clam-antivirus-user-manual)
== IMAGE SOURCE OPTIONS ==
[default] If --storage-type is not specified, pull container image from registry.
== REGISTRY AUTHENTICATION ==
When pulling from the registry,
the credentials in the config file located at /config/auth.json will be
used (so you can mount a docker config.json file, for example).
Legacy .dockercfg file is also supported.
Alternatively, you can provide authentication credentials with:
--registry-auth-basic username:password Authenticate using the provided <username> and <password>
--registry-auth-token <TOKEN> Authenticate using this Bearer <Token>
--registry-auth-file <PATH> Path to config.json or auth.json file with registry credentials
--registry-auth-dockercfg <PATH> Path to legacy .dockercfg file with registry credentials
== TLS OPTIONS ==
-n Skip TLS certificate validation when pulling image
Alias: --registry-skip-tls
--storage-type <SOURCE-TYPE>
Where <SOURCE-TYPE> can be one of:
docker-daemon Get the image from the Docker daemon.
Requires /var/run/docker.sock to be mounted in the container
cri-o Get the image from containers-storage (CRI-O and others).
Requires mounting /etc/containers/storage.conf and /var/lib/containers
docker-archive Image is provided as a Docker .tar file (from docker save).
Tarfile must be mounted inside the container and path set with --storage-path
oci-archive Image is provided as a OCI image tar file.
Tarfile must be mounted inside the container and path set with --storage-path
oci-dir Image is provided as a OCI image, untared.
The directory must be mounted inside the container and path set with --storage-path
--storage-path <PATH> Specifies the path to the source of the image to scan, that has to be
mounted inside the container, it is required if --storage-type is set to
docker-archive, oci-archive or oci-dir
== EXIT CODES ==
0 Scan result "pass"
1 Scan result "fail"
2 Wrong parameters
3 Error during execution
The inline scanner can pull the target image from different sources.
Each case requires a different set of parameters and/or host mounts, as
described in the relevant Execution
Examples.
Output Options
When the inline scanner has completed the image analysis, it sends the
metadata to the Sysdig Secure backend to perform the policy
evaluation step. The scan
results can then be consumed inline or by accessing the Secure UI.
Container Exit Code
The container exit codes are:
0 - image passed policy evaluation
1 - image failed policy evaluation
2 - incorrect parameters (i.e. no API token)
3 - other execution errors
Use the exit code, for example, to decide whether to abort the CI/CD
pipeline.
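For instance, a shell gate in the pipeline can branch on these documented codes. The gate_on_scan wrapper below is a hedged sketch (the function name is made up); in practice you would pass it the docker run invocation shown in the Quick Start:

```shell
# Sketch: translate the scanner's documented exit codes into a CI decision.
# Pass the real scan command as arguments, e.g.:
#   gate_on_scan docker run --rm quay.io/sysdig/secure-inline-scan:2 ...
gate_on_scan() {
  "$@"
  code=$?
  case "$code" in
    0) echo "image passed policy evaluation" ;;
    1) echo "image failed policy evaluation - aborting pipeline" ;;
    2) echo "incorrect parameters (e.g. missing API token)" ;;
    *) echo "execution error (code $code)" ;;
  esac
  return "$code"
}
```

A pipeline step could then run `gate_on_scan docker run ... || exit 1` to stop the build on a failed evaluation.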
Standard Output
The standard output produces a human-readable output including:
Image information (digest, image ID, etc)
Evaluation results, including the final pass / fail decision
A link to visualize the complete scan report using the Sysdig UI
If you prefer JSON output, simply pass --format JSON as a parameter.
JSON Output
You can write a JSON report, while keeping the human-readable output in
the console, by adding the following flag:
--write-json /out/report.json
Remember to bind mount the output directory from the host in the
container and provide the corresponding write permissions.
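For example (a sketch; the image name, token variable, and output directory are placeholders):

```shell
# Prepare a writable host directory and bind-mount it for the JSON report.
mkdir -p "$PWD/scan-output"
chmod o+w "$PWD/scan-output"   # the scanner may not run as root inside the container (assumption)

docker run --rm \
  -v "$PWD/scan-output:/out" \
  quay.io/sysdig/secure-inline-scan:2 \
  --sysdig-token "$SYSDIG_API_TOKEN" \
  --write-json /out/report.json \
  docker.io/alpine:3.12.1
```

After the run, the human-readable summary appears on the console and the machine-readable report is left at scan-output/report.json on the host.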
PDF Report
You can also download the scan result PDF in a specified container-local
directory. Remember to mount this directory from the host in the
container to retain the data.
--report-folder /output
Execution Examples
Docker Daemon
Scan a local image build; mounting the host Docker socket is required.
You might need to include the Docker options -u root and --privileged, depending on the access permissions for /var/run/docker.sock.
docker build -t <image-name> .
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
quay.io/sysdig/secure-inline-scan:2 \
--sysdig-url <omitted> \
--sysdig-token <omitted> \
--storage-type docker-daemon \
--storage-path /var/run/docker.sock \
<image-name>
Docker Archive
Trigger the scan, assuming the image is available as an image tarball at image.tar. For example, the command docker save <image-name> -o image.tar creates a tarball for <image-name>. Mount this file inside the container:
docker run --rm \
-v ${PWD}/image.tar:/tmp/image.tar \
quay.io/sysdig/secure-inline-scan:2 \
--sysdig-url <omitted> \
--sysdig-token <omitted> \
--storage-type docker-archive \
--storage-path /tmp/image.tar \
<image-name>
OCI Archive
Trigger the scan, assuming the image is available as an OCI tarball at oci-image.tar. Mount this file inside the container:
docker run --rm \
-v ${PWD}/oci-image.tar:/tmp/oci-image.tar \
quay.io/sysdig/secure-inline-scan:2 \
--sysdig-url <omitted> \
--sysdig-token <omitted> \
--storage-type oci-archive \
--storage-path /tmp/oci-image.tar \
<image-name>
OCI Layout
Trigger the scan, assuming the image is available in OCI format in the directory ./oci-image. Mount the OCI directory inside the container:
docker run --rm \
-v ${PWD}/oci-image:/tmp/oci-image \
quay.io/sysdig/secure-inline-scan:2 \
--sysdig-url <omitted> \
--sysdig-token <omitted> \
--storage-type oci-dir \
--storage-path /tmp/oci-image \
<image-name>
Container Storage: Build w/ Buildah & Scan w/ Podman
Build an image using Buildah from a Dockerfile, and perform a scan.
You might need to include the Docker options -u root and --privileged, depending on the access permissions for /var/lib/containers.
Mount the container storage folder inside the container:
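Combining the steps above with the --storage-type cri-o requirements from the -h output (mounting /etc/containers/storage.conf and /var/lib/containers), a hedged sketch of the commands:

```shell
# Sketch only: build with Buildah, then scan from containers-storage.
# Verify the mounts against your containers-storage configuration.
buildah bud -t <image-name> .

docker run --rm \
    -v /etc/containers/storage.conf:/etc/containers/storage.conf \
    -v /var/lib/containers:/var/lib/containers \
    quay.io/sysdig/secure-inline-scan:2 \
    --sysdig-url <omitted> \
    --sysdig-token <omitted> \
    --storage-type cri-o \
    <image-name>
```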