Install Shield on Linux Nodes
You use the `shield` chart to install the Host and Cluster Shield components in your Kubernetes environment. In addition to providing instructions for a fresh installation of the `shield` chart, this topic guides you through migrating from previously installed Sysdig components deployed with the `sysdig-deploy` chart to the Host and Cluster Shield components.
The `shield` chart deploys the Cluster Shield as a Deployment and the Host Shield as a DaemonSet in your Kubernetes environment.
Prerequisites
- `kubectl` installed
- Helm v3.10 or later
- Your agent access key
- Sysdig Secure Endpoint for your Sysdig SaaS region
- Review Understand Agent Drivers
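Before proceeding, you can sanity-check the client-side tooling. A minimal sketch, assuming `kubectl` and `helm` are expected on your PATH; the version comparison relies on `sort -V`:

```shell
# version_ge succeeds when $1 is greater than or equal to $2 (semver-aware via sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Helm must be v3.10 or later.
helm_version="$(helm version --template '{{.Version}}' 2>/dev/null | tr -d v)"
if version_ge "${helm_version:-0}" "3.10.0"; then
  echo "Helm ${helm_version} meets the v3.10 minimum"
else
  echo "Helm v3.10 or later is required (found: ${helm_version:-not installed})"
fi

# kubectl only needs to be present.
kubectl version --client 2>/dev/null || echo "kubectl not found on PATH"
```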
System Requirements
- A supported distribution or Kubernetes platform.
- Ports 443 and 6443 open for outbound traffic.
- Port `12000` open for traffic within the cluster (same namespace) for Kubernetes Security Posture Management (KSPM).
- Port `4222` open for traffic within the cluster (same namespace) for Container Vulnerability Management.
- Linux Kernel 3.10 or later.
Kubernetes Platforms
The supported Kubernetes platforms are:
- Kubernetes (Vanilla)
- Amazon Elastic Kubernetes Service (EKS)
  - Note: AWS Fargate is not supported on EKS
- Google Kubernetes Engine (GKE)
- Azure Kubernetes Service (AKS)
- Red Hat OpenShift
- IBM Kubernetes Service (IKS)
- RKE Government (RKE2)
Linux Distributions
The supported Linux distributions are:
- Debian
- Ubuntu 18.04 and above
- Ubuntu (Amazon)
- CentOS 7 and above
- Alma Linux
- Rocky Linux
- Red Hat Enterprise Linux (RHEL) 7 and above
- SUSE Linux Enterprise Server
- RHEL CoreOS (RHCOS)
- Fedora
- Fedora CoreOS
- Linux Mint
- Amazon Linux (Original)
- Amazon Linux 2 (AL2)
- Amazon Linux 2023 (AL2023)
- Amazon Bottlerocket
- Google Container Optimized OS (COS)
- Oracle Linux (UEK)
- Oracle Linux (RHCK)
- Azure Linux 3
- EulerOS
- ArchLinux
- Flatcar
- Alpine Linux 3.20 and above
Additional Linux distributions may be supported depending on the feature required. For more details, contact Sysdig Support.
CPU Architectures
The supported CPU architectures are:
- X86
- ARM
- ppc64le (IBM Power)
- s390x (zLinux)
Coverage Map
Platform | Threat Detection and Response | Vulnerability Management | Posture Management |
---|---|---|---|
EKS | ✅ | ✅ | ✅ |
EKS Fargate | ❌ | ❌ | ❌ |
GKE | ✅ | ✅ | ✅ |
GKE Autopilot | ✅ | ✅ | ✅ |
AKS | ✅ | ✅ | ✅ |
IKS | ✅ | ✅ | ✅ |
Kubernetes Vanilla | ✅ | ✅ | ✅ |
Mirantis (MKE) | ✅ | ✅ | ✅ |
OpenShift (OCP4) | ✅ | ✅ | ✅ |
Rancher (RKE2) | ✅ | ✅ | ✅ |
Migrate to the Shield Chart
Sysdig introduces a new chart, `shield`, to install the Cluster Shield and Host Shield components. If you have previously installed Sysdig components in your cluster or are considering a fresh installation, use the `shield` chart instead of `sysdig-deploy`.
Since Host and Cluster Shield replace all the components previously deployed using the `sysdig-deploy` chart, uninstall any existing installations before proceeding. This prevents duplicate entity errors.
Before uninstalling, make sure to take a backup of your Sysdig deployment to preserve configurations and data.
helm get values {RELEASE_NAME} -n {NAMESPACE} > sysdig-agent-backup.yaml
To remove an existing installation, run the following command:
helm uninstall sysdig-agent --namespace sysdig-agent
If you are doing a fresh installation, you can ignore this requirement.
Install Using Helm
Configuration File
To install Host Shield and Cluster Shield, use the following `values.yaml` file:
cluster_config:
  name: <your-cluster-name> # The name of the cluster
sysdig_endpoint:
  region: <your-sysdig-region> # Sysdig Secure instance location region. Defaults to `custom` if not specified.
  access_key: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx # Access key for Sysdig Secure instance
features:
  kubernetes_metadata:
    enabled: true # Enable Kubernetes metadata collection for the cluster
  posture:
    host_posture:
      enabled: true # Enable host posture assessment
    cluster_posture:
      enabled: true # Enable cluster posture assessment
  vulnerability_management:
    host_vulnerability_management:
      enabled: true # Enable host vulnerability management
    container_vulnerability_management:
      enabled: true # Enable container vulnerability management
    in_use:
      enabled: true # Enable retrieval of in-use packages
  detections:
    drift_control:
      enabled: true # Enable Drift Detection
    malware_control:
      enabled: true # Enable malware control detection
    ml_policies:
      enabled: true # Enable machine learning policies
    kubernetes_audit:
      enabled: true # Enable Kubernetes audit logging
  investigations:
    activity_audit:
      enabled: true # Enable activity audit
    live_logs:
      enabled: true # Enable Kubernetes live logs
    captures:
      enabled: true # Enable System captures
  respond:
    response_actions:
      enabled: true # Enable Response Actions
host:
  driver: universal_ebpf # Driver for the host agent (Accepted Values: kmod, legacy_ebpf, universal_ebpf (Linux Kernel ≥ 5.8))
Google Kubernetes Engine (GKE) Autopilot
To deploy Host Shield and Cluster Shield on GKE Autopilot, add the following configuration to your `values.yaml` file:
cluster_config:
  cluster_type: gke-autopilot
The `shield` chart 1.1.0 supports GKE Autopilot version `1.32.2-gke.1652000` and later.
Custom Registries and SHA256 in GKE Autopilot
This section explains how to work with custom registries, SHA256 digests, and the Google allow list when deploying Sysdig on GKE Autopilot. It also provides a list of approved versions and SHA256 digests.
Why This Matters
GKE Autopilot allows workloads only from approved images, verified by their SHA256 digest.
When using a custom registry, you must mirror the public image (sysdig/agent-slim) without altering the digest so it matches Google’s allow list.
Mirror public image to custom registry
To mirror the public `sysdig/agent-slim` image to your custom registry without altering the digest, use skopeo with the following command:
skopeo copy --multi-arch all --preserve-digests docker://quay.io/sysdig/agent-slim:14.1.1 docker://company-registry/sysdig/agent-slim:14.1.1
Set custom registry on Shield Chart
You can use the following table or run the commands below to retrieve the proper SHA256 digest:
docker pull quay.io/sysdig/agent-slim:14.1.1
docker inspect quay.io/sysdig/agent-slim:14.1.1 --format='{{index .RepoDigests 0}}'
Then update the `host.image` section in your `values.yaml`:
host:
  image:
    registry: your_company_registry
    repository: sysdig
    kmodule_name: agent-kmodule
    shield_name: agent-slim
    tag: sha256:1111112222233333
List of Approved Versions and SHA256 Digests
This table is updated when Google adds new SHA256 digests to the allow list. There may be a delay of ~10 business days after a new Sysdig release before its SHA is approved.
Sysdig Shield Version | SHA256 Digest | Approval Date by Google |
---|---|---|
13.9.1 | sha256:14860d181a8b712c4150bb59e3ba0ff4be08959e2c45376b32c8eb7ff70461f9 | 2025-07-11 |
13.9.2 | sha256:0dcdb6d70bab60dae4bf5f70c338f2feb9daeba514f1b8ad513ed24724c2a04d | 2025-07-11 |
14.0.0 | sha256:9d668dc0d3fc3db783bdf4ce5c4755c355ff7b3b401b7d0ad4c087d05ba270f9 | 2025-07-11 |
14.0.1 | sha256:b1f5bf4677632c715e9a5cde9af8d36dd66f5e79c80aadfd4b74dc5cc310a570 | 2025-07-11 |
14.1.0 | sha256:2c6401018cfe3f5fcbd0713b64b096c38d47de1b5cd6c11de4691912752263fc | 2025-07-24 |
14.1.1 | sha256:36366b082d8d45dfe44d995830a1c0b0293cb9df9e55c6ab8c389e800596c743 | 2025-08-07 |
`custom` Region Configuration
If you are using a Sysdig Secure instance in a custom region, use the following configuration:
cluster_config:
  name: <your-cluster-name> # The name of the cluster
sysdig_endpoint:
  region: <your-sysdig-region> # Sysdig Secure instance location region. Defaults to `custom` if not specified.
  access_key: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx # Access key for Sysdig Secure instance
  # When the region is `custom`, the following configuration is required.
  api_url: <your-api-url> # Custom Sysdig Secure API URL.
  collector:
    host: <your-collector-hostname> # Custom Sysdig Secure collector hostname.
    port: <your-collector-port> # Custom Sysdig Secure collector port.
Installation
helm repo add sysdig https://charts.sysdig.com
helm repo update
helm upgrade --install --atomic --create-namespace \
-n sysdig \
-f values.yaml \
shield \
sysdig/shield
Parameters:
- `http_proxy`: Specifies the URL for the HTTP proxy server.
- `https_proxy`: Specifies the URL for the HTTPS proxy server.
- `no_proxy`: A comma-separated list of hosts or domains to bypass the proxy. For example: `localhost,127.0.0.1,.my-cluster.local`
Feature Management
Feature management in Sysdig Host and Cluster Shield is handled through a `values.yaml` configuration file, where you can enable or disable specific features such as posture, vulnerability management, admission control, and detection capabilities. Each feature has associated options, allowing customization to fit your environment's security and compliance needs.
For example, you can enable host scanning with the following snippet:
features:
  vulnerability_management:
    host_vulnerability_management:
      enabled: true
This setup activates host vulnerability scanning, allowing you to identify and address potential security risks on your cluster’s nodes.
Additional Features
To enable the following additional features, edit the `values.yaml` file:
Admission Controller
Add the following configuration to your `admission_control` section under `features`.
features:
  admission_control:
    # Enable Admission Controller
    enabled: true
    container_vulnerability_management:
      # Enable Container Vulnerability Management on Admission Controller
      enabled: true
    posture:
      # Enable Posture on Admission Controller
      enabled: true
See Admission Controller for more details.
Network Security
Add the following configuration to your existing `investigations` section under `features`.
See Network for details on this feature.
features:
  investigations:
    network_security:
      enabled: true
Rapid Response
Add the following configuration to your existing `respond` section under `features`.
features:
  respond:
    rapid_response:
      enabled: true
      password: <password>
Later, you can use the password you define here to Start Rapid Response.
See Rapid Response for details on this feature.
Further customizations are available in the Configuration Library.
Proxy Settings
If your environment requires internet access through a proxy server, you can configure proxy settings in the `values.yaml` file. These settings ensure that Sysdig Host and Cluster Shield can communicate with Sysdig.
Add the following configuration under the proxy section:
proxy:
  http_proxy: http://customer-proxy
  https_proxy: http://customer-proxy
  no_proxy: <comma-separated-list-of-hosts-or-domains>
Advanced Settings
You can use the `additional_settings` section to configure advanced debugging options, such as log levels, syscall filtering, and DNS detection. Use these settings with caution, and contact Sysdig Support for guidance.
For detailed information on configuring the `shield` chart, see shield.
Setting Logs
The `console_priority` setting defines the minimum log level for messages displayed in the console.
host:
  additional_settings:
    log:
      console_priority: warning # Accepted values: none, error, warning, info, debug
Access Key Using a Kubernetes Secret
You can use a Kubernetes Secret to pass the Sysdig access key. Store the key in a Secret and reference it in the `values.yaml` file using the `access_key_existing_secret` field.
Create a Kubernetes Secret
Run the following command to create a Secret named `my-secret` in the `sysdig` namespace.
kubectl create secret generic my-secret \
--type=Opaque \
--from-literal=access-key=<access-key> \
-n sysdig
Replace `<access-key>` with your Sysdig access key.
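Alternatively, the same Secret can be declared as a manifest and applied with `kubectl apply -f`. A minimal sketch, assuming the same name and namespace as the command above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: sysdig
type: Opaque
stringData:
  access-key: <access-key> # Replace with your Sysdig access key; Kubernetes base64-encodes it into .data.access-key
```

Using `stringData` lets you write the key in plain text; Kubernetes performs the base64 encoding for you.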
Configure Sysdig Access Key Using a Kubernetes Secret
In your `values.yaml`, configure the access key using the `access_key_existing_secret` parameter:
sysdig_endpoint:
  # Sysdig access key secret
  access_key_existing_secret: my-secret
Cluster Shield looks for the Kubernetes Secret `my-secret` and extracts the access key stored under the `access-key` key.
Dynamic Syscall Filtering
The Dynamic Syscall Filtering setting improves performance by monitoring only the system calls (syscalls) required by active components, such as plugins, features, and security policies.
Configuration
Dynamic Syscall Filtering is enabled by default.
host:
  additional_settings:
    feature_syscalls_filtering:
      enabled: true # Set to 'false' to disable
When enabled, `shield` automatically ignores system calls that are not needed. This reduces CPU and memory usage, especially in lightweight or high-load environments.
Further customizations are available in the Configuration Library.
Add Specific System Calls
Use `add_events_by_type` to ensure `shield` always monitors specific system calls, even if they are not currently required by any feature.
host:
  additional_settings:
    feature_syscalls_filtering:
      enabled: true # Set to false to disable the feature.
      add_events_by_type: # List the specific syscalls of interest (optional)
        - recvmsg
        - process_vm_readv
skip_events_by_type
If the `skip_events_by_type` setting is used, it takes precedence over Dynamic Syscall Filtering. If a syscall is excluded by `skip_events_by_type` but is required by an active feature, `shield` logs a warning that includes the syscall name and the affected feature (such as policies or plugins). This helps troubleshoot unexpected behavior or misconfiguration.
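As a hedged sketch of the configuration, the following excludes two syscalls from collection. The placement is assumed to mirror `add_events_by_type`, and the syscall names are only examples; confirm the exact key location in the Configuration Library:

```yaml
host:
  additional_settings:
    feature_syscalls_filtering:
      enabled: true
      skip_events_by_type: # These syscalls are never collected, even if a feature requires them (a warning is logged).
        - recvmsg # Example value only
        - sendmsg # Example value only
```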
enrich_with_host_ips
Use `enrich_with_host_ips` to add the host’s IPv4 and IPv6 addresses to Threat Events. When disabled, no IP enrichment is performed.
host:
  additional_settings:
    enrich_with_host_ips:
      enabled: true # Set to false to disable.
Captures
When using the Captures feature:
- Only system calls required by active features are recorded by default.
- Capture files may not include all system call events unless configured otherwise.

To capture all system call events, either:
- Disable Dynamic Syscall Filtering, or
- Explicitly list the required system calls using `add_events_by_type`.
Recommendations
- Sysdig recommends that you keep Dynamic Syscall Filtering enabled for performance improvements.
- Use `add_events_by_type` to include system calls required for custom workflows or debugging.
- Disable filtering only when full system call visibility is needed (for example, when creating complete capture files).
Backoff on Proxy
The `backoff_on_proxy_error` setting controls how the agent retries connections when a proxy error occurs. Depending on the type of connection failure, the agent either retries every second or uses exponential backoff with a capped delay.
Configuration
host:
  additional_settings:
    backoff_on_proxy_error: true
When enabled, the agent uses exponential backoff to manage retry intervals. The maximum backoff interval is capped to prevent unbounded delays. This approach helps avoid overwhelming network resources while ensuring connection recovery is attempted in a controlled manner.
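The retry pattern can be sketched in a few lines. This is an illustration of capped exponential backoff in general, not the agent’s actual implementation, and the one-second initial delay and sixty-second cap are example values:

```shell
# Illustration only: each retry doubles the delay, capped at max_delay.
delay=1
max_delay=60
for attempt in 1 2 3 4 5 6 7 8; do
  echo "retry ${attempt} after ${delay}s"
  delay=$(( delay * 2 ))
  if [ "$delay" -gt "$max_delay" ]; then delay=$max_delay; fi
done
```

With these values the delay grows 1, 2, 4, 8, 16, 32 seconds and then stays at the 60-second cap, so retries keep happening without the interval growing unboundedly.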
Support for Custom Container Runtimes
Custom Container Runtime support (referred to as Custom Container) enables you to monitor workloads that run outside of standard container runtimes such as containerd, CRI-O, and Docker Engine, as well as other runtimes commonly used by industry-standard container orchestrators.
Using the Custom Container feature, you can match processes from containers running on your custom container runtimes through either of the following mechanisms, or both:
- cgroup matching
- Environment matching

Metadata can be assigned to containers matching the cgroup or environment rules, enabling the agent to enrich the data it sends back to the collector for those containers.
Configuration
Custom Container support is disabled by default. To enable it, add the following to your `values.yaml`:
host:
  additional_settings:
    custom_container:
      enabled: true
      limit: 50 # Maximum number of custom containers per host (default: 50, max: 150)
      max_id_length: 12 # Maximum container ID length (max: 100)
Match by cgroup
You can use cgroups to map a process to a container. Run the following command to inspect a process’s cgroup:
cat /proc/<pid>/cgroup
An example of a cgroup path is `freezer:/docker/custom/xxxxxxx9992392abcdefxxxx.scope`
Example
host:
  additional_settings:
    custom_container:
      match:
        cgroup: ^/custom/(.*)\.scope$ # Regex with a capture group; matches cgroup paths under /custom/.
      container:
        id: <cgroup:1> # ":1" is the first capture group. For example, `/custom/xxxxxxx9992392abcdefxxxx.scope` captures `xxxxxxx9992392abcdefxxxx`.
        name: myapp # Plain string; `<` and `>` are reserved for template parameters.
        image: custom-app # Plain string.
The cgroup value supports regex patterns with capture groups.
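To see how a capture group resolves, you can apply the same kind of regex by hand. A sketch using `sed` with an illustrative pattern (`^/custom/(.*)\.scope$`), not the agent’s internal code:

```shell
# Extract the container ID the way a <cgroup:1> capture group would.
path="/custom/xxxxxxx9992392abcdefxxxx.scope"
container_id="$(printf '%s' "$path" | sed -E 's|^/custom/(.*)\.scope$|\1|')"
echo "$container_id"
```

This prints `xxxxxxx9992392abcdefxxxx`, the value that `<cgroup:1>` would substitute.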
Match by Environment Variables
You can also match based on process environment variables. Inspect them with:
sudo cat /proc/<pid>/environ
Example
host:
  additional_settings:
    custom_container:
      match:
        environ:
          CUSTOM_CONTAINER_NAME: (.*)
          MY_VAR: container_([a-z]*)
      container:
        id: <MY_VAR:1> # Capture group from the matched environment variable.
        name: <CUSTOM_CONTAINER_NAME>
Container Metadata
After a process matches, the agent fills container metadata that is sent to the backend.
Key | Field | Description |
---|---|---|
id | container.id | Required. Unique identifier for the container. Must not be empty. |
name | container.name | Human-readable name. Defaults to id if not set. |
image | container.image | Application image name. Optional. Used for grouping across hosts. |
labels | container.label.* | Key–value pairs for tagging (for example, `env=prod`). Empty values are not sent. |
Metadata values can be plain strings or any of the following template parameters:
Templated Parameter | Translates to |
---|---|
<cgroup> | Full matched cgroup path. |
<cgroup:N> | The N-th capture group of the cgroup (useful to shorten the container ID). |
<hostname> | Full hostname of the host running the agent (UTS namespaces not supported). |
<hostname:1> | Hostname up to the first dot (UTS namespaces not supported). |
<ENV_VAR_NAME> | The process’s matching environment variable, or an empty string if unset. |
<MATCHED_ENV_VAR_NAME:N> | The N-th capture group of the given process’s matching environment variable. |
Incremental Metadata Scans
By default, metadata is taken only from the first process that matches the conditions, provided it has a non-empty container name. In some cases, that process may not supply all the metadata defined in the configuration.
To handle this, set `custom_container.incremental_metadata: true`, which allows the agent to continue searching other matching processes for the remaining metadata:
host:
  additional_settings:
    custom_container:
      incremental_metadata: true
      metadata_deadline_secs: 10 # Maximum time to retrieve container metadata from processes when metadata is incomplete. Defaults to `10`.
Example
With the following sample configuration, processes that match both the cgroup and environment rules will be assigned to containers with:
- Container ID: the value captured by the cgroup capture group
- Container Name: the value of the `CUSTOM_CONTAINER_NAME` environment variable
- Container Image: a string formed by concatenating the static prefix `image-` with the first capture group from the `MY_VAR` environment variable
- Label: the pair `name: static_string`
host:
  additional_settings:
    custom_container:
      enabled: true
      limit: 50
      max_id_length: 50
      incremental_metadata: true
      match:
        cgroup: ^/custom/(.*)
        environ:
          CUSTOM_CONTAINER_NAME: (.*)
          MY_VAR: container_([a-z]*)
      container:
        id: <cgroup:1>
        name: <CUSTOM_CONTAINER_NAME>
        image: image-<MY_VAR:1>
        labels:
          name: static_string
Limitations
- The custom container runtime supports:
  - Up to 150 containers per host.
  - Container IDs up to 100 characters in length.
- Container-aware filters (for example, `container.id`, `container.name`) are not available in system captures.
- The characters `<` and `>` are reserved for template parameters and cannot be used in container data. Any character other than ASCII letters (`A–Z`, `a–z`), digits (`0–9`), `:`, `_`, or `.` is replaced with an underscore (`_`).
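The replacement rule can be reproduced with `tr`. A sketch that mirrors the described behavior, not the agent’s implementation:

```shell
# Replace every character outside the allowed set with an underscore.
raw='my app/v1'
sanitized="$(printf '%s' "$raw" | tr -c 'A-Za-z0-9:_.' '_')"
echo "$sanitized"
```

Here the space and slash fall outside the allowed set, so `my app/v1` becomes `my_app_v1`.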