Manual Installation on Kubernetes

All on-premises installations and upgrades are now scheduled with and guided by Sysdig technical account managers and the professional services division. See Oversight Services Now Offered for All Installs and Upgrades. For customers, the instructions in this section are for review purposes only.

The Sysdig platform includes both Sysdig Monitor and Sysdig Secure, which are licensed separately. All installations include Sysdig Monitor, while some of the Secure components are installed and configured as additional steps, as noted.

When installing the Sysdig platform with Kubernetes as the orchestrator, you install each backend component with separate kubectl commands.

Installation with the Installer tool is recommended from version 2.5.0 onwards.

To perform a manual install on OpenShift, see Manual Install (OpenShift). The manual install on Kubernetes 1.9+ is described below.


  • Access to a running Kubernetes cluster 1.9+

    (Note: if your environment is installed elsewhere, such as your own data center, contact Sysdig Professional Services to customize the installation instructions appropriately.)

  • Two items from your Sysdig purchase-confirmation email:

    • Your Sysdig license key

    • Your Sysdig pull secret

  • kubectl installed on your machine and communicating with the Kubernetes cluster

    (Note that your kubectl and Kubernetes versions should match to avoid errors.)

  • An External Load Balancer (required for production – see below)

    If installing in a cloud-provider environment (such as AWS, GCloud, or Azure), you will deploy an HAProxy load balancer and point a DNS record to that load balancer.

    If installing in your own data center, then you will need two DNS records, one for the collector and one for the UI.

  • A DNS server and control over a DNS name that you can point to Sysdig
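The kubectl/Kubernetes version-match note above can be checked mechanically. A minimal sketch, assuming you have read the client and server minor versions from kubectl version --short (the values below are placeholders):

```shell
# Placeholder values; read the real ones from `kubectl version --short`,
# e.g. "Client Version: v1.14.x" / "Server Version: v1.14.y".
client_minor=14
server_minor=14

# kubectl officially supports one minor version of skew from the API server.
skew=$(( client_minor > server_minor ? client_minor - server_minor : server_minor - client_minor ))
if [ "$skew" -le 1 ]; then
  echo "version skew OK"
else
  echo "upgrade kubectl to match the cluster"
fi
```

With matching minor versions this prints "version skew OK"; a skew of two or more is a signal to upgrade kubectl before continuing.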

Consider Elasticsearch Default Privileges

By default, the Elasticsearch container is installed in privileged (root-access) mode. This mode is needed only so the container can raise the host's Linux file-descriptor limits if necessary. See the Elasticsearch documentation on file descriptors for details.

If you prefer not to allow Elasticsearch to run with root access to the host, you will need to:

  1. Set your own file descriptors on all Linux hosts in the Kubernetes cluster.

    If one host were to go down, Kubernetes could choose a different node for Elasticsearch, so each Linux host must have the file descriptors set.

  2. Set privileged: false in the elasticsearch-statefulset.yaml file.

    See the step under Configure Backend Components, below, for details.
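If you take the non-privileged route, the limits Elasticsearch relies on must already be present on every node. A sketch of the host-level settings, using Elasticsearch's documented minimum values (the file paths are one common convention, not the only one):

```
# /etc/sysctl.d/99-elasticsearch.conf   (load with: sysctl --system)
vm.max_map_count = 262144

# /etc/security/limits.d/99-elasticsearch.conf  (file-descriptor limit)
*  soft  nofile  65536
*  hard  nofile  65536
```

Apply these on all Linux hosts in the cluster, since Kubernetes may schedule Elasticsearch on any of them.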

Configure Storage Class

If you are using EKS or GKE, default storage classes are provided; check for them (step 1).

In other environments, you may need to create a storage class (step 2).

Finally, enter the storageClassName in the appropriate .yaml files (step 3).

  1. Verify whether a storage class has been created, by running the command:

    kubectl get storageclass
  2. If no storage class has been defined, create a manifest for one, and then deploy it.

    For example, a manifest could be named sysdigcloud-storageclass.yaml and contain the following contents (for a storage class using GP2 volumes in AWS):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gp2
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: EnsureExists
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2

    Now run the command:

    kubectl apply -f sysdigcloud-storageclass.yaml

Download the Source Files to a New Namespace

Sysdig provides the necessary scripts, images, and .yaml files in a GitHub repository. The first step is to clone those files and check out the latest version. (These examples use 1234.)

Find the current release tag on the repository's Releases page.

  1. Clone the repository and check out the release tag:

    git clone https://github.com/draios/sysdigcloud-kubernetes.git
    cd sysdigcloud-kubernetes
    git checkout tags/<1234>
  2. Create a namespace called sysdigcloud:

    kubectl create namespace sysdigcloud

Add External Load Balancer

Create a TCP load balancer (e.g., an AWS NLB) that forwards ports 80, 443, and 6443 to the Kubernetes worker nodes, with a healthcheck to /healthz on port 10253.

This can be done in three ways:

  1. Use an existing external load balancer. Sysdig relies heavily on DNS; you need a DNS record pointing to the load balancer.

  2. Create a load balancer in your cloud provider (for example, an NLB in AWS). You need a DNS record that points to the load balancer. This is the fully qualified domain name required later in config.yaml, api-ingress.yaml, and/or api-ingress-with-secure.yaml.

  3. Create a yaml with the following content and apply it to the sysdigcloud namespace. This automatically creates a load balancer in the cloud provider environment, with an external DNS name.

    This is the fully qualified domain name required later in the config.yaml, api-ingress.yaml and/or api-ingress-with-secure.yaml.

    apiVersion: v1
    kind: Service
    metadata:
      name: haproxy-ingress-lb-service
    spec:
      type: LoadBalancer
      ports:
      - name: http
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443
      - name: https2
        port: 6443
        targetPort: 6443
      selector:
        run: haproxy-ingress
  4. Apply the changes to the sysdigcloud namespace.

    kubectl -n sysdigcloud apply -f <your-lb-service.yaml>
  5. To get the DNS name, run the command:

    kubectl get svc -o wide -n sysdigcloud

    The output shows the External-IP (DNS name):

    NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP           PORT(S)                                     AGE   SELECTOR
    haproxy-ingress-lb-service   LoadBalancer   <CLUSTER-IP>   <EXTERNAL-DNS-NAME>   80:31688/TCP,443:32324/TCP,6443:30668/TCP   1d    run=haproxy-ingress

DNS Entry (For Test Environments without a Load Balancer)

Not for production environments.

Create a DNS entry for your Sysdig install using the fully qualified domain name that contains all the external IPs as A records. This will use DNS round-robin to load balance your clients to the Kubernetes cluster.
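As a sketch, the round-robin entries in a BIND-style zone file might look like this (the name and addresses are placeholders; use one A record per worker node):

```
; One A record per Kubernetes worker node; clients rotate across them.
sysdig.example.com.   300   IN   A   192.0.2.10
sysdig.example.com.   300   IN   A   192.0.2.11
sysdig.example.com.   300   IN   A   192.0.2.12
```

A short TTL (here 300 seconds) limits how long clients keep resolving to a node that has gone down.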

Prepare the Environment

The install images, scripts, and other files are located in the GitHub repository cloned earlier.

Step 1 Configure Backend Components

The ConfigMap (config.yaml) is populated with information about usernames, passwords, SSL certs, and various application-specific settings.

The steps below give the minimum edits that should be performed in a test environment.

It is necessary to review and customize the entries in config.yaml before launching in a production environment.

See Apply Configuration Changes for the kubectl format to use for post-install edits, such as adding third-party authenticators like LDAP.

If you are not installing Sysdig Secure, set the following attributes to false in the config.yaml:

  • nats.enabled: "false"
  • nats.forward.enabled: "false"
  1. Add your license key:

    In config.yaml, enter the key that was emailed to you in the following parameter:

    # Required: Sysdig Cloud license
    sysdigcloud.license: "<LICENSE_KEY>"
  2. Change the super admin name and password, which are the super admin credentials for the entire system.

    Find the settings in config.yaml here:

      # Required: Sysdig Cloud super admin user password
      # NOTE: Change upon first login
      sysdigcloud.default.user.password: test
  3. Change the mysql.password from change_me to your desired credentials.

    mysql.password: change_me

    The config.yaml file also documents the Cassandra endpoint setting:

    # Required: Cassandra endpoint DNS/IP. If Cassandra is deployed as a
    # Kubernetes service, this will be the service name.
    # If using an external database, put the proper address (the address of
    # a single node will be sufficient)
  4. **Edit the collector endpoint and api-url:** Change the defaults (sysdigcloud-collector and sysdigcloud-api:443) to point to the DNS name you have established for Sysdig.

    Note: The collector port should remain 6443.

    collector.endpoint: <DNS_NAME>
    collector.port: "6443"
    api.url: https://<DNS_NAME>:443
  5. Recommended: edit the file to set the JVM options for Cassandra, Elasticsearch, API, worker, and collector as well.

    (To use the AWS implicit key, edit the JVM options as described in AWS: Integrate AWS Account and CloudWatch Metrics (Optional).)

    For installations over 100 agents, it is recommended to allocate 8 GB per JVM.

      cassandra.jvm.options: "-Xms8G -Xmx8G"
      elasticsearch.jvm.options: "-Xms8G -Xmx8G"
      sysdigcloud.jvm.api.options: "-Xms8G -Xmx8G"
      sysdigcloud.jvm.worker.options: "-Xms8G -Xmx8G"
      sysdigcloud.jvm.collector.options: "-Xms8G -Xmx8G"

    Note: If you do not wish to use SSL between the agent and the collector, use the following settings instead:

    cassandra.jvm.options: "-Xms8G -Xmx8G"
    elasticsearch.jvm.options: "-Xms8G -Xmx8G"
    sysdigcloud.jvm.api.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
    sysdigcloud.jvm.worker.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
    sysdigcloud.jvm.collector.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"

    See also: Step 5: Set Up SSL Connectivity to the Backend.

  6. Optional: Change the Elasticsearch container setting to non-privileged.

    To change the default setting, edit the file elasticsearch-statefulset.yaml and set privileged: false:

      - name: elasticsearch
        securityContext:
          privileged: false
  7. Deploy the configuration map and secrets for all services by running the commands:

    For Sysdig Monitor:

    kubectl -n sysdigcloud apply -f sysdigcloud/config.yaml

    To add Sysdig Secure:

    kubectl -n sysdigcloud apply -f sysdigcloud/scanning-secrets.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-secrets.yaml

    Apply the secret for the policy advisor:

    kubectl -n sysdigcloud apply -f sysdigcloud/policy-advisor-secret.yaml
  8. Configure the DNS name in api-ingress.yaml (or api-ingress-with-secure.yaml if using Secure). (Files are located in sysdigcloud/.)

    Edit: host: <EXTERNAL-DNS-NAME> to suit your DNS name

  9. Define the namespace in ingress-clusterrolebinding.yaml. (File located in sysdigcloud/ingress_controller/.) Edit: namespace: sysdigcloud
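Taken together, the minimum test-environment edits above amount to a handful of keys in config.yaml. A consolidated sketch (all values are placeholders, keyed to the steps above):

```yaml
# Minimal test-environment config.yaml edits (placeholder values)
sysdigcloud.license: "<LICENSE_KEY>"
sysdigcloud.default.user.password: "<SUPER_ADMIN_PASSWORD>"
mysql.password: "<MYSQL_PASSWORD>"
collector.endpoint: "<DNS_NAME>"
collector.port: "6443"
api.url: "https://<DNS_NAME>:443"
cassandra.jvm.options: "-Xms8G -Xmx8G"
elasticsearch.jvm.options: "-Xms8G -Xmx8G"
```

A production deployment requires the full review of config.yaml described earlier, not just these keys.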

Step 2 Add Storage Class to Manifests

Using either the existing storage class name from step 1, or the storage class name defined in the previous step, edit the storageClassName in the following .yaml files:

For Monitor, the storage class is referenced in the Cassandra and Elasticsearch statefulset manifests (under datastores/as_kubernetes_pods/manifests/).

With Secure, the PostgreSQL statefulset manifest references it as well.

Step 3 (Secure-Only): Edit mysql-deployment.yaml

If using Sysdig Secure:

Edit the MySQL deployment to uncomment the MYSQL_EXTRADB_* environment variables.

This forces MySQL to create the necessary scanning database on startup.

File location: datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml

                # The uncommented MYSQL_EXTRADB_* block (variable names
                # abbreviated here; use the names as they appear in the file):
                - name: MYSQL_EXTRADB_DBNAME
                  valueFrom:
                    configMapKeyRef:
                      name: sysdigcloud-config
                      key: scanning.mysql.dbname
                - name: MYSQL_EXTRADB_USER
                  valueFrom:
                    configMapKeyRef:
                      name: sysdigcloud-config
                      key: scanning.mysql.user
                - name: MYSQL_EXTRADB_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: sysdigcloud-scanning
                      key: scanning.mysql.password

The scanning service will not start unless MySQL creates the scanning database.

Step 4 Deploy Your Quay Pull Secret

A specific Quay pull secret is sent via email with your license key.

  1. Edit the file sysdigcloud/pull-secret.yaml and replace the placeholder <PULL_SECRET> with the provided pull secret.

  2. Deploy the pull secret object:

    kubectl -n sysdigcloud apply -f sysdigcloud/pull-secret.yaml

Step 5 Set Up SSL Connectivity to the Backend

SSL-secured communication is used between user browsers and the Sysdig API server(s), and between the Sysdig agent and the collectors.

To set this up, you must:

  • Use existing standard certs for API and collector, or

  • Create self-signed certificates and keys for API and collector

To Disable SSL between Agent and Collector

To disable SSL between agents and collectors, set JVM options when configuring backend components.

To Create Self-Signed Certs

Run these commands (edit them to add your API_DNS_NAME and COLLECTOR_DNS_NAME):

openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/C=US/ST=CA/L=SanFrancisco/O=ICT/CN=<API_DNS_NAME>" -keyout server.key -out server.crt
openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/C=US/ST=CA/L=SanFrancisco/O=ICT/CN=<COLLECTOR_DNS_NAME>" -keyout collector.key -out collector.crt
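Before creating the Kubernetes secrets, it can be worth confirming that each certificate's CN matches the DNS name agents and browsers will actually use. A sketch, shown here against a throwaway certificate for a hypothetical name (run the same x509 check on your real server.crt and collector.crt):

```shell
# Generate a throwaway self-signed cert for a hypothetical DNS name.
openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 \
  -subj "/CN=sysdig.example.com" \
  -keyout /tmp/check.key -out /tmp/check.crt 2>/dev/null

# Print the subject; it should contain the DNS name you expect.
openssl x509 -in /tmp/check.crt -noout -subject
```

A CN mismatch here is the usual cause of agents rejecting the collector certificate later.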

To Create Kubernetes Secrets

This uses two different certificates, one for the API/UI, and one for the collector.

kubectl -n sysdigcloud create secret tls sysdigcloud-ssl-secret --cert=server.crt --key=server.key
kubectl -n sysdigcloud create secret tls sysdigcloud-ssl-secret-collector --cert=collector.crt --key=collector.key

Step 6 (Optional) Use CA Certs for External SSL Connection

The Sysdig platform may sometimes open connections over SSL to certain external services, including:

  • LDAP over SSL

  • SAML over SSL

  • OpenID Connect over SSL

  • HTTPS Proxies

If the signing authorities for the certificates presented by these services are not well-known to the Sysdig Platform (e.g., if you maintain your own Certificate Authority), they are not trusted by default.

To allow the Sysdig platform to trust these certificates, use the command below to upload one or more PEM-format CA certificates. Ensure you upload every certificate in the chain of trust, up to the root CA.

kubectl -n sysdigcloud create secret generic sysdigcloud-java-certs --from-file=certs1.crt --from-file=certs2.crt

Install Components

Install Datastores and Backend Components

For Sysdig Monitor:

  1. Create the datastore statefulsets for Elasticsearch and Cassandra. Both are automatically set up with three replicas, forming full clusters.

    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/cassandra/cassandra-service.yaml
    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/cassandra/cassandra-statefulset.yaml
    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-service.yaml
    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-statefulset.yaml
  2. Wait for those processes to be running, then create the database and caching systems, MySQL and Redis.

    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml
    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/redis/redis-deployment.yaml

    To add Sysdig Secure: Create the PostgreSQL database:

    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/postgres/postgres-service.yaml
    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/postgres/postgres-statefulset.yaml
  3. Wait until datastore pods are in ready state:

    Run the command:

    kubectl -n sysdigcloud get pods

    Then look in the READY column to ensure all pods are ready. For example, a value of 1/1 means 1 of 1 containers in the pod is ready.

  4. Apply the NATS service and deployment to deliver events to Sysdig backend components:

    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/nats-streaming/nats-streaming-deployment.yaml
    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/nats-streaming/nats-streaming-service.yaml
  5. Apply the API deployment. Pause until all containers in the API pod are running, then apply the collector and worker deployments.

    kubectl -n sysdigcloud apply -f sysdigcloud/api-deployment.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/collector-deployment.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/worker-deployment.yaml
  6. Create the service for the API and collector:

    kubectl -n sysdigcloud apply -f sysdigcloud/api-headless-service.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/collector-headless-service.yaml
  7. Sysdig Secure only: Create the anchore-engine deployments and service (used in scanning):

    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-service.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-core-config.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-core-deployment.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-worker-config.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-worker-deployment.yaml

    Wait 60 seconds to ensure the Anchore components are up and running. Then deploy custom Sysdig Secure scanning components:

    kubectl -n sysdigcloud apply -f sysdigcloud/scanning-service.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/scanning-api-deployment.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/scanning-alertmgr-service.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/scanning-alertmgr-deployment.yaml
  8. Sysdig Secure only: Create services, deployments, and a janitor job for the activity audit and policy advisor features:

    kubectl -n sysdigcloud apply -f sysdigcloud/policy-advisor-service.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/activity-audit-api-service.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/activity-audit-api-deployment.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/policy-advisor-deployment.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/activity-audit-worker-deployment.yaml
    kubectl -n sysdigcloud apply -f sysdigcloud/activity-audit-janitor-cronjob.yaml
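The "wait until pods are ready" pauses in the steps above can be scripted. A hypothetical helper (the awk filter simply compares the two sides of the READY column, e.g. "1/1" vs "0/1"):

```shell
# Loop until every pod in the namespace reports all containers ready.
while kubectl -n sysdigcloud get pods --no-headers \
      | awk '{split($2,a,"/"); if (a[1]!=a[2]) notready=1} END{exit !notready}'; do
  echo "waiting for sysdigcloud pods..."
  sleep 10
done
```

The awk program exits 0 (keep waiting) while any pod shows fewer ready containers than total, and 1 (stop) once every READY value is n/n.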

Connecting to the Cluster

Add Cluster-Admin to User (GKE/GCloud Only)

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)

Add Ingress Controller

For Sysdig Monitor:

To permit incoming connections to the Sysdig API and collector, deploy the following ingress yamls.

kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-clusterrole.yaml
kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-clusterrolebinding.yaml
kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-role.yaml
kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-rolebinding.yaml
kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-serviceaccount.yaml
kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/default-backend-service.yaml
kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/default-backend-deployment.yaml
kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-configmap.yaml
kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-tcp-services-configmap.yaml
kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-daemonset.yaml

If NOT using Sysdig Secure, then apply the following ingress.yaml:

kubectl -n sysdigcloud apply -f sysdigcloud/api-ingress.yaml

For Sysdig Secure:

If you ARE using Secure, apply api-ingress-with-secure.yaml instead:

kubectl -n sysdigcloud apply -f sysdigcloud/api-ingress-with-secure.yaml

Install Complete

When the terminal messages indicate that installation was successfully completed:

  • Point your browser to https://API_DNS_NAME.

    You will be prompted to log in with the Admin credentials you set in Step 1 Configure Backend Components.

  • Log in as Super Admin.

    The Welcome Wizard is launched and prompts you to install your first Sysdig agent.

  • Install the agent(s).

    The Welcome Wizard should be populated with install parameters from your environment (access key, collector name, and collector port). For example:

    docker run -d --name sysdig-agent --restart always --privileged --net host --pid host \
      -e ACCESS_KEY=xxxxxxxxxx -e COLLECTOR=<COLLECTOR_DNS_NAME> -e COLLECTOR_PORT=6443 \
      -e CHECK_CERTIFICATE=false -e TAGS=example_tag:example_value \
      -v /var/run/docker.sock:/host/var/run/docker.sock -v /dev:/host/dev \
      -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
      -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro \
      --shm-size=350m sysdig/agent

Apply Configuration Changes

Replace kubectl with oc for OpenShift.

Update the Config Map

There are two ways to change the original installation parameters in the config map: edit or overwrite.

  • To edit the config map, run the following command:

    kubectl edit configmap/sysdigcloud-config --namespace sysdigcloud

    A text editor is presented with the config map to be edited. Enter parameters as needed, then save and quit.

    Then restart the config map (below).

  • To overwrite the config map with one edited on the client side (e.g., to keep it synced in a Git repository), use the following command:

    kubectl replace -f sysdigcloud/config.yaml --namespace sysdigcloud

    Then restart the config map (below).

Restart Configmap

After updating the configmap, the Sysdig components must be restarted for the changed parameters to take effect. This can be done by forcing a rolling update of the deployments.

A possible way to do so is to change something innocuous in the pod template, such as an annotation, which forces a rolling update. For example:

kubectl -n sysdigcloud patch deployment [deployment] -p \
 "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%s')\"}}}}}"

Replace kubectl with oc for OpenShift.