
                  On-Premises Installation

                  When planning to install Sysdig products on-premises, enterprises should review the guidance below.

                  Oversight Services Now Offered for All Installs and Upgrades

                  As part of our continued focus on our customers, we are now offering oversight services for all on-premises installs and upgrades. Your Technical Account Manager (TAM), in conjunction with our support organization and Professional Services (where applicable), will work with you to:

                  • Assess your environment to ensure it is configured correctly

                  • Review your infrastructure to validate the appropriate storage capacities are available

                  • Review and provide recommendations for backing up your Sysdig data

                  • Work with you to ensure our teams are ready to assist you during the install and upgrade process

                  • Provide the software for the install

                  • Be available during the process to ensure a successful deployment

                  You can always review the process in the documentation on GitHub (v. 3.6.0+) or the standard docs site (for older versions).

                  If you are a new customer looking to explore Sysdig, please head over here to sign up for a trial on our SaaS Platform. Alternatively, you can contact us here.


                  Installer (Kubernetes | OpenShift)

                  For v3.6.0+, go to the GitHub repo. On-prem installation documentation is transitioning to GitHub.

                  All on-premises installations and upgrades are now scheduled with, and guided by, Sysdig Technical Account Managers and the Professional Services division. See Oversight Services Now Offered for All Installs and Upgrades.

                  For customers, the instructions in this section are for review purposes only.

                  The Sysdig Installer tool is a binary containing a collection of scripts that help automate the on-premises deployment of the Sysdig platform (Sysdig Monitor and/or Sysdig Secure), for environments using Kubernetes or OpenShift. Use the Installer to install or upgrade your Sysdig platform. It is recommended as a replacement for the earlier Kubernetes manual installation and upgrade procedures.

                  Installation Overview

                  To install, you will download the installer binary and a values.yaml file, provide a few basic parameters, and launch the Installer. In a normal installation, the rest is automatically configured and deployed.

                  You can perform a quick install if your environment has access to the internet, or a partial or full airgapped installation, as needed. Each is described below.

                  See Frequently Used Installer Configurations to:

                  • Customize or override settings

                  • Use hostPath for static storage of Sysdig components

                  • Use Kubernetes node labels and taints to run only Sysdig pods on selected nodes in a cluster

                  Install vs Upgrade

                  With Sysdig Platform 3.5.0, the installer has been simplified from previous versions. Upgrade differs from Install in that you run an installer diff to discover the differences between the old and new versions and then installer deploy for the new version.

                  If you are installing the Sysdig Platform for the first time, ignore the For Upgrade Only step in the process.

                  If you are upgrading, check the Upgrade notes before you begin.

                  Prerequisites

                  The installer must be run from a machine with kubectl/oc configured with access to the target cluster where the Sysdig platform will be installed. Note that this cluster may be different than where the Sysdig agent will be deployed.

                  Requirements for Installation Machine with Internet Access

                  • Network access to Kubernetes cluster

                  • Network access to quay.io

                  • A domain name you are in control of.

                  Additional Requirements for Airgapped Environments

                  • Edited values.yaml with airgap registry details updated

                  • Network and authenticated access to the private registry

                  Access Requirements

                  • Sysdig license key (Monitor and/or Secure)

                  • Quay pull secret

                  Storage Requirements

                  You may use dynamic or static storage on a variety of platforms to store the Sysdig platform components (stateful sets). Different configuration parameters and values are used during the install, depending on which scenario you have.

                  Use Case 1: Default, undefined (AWS/GKE)

                  If you will use dynamic storage on AWS or GKE and haven’t configured any storage class there yet, then the Quick Install streamlines the process for you.

                  • storageclassProvisioner: Enter aws or gke. The installer will create the appropriate storage class and then use it for all the Sysdig platform stateful sets.

                  • storageclassName: Leave empty.
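In values.yaml terms, this use case reduces to a two-line sketch (aws is shown, but gke works the same way; check configuration_parameters.md for the authoritative casing of each key):

```yaml
# Use Case 1: dynamic storage on AWS, no pre-existing storage class
storageClassProvisioner: aws   # installer creates the storage class and uses it
storageClassName: ""           # leave empty
```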

                  Use Case 2: Dynamic, predefined

                  It is also possible that you are using dynamic storage but have already created storage classes there. This dynamic storage could be AWS, GKE, or any other functioning dynamic storage you use.  In this case, you would enter: 

                  • storageclassProvisioner: Leave empty; anything put here would be ignored.

                  • storageclassName: Provide the name of the pre-configured storage class you want to use. The installer will use this storage class for all the Sysdig platform stateful sets.
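As a sketch (the storage class name is a placeholder for your own pre-configured class):

```yaml
# Use Case 2: a pre-configured storage class already exists
storageClassProvisioner: ""                  # ignored in this use case
storageClassName: my-existing-storage-class  # placeholder name
```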

                  Use Case 3: Static Storage

                  In cases where dynamic storage is not available, you can use static storage for the Sysdig stateful sets. In this case, you would use:

                  • storageclassProvisioner: Enter hostpath, then define the nodes for the four main Sysdig components: ElasticSearch, Cassandra, MySQL, and Postgres.

                  • See Frequently Used Installer Configurations for details.

                  Quickstart Install

                  This install assumes the Kubernetes cluster has network access to pull images from quay.io.

                  1. Have your Sysdig Technical Account Manager download the installer binary that matches your OS from the sysdigcloud-kubernetes releases page.

                  2. For Upgrades Only: Copy the current version of values.yaml to your working directory:

                    ./installer-image import -n sysdig --certs-directory certs -o values.yaml
                    

                    If you will be editing for an OpenShift installation and want to review a sample, see the openshift-with-hostpath values.yaml.

                  3. Edit the following values:

                    • size: Specifies the size of the cluster. Size defines CPU, Memory, Disk, and Replicas. Valid options are: small, medium and large

                    • quaypullsecret: The quay.io pull secret provided with your Sysdig purchase confirmation mail

                    • storageClassProvisioner: Review Storage Requirements, above.

                      If you have the default use case, enter aws or gke in the storageClassProvisioner field. Otherwise, refer to Use Case 2 or 3.

                    • sysdig.license: Sysdig license key provided with your Sysdig purchase confirmation mail

                    • sysdig.dnsname: The domain name the Sysdig APIs will be served on. Note that the master node may not be used as the DNS name when using hostNetwork mode.

                    • sysdig.collector.dnsName: (OpenShift installs only) Domain name the Sysdig collector will be served on. When not configured it defaults to whatever is configured for sysdig.dnsName. Note that the master node may not be used as the DNS name when using hostNetwork mode.

                    • deployment: (OpenShift installs only) Add deployment: openshift to the root of the values.yaml file.

                    • sysdig.ingressNetworking: The networking construct used to expose the Sysdig API and collector. Options are:

                      • hostnetwork: sets the hostnetworking in the ingress daemonset and opens host ports for api and collector. This does not create a Kubernetes service.

                      • loadbalancer: creates a service of type loadbalancer and expects that your Kubernetes cluster can provision a load balancer with your cloud provider.

                      • nodeport: creates a service of type nodeport. The node ports can be customized with:

                        sysdig.ingressNetworkingInsecureApiNodePort

                        sysdig.ingressNetworkingApiNodePort

                        sysdig.ingressNetworkingCollectorNodePort

                        When not configured, sysdig.ingressNetworking defaults to hostnetwork.

                      If doing an airgapped install, you would also edit the following values:

                    • airgapped_registry_name: The URL of the airgapped (internal) docker registry. This URL is used for installations where the Kubernetes cluster can not pull images directly from Quay

                    • airgapped_repository_prefix: This defines custom repository prefix for airgapped_registry. Tags and pushes images as airgapped_registry_name/airgapped_repository_prefix/image_name:tag

                    • airgapped_registry_password: The password for the configured airgapped_registry_username. Ignore this parameter if the registry does not require authentication.

                    • airgapped_registry_username: The username for the configured airgapped_registry_name. Ignore this parameter if the registry does not require authentication.
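Taken together, a minimal values.yaml for a non-airgapped quickstart might look like the sketch below. All values are placeholders, and the exact key casing should be confirmed against configuration_parameters.md:

```yaml
size: small
quaypullsecret: "<quay-pull-secret-from-purchase-mail>"
storageClassProvisioner: aws
sysdig:
  license: "<sysdig-license-key>"
  dnsName: sysdig.my-awesome-domain.com
  ingressNetworking: hostnetwork
```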

                  4. [For Upgrades Only:]

                    [Generate and review the diff of changes the installer is about to introduce:

                    ./installer diff
                    

                    This will generate the differences between the installed environment and the upgrade version. The changes will be displayed in your terminal.

                    If you want to override a change, based on your environment’s custom settings, then contact Sysdig Support for assistance.]

                  5. Run the installer:

                    ./installer deploy
                    
                  6. See Output (below) to finish.

                  Save the values.yaml file in a secure location; it will be used for future upgrades.

                  There will also be a generated directory containing various Kubernetes configuration yaml files that were applied by the Installer against your cluster. It is not necessary to keep the generated directory, as the Installer can regenerate it consistently with the same values.yaml file.

                  Airgapped Installation Options

                  The installer can be used in airgapped environments, either with a multi-homed installation machine that has internet access, or in an environment with no internet access.

                  Airgapped with Multi-Homed Installation Machine

                  This assumes a private docker registry is used and the installation machine has network access to pull from quay.io and push images to the private registry.

                  The Prerequisites and workflow are the same as in the Quickstart Install (above) with the following exceptions:

                  • In step 2, add the airgap registry information

                  • After step 3, make the installer push Sysdig images to the airgapped registry by running:

                    ./installer airgap
                    

                    That will pull all the images into the images_archive directory as tar files and push them to the airgapped registry.

                  • If you are upgrading, run the diff as directed in Step 4.

                  • Run the installer:

                    ./installer deploy
                    

                  Full Airgap Install

                  This assumes a private docker registry is used and the installation machine does not have network access to pull from quay.io, but can push images to the private registry.

                  In this situation, a machine with network access (called the “jump machine”) will pull an image containing a self-extracting tarball which can be copied to the installation machine.

                  Access Requirements

                  • Sysdig license key (Monitor and/or Secure) 

                  • Quay pull secret

                  • Anchore license file (if Sysdig Secure is licensed)

                  Requirements for jump machine

                  • Network access to quay.io

                  • Docker

                  • jq

                  Requirements for installation machine

                  • Network access to Kubernetes cluster

                  • Docker

                  • Network and authenticated access to the private registry

                  • Edited values.yaml with airgap registry details updated

                  • Host Disk Space Requirements: /tmp > 4 GB; the directory from which the installer is run > 8 GB; and /var/lib/docker > 4 GB.

                    NOTE: The environment variable TMPDIR can be used to override the /tmp directory.
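The disk-space minimums above can be sanity-checked before launching the installer. The sketch below uses only standard df and awk; the paths are the documented defaults, and INSTALL_DIR is a placeholder for wherever you run the installer from:

```shell
#!/bin/sh
# Pre-flight check of the documented minimums: /tmp > 4 GB, installer
# directory > 8 GB, /var/lib/docker > 4 GB. TMPDIR overrides /tmp, as noted.
INSTALL_DIR="${INSTALL_DIR:-$PWD}"   # placeholder: where you run the installer

check_space() {
  # $1 = path, $2 = required GiB
  avail_gb=$(df -Pk "$1" 2>/dev/null | awk 'NR==2 {print int($4/1048576)}')
  if [ -z "$avail_gb" ]; then
    echo "SKIP: $1 not found"
  elif [ "$avail_gb" -lt "$2" ]; then
    echo "WARN: $1 has ${avail_gb} GiB free, need more than $2 GiB"
  else
    echo "OK: $1 has ${avail_gb} GiB free"
  fi
}

check_space "${TMPDIR:-/tmp}" 4
check_space "$INSTALL_DIR" 8
check_space /var/lib/docker 4
```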

                  Docker Log In to quay.io

                  • Retrieve Quay username and password from Quay pull secret. For example:

                    AUTH=$(echo <REPLACE_WITH_quaypullsecret> | base64 --decode | jq -r '.auths."quay.io".auth'| base64 --decode)
                    QUAY_USERNAME=${AUTH%:*}
                    QUAY_PASSWORD=${AUTH#*:}
                    
                  • Log in to quay.io using the username and password retrieved above.

                    docker login -u "$QUAY_USERNAME" -p "$QUAY_PASSWORD" quay.io
                    

                  Workflow

                  On the Jump Machine

                  1. Follow the Docker Log In to quay.io steps, above.

                  2. Pull the image containing the self-extracting tar:

                    docker pull quay.io/sysdig/installer:-uber
                    
                  3. Extract the tarball:

                    docker create --name uber_image quay.io/sysdig/installer:-uber
                    docker cp uber_image:/sysdig_installer.tar.gz .
                    docker rm uber_image
                    
                  4. Copy the tarball to the installation machine.

                  On the Installation Machine:

                  1. Copy the current version values.yaml to your working directory.

                    wget https://raw.githubusercontent.com/draios/sysdigcloud-kubernetes/installer/installer/values.yaml
                    
                  2. Edit the following values:

                    • size: Specifies the size of the cluster. Size defines CPU, Memory, Disk, and Replicas. Valid options are: small, medium and large

                    • quaypullsecret: The quay.io pull secret provided with your Sysdig purchase confirmation mail

                    • storageClassProvisioner: Review Storage Requirements, above.

                      If you have the default use case, enter aws or gke in the storageClassProvisioner field. Otherwise, refer to Use Case 2 or 3.

                    • sysdig.license: Sysdig license key provided with your Sysdig purchase confirmation mail

                    • sysdig.dnsname: The domain name the Sysdig APIs will be served on. Note that the master node may not be used as the DNS name when using hostNetwork mode.

                    • sysdig.collector.dnsName: (OpenShift installs only) Domain name the Sysdig collector will be served on. When not configured it defaults to whatever is configured for sysdig.dnsName. Note that the master node may not be used as the DNS name when using hostNetwork mode.

                    • deployment: (OpenShift installs only) Add deployment: openshift to the root of the values.yaml file.

                    • sysdig.ingressNetworking: The networking construct used to expose the Sysdig API and collector. Options are:

                      • hostnetwork: sets the hostnetworking in the ingress daemonset and opens host ports for api and collector. This does not create a Kubernetes service.

                      • loadbalancer: creates a service of type loadbalancer and expects that your Kubernetes cluster can provision a load balancer with your cloud provider.

                      • nodeport: creates a service of type nodeport. The node ports can be customized with:

                        sysdig.ingressNetworkingInsecureApiNodePort

                        sysdig.ingressNetworkingApiNodePort

                        sysdig.ingressNetworkingCollectorNodePort

                    • airgapped_registry_name: The URL of the airgapped (internal) docker registry. This URL is used for installations where the Kubernetes cluster can not pull images directly from Quay

                    • airgapped_repository_prefix: This defines custom repository prefix for airgapped_registry. Tags and pushes images as airgapped_registry_name/airgapped_repository_prefix/image_name:tag

                    • airgapped_registry_password: The password for the configured airgapped_registry_username. Ignore this parameter if the registry does not require authentication.

                    • airgapped_registry_username: The username for the configured airgapped_registry_name. Ignore this parameter if the registry does not require authentication.
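The three airgapped_* naming values above combine into the image references the installer pushes. A small sketch of the composition (all values are placeholders, not real endpoints):

```shell
# How an airgapped image reference is composed:
# airgapped_registry_name/airgapped_repository_prefix/image_name:tag
airgapped_registry_name="registry.internal.example.com"   # placeholder
airgapped_repository_prefix="sysdig"                      # placeholder
image_name="vuln-feed-database"
tag="latest"

ref="${airgapped_registry_name}/${airgapped_repository_prefix}/${image_name}:${tag}"
echo "$ref"
# prints registry.internal.example.com/sysdig/vuln-feed-database:latest
```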

                  3. Copy the tarball file to the directory where you have your values.yaml file.

                  4. Run:

                    ./installer airgap --tar-file sysdig_installer.tar.gz
                    

                    NOTE: This step will extract the images into the images_archive directory relative to where the installer was run and push the images to the airgapped_registry.

                  5. [For Upgrades Only:]

                    [Generate and review the diff of changes the installer is about to introduce:

                    ./installer diff
                    

                    This will generate the differences between the installed environment and the upgrade version. The changes will be displayed in your terminal.

                    If you want to override a change, based on your environment’s custom settings, then contact Sysdig Support for assistance.]

                  6. Run the installer:

                    ./installer deploy
                    
                  7. See Output (below) to finish.

                  Save the values.yaml file in a secure location; it will be used for future upgrades.

                  There will also be a generated directory containing various Kubernetes configuration yaml files that were applied by the Installer against your cluster. It is not necessary to keep the generated directory, as the Installer can regenerate it consistently with the same values.yaml file.

                  Updating Vulnerability Feed in Airgapped Environments

                  NOTE: Sysdig Secure users who install in an airgapped environment do not have internet access to the continuous checks of vulnerability databases that are used in image scanning. (See also: How Sysdig Image Scanning Works.)

                  As of installer version 3.2.0-9, airgapped environments can also receive periodic vulnerability database updates.

                  When you install with the “airgapped_” parameters enabled (see Full Airgap Install instructions), the installer will automatically push the latest vulnerability database to your environment. Follow the steps below to reinstall or refresh the vulnerability database, or use the script and cron job to schedule automated updates (daily, weekly, etc.).

                  To automatically update the vulnerability database, you can:

                  1. Download the image file quay.io/sysdig/vuln-feed-database:latest from the Sysdig registry to the jump box server and save it locally.

                  2. Move the file from the jump box server to the airgapped environment (if needed)

                  3. Load the image file and push it to the airgapped image registry.

                  4. Restart the pod sysdigcloud-feeds-db

                  5. Restart the pod feeds-api

                  The following script (feeds_database_update.sh) performs the five steps:

                  #!/bin/bash
                  QUAY_USERNAME="<change_me>"
                  QUAY_PASSWORD="<change_me>"
                  
                  # Download image
                  docker login quay.io/sysdig -u ${QUAY_USERNAME} -p ${QUAY_PASSWORD}
                  docker image pull quay.io/sysdig/vuln-feed-database:latest
                  # Save image
                  docker image save quay.io/sysdig/vuln-feed-database:latest -o vuln-feed-database.tar
                  # Optionally move image
                  mv vuln-feed-database.tar /var/shared-folder
                  # Load image remotely
                  ssh -t user@airgapped-host "docker image load -i /var/shared-folder/vuln-feed-database.tar"
                  # Push image remotely
                   ssh -t user@airgapped-host "docker tag quay.io/sysdig/vuln-feed-database:latest airgapped-registry/vuln-feed-database:latest"
                  ssh -t user@airgapped-host "docker image push airgapped-registry/vuln-feed-database:latest"
                  # Restart database pod
                  ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-db --replicas=0"
                  ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-db --replicas=1"
                  # Restart feeds-api pod
                  ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-api --replicas=0"
                  ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-api --replicas=1"
                  

                  Schedule a cron job to run the script on a chosen schedule (e.g. every day):

                  0 8 * * * feeds_database_update.sh >/dev/null 2>&1
                  

                  Output

                  A successful installation should display output in the terminal such as:

                  All Pods Ready.....Continuing
                  Congratulations, your Sysdig installation was successful!
                  You can now login to the UI at "https://awesome-domain.com:443" with:
                  
                  username: "configured-username@awesome-domain.com"
                  password: "awesome-password"
                  


                  Additional Installer Resources


                  Frequently Used Installer Configurations

                  SMTP Configs for Email Notifications

                  The available fields for SMTP configuration are documented in the configuration_parameters.md. Each includes SMTP in its name. For example:

                  sysdig:
                    ...
                    smtpServer: smtp.sendgrid.net
                    smtpServerPort: 587
                    #User,Password can be empty if the server does not require authentication
                    smtpUser: apikey
                    smtpPassword: XY.abcdefghijk...
                    smtpProtocolTLS: true
                    smtpProtocolSSL: false
                    #Optional Email Header
                    smtpFromAddress: sysdig@mycompany.com
                  

                  To configure email settings to be used for a notification channel, copy the parameters and appropriate values into your values.yaml.

                  Configure AWS Credentials Using the Installer

                  The available fields for AWS credentials are documented in the configuration_parameters.md. They are:

                  sysdig:
                    accessKey: my_awesome_aws_access_key
                    secretKey: my_super_secret_secret_key
                  

                  Use hostPath for Static Storage of Sysdig Components

                  The Installer assumes the usage of a dynamic storage provider (AWS or GKE). In case these are not used in your environment, add the entries below to the values.yaml to configure static storage.

                  Based on the size entered in the values.yaml file (small/medium/large), the Installer assumes a minimum number of replicas and nodes to be provided. Enter the names of the nodes on which you will run the Cassandra, ElasticSearch, MySQL, and Postgres components of Sysdig in the values.yaml, as in the parameters and example below.

                  Parameters

                  • storageClassProvisioner: hostPath.

                  • sysdig.cassandra.hostPathNodes: At least 1 node when size is small, 3 when medium, and 6 when large.

                  • elasticsearch.hostPathNodes: At least 1 node when size is small, 3 when medium, and 6 when large.

                  • sysdig.mysql.hostPathNodes: At least 3 nodes when sysdig.mysqlHA is configured to true; otherwise at least 1 node.

                  • sysdig.postgresql.hostPathNodes: Can be ignored if Sysdig Secure is not licensed or used in this environment. If Secure is used, set this to 1, regardless of the size setting.

                  Example

                  storageClassProvisioner: hostPath
                  elasticsearch:
                    hostPathNodes:
                      - my-cool-host1.com
                      - my-cool-host2.com
                      - my-cool-host3.com
                      - my-cool-host4.com
                      - my-cool-host5.com
                      - my-cool-host6.com
                  sysdig:
                    cassandra:
                      hostPathNodes:
                        - my-cool-host1.com
                        - my-cool-host2.com
                        - my-cool-host3.com
                        - my-cool-host4.com
                        - my-cool-host5.com
                        - my-cool-host6.com
                    mysql:
                      hostPathNodes:
                        - my-cool-host1.com
                    postgresql:
                      hostPathNodes:
                        - my-cool-host1.com
                  

                  Run Only Sysdig Pods on a Node Using Taints and Tolerations

                  If you have a large shared Kubernetes cluster and want to dedicate a few nodes for just the Sysdig backend component installation, you can use the Kubernetes concept of taints and tolerations.

                  The basic process is:

                  1. Assign labels and taints to the relevant nodes.

                  2. Review the sample node-labels-and-taints values.yaml in the Sysdig github repo.

                  3. Copy that section to your own values.yaml file and edit with labels and taints you assigned.

                  Example from the sample file:

                  # To make the ‘tolerations’ code sample below functional, assign nodes the taint
                  # dedicated=sysdig:NoSchedule. E.g:
                  # kubectl taint nodes my-awesome-node01 dedicated=sysdig:NoSchedule
                    tolerations:
                      - key: "dedicated"
                        operator: "Equal"
                        value: sysdig
                        effect: "NoSchedule"
                  # To make the Label code sample below functional, assign nodes the label
                  # role=sysdig.
                  # e.g: kubectl label nodes my-awesome-node01 role=sysdig
                    nodeaffinityLabel:
                      key: role
                      value: sysdig
                  

                  Patching

                  Patching can be used to customize or “tweak” the default behavior of the Installer to accommodate the unique requirements of a specific environment. Use patching to modify the parameters that are not exposed by the values.yaml. Refer to the configuration_parameters.md for more detail about various parameters.

                  The most common use case for patching is during upgrades. When generating the differences between an existing installation and the upgrade, you may see previously customized configurations that the upgrade would overwrite, but that you want to preserve.

                  Patching Process

                  If you have run installer diff and found a configuration that you need to tweak (e.g. the installer will delete something you want to keep, or you need to add something that isn’t there), then follow these general steps:

                  • Create an overlays directory in the same location as the values.yaml.

                    This directory, and the PATCH.yaml you create for it, must be kept. The installer will use it during future upgrades of Sysdig.

                  • Create a .yaml file to be used for patching. You can name it whatever you want; we will call it PATCH.yaml for this example.

                    Patch files must include, at a minimum:

                    • apiVersion

                    • kind

                    • metadata.name

                    of the object to be patched.

                    Then you add the specific configuration required for your needs. See one example below.

                    You will need this patch definition for every Kubernetes object you want to patch.

                  • Run installer diff again and check that the outcome will be what you want.

                  • When satisfied, complete the update by running installer deploy (see Installer Upgrade (2.5.0+)).

                  If you want to add another patch, you can either add a separate .yaml file or a new YAML document separated by ---.

                  The recommended practice is to use a single patch per Kubernetes object.
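For illustration, a PATCH.yaml holding two single-object patches separated by --- might look like this (the second object name, sysdigcloud-collector, is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sysdigcloud-api
  labels:
    my-awesome-label: my-awesome-value
---
apiVersion: v1
kind: Service
metadata:
  name: sysdigcloud-collector   # hypothetical second object
  labels:
    my-awesome-label: my-awesome-value
```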

                  Example

                  Presume you have the following generated configuration:

                  apiVersion: v1
                  kind: Service
                  metadata:
                    annotations: {}
                    labels:
                      app: sysdigcloud
                      role: api
                    name: sysdigcloud-api
                    namespace: sysdigcloud
                  spec:
                    clusterIP: None
                    ports:
                    - name: api
                      port: 8080
                      protocol: TCP
                      targetPort: 8080
                    selector:
                      app: sysdigcloud
                      role: api
                    sessionAffinity: None
                    type: ClusterIP
                  

                  To Add to the Generated Configuration

                  Suppose you want to add an extra label my-awesome-label: my-awesome-value to the Service object. Then in the PATCH.yaml, you would put the following:

                  apiVersion: v1
                  kind: Service
                  metadata:
                    name: sysdigcloud-api
                    labels:
                      my-awesome-label: my-awesome-value
                  

                  Run the installer again, and the configuration would be as follows:

                  apiVersion: v1
                  kind: Service
                  metadata:
                    annotations: {}
                    labels:
                      app: sysdigcloud
                      role: api
                      my-awesome-label: my-awesome-value
                    name: sysdigcloud-api
                    namespace: sysdigcloud
                  spec:
                    clusterIP: None
                    ports:
                    - name: api
                      port: 8080
                      protocol: TCP
                      targetPort: 8080
                    selector:
                      app: sysdigcloud
                      role: api
                    sessionAffinity: None
                    type: ClusterIP
                  

                  To Remove from the Generated Configuration

                  Suppose you want to remove all the labels. Then in the PATCH.yaml, you would put the following:

                  apiVersion: v1
                  kind: Service
                  metadata:
                    name: sysdigcloud-api
                    labels:
                  

                  Run the installer again, and the configuration would be as follows:

                  apiVersion: v1
                  kind: Service
                  metadata:
                    annotations: {}
                    name: sysdigcloud-api
                    namespace: sysdigcloud
                  spec:
                    clusterIP: None
                    ports:
                    - name: api
                      port: 8080
                      protocol: TCP
                      targetPort: 8080
                    selector:
                      app: sysdigcloud
                      role: api
                    sessionAffinity: None
                    type: ClusterIP
                  
                  
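                  A related trick: assuming the installer applies these files as Kubernetes strategic merge patches (an assumption — verify against your installer version), setting a single key to null removes just that key rather than the whole map:

```yaml
# Hypothetical: remove only one label, leaving the others intact
apiVersion: v1
kind: Service
metadata:
  name: sysdigcloud-api
  labels:
    my-awesome-label: null
```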

                  2 -

                  Manual Install 3.0.0+ (Kubernetes)

                  All on-premises installations and upgrades are now scheduled with and guided by Sysdig technical account managers and professional services division. See Oversight Services Now Offered for All Installs and Upgrades .

                  For customers, the instructions in this section are for review purposes only.

                  The Sysdig platform includes both Sysdig Monitor and Sysdig Secure, which are licensed separately. All installations include Sysdig Monitor, while some of the Secure components are installed and configured as additional steps, as noted.

                  When installing the Sysdig platform with Kubernetes as the orchestrator, you install each backend component with separate kubectl commands.

                  Installation with the Installer tool is recommended from version 2.5.0 onwards.

                  To perform a manual install on OpenShift, see Manual Install (OpenShift).

                  The manual install on Kubernetes 1.9+ is described below.

                  Prerequisites

                  • Access to a running Kubernetes cluster 1.9+

                    (Note: if your environment is installed elsewhere, such as your own data center, contact Sysdig Professional Services to customize the installation instructions appropriately.)

                  • Two items from your Sysdig purchase-confirmation email:

                    • Your Sysdig license key

                    • Your Sysdig quay.io pull secret

                  • kubectl installed on your machine and communicating with the Kubernetes cluster

                    (Note that your kubectl and Kubernetes versions should match to avoid errors.)

                  • An External Load Balancer (required for production – see below)

                    If installing in a cloud-provider environment (such as AWS, GCloud, or Azure), you will deploy an HAProxy load balancer and point a DNS record to that load balancer.

                    If installing in your own data center, then you will need two DNS records, one for the collector and one for the UI.

                  • A DNS server and control over a DNS name that you can point to Sysdig

                  Consider Elasticsearch Default Privileges

                  By default, the Elasticsearch container will be installed in privileged (root-access) mode. This mode is only needed so the container can reconfigure the hosts' Linux file descriptors if necessary. See Elasticsearch’s description here.

                  If you prefer not to allow Elasticsearch to run with root access to the host, you will need to:

                  1. Set your own file descriptors on all Linux hosts in the Kubernetes cluster.

                    If one host were to go down, Kubernetes could choose a different node for Elasticsearch, so each Linux host must have the file descriptors set.

                  2. Set privileged:false in the elasticsearch-statefulset.yaml file.

                    See the step under Configure Backend Components, below, for details.

                  Configure Storage Class

                  If you are using EKS or GKE, default storage classes are provided; check for them (step 1).

                  In other environments, you may need to create a storage class (step 2).

                  Finally, enter the storageClassName in the appropriate .yaml files (step 3).

                  1. Verify whether a storage class has been created, by running the command:

                    kubectl get storageclass
                    
                  2. If no storage class has been defined, create a manifest for one, and then deploy it.

                    For example, a manifest could be named sysdigcloud-storageclass.yaml and contain the following contents (for a storage class using GP2 volumes in AWS):

                    apiVersion: storage.k8s.io/v1
                    kind: StorageClass
                    metadata:
                      name: gp2
                      annotations:
                        storageclass.beta.kubernetes.io/is-default-class: "true"
                      labels:
                        kubernetes.io/cluster-service: "true"
                        addonmanager.kubernetes.io/mode: EnsureExists
                    provisioner: kubernetes.io/aws-ebs
                    parameters:
                      type: gp2
                    

                    Now run the command:

                    kubectl apply -f sysdigcloud-storageclass.yaml
                    
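
                    On GKE, an analogous manifest would use the GCE persistent-disk provisioner instead; a sketch (the class name and disk type are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```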

                  Download the Source Files to a New Namespace

                  Sysdig provides the necessary scripts, images, and .yaml files in a GitHub repository. The first step is to clone those files and check out the latest version. (These examples use 1234.)

                  Find the current release tag from https://github.com/draios/sysdigcloud-kubernetes/releases/latest.

                  1. Run the command:

                    git clone https://github.com/draios/sysdigcloud-kubernetes.git
                    cd sysdigcloud-kubernetes
                    git checkout tags/<1234>
                    
                  2. Create a namespace called sysdigcloud:

                    kubectl create namespace sysdigcloud
                    

                  Add External Load Balancer

                  Create a TCP load balancer (e.g., an AWS NLB) that forwards ports 80, 443, and 6443 to the Kubernetes worker nodes, with a health check to /healthz on port 10253.

                  This can be done in three ways:

                  1. Use an existing external load balancer. Sysdig relies heavily on DNS; you need a DNS record pointing to the load balancer.

                  2. Create a load balancer in your cloud provider. (For example in AWS, see https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-network-load-balancer.html.) You need a DNS record that points to the load balancer. This is the fully qualified domain name required later in the config.yaml, api-ingress.yaml and/or api-ingress-with-secure.yaml.

                  3. Create a yaml with the following content and apply it to the sysdigcloud namespace. This automatically creates a load balancer in the cloud provider environment, with an external DNS name.

                    This is the fully qualified domain name required later in the config.yaml, api-ingress.yaml and/or api-ingress-with-secure.yaml.

                    ---
                    apiVersion: v1
                    kind: Service
                    metadata:
                      name: haproxy-ingress-lb-service
                    spec:
                      type: LoadBalancer
                      ports:
                      - name: http
                        port: 80
                        targetPort: 80
                      - name: https
                        port: 443
                        targetPort: 443
                      - name: https2
                        port: 6443
                        targetPort: 6443
                      selector:
                        run: haproxy-ingress
                    
                  4. Apply the changes to the sysdigcloud namespace.

                    kubectl -n sysdigcloud apply -f <your-lb-file.yaml>
                    
                  5. To get the DNS name, run the command:

                    $ kubectl get svc -o wide -n sysdigcloud
                    

                    The output shows the External-IP (DNS name):

                    NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP                           PORT(S)                                        AGE       SELECTOR
                    haproxy-ingress-lb-service   LoadBalancer   100.66.118.183  sample123.us-east-1.elb.amazonaws.com  80:31688/TCP,443:32324/TCP,6443:30668/TCP      1d        run=haproxy-ingress
                    

                  DNS Entry (For Test Environments without a Load Balancer)

                  Not for production environments.

                  Create a DNS entry for your Sysdig install: a fully qualified domain name whose A records contain all the external node IPs. This uses DNS round-robin to load-balance your clients across the Kubernetes cluster.
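
                  For example, assuming three worker nodes with the external IPs below (all names and addresses are illustrative), the BIND-style zone entries would be:

```
; illustrative round-robin A records for a test environment
sysdig.example.com.   300  IN  A  203.0.113.10
sysdig.example.com.   300  IN  A  203.0.113.11
sysdig.example.com.   300  IN  A  203.0.113.12
```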

                  Prepare the Environment

                  The install images, scripts, and other files are located in a GitHub repository: https://github.com/draios/sysdigcloud-kubernetes

                  Step 1 Configure Backend Components

                  The ConfigMap (config.yaml) is populated with information about usernames, passwords, SSL certs, and various application-specific settings.

                  The steps below give the minimum edits that should be performed in a test environment.

                  It is necessary to review and customize the entries in config.yaml before launching in a production environment.

                  See To Make Configuration Changes for the kubectl format to use for post-install edits, such as adding third-party authenticators like LDAP.

                  If you are not installing Sysdig Secure, set the following attributes to false in the config.yaml:

                  • nats.enabled: "false"

                  • nats.forward.enabled: "false"

                  1. Add your license key:

                    In config.yaml, enter the key that was emailed to you in the following parameter:

                    # Required: Sysdig Cloud license
                      sysdigcloud.license: ""
                    
                  2. Change the super admin name and password, which are the super admin credentials for the entire system. See here for details.

                    Find the settings in config.yaml here:

                      sysdigcloud.default.user: test@sysdig.com
                      # Required: Sysdig Cloud super admin user password
                      # NOTE: Change upon first login
                      sysdigcloud.default.user.password: test
                    
                  3. Change the mysql.password from change_me to desired credentials.

                    mysql.password: change_me
                      # Required: Cassandra endpoint DNS/IP. If Cassandra is deployed as a
                      # Kubernetes service, this will be the service name.
                      # If using an external database, put the proper address (the address of a
                      # single node will be sufficient)
                    
                  4. Edit the collector endpoint and api-url: Change the defaults (sysdigcloud-collector and sysdigcloud-api:443) to point to the DNS name you have established for Sysdig.

                    Note: The collector port should remain 6443.

                    collector.endpoint: <DNS_NAME>
                    collector.port: "6443"
                    api.url: https://<DNS_NAME>:443
                    
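
                    The edits in this step can be scripted. The sketch below performs the substitutions on a sample fragment; the DNS name and temp-file path are placeholders, not part of the Sysdig tooling:

```shell
#!/bin/sh
# Placeholder: use the DNS record pointing at your load balancer.
DNS_NAME="sysdig.example.com"

# Sample fragment standing in for the relevant lines of sysdigcloud/config.yaml.
cat > /tmp/config-fragment.yaml <<'EOF'
collector.endpoint: sysdigcloud-collector
collector.port: "6443"
api.url: https://sysdigcloud-api:443
EOF

# Point the collector endpoint and API URL at the DNS name; the port stays 6443.
sed -i \
  -e "s|collector.endpoint: .*|collector.endpoint: ${DNS_NAME}|" \
  -e "s|api.url: .*|api.url: https://${DNS_NAME}:443|" \
  /tmp/config-fragment.yaml

cat /tmp/config-fragment.yaml
```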
                  5. Recommended: Edit the file to set the JVM options for Cassandra, Elasticsearch, and the API, worker, and collector components.

                    (To use the AWS implicit key, edit the JVM options as described in AWS: Integrate AWS Account and CloudWatch Metrics (Optional).)

                    For installations over 100 agents, it is recommended to allocate 8 GB per JVM.

                      cassandra.jvm.options: "-Xms8G -Xmx8G"
                      elasticsearch.jvm.options: "-Xms8G -Xmx8G"
                      sysdigcloud.jvm.api.options: "-Xms8G -Xmx8G"
                      sysdigcloud.jvm.worker.options: "-Xms8G -Xmx8G"
                      sysdigcloud.jvm.collector.options: "-Xms8G -Xmx8G"
                    

                    Note: If you do not wish to use SSL between the agent and the collector, use the following settings instead:

                    cassandra.jvm.options: "-Xms8G -Xmx8G"
                    elasticsearch.jvm.options: "-Xms8G -Xmx8G"
                    sysdigcloud.jvm.api.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
                    sysdigcloud.jvm.worker.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
                    sysdigcloud.jvm.collector.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
                    

                    See also: Step 5: Set Up SSL Connectivity to the Backend.

                  6. Optional: Change the Elasticsearch container setting to non-privileged.

                    See Consider Elasticsearch Default Privileges, above.

                    To change the default setting, edit the file elasticsearch-statefulset.yaml and set privileged: false.

                    containers:
                            - name: elasticsearch
                              image: quay.io/sysdig/elasticsearch:5.6.16.15
                              securityContext:
                                privileged: false
                    
                  7. Deploy the configuration map and secrets for all services by running the commands:

                    For Sysdig Monitor:

                    kubectl -n sysdigcloud apply -f sysdigcloud/config.yaml
                    

                    To add Sysdig Secure:

                    kubectl -n sysdigcloud apply -f sysdigcloud/scanning-secrets.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-secrets.yaml
                    

                    Apply the secret for the policy advisor:

                    kubectl -n sysdigcloud apply -f sysdigcloud/policy-advisor-secret.yaml
                    
                  8. Configure DNS name in api-ingress.yaml (or api-ingress-with-secure.yaml if using Secure). (Files located in sysdigcloud/)

                    Edit host: <EXTERNAL-DNS-NAME> to match your DNS name.

                  9. Define namespace in ingress-clusterrolebinding.yaml. (File located in sysdigcloud/ingress_controller/) Edit namespace: sysdigcloud

                  Step 2 Add Storage Class to Manifests

                  Using either the existing storage class name from step 1, or the storage class name defined in the previous step, edit the storageClassName in the following .yaml files:

                  For Monitor:

                  datastores/as_kubernetes_pods/manifests/cassandra/cassandra-statefulset.yaml
                  datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-statefulset.yaml
                  datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml
                  

                  With Secure:

                  datastores/as_kubernetes_pods/manifests/postgres/postgres-statefulset.yaml
                  

                  Step 3 (Secure-Only): Edit mysql-deployment yaml

                  If using Sysdig Secure:

                  Edit the MySQL deployment to uncomment the MYSQL_EXTRADB_* environment variables.

                  This forces MySQL to create the necessary scanning database on startup.

                  File location: datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml

                   - name: MYSQL_EXTRADB_SCANNING_DBNAME
                                valueFrom:
                                  configMapKeyRef:
                                    name: sysdigcloud-config
                                    key: scanning.mysql.dbname
                              - name: MYSQL_EXTRADB_SCANNING_USER
                                valueFrom:
                                  configMapKeyRef:
                                    name: sysdigcloud-config
                                    key: scanning.mysql.user
                              - name: MYSQL_EXTRADB_SCANNING_PASSWORD
                                valueFrom:
                                  secretKeyRef:
                                    name: sysdigcloud-scanning
                                    key: scanning.mysql.password
                  

                  The scanning service will not start unless MySQL creates the scanning database.

                  Step 4 Deploy Your Quay Pull Secret

                  A specific Quay pull secret is sent via email with your license key.

                  1. Edit the file sysdigcloud/pull-secret.yaml and change the placeholder <PULL_SECRET> to the provided pull secret.

                  2. Deploy the pull secret object:

                    kubectl -n sysdigcloud apply -f sysdigcloud/pull-secret.yaml
                    

                  Step 5 Set Up SSL Connectivity to the Backend

                  SSL-secured communication is used between user browsers and the Sysdig API server(s), and between the Sysdig agent and the collectors.

                  To set this up, you must:

                  • Use existing standard certs for API and collector, or

                  • Create self-signed certificates and keys for API and collector

                  To Disable SSL between Agent and Collector

                  To disable SSL between agents and collectors, set JVM options when configuring backend components.

                  To Create Self-Signed Certs

                  Run these commands (edit to add your API_DNS_NAME and COLLECTOR_DNS_NAME):

                  openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/C=US/ST=CA/L=SanFrancisco/O=ICT/CN=<API_DNS_NAME>" -keyout server.key -out server.crt
                  openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/C=US/ST=CA/L=SanFrancisco/O=ICT/CN=<COLLECTOR_DNS_NAME>" -keyout collector.key -out collector.crt
                  
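
                  It is worth verifying that the CN in each certificate matches the DNS name that browsers and agents will use before creating the secrets. A sketch, using a placeholder CN:

```shell
#!/bin/sh
# Generate a throwaway self-signed cert (placeholder CN) and inspect its subject.
openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 \
  -subj "/C=US/ST=CA/L=SanFrancisco/O=ICT/CN=api.example.com" \
  -keyout /tmp/server.key -out /tmp/server.crt 2>/dev/null

# The printed subject should end with the CN you expect (your API DNS name).
openssl x509 -noout -subject -in /tmp/server.crt
```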

                  To Create Kubernetes Secrets

                  This uses two different certificates: one for the API/UI and one for the collector.

                  kubectl -n sysdigcloud create secret tls sysdigcloud-ssl-secret --cert=server.crt --key=server.key
                  kubectl -n sysdigcloud create secret tls sysdigcloud-ssl-secret-collector --cert=collector.crt --key=collector.key
                  

                  Step 6 (Optional) Use CA Certs for External SSL Connection

                  The Sysdig platform may sometimes open connections over SSL to certain external services, including:

                  • LDAP over SSL

                  • SAML over SSL

                  • OpenID Connect over SSL

                  • HTTPS Proxies

                  If the signing authorities for the certificates presented by these services are not well-known to the Sysdig Platform (e.g., if you maintain your own Certificate Authority), they are not trusted by default.

                  To allow the Sysdig platform to trust these certificates, use the command below to upload one or more PEM-format CA certificates. Ensure you upload all certificates in the approval chain, up to the root CA.

                  kubectl -n sysdigcloud create secret generic sysdigcloud-java-certs --from-file=certs1.crt --from-file=certs2.crt
                  

                  Install Components

                  Install Datastores and Backend Components

                  For Sysdig Monitor:

                  1. Create the datastore StatefulSets for Elasticsearch and Cassandra. Both are automatically set up with three replicas, generating full clusters.

                    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/cassandra/cassandra-service.yaml
                    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/cassandra/cassandra-statefulset.yaml
                    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-service.yaml
                    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-statefulset.yaml
                    
                  2. Wait for those processes to be running, then create the database and caching systems: MySQL and Redis.

                    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml
                    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/redis/redis-deployment.yaml
                    

                    To add Sysdig Secure: Create the PostgreSQL database:

                    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/postgres/postgres-service.yaml
                    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/postgres/postgres-statefulset.yaml
                    
                  3. Wait until datastore pods are in ready state:

                    Run the command:

                    kubectl -n sysdigcloud get pods
                    

                    Then look in the READY column to ensure all pods are ready. For example, 1/1 means one of one pods is ready.

                  4. Apply the NATS service and deployment to deliver events to Sysdig backend components:

                    kubectl -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/nats-streaming/nats-streaming-deployment.yaml
                    kubectl -n sysdigcloud apply -f  datastores/as_kubernetes_pods/manifests/nats-streaming/nats-streaming-service.yaml
                    
                  5. Apply the API deployment. Pause until all containers in the API pod are running, then apply the collector and worker deployments.

                    kubectl -n sysdigcloud apply -f sysdigcloud/api-deployment.yaml
                    
                    kubectl -n sysdigcloud apply -f sysdigcloud/collector-deployment.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/worker-deployment.yaml
                    
                  6. Create the service for the API and collector:

                    kubectl -n sysdigcloud apply -f sysdigcloud/api-headless-service.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/collector-headless-service.yaml
                    
                  7. Sysdig Secure only Create anchore-engine deployments and service (used in scanning):

                    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-service.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-core-config.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-core-deployment.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-worker-config.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/anchore-worker-deployment.yaml
                    

                    Wait 60 seconds to ensure the Anchore components are up and running. Then deploy custom Sysdig Secure scanning components:

                    kubectl -n sysdigcloud apply -f sysdigcloud/scanning-service.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/scanning-api-deployment.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/scanning-alertmgr-service.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/scanning-alertmgr-deployment.yaml
                    
                  8. Sysdig Secure only Create services, deployments, and a janitor job for the activity audit and policy advisor features:

                    kubectl -n sysdigcloud apply -f sysdigcloud/policy-advisor-service.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/activity-audit-api-service.yaml
                    
                    kubectl -n sysdigcloud apply -f sysdigcloud/activity-audit-api-deployment.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/policy-advisor-deployment.yaml
                    kubectl -n sysdigcloud apply -f sysdigcloud/activity-audit-worker-deployment.yaml
                    
                    kubectl -n sysdigcloud apply -f sysdigcloud/activity-audit-janitor-cronjob.yaml
                    

                  Connecting to the Cluster

                  Add Cluster-Admin to User (GKE/GCloud Only)

                  kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
                  

                  Add Ingress Controller

                  For Sysdig Monitor:

                  To permit incoming connections to the Sysdig API and collector, deploy the following ingress yamls.

                  kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-clusterrole.yaml
                  kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-clusterrolebinding.yaml
                  kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-role.yaml
                  kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-rolebinding.yaml
                  kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-serviceaccount.yaml
                  kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/default-backend-service.yaml
                  kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/default-backend-deployment.yaml
                  kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-configmap.yaml
                  kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-tcp-services-configmap.yaml
                  kubectl -n sysdigcloud apply -f sysdigcloud/ingress_controller/ingress-daemonset.yaml
                  

                  If NOT using Sysdig Secure, then apply the following ingress.yaml:

                  kubectl -n sysdigcloud apply -f sysdigcloud/api-ingress.yaml
                  

                  For Sysdig Secure:

                  If you ARE using Secure, replace the api-ingress.yaml with the following line:

                  kubectl -n sysdigcloud apply -f sysdigcloud/api-ingress-with-secure.yaml
                  

                  Install Complete

                  When the terminal messages indicate that installation was successfully completed:

                  • Point your browser to https://API_DNS_NAME.

                    You will be prompted to log in with the super admin credentials you set in Step 1 Configure Backend Components.

                  • Log in as Super Admin.

                    The Welcome Wizard is launched and prompts you to install your first Sysdig agent.

                  • Install the agent(s).

                    The Welcome Wizard should be populated with install parameters from your environment (access key, collector name, and collector port). For example:

                    docker run -d --name sysdig-agent --restart always --privileged --net host --pid host -e ACCESS_KEY=xxxxxxxxxx -e COLLECTOR=abc.us-west.elb.amazonaws.com -e COLLECTOR_PORT=6443 -e CHECK_CERTIFICATE=false -e TAGS=example_tag:example_value -v /var/run/docker.sock:/host/var/run/docker.sock -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro --shm-size=350m sysdig/agent
                    

                  To Make Configuration Changes

                  Replace kubectl with oc for OpenShift.

                  Update the Config Map

                  There are two ways to change the original installation parameters in the config map: edit or overwrite.

                  • To edit the config map, run the following command:

                    kubectl edit configmap/sysdigcloud-config --namespace sysdigcloud
                    

                    A text editor is presented with the config map to be edited. Enter parameters as needed, then save and quit.

                    Then restart the config map (below).

                  • To overwrite the config map with a version edited on the client side (e.g., to keep it synced in a git repository), use the following command:

                    kubectl replace -f sysdigcloud/config.yaml --namespace sysdigcloud
                    

                    Then restart the config map (below).

                  Restart Configmap

                  After updating the configmap, the Sysdig components must be restarted for the changed parameters to take effect. This can be done by forcing a rolling update of the deployments.

                  A possible way to do so is to change something innocuous, which forces a rolling update. E.g.:

                  kubectl -n sysdigcloud patch deployment [deployment] -p \
                   "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%s')\"}}}}}"
                  

                  Replace kubectl with oc for OpenShift.

                  3 -

                  Installer (Kubernetes | OpenShift) 2.5.0-3.2.2

                  For Sysdig installations on Kubernetes or OpenShift, version 2.5.0 and above.

                  The Sysdig Installer tool is a Docker image containing a collection of scripts that help automate the on-premises deployment of the Sysdig platform (Sysdig Monitor and/or Sysdig Secure), for environments using Kubernetes or OpenShift. Use the Installer to install or upgrade your Sysdig platform. It is recommended as a replacement for the earlier Kubernetes manual installation and upgrade procedures.

                  Installation Overview

                  To install, you will log in to quay.io, download a values.yaml file, provide a few basic parameters in it, and launch the Installer. In a normal installation, the rest is automatically configured and deployed.

                  You can perform a quick install if your environment has access to the internet, or a partial or full airgapped installation, as needed. Each is described below.

                  See Frequently Used Installer Configurations to:

                  • Customize or override settings

                  • Use hostPath for static storage of Sysdig components

                  • Use Kubernetes node labels and taints to run only Sysdig pods on selected nodes in a cluster

                  Prerequisites

                  The installer must be run from a machine with kubectl/oc configured with access to the target cluster where the Sysdig platform will be installed. Note that this cluster may be different than where the Sysdig agent will be deployed.

                  Requirements for Installation Machine with Internet Access

                  • Network access to Kubernetes cluster

                  • Docker

                  • Bash

                  • jq

                  • Network access to quay.io (See Docker Login to quay.io, below.)

                  • A domain name you are in control of.

                  Additional Requirements for Airgapped Environments

                  • Edited values.yaml with airgap registry details updated

                  • Network and authenticated access to the private registry

                  Access Requirements

                  • Sysdig license key (Monitor and/or Secure)

                  • Quay pull secret

                  Storage Requirements

                  You may use dynamic or static storage on a variety of platforms to store the Sysdig platform components (stateful sets). Different configuration parameters and values are used during the install, depending on which scenario you have.

                  Use Case 1: Default, undefined (AWS/GKE)

                  If you will use dynamic storage on AWS or GKE and haven’t configured any storage class there yet, then the Quick Install streamlines the process for you.

• storageClassProvisioner: Enter aws or gke. The installer will create the appropriate storage class and then use it for all the Sysdig platform stateful sets.

• storageClassName: Leave empty.

                  Use Case 2: Dynamic, predefined

It is also possible that you are using dynamic storage but have already created storage classes there. This dynamic storage could be AWS, GKE, or any other functioning dynamic storage you use. In this case, you would enter:

• storageClassProvisioner: Leave empty; anything put here would be ignored.

• storageClassName: Provide the name of the pre-configured storage class you want to use. The installer will use this storage class for all the Sysdig platform stateful sets.

                  Use Case 3: Static Storage

                  In cases where dynamic storage is not available, you can use static storage for the Sysdig stateful sets. In this case, you would use:

• storageClassProvisioner: Enter hostpath, then define the nodes for the four main Sysdig components: Elasticsearch, Cassandra, MySQL, and Postgres.

                  • See Frequently Used Installer Configurations for details.

                  Docker Login to quay.io

1. Retrieve the Quay username and password from the Quay pull secret.

  For example:

                    AUTH=$(echo <REPLACE_WITH_quaypullsecret> | base64 --decode | jq -r '.auths."quay.io".auth'| base64 --decode)
                    QUAY_USERNAME=${AUTH%:*}
                    QUAY_PASSWORD=${AUTH#*:}
                    
2. **Log in to quay.io.** Use the username and password retrieved above.

                    docker login -u "$QUAY_USERNAME" -p "$QUAY_PASSWORD" quay.io
                    

                  Quickstart Install

                  This install assumes the Kubernetes cluster has network access to pull images from quay.io.

                  1. Copy the current version values.yaml to your working directory.

                    wget https://raw.githubusercontent.com/draios/sysdigcloud-kubernetes/installer/installer/values.yaml
                    

                    If you will be editing for an OpenShift installation and want to review a sample, see openshift-with-hostpath values.yaml.

                  2. Edit the following values:

• size: Specifies the size of the cluster. Size defines CPU, Memory, Disk, and Replicas. Valid options are: small, medium, and large.

• quaypullsecret: The quay.io pull secret provided with your Sysdig purchase confirmation mail

                    • storageClassProvisioner: Review Storage Requirements, above.

                      If you have the default use case, enter aws or gke in the storageClassProvisioner field. Otherwise, refer to Use Case 2 or 3.

                    • sysdig.license: Sysdig license key provided with your Sysdig purchase confirmation mail

• sysdig.dnsName: The domain name the Sysdig APIs will be served on. Note that the master node may not be used as the DNS name when using hostNetwork mode.

                    • sysdig.collector.dnsName: (OpenShift installs only) Domain name the Sysdig collector will be served on. When not configured it defaults to whatever is configured for sysdig.dnsName. Note that the master node may not be used as the DNS name when using hostNetwork mode.

                    • deployment: (OpenShift installs only) Add deployment: openshift to the root of the values.yaml file.

• sysdig.ingressNetworking: The networking construct used to expose the Sysdig API and collector. Options are:

                      • hostnetwork: sets the hostnetworking in the ingress daemonset and opens host ports for api and collector. This does not create a Kubernetes service.

                      • loadbalancer: creates a service of type loadbalancer and expects that your Kubernetes cluster can provision a load balancer with your cloud provider.

• nodeport: creates a service of type nodeport. The node ports can be customized with:

                        sysdig.ingressNetworkingInsecureApiNodePort

                        sysdig.ingressNetworkingApiNodePort

                        sysdig.ingressNetworkingCollectorNodePort

                        When not configured, sysdig.ingressNetworking defaults to hostnetwork.

If doing an airgapped install, you would also edit the following values:

                    • airgapped_registry_name: The URL of the airgapped (internal) docker registry. This URL is used for installations where the Kubernetes cluster can not pull images directly from Quay

• airgapped_repository_prefix: This defines a custom repository prefix for airgapped_registry_name. Images are tagged and pushed as airgapped_registry_name/airgapped_repository_prefix/image_name:tag

                    • airgapped_registry_password: The password for the configured airgapped_registry_username. Ignore this parameter if the registry does not require authentication.

                    • airgapped_registry_username: The username for the configured airgapped_registry_name. Ignore this parameter if the registry does not require authentication.
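Taken together, an airgapped edit of the file might look roughly like the sketch below. All values are illustrative placeholders, and the exact nesting in the downloaded values.yaml is authoritative; this only shows the parameters named above in one place.

```yaml
# Illustrative sketch only -- start from the downloaded values.yaml;
# its structure and defaults are authoritative.
size: medium
quaypullsecret: "<quay-pull-secret-from-email>"
storageClassProvisioner: aws
sysdig:
  license: "<license-key-from-email>"
  dnsName: sysdig.example.com
  ingressNetworking: hostnetwork
airgapped_registry_name: registry.internal.example.com
airgapped_repository_prefix: sysdig
airgapped_registry_username: "<registry-user>"
airgapped_registry_password: "<registry-password>"
```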

                  3. Run the installer. (This step differs in Airgapped Installation, below.)

                    docker run \
                      -e HOST_USER=$(id -u) \
                      -e KUBECONFIG=/.kube/config \
                      -v ~/.kube:/.kube:Z \
                      -v $(pwd):/manifests:Z \
                      quay.io/sysdig/installer:
                    
                  4. See Output (below) to finish.

                  Save the values.yaml file in a secure location; it will be used for future upgrades.

                  There will also be a generated directory containing various Kubernetes configuration yaml files that were applied by the Installer against your cluster. It is not necessary to keep the generated directory, as the Installer can regenerate it consistently with the same values.yaml file.

                  Airgapped Installation Options

                  The installer can be used to install in airgapped environments, either with a multi-homed installation machine that has internet access, or in an environment with no internet access.

                  Updating Vulnerability Feed in Airgapped Environments

NOTE: Sysdig Secure users who install in an airgapped environment do not have internet access for the continuous vulnerability database updates used in image scanning. (See also: How Sysdig Image Scanning Works.)

                  As of installer version 3.2.0-9, airgapped environments can also receive periodic vulnerability database updates.

When you install with the “airgapped_” parameters enabled (see Full Airgap Install instructions), the installer automatically pushes the latest vulnerability database to your environment. Follow the steps below to reinstall or refresh the vulnerability database, or use the script and a cron job to schedule automated updates (daily, weekly, etc.).

                  To automatically update the vulnerability database, you can:

                  1. Download the image file quay.io/sysdig/vuln-feed-database:latest from the Sysdig registry to the jump box server and save it locally.

                  2. Move the file from the jump box server to the airgapped environment (if needed)

                  3. Load the image file and push it to the airgapped image registry.

                  4. Restart the pod sysdigcloud-feeds-db

                  5. Restart the pod feeds-api

The following script (feeds-database-update.sh) performs the five steps:

                  #!/bin/bash
                  QUAY_USERNAME="<change_me>"
                  QUAY_PASSWORD="<change_me>"
                  
                  # Download image
                  docker login quay.io/sysdig -u ${QUAY_USERNAME} -p ${QUAY_PASSWORD}
                  docker image pull quay.io/sysdig/vuln-feed-database:latest
                  # Save image
                  docker image save quay.io/sysdig/vuln-feed-database:latest -o vuln-feed-database.tar
                  # Optionally move image
                  mv vuln-feed-database.tar /var/shared-folder
                  # Load image remotely
                  ssh -t user@airgapped-host "docker image load -i /var/shared-folder/vuln-feed-database.tar"
                  # Push image remotely
ssh -t user@airgapped-host "docker tag quay.io/sysdig/vuln-feed-database:latest airgapped-registry/vuln-feed-database:latest"
                  ssh -t user@airgapped-host "docker image push airgapped-registry/vuln-feed-database:latest"
                  # Restart database pod
                  ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-db --replicas=0"
                  ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-db --replicas=1"
                  # Restart feeds-api pod
                  ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-api --replicas=0"
                  ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-api --replicas=1"
                  

Schedule a cron job to run the script on a chosen schedule (e.g. every day):

                  0 8 * * * feeds-database-update.sh >/dev/null 2>&1
                  

                  Airgapped with Multi-Homed Installation Machine

                  This assumes a private docker registry is used and the installation machine has network access to pull from quay.io and push images to the private registry.

                  The Prerequisites and workflow are the same as in the Quickstart Install (above) with the following exceptions:

                  • In step 2, add the airgap registry information

                  • In step 3, run the installer as follows:

                    docker run \
                      -e HOST_USER=$(id -u) \
                      -e KUBECONFIG=/.kube/config \
                      -e IMAGE_EXTRACT_PUSH=true \
                      -v ~/.kube:/.kube:Z \
                      -v $(pwd):/manifests:Z \
                      -v /var/run/docker.sock:/var/run/docker.sock:Z \
                      -v ~/.docker:/root/docker:Z \
                      quay.io/sysdig/installer:
                    

                  Full Airgap Install

                  This assumes a private docker registry is used and the installation machine does not have network access to pull from quay.io, but can push images to the private registry.

                  In this situation, a machine with network access (called the “jump machine”) will pull an image containing a self-extracting tarball which can be copied to the installation machine.

                  Requirements for jump machine

                  • Network access to quay.io

                  • Docker

                  • jq

                  Requirements for installation machine

                  • Network access to Kubernetes cluster

                  • Docker

                  • Bash

                  • tar

                  • Network and authenticated access to the private registry

                  • Edited values.yaml with airgap registry details updated

• Host disk space requirements: /tmp > 4 GB; the directory from which the installer is run > 8 GB; /var/lib/docker > 4 GB.

                    NOTE: The environment variable TMPDIR can be used to override the /tmp directory.
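The disk minimums above can be sanity-checked before running the tarball. A rough preflight sketch (the need_gb helper is introduced here for illustration; thresholds mirror the documented minimums, and the TMPDIR override is honored):

```shell
#!/usr/bin/env bash
# Preflight sketch: warn if any documented disk minimum is not met.
# need_gb <path> <min_gb>: compare available space against the minimum.
need_gb() {
  [ -e "$1" ] || { echo "SKIP: $1 does not exist"; return 0; }
  local avail_kb
  avail_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -lt $(( $2 * 1024 * 1024 )) ]; then
    echo "WARN: $1 has less than $2 GB free"
  else
    echo "OK: $1"
  fi
}
need_gb "${TMPDIR:-/tmp}" 4    # /tmp (or the TMPDIR override) > 4 GB
need_gb "$(pwd)" 8             # directory the installer runs from > 8 GB
need_gb /var/lib/docker 4      # docker storage > 4 GB
```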

                  Workflow

                  On the Jump Machine

1. Follow the Docker Login to quay.io steps.

                  2. Pull the image containing the self-extracting tar:

                    docker pull quay.io/sysdig/installer:-uber
                    
                  3. Extract the tarball:

                    docker create --name uber_image quay.io/sysdig/installer:-uber
                    docker cp uber_image:/sysdig_installer.tar.gz .
                    docker rm uber_image
                    
                  4. Copy the tarball to the installation machine.

                  On the Installation Machine:

                  1. Copy the current version values.yaml to your working directory.

                    wget https://raw.githubusercontent.com/draios/sysdigcloud-kubernetes/installer/installer/values.yaml
                    
                  2. Edit the following values:

• size: Specifies the size of the cluster. Size defines CPU, Memory, Disk, and Replicas. Valid options are: small, medium, and large.

• quaypullsecret: The quay.io pull secret provided with your Sysdig purchase confirmation mail

                    • storageClassProvisioner: Review Storage Requirements, above.

                      If you have the default use case, enter aws or gke in the storageClassProvisioner field. Otherwise, refer to Use Case 2 or 3.

                    • sysdig.license: Sysdig license key provided with your Sysdig purchase confirmation mail

• sysdig.dnsName: The domain name the Sysdig APIs will be served on. Note that the master node may not be used as the DNS name when using hostNetwork mode.

                    • sysdig.collector.dnsName: (OpenShift installs only) Domain name the Sysdig collector will be served on. When not configured it defaults to whatever is configured for sysdig.dnsName. Note that the master node may not be used as the DNS name when using hostNetwork mode.

                    • deployment: (OpenShift installs only) Add deployment: openshift to the root of the values.yaml file.

• sysdig.ingressNetworking: The networking construct used to expose the Sysdig API and collector. Options are:

                      • hostnetwork: sets the hostnetworking in the ingress daemonset and opens host ports for api and collector. This does not create a Kubernetes service.

                      • loadbalancer: creates a service of type loadbalancer and expects that your Kubernetes cluster can provision a load balancer with your cloud provider.

• nodeport: creates a service of type nodeport. The node ports can be customized with:

                        sysdig.ingressNetworkingInsecureApiNodePort

                        sysdig.ingressNetworkingApiNodePort

                        sysdig.ingressNetworkingCollectorNodePort

                    • airgapped_registry_name: The URL of the airgapped (internal) docker registry. This URL is used for installations where the Kubernetes cluster can not pull images directly from Quay

• airgapped_repository_prefix: This defines a custom repository prefix for airgapped_registry_name. Images are tagged and pushed as airgapped_registry_name/airgapped_repository_prefix/image_name:tag

                    • airgapped_registry_password: The password for the configured airgapped_registry_username. Ignore this parameter if the registry does not require authentication.

                    • airgapped_registry_username: The username for the configured airgapped_registry_name. Ignore this parameter if the registry does not require authentication.

                  3. Copy the tarball file to the directory where you have your values.yaml file.

                  4. Run the tar file:

                    bash sysdig_installer.tar.gz

NOTE: The above step extracts images, runs the installer, and pushes images to the remote repository in a single step. The extract/push steps can be redundant for successive installer runs. Setting IMAGE_EXTRACT_PUSH=false runs only the installer:

                    IMAGE_EXTRACT_PUSH=false bash sysdig_installer.tar.gz

                  5. See Output (below) to finish.

                  Save the values.yaml file in a secure location; it will be used for future upgrades.

                  There will also be a generated directory containing various Kubernetes configuration yaml files that were applied by the Installer against your cluster. It is not necessary to keep the generated directory, as the Installer can regenerate it consistently with the same values.yaml file.

                  Output

                  A successful installation should display output in the terminal such as:

                  All Pods Ready.....Continuing
                  Congratulations, your Sysdig installation was successful!
                  You can now login to the UI at "https://awesome-domain.com:443" with:
                  
                  username: "configured-username@awesome-domain.com"
                  password: "awesome-password"
                  


                  Additional Installer Resources

                  4 -

                  Manual Install (OpenShift)

                  All on-premises installations and upgrades are now scheduled with and guided by Sysdig technical account managers and professional services division. See Oversight Services Now Offered for All Installs and Upgrades .

                  For customers, the instructions in this section are for review purposes only.

                  As of Sysdig Platform v 2.5.0, a semi-automated install option is available and is preferred.

                  This section describes how to install the backend components of the Sysdig platform using an existing OpenShift cluster. It applies to backend versions 1929 and higher.

                  Introduction

                  The Sysdig platform includes both Sysdig Monitor and Sysdig Secure, which are licensed separately. All installations include Sysdig Monitor, while some of the Secure components are installed and configured as additional steps within the overall installation process. When installing the Sysdig platform on OpenShift manually, you will install each backend component with separate oc commands.

                  Prerequisites

                  Overview

                  • Access to a running OpenShift 3.11+ instance

                  • Two items from your Sysdig purchase-confirmation email:

                    • Your Sysdig license key

                    • Your Sysdig quay.io pull secret

• oc tools installed on your machine and communicating with the OpenShift cluster. (Note that your oc and OpenShift versions should match to avoid errors.)

                  DNS Preparation

For more information on OpenShift’s DNS requirements, see the OpenShift documentation.

                  • Option 1: DNS without Wildcard

You need to request two different DNS records from your DNS team: one for the Sysdig API/UI and another for the Sysdig collector. These records should point to your infrastructure nodes and are the two routes that will be exposed, e.g., sysdig.api.example.com and sysdig.collector.example.com.

                  • Option 2: DNS with Wildcard

With wildcard DNS, you do not have to make an official request from the DNS team. Your implementation team can pick any two DNS names to use for the API/UI and collector. These will be exposed to the infrastructure nodes once the configuration is completed (e.g., sysdig.api.example.com and sysdig.collector.example.com).
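Whichever option you choose, it is worth confirming that both records resolve before wiring them into the configuration. A small sketch (the hostnames are the examples above; assumes a Linux host where getent is available):

```shell
#!/usr/bin/env bash
# Sketch: check that the API and collector DNS records resolve.
# Replace the example hostnames with the records you requested.
check_dns() {
  # getent consults the system resolver (hosts file + DNS)
  if getent hosts "$1" >/dev/null 2>&1; then
    echo "resolves: $1"
  else
    echo "MISSING: $1"
  fi
}
check_dns sysdig.api.example.com
check_dns sysdig.collector.example.com
```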

                  SSL Certificate Preparation

                  Step 5: Set Up SSL Connectivity to the Backend discusses how to implement SSL; decide ahead of time whether you will use SSL with wildcard or without.

                  • SSL with Wildcard

                    With wildcard SSL, you use the same certificate for both the API and the collector.

                  • SSL without Wildcard

                    You need two SSL certs, one for each DNS record.

                  Consider Elasticsearch Default Privileges

                  By default, the Elasticsearch container will be installed in privileged (root-access) mode. This mode is only needed so the container can reconfigure the hosts' Linux file descriptors if necessary. See Elasticsearch’s description here.

                  If you prefer not to allow Elasticsearch to run with root access to the host, you will need to:

                  1. Set your own file descriptors on all Linux hosts in the Kubernetes cluster.

                    If one host were to go down, Kubernetes could choose a different node for Elasticsearch, so each Linux host must have the file descriptors set.

                  2. Set privileged:false in the elasticsearch-statefulset.yaml file.

See the step under Configure Backend Components, below, for details.
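As a reference for step 1, the host-level settings Elasticsearch typically requires look roughly like the fragments below. These values are the common Elasticsearch 5.x defaults, stated here as an assumption; verify them against the Elasticsearch documentation for your version before applying them to every node.

```
# /etc/sysctl.d/99-elasticsearch.conf  (load with: sysctl --system)
vm.max_map_count = 262144

# /etc/security/limits.d/99-elasticsearch.conf
* soft nofile 65536
* hard nofile 65536
```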

                  Prepare the Environment

                  Step 1 Download and Unpack the Latest Release

                  1. Download the latest release from https://github.com/draios/sysdigcloud-kubernetes/releases/latest

2. Unpack the tarball.

                    The source link has the format: https://github.com/draios/sysdigcloud-kubernetes/archive/<v1234>.tar.gz. To unpack it, run the following commands (replacing version number as appropriate):

                    wget https://github.com/draios/sysdigcloud-kubernetes/archive/<v1234>.tar.gz
tar zxf <v1234>.tar.gz
                    cd sysdigcloud-kubernetes-<1234>
                    
3. Create a new project called sysdigcloud:

                    oc new-project sysdigcloud
                    
4. Apply the correct security contexts to the namespace. (This allows you to run privileged containers in the sysdigcloud namespace.)

                    oc adm policy add-scc-to-user anyuid -n sysdigcloud -z default
                    oc adm policy add-scc-to-user privileged -n sysdigcloud -z default
                    

                  Step 2: Configure Backend Components

                  The ConfigMap (config.yaml) is populated with information about usernames, passwords, SSL certs, and various application-specific settings.

                  The steps below give the minimum edits that should be performed in a test environment.

                  It is necessary to review and customize the entries in config.yaml before launching in a production environment.

                  See Making Configuration Changes, below, for the oc format to use for post-install edits, such as adding 3rd-party authenticators such as LDAP.

                  1. Add your license key:

                    In config.yaml, enter the key that was emailed to you in the following parameter:

                    # Required: Sysdig Cloud license
                      sysdigcloud.license: ""
                    
2. Change the super admin name and password; these are the super admin credentials for the entire system. See here for details.

                    Find the settings in config.yaml here:

                     sysdigcloud.default.user: test@sysdig.com
                      # Required: Sysdig Cloud super admin user password
                      # NOTE: Change upon first login
                      sysdigcloud.default.user.password: test
                    
3. **Edit the collector endpoint and API URL:** Change the placeholder to point to the DNS names you have established for Sysdig.

                    Remember that you must have defined one name for the collector and another for the API URL.

                    Note: Change the collector port to 443.

                    collector.endpoint: <COLLECTOR_DNS_NAME>
                    collector.port: "443"
                    api.url: https://<API_DNS_NAME>:443
                    
4. Recommended: edit the file to set the JVM options for Cassandra, Elasticsearch, and the API, worker, and collector components as well.

                    (To use the AWS implicit key, edit the JVM options as described in AWS: Integrate AWS Account and CloudWatch Metrics (Optional).)

                    For installations over 100 agents, it is recommended to allocate 8 GB of heap per JVM.

                      cassandra.jvm.options: "-Xms8G -Xmx8G"
                      elasticsearch.jvm.options: "-Xms8G -Xmx8G"
                      sysdigcloud.jvm.api.options: "-Xms4G -Xmx8G"
                      sysdigcloud.jvm.worker.options: "-Xms4G -Xmx8G"
                      sysdigcloud.jvm.collector.options: "-Xms4G -Xmx8G"
                    

                    Note: If you do not wish to use SSL between the agent and the collector, use the following settings instead:

                    cassandra.jvm.options: "-Xms8G -Xmx8G"
                    elasticsearch.jvm.options: "-Xms8G -Xmx8G"
                    sysdigcloud.jvm.api.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
                    sysdigcloud.jvm.worker.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
                    sysdigcloud.jvm.collector.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
                    
5. Optional: Change the Elasticsearch container setting to non-privileged.

                    See Consider Elasticsearch Default Privileges, above.

                    To change the default setting, edit the file elasticsearch-statefulset.yaml and set privileged: false.

                    containers:
                            - name: elasticsearch
                              image: quay.io/sysdig/elasticsearch:5.6.16.15
                              securityContext:
                                privileged: false
                    
                  6. Deploy the configuration maps and secrets for all services by running the commands:

                    For Sysdig Monitor:

                    oc -n sysdigcloud apply -f sysdigcloud/config.yaml
                    
7. **(Sysdig Secure only) Edit and apply secrets for Anchore and the scanning component:** Edit the yaml files:

                    scanning-secrets.yaml

                    stringData:
                      scanning.mysql.password: change_me
                    

anchore-secrets.yaml

                    stringData:
                      anchore.admin.password: change_me
                      anchore.db.password: change_me
                    

                    policy-advisor-secret.yaml

                    stringData:
                      padvisor.mysql.password: change_me
                    

                    Then apply the files:

                    oc -n sysdigcloud apply -f sysdigcloud/scanning-secrets.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/anchore-secrets.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/policy-advisor-secret.yaml
                    
                  8. Edit the API DNS name in either api-ingress.yaml or api-ingress-with-secure.yaml (if using Secure).

                    The files are located in sysdigcloud/

                     spec:
                       rules:
                         - host: <API_DNS_NAME>
                    ...
                    
                     tls:
                         - hosts:
                             - <API_DNS_NAME>
                           secretName: sysdigcloud-ssl-secret
                    
                  9. Edit the collector DNS name in the file openshift-collector-router.yaml. Use the collector DNS name you created in the Prerequisites.

                    The file is located in sysdigcloud/openshift/.

                    spec:
                      host: <COLLECTOR_DNS_NAME>
                    

                  Step 3 (Secure-Only): Edit mysql-deployment.yaml

If using Sysdig Secure:

                  Edit the MySQL deployment to uncomment the MYSQL_EXTRADB_* environment variables. This forces MySQL to create the necessary scanning database on startup.

                  File location: datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml

                   - name: MYSQL_EXTRADB_SCANNING_DBNAME
                                valueFrom:
                                  configMapKeyRef:
                                    name: sysdigcloud-config
                                    key: scanning.mysql.dbname
                              - name: MYSQL_EXTRADB_SCANNING_USER
                                valueFrom:
                                  configMapKeyRef:
                                    name: sysdigcloud-config
                                    key: scanning.mysql.user
                              - name: MYSQL_EXTRADB_SCANNING_PASSWORD
                                valueFrom:
                                  secretKeyRef:
                                    name: sysdigcloud-scanning
                                    key: scanning.mysql.password
                  

                  The scanning service will not start unless MySQL creates the scanning database.

                  Step 4: Deploy Your Quay Pull Secret

                  A specific Quay pull secret is sent via email with your license key.

1. Edit the file sysdigcloud/pull-secret.yaml and replace the placeholder <PULL_SECRET> with the provided pull secret.

                    vi sysdigcloud/pull-secret.yaml
                    
                        ---
                        apiVersion: v1
                        kind: Secret
                        metadata:
                          name: sysdigcloud-pull-secret
                        data:
                          .dockerconfigjson: <PULL_SECRET>
                        type: kubernetes.io/dockerconfigjson
                    
                  2. Deploy the pull secret object:

                    oc -n sysdigcloud apply -f sysdigcloud/pull-secret.yaml
                    
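The <PULL_SECRET> value you receive is a base64-encoded Docker config JSON. If you ever need to inspect or reconstruct one, the structure looks like the sketch below; the robot-account credentials here are hypothetical placeholders, not real values:

```shell
# Sketch: build a base64 .dockerconfigjson value from hypothetical Quay
# robot-account credentials (sysdig+example / example-token are placeholders).
AUTH=$(printf 'sysdig+example:example-token' | base64)
printf '{"auths":{"quay.io":{"auth":"%s"}}}' "$AUTH" > /tmp/dockerconfig.json
base64 < /tmp/dockerconfig.json | tr -d '\n' > /tmp/pull-secret.b64
# Round-trip check: decoding yields the original JSON.
base64 -d < /tmp/pull-secret.b64
```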

                  Step 5: Set Up SSL Connectivity to the Backend

                  SSL-secured communication is used between user browsers and the Sysdig API server(s), and between the Sysdig agent and the collectors.

                  To set this up, you must:

                  • Use an existing wildcard SSL certificate and key, or

                  • Use existing standard certs for API and collector, or

                  • Create self-signed certificates and keys for API and collector

                  If you are not using a wildcard SSL certificate, you must use two separate certificates: one for the API URL and one for the collector.

                  • To disable SSL between agent and collector:

                    To disable SSL between agent and collectors, you set a JVM option when configuring backend components (below).

                  • To create self-signed certs:

                    Run these commands (edit to add your API_DNS_NAME and COLLECTOR_DNS_NAME):

                    openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/C=US/ST=CA/L=SanFrancisco/O=ICT/CN=<API_DNS_NAME>" -keyout server.key -out server.crt
                    openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/C=US/ST=CA/L=SanFrancisco/O=ICT/CN=<COLLECTOR_DNS_NAME>" -keyout collector.key -out collector.crt
                    
                  • To use an existing wildcard cert:

                    Obtain the respective server.crt and server.key files.
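If you created self-signed certs, you can confirm the CN landed correctly with openssl. The sketch below generates a throwaway cert and inspects its subject; collector.example.com is a stand-in for your real <COLLECTOR_DNS_NAME>:

```shell
# Sketch: generate a throwaway self-signed cert and confirm its subject CN.
# collector.example.com is a stand-in, not a real collector DNS name.
openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 \
  -subj "/C=US/ST=CA/L=SanFrancisco/O=ICT/CN=collector.example.com" \
  -keyout /tmp/collector.key -out /tmp/collector.crt 2>/dev/null
openssl x509 -in /tmp/collector.crt -noout -subject
```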

                  To Create Kubernetes Secrets for the Certs

                  With Wildcard

                  Uses the same certificate for both the API/UI and the collector.

                  Run these commands:

                  oc -n sysdigcloud create secret tls sysdigcloud-ssl-secret --cert=server.crt --key=server.key
                  oc -n sysdigcloud create secret tls sysdigcloud-ssl-secret-collector --cert=server.crt --key=server.key
                  

                  Without Wildcard

                  Uses two different certificates, one for the API/UI, and one for the collector.

                  Run these commands:

                  oc -n sysdigcloud create secret tls sysdigcloud-ssl-secret --cert=server.crt --key=server.key
                  oc -n sysdigcloud create secret tls sysdigcloud-ssl-secret-collector --cert=collector.crt --key=collector.key
                  

                  Step 6: (Optional) Use CA Certs for External SSL Connections

                  The Sysdig platform may sometimes open connections over SSL to certain external services, including:

                  • LDAP over SSL

                  • SAML over SSL

                  • OpenID Connect over SSL

                  • HTTPS Proxies

                  If the signing authorities for the certificates presented by these services are not well-known to the Sysdig Platform (e.g., if you maintain your own Certificate Authority), they are not trusted by default.

                  To allow the Sysdig platform to trust these certificates, use the command below to upload one or more PEM-format CA certificates. You must ensure you’ve uploaded all certificates in the CA approval chain to the root CA.

                  oc -n sysdigcloud create secret generic sysdigcloud-java-certs --from-file=certs1.crt --from-file=certs2.crt
                  

                  Install Components (OpenShift)

                  Edit storageClassName Parameters

                  You need a storage class; step 2 shows how to create one if needed.

                  Enter the storageClassName in the appropriate .yaml files (see step 3).

                  1. Verify whether a storage class has been created, by running the command:

                    oc get storageclass
                    
                  2. If no storage class has been defined, create a manifest for one, and then deploy it.

                    For example, a manifest could be named sysdigcloud-storageclass.yaml and contain the following contents (for a storage class using GP2 volumes in AWS):

                    apiVersion: storage.k8s.io/v1
                    kind: StorageClass
                    metadata:
                      name: gp2
                      labels:
                        kubernetes.io/cluster-service: "true"
                        addonmanager.kubernetes.io/mode: EnsureExists
                    provisioner: kubernetes.io/aws-ebs
                    parameters:
                      type: gp2
                    

                    Now run the command:

                    oc apply -f sysdigcloud-storageclass.yaml
                    
                  3. Using either the existing storage class name from step 1, or the storage class name defined in step 2, edit the storageClassName in the following .yaml files:

                    For Monitor:

                    datastores/as_kubernetes_pods/manifests/cassandra/cassandra-statefulset.yaml
                    datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-statefulset.yaml
                    datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml
                    

                    With Secure:

                    datastores/as_kubernetes_pods/manifests/postgres/postgres-statefulset.yaml
                    

                    In each file, the code snippet looks the same:

                    volumeClaimTemplates:
                     - metadata:
                         name: data
                       spec:
                         accessModes: ["ReadWriteOnce"]
                         resources:
                           requests:
                             storage: 50Gi
                         storageClassName: <STORAGECLASS_NAME>
                    
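The edit can be scripted with sed. The sketch below demonstrates on a scratch file, assuming gp2 as the class name from step 2; in practice you would run the same substitution over each manifest listed above:

```shell
# Sketch: replace the storage class placeholder in a scratch manifest.
# gp2 is assumed from the storage class example in step 2.
cat > /tmp/demo-statefulset.yaml <<'EOF'
      storageClassName: <STORAGECLASS_NAME>
EOF
sed -i 's/<STORAGECLASS_NAME>/gp2/' /tmp/demo-statefulset.yaml
grep 'storageClassName' /tmp/demo-statefulset.yaml
```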

                  Install Datastores and Backend Components

                  For Sysdig Monitor

                  1. Create the datastore statefulsets for Elasticsearch and Cassandra. Elasticsearch and Cassandra are automatically set up with --replica=3, generating full clusters.

                    oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/cassandra/cassandra-service.yaml
                    oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/cassandra/cassandra-statefulset.yaml
                    oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-service.yaml
                    oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-statefulset.yaml
                    
                  2. Wait for those processes to be running, then create the MySQL and Redis databases:

                    oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml
                    oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/redis/redis-deployment.yaml
                    

                    To add Sysdig Secure: Create the PostgreSQL database:

                    oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/postgres/postgres-service.yaml
                    oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/postgres/postgres-statefulset.yaml
                    
                  3. Wait until datastore pods are in ready state, then deploy the backend deployment sets (worker, collector, and API).

                    Run the command:

                    oc -n sysdigcloud get pods
                    

                    Then look in the READY column to ensure all pods are ready. For example, a 1/1 means that 1 of 1 containers in the pod is ready.

                  4. Apply the NATS service and deployment to deliver events to Sysdig backend components:

                    oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/nats-streaming/nats-streaming-deployment.yaml
                    oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/nats-streaming/nats-streaming-service.yaml
                    
                  5. Then deploy the backend deployment sets (worker, collector, and API). Pause for 60 seconds after creating the API deployment.

                    oc -n sysdigcloud apply -f sysdigcloud/api-deployment.yaml
                    
                    oc -n sysdigcloud apply -f sysdigcloud/openshift/openshift-collector-deployment.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/worker-deployment.yaml
                    
                  6. Create the service for the API and collector:

                    oc -n sysdigcloud apply -f sysdigcloud/api-headless-service.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/openshift/openshift-collector-service.yaml
                    
                  7. For Sysdig Secure: Wait for the API, worker, and collector to come up before proceeding.

                    Then create anchore-engine deployments and service (used in scanning):

                    oc -n sysdigcloud apply -f sysdigcloud/anchore-service.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/anchore-core-config.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/anchore-core-deployment.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/anchore-worker-config.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/anchore-worker-deployment.yaml
                    

                    Wait 60 seconds to ensure the core-deployment is in Running status, then deploy the rest of the Secure-related yamls:

                    oc -n sysdigcloud apply -f sysdigcloud/scanning-service.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/scanning-api-deployment.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/scanning-alertmgr-service.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/scanning-alertmgr-deployment.yaml
                    
                  8. Sysdig Secure only: Create services, deployments, and a janitor job for the activity audit and policy advisor features:

                    oc -n sysdigcloud apply -f sysdigcloud/policy-advisor-service.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/activity-audit-api-service.yaml
                    
                    oc -n sysdigcloud apply -f sysdigcloud/activity-audit-api-deployment.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/policy-advisor-deployment.yaml
                    oc -n sysdigcloud apply -f sysdigcloud/activity-audit-worker-deployment.yaml
                    
                    oc -n sysdigcloud apply -f sysdigcloud/activity-audit-janitor-cronjob.yaml
                    
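The readiness check from step 3 can be scripted rather than eyeballing the READY column. A minimal sketch, where the canned sample stands in for real "oc -n sysdigcloud get pods" output:

```shell
# Sketch: fail unless every pod's READY column reports x/x (all containers up).
# The printf sample stands in for real `oc -n sysdigcloud get pods` output.
all_ready() {
  awk 'NR > 1 { split($2, a, "/"); if (a[1] != a[2]) exit 1 }'
}
printf 'NAME READY STATUS\nsysdigcloud-cassandra-0 1/1 Running\nsysdigcloud-mysql-xyz 1/1 Running\n' \
  | all_ready && echo "all datastore pods ready"
```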

                  Configure Access for Connectivity to the Cluster

                  Apply the appropriate ingress yaml. (The API_DNS name was entered in step 7 of Step 2: Configure Backend Components.) This configures the route to the Sysdig UI.

                  For Sysdig Monitor

                  oc -n sysdigcloud apply -f sysdigcloud/api-ingress.yaml
                  

                  With Sysdig Secure:

                  oc -n sysdigcloud apply -f sysdigcloud/api-ingress-with-secure.yaml
                  

                  Configure connectivity to the collector for the agent:

                  oc -n sysdigcloud apply -f sysdigcloud/openshift/openshift-collector-router.yaml
                  

                  To Make Configuration Changes

                  Replace kubectl with oc for OpenShift.

                  Update the Config Map

                  There are two ways to change the original installation parameters in the config map: edit or overwrite.

                  • To edit the config map, run the following command:

                    kubectl edit configmap/sysdigcloud-config --namespace sysdigcloud
                    

                    A text editor is presented with the config map to be edited. Enter parameters as needed, then save and quit.

                    Then restart the config map (below).

                  • To overwrite the config map with a version edited on the client side (e.g., to keep it synced in a git repository), use the following command:

                    kubectl replace -f sysdigcloud/config.yaml --namespace sysdigcloud
                    

                    Then restart the config map (below).

                  Restart Configmap

                  After updating the configmap, the Sysdig components must be restarted for the changed parameters to take effect. This can be done by forcing a rolling update of the deployments.

                  A possible way to do so is to change something innocuous, which forces a rolling update. E.g.:

                  kubectl -n sysdigcloud patch deployment [deployment] -p \
                   "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%s')\"}}}}}"
                  

                  Replace kubectl with oc for OpenShift.
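The patch simply stamps the pod template with the current epoch time as an annotation; any change to the template triggers a rolling restart. Expanded, the payload looks like the sketch below, built without touching a cluster:

```shell
# Sketch: the JSON payload the patch command sends. Changing the annotation
# value alters the pod template, which forces a rolling update.
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%s')\"}}}}}"
echo "$PATCH"
```

To roll every deployment in the namespace, the same payload could be applied in a loop, e.g. for d in $(oc -n sysdigcloud get deploy -o name); do oc -n sysdigcloud patch "$d" -p "$PATCH"; done (a sketch, assuming you want all deployments restarted).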

                  5 -

                  Install with Replicated

                  Sysdig will deprecate support for Replicated installs in the coming months. If you are a new customer considering installing with Replicated, please contact Sysdig support.

                  Understand the Choice Points

                  When planning an on-premises installation, the following choice points must be decided upon.

                  1. Infrastructure Managers: To install Sysdig on-premises, administrators choose one of two infrastructure managers:

                    • Kubernetes (see Installer (Kubernetes | OpenShift), or

                    • Replicated: an easy-to-use orchestrator that includes a GUI management tool.

                      This guide describes how to install the Replicated client and use it to install and manage the Sysdig platform.

                  2. Single-Host or Multi-Host Install: For test or proof-of-concept installations, a single-host install will include all components; for production, a distributed environment is needed.

                  3. Airgapped or non-airgapped environment:

                    If your environment is accessible to the Internet during the install process, then the installation options include both script-based or GUI-based.

                    In airgapped environments (no Internet access), you must download components into your airgapped repository, and can only use the GUI-based installation.

                    See Airgapped Installation.

                  4. Where to put the Replicated Management Console: When installing on-premises using Replicated as the orchestrator, the following Replicated components will be installed on your system:

                    • Replicated UI (on a host you designate to host the Replicated Management Console)

                    • Replicated retraced containers that handle logging (on the Management Console host only)

                    • Replicated operator component (will go on all hosts)

                  In a multi-host installation, one server will be the Replicated Management Console host. The system load for these components is minor.

                  No matter which installation options you choose, you will use the Replicated GUI for post-installation configuration and management.

                  Understand the Installation Process

                  1. Review and complete the Pre-Install requirements.

                  2. If installing on multiple nodes, decide which node will host the Replicated Management Console.

                  3. If using an airgapped environment, set up for an Airgapped Installation.

                  4. Install the Replicated Client on a host.

                  5. Log In to the Replicated Management Console and set the Replicated Management Console Password.

                  6. Configure Sysdig Admin Password and Basic Settings.

                  7. Configure Sysdig Application Advanced Settings (if necessary).

                  8. Complete Distributed Install Steps (if necessary).

                  9. Restart the host(s).

                  5.1 -

                  Airgapped Installation

                  Sysdig will deprecate support for Replicated installs in the coming months. If you are a new customer considering installing with Replicated, please contact Sysdig support.

                  To install the Sysdig platform on-premises, in an environment that has no inbound or outbound paths available to internet traffic, you must use the Replicated GUI-based installation option. No script-based option is currently available.

                  Perform the following steps to download the required Sysdig installation files, the Replicated components, and the Sysdig license file, and save them to a repository on your airgapped server. Then perform the setup steps in the Replicated Management Console, as described below.

                  Prerequisites

                  A server instance with Docker version 1.7.1 or later installed is required prior to installation.

                  The Replicated .airgap installation script does not install docker-engine. Sysdig recommends using the latest version of Docker available for the server operating system.

                  For more information on installing Docker in an airgapped environment, refer to the Installing Docker in an Airgapped Environment documentation.

                  Instructions

                  Download Components to a Repository

                  1. Download the latest Sysdig installation files using the links provided by the Sysdig Sales Engineer:

                    • The Sysdig platform application .airgap package

                    • The Sysdig application license file (.rli)

                    • (Optional) The Sysdig Agent Docker image

                  2. Download the latest Replicated installation file from:

                    https://s3.amazonaws.com/replicated-airgap-work/replicated.tar.gz

                  3. Copy all downloaded files to a designated location on your airgapped server. For example:

                    /var/tmp/sysdig

                    (Note this path to be used when you complete the Install Components (Replicated).)

                  4. Open a command shell on the airgapped server and extract the replicated.tar.gz file:

                    sudo tar xzvf replicated.tar.gz
                    

                  Install and Set Up Replicated Management Infrastructure

                  1. Run the following command to install the Replicated infrastructure manager:

                    sudo cat ./install.sh | sudo bash -s airgap

                  2. In a browser, navigate to the Replicated Management Console: https://server_address:8800 (replace server_address with the server name/IP address).

                  3. Accept the default self-signed certificate, or provide a custom one, and click Continue.

                  4. On the next screen, once the “preflight” checks have been resolved, select the Airgapped option, and click Continue.

                  5. Upload the .rli license file.

                  6. Provide a path to the Sysdig application .airgap file.

                    Should you need to upgrade an airgapped license at a future time, see Upgrade an On-Premises License. For general license information, see Subscription.

                  Complete the Installation Steps

                  Continue with “Setting the Replication Management Password” and the rest of the installation steps in Install Components (Replicated).

                  5.2 -

                  Install Components (Replicated)

                  Sysdig will deprecate support for Replicated installs in the coming months. If you are a new customer considering installing with Replicated, please contact Sysdig support.

                  You can use the Replicated UI to install the Sysdig platform on either a single host or on multiple hosts. If multi-host, decide which machine will also run the Replicated Admin Console and begin there.

                  If your environment is “airgapped” (no access to inbound or outbound internet traffic), there are some setup steps you must perform before doing the GUI-based Replicated installation.

                  See Airgapped Installation for details.

                  Install the Replicated Client

                  Log in to the chosen machine with a shell and run a command to install the Replicated components. You can also install Docker if it is not already present in the environment.

                  1. Log into the designated server instance with SSH.

                  2. Run the following commands:

                    a. To install the Replicated Infrastructure and Docker:

                    sudo curl -sSL https://install.sysdigcloud.com/docker | sudo bash
                    

                    b. If Docker is already installed on the server instance, add -s -- no-docker to the command:

                    sudo curl -sSL https://install.sysdigcloud.com/docker | sudo bash -s -- no-docker
                    

                    c. If installing the Replicated Infrastructure behind a proxy, modify the installation command as shown below:

                    sudo curl -sSL -x http://<proxy>:<port> -o /tmp/sdc-onpremises-installer.sh https://install.sysdigcloud.com/docker && bash /tmp/sdc-onpremises-installer.sh http-proxy=http://<proxy>:<port>
                    

                  Define Basic Settings & License Info

                  Log In to the Replicated Admin Console and Set the SSL Certificate

                  1. As prompted, open the Replicated Client at https://<yourserver>:8800.

                  2. Supply the DNS hostname for the Replicated Admin Console.

                  3. Accept the self-signed certificate, or upload a custom SSL certificate and private key.

                    Note: If a self-signed certificate is uploaded, it must include the end user, all intermediate, and the root certificates, as the certificate will be used by the Sysdig platform, as well as for the Replicated Admin Console.

                    To later replace a self-signed cert with a custom cert, see Replace a Self-Signed Cert with Custom Cert.

                  4. Click the Choose License button, and upload the Sysdig license file supplied from Sysdig Sales.

                  5. Choose the Online installation option if prompted.

                  Set the Replicated Admin Console Password

                  Once the Sysdig license validation is complete, secure the Replicated Admin Console using a local password, LDAP user account, or anonymous access (insecure).

                  Sysdig recommends securing the console with either a local password or LDAP user account.

                  Click Continue.

                  Configure Sysdig Super Admin Password and Basic Settings

                  After clicking Continue, the Settings page is displayed. Here you enter the configuration information that will be used by Replicated to orchestrate the Sysdig installation.


                  Define Advanced Settings

                  These settings are typically defined with consultation from a Sysdig Sales Engineer.

                   

                  Any JVM options to be passed to the application, such as memory constraint settings for the Java Virtual Machine components, proxy settings, etc.

                  At a minimum, it is recommended to define the memory constraints, in the format:

                  -Xms###g -Xmx###g.

                  Note that if multiple components are on a single machine, adjust the percentages as needed so that all JVMs fit on the node.

                  • Cassandra JVM options: recommended allocating 50% of the host’s memory to this JVM

                    (in a multi-node environment)

                  • Elasticsearch JVM options: recommended allocating 50% of the host’s memory to this JVM

                    (in a multi-node environment)

                  • Sysdig Cloud application JVM options: recommended to allocate up to 80% of the host’s memory to this JVM.

                    This is also used to set proxy settings; see HTTP/HTTPS and Proxy Support.

                    It is also used to set an implicit key in AWS; see AWS: Integrate AWS Account and CloudWatch Metrics (Optional).

                    NOTE: If you do not want to use SSL between the agent and the collectors, you append the following settings to the Sysdig Cloud application JVM options entry:

                    -Ddraios.agents.installParams.sslEnabled=false
                    

                    For example:

                    -Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false
                    

                  Ports and Security

                  • Sysdig UI port: default 80. Port used for the Sysdig Monitor/ Sysdig Secure GUI.

                  • Sysdig UI secure port: default 443. SSL port used for Sysdig Monitor/ Sysdig Secure GUI.

                  • Force HTTPS: This turns off the unsecured port (80) access.

                  • Forward Sysdig application logs to stdout: switches logging from the application log files to Linux standard output (stdout).

                  • Sysdig collector port: default 6443. Port used for agent metrics collection. See also Agent Installation.

                    In earlier versions, the Sysdig Agent connected to port 6666. This behavior has been deprecated, as the Sysdig agent now connects to port 6443.

                  • Sysdig secure collector port: default 6443. Port used for agent metrics collection. See also Agent Installation.

                  • Exposed port for HTTP traffic inbound to Sysdig Platform backend container: 27878 – do not change without the recommendation of Sysdig Support.

                  • Exposed port for Collector traffic inbound: 27877 – do not change without the recommendation of Sysdig Support.

                  Database Entries

                  • Store Sysdig Captures in Cassandra (recommended): Default checked. Used for Sysdig trace file storage when capture function is used. If you do not store files in the Cassandra DB, you can alternately configure an AWS S3 bucket storage location.

                    See also: Storage: Configure AWS Capture File Storage (Optional) and Captures.

                  • Sysdig data directory: default /opt. Where Cassandra, MySQL, and Elasticsearch databases will be created on a host.

                  • Cassandra CQL native client’s port: The default port is 9042. Change the default port if you are running your own Cassandra cluster with non-standard ports.

                  • Cassandra replication factor: The value should be either 1 or 3, never 2.

                  • Sysdig MySQL user: default admin; changing it is recommended.

                  • Sysdig MySQL password: Enter a unique password and store it securely. This password is needed for future updates and will not be visible in the Replicated Admin Console.

                  • Sysdig MySQL max connections: The default is 1024.

                  • External MySQL service: The secure end of your MySQL service. This is external to the Sysdig platform.

                  • External Cassandra service: The secure end of your Cassandra service. This is external to the Sysdig platform.

                  • External Redis service: The secure end of your Redis Service. This is external to the Sysdig platform.

                  • Sysdig Redis password: The password associated with the Redis account.

                  • External Elasticsearch service URL: An external service URL with the user name and password embedded.

                  • OAuth allowed domains: List of email domains permitted for OAuth-based login.

                  • Google OAuth client ID: Used when integrating Google-based user login.

                    See Google OAuth (On-Prem)

                  • Google OAuth client secret: Used when integrating Google-based user login. See Google OAuth (On-Prem)

                  • SSO CA certificate: CA certificate for single sign-on.

                  • Datastore Authorization and SSL: See Authenticating Backend Components on Cassandra and Authenticating Backend Components on Elasticsearch.

                  When fields are complete, click Save.

                  After saving, click Start Now to apply settings to the environment immediately, or click Cancel to apply them at a later time.

                  Authenticating Backend Components on Cassandra

                  As of version 2.4.1, authenticating Sysdig backend components on Sysdig’s Cassandra nodes or on your own Cassandra nodes is supported. To authenticate the backend components to Cassandra, enable the option, specify the credentials of the identity you want to establish with Cassandra, and enable secure communication. This provides an additional layer of defense against unauthorized access to the datastore.

                  Enable Cassandra Authentication
                  • Enable Cassandra authentication: Select this option if you want to authenticate Sysdig backend components to use Cassandra datastore. The option by default is disabled.

                  • Cassandra password for authentication: The password associated with the username. If running Sysdig’s Cassandra database, create a password here. If you are using your own Cassandra database, enter the appropriate user password for Sysdig access.

                  • Enable Cassandra TLS: (Mandatory) Establish TLS communication between the Sysdig backend components and the Cassandra node. The option by default is unchecked.

                  • Cassandra username for authentication: The username of the identity that you want to establish with Cassandra. If running Sysdig’s Cassandra database, create a user here.  If you are using your own Cassandra database, enter the appropriate user account for Sysdig access.

                  Authenticating Backend Components on Elasticsearch

                  As of version 2.4.1, authenticating Sysdig backend components on both Sysdig’s Elasticsearch cluster and your own Elasticsearch cluster is supported. To authenticate the backend components to the Elasticsearch datastore, configure TLS-based authentication: generate certificates and keys for the Elasticsearch server, client, and admin user, and specify them along with the Elasticsearch user credentials while setting up the Sysdig platform. This provides an additional layer of security to safeguard the datastore.

                  Before you configure Elasticsearch authentication, ensure that you have set up the Sysdig Agent for data collection and generated TLS certificates.

                  Generate TLS Certificates

                  1. Log into Quay:

                    1. Locate your Quay pull_secret. Contact Support if you are unable to locate it.

                    2. Get your credentials by running:

                      # Note: For MacOS users, change "base64 -d" to "base64 -D"
                      echo <quay_pull_secret> | base64 -d | awk NR==4 | cut -d'"' -f4 | xargs | base64 -d
                      

                      The Output should look as follows:

                      sysdig+<your_username>:<your_password>
                      
                    3. Log into Quay by running the following:

                      docker login quay.io -u sysdig+<your_username> -p <your_password>
                      
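The decode pipeline in step 2 can be sketched end to end against a fabricated secret. Everything below is illustrative: the credentials are made up, and a real pull secret comes from Sysdig.

```shell
# Fabricate a pull secret shaped like the real one; "demo"/"s3cret"
# are made-up credentials, not real Quay values.
inner="$(printf 'sysdig+demo:s3cret' | base64)"
pull_secret="$(printf '{\n  "auths": {\n    "quay.io": {\n      "auth": "%s",\n      "email": ""\n    }\n  }\n}\n' "$inner" | base64)"

# The same pipeline as in step 2: line 4 of the decoded JSON holds the
# "auth" field, whose value is the base64 of user:password.
echo "$pull_secret" | base64 -d | awk NR==4 | cut -d'"' -f4 | xargs | base64 -d
# → sysdig+demo:s3cret
```

This also shows why the pipeline grabs line 4: in the pretty-printed secret, the auth field sits on the fourth line.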
                  2. Run the following docker command to generate the root/admin certificates for Elasticsearch to a directory within the current working directory:

                    docker run -d -v "$(pwd)"/generated_elasticsearch_certs:/tools/out quay.io/sysdig/elasticsearch:1.0.1-es-certs
                    

                    The following files are generated in the generated_elasticsearch_certs directory.  Retain the certificates and key files to upload as part of the TLS configuration as described in Configure TLS Authentication.

                    • Elasticsearch root CA

                      • root-ca.pem

                      • root-ca.key

                    • Elasticsearch Admin (Kirk)

                      • kirk.pem

                      • kirk.key

                    • Elasticsearch Client (Spock)

                      • spock.pem

                      • spock.key
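Since the TLS configuration step depends on these files, a quick sanity check that the generator container produced them can save a failed upload later. A minimal sketch; the helper name and the threshold (three .pem/.key pairs, per the list above) are assumptions:

```shell
# Hypothetical helper: verify the generator produced a .pem/.key pair
# for the root CA, the admin user, and the client (three of each).
check_cert_files() {
  dir="$1"
  pems=$(ls "$dir"/*.pem 2>/dev/null | wc -l)
  keys=$(ls "$dir"/*.key 2>/dev/null | wc -l)
  [ "$pems" -ge 3 ] && [ "$keys" -ge 3 ]
}
# Example: check_cert_files "$(pwd)/generated_elasticsearch_certs"
```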

                  Configure TLS Authentication

                  Sysdig Replicated install supports Search Guard to establish secure authentication with Elasticsearch datastore. You set up two users in order to access Elasticsearch datastore on behalf of the Sysdig backend components: Admin user and read-only user.

                  Admin user: The admin user has read and write permissions on Elasticsearch clusters and indices. Sysdig backend components use this identity to write data to Elasticsearch clusters. This is the same as the Search Guard admin user.

                  Read-only user: As the name implies, the read-only user will only have the read permission on Elasticsearch indices. Sysdig Agent uses this identity to read data from Elasticsearch datastore. This is the same as the Search Guard sg_readonly user that is created as part of the installation.

                  Enable Elasticsearch authentication
                  • Enable Elasticsearch Authentication and TLS: Select this option to enable authentication and secure communication between Sysdig backend components and the Elasticsearch datastore. To gain access to the Elasticsearch datastore, clients must prove their identity using credentials and certificates. The Elastic Stack authenticates users by identifying the users behind the requests that hit the datastore and verifying that they are who they claim to be.

                  • Elasticsearch admin username: The admin user is created by default. You can edit the user name if desired. The default user is admin.

                  • Elasticsearch admin password: The password associated with the Elasticsearch admin user.

                  • Elasticsearch read-only username: Specify the username for the read-only access to the Elasticsearch indices. If running your own secure Elasticsearch cluster, enter the username for the read-only Search Guard user.

                  • Elasticsearch read-only password: The password associated with Elasticsearch read-only username.

                  When fields are complete, click Save. 

                  After saving, click Restart Now to apply settings to the environment immediately.

                  Click Cancel to apply settings at a later time.

                  Configure Sysdig Agent

                  If you are monitoring Elasticsearch with sysdig-agent, ensure the sysdig-agent configuration file, dragent.yaml, includes the following Elasticsearch configuration in the data.dragent.yaml.app_checks section:

                  app_checks:
                    - name: elasticsearch
                      check_module: elastic
                      pattern:
                        port: 9200
                        comm: java
                      conf:
                        url: https://<DNS_or_ip_address_to_elasticsearch>:9200
                        username: <your_read_only_username>
                        password: <your_read_only_password>
                        ssl_verify: false
                  
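Before restarting the agent, it can be worth confirming that the read-only credentials actually authenticate against the cluster. A sketch under assumptions: the URL, username, and password are placeholders, and /_cluster/health is used only as a cheap authenticated endpoint:

```shell
# Sketch: verify the read-only user can authenticate. URL, username,
# and password are placeholders, not real values.
check_es_auth() {
  url="$1"; user="$2"; pass="$3"
  # -k skips certificate verification, mirroring ssl_verify: false
  code="$(curl -sk -o /dev/null -w '%{http_code}' -u "$user:$pass" "$url/_cluster/health")"
  [ "$code" = "200" ]
}
# Example: check_es_auth https://es.example.com:9200 readonly 'readonly-password'
```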
                  Example for Docker Environment
                  1. Follow these steps if you are running the Agent in a Docker container:

                    READONLY_USERNAME=<your_readonly_username>
                    READONLY_PASSWORD=<your_readonly_username_password>
                    ELASTICSEARCH_PORT=9200
                    URL_TO_SECURE_ELASTICSEARCH=https://<your_url_to_secure_elasticsearch>
                    ADDITIONAL_CONF="$(echo "app_checks:
                      - name: elasticsearch
                        check_module: elastic
                        pattern:
                          port: $ELASTICSEARCH_PORT
                          comm: java
                        conf:
                          url: $URL_TO_SECURE_ELASTICSEARCH:$ELASTICSEARCH_PORT
                          username: $READONLY_USERNAME
                          password: $READONLY_PASSWORD
                          ssl_verify: false
                    " | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g')"
                    
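The sed expression at the end of the snippet above flattens the multi-line YAML into a single line with literal \n escapes, which is the form ADDITIONAL_CONF must take when passed as an environment variable. A minimal illustration:

```shell
# Flatten two lines of YAML into one line with literal \n sequences,
# exactly as the ADDITIONAL_CONF assignment above does.
yaml='app_checks:
  - name: elasticsearch'
flat="$(printf '%s' "$yaml" | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g')"
printf '%s\n' "$flat"
# → app_checks:\n  - name: elasticsearch
```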
                  2. Remove the existing Agent container:

                    Make sure that you remove the existing Agent container instead of just stopping it. By default, the Agent container is named sysdig-agent. If you stop the Agent container and attempt to create a new one, you will get a name-conflict error:

                    docker: Error response from daemon: Conflict. The container name "/sysdig-agent" is already in use by container <container-id>. You have to remove (or rename) that container to be able to reuse that name.
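The removal can be made idempotent in a wrapper script, so reruns never hit the name-conflict error. A hedged sketch (assumes the default container name sysdig-agent):

```shell
# Remove the agent container if it exists; a no-op otherwise.
remove_agent_container() {
  # `docker rm -f` stops and removes in one step; `|| true` swallows
  # the error when no container named sysdig-agent exists.
  docker rm -f sysdig-agent 2>/dev/null || true
}
```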

                  3. Run the Agent container with the new additional config. For example:

                    docker run \
                        --name sysdig-agent \
                        --restart always \
                        --privileged \
                        --net host \
                        --pid host \
                        -e ACCESS_KEY=1234-your-key-here-1234 \
                        -e COLLECTOR=collector_ip \
                        -e COLLECTOR_PORT=6443 \
                        -e SECURE=true \
                        -e TAGS=dept:sales,local:NYC \
                        -e ADDITIONAL_CONF="$ADDITIONAL_CONF" \
                        -v /var/run/docker.sock:/host/var/run/docker.sock \
                        -v /dev:/host/dev \
                        -v /proc:/host/proc:ro \
                        -v /boot:/host/boot:ro \
                        -v /lib/modules:/host/lib/modules:ro \
                        -v /usr:/host/usr:ro \
                        sysdig/agent
                    

                    You may encounter an error in the sysdig-agent logs stating that an unverified HTTPS request has been made. You can safely ignore the error for now.

                  Example for Non-Containerized Environment

                  Do the following if you are running the Agent directly on the machine (non-containerized environment):

                  1. Add the app_check configuration to your /opt/draios/etc/dragent.yaml configuration:

                    app_checks:
                      - name: elasticsearch
                        check_module: elastic
                        pattern:
                          port: 9200
                          comm: java
                        conf:
                          url: https://<DNS_or_ip_address_to_elasticsearch>:9200
                          username: <your_read_only_username>
                          password: <your_read_only_password>
                          ssl_verify: false
                    
                  2. Restart the agent:

                    service dragent restart
                    

                  Single-Host Installation Wrap-Up

                  After completing the Settings and restarting, no further installation steps are required for a single-host install.

                  The dashboard will remain in Starting mode for approximately 4-5 minutes, depending on the internet connection bandwidth, while Sysdig application software is downloaded and installed. Once the installation is complete, the dashboard will move to Started mode.

                  1. Click the Open link to navigate to the Sysdig Monitor login panel.

                  2. Input the Super Admin user login credentials defined in the basic settings, above.

                  Next Steps

                  • To start, stop, and update the application, or to retrieve support information, use the Replicated Admin Console: https://<yourserver>:8800.

                  • To login as a user and see metrics for hosts with the Sysdig Agent installed, use the Sysdig Monitor Web Interface: https://<yourserver>:80

                    • If you have not yet done so, install Sysdig Agents to monitor your environment. See Agent Installation for details.

                  Multi-Host Installation Wrap-Up

                  After configuring the settings and clicking Start Now, an error indicates that the remaining components still need to be assigned and installed. Define the hosts/nodes to be used and assign the Sysdig components to be installed on them. The steps below describe the actions on one host; repeat them on all applicable hosts until all the Sysdig components have been assigned.

                  1. Choose the Cluster tab in the Replicated Admin Console.

                    From here, you can tag components to be run on the local host, and/or add new nodes.

                    To add and configure new nodes:

                  2. Click Add Node.

                    The Add Node worksheet is displayed. Here you enter the IP address and then tag the Sysdig component(s) to be installed on that node.

                    Replicated will compile either an installation script or a Docker run command out of your entries, which you will copy and use on the given node.

                  3. On the Add Node worksheet page, do the following:

                    Choose the Installation script or Docker run command option.

                    Enter the private and/or public IP address, depending on the type of access you want to permit.

                    Select the Sysdig components to be installed by checking the appropriate Tags buttons.

                    Name, tag, and role of each component:

                    • api (tag: api): Application Programming Interface server

                    • cassandradb (tag: cassandra): Cassandra database server

                    • elasticsearch (tag: elasticsearch): Elasticsearch server for events storage/search

                    • collector (tag: collector): Agent metrics collector

                    • lb_collector (tag: lb_collector): Load balancer for the collector service; handles connections from the agents

                    • lb_api (tag: lb_api): Load balancer for the API service; handles user connection requests to the Sysdig application. Use the address for this node as the DNS entry for the cluster.

                    • mysql, redis (tags: mysql & redis): MySQL and Redis databases

                    • worker (tag: worker): Metrics history processor

                    • emailrenderer (tag: emailrenderer): Email renderer

                    • nginxfrontend (tag: nginxfrontend): Frontend static server

                    When setting up a DNS entry for the cluster, use the address for the 'lb_api' node.

                    At the bottom of the page, a curl script or Docker run command is compiled for you.

                    Copy the command and issue it on the targeted host.

                  4. Repeat this procedure on all desired hosts.

                  5. Restart the Sysdig application from the Replicated console.

                    The dashboard will be in “Starting” mode for several minutes while software is downloaded and installed onto each server component (depending on your internet connection bandwidth).

                    You should see green check marks for each host next to the Provisioned and Connected columns, as the software is installed and the node connects successfully to the Replicated Admin server.

                    Once the installation is fully completed, the infrastructure admin dashboard will be in “Started” mode and will also show the “Open” link that will bring you to Sysdig Monitor web interface login screen.

                  6. At the login screen, use the credentials configured earlier (Default User) to log in and start using the Sysdig application on-premises solution.

                    To start, stop, and update the application or retrieve support information use the Replicated Admin dashboard: https://server_address:8800

                    To log in as a user and see metrics about hosts where Sysdig agents are installed, use the Sysdig Monitor UI: https://server_address:80

                  5.3 -

                  Post-Install Configuration

                  Sysdig will deprecate support for Replicated installs in the coming months. If you are a new customer considering installing with Replicated, please contact Sysdig support.

                  These configurations are optional.

                  Replace a Self-Signed Cert with Custom Cert

                  This process differs depending on how you installed the Sysdig Platform.

                  For Kubernetes Installer Installs

                  If you installed the Sysdig Platform on Kubernetes or OpenShift using the Installer, the Installer automatically generates a self-signed cert on the fly. To use a different certificate you would:

                  • Add your cert and key to the /certs directory (e.g., server.crt, server.key)

                  • Update values.yaml:

                    sysdig:
                      certificate:
                        crt: certs/server.crt
                        key: certs/server.key
                    
                  • Rerun the Installer.
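Before rerunning the Installer, it can help to confirm that the cert and key actually belong together; a mismatch otherwise only surfaces at runtime. A sketch; the helper name is an assumption, and the modulus comparison assumes an RSA key:

```shell
# Hypothetical helper: check that a certificate and private key form a
# pair by comparing their RSA moduli.
cert_matches_key() {
  crt="$1"; key="$2"
  c="$(openssl x509 -in "$crt" -noout -modulus 2>/dev/null)"
  k="$(openssl rsa -in "$key" -noout -modulus 2>/dev/null)"
  [ -n "$c" ] && [ "$c" = "$k" ]
}
# Example: cert_matches_key certs/server.crt certs/server.key
```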

                  The configuration_parameter.md Readme gives full details on sysdig.certificate.crt and sysdig.certificate.key.

                  For Kubernetes Manual Installs

                  If you installed the Sysdig Platform manually on Kubernetes or OpenShift, the steps for managing the certs are described in Step 5 of the installation procedures.

                  For Replicated Installs

                  If you installed the Sysdig Platform using Replicated and you accepted the self-signed certificate for SSL/TLS communication when installing the Sysdig components (see Define Basic Settings & License Info ), you can exchange for a custom certificate as follows:

                  • Log in to the Replicated Management Console and select the Gear icon > Console Settings.

                  • Click Upload certificate and it will automatically replace the original self-signed certificate.

                  Optional: Custom Self-Signed Certificate

                  Sysdig Monitor/Cloud/etc uses a self-signed SSL/TLS security certificate unless a custom certificate is provided.

                  The example command below creates a custom self-signed certificate called MyCert.pem with a private key called MyCert.key, valid for five years (1825 days):

                  sudo openssl req -new -x509 -sha256 -days 1825 -nodes -out ./MyCert.pem -keyout ./MyCert.key
                  
                  
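The command above prompts interactively for the certificate subject. For scripting, a non-interactive variant plus a quick inspection of the result might look like this (the CN and paths are illustrative):

```shell
# Non-interactive variant: -subj supplies the subject instead of the
# interactive prompts. CN and output paths are illustrative.
openssl req -new -x509 -sha256 -days 1825 -nodes \
  -subj "/CN=sysdig.example.com" \
  -out /tmp/MyCert.pem -keyout /tmp/MyCert.key 2>/dev/null

# Confirm the subject and validity window before uploading the cert
openssl x509 -in /tmp/MyCert.pem -noout -subject -dates
```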

                  6 -

                  Troubleshooting On-Premises Installation

                  See also Get Help | Using Sysdig Support (On-Prem).

                  Collect Troubleshooting Data

                  When experiencing issues, you can collect troubleshooting data to help the support team. You can collect the data by hand, or use the get_support_bundle.sh script provided by Sysdig, which takes the namespace where Sysdig is deployed as an argument and generates a tarball containing log files and other diagnostic information. The script is located in the GitHub repository: https://github.com/draios/sysdigcloud-kubernetes.

                  $ ./scripts/get_support_bundle.sh sysdigcloud
                  Getting support logs for sysdigcloud-api-1477528018-4od59
                  Getting support logs for sysdigcloud-api-1477528018-ach89
                  Getting support logs for sysdigcloud-cassandra-2987866586-fgcm8
                  Getting support logs for sysdigcloud-collector-2526360198-e58uy
                  Getting support logs for sysdigcloud-collector-2526360198-v1egg
                  Getting support logs for sysdigcloud-mysql-2388886613-a8a12
                  Getting support logs for sysdigcloud-redis-1701952711-ezg8q
                  Getting support logs for sysdigcloud-worker-1086626503-4cio9
                  Getting support logs for sysdigcloud-worker-1086626503-sdtrc
                  Support bundle generated: 1473897425_sysdig_cloud_support_bundle.tgz
                  

                  Docker Connectivity Issues (IPv4/IPv6)

                  Some issues with IPv4 and IPv6 interconnectivity between on-premises containers and the outside world have been detected.

                  IP packet forwarding is governed by the ip_forward system parameter. Packets can only pass between containers if this parameter is 1. Usually, you will simply leave the Docker server at its default setting --ip-forward=true, and Docker will set ip_forward to 1 for you when the server starts up. If you set --ip-forward=false but your system’s kernel has forwarding enabled, the --ip-forward=false option has no effect.

                  To check the setting on your kernel use:

                  sysctl net.ipv4.conf.all.forwarding
                  

                  To turn it on use:

                  sysctl net.ipv4.conf.all.forwarding=1
                  
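In a preflight script, the same flag can be read directly from /proc; a sketch (the fallback covers hosts where the path is absent):

```shell
# Read the flag `sysctl net.ipv4.conf.all.forwarding` reports, straight
# from /proc, and warn when forwarding is off.
fwd="$(cat /proc/sys/net/ipv4/conf/all/forwarding 2>/dev/null || echo unknown)"
if [ "$fwd" != "1" ]; then
  echo "warning: IP forwarding is not enabled (value: $fwd)" >&2
fi
```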

                  Please see this article from Docker for more details on Docker connectivity.

                  Proxy/Firewall Issues

                  Prior to installing, ensure your proxy settings are valid for the session. You can use curl, lynx, or wget to test internet connectivity:

                  export http_proxy="http://user:password@proxy_server:port"
                  export https_proxy="https://user:password@proxy_server:port"
                  echo $http_proxy
                  

                  You can then attempt a curl or docker hub call to ensure outside connectivity.

                  Firewall

                  Prior to installation, you may want to disable local firewall (iptables) to rule out local connectivity issues.

                  However, the following details describe the connectivity requirements for the Sysdig backend.

                  Sysdig Connectivity:

                  • 6443: Agent communication

                  • 443: Sysdig Monitor UI access

                  • 8800: Management console access

                  Here are specifics around what is used for connectivity for the Sysdig backend for on-premises solution:

                  https://www.replicated.com/docs/kb/supporting-your-customers/firewalls/

                  File Write Permissions Issues (SELINUX or APP ARMOR)

                  During the install, you may see errors writing to volumes such as /var or /opt, from either the on-prem install scripts or Docker. Disable SELinux (CentOS/RHEL) or AppArmor (Ubuntu/Debian) for the duration of the install so the required directories can be created, as follows:

                  CentOS (SELinux)

                  From the command line, edit the /etc/sysconfig/selinux file. This file is a symlink to /etc/selinux/config. The configuration file is self-explanatory: changing the value of SELINUX or SELINUXTYPE changes the state of SELinux and the name of the policy to be used the next time the system boots.

                  [root@host2a ~]# cat /etc/sysconfig/selinux
                  # This file controls the state of SELinux on the system.
                  # SELINUX= can take one of these three values:
                  #       enforcing - SELinux security policy is enforced.
                  #       permissive - SELinux prints warnings instead of enforcing.
                  #       disabled - SELinux is fully disabled.
                  SELINUX=permissive
                  # SELINUXTYPE= type of policy in use. Possible values are:
                  #       targeted - Only targeted network daemons are protected.
                  #       strict - Full SELinux protection.
                  SELINUXTYPE=targeted
                  
                  # SETLOCALDEFS= Check local definition changes
                  SETLOCALDEFS=0
                  

                  See SELinux Modes for more information.

                  Ubuntu/Debian (AppArmor)

                  AppArmor can be disabled and its kernel module unloaded by entering the following:

                  sudo systemctl stop apparmor.service
                  sudo update-rc.d -f apparmor remove
                  

                  To re-enable AppArmor enter:

                  sudo systemctl start apparmor.service
                  sudo update-rc.d apparmor defaults
                  

                  Advanced Troubleshooting - Firewall, IPtables, IP forwarding

                  In the preflight check step with Replicated, if you come across the error:

                  getsockopt: no route to host
                  

                  Please do the following:

                  For CentOS 7/RedHat:

                  Log in as root or run these commands via sudo:

                  service firewalld stop
                  systemctl disable firewalld
                  sysctl -w net.ipv4.ip_forward=1
                  iptables -F
                  setenforce 0
                  service docker restart
                  

                  For Ubuntu:

                  Log in as root or run these commands via sudo:

                  sysctl -w net.ipv4.ip_forward=1
                  systemctl stop apparmor.service
                  update-rc.d -f apparmor remove
                  ufw disable
                  iptables -F
                  service docker restart