Manual Installation on OpenShift
All on-premises installations and upgrades are now scheduled with and guided by Sysdig technical account managers and the professional services division. See Oversight Services Now Offered for All Installs and Upgrades.
For customers, the instructions in this section are for review purposes only.
As of Sysdig Platform v2.5.0, a semi-automated install option is available and is preferred. This section describes how to install the backend components of the Sysdig platform on an existing OpenShift cluster. It applies to backend versions 1929 and higher.
Introduction
The Sysdig platform includes both Sysdig Monitor and Sysdig Secure,
which are licensed separately. All installations include Sysdig Monitor,
while some of the Secure components are installed and configured as
additional steps within the overall installation process. When
installing the Sysdig platform on OpenShift manually, you will install
each backend component with separate oc
commands.
Prerequisites
Overview
Access to a running OpenShift v4 instance
Two items from your Sysdig purchase-confirmation email:
Your Sysdig license key
Your Sysdig quay.io pull secret
oc tools installed on your machine and communicating with the OpenShift cluster. (Note that your oc and OpenShift versions should match to avoid errors; a quick check is shown below.)
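To confirm the client and cluster versions line up, a quick check such as the following can help (a sketch only; it assumes the oc client is already logged in to the cluster):

```bash
# Show the oc client version and the OpenShift server version; the minor
# versions should match (or be within one release) to avoid errors.
oc version
```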
DNS Preparation
If you want more information on OpenShift's DNS requirements, see the OpenShift documentation.
Option 1: DNS without Wildcard
You need to request two different DNS records from your DNS team: one for the Sysdig API/UI and another for the Sysdig collector. These records should point to your infrastructure nodes and are the two routes that will be exposed, i.e., sysdig.api.example.com and sysdig.collector.example.com.
Option 2: DNS with Wildcard
With wildcard DNS, you do not have to make an official request from the DNS team. Your implementation team can pick any two DNS names to use for the API/UI and collector. These will be exposed to the infrastructure nodes once the configuration is completed (i.e., sysdig.api.example.com and sysdig.collector.example.com).
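Once the records exist, a quick sanity check can confirm they resolve to the infrastructure nodes (a sketch only; it assumes dig is available and uses the example names above):

```bash
# Each name should return A records pointing at the OpenShift infrastructure nodes.
dig +short sysdig.api.example.com
dig +short sysdig.collector.example.com
```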
SSL Certificate Preparation
Step 5: Set Up SSL Connectivity to the Backend discusses how to implement SSL; decide ahead of time whether you will use SSL with wildcard or without.
SSL with Wildcard
With wildcard SSL, you use the same certificate for both the API and the collector.
SSL without Wildcard
You need two SSL certs, one for each DNS record.
Consider Elasticsearch Default Privileges
By default, the Elasticsearch container will be installed in privileged (root-access) mode. This mode is only needed so the container can reconfigure the hosts' Linux file descriptors if necessary. See Elasticsearch's description.
If you prefer not to allow Elasticsearch to run with root access to the host, you will need to:
Set your own file descriptors on all Linux hosts in the Kubernetes cluster (see the sketch after this list).
If one host were to go down, Kubernetes could choose a different node for Elasticsearch, so each Linux host must have the file descriptors set.
Set privileged: false in the elasticsearch-statefulset.yaml file. See the step under Configure Backend Components, below, for details.
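A minimal sketch of raising the open-file (file descriptor) limit on one host, assuming a Linux distribution that honors /etc/security/limits.d; the 65536 value is an illustrative assumption, and the change must be repeated on every node that could schedule Elasticsearch:

```bash
# Raise the open-file limit for all users (value is illustrative; size it for your Elasticsearch workload).
cat <<'EOF' | sudo tee /etc/security/limits.d/99-elasticsearch.conf
* soft nofile 65536
* hard nofile 65536
EOF

# After logging in again, verify the new limit took effect.
ulimit -n
```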
Prepare the Environment
Step 1: Download and Unpack the Latest Release
Download the latest release from https://github.com/draios/sysdigcloud-kubernetes/releases/latest
Unpack the tarball.
The source link has the format:
https://github.com/draios/sysdigcloud-kubernetes/archive/<v1234>.tar.gz.
To unpack it, run the following commands (replacing the version number as appropriate):
wget https://github.com/draios/sysdigcloud-kubernetes/archive/<v1234>.tar.gz
tar zxf <1234>.tar.gz
cd sysdigcloud-kubernetes-<1234>
Create a new project called sysdigcloud and copy the cloned folders into it:
oc new-project sysdigcloud
Apply the correct security contexts to the namespace. (This allows you to run privileged containers in the sysdigcloud namespace.)
oc adm policy add-scc-to-user anyuid -n sysdigcloud -z default
oc adm policy add-scc-to-user privileged -n sysdigcloud -z default
Step 2: Configure Backend Components
The ConfigMap (config.yaml) is populated with information about usernames, passwords, SSL certs, and various application-specific settings.
The steps below give the minimum edits that should be performed in a test environment.
It is necessary to review and customize the entries in config.yaml
before launching in a production environment.
See Apply Configuration Changes, below, for the oc format to use for post-install edits, such as adding third-party authenticators like LDAP.
Add your license key:
In config.yaml, enter the key that was emailed to you in the following parameter:
# Required: Sysdig Cloud license
sysdigcloud.license: ""
Change the super admin name and password, which are the super admin credentials for the entire system. See here for details.
Find the settings in config.yaml here:
sysdigcloud.default.user: test@sysdig.com
# Required: Sysdig Cloud super admin user password
# NOTE: Change upon first login
sysdigcloud.default.user.password: test
**Edit the collector endpoint and API URL:** Change the placeholder to point to the DNS names you have established for Sysdig.
Remember that you must have defined one name for the collector and another for the API URL.
Note: Change the collector port to 443.
```yaml
collector.endpoint: <COLLECTOR_DNS_NAME>
collector.port: "443"
api.url: https://<API_DNS_NAME>:443
```
Recommended: edit the file to set the JVM options for Cassandra, Elasticsearch, API, worker, and collector as well.
(To use the AWS implicit key, edit the JVM options as described in AWS: Integrate AWS Account and CloudWatch Metrics (Optional).)
For installations over 100 agents, it is recommended to allocate 8 GB of heap per JVM.
cassandra.jvm.options: "-Xms8G -Xmx8G"
elasticsearch.jvm.options: "-Xms8G -Xmx8G"
sysdigcloud.jvm.api.options: "-Xms4G -Xmx8G"
sysdigcloud.jvm.worker.options: "-Xms4G -Xmx8G"
sysdigcloud.jvm.collector.options: "-Xms4G -Xmx8G"
Note: If you do not wish to use SSL between the agent and the collector, use the following settings instead:
cassandra.jvm.options: "-Xms8G -Xmx8G"
elasticsearch.jvm.options: "-Xms8G -Xmx8G"
sysdigcloud.jvm.api.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
sysdigcloud.jvm.worker.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
sysdigcloud.jvm.collector.options: "-Xms8G -Xmx8G -Ddraios.agents.installParams.sslEnabled=false"
Optional: Change the Elasticsearch container setting to non-privileged.
To change the default setting, edit the file elasticsearch-statefulset.yaml and set privileged: false.
containers:
  - name: elasticsearch
    image: quay.io/sysdig/elasticsearch:5.6.16.15
    securityContext:
      privileged: false
Deploy the configuration maps and secrets for all services by running the commands:
For Sysdig Monitor:
oc -n sysdigcloud apply -f sysdigcloud/config.yaml
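As a quick sanity check (sketch only; sysdigcloud-config is the ConfigMap name referenced elsewhere in this guide), confirm the ConfigMap landed in the namespace and spot-check the values you just edited:

```bash
# Confirm the ConfigMap exists and show the collector/API settings entered above.
oc -n sysdigcloud get configmap sysdigcloud-config -o yaml | grep -E 'collector.endpoint|api.url'
```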
**(Sysdig Secure only) Edit and apply secrets for Anchore and the scanning component:** Edit the yaml files:
scanning-secrets.yaml
stringData:
  scanning.mysql.password: change_me

anchore-secrets.yaml
stringData:
  anchore.admin.password: change_me
  anchore.db.password: change_me

policy-advisor-secret.yaml
stringData:
  padvisor.mysql.password: change_me
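One hedged way to avoid leaving the change_me placeholders in place is to generate random values first and paste them into the corresponding stringData fields (a sketch; it assumes openssl rand is available, but any password generator works):

```bash
# Print a random value for each password field; paste these into the
# corresponding stringData entries before applying the files.
for field in scanning.mysql.password anchore.admin.password anchore.db.password padvisor.mysql.password; do
  echo "$field: $(openssl rand -hex 16)"
done
```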
Then apply the files:
oc -n sysdigcloud apply -f sysdigcloud/scanning-secrets.yaml
oc -n sysdigcloud apply -f sysdigcloud/anchore-secrets.yaml
oc -n sysdigcloud apply -f sysdigcloud/policy-advisor-secret.yaml
Edit the API DNS name in either api-ingress.yaml or api-ingress-with-secure.yaml (if using Secure). The files are located in sysdigcloud/.
spec:
  rules:
    - host: <API_DNS_NAME>
  ...
  tls:
    - hosts:
        - <API_DNS_NAME>
      secretName: sysdigcloud-ssl-secret
Edit the collector DNS name in the file openshift-collector-router.yaml. Use the collector DNS name you created in the Prerequisites. The file is located in sysdigcloud/openshift/.
spec:
  host: <COLLECTOR_DNS_NAME>
Step 3 (Secure-Only): Edit mysql-deployment.yaml
If using Sysdig Secure:
Edit the MySQL deployment to uncomment the MYSQL_EXTRADB_* environment variables. This forces MySQL to create the necessary scanning database on startup.
File location:
datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml
- name: MYSQL_EXTRADB_SCANNING_DBNAME
  valueFrom:
    configMapKeyRef:
      name: sysdigcloud-config
      key: scanning.mysql.dbname
- name: MYSQL_EXTRADB_SCANNING_USER
  valueFrom:
    configMapKeyRef:
      name: sysdigcloud-config
      key: scanning.mysql.user
- name: MYSQL_EXTRADB_SCANNING_PASSWORD
  valueFrom:
    secretKeyRef:
      name: sysdigcloud-scanning
      key: scanning.mysql.password
The scanning service will not start unless MySQL creates the scanning database.
Step 4: Deploy Your Quay Pull Secret
A specific Quay pull secret is sent via email with your license key.
Edit the file sysdigcloud/pull-secret.yaml and replace the placeholder <PULL_SECRET> with the provided pull secret.
vi sysdigcloud/pull-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: sysdigcloud-pull-secret
data:
  .dockerconfigjson: <PULL_SECRET>
type: kubernetes.io/dockerconfigjson
Deploy the pull secret object:
oc -n sysdigcloud apply -f sysdigcloud/pull-secret.yaml
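To sanity-check the secret (a sketch; it assumes the value placed in .dockerconfigjson was already base64-encoded, as the data field requires), decode it and confirm it contains Docker config JSON for quay.io:

```bash
# Decode the stored pull secret; the output should be JSON containing an "auths" entry for quay.io.
oc -n sysdigcloud get secret sysdigcloud-pull-secret \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```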
Step 5: Set Up SSL Connectivity to the Backend
SSL-secured communication is used between user browsers and the Sysdig API server(s), and between the Sysdig agent and the collectors.
To set this up, you must:
Use an existing wildcard SSL certificate and key, or
Use existing standard certs for API and collector, or
Create self-signed certificates and keys for API and collector
If you are not using wildcard SSL, you have to use two separate certificates, one for API URL and one for the collector.
To disable SSL between agent and collector:
Set a JVM option when configuring the backend components, as described in the note under Step 2: Configure Backend Components.
To create self-signed certs:
Run these commands (edit to add your
API_DNS_NAME
andCOLLECTOR_DNS_NAME
):openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/C=US/ST=CA/L=SanFrancisco/O=ICT/CN=<API_DNS_NAME>" -keyout server.key -out server.crt openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/C=US/ST=CA/L=SanFrancisco/O=ICT/CN=<COLLECTOR_DNS_NAME>" -keyout collector.key -out collector.crt
To use an existing wildcard cert:
Obtain the respective server.crt and server.key files.
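Before creating the Kubernetes secrets, it can help to confirm each certificate covers the intended DNS name (a sketch only; openssl is assumed to be installed, and the filenames match those used above):

```bash
# Print the subject (CN) and validity dates; for a wildcard cert a single
# server.crt must cover both the API and collector DNS names.
openssl x509 -in server.crt -noout -subject -dates
# In the non-wildcard case, also check the collector certificate.
openssl x509 -in collector.crt -noout -subject -dates
```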
To Create Kubernetes Secrets for the Certs
With Wildcard
Uses the same certificate for both the API/UI and the collector.
Run these commands:
oc -n sysdigcloud create secret tls sysdigcloud-ssl-secret --cert=server.crt --key=server.key
oc -n sysdigcloud create secret tls sysdigcloud-ssl-secret-collector --cert=server.crt --key=server.key
Without Wildcard
Uses two different certificates, one for the API/UI, and one for the collector.
Run these commands:
oc -n sysdigcloud create secret tls sysdigcloud-ssl-secret --cert=server.crt --key=server.key
oc -n sysdigcloud create secret tls sysdigcloud-ssl-secret-collector --cert=collector.crt --key=collector.key
Step 6: (Optional) Use CA Certs for External SSL Connections
The Sysdig platform may sometimes open connections over SSL to certain external services, including:
LDAP over SSL
SAML over SSL
OpenID Connect over SSL
HTTPS Proxies
If the signing authorities for the certificates presented by these services are not well-known to the Sysdig Platform (for example, if you maintain your own Certificate Authority), they are not trusted by default.
To allow the Sysdig platform to trust these certificates, use the command below to upload one or more PEM-format CA certificates. You must ensure you’ve uploaded all certificates in the CA approval chain to the root CA.
oc -n sysdigcloud create secret generic sysdigcloud-java-certs --from-file=certs1.crt --from-file=certs2.crt
Install Components (OpenShift)
Edit storageClassName Parameters
You need a storage class; step 2 below shows how to create one if needed. Enter the storageClassName in the appropriate .yaml files (see step 3).
Verify whether a storage class has been created by running the command:
oc get storageclass
If no storage class has been defined, create a manifest for one, and then deploy it.
For example, a manifest could be named sysdigcloud-storageclass.yaml and contain the following contents (for a storage class using GP2 volumes in AWS):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
Now run the command:
oc apply -f sysdigcloud-storageclass.yaml
Using either the existing storage class name from step 1, or the storage class name defined in step 2, edit the storageClassName in the following .yaml files:
For Monitor:
datastores/as_kubernetes_pods/manifests/cassandra/cassandra-statefulset.yaml
datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-statefulset.yaml
datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml
With Secure:
datastores/as_kubernetes_pods/manifests/postgres/postgres-statefulset.yaml
In each file, the code snippet looks the same:
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 50Gi
      storageClassName: <STORAGECLASS_NAME>
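A hedged shortcut (sketch only; it assumes the manifests contain the literal <STORAGECLASS_NAME> placeholder, that GNU sed is available, and that your class is named gp2 as in the example above) is to substitute the value in all four files at once:

```bash
# Replace the storage class placeholder in the Monitor and Secure manifests.
STORAGECLASS=gp2
for f in \
  datastores/as_kubernetes_pods/manifests/cassandra/cassandra-statefulset.yaml \
  datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-statefulset.yaml \
  datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml \
  datastores/as_kubernetes_pods/manifests/postgres/postgres-statefulset.yaml; do
  sed -i "s/<STORAGECLASS_NAME>/${STORAGECLASS}/g" "$f"
done
```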
Install Datastores and Backend Components
For Sysdig Monitor
Create the datastore statefulsets for Elasticsearch and Cassandra. Elasticsearch and Cassandra are automatically set up with --replica=3, generating full clusters.
oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/cassandra/cassandra-service.yaml
oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/cassandra/cassandra-statefulset.yaml
oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-service.yaml
oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/elasticsearch/elasticsearch-statefulset.yaml
Wait for those processes to be running, then create the MySQL and Redis databases:
oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/mysql/mysql-deployment.yaml
oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/redis/redis-deployment.yaml
To add Sysdig Secure: Create the PostgreSQL database:
oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/postgres/postgres-service.yaml
oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/postgres/postgres-statefulset.yaml
Wait until the datastore pods are in the ready state, then deploy the backend deployment sets (worker, collector, and API). Run the command:
oc -n sysdigcloud get pods
Then look in the READY column to ensure all pods are ready. For example, 1/1 means 1 of 1 containers in the pod is ready.
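Instead of polling manually, a hedged alternative (sketch only; oc on OpenShift v4 supports the kubectl wait subcommand) is to block until every pod in the namespace reports Ready:

```bash
# Wait up to 10 minutes for all datastore pods in the namespace to become Ready.
oc -n sysdigcloud wait --for=condition=Ready pod --all --timeout=600s
```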
Apply the NATS service and deployment to deliver events to Sysdig backend components:
oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/nats-streaming/nats-streaming-deployment.yaml
oc -n sysdigcloud apply -f datastores/as_kubernetes_pods/manifests/nats-streaming/nats-streaming-service.yaml
Then deploy the backend deployment sets (worker, collector, and API). Pause for 60 seconds after creating the API deployment.
oc -n sysdigcloud apply -f sysdigcloud/api-deployment.yaml
oc -n sysdigcloud apply -f sysdigcloud/openshift/openshift-collector-deployment.yaml
oc -n sysdigcloud apply -f sysdigcloud/worker-deployment.yaml
Create the service for the API and collector:
oc -n sysdigcloud apply -f sysdigcloud/api-headless-service.yaml
oc -n sysdigcloud apply -f sysdigcloud/openshift/openshift-collector-service.yaml
For Sysdig Secure
Wait for the API, worker, and collector to come up before proceeding. Then create the anchore-engine deployments and service (used in scanning):
oc -n sysdigcloud apply -f sysdigcloud/anchore-service.yaml
oc -n sysdigcloud apply -f sysdigcloud/anchore-core-config.yaml
oc -n sysdigcloud apply -f sysdigcloud/anchore-core-deployment.yaml
oc -n sysdigcloud apply -f sysdigcloud/anchore-worker-config.yaml
oc -n sysdigcloud apply -f sysdigcloud/anchore-worker-deployment.yaml
Wait 60 seconds to ensure the core deployment is in Running status, then deploy the rest of the Secure-related yamls:
oc -n sysdigcloud apply -f sysdigcloud/scanning-service.yaml
oc -n sysdigcloud apply -f sysdigcloud/scanning-api-deployment.yaml
oc -n sysdigcloud apply -f sysdigcloud/scanning-alertmgr-service.yaml
oc -n sysdigcloud apply -f sysdigcloud/scanning-alertmgr-deployment.yaml
(Sysdig Secure only) Create services, deployments, and a janitor job for the activity audit and policy advisor features:
oc -n sysdigcloud apply -f sysdigcloud/policy-advisor-service.yaml
oc -n sysdigcloud apply -f sysdigcloud/activity-audit-api-service.yaml
oc -n sysdigcloud apply -f sysdigcloud/activity-audit-api-deployment.yaml
oc -n sysdigcloud apply -f sysdigcloud/policy-advisor-deployment.yaml
oc -n sysdigcloud apply -f sysdigcloud/activity-audit-worker-deployment.yaml
oc -n sysdigcloud apply -f sysdigcloud/activity-audit-janitor-cronjob.yaml
Configure Access for Connectivity to the Cluster
Apply the appropriate ingress yaml. (The API_DNS_NAME was entered in step 7 of Step 2: Configure Backend Components.) This configures the route to the Sysdig UI.
For Sysdig Monitor
oc -n sysdigcloud apply -f sysdigcloud/api-ingress.yaml
With Sysdig Secure:
oc -n sysdigcloud apply -f sysdigcloud/api-ingress-with-secure.yaml
Configure connectivity to the collector for the agent:
oc -n sysdigcloud apply -f sysdigcloud/openshift/openshift-collector-router.yaml
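Once the ingress and router objects are applied, a hedged check (sketch only; it assumes the routes were admitted and the DNS names already resolve) is to list the exposed objects and probe the API endpoint:

```bash
# List the ingress and route objects that now expose the API/UI and collector.
oc -n sysdigcloud get ingress,routes

# Probe the API endpoint; -k skips certificate verification in case self-signed certs are used.
curl -sk -o /dev/null -w '%{http_code}\n' https://<API_DNS_NAME>:443
```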
Apply Configuration Changes
Replace kubectl with oc for OpenShift.
Update the Config Map
There are two ways to change the original installation parameters in the config map: edit or overwrite.
To edit the config map, run the following command:
kubectl edit configmap/sysdigcloud-config --namespace sysdigcloud
A text editor is presented with the config map to be edited. Enter parameters as needed, then save and quit.
Then restart the config map (below).
To overwrite the config map with a version edited on the client side (e.g., to keep it synced in a git repository), use the following command:
kubectl replace -f sysdigcloud/config.yaml --namespace sysdigcloud
Then restart the config map (below).
Restart Configmap
After updating the configmap, the Sysdig components must be restarted for the changed parameters to take effect. This can be done by forcing a rolling update of the deployments.
A possible way to do so is to change something innocuous, which forces a rolling update. E.g.:
kubectl -n sysdigcloud patch deployment [deployment] -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%s')\"}}}}}"
Replace kubectl with oc for OpenShift.
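As a concrete sketch (it enumerates whatever deployments exist in the namespace rather than assuming specific names), the same annotation patch can be applied to every backend deployment in one loop:

```bash
# Touch an innocuous annotation on each deployment to trigger a rolling update.
for d in $(oc -n sysdigcloud get deployments -o name); do
  oc -n sysdigcloud patch "$d" -p \
    "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%s')\"}}}}}"
done
```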