Review the diagram and component descriptions. When installing on-premises, you can decide where to deploy various components.
Sysdig collects monitoring and security information from all the target entities. To achieve this, one Sysdig agent should be deployed on each host. These hosts can be:
The nodes that make up a Kubernetes or OpenShift cluster
Virtual machines or bare metal
Living in a cloud environment (for example, Amazon Web Services (AWS), Google Cloud, IBM Cloud, or Azure) or on the user's premises
The Sysdig agent can itself be installed as a container, using a Helm chart, a Kubernetes operator, and so on.
Once the agent is installed on a host, it automatically starts collecting information from the host itself and from:
The running containers and the container runtime
The orchestration API (Kubernetes, OpenShift, and so on)
Metrics from defined Prometheus endpoints, auto-detected JMX sources, and StatsD
Configured integrations
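For the StatsD case, an application only needs to emit plain StatsD lines over UDP for the agent to pick them up. A minimal Python sketch, assuming the default StatsD port (8125) and a hypothetical metric name:

```python
import socket

def send_statsd(metric: str, value: int, metric_type: str = "c",
                host: str = "127.0.0.1", port: int = 8125) -> str:
    """Format a StatsD line and send it over UDP (fire-and-forget)."""
    line = f"{metric}:{value}|{metric_type}"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(line.encode("ascii"), (host, port))
    sock.close()
    return line

# Example: count one processed order (hypothetical metric name).
print(send_statsd("orders.processed", 1))  # orders.processed:1|c
```

Because StatsD uses UDP, the send succeeds even if no receiver is listening yet, which is why instrumenting applications this way adds no coupling to the agent.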
The Sysdig agent maintains a permanent communication channel with the Sysdig backend and sends messages containing the monitoring metrics, infrastructure metadata, and security events. The channel is secured with standard TLS encryption and transports data as binary messages. The agent uses this channel both to transmit data and to receive additional configuration from the backend, such as runtime security policies or benchmarks.
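The agent's message format is proprietary, but the TLS layer is standard. As a rough illustration only (the real agent is not written in Python, and the collector hostname below is a placeholder), a client would secure such a channel like this:

```python
import ssl

# Client-side TLS context with certificate verification enabled, as any
# agent-to-collector channel would use. Illustrative sketch only.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2

# To connect, agent-side code would wrap a TCP socket, e.g.:
#   tls_sock = context.wrap_socket(raw_sock, server_hostname="collector.example.com")
# and then exchange binary messages over tls_sock.
```

With `create_default_context`, certificate and hostname verification are on by default, so the agent-side code cannot silently talk to an unauthenticated backend.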
For the Sysdig backend, you have a choice between using the SaaS version, managed transparently by Sysdig, or installing it directly on your premises. Neither choice affects the operations described below.
Once the agent messages are received by the backend, they are processed and the data they carry is made available to the platform, for example as time series, infrastructure and security events, and infrastructure metadata.
The main functions of the backend platform include:
Extraction and post-processing of the metric data from the agent, so that full time-series, with all the necessary infrastructure metadata, are available to the user
Maintenance of the infrastructure metadata (most notably Kubernetes state), so that all events and time series can be enriched and correctly grouped
Storage of time-series and event data
Processing of time-series data to calculate alert triggers
Queuing the security events triggered by the agents so they can be shown on the event feed, sent through the configured notification channels and alerts, and forwarded via the Event Forwarder to external platforms such as Splunk, Syslog, or IBM Multicloud Manager (MCM) / QRadar
Aggregating and post-processing other security data such as container fingerprints that will be used to generate container profiles, or security benchmark results
The Sysdig platform then stores this post-processed data in a set of internal databases that will be combined by the API service to create the data views, such as dashboards, event feeds, vulnerability reports, and security benchmarks.
The Sysdig platform provides several ways to consume and present its internal data. All APIs are RESTful, HTTP JSON-based, and secured using TLS. The same APIs are used to power the Sysdig front end, as well as any API clients (such as sdc-cli).
User and Team management API
Data API (proprietary Sysdig API for querying time-series data)
Image Scanning API
Security Events API
Activity Audit API
Secure Overview API
PromQL API: Prometheus-compatible HTTP API for querying time-series data
These enable different use cases:
User access to the platform via the Sysdig user interface
Programmatic input and extraction of data, for example:
Automatic user creation
Terraform scripts to save or recover configuration state
Inline scanning to push scanning results from the CI/CD pipeline
Instrumentation using the sdc-cli.
PromQL API interface that can be used to connect any PromQL-compatible solution, such as Grafana
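Since the PromQL API follows the standard Prometheus HTTP API, a client only needs a base URL and an API token to build instant queries. A hedged Python sketch (the backend URL, token, and metric name below are placeholders; check your deployment's documentation for the exact endpoint):

```python
from urllib.parse import urlencode
from urllib.request import Request

BASE = "https://sysdig.example.com"   # placeholder backend URL
TOKEN = "<your-api-token>"            # Sysdig API token (placeholder)

def build_promql_query(promql: str, base: str = BASE) -> Request:
    """Build an instant-query request against the Prometheus-compatible API."""
    # /api/v1/query is the standard Prometheus instant-query path.
    url = f"{base}/prometheus/api/v1/query?{urlencode({'query': promql})}"
    return Request(url, headers={"Authorization": f"Bearer {TOKEN}"})

req = build_promql_query("avg(sysdig_container_cpu_used_percent)")
print(req.full_url)
```

The same request shape is what a PromQL-compatible tool such as Grafana issues under the hood; sending it with `urllib.request.urlopen(req)` would return the standard Prometheus JSON response envelope.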