          Configure Monitoring Integrations

          Monitoring Integration provides an at-a-glance summary of the workloads running in your infrastructure and deeper insight into the health and performance of your services across platforms and the cloud. You can easily identify the workloads in your team scope and the services discovered (such as etcd) within each workload, and configure the Prometheus exporter integration to collect and visualize time series metrics. Monitoring Integration also powers Alerts Library.

          The following statuses indicate the state of each service integration:

          • Reporting Metrics: The integration is configured correctly and is reporting metrics.

          • Needs Attention: An integration has stopped working and is no longer reporting metrics or requires some other type of attention.

          • Pending Metrics: An integration has recently been configured and is waiting to receive metrics.

          • Configure Integration: The integration needs to be configured, and therefore no metrics are reported.

          Access Monitoring Integrations

          1. Log in to Sysdig Monitor.

          2. Click Monitoring Integration in the management section of the left-hand sidebar.

            The Integrations page is displayed. Continue with Configure an Integration.

          Configure an Integration

          1. Locate the service that you want to configure an integration for. To do so, identify the workload and drill down to the grouping where the service is running.

            To locate the service, you can use one of the following:

            • Text search
            • Type filtering
            • Left navigation to filter the workload and then use text search or type filtering
            • Use the Configure Integration option at the top, and locate the service using text search or type filtering
          2. Click Configure Integration.

            1. Click Start Installation.
            2. Review the prerequisites.
            3. Do one of the following:
              • Dry Run: Use the kubectl command to install the service. Follow the on-screen instructions to complete the tasks successfully.
              • Patch: Install directly on your workload. Follow the on-screen instructions to complete the tasks successfully.
              • Manual: Use an exporter and install the service manually. Click Documentation to learn more about the service exporter and how to integrate it with Sysdig Monitor.
          3. Click Validate to validate the installation.

          4. Make sure that the wizard shows the Installation Complete screen.

          5. Click Close to close the window.

          Show Unidentified Workloads

          The services that Sysdig Monitor cannot discover can technically still be monitored through the Unidentified Workloads option. You can view the workloads with these unidentified services or applications and see their status. To do so, use the Unidentified Workloads slider at the top right corner of the Integrations page.

          1 - Guidelines for Monitoring Integrations

          You are directed to this page because your agent deployment includes a configuration that does one of the following:

          • Prohibits the use of Monitoring Integrations
          • Affects the metrics you are already collecting


          • Upgrade the Sysdig agent to v12.0.0.

          • If you have clusters with more than 50 nodes and you don’t have the prom_service_discovery option enabled:

            • Enabling the latest Prometheus features might create an additional connection to the Kubernetes API server from each Sysdig agent in your environment. The surge in agent connections can increase the CPU and memory load in your API servers. Therefore, ensure that your API servers are suitably sized to handle the increased load in large clusters.
            • If you encounter any problems, contact Sysdig Support.
          • Remove the following manual configurations in the dragent.yaml file because they might interfere with those provided by Sysdig:

            • use_promscrape
            • promscrape_fastproto
            • prom_service_discovery
            • prometheus.max_metrics
            • prometheus.ingest_raw
            • prometheus.ingest_calculated
          • The sysdig_sd_configs configuration is no longer supported. Remove the existing prometheus.yaml if it includes the sysdig_sd_configs configuration.
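
          As a sketch, the legacy options listed above might appear in a dragent.yaml like the following (the values and the nesting of the prometheus.* settings are illustrative; check your own file for the exact layout):

          ```yaml
          # Illustrative dragent.yaml fragment: remove these manually-set
          # options, as they can interfere with the configuration provided
          # by Sysdig.
          use_promscrape: true
          promscrape_fastproto: true
          prom_service_discovery: true
          prometheus:
            max_metrics: 8000
            ingest_raw: true
            ingest_calculated: true
          ```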

          If you are not currently using Prometheus metrics in Sysdig Monitor, you can skip the following steps:

          • If you are using a custom Prometheus process_filter in dragent.yaml to trigger scraping, see Migrating from Promscrape V1 to V2.

          • If you are using service annotations or container labels to find scrape targets, you may need to create new scrape_configs in prometheus.yaml, preferably based on Kubernetes pods service discovery. This configuration can be complicated in certain environments, and therefore we recommend that you contact Sysdig Support for help.

          2 - Migrating from Promscrape V1 to V2

          Promscrape is the lightweight Prometheus server in the Sysdig agent. An updated version of promscrape, named Promscrape V2, is available. This configuration is controlled by the prom_service_discovery parameter in the dragent.yaml file. To use the latest features, such as Service Discovery and Monitoring Integrations, you need to have this option enabled in your environment.

          Compare Promscrape V1 and V2

          The main difference between V1 and V2 is how scrape targets are determined.

          In V1, targets are found through process-filtering rules configured in dragent.yaml or dragent.default.yaml (if no rules are given in dragent.yaml). The process-filtering rules are applied to all the running processes on the host. Matches are made based on process attributes, such as the process name or the TCP ports being listened on, as well as associated contexts from Docker or Kubernetes, such as container labels or Kubernetes annotations.

          With Promscrape V2, scrape targets are determined by scrape_configs fields in a prometheus.yaml file (or the prometheus-v2.default.yaml file if no prometheus.yaml exists). Because promscrape is adapted from the open-source Prometheus server, the scrape_config settings are compatible with the normal Prometheus configuration. For more information, see Configuration.

          Migrate Using Default Configuration

          The default configuration for Promscrape V1 triggers scraping based on standard Kubernetes pod annotations and container labels. The default configuration for V2 currently triggers scraping only based on the standard Kubernetes pod annotations, leveraging the native Prometheus Kubernetes service discovery. The following annotations control scraping:

          • prometheus.io/scrape: Required field. Set to true to enable scraping of the pod.
          • prometheus.io/port: The port number to scrape. Optional. It will scrape all pod-registered ports if omitted.
          • prometheus.io/scheme: Optional. The default is http.
          • prometheus.io/path: The URL path to scrape. Optional. The default is /metrics.
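
          As a sketch, these standard annotations would be set in the pod template metadata like this (the port value is illustrative):

          ```yaml
          # Illustrative pod template metadata using the standard
          # Prometheus annotations
          metadata:
            annotations:
              prometheus.io/scrape: "true"   # required to enable scraping
              prometheus.io/port: "9100"     # optional; all registered ports are scraped if omitted
              prometheus.io/scheme: "http"   # optional; defaults to http
              prometheus.io/path: "/metrics" # optional; defaults to /metrics
          ```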

          • Users running Kubernetes with the Promscrape V1 default rules, where scraping is triggered by pod annotations, need not take any action to migrate to V2. The migration happens automatically.

          • Users operating non-Kubernetes environments might need to continue using V1 for now, depending on how scraping is triggered. As of today, promscrape.v2 doesn’t support leveraging container and Docker labels to discover Prometheus metrics endpoints. If your environment relies on them, define static jobs with the IP:port to be scraped.
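
          For such non-Kubernetes environments, a static job is a minimal sketch like the following, using the standard Prometheus static_configs mechanism (the job name, IP, and port are placeholders):

          ```yaml
          scrape_configs:
            - job_name: my-static-app          # placeholder job name
              static_configs:
                - targets: ["10.0.0.12:9100"]  # IP:port of the endpoint to scrape
          ```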

          Migrate Using Custom Rules

          If you rely on custom process_filter rules to collect metrics, use standard Prometheus configuration syntax to scrape the endpoints. We recommend one of the following:

          • Adopt the standard approach of adding the standard Prometheus annotations to your pods. For more information, see Migrate Using Default Configuration.
          • Write a Prometheus scrape_config by using Kubernetes pods service discovery and use the appropriate pod metadata to trigger the scrapes.

          See the examples below for converting your process_filter rules to Prometheus relabeling terminology:

          • A rule matching a Kubernetes pod annotation converts to:

            - action: keep
              source_labels: [__meta_kubernetes_pod_annotation_sysdig_com_test]
              regex: true

          • A rule matching a Kubernetes pod label converts to:

            - action: keep
              source_labels: [__meta_kubernetes_pod_label_app]
              regex: 'sysdig'

          • Rules matching process attributes, such as the following, are not supported:

            - include:
                process.cmdline: sysdig-agent

          • A rule matching a port:

            - include:
                port: 8080

            converts to:

            - action: keep
              source_labels: [__meta_kubernetes_pod_container_port_number]
              regex: '8080'

          • A rule matching a container image, such as the following, is not supported:

            - include:
                container.image: sysdig-agent

          • A rule matching a container name converts to:

            - action: keep
              source_labels: [__meta_kubernetes_pod_container_name]
              regex: 'sysdig-agent'

          • A rule matching an appcheck, such as the following, is not supported. Appchecks are not compatible with Promscrape V2. See Configure Monitoring Integrations for supported integrations.

            - include:
                appcheck.match: sysdig

          3 - Configure Default Integrations

          Each Monitoring Integration holds a specific job that scrapes its metrics and sends them to Sysdig Monitor. To optimize metrics scraping for building dashboards and alerts in Sysdig Monitor, Sysdig offers default jobs for these integrations. Periodically, the Sysdig agent connects with Sysdig Monitor, retrieves the default jobs, and makes the Monitoring Integrations available for use. See the list of the available integrations and corresponding jobs below.

          You can find all the jobs in the /opt/draios/etc/promscrape.yaml file in the sysdig-agent container in your cluster.

          Supported Monitoring Integrations

          Integration: Job name in config file

          • Apache: apache-exporter-default, apache-grok-default
          • Harbor: harbor-exporter-default, harbor-core-default, harbor-registry-default, harbor-jobservice-default
          • Kubernetes Control Plane: kube-dns-default, kube-controller-manager-default, kube-scheduler-default
          • Kubernetes Etcd: etcd-default
          • Kubernetes Persistent Volume Claim: k8s-pvc-default
          • Nginx Ingress: nginx-ingress-default
          • Open Policy Agent - Gatekeeper: opa-default
          • Prometheus Default Job: k8s-pods
          • Sysdig Admission Controller: sysdig-admission-controller-default

          Enable and Disable Integrations

          Some integrations are disabled by default due to the potential high cardinality of their metrics. To enable them, contact Sysdig Support. The same applies if you want an integration disabled by default in all your clusters.

          Customize a Default Job

          The default jobs offered by Sysdig for integrations are optimized to scrape the metrics for building dashboards and alerts in Sysdig Monitor. Instead of processing all the metrics available, you can determine which metrics to include or exclude for your requirements. To do so, you can overwrite the default configuration in the prometheus.yaml file. The prometheus.yaml file is located in the sysdig-agent ConfigMap in the sysdig-agent namespace.

          You can overwrite the default job for a specific integration by adding a new job to the prometheus.yaml file with the same name as the default job that you want to replace. For example, to replace the default exporter job for the Apache integration, create a new job named apache-exporter-default. Jobs defined by the user have precedence over the default ones.
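
          As a sketch, overriding a default job means defining a job with the matching name in prometheus.yaml. For example, to replace the etcd-default job from the list above (the metric filter shown is illustrative; the scrape and relabeling details of the original job are omitted):

          ```yaml
          scrape_configs:
            - job_name: etcd-default      # same name as the default job, so it takes precedence
              metric_relabel_configs:
                - action: drop            # illustrative: exclude metrics you do not need
                  source_labels: [__name__]
                  regex: 'etcd_debugging_.*'
          ```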

          See Supported Monitoring Integrations for the complete list of integrations and corresponding job names.

          Use Sysdig Annotations in Exporters

          Sysdig provides a set of Helm charts that help you configure the exporters for the integrations. For more information on installing Monitoring Integrations, see the Monitoring Integrations option in Sysdig Monitor. Additionally, the Helm charts are publicly available in the Sysdig Helm repository.

          If exporters are already installed in your cluster, you can use the standard Prometheus annotations and the Sysdig agent will automatically scrape them.

          For example, if you use the standard Prometheus annotations, the incoming metrics will have the information about the pod that generates them.
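
          A minimal sketch of such annotations on the application pod (the port value is illustrative):

          ```yaml
          metadata:
            annotations:
              prometheus.io/scrape: "true"
              prometheus.io/port: "8080"   # illustrative application port
          ```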


          If you use an exporter, the incoming metrics will be associated with the exporter pod, not the application pod. To change this behavior, you can use the Sysdig-provided annotations and configure the exporter with special settings on the agent.

          Annotate the Exporter

          Use the following annotations to configure the exporter:

          • port: The port to scrape for metrics on the exporter.
          • target_ns: The namespace of the workload corresponding to the application (not the exporter).
          • target_workload_type: The type of the workload of the application (not the exporter). The possible values are deployment, statefulset, and daemonset.
          • target_workload_name: The name of the workload corresponding to the application (not the exporter).
          • integration_type: The type of the integration. The job created in the Sysdig agent uses this value to find the exporter.
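
          As a sketch, these annotations would be set on the exporter pod template using the promcat.sysdig.com prefix that the relabeling rules in the job template under Configure a New Job match on (all values are illustrative):

          ```yaml
          metadata:
            annotations:
              promcat.sysdig.com/port: "9187"                    # port to scrape on the exporter
              promcat.sysdig.com/target_ns: my-app-namespace     # namespace of the application
              promcat.sysdig.com/target_workload_type: deployment
              promcat.sysdig.com/target_workload_name: my-app    # workload of the application
              promcat.sysdig.com/integration_type: my-integration
          ```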

          Configure a New Job

          Edit the prometheus.yaml file to configure a new job in the Sysdig agent. The file is located in the sysdig-agent ConfigMap in the sysdig-agent namespace.

          You can use this example template:

          - job_name: my-integration
            tls_config:
              insecure_skip_verify: true
            kubernetes_sd_configs:
              - role: pod
            relabel_configs:
              - action: keep
                source_labels: [__meta_kubernetes_pod_host_ip]
                regex: __HOSTIPS__
              - action: drop
                source_labels: [__meta_kubernetes_pod_annotation_promcat_sysdig_com_omit]
                regex: true
              - action: keep
                source_labels:
                  - __meta_kubernetes_pod_annotation_promcat_sysdig_com_integration_type
                regex: 'my-integration' # Use here the integration type that you defined in your annotations
              - action: replace
                source_labels: [__meta_kubernetes_pod_annotation_promcat_sysdig_com_target_ns]
                target_label: kube_namespace_name
              - action: replace
                source_labels: [__meta_kubernetes_pod_annotation_promcat_sysdig_com_target_workload_type]
                target_label: kube_workload_type
              - action: replace
                source_labels: [__meta_kubernetes_pod_annotation_promcat_sysdig_com_target_workload_name]
                target_label: kube_workload_name
              - action: replace
                replacement: true
                target_label: sysdig_omit_source
              - action: replace
                source_labels: [__address__, __meta_kubernetes_pod_annotation_promcat_sysdig_com_port]
                regex: ([^:]+)(?::\d+)?;(\d+)
                replacement: $1:$2
                target_label: __address__
              - action: replace
                source_labels: [__meta_kubernetes_pod_uid]
                target_label: sysdig_k8s_pod_uid
              - action: replace
                source_labels: [__meta_kubernetes_pod_container_name]
                target_label: sysdig_k8s_pod_container_name

          Exclude a Deployment from Being Scraped

          If you want the agent to exclude a deployment from being scraped, you can use the following annotation:
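
          Based on the drop rule in the job template above, which keys on the promcat.sysdig.com/omit annotation, the annotation would look like this on the deployment's pod template:

          ```yaml
          metadata:
            annotations:
              promcat.sysdig.com/omit: "true"  # pods with this annotation are dropped from scraping
          ```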


          4 - Troubleshooting Monitoring Integrations

          Review the common troubleshooting scenarios you might encounter while getting a Monitoring integration working, and see what you can do if an integration does not report metrics after installation.

          Check Prerequisites

          Some integrations require secrets and other resources to be available in the correct namespace in order to work. Integrations such as database exporters might require you to create a user and grant it special permissions in the database so that the exporter can connect with the endpoint and generate metrics.

          Ensure that the prerequisites of the integration are met before proceeding with installation.

          Verify Exporter Is Running

          If the integration is an exporter, ensure that the pods corresponding to the exporter are running correctly. You can check this after installing the integration. If the exporter is installed as a sidecar of the application (such as Nginx), verify that the exporter container is added to the pod.

          You can check the status of the pods with the Kubernetes dashboard Pods Status and Performance or with the following command:

          kubectl get pods --namespace=<namespace>

          Additionally, if the container has problems and cannot start, check the description of the pod for error messages:

          kubectl describe pod <pod-name> --namespace=<namespace>

          Verify Metrics Are Generated

          Check whether a running exporter is generating metrics by accessing the metrics endpoint:

          kubectl port-forward <pod-name> <local-port>:<pod-port> --namespace=<namespace>
          curl http://localhost:<local-port>/metrics

          This is also valid for applications that don’t need an exporter to generate their own metrics.

          If the exporter is not generating metrics, there could be problems accessing or authenticating with the application. Check the logs associated with the pods:

          kubectl logs <pod-name> --namespace=<namespace>

          If the application is instrumented and is not generating metrics, check if the Prometheus metrics option or the module is activated.

          Verify Sysdig Agent Is Scraping Metrics

          If an application doesn’t need an exporter to generate metrics, check if it has the default Prometheus annotations.

          Additionally, you can check if the Sysdig agent can access the metrics endpoint. To do so, use the following command:

          kubectl exec <sysdig-agent-pod-name> --namespace=sysdig-agent -- /bin/sh -c "curl http://<exporter-pod-ip>:<pod-port>/metrics"

          Select the Sysdig agent pod on the same node as the pod to be scraped.