
                    Collect Prometheus Metrics

Sysdig supports collecting, storing, and querying Prometheus native metrics and labels. You can use Sysdig in the same way that you use Prometheus, leveraging the Prometheus Query Language (PromQL) to create dashboards and alerts. Sysdig is compatible with the Prometheus HTTP API, so you can query your monitoring data programmatically using PromQL and extend Sysdig to other platforms such as Grafana.

From a metric collection standpoint, a lightweight Prometheus server is embedded directly in the Sysdig agent to facilitate metric collection. It supports targets, instances, and jobs, with filtering and relabeling using Prometheus syntax. You can configure the agent to identify the processes that expose Prometheus metric endpoints on its own host and send the metrics to the Sysdig collector for storage and further processing.

This document uses metric and time series interchangeably. The descriptions of configuration parameters refer to “metric”, but in strict Prometheus terms those limits apply to time series. That is, applying a limit of 100 metrics means applying a limit of 100 time series, which do not necessarily share the same metric name.
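For illustration, the exporter output below (reused from a sample shown later in this document, in the standard Prometheus exposition format) contains a single metric name, random_bucket, but three distinct time series, one per label combination. A limit of two "metrics" would already be exceeded by this output:

  random_bucket{le="0.005",} 6.0
  random_bucket{le="0.01",} 17.0
  random_bucket{le="0.025",} 51.0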

You do not need to install the Prometheus product itself to collect Prometheus metrics.

The following list summarizes Sysdig agent versions and their compatibility with Prometheus features:

• Sysdig agent v10.5.0 and above: Promscrape V2, which supports Prometheus native service discovery, is used. This option is controlled by the prom_service_discovery parameter in the dragent.yaml file. Additionally, a default prometheus.yaml file with Kubernetes pod discovery rules is included for use when native Prometheus service discovery is enabled.

• Sysdig agent v10.0.0 and above: promscrape, a lightweight Prometheus server, is used by default for scraping Prometheus metrics. This feature is controlled by the use_promscrape parameter and is enabled by default.

• Sysdig agent v9.8.0 to v10.0: promscrape, a lightweight Prometheus server, was introduced in v9.8.0 to scrape Prometheus metrics. You must enable use_promscrape in the dragent.yaml file to use this method.

• Sysdig agent v0.70.0 and above: Provides rich support for automatically collecting metrics from Prometheus exporters.

                    The following topics describe in detail how to configure the Sysdig agent for service discovery, metrics collection, and further processing.

                    Learn More

See the following blog posts for additional context on Prometheus metrics and how such metrics are typically used.

1 - Working with Prometheus Metrics

The Sysdig agent uses its visibility into all running processes (at both the host and container levels) to find eligible targets for scraping Prometheus metrics. By default, no scraping is attempted. Once the feature is enabled, the agent assembles a list of eligible targets, applies filtering rules, and sends the resulting metrics to the Sysdig collector.

                    Latest Prometheus Features

Sysdig agent v10.0 or above is required for the following capabilities:

                    • New capabilities of using Prometheus data:

                      • Ability to visualize data using PromQL queries. See Using PromQL.

                      • Create alerts from PromQL-based Dashboards. See Create Panel Alerts.

• Backward compatibility for v2 dashboards and alerts.

  The new PromQL data cannot be visualized with the Dashboard v2 histogram. Use time-series based visualizations for histogram metrics.

                    • New metrics limit per agent

                    • 10-second data granularity

                    • Higher retention rate on the new metric store.

                    • New metrics naming convention:

• The legacy Prometheus metrics are available with the prefix promlegacy. The naming convention is promlegacy.<metric>. For example, cortex_build_info is renamed promlegacy.cortex_build_info.

                    Prerequisites and Guidelines

• Sysdig agent v10.0.0 and above is required for the latest Prometheus features.

• The Prometheus feature must be enabled in the dragent.yaml file:

                      prometheus:
                        enabled: true
                      

                      See Setting up the Environment for more information.

• The endpoints of the target should be reachable over a TCP connection from the agent. The agent scrapes a target, remote or local, specified by the IP:port or the URL in dragent.yaml, as in the sketch below.
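A minimal dragent.yaml sketch that points the agent at a specific endpoint, using the host, port, and path settings described under conf later in this document. The process name, IP address, and port here are hypothetical:

  prometheus:
    enabled: true
    process_filter:
      - include:
          process.name: java        # hypothetical rule matching a local process
          conf:
            host: 192.168.1.101     # target to scrape; defaults to localhost if omitted
            port: 9090              # scrape only this port
            path: "/metrics"        # endpoint to scrape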

                    Service Discovery

                    To use native Prometheus service discovery, enable Promscrape V2 as described in Enable Prometheus Native Service Discovery. This section covers the Sysdig way of service discovery that involves configuring process filters in the Sysdig agent.

The way service discovery works in the Sysdig agent differs from that of the Prometheus server. While the Prometheus server has built-in integration with several service discovery mechanisms and reads its configuration settings from the prometheus.yml file, the Sysdig agent auto-discovers any process (exporter or instrumented) that matches the specifications in the dragent.yaml file and instructs the embedded lightweight Prometheus server to retrieve the metrics from it.

                    The lightweight Prometheus server in the agent is named promscrape and is controlled by the flag of the same name in the dragent.yaml file. See Configuring Sysdig Agent for more information.

                    Unlike the Prometheus server that can scrape processes running on all the machines in a cluster, the agent can scrape only those processes that are running on the host that it is installed on.

                    Within the set of eligible processes/ports/endpoints, the agent scrapes only the ports that are exporting Prometheus metrics and will stop attempting to scrape or retry on ports based on how they respond to attempts to connect and scrape them. It is therefore strongly recommended that you create a configuration that restricts the process and ports for attempted scraping to the minimum expected range for your exporters. This minimizes the potential for unintended side-effects in both the Agent and your applications due to repeated failed connection attempts.

The end-to-end metric collection can be summarized as follows:

                    1. A process is determined to be eligible for possible scraping if it positively matches against a series of Process Filter include/exclude rules. See Process Filter for more information.

2. The Agent will then attempt to scrape an eligible process at a /metrics endpoint on all of its listening TCP ports, unless additional configuration is present to restrict scraping to a subset of ports and/or a different endpoint name.

                    3. As of agent v9.8.0, filtering metrics at ingestion can be enabled. If enabled, filtering rules are applied at ingestion as it receives the metrics. See Filtering Prometheus Metrics for more information.

4. Upon receiving the metrics, the agent applies its filtering rules before sending the metrics to the Sysdig collector.

                    The metrics ultimately appear in the Sysdig Monitor Explore interface in the Prometheus section.

2 - Setting up the Environment

                    Quick Start For Kubernetes Environments

Prometheus users who are already leveraging Kubernetes Service Discovery (specifically, the approach in this sample prometheus-kubernetes.yml) may already have Annotations attached to their Pods that mark them as eligible for scraping. Such environments can quickly begin scraping the same metrics using the Sysdig Agent in two easy steps.

                    1. Enable the Prometheus metrics feature in the Sysdig Agent. Assuming you are deploying using DaemonSets, the needed config can be added to the Agent’s dragent.yaml by including the following in your DaemonSet YAML (placing it in the env section for the sysdig-agent container):

                      - name: ADDITIONAL_CONF
                        value: "prometheus:\n  enabled: true"
                      
2. Ensure the Kubernetes Pods that contain your Prometheus exporters have been deployed with the following Annotations to enable scraping (substituting the listening exporter-TCP-port):

                      spec:
                        template:
                          metadata:
                            annotations:
                              prometheus.io/scrape: "true"
                              prometheus.io/port: "exporter-TCP-port"
                      

                      The configuration above assumes your exporters use the typical endpoint called /metrics. If an exporter is using a different endpoint, this can also be specified by adding the following additional optional Annotation, substituting the exporter-endpoint-name:

                      prometheus.io/path: "/exporter-endpoint-name"
                      

                    If you try this Kubernetes Deployment of a simple exporter, you will quickly see auto-discovered Prometheus metrics being displayed in Sysdig Monitor. You can use this working example as a basis to similarly Annotate your own exporters.

                    If you have Prometheus exporters not deployed in annotated Kubernetes Pods that you would like to scrape, the following sections describe the full set of options to configure the Agent to find and scrape your metrics.

                    Quick Start for Container Environments

                    In order for Prometheus scraping to work in a Docker-based container environment, set the following labels to the application containers, substituting <exporter-port> and <exporter-path> with the correct port and path where metrics are exported by your application:

                    • io.prometheus.scrape=true

                    • io.prometheus.port=<exporter-port>

                    • io.prometheus.path=<exporter-path>

                    For example, if mysqld-exporter is to be scraped, spin up the container as follows:

docker run -d -l io.prometheus.scrape=true -l io.prometheus.port=9104 -l io.prometheus.path=/metrics mysqld-exporter
                    
                    

3 - Configuring Sysdig Agent

                    This feature is not supported with Promscrape V2. For information on different versions of Promscrape and migrating to the latest version, see Migrating from Promscrape V1 to V2.

As is typical for the agent, the default configuration for this feature is specified in dragent.default.yaml, and you can override the defaults by setting parameters in dragent.yaml. For each parameter you do not set in dragent.yaml, the default in dragent.default.yaml remains in effect.

                    Main Configuration Parameters

• prometheus (default: see below): Turns Prometheus scraping on and off. See the prometheus section below.

• process_filter (default: see below): Specifies which processes may be eligible for scraping. See the Process Filter section below.

• use_promscrape (default: see below): Determines whether to use promscrape for scraping Prometheus metrics. See the promscrape section below.

                    promscrape

                    Promscrape is a lightweight Prometheus server that is embedded with the Sysdig agent. The use_promscrape parameter controls whether to use it to scrape Prometheus endpoints.

Promscrape has two versions: Promscrape V1 and Promscrape V2. With V1, the Sysdig agent discovers scrape targets through the process_filter rules. With V2, promscrape itself discovers targets by using the standard Prometheus configuration, allowing the use of relabel_configs to find or modify targets.

                    For more information, see Filtering Prometheus Metrics.

• use_promscrape (default: true): Determines whether promscrape is used to scrape Prometheus endpoints.

                    prometheus

The prometheus section defines the behavior related to Prometheus metrics collection and analysis. It allows you to turn the feature on, set an agent-side limit on the number of metrics to be scraped, and determine whether to report histogram metrics and log failed scrape attempts.

• enabled (default: false): Turns Prometheus scraping on and off.

• interval (default: 10): How often, in seconds, the agent scrapes a port for Prometheus metrics.

• prom_service_discovery (default: true): Enables native Prometheus service discovery. If disabled, promscrape.v1 is used to scrape the targets. See Enable Prometheus Native Service Discovery. On agent versions prior to 11.2, the default is false.

• max_metrics (default: 1000): The maximum number of total Prometheus metrics that will be scraped across all targets. This value of 1000 is the per-agent maximum, and is a separate limit from other Custom Metrics (e.g. StatsD, JMX, and other Application Checks).

• timeout (default: 1): The amount of time, in seconds, the agent waits while scraping a Prometheus endpoint before timing out. As of agent v10.0, this parameter is used only when promscrape is disabled. Since promscrape is now the default, timeout can be considered deprecated; it still applies when you explicitly disable promscrape.

                    Process Filter

                    The process_filter section specifies which of the processes known by an agent may be eligible for scraping.

                    Note that once you specify a process_filter in your dragent.yaml, this replaces the entire Prometheus process_filter section (i.e. all the rules) shown in the dragent.default.yaml.

                    The Process Filter is specified in a series of include and exclude rules that are evaluated top-to-bottom for each process known by an Agent. If a process matches an include rule, scraping will be attempted via a /metrics endpoint on each listening TCP port for the process, unless a conf section also appears within the rule to further restrict how the process will be scraped (see conf below).

                    Multiple patterns can be specified in a single rule, in which case all patterns must match for the rule to be a match (AND logic).

                    Within a pattern value, simple “glob” wildcarding may be used, where * matches any number of characters (including none) and ? matches any single character. Note that due to YAML syntax, when using wildcards, be sure to enclose the value in quotes ("*").
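A minimal sketch of wildcard use in rules; both values are hypothetical:

  - include:
      process.cmdline: "*app.jar*"   # * matches any number of characters, including none
  - exclude:
      process.name: "java?"          # ? matches exactly one character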

                    The table below describes the supported patterns in Process Filter rules. To provide realistic examples, we’ll use a simple sample Prometheus exporter (source code here) which can be deployed as a container using the Docker command line below. To help illustrate some of the configuration options, this sample exporter presents Prometheus metrics on /prometheus instead of the more common /metrics endpoint, which will be shown in the example configurations further below.

                    # docker run -d -p 8080:8080 \
                        --label class="exporter" \
                        --name my-java-app \
                        luca3m/prometheus-java-app
                    
                    # ps auxww | grep app.jar
                    root     11502 95.9  9.2 3745724 753632 ?      Ssl  15:52   1:42 java -jar /app.jar --management.security.enabled=false
                    
                    # curl http://localhost:8080/prometheus
                    ...
                    random_bucket{le="0.005",} 6.0
                    random_bucket{le="0.01",} 17.0
                    random_bucket{le="0.025",} 51.0
                    ...
                    

• container.image: Matches if the process is running inside a container running the specified image.

    - include:
        container.image: luca3m/prometheus-java-app

• container.name: Matches if the process is running inside a container with the specified name.

    - include:
        container.name: my-java-app

• container.label.*: Matches if the process is running in a container that has a Label matching the given value.

    - include:
        container.label.class: exporter

• kubernetes.<object>.annotation.* / kubernetes.<object>.label.*: Matches if the process is attached to a Kubernetes object (Pod, Namespace, etc.) that is marked with an Annotation/Label matching the given value.

  Note: This pattern does not apply to the Docker-only command line shown above; it would instead apply if the exporter were installed as a Kubernetes Deployment using this example YAML.

  Note: See Kubernetes Objects, below, for information on the full set of supported Annotations and Labels.

    - include:
        kubernetes.pod.annotation.prometheus.io/scrape: true

• process.name: Matches the name of the running process.

    - include:
        process.name: java

• process.cmdline: Matches a command line argument.

    - include:
        process.cmdline: "*app.jar*"

• port: Matches if the process is listening on one or more TCP ports. The pattern for a single rule can specify a single port as shown in this example, or a single range (e.g. 8079-8081), but does not support comma-separated lists of ports/ranges.

  Note: This parameter is only used to confirm whether a process is eligible for scraping based on the ports it is listening on. For example, if a process listens on one port for application traffic and on a second port for exporting Prometheus metrics, you could specify the application port here (but not the exporting port) and the exporting port in the conf section (but not the application port); the process would be matched as eligible and the exporting port would be scraped.

    - include:
        port: 8080

• appcheck.match: Matches if an Application Check with the specified name or pattern is scheduled to run for the process.

    - exclude:
        appcheck.match: "*"
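To make the note on the port pattern concrete, here is a hedged sketch in which both port numbers are hypothetical; the include-level port only qualifies the process, while the conf-level port is the one actually scraped:

  - include:
      port: 8080         # application port, used only to match the process
      conf:
        port: 9102       # exporter port, the only port actually scraped
        path: "/metrics"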

Instead of the include examples shown above, each of which would have matched our process, the previously described ability to combine multiple patterns in a single rule means that the following, very strict configuration would also have matched:
                    - include:
                        container.image: luca3m/prometheus-java-app
                        container.name: my-java-app
                        container.label.class: exporter
                        process.name: java
                        process.cmdline: "*app.jar*"
                        port: 8080
                    

                    conf

Each include rule in the process_filter may include a conf portion that further describes how scraping will be attempted on the eligible process. If a conf portion is not included, scraping will be attempted at a /metrics endpoint on all listening ports of the matching process. The possible settings:

• port: Either a static number for a single TCP port to be scraped, or a container/Kubernetes Label name or Kubernetes Annotation specified in curly braces. If the process is running in a container marked with this Label, or is attached to a Kubernetes object (Pod, Namespace, etc.) marked with this Annotation/Label, scraping will be attempted only on the port specified as the value of the Label/Annotation.

  Note: The Label/Annotation name to match against is only the part after the container.label. or kubernetes.<object>.annotation. prefix; the curly braces and the prefix are not part of the name.

  Note: See Kubernetes Objects for information on the full set of supported Annotations and Labels.

  Note: If the exporter runs inside a container, specify the port number that the exporter process inside the container is listening on, not the port that the container exposes to the host.

    port: 8080
    - or -
    port: "{container.label.io.prometheus.port}"
    - or -
    port: "{kubernetes.pod.annotation.prometheus.io/port}"

• port_filter: A set of include and exclude rules that define the ultimate set of listening TCP ports on which scraping may be attempted for an eligible process. Note that the syntax differs from the port pattern option within the higher-level include rule in the process_filter: here a given rule can specify single ports, comma-separated lists of ports (enclosed in square brackets), or contiguous port ranges (without brackets).

    port_filter:
      - include: 8080
      - exclude: [9092,9200,9300]
      - include: 9090-9100

• path: Either the static specification of an endpoint to be scraped, or a container/Kubernetes Label name or Kubernetes Annotation specified in curly braces. If the process is running in a container marked with this Label, or is attached to a Kubernetes object (Pod, Namespace, etc.) marked with this Annotation/Label, scraping will be attempted via the endpoint specified as the value of the Label/Annotation.

  If path is not specified, or is specified but the Agent does not find the Label/Annotation attached to the process, the common Prometheus exporter default of /metrics is used.

  Note: The Label/Annotation name to match against is only the part after the container.label. or kubernetes.<object>.annotation. prefix; the curly braces and the prefix are not part of the name.

  Note: See Kubernetes Objects for information on the full set of supported Annotations and Labels.

    path: "/prometheus"
    - or -
    path: "{container.label.io.prometheus.path}"
    - or -
    path: "{kubernetes.pod.annotation.prometheus.io/path}"

• host: A hostname or IP address. The default is localhost.

    host: 192.168.1.101
    - or -
    host: subdomain.example.com
    - or -
    host: localhost

• use_https: When set to true, connectivity to the exporter will only be attempted through HTTPS instead of HTTP. It is false by default. (Available in Agent version 0.79.0 and newer.)

    use_https: true

• ssl_verify: When set to true, the server certificates are verified for an HTTPS connection. It is false by default. Verification was enabled by default before 0.79.0. (Available in Agent version 0.79.0 and newer.)

    ssl_verify: true
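Putting several of these settings together, here is a hedged sketch of a single include rule; the label names reuse the io.prometheus.* convention from the default configuration shown later, and the HTTPS settings are illustrative:

  process_filter:
    - include:
        container.label.io.prometheus.scrape: "true"
        conf:
          port: "{container.label.io.prometheus.port}"   # port read from a container Label
          path: "{container.label.io.prometheus.path}"   # endpoint read from a container Label
          use_https: true                                # scrape over HTTPS
          ssl_verify: false                              # skip certificate verification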

                    Authentication Integration

                    As of agent version 0.89, Sysdig can collect Prometheus metrics from endpoints requiring authentication. Use the parameters below to enable this function.

                    • For username/password authentication:

                      • username

                      • password

• For authentication using a token:

  • auth_token_path

• For certificate authentication with a certificate key:

  • auth_cert_path

  • auth_key_path

Token substitution is also supported for all the authorization parameters. For instance, a username can be taken from a Kubernetes annotation by specifying:

                    username: "{kubernetes.service.annotation.prometheus.openshift.io/username}"

                    conf Authentication Example

Below is an example dragent.yaml section showing all the Prometheus authentication configuration options, covering OpenShift, Kubernetes, and etcd.

                    In this example:

                    • The username/password are taken from a default annotation used by OpenShift.

                    • The auth token path is commonly available in Kubernetes deployments.

                    • The certificate and key used here for etcd may normally not be as easily accessible to the agent. In this case they were extracted from the host namespace, constructed into Kubernetes secrets, and then mounted into the agent container.

                    prometheus:
                      enabled: true
                      process_filter:
                        - include:
                            port: 1936
                            conf:
                                username: "{kubernetes.service.annotation.prometheus.openshift.io/username}"
                                password: "{kubernetes.service.annotation.prometheus.openshift.io/password}"
                        - include:
                            process.name: kubelet
                            conf:
                                port: 10250
                                use_https: true
                                auth_token_path: "/run/secrets/kubernetes.io/serviceaccount/token"
                        - include:
                            process.name: etcd
                            conf:
                                port: 2379
                                use_https: true
                                auth_cert_path: "/run/secrets/etcd/client-cert"
                                auth_key_path: "/run/secrets/etcd/client-key"
                    

                    Kubernetes Objects

                    As described above, there are multiple configuration options that can be set based on auto-discovered values for Kubernetes Labels and/or Annotations. The format in each case begins with "kubernetes.OBJECT.annotation." or "kubernetes.OBJECT.label." where OBJECT can be any of the following supported Kubernetes object types:

                    • daemonSet

                    • deployment

                    • namespace

                    • node

                    • pod

                    • replicaSet

                    • replicationController

                    • service

                    • statefulset

                    The configuration text you add after the final dot becomes the name of the Kubernetes Label/Annotation that the Agent will look for. If the Label/Annotation is discovered attached to the process, the value of that Label/Annotation will be used for the configuration option.

                    Note that there are multiple ways for a Kubernetes Label/Annotation to be attached to a particular process. One of the simplest examples of this is the Pod-based approach shown in Quick Start For Kubernetes Environments. However, as an example alternative to marking at the Pod level, you could attach Labels/Annotations at the Namespace level, in which case auto-discovered configuration options would apply to all processes running in that Namespace regardless of whether they’re in a Deployment, DaemonSet, ReplicaSet, etc.
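As a hedged sketch of that Namespace-level alternative, you might annotate the Namespace (the name and port here are hypothetical):

  apiVersion: v1
  kind: Namespace
  metadata:
    name: backend
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "8080"

and then match it in dragent.yaml with a kubernetes.namespace.annotation pattern:

  prometheus:
    enabled: true
    process_filter:
      - include:
          kubernetes.namespace.annotation.prometheus.io/scrape: true
          conf:
            port: "{kubernetes.namespace.annotation.prometheus.io/port}"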

4 - Enable Prometheus Native Service Discovery

Prometheus service discovery is a standard method of finding endpoints to scrape for metrics. You configure prometheus.yaml to set up the scraping mechanism. As of agent v10.5.0, Sysdig supports native Prometheus service discovery, and you can configure prometheus.yaml the same way you do for native Prometheus.

When enabled in dragent.yaml, the new version of promscrape uses the configured prometheus.yaml to find the endpoints, instead of those that the agent has found through the process_filter rules. The new version of promscrape is named promscrape.v2.

                    Promscrape V2

• promscrape.v2 supports the Prometheus native relabel_configs in addition to metric_relabel_configs. Relabel configuration enables the following:

  • Editing the label format of the target before scraping the labels

  • Dropping unnecessary metrics or unwanted labels from metrics

• In addition to the regular sample format (metric name, labels, and metric reading), Promscrape V2 adds the metric type (counter, gauge, histogram, summary) to every sample sent to the agent.

                    • Promscrape V2 supports all types of scraping configuration, such as federation, blackbox-exporter, and so on.

• Metrics can be mapped to their source (pod or process) by using source labels that map certain Prometheus label names to known agent tags.

                    Limitations of Promscrape V2

                    • Promscrape V2 does not support calculated metrics.

                    • Promscrape V2 does not support cluster-wide features such as recording rules and alert management.

                    • Service discovery configurations in Promscrape and Promscrape V2 are incompatible and non-translatable.

• Promscrape V2, when enabled, runs on every node that is running an agent and collects the metrics from the local or remote targets specified in the prometheus.yaml file.

  Promscrape V2 is enabled by default on agent versions 11.2 and above.

  The prometheus.yaml file is shared across all promscrape instances. Avoid configuring promscrape to scrape remote targets: every instance would scrape the same targets, duplicating the metrics.

• Promscrape V2 does not have a cluster view, so it ignores the configuration of recording rules and alerts, which is used in cluster-wide metrics collection. Prometheus configurations that depend on these features are therefore not supported.

• Sysdig uses __HOSTNAME__, which is not a standard Prometheus keyword.

                    Enable Promscrape V2

In agent versions 11.2 and above, the prom_service_discovery parameter is enabled by default, which in turn enables Promscrape V2 by default.

                    To enable Prometheus native service discovery on agent versions prior to 11.2:

1. Open the dragent.yaml file.

                    2. Set the following Prometheus Service Discovery parameter to true:

                      prometheus:
                        prom_service_discovery: true
                      

If true, promscrape.v2 is used. Otherwise, promscrape.v1 is used to scrape the targets.

                    3. Restart the agent.

                    Default Prometheus Configuration File

                    Here is the default prometheus.yaml file.

                    global:
                      scrape_interval: 10s
                    scrape_configs:
                    - job_name: 'k8s-pods'
                      tls_config:
                        insecure_skip_verify: true
                      kubernetes_sd_configs:
                      - role: pod
                      relabel_configs:
                        # Trying to ensure we only scrape local targets
                        # __HOSTIPS__ is replaced by promscrape with a regex list of the IP addresses
                        # of all the active network interfaces on the host
                      - action: keep
                        source_labels: [__meta_kubernetes_pod_host_ip]
                        regex: __HOSTIPS__
                      - action: keep
                        source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
                        regex: true
                      - action: replace
                        source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
                        target_label: __scheme__
                        regex: (https?)
                      - action: replace
                        source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
                        target_label: __metrics_path__
                        regex: (.+)
                      - action: replace
                        source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
                        regex: ([^:]+)(?::\d+)?;(\d+)
                        replacement: $1:$2
                        target_label: __address__
                        # Holding on to pod-id and container name so we can associate the metrics
                        # with the container (and cluster hierarchy)
                      - action: replace
                        source_labels: [__meta_kubernetes_pod_uid]
                        target_label: sysdig_k8s_pod_uid
                      - action: replace
                        source_labels: [__meta_kubernetes_pod_container_name]
                        target_label: sysdig_k8s_pod_container_name
                    

                    The Prometheus configuration file comes with a default configuration for scraping the pods running on the local node. This configuration also includes the rules to preserve pod UID and container name labels for further correlation with Kubernetes State Metrics or Sysdig native metrics.

                    Scrape Interval

                    The default scrape interval is 10 seconds. However, the value can be overridden per scraping job. The scrape interval configured in the prometheus.yaml is independent of the agent configuration.

                    Promscrape V2 reads prometheus.yaml and initiates scraping jobs.

The metrics from targets are collected per scrape interval for each target and immediately forwarded to the agent. The agent sends metrics to the Sysdig collector every 10 seconds; only the metrics received since the last transmission are sent. If a scraping job has a scrape interval longer than 10 seconds, some agent transmissions will not include metrics from that job.
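For instance, a hedged prometheus.yaml sketch overriding the global interval for one job; the job name and target are hypothetical:

  global:
    scrape_interval: 10s          # default for all jobs
  scrape_configs:
  - job_name: 'slow-exporter'
    scrape_interval: 30s          # overrides the global value for this job only
    static_configs:
    - targets: ['localhost:9100']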

                    Hostname Selection

__HOSTIPS__ is replaced by the host IP addresses. Selection by host IP address is preferred because it is more reliable.

                    __HOSTNAME__ is replaced with the actual hostname before promscrape starts scraping the targets. This allows promscrape to ignore targets running on other hosts.
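As a hedged illustration (not the shipped default, which uses __HOSTIPS__ as shown above), a keep rule based on __HOSTNAME__ might match the standard __meta_kubernetes_pod_node_name label:

  relabel_configs:
  - action: keep
    source_labels: [__meta_kubernetes_pod_node_name]
    regex: __HOSTNAME__   # replaced with the actual hostname before scraping starts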

                    Relabeling Configuration

                    The default Prometheus configuration file contains the following two relabeling configurations:

                    - action: replace
                      source_labels: [__meta_kubernetes_pod_uid]
                      target_label: sysdig_k8s_pod_uid
                    - action: replace
                      source_labels: [__meta_kubernetes_pod_container_name]
                      target_label: sysdig_k8s_pod_container_name
                    

These rules add two labels, sysdig_k8s_pod_uid and sysdig_k8s_pod_container_name, to every metric gathered from the local targets, containing the pod ID and container name respectively. These labels are dropped from the metrics before they are sent to the Sysdig collector for further processing.

5 - Filtering Prometheus Metrics

As of Sysdig agent v9.8.0, a lightweight Prometheus server named promscrape is embedded in the agent, and a prometheus.yaml file is included as part of the configuration files. Building on open source Prometheus capabilities, Sysdig allows you to filter Prometheus metrics at the source before ingestion. To do so, you will:

• Ensure that Prometheus scraping is enabled in the dragent.yaml file.

                      prometheus:
                        enabled: true
                      
• On agent v9.8.0 and above, enable the feature by setting the use_promscrape parameter to true in dragent.yaml. See Enable Filtering at Ingestion. On agent v10.0 and above, promscrape is used by default for scraping metrics.

                    • Edit the configuration in the prometheus.yaml file. See Edit Prometheus Configuration File.

                      Sysdig-specific configuration is found in the prometheus.yaml file.

                    Enable Filtering at Ingestion

                    On agent v9.8.0, in order for target filtering to work, the use_promscrape parameter in the dragent.yaml must be set to true. For more information on configuration, see Configuring Sysdig Agent.

                    use_promscrape: true
                    

On agent v10.0, use_promscrape is enabled by default, which means promscrape is used for scraping Prometheus metrics.

Filtering configuration is optional. The absence of prometheus.yaml will not change the existing behavior of the agent.

                    Edit Prometheus Configuration File

                    About the Prometheus Configuration File

The prometheus.yaml file mostly contains the filtering/relabeling configuration, expressed with lists of key-value pairs that represent target process attributes.

                    You replace keys and values with the desired tags corresponding to your environment.

                    In this file, you will configure the following:

                    • Default scrape interval (optional).

                      For example: scrape_interval: 10s

                    • Of the labeling parameters that Prometheus offers, Sysdig supports only metric_relabel_configs. The relabel_config parameter is not supported.

                    • Zero or more process-specific filtering configurations (optional).

                      See Kubernetes Environments and Docker Environments

                      The filtering configuration includes:

                      • Filtering rules

                        For example: - source_labels: [container_label_io_kubernetes_pod_name]

                      • Limit on number of scraped samples (optional)

                        For example: sample_limit: 2000

                    • Default filtering configuration (optional). See Default Configuration.

                      The filtering configuration includes:

                      • Filtering rules

                        For example: - source_labels: [car]

                      • Limit on number of scraped samples (optional)

                        For example: sample_limit: 2000

The prometheus.yaml file is installed alongside dragent.yaml. For the most part, the syntax of prometheus.yaml complies with the standard Prometheus configuration.

                    Default Configuration

                    A configuration with empty key-value pairs is considered a default configuration. The default configuration will be applied to all the processes to be scraped that don’t have a matching filtering configuration. In Sample Prometheus Configuration File, the job_name: 'default' section represents the default configuration.

                    Kubernetes Environments

                    If the agent runs in Kubernetes environments (Open Source/OpenShift/GKE), include the following Kubernetes objects as key-value pairs. See Agent Install: Kubernetes for details on agent installation.

                    For example:

                    sysdig_sd_configs:
                    - tags:
                        namespace: backend
                        deployment: my-api
                    

                    In addition to the aforementioned tags, any of these object types can be matched against:

                    daemonset: my_daemon
                    deployment: my_deployment
                    hpa: my_hpa
                    namespace: my_namespace
                    node: my_node
pod: my_pod
                    replicaset: my_replica
                    replicationcontroller: my_controller
                    resourcequota: my_quota
                    service: my_service
                    stateful: my_statefulset
                    

                    For Kubernetes/OpenShift/GKE deployments, prometheus.yaml shares the same ConfigMap with dragent.yaml.

                    Docker Environments

                    In Docker environments, include attributes such as container, host, port, and more. For example:

                    sysdig_sd_configs:
                    - tags:
                        host: my-host
                        port: 8080
                    

                    For Docker-based deployments, prometheus.yaml can be mounted from the host.

                    Sample Prometheus Configuration File

                    global:
                      scrape_interval: 20s
                    scrape_configs:
                    - job_name: 'default'
                      sysdig_sd_configs: # default config
                      relabel_configs:
                    - job_name: 'my-app-job'
                      sample_limit: 2000
                      sysdig_sd_configs:  # apply this filtering config only to my-app
                      - tags:
                          namespace: backend
                          deployment: my-app
  metric_relabel_configs:
  # Drop all metrics starting with http_
  - source_labels: [__name__]
    regex: "http_(.+)"
    action: drop
  # Drop all metrics for which the city label equals atlantis
  - source_labels: [city]
    regex: "atlantis"
    action: drop
                    
                    

6 - Example Configuration

                    This topic introduces you to default and specific Prometheus configurations.

                    Default Configuration

                    As an example that pulls together many of the configuration elements shown above, consider the default Agent configuration that’s inherited from the dragent.default.yaml.

                    prometheus:
                      enabled: true
                      interval: 10
                      log_errors: true
                      max_metrics: 1000
                      max_metrics_per_process: 100
                      max_tags_per_metric: 20
                    
                      # Filtering processes to scan. Processes not matching a rule will not
                      # be scanned
                      # If an include rule doesn't contain a port or port_filter in the conf
                      # section, we will scan all the ports that a matching process is listening to.
                      process_filter:
                        - exclude:
                            process.name: docker-proxy
                        - exclude:
                            container.image: sysdig/agent
                        # special rule to exclude processes matching configured prometheus appcheck
                        - exclude:
                            appcheck.match: prometheus
                        - include:
                            container.label.io.prometheus.scrape: "true"
                            conf:
                                # Custom path definition
                                # If the Label doesn't exist we'll still use "/metrics"
                                path: "{container.label.io.prometheus.path}"
                    
                                # Port definition
                                # - If the Label exists, only scan the given port.
                                # - If it doesn't, use port_filter instead.
                                # - If there is no port_filter defined, skip this process
                                port: "{container.label.io.prometheus.port}"
                                port_filter:
                                    - exclude: [9092,9200,9300]
                                    - include: 9090-9500
                                    - include: [9913,9984,24231,42004]
                        - exclude:
                            container.label.io.prometheus.scrape: "false"
                        - include:
                            kubernetes.pod.annotation.prometheus.io/scrape: true
                            conf:
                                path: "{kubernetes.pod.annotation.prometheus.io/path}"
                                port: "{kubernetes.pod.annotation.prometheus.io/port}"
                        - exclude:
                            kubernetes.pod.annotation.prometheus.io/scrape: false
                    

                    Consider the following about this default configuration:

                    • All Prometheus scraping is disabled by default. To enable the entire configuration shown here, you would only need to add the following to your dragent.yaml:

                      prometheus:
                        enabled: true
                      

Once this option is enabled, any Pods (in Kubernetes) that have the right annotations set, or containers (otherwise) that have the right labels set, are automatically scraped.

                    • Once enabled, this default configuration is ideal for the use case described in the Quick Start For Kubernetes Environments.

                    • A Process Filter rule excludes processes that are likely to exist in most environments but are known to never export Prometheus metrics, such as the Docker Proxy and the Agent itself.

                    • Another Process Filter rule ensures that any processes configured to be scraped by the legacy Prometheus application check will not be scraped.

                    • Another Process Filter rule is tailored to use container Labels. Processes marked with the container Label io.prometheus.scrape will become eligible for scraping, and if further marked with container Labels io.prometheus.port and/or io.prometheus.path, scraping will be attempted only on this port and/or endpoint. If the container is not marked with the specified path Label, scraping the /metrics endpoint will be attempted. If the container is not marked with the specified port Label, any listening ports in the port_filter will be attempted for scraping (this port_filter in the default is set for the range of ports for common Prometheus exporters, with exclusions for ports in the range that are known to be used by other applications that are not exporters).

                    • The final Process Filter Include rule is tailored to the use case described in the Quick Start For Kubernetes Environments.

                    Scrape a Single Custom Process

                    If you need to scrape a single custom process, for instance, a java process listening on port 9000 with path /prometheus, add the following to the dragent.yaml:

                    prometheus:
                      enabled: true
                      process_filter:
                        - include:
                            process.name: java
                            port: 9000
                            conf:
                              # ensure we only scrape port 9000 as opposed to all ports this process may be listening to
                              port: 9000
                              path: "/prometheus"
                    

                    This configuration overrides the default process_filter section shown in Default Configuration. You can add relevant rules from the default configuration to this to further filter down the metrics.

                    port has different purposes depending on where it’s placed in the configuration. When placed under the include section, it is a condition for matching the include rule.

                    Placing a port under conf indicates that only that particular port is scraped when the rule is matched as opposed to all the ports that the process could be listening on.

In this example, the rule matches the Java process listening on port 9000, and only port 9000 is scraped.

                    Scrape a Single Custom Process Based on Container Labels

                    If you still want to scrape based on container labels, you could just append the relevant rules from the defaults to the process_filter. For example:

                    prometheus:
                      enabled: true
                      process_filter:
                        - include:
                            process.name: java
                            port: 9000
                            conf:
                              # ensure we only scrape port 9000 as opposed to all ports this process may be listening to
                              port: 9000
                              path: "/prometheus"
                        - exclude:
                            process.name: docker-proxy
                        - include:
                            container.label.io.prometheus.scrape: "true"
                            conf:
                                path: "{container.label.io.prometheus.path}"
                                port: "{container.label.io.prometheus.port}"
                    

                    port has a different meaning depending on where it’s placed in the configuration. When placed under the include section, it’s a condition for matching the include rule.

                    Placing port under conf indicates that only that port is scraped when the rule is matched as opposed to all the ports that the process could be listening on.

In this example, the first rule matches the Java process listening on port 9000, and only port 9000 is scraped.

                    Container Environment

With this default configuration enabled, a containerized install of our example exporter, shown below, is automatically scraped by the Agent.

                    # docker run -d -p 8080:8080 \
                        --label io.prometheus.scrape="true" \
                        --label io.prometheus.port="8080" \
                        --label io.prometheus.path="/prometheus" \
                        luca3m/prometheus-java-app
                    

                    Kubernetes Environment

                    In a Kubernetes-based environment, a Deployment with the Annotations as shown in this example YAML would be scraped by enabling the default configuration.

                    apiVersion: extensions/v1beta1
                    kind: Deployment
                    metadata:
                      name: prometheus-java-app
                    spec:
                      replicas: 1
                      template:
                        metadata:
                          labels:
                            app: prometheus-java-app
                          annotations:
                            prometheus.io/scrape: "true"
                            prometheus.io/path: "/prometheus"
                            prometheus.io/port: "8080"
                        spec:
                          containers:
                            - name: prometheus-java-app
                              image: luca3m/prometheus-java-app
                              imagePullPolicy: Always
                    

                    Non-Containerized Environment

                    This is an example of a non-containerized environment or a containerized environment that doesn’t use Labels or Annotations. The following dragent.yaml would override the default and do per-second scrapes of our sample exporter and also a second exporter on port 5005, each at their respective non-standard endpoints. This can be thought of as a conservative “whitelist” type of configuration since it restricts scraping to only exporters that are known to exist in the environment and the ports on which they’re known to export Prometheus metrics.

                    prometheus:
                      enabled: true
                      interval: 1
                      process_filter:
                        - include:
                            process.cmdline: "*app.jar*"
                            conf:
                              port: 8080
                              path: "/prometheus"
                        - include:
                            port: 5005
                            conf:
                              port: 5005
                              path: "/wacko"
                    

                    port has a different meaning depending on where it’s placed in the configuration. When placed under the include section, it’s a condition for matching the include rule. Placing port under conf indicates that only that port is scraped when the rule is matched as opposed to all the ports that the process could be listening on.

In this example, the first rule matches the process whose command line contains app.jar; only port 8080 is scraped, as opposed to all the ports that process could be listening on. The second rule matches the process listening on port 5005, and only port 5005 is scraped.

7 - Logging and Troubleshooting

                    Logging

After the Agent begins scraping Prometheus metrics, there may be a delay of up to a few minutes before the metrics become visible in Sysdig Monitor. To help you quickly confirm that your configuration is correct, starting with Agent version 0.80.0, the following line appears in the Agent log the first time after startup that the Agent finds and successfully scrapes at least one Prometheus exporter:

                    2018-05-04 21:42:10.048, 8820, Information, 05-04 21:42:10.048324 Starting export of Prometheus metrics
                    

As this is an INFO-level log message, it appears in Agents using the default logging settings. To reveal even more detail, increase the Agent log level to DEBUG, which produces a message like the following, revealing the name of the first metric detected. You can then look for this metric to become visible in Sysdig Monitor shortly after.

                    2018-05-04 21:50:46.068, 11212, Debug, 05-04 21:50:46.068141 First prometheus metrics since agent start: pid 9583: 5 metrics including: randomSummary.95percentile
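
To raise the level, a minimal sketch assuming the standard log section of dragent.yaml:

log:
  file_priority: debug    # log at DEBUG level to the agent log file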
                    

                    Troubleshooting

See the previous section for the log messages expected during successful scraping. If you have enabled Prometheus and are not seeing the Starting export message shown there, revisit your configuration.

It is also suggested to leave the log_errors configuration option at its default setting of true, which reveals any issues scraping eligible processes in the Agent log.
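
As a minimal sketch of where this option sits, based on the prometheus section used in the earlier examples:

prometheus:
  enabled: true
  log_errors: true    # default; set to false to suppress scrape error logging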

                    For example, here is an error message for a failed scrape of a TCP port that was listening but not accepting HTTP requests:

                    2017-10-13 22:00:12.076, 4984, Error, sdchecks[4987] Exception on running check prometheus.5000: Exception('Timeout when hitting http://localhost:5000/metrics',)
                    2017-10-13 22:00:12.076, 4984, Error, sdchecks, Traceback (most recent call last):
                    2017-10-13 22:00:12.076, 4984, Error, sdchecks, File "/opt/draios/lib/python/sdchecks.py", line 246, in run
                    2017-10-13 22:00:12.076, 4984, Error, sdchecks, self.check_instance.check(self.instance_conf)
                    2017-10-13 22:00:12.076, 4984, Error, sdchecks, File "/opt/draios/lib/python/checks.d/prometheus.py", line 44, in check
                    2017-10-13 22:00:12.076, 4984, Error, sdchecks, metrics = self.get_prometheus_metrics(query_url, timeout, "prometheus")
                    2017-10-13 22:00:12.076, 4984, Error, sdchecks, File "/opt/draios/lib/python/checks.d/prometheus.py", line 105, in get_prometheus_metrics
                    2017-10-13 22:00:12.077, 4984, Error, sdchecks, raise Exception("Timeout when hitting %s" % url)
                    2017-10-13 22:00:12.077, 4984, Error, sdchecks, Exception: Timeout when hitting http://localhost:5000/metrics
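
This kind of failure can be reproduced outside the Agent; for example, a manual request with a short timeout against the same port from the error above:

curl --max-time 5 http://localhost:5000/metrics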
                    

Here is an example error message for a failed scrape of a port that was responding to HTTP requests on the /metrics endpoint, but not with valid Prometheus-format data. The invalid endpoint responds as follows:

                    # curl http://localhost:5002/metrics
                    This ain't no Prometheus metrics!
                    

                    And the corresponding error message in the Agent log, indicating no further scraping will be attempted after the initial failure:

                    2017-10-13 22:03:05.081, 5216, Information, sdchecks[5219] Skip retries for Prometheus error: could not convert string to float: ain't
                    2017-10-13 22:03:05.082, 5216, Error, sdchecks[5219] Exception on running check prometheus.5002: could not convert string to float: ain't
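
By contrast, a healthy endpoint returns plain text in the Prometheus exposition format. A quick manual check against the sample Java exporter from the earlier examples (the output lines here are illustrative) would look like:

curl -s http://localhost:8080/prometheus
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap"} 1.234567E7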
                    
                    

                    8 -

                    This feature is not supported with Promscrape V2. For information on different versions of Promscrape and migrating to the latest version, see Migrating from Promscrape V1 to V2.

                    Collecting Prometheus Metrics from Remote Hosts

Sysdig Monitor can collect Prometheus metrics from remote endpoints with minimal configuration. Remote endpoints (remote hosts) are hosts where the Sysdig Agent cannot be deployed, for example, a Kubernetes master node on managed Kubernetes services such as GKE and EKS, where user workloads, and therefore Agents, cannot run. Enabling remote scraping on such hosts is as simple as identifying an Agent to perform the scraping and declaring the endpoint configurations in a remote_services section of the Agent configuration file.

The collected Prometheus metrics are reported under, and associated with, the Agent that performed the scraping, rather than with a specific process.

                    Preparing the Configuration File

Multiple Agents can share the same configuration. Therefore, use the dragent.yaml file to determine which of those Agents scrapes the remote endpoints. This is applicable to both Kubernetes and Docker environments.

                    • Create a separate configuration section for remote services in the Agent configuration file under the prometheus configuration.

                    • Include a configuration section for each remote endpoint, and add either a URL or host/port (and an optional path) parameter to each section to identify the endpoint to scrape. The optional path identifies the resource at the endpoint. An empty path parameter defaults to the "/metrics" endpoint for scraping.

                    • Optionally, add custom tags for each endpoint configuration for remote services. In the absence of tags, metric reporting might not work as expected when multiple endpoints are involved. Agents cannot distinguish similar metrics scraped from multiple endpoints unless those metrics are uniquely identified by tags.

                    To help you get started, an example configuration for Kubernetes is given below:

                    prometheus:
                      remote_services:
                            - prom_1:
                                kubernetes.node.annotation.sysdig.com/region: europe
                                kubernetes.node.annotation.sysdig.com/scraper: true
                                conf:
                                    url: "https://xx.xxx.xxx.xy:5005/metrics"
                                    tags:
                                        host: xx.xxx.xxx.xy
                                        service: prom_1
                                        scraping_node: "{kubernetes.node.name}"
                            - prom_2:
                                kubernetes.node.annotation.sysdig.com/region: india
                                kubernetes.node.annotation.sysdig.com/scraper: true
                                conf:
                                    host: xx.xxx.xxx.yx
                                    port: 5005
                                    use_https: true
                                    tags:
                                        host: xx.xxx.xxx.yx
                                        service: prom_2
                                        scraping_node: "{kubernetes.node.name}"
                            - prom_3:
                                kubernetes.pod.annotation.sysdig.com/prom_3_scraper: true
                                conf:
                                    url: "{kubernetes.pod.annotation.sysdig.com/prom_3_url}"
                                    tags:
                                        service: prom_3
                                        scraping_node: "{kubernetes.node.name}"
                            - haproxy:
                                kubernetes.node.annotation.yourhost.com/haproxy_scraper: true
                                conf:
                                    host: "mymasternode"
                                    port: 1936
                                    path: "/metrics"
                                    username: "{kubernetes.node.annotation.yourhost.com/haproxy_username}"
                                    password: "{kubernetes.node.annotation.yourhost.com/haproxy_password}"
                                    tags:
                                        service: router
                    

                    In the above example, scraping is triggered by node and pod annotations. You can add annotations to nodes and pods by using the kubectl annotate command as follows:

kubectl annotate node mynode --overwrite sysdig.com/region=india sysdig.com/scraper=true yourhost.com/haproxy_scraper=true yourhost.com/haproxy_username=admin yourhost.com/haproxy_password=admin
                    

In this example, you set annotations on a node to trigger scraping of the prom_2 and haproxy services as defined in the above configuration.
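
Pod-level annotations work the same way. For the prom_3 service above, a hypothetical command (the pod name and URL are placeholders) would be:

kubectl annotate pod mypod --overwrite sysdig.com/prom_3_scraper=true sysdig.com/prom_3_url=https://<endpoint>:5005/metrics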

                    Preparing Container Environments

An example configuration for a Docker environment is given below:

                    prometheus:
                      remote_services:
                            - prom_container:
                                container.label.com.sysdig.scrape_xyz: true
                                conf:
                                    url: "https://xyz:5005/metrics"
                                    tags:
                                        host: xyz
                                        service: xyz
                    

For remote scraping to work in a Docker-based container environment, set the com.sysdig.scrape_xyz=true label on the Agent container. For example:

docker run -d --name sysdig-agent --restart always --privileged --net host --pid host --label com.sysdig.scrape_xyz=true -e ACCESS_KEY=<KEY> -e COLLECTOR=<COLLECTOR> -e SECURE=true -e TAGS=example_tag:example_value -v /var/run/docker.sock:/host/var/run/docker.sock -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro --shm-size=512m sysdig/agent
                    

Substitute <KEY>, <COLLECTOR>, and TAGS with your account key, collector address, and tags, respectively.

                    Syntax of the Rules

The syntax of the rules for remote_services is almost identical to that of process_filter, with one exception: the remote_services section does not use include/exclude rules. process_filter uses include and exclude rules, of which only the first match against a process is applied, whereas in the remote_services section each rule has a corresponding service name and all matching rules are applied.

                    Rule Conditions

The rule conditions work the same way as those for the process_filter. The only caveat is that the rules are matched against the Agent process and container, because the remote process and its context are unknown. Matches on container labels and annotations therefore work as before, but they must also be applicable to the Agent container. For instance, node annotations apply because the Agent container runs on a node.

                    For annotations, multiple patterns can be specified in a single rule, in which case all patterns must match for the rule to be a match (AND operator). In the following example, the endpoint will not be considered unless both the annotations match:

                    kubernetes.node.annotation.sysdig.com/region_scraper: europe
                    kubernetes.node.annotation.sysdig.com/scraper: true
                    

That is, only Kubernetes nodes belonging to the Europe region are considered for scraping.
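
Because all matching rules in remote_services are applied (see Syntax of the Rules above), OR-style matching can be sketched as two separate service entries, each carrying one of the annotations (the service names and URLs here are placeholders):

prometheus:
  remote_services:
    - prom_eu:
        kubernetes.node.annotation.sysdig.com/region_scraper: europe
        conf:
          url: "<endpoint URL>"
    - prom_any:
        kubernetes.node.annotation.sysdig.com/scraper: true
        conf:
          url: "<endpoint URL>"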

                    Authenticating Sysdig Agent

The Sysdig Agent requires the necessary permissions on the remote host to scrape metrics. The authentication methods used for local scraping also work for authenticating against remote hosts, but the authorization parameters apply only in the Agent context.

• Authentication based on a certificate-key pair requires the pair to be stored in a Kubernetes secret and mounted into the Agent.

• For token-based authentication, make sure the Agent token has access rights to scrape the remote endpoint.

• Use annotations to retrieve the username and password instead of passing them in plaintext. Any annotation enclosed in curly braces is replaced by the value of that annotation; if the annotation doesn't exist, the value is an empty string. Token substitution is supported for all the authorization parameters. Because authorization works only in the Agent context, credentials cannot be automatically retrieved from the target pod. Therefore, pass them through an annotation on the Agent pod by setting the password as an annotation on the selected Kubernetes object.

                    In the following example, an HAProxy account is authenticated with the password supplied in the yourhost.com/haproxy_password annotation on the agent node.

- haproxy:
    kubernetes.node.annotation.yourhost.com/haproxy_scraper: true
    conf:
        host: "mymasternode"
        port: 1936
        path: "/metrics"
        username: "{kubernetes.node.annotation.yourhost.com/haproxy_username}"
        password: "{kubernetes.node.annotation.yourhost.com/haproxy_password}"
        tags:
            service: router
                    

                    9 -

                    Configure Sysdig with Grafana

Sysdig enables Grafana users to query metrics from Sysdig and visualize them in Grafana dashboards. To integrate Sysdig with Grafana, you configure a data source. Two types of data sources are supported:

                    • Prometheus

The Prometheus data source comes with Grafana and is natively compatible with PromQL. Sysdig provides a Prometheus-compatible API to achieve an API-only integration with Grafana.

                    • Sysdig

The Sysdig data source requires additional settings and better suits the simple "form-based" data configuration. It uses the Sysdig native API instead of the Prometheus API. See Sysdig Grafana datasource for more information.

                    Using the Prometheus API on Grafana v6.7 and Above

You use the Sysdig Prometheus API to set up the data source to use with Grafana. Before Grafana can consume Sysdig metrics, Grafana must authenticate itself to Sysdig. Because Grafana currently provides no UI support for this, you set up HTTP authentication by using the Sysdig API token.

1. If you are not already running Grafana, spin up a Grafana container as follows:

                      $ docker run --rm -p 3000:3000 --name grafana grafana/grafana
                      
2. Log in to Grafana as administrator and create a new data source by using the following information:

                      • URL: https://<Monitor URL for Your Region>/prometheus

                        See SaaS Regions and IP Ranges and identify the correct URLs associated with your Sysdig application and region.

                      • Authentication: Do not select any authentication mechanisms.

                      • Access: Server (default)

                      • Custom HTTP Headers:

• Header: Enter the word Authorization

    • Value: Enter the word Bearer, followed by a space and <Your Sysdig API Token>

                          API Token is available through Settings > User Profile > Sysdig Monitor API.
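
Once the data source is saved, you can optionally verify the token and URL outside Grafana. This is a hedged example that assumes the standard Prometheus HTTP API paths are served under the /prometheus endpoint; the region URL, token, and query are placeholders:

curl -s -H "Authorization: Bearer <Your Sysdig API Token>" "https://<Monitor URL for Your Region>/prometheus/api/v1/query?query=up"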

                    Using the Grafana API on Grafana v6.6 and Below

                    The feature requires Grafana v5.3.0 or above.

                    You use the Grafana API to set up the Sysdig datasource.

                    1. Download and run Grafana in a container.

                      docker run --rm -p 3000:3000 --name grafana grafana/grafana
                      
                    2. Create a JSON file.

                      cat grafana-stg-ds.json
                      {
                          "name": "Sysdig staging PromQL",
                          "orgId": 1,
                          "type": "prometheus",
                          "access": "proxy",
                          "url": "https://app-staging.sysdigcloud.com/prometheus",
                          "basicAuth": false,
                          "withCredentials": false,
                          "isDefault": false,
                          "editable": true,
                          "jsonData": {
                              "httpHeaderName1": "Authorization",
                              "tlsSkipVerify": true
                          },
                          "secureJsonData": {
                              "httpHeaderValue1": "Bearer your-Sysdig-API-token"
                          }
                      }
                      
3. Get your Sysdig API token and plug it into the JSON file above.

"httpHeaderValue1": "Bearer your-Sysdig-API-token"
                      
                    4. Add the datasource to Grafana.

                      curl -u admin:admin -H "Content-Type: application/json" http://localhost:3000/api/datasources -XPOST -d @grafana-stg-ds.json
                      
5. Open Grafana in your browser.

                      http://localhost:3000
                      
6. Use the default credentials, admin / admin, to sign in to Grafana.

7. Open the Data Sources tab under Configuration in Grafana and confirm that the data source you added is listed on the page.
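
Alternatively, to confirm from the command line, list the configured data sources through the Grafana HTTP API, assuming the default admin credentials from step 6:

curl -u admin:admin http://localhost:3000/api/datasources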