Configuring Sysdig Agent

As is typical for the Agent, the default configuration for the feature is specified in dragent.default.yaml, and you can override the defaults by configuring parameters in dragent.yaml. For each parameter you do not set in dragent.yaml, the defaults in dragent.default.yaml remain in effect.

Main Config Parameters

enabled (default: false)
    Turns Prometheus scraping on/off.

interval (default: 10)
    How often (in seconds) the Agent will scrape a port for Prometheus metrics.

log_errors (default: true)
    Whether the Agent should log details on failed attempts to scrape eligible targets.

max_metrics (default: 1000)
    The maximum number of total Prometheus metrics that will be scraped across all targets. This is a per-Agent maximum, and is a separate limit from other Custom Metrics (e.g. StatsD, JMX, and other Application Checks).

max_metrics_per_process (default: 1000)
    The maximum number of Prometheus metrics that the Agent will save from a single scraped target. (Default changed from 100 to 1000 in Agent version 0.90.3.)

max_tags_per_metric (default: 20)
    The maximum number of tags per Prometheus metric that the Agent will save from a scraped target.

histograms (default: false)
    Whether the Agent should scrape and report histogram metrics. See the section below on histograms for details.

process_filter (default: see below)
    Specifies which processes may be eligible for scraping. See the Process Filter section below.

timeout (default: 1)
    How long (in seconds) the Agent will wait while scraping a Prometheus endpoint before timing out.
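
For example, a minimal dragent.yaml sketch that turns scraping on and overrides a few of these defaults (the values shown are illustrative, not recommendations):

prometheus:
  enabled: true
  # Scrape every 30 seconds instead of the default 10
  interval: 30
  # Raise the per-Agent and per-process metric limits
  max_metrics: 2000
  max_metrics_per_process: 400
  # Allow slower endpoints more time before timing out
  timeout: 5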

Process Filter

The process_filter section specifies which of the processes known by an Agent may be eligible for scraping.

Note that once you specify a process_filter in your dragent.yaml, it replaces the entire Prometheus process_filter section (i.e. all rules) shown in dragent.default.yaml.

The Process Filter is specified in a series of include and exclude rules that are evaluated top-to-bottom for each process known by an Agent. If a process matches an include rule, scraping will be attempted via a /metrics endpoint on each listening TCP port for the process, unless a conf section also appears within the rule to further restrict how the process will be scraped (see the "conf" section below).

Multiple patterns can be specified in a single rule, in which case all patterns must match for the rule to be a match (AND logic).

Within a pattern value, simple "glob" wildcarding may be used, where * matches any number of characters (including none) and ? matches any single character. Note that due to YAML syntax, when using wildcards, be sure to enclose the value in quotes ("*").
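
For example, a minimal sketch combining these behaviors (rules evaluate top to bottom, and both patterns within the include rule must match; note the quoted wildcards):

process_filter:
  # First, exclude any process already covered by an Application Check
  - exclude:
      appcheck.match: "*"
  # Then include java processes whose command line contains app.jar (AND logic)
  - include:
      process.name: java
      process.cmdline: "*app.jar*"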

The table below describes the supported patterns in Process Filter rules. To provide realistic examples, we'll use a simple sample Prometheus exporter (the luca3m/prometheus-java-app image), which can be deployed as a container using the Docker command line below. To help illustrate some of the configuration options, this sample exporter presents Prometheus metrics on /prometheus instead of the more common /metrics endpoint, as shown in the example configurations further below.

# docker run -d -p 8080:8080 \
    --label class="exporter" \
    --name my-java-app \
    luca3m/prometheus-java-app
 
# ps auxww | grep app.jar
root     11502 95.9  9.2 3745724 753632 ?      Ssl  15:52   1:42 java -jar /app.jar --management.security.enabled=false
 
# curl http://localhost:8080/prometheus
...
random_bucket{le="0.005",} 6.0
random_bucket{le="0.01",} 17.0
random_bucket{le="0.025",} 51.0
...

container.image
    Matches if the process is running inside a container running the specified image.
    Example:
        - include:
            container.image: luca3m/prometheus-java-app

container.name
    Matches if the process is running inside a container with the specified name.
    Example:
        - include:
            container.name: my-java-app

container.label.*
    Matches if the process is running in a container that has a Label matching the given value.
    Example:
        - include:
            container.label.class: exporter

kubernetes.<object>.annotation.* / kubernetes.<object>.label.*
    Matches if the process is attached to a Kubernetes object (Pod, Namespace, etc.) that is marked with an Annotation/Label matching the given value.
    Note: This pattern does not apply to the Docker-only command line shown above, but would instead apply if the exporter were installed as a Kubernetes Deployment using this example YAML.
    Note: See the Kubernetes Objects section below for the full set of supported Annotations and Labels.
    Example:
        - include:
            kubernetes.pod.annotation.prometheus.io/scrape: true

process.name
    Matches the name of the running process.
    Example:
        - include:
            process.name: java

process.cmdline
    Matches a command line argument.
    Example:
        - include:
            process.cmdline: "*app.jar*"

port
    Matches if the process is listening on one or more TCP ports. The pattern for a single rule can specify a single port as shown in the example, or a single range (e.g. 8079-8081), but does not support comma-separated lists of ports/ranges.
    Note: This pattern only determines whether a process is eligible for scraping, based on the ports it is listening on. For example, if a process listens on one port for application traffic and on a second port for exporting Prometheus metrics, you could specify the application port here (but not the exporting port) and the exporting port in the conf section (but not the application port): the process would be matched as eligible, and the exporting port would be scraped.
    Example:
        - include:
            port: 8080

appcheck.match
    Matches if an Application Check with the specific name or pattern is scheduled to run for the process.
    Example:
        - exclude:
            appcheck.match: "*"

Each of the include examples shown above would have matched our sample process on its own. Because multiple patterns can be combined in a single rule, the following very strict configuration would also have matched:

- include:
    container.image: luca3m/prometheus-java-app
    container.name: my-java-app
    container.label.class: exporter
    process.name: java
    process.cmdline: "*app.jar*"
    port: 8080

conf

Each include rule in the process_filter may include a conf portion that further describes how scraping will be attempted on the eligible process. If a conf portion is not included, scraping will be attempted at a /metrics endpoint on all listening ports of the matching process.
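
For instance, a minimal sketch of how a conf portion nests within an include rule, reusing the sample exporter's non-default /prometheus path from above:

process_filter:
  - include:
      process.cmdline: "*app.jar*"
      conf:
        # Scrape only this port, at the exporter's custom endpoint
        port: 8080
        path: "/prometheus"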

The possible settings:

port
    Either a static number for a single TCP port to be scraped, or a container/Kubernetes Label name or Kubernetes Annotation specified in curly braces. If the process is running in a container marked with this Label, or is attached to a Kubernetes object (Pod, Namespace, etc.) marked with this Annotation/Label, scraping will be attempted only on the port specified as the value of the Label/Annotation.
    Note: The Label/Annotation the Agent matches against is only the final portion after the "container.label." or "kubernetes.<object>.annotation." prefix (e.g. io.prometheus.port in the examples below).
    Note: See the Kubernetes Objects section below for the full set of supported Annotations and Labels.
    Note: If running the exporter inside a container, this should specify the port number that the exporter process in the container is listening on, not the port that the container exposes to the host.
    Examples:
        port: 8080
        port: "{container.label.io.prometheus.port}"
        port: "{kubernetes.pod.annotation.prometheus.io/port}"

port_filter
    A set of include and exclude rules that define the ultimate set of listening TCP ports for an eligible process on which scraping may be attempted. Note that the syntax differs from the port pattern option within the higher-level include rule in the process_filter. Here a given rule can include single ports, comma-separated lists of ports (enclosed in square brackets), or contiguous port ranges (without brackets).
    Example:
        port_filter:
          - include: 8080
          - exclude: [9092,9200,9300]
          - include: 9090-9100

path
    Either the static specification of an endpoint to be scraped, or a container/Kubernetes Label name or Kubernetes Annotation specified in curly braces. If the process is running in a container marked with this Label, or is attached to a Kubernetes object (Pod, Namespace, etc.) marked with this Annotation/Label, scraping will be attempted via the endpoint specified as the value of the Label/Annotation.
    If path is not specified, or is specified but the Agent does not find the Label/Annotation attached to the process, the common Prometheus exporter default of /metrics will be used.
    Note: The Label/Annotation the Agent matches against is only the final portion after the "container.label." or "kubernetes.<object>.annotation." prefix (e.g. io.prometheus.path in the examples below).
    Note: See the Kubernetes Objects section below for the full set of supported Annotations and Labels.
    Examples:
        path: "/prometheus"
        path: "{container.label.io.prometheus.path}"
        path: "{kubernetes.pod.annotation.prometheus.io/path}"

host
    A hostname or IP address. The default is localhost.
    Examples:
        host: 192.168.1.101
        host: subdomain.example.com
        host: localhost

use_https
    When set to true, connectivity to the exporter will only be attempted through HTTPS instead of HTTP. It is false by default. (Available in Agent version 0.79.0 and newer.)
    Example:
        use_https: true

ssl_verify
    When set to true, the server certificate will be verified for HTTPS connections. It is false by default. (Available in Agent version 0.79.0 and newer; verification was enabled by default before 0.79.0.)
    Example:
        ssl_verify: true
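
Putting several of these settings together, here is a sketch (the port numbers and path are illustrative) of a rule that narrows scraping to a port range and switches to HTTPS:

process_filter:
  - include:
      process.name: java
      conf:
        # Only attempt scraping on 9090-9100, skipping 9092
        port_filter:
          - include: 9090-9100
          - exclude: [9092]
        path: "/prometheus"
        # Connect over HTTPS without verifying the server certificate
        use_https: true
        ssl_verify: false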

Authentication Integration

As of agent version 0.89, Sysdig can collect Prometheus metrics from endpoints requiring authentication. Use the parameters below to enable this function.

  • For username/password authentication:

    • username

    • password

  • For authentication using a token:

    • auth_token_path

  • For certificate authentication with a certificate key:

    • auth_cert_path

    • auth_key_path

Note

Token substitution is also supported for all the authorization parameters. For instance, a username can be taken from a Kubernetes annotation by specifying:

username: "{kubernetes.service.annotation.prometheus.openshift.io/username}"

conf Authentication Example

Below is an example dragent.yaml section showing all of the Prometheus authentication configuration options, covering OpenShift, Kubernetes, and etcd.

In this example:

  • The username/password are taken from a default annotation used by OpenShift.

  • The auth token path is commonly available in Kubernetes deployments.

  • The certificate and key used here for etcd may normally not be as easily accessible to the agent. In this case they were extracted from the host namespace, constructed into Kubernetes secrets, and then mounted into the agent container.

prometheus: 
  enabled: true
  process_filter: 
    - include: 
        port: 1936
        conf: 
            username: "{kubernetes.service.annotation.prometheus.openshift.io/username}"
            password: "{kubernetes.service.annotation.prometheus.openshift.io/password}"
    - include: 
        process.name: kubelet
        conf: 
            port: 10250
            use_https: true
            auth_token_path: "/run/secrets/kubernetes.io/serviceaccount/token"
    - include: 
        process.name: etcd
        conf: 
            port: 2379
            use_https: true
            auth_cert_path: "/run/secrets/etcd/client-cert"
            auth_key_path: "/run/secrets/etcd/client-key"

Kubernetes Objects

As described above, there are multiple configuration options that can be set based on auto-discovered values for Kubernetes Labels and/or Annotations. The format in each case begins with "kubernetes.OBJECT.annotation." or "kubernetes.OBJECT.label." where OBJECT can be any of the following supported Kubernetes object types:

  • daemonSet

  • deployment

  • namespace

  • node

  • pod

  • replicaSet

  • replicationController

  • service

  • statefulset

The configuration text you add after the final dot becomes the name of the Kubernetes Label/Annotation that the Agent will look for. If the Label/Annotation is discovered attached to the process, the value of that Label/Annotation will be used for the configuration option.

Note that there are multiple ways for a Kubernetes Label/Annotation to be attached to a particular process. One of the simplest examples of this is the Pod-based approach shown above in the Quick Start For Kubernetes Environments. However, as an example alternative to marking at the Pod level, you could attach Labels/Annotations at the Namespace level, in which case auto-discovered configuration options would apply to all processes running in that Namespace regardless of whether they're in a Deployment, DaemonSet, ReplicaSet, etc.
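
For instance, a sketch of Namespace-level marking (the annotation names follow the prometheus.io conventions used earlier and are illustrative):

process_filter:
  - include:
      # Match any process in a Namespace annotated prometheus.io/scrape: true
      kubernetes.namespace.annotation.prometheus.io/scrape: true
      conf:
        # Take the scrape port and path from the same Namespace's annotations
        port: "{kubernetes.namespace.annotation.prometheus.io/port}"
        path: "{kubernetes.namespace.annotation.prometheus.io/path}"

With this in place, every process in the annotated Namespace becomes eligible for scraping, regardless of the Deployment, DaemonSet, or ReplicaSet it belongs to.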