(Legacy) Configuring Sysdig Agent
This feature is not supported with Promscrape V2. For information on different versions of Promscrape and migrating to the latest version, see Migrating from Promscrape V1 to V2.
As is typical for the agent, the default configuration for this feature is specified in dragent.default.yaml, and you can override the defaults by setting parameters in dragent.yaml. For each parameter you do not set in dragent.yaml, the default in dragent.default.yaml remains in effect.
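For example, a minimal dragent.yaml override that turns the feature on while leaving all other defaults in place might look like the following sketch (the value shown is illustrative):

prometheus:
  enabled: true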
Main Configuration Parameters
Parameter | Default | Description |
---|---|---|
`prometheus` | See below | Turns Prometheus scraping on and off. |
`process_filter` | See below | Specifies which processes may be eligible for scraping. See [Process Filter](/en/docs/sysdig-monitor/integrations/monitoring-integrations/legacy-integrations/legacycollect-prometheus-metrics/configuring-sysdig-agent/#process-filter). |
`use_promscrape` | See below | Determines whether to use promscrape to scrape Prometheus endpoints. |
promscrape
Promscrape is a lightweight Prometheus server that is embedded in the Sysdig agent. The use_promscrape parameter controls whether to use it to scrape Prometheus endpoints.
Parameters | Default | Description |
---|---|---|
`use_promscrape` | `true` | Determines whether to use promscrape for scraping Prometheus endpoints. |
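In dragent.yaml, this is set at the top level, for example (a minimal sketch):

use_promscrape: true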
prometheus
The prometheus section defines the behavior related to Prometheus metrics collection and analysis. It allows you to turn the feature on, set a limit from the agent side on the number of metrics to be scraped, and determine whether to report histogram metrics and log failed scrape attempts.
Parameter | Default | Description |
---|---|---|
`enabled` | `false` | Turns Prometheus scraping on and off. |
`interval` | 10 | How often (in seconds) the agent will scrape a port for Prometheus metrics. |
`prom_service_discovery` | `true` | Enables native Prometheus service discovery. If disabled, scrape targets are instead identified by the process_filter rules. On agent versions prior to 11.2, the default is false. |
`max_metrics` | 1000 | The maximum number of total Prometheus metrics that will be scraped across all targets. This value of 1000 is the maximum per-agent, and is a separate limit from other Custom Metrics (for example, StatsD, JMX, and App Checks). |
`histograms` | `false` | Determines whether to report histogram metrics. By default, it does not. |
`timeout` | 1 | The amount of time (in seconds) the agent will wait while scraping a Prometheus endpoint before timing out. The default value is 1 second. As of agent v10.0, this parameter is only used when use_promscrape is set to false. |
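Taken together, a sketch of this section in dragent.yaml, using the defaults described above with scraping turned on:

prometheus:
  enabled: true
  interval: 10
  max_metrics: 1000
  histograms: false
  timeout: 1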
Process Filter
The process_filter section specifies which of the processes known to an agent may be eligible for scraping.
Note that once you specify a process_filter in your dragent.yaml, it replaces the entire Prometheus process_filter section (that is, all of the rules) shown in dragent.default.yaml.
The Process Filter is specified as a series of include and exclude rules that are evaluated top-to-bottom for each process known to an agent. If a process matches an include rule, scraping will be attempted via a /metrics endpoint on each listening TCP port for the process, unless a conf section also appears within the rule to further restrict how the process will be scraped. See conf for more information.
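Because rules are evaluated top-to-bottom, an exclude rule placed above an include rule wins for any process that matches both. A minimal sketch (the rule values are illustrative):

process_filter:
  - exclude:
      process.name: etcd
  - include:
      port: 8080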
Multiple patterns can be specified in a single rule, in which case all patterns must match for the rule to be a match (AND logic).
Within a pattern value, simple “glob” wildcarding may be used, where * matches any number of characters (including none) and ? matches any single character. Note that due to YAML syntax, when using wildcards, be sure to enclose the value in quotes ("*").
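For example, the pattern below (taken from the sample rule shown later in this section) matches any command line containing app.jar, and is quoted because it uses wildcards:

process.cmdline: "*app.jar*"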
The table below describes the supported patterns in Process Filter rules. To provide realistic examples, we’ll use a simple sample Prometheus exporter (source code here) which can be deployed as a container using the Docker command line below. To help illustrate some of the configuration options, this sample exporter presents Prometheus metrics on /prometheus instead of the more common /metrics endpoint, which will be shown in the example configurations further below.
# docker run -d -p 8080:8080 \
--label class="exporter" \
--name my-java-app \
luca3m/prometheus-java-app
# ps auxww | grep app.jar
root 11502 95.9 9.2 3745724 753632 ? Ssl 15:52 1:42 java -jar /app.jar --management.security.enabled=false
# curl http://localhost:8080/prometheus
...
random_bucket{le="0.005",} 6.0
random_bucket{le="0.01",} 17.0
random_bucket{le="0.025",} 51.0
...
Pattern name | Description | Example |
---|---|---|
`container.image` | Matches if the process is running inside a container running the specified image. | `container.image: luca3m/prometheus-java-app` |
`container.name` | Matches if the process is running inside a container with the specified name. | `container.name: my-java-app` |
`container.label.*` | Matches if the process is running in a container that has a Label matching the given value. | `container.label.class: exporter` |
`kubernetes.<object>.annotation.*` / `kubernetes.<object>.label.*` | Matches if the process is attached to a Kubernetes object (Pod, Namespace, etc.) that is marked with the Annotation/Label matching the given value. Note: This pattern does not apply to the Docker-only command line shown above, but would instead apply if the exporter were installed as a Kubernetes Deployment using this example YAML. Note: See Kubernetes Objects, below, for information on the full set of supported Annotations and Labels. | `kubernetes.pod.annotation.prometheus.io/scrape: true` |
`process.name` | Matches the name of the running process. | `process.name: java` |
`process.cmdline` | Matches a command line argument. | `process.cmdline: "*app.jar*"` |
`port` | Matches if the process is listening on one or more TCP ports. The pattern for a single rule can specify a single port as shown in this example, or a single range (e.g. 8079-8081). Note: This parameter is only used to confirm whether a process is eligible for scraping based on the ports on which it is listening. For example, if a process is listening on one port for application traffic and has a second port open for exporting Prometheus metrics, it would be possible to specify the application port here (but not the exporting port), and the exporting port in the conf section (but not the application port), and the process would be matched as eligible and the exporting port would be scraped. | `port: 8080` |
`appcheck.match` | Matches if an Application Check with the specific name or pattern is scheduled to run for the process. | `appcheck.match: "*"` |
For example, the following include rule combines several of these patterns to match the sample exporter:
- include:
    container.image: luca3m/prometheus-java-app
    container.name: my-java-app
    container.label.class: exporter
    process.name: java
    process.cmdline: "*app.jar*"
    port: 8080
conf
Each include rule in the process_filter may include a conf portion that further describes how scraping will be attempted on the eligible process. If a conf portion is not included, scraping will be attempted at a /metrics endpoint on all listening ports of the matching process. The possible settings:
Parameter name | Description | Example |
---|---|---|
`port` | Either a static number for a single TCP port to be scraped, or a container/Kubernetes Label name or Kubernetes Annotation specified in curly braces. If the process is running in a container that is marked with this Label or is attached to a Kubernetes object (Pod, Namespace, etc.) that is marked with this Annotation/Label, scraping will be attempted only on the port specified as the value of the Label/Annotation. Note: The Label/Annotation to match against is only the portion after the `container.label.` or `kubernetes.<object>.annotation.` prefix (for example, `io.prometheus.port` in the examples at right). Note: See Kubernetes Objects for information on the full set of supported Annotations and Labels. Note: If running the exporter inside a container, this should specify the port number that the exporter process in the container is listening on, not the port that the container exposes to the host. | `port: 8080` - or - `port: "{container.label.io.prometheus.port}"` - or - `port: "{kubernetes.pod.annotation.prometheus.io/port}"` |
`port_filter` | A set of include and exclude rules that define the ultimate set of listening TCP ports for an eligible process on which scraping may be attempted. Note that the syntax is different from the `port` pattern in the process_filter: here, a rule can specify a single port, a list of ports, or a range. | `port_filter:` `- include: 9090` `- exclude: [9092, 9200, 9300]` `- include: 9500-9600` (values illustrative) |
`path` | Either the static specification of an endpoint to be scraped, or a container/Kubernetes Label name or Kubernetes Annotation specified in curly braces. If the process is running in a container that is marked with this Label or is attached to a Kubernetes object (Pod, Namespace, etc.) that is marked with this Annotation/Label, scraping will be attempted via the endpoint specified as the value of the Label/Annotation. If path is not specified, or the Label/Annotation is not found, the default /metrics endpoint is used. Note: As with port, the Label/Annotation to match against is only the portion after the prefix. Note: See Kubernetes Objects for information on the full set of supported Annotations and Labels. | `path: "/prometheus"` - or - `path: "{container.label.io.prometheus.path}"` - or - `path: "{kubernetes.pod.annotation.prometheus.io/path}"` |
`host` | A hostname or IP address. The default is localhost. | `host: 192.168.1.101` |
`use_https` | When set to true, connectivity to the endpoint will only be attempted via HTTPS. The default is false. (Available in Agent version 0.79.0 and newer) | `use_https: true` |
`ssl_verify` | When set to true, verification will be performed for the server certificates of an HTTPS connection. The default is false. (Available in Agent version 0.79.0 and newer) | `ssl_verify: true` |
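Putting these settings together, the following sketch shows an include rule with a conf section tailored to the sample exporter described above, which serves metrics on port 8080 at /prometheus:

process_filter:
  - include:
      container.name: my-java-app
      conf:
        port: 8080
        path: "/prometheus"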
Authentication Integration
As of agent version 0.89, Sysdig can collect Prometheus metrics from endpoints requiring authentication. Use the parameters below to enable this function.
For username/password authentication:
username
password
For authentication using a token:
auth_token_path
For certificate authentication with a certificate key:
auth_cert_path
auth_key_path
Token substitution is also supported for all the authorization parameters. For instance, a username can be taken from a Kubernetes annotation by specifying:
username: "{kubernetes.service.annotation.prometheus.openshift.io/username}"
conf Authentication Example
Below is an example of the dragent.yaml section showing all the Prometheus authentication configuration options, on OpenShift, Kubernetes, and etcd.
In this example:
The username/password are taken from a default annotation used by OpenShift.
The auth token path is commonly available in Kubernetes deployments.
The certificate and key used here for etcd may normally not be as easily accessible to the agent. In this case they were extracted from the host namespace, constructed into Kubernetes secrets, and then mounted into the agent container.
prometheus:
  enabled: true
process_filter:
  - include:
      port: 1936
      conf:
        username: "{kubernetes.service.annotation.prometheus.openshift.io/username}"
        password: "{kubernetes.service.annotation.prometheus.openshift.io/password}"
  - include:
      process.name: kubelet
      conf:
        port: 10250
        use_https: true
        auth_token_path: "/run/secrets/kubernetes.io/serviceaccount/token"
  - include:
      process.name: etcd
      conf:
        port: 2379
        use_https: true
        auth_cert_path: "/run/secrets/etcd/client-cert"
        auth_key_path: "/run/secrets/etcd/client-key"
Kubernetes Objects
As described above, there are multiple configuration options that can be set based on auto-discovered values for Kubernetes Labels and/or Annotations. The format in each case begins with "kubernetes.OBJECT.annotation." or "kubernetes.OBJECT.label.", where OBJECT can be any of the following supported Kubernetes object types:
daemonSet
deployment
namespace
node
pod
replicaSet
replicationController
service
statefulset
The configuration text you add after the final dot becomes the name of the Kubernetes Label/Annotation that the Agent will look for. If the Label/Annotation is discovered attached to the process, the value of that Label/Annotation will be used for the configuration option.
Note that there are multiple ways for a Kubernetes Label/Annotation to be attached to a particular process. One of the simplest examples of this is the Pod-based approach shown in Quick Start For Kubernetes Environments. However, as an example alternative to marking at the Pod level, you could attach Labels/Annotations at the Namespace level, in which case auto-discovered configuration options would apply to all processes running in that Namespace regardless of whether they’re in a Deployment, DaemonSet, ReplicaSet, etc.
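As a hypothetical sketch of that Namespace-level approach, if a Namespace were annotated with prometheus.io/path (an illustrative annotation name), a conf section could pick up the value for every eligible process in that Namespace:

process_filter:
  - include:
      port: 8080
      conf:
        path: "{kubernetes.namespace.annotation.prometheus.io/path}"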