# Collecting Prometheus Metrics from Remote Hosts

Sysdig Monitor can collect Prometheus metrics from remote endpoints with minimal configuration. Remote endpoints (remote hosts) are hosts where the Sysdig Agent cannot be deployed. For example, a Kubernetes master node on a managed Kubernetes service such as GKE or EKS does not allow user workloads, which in turn means no Agents can run there. Enabling remote scraping on such hosts is as simple as identifying an Agent to perform the scraping and declaring the endpoint configurations in a remote services section of that Agent's configuration file.

The collected Prometheus metrics are reported under, and associated with, the Agent that performed the scraping, rather than with a process.

## Preparing the Configuration File

Multiple Agents can share the same configuration. Therefore, use the dragent.yaml file to determine which of those Agents scrapes the remote endpoints. This is applicable to both Kubernetes and Docker environments.

• Create a separate remote services section in the Agent configuration file, under the prometheus configuration.

• Include a configuration section for each remote endpoint, and add either a URL or a host/port pair (with an optional path) to each section to identify the endpoint to scrape. The optional path identifies the resource at the endpoint; an empty path parameter defaults to the "/metrics" endpoint.

• Optionally, add custom tags to each endpoint configuration for remote services. In the absence of tags, metric reporting might not work as expected when multiple endpoints are involved, because Agents cannot distinguish similar metrics scraped from multiple endpoints unless those metrics are uniquely identified by tags.
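To see why per-endpoint tags matter, here is a minimal sketch (hypothetical metric names and values, not Sysdig's implementation) of how samples from two remote endpoints collapse into one time series when no tags distinguish them:

```python
# Sketch: two remote endpoints expose the same Prometheus metric name.
# A time series is identified by its metric name plus its tag set.

def series_key(metric_name, tags):
    """Identify a time series by name plus sorted tag pairs."""
    return (metric_name, tuple(sorted(tags.items())))

# Samples scraped from two different remote hosts (hypothetical values).
untagged_samples = [
    ("http_requests_total", {}, 100),  # from the first endpoint
    ("http_requests_total", {}, 250),  # from the second endpoint
]
untagged = {series_key(name, tags) for name, tags, _ in untagged_samples}
assert len(untagged) == 1  # the two endpoints are indistinguishable

# With per-endpoint tags, as in the remote_services configuration:
tagged_samples = [
    ("http_requests_total", {"service": "prom_1"}, 100),
    ("http_requests_total", {"service": "prom_2"}, 250),
]
tagged = {series_key(name, tags) for name, tags, _ in tagged_samples}
assert len(tagged) == 2  # one distinct series per endpoint
```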

To help you get started, an example configuration for Kubernetes is given below:

```yaml
prometheus:
  remote_services:
    - prom_1:
        kubernetes.node.annotation.sysdig.com/region: europe
        kubernetes.node.annotation.sysdig.com/scraper: true
        conf:
          url: "https://xx.xxx.xxx.xy:5005/metrics"
        tags:
          host: xx.xxx.xxx.xy
          service: prom_1
          scraping_node: "{kubernetes.node.name}"
    - prom_2:
        kubernetes.node.annotation.sysdig.com/region: india
        kubernetes.node.annotation.sysdig.com/scraper: true
        conf:
          host: xx.xxx.xxx.yx
          port: 5005
          use_https: true
        tags:
          host: xx.xxx.xxx.yx
          service: prom_2
          scraping_node: "{kubernetes.node.name}"
    - prom_3:
        kubernetes.pod.annotation.sysdig.com/prom_3_scraper: true
        conf:
          url: "{kubernetes.pod.annotation.sysdig.com/prom_3_url}"
        tags:
          service: prom_3
          scraping_node: "{kubernetes.node.name}"
    - haproxy:
        kubernetes.node.annotation.yourhost.com/haproxy_scraper: true
        conf:
          host: "mymasternode"
          port: 1936
          path: "/metrics"
        tags:
          service: router
```

In the above example, scraping is triggered by node and pod annotations. You can add annotations to nodes and pods by using the kubectl annotate command as follows:

```shell
kubectl annotate node mynode --overwrite \
  sysdig.com/region=india sysdig.com/scraper=true \
  yourhost.com/haproxy_scraper=true \
  yourhost.com/haproxy_username=admin yourhost.com/haproxy_password=admin
```

In this example, you set annotations on a node to trigger scraping of the prom_2 and haproxy services as defined in the above configuration.

## Preparing Container Environments

An example configuration for a Docker environment is given below:

```yaml
prometheus:
  remote_services:
    - prom_container:
        container.label.com.sysdig.scrape_xyz: true
        conf:
          url: "https://xyz:5005/metrics"
        tags:
          host: xyz
          service: xyz
```

In order for remote scraping to work in a Docker-based container environment, set the com.sysdig.scrape_xyz=true label on the Agent container. For example:

```shell
docker run -d --name sysdig-agent --restart always --privileged --net host --pid host \
  -e ACCESS_KEY=<KEY> -e COLLECTOR=<COLLECTOR> -e SECURE=true \
  -e TAGS=example_tag:example_value \
  -l com.sysdig.scrape_xyz=true \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro \
  --shm-size=512m sysdig/agent
```

Substitute <KEY>, <COLLECTOR>, and TAGS with your account key, collector, and tags respectively.

## Syntax of the Rules

The syntax of the rules for remote_services is almost identical to that of process_filter, with one exception: the remote_services section does not use include/exclude rules. In process_filter, include and exclude rules are evaluated in order and only the first rule that matches a process is applied, whereas in remote_services each rule has a corresponding service name and all matching rules are applied.
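The difference between the two matching semantics can be sketched as follows (a simplified model with rules as name/predicate pairs, not Sysdig's implementation):

```python
# Sketch of the two matching semantics described above.

def first_match(rules, target):
    """process_filter style: only the first matching rule applies."""
    for name, predicate in rules:
        if predicate(target):
            return [name]
    return []

def all_matches(rules, target):
    """remote_services style: every matching rule's service applies."""
    return [name for name, predicate in rules if predicate(target)]

# Hypothetical rules keyed on annotation-like attributes.
rules = [
    ("prom_1", lambda t: t.get("region") == "europe"),
    ("prom_2", lambda t: t.get("scraper") == "true"),
]
target = {"region": "europe", "scraper": "true"}

assert first_match(rules, target) == ["prom_1"]
assert all_matches(rules, target) == ["prom_1", "prom_2"]
```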

## Rule Conditions

The rule conditions work the same way as those for the process_filter. The only caveat is that the rules are matched against the Agent process and container, because the remote process/context is unknown. Therefore, matches for container labels and annotations work as before, but they must apply to the Agent container itself. For instance, node annotations will apply because the Agent container runs on a node.

For annotations, multiple patterns can be specified in a single rule, in which case all patterns must match for the rule to be a match (AND operator). In the following example, the endpoint will not be considered unless both the annotations match:

```yaml
kubernetes.node.annotation.sysdig.com/region_scraper: europe
kubernetes.node.annotation.sysdig.com/scraper: true
```

That is, Kubernetes nodes belonging to only the Europe region are considered for scraping.
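The AND semantics within a single rule can be sketched as follows (a simplified model, not Sysdig's implementation):

```python
# Sketch: all annotation patterns in one rule must match (logical AND).

def rule_matches(required_annotations, node_annotations):
    """A rule matches only if every required annotation has the
    expected value on the node."""
    return all(
        node_annotations.get(key) == value
        for key, value in required_annotations.items()
    )

rule = {
    "sysdig.com/region_scraper": "europe",
    "sysdig.com/scraper": "true",
}

# Both annotations present with matching values: the rule matches.
assert rule_matches(rule, {"sysdig.com/region_scraper": "europe",
                           "sysdig.com/scraper": "true"})
# Only one of the two annotations present: the rule does not match.
assert not rule_matches(rule, {"sysdig.com/scraper": "true"})
```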

## Authenticating Sysdig Agent

Sysdig Agent requires the necessary permissions on the remote host to scrape metrics. The authentication methods used for local scraping work for authenticating the Agent on remote hosts as well, but the authorization parameters apply only in the Agent context.

• Authentication based on a certificate-key pair requires the pair to be stored as a Kubernetes secret and mounted into the Agent container.

• For token-based authentication, make sure the Agent token has access rights to scrape the remote endpoint.

• Use annotations to retrieve the username/password instead of passing them in plaintext. Any annotation name enclosed in curly braces is replaced by the value of that annotation; if the annotation doesn't exist, the value will be an empty string. Token substitution is supported for all the authorization parameters. Because authorization works only in the Agent context, credentials cannot be automatically retrieved from the target pod. Therefore, use an annotation on the Agent pod to pass them: set the password as an annotation on the selected Kubernetes object.
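The curly-brace substitution described above can be sketched as follows (a simplified model of the behavior, not Sysdig's implementation):

```python
import re

# Sketch: replace "{annotation.name}" tokens with annotation values;
# a missing annotation yields an empty string, per the behavior above.

def substitute(value, annotations):
    """Replace each {token} in value with its annotation value."""
    return re.sub(
        r"\{([^}]+)\}",
        lambda m: annotations.get(m.group(1), ""),
        value,
    )

# Hypothetical node annotations as key/value pairs.
annotations = {
    "kubernetes.node.annotation.yourhost.com/haproxy_password": "admin",
}

password = substitute(
    "{kubernetes.node.annotation.yourhost.com/haproxy_password}",
    annotations,
)
assert password == "admin"
# A token for an annotation that does not exist becomes empty.
assert substitute("{missing.annotation}", annotations) == ""
```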

In the following example, an HAProxy account is authenticated with the password supplied in the yourhost.com/haproxy_password annotation on the agent node.

```yaml
- haproxy:
    kubernetes.node.annotation.yourhost.com/haproxy_scraper: true
    conf:
      username: "{kubernetes.node.annotation.yourhost.com/haproxy_username}"
      password: "{kubernetes.node.annotation.yourhost.com/haproxy_password}"
      host: "mymasternode"
      port: 1936
      path: "/metrics"
    tags:
      service: router
```