Filter Data
The dragent.yaml file elements are wide-reaching. This section describes the parameters to edit in dragent.yaml to perform a range of activities:
Use the blacklisted_ports parameter in the agent configuration file to block network traffic and metrics from unnecessary network ports.
Note: Port 53 (DNS) is always blacklisted.
Access the agent configuration file, using one of the options listed.
Add blacklisted_ports with the desired port numbers.
Example (YAML):
blacklisted_ports:
- 6443
- 6379
Restart the agent (if editing the dragent.yaml file directly), using either the service dragent restart or docker restart sysdig-agent command, as appropriate.
Sysdig Monitor supports event integrations with certain applications by default. The Sysdig agent will automatically discover these services and begin collecting event data from them.
The following applications are currently supported:
Docker
Kubernetes
Other methods of ingesting custom events into Sysdig Monitor are touched upon in Custom Events.
By default, only a limited set of events is collected for a supported application; these are listed in the agent’s default settings configuration file (/opt/draios/etc/dragent.default.yaml).
To enable collecting other supported events, add an events entry to dragent.yaml.
You can also change the log entry in dragent.yaml to filter events by severity.
Learn more in the following sections.
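For example, a minimal sketch of how the two entries fit together in dragent.yaml (the event names shown are just samples drawn from the lists below):
events:
  docker:
    container:
      - kill
      - oom
  kubernetes:
    pod:
      - Pulling
log:
  event_priority: warning   # transmit warning and higher; block notice, information, debug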
Events marked with * are enabled by default; see the dragent.default.yaml file.
The following Docker events are supported.
docker:
  container:
    - attach        # Container Attached (information)
    - commit        # Container Committed (information)
    - copy          # Container Copied (information)
    - create        # Container Created (information)
    - destroy       # Container Destroyed (warning)
    - die           # Container Died (warning)
    - exec_create   # Container Exec Created (information)
    - exec_start    # Container Exec Started (information)
    - export        # Container Exported (information)
    - kill          # Container Killed (warning)*
    - oom           # Container Out of Memory (warning)*
    - pause         # Container Paused (information)
    - rename        # Container Renamed (information)
    - resize        # Container Resized (information)
    - restart       # Container Restarted (warning)
    - start         # Container Started (information)
    - stop          # Container Stopped (information)
    - top           # Container Top (information)
    - unpause       # Container Unpaused (information)
    - update        # Container Updated (information)
  image:
    - delete        # Image Deleted (information)
    - import        # Image Imported (information)
    - pull          # Image Pulled (information)
    - push          # Image Pushed (information)
    - tag           # Image Tagged (information)
    - untag         # Image Untagged (information)
  volume:
    - create        # Volume Created (information)
    - mount         # Volume Mounted (information)
    - unmount       # Volume Unmounted (information)
    - destroy       # Volume Destroyed (information)
  network:
    - create        # Network Created (information)
    - connect       # Network Connected (information)
    - disconnect    # Network Disconnected (information)
    - destroy       # Network Destroyed (information)
The following Kubernetes events are supported.
kubernetes:
  node:
    - TerminatedAllPods        # Terminated All Pods (information)
    - RegisteredNode           # Node Registered (information)*
    - RemovingNode             # Removing Node (information)*
    - DeletingNode             # Deleting Node (information)*
    - DeletingAllPods          # Deleting All Pods (information)
    - TerminatingEvictedPod    # Terminating Evicted Pod (information)*
    - NodeReady                # Node Ready (information)*
    - NodeNotReady             # Node not Ready (information)*
    - NodeSchedulable          # Node is Schedulable (information)*
    - NodeNotSchedulable       # Node is not Schedulable (information)*
    - CIDRNotAvailable         # CIDR not Available (information)*
    - CIDRAssignmentFailed     # CIDR Assignment Failed (information)*
    - Starting                 # Starting Kubelet (information)*
    - KubeletSetupFailed       # Kubelet Setup Failed (warning)*
    - FailedMount              # Volume Mount Failed (warning)*
    - NodeSelectorMismatching  # Node Selector Mismatch (warning)*
    - InsufficientFreeCPU      # Insufficient Free CPU (warning)*
    - InsufficientFreeMemory   # Insufficient Free Mem (warning)*
    - OutOfDisk                # Out of Disk (information)*
    - HostNetworkNotSupported  # Host Ntw not Supported (warning)*
    - NilShaper                # Undefined Shaper (warning)*
    - Rebooted                 # Node Rebooted (warning)*
    - NodeHasSufficientDisk    # Node Has Sufficient Disk (information)*
    - NodeOutOfDisk            # Node Out of Disk Space (information)*
    - InvalidDiskCapacity      # Invalid Disk Capacity (warning)*
    - FreeDiskSpaceFailed      # Free Disk Space Failed (warning)*
  pod:
    - Pulling                  # Pulling Container Image (information)
    - Pulled                   # Ctr Img Pulled (information)
    - Failed                   # Ctr Img Pull/Create/Start Fail (warning)*
    - InspectFailed            # Ctr Img Inspect Failed (warning)*
    - ErrImageNeverPull        # Ctr Img NeverPull Policy Violate (warning)*
    - BackOff                  # Back Off Ctr Start, Image Pull (warning)
    - Created                  # Container Created (information)
    - Started                  # Container Started (information)
    - Killing                  # Killing Container (information)*
    - Unhealthy                # Container Unhealthy (warning)
    - FailedSync               # Pod Sync Failed (warning)
    - FailedValidation         # Failed Pod Config Validation (warning)
    - OutOfDisk                # Out of Disk (information)*
    - HostPortConflict         # Host/Port Conflict (warning)*
  replicationController:
    - SuccessfulCreate         # Pod Created (information)*
    - FailedCreate             # Pod Create Failed (warning)*
    - SuccessfulDelete         # Pod Deleted (information)*
    - FailedDelete             # Pod Delete Failed (warning)*
To customize the default events collected for a specific application (by either enabling or disabling events), add an events entry to dragent.yaml as described in the examples below.
An entry in a section in dragent.yaml overrides the entire section in the default configuration.
For example, the Pulling entry below will permit only Kubernetes pod Pulling events to be collected, and all other Kubernetes pod event settings in dragent.default.yaml will be ignored.
However, the other Kubernetes sections (node and replicationController) remain intact and will be used as specified in dragent.default.yaml.
Collect only ‘Pulling’ events from Kubernetes for pods:
events:
  kubernetes:
    pod:
      - Pulling
To disable all events in a section, set the event section to none:
events:
  kubernetes: none
  docker: none
These methods can be combined. For example, disable all Kubernetes node and Docker image events, and limit Docker container events to [attach, commit, copy] (events in other sections will be collected as specified by default):
events:
  kubernetes:
    node: none
  docker:
    image: none
    container:
      - attach
      - commit
      - copy
In addition to bulleted lists, sequences can also be specified in a bracketed single line, e.g.:
events:
  kubernetes:
    pod: [Pulling, Pulled, Failed]
So, the following two settings are equivalent, permitting only Pulling, Pulled, and Failed events for pods to be emitted:
events:
  kubernetes:
    pod: [Pulling, Pulled, Failed]

events:
  kubernetes:
    pod:
      - Pulling
      - Pulled
      - Failed
Events are limited globally at the agent level based on severity, using the log settings in dragent.yaml.
The default setting for the events severity filter is information (only warning and higher severity events are transmitted).
Valid severity levels are: none, emergency, alert, critical, error, warning, notice, information, debug.
Block all low-severity messages (notice, information, debug):
log:
  event_priority: warning
Block all event collection:
log:
  event_priority: none
For other uses of the log settings, see Optional: Change the Agent Log Level.
For more information, see Integrate Applications (Default App Checks).
It is possible to filter custom metrics in the following ways:
Include or exclude custom metrics using configurable patterns
Log which custom metrics are exceeding limits
After you identify the key custom metrics that must be received, use the 'include' and 'exclude' filtering parameters to make sure you receive them before the metrics limit is hit.
Here is an example configuration entry that would be added to the agent config file (/opt/draios/etc/dragent.yaml):
metrics_filter:
- include: test.*
- exclude: test.*
- include: haproxy.backend.*
- exclude: haproxy.*
- exclude: redis.*
Given the config entry above, the filters act on these metrics as follows:
test.* → send
haproxy.backend.request → send
haproxy.frontend.bytes → drop
redis.keys → drop
The semantics are as follows: whenever the agent reads metrics, they are matched against the configured filters in order, and the first rule that matches is applied. Because the include rule for test.* is listed first, it takes effect, and the subsequent exclude rule for the same metric pattern is ignored.
Logging is disabled by default. You can enable logging to see which metrics are accepted or dropped by adding the following configuration entry into the dragent.yaml config file:
metrics_excess_log: true
When logging of excess metrics is enabled, logging occurs at INFO level every 30 seconds and lasts for 10 seconds. The entries that can be seen in /opt/draios/logs/draios.log will be formatted like this:
+/-[type] [metric included/excluded]: metric.name (filter: +/-[metric.filter])
The first '+' or '-', followed by the type, provides an easy way to quickly scan the list of metrics and spot which are included or excluded ('+' means included, '-' means excluded).
The second entry specifies the metric type (statsd, app_check, service_check, or jmx).
The third entry spells out whether the metric was included or excluded, followed by the metric name. Finally, the last entry (in parentheses) shows the filter that was applied and its effect ('+' or '-', meaning include or exclude).
With this example filter rule set:
metrics_filter:
- include: mongo.statsd.net*
- exclude: mongo.statsd.*
We might see the following INFO-level log entries (timestamps stripped):
-[statsd] metric excluded: mongo.statsd.vsize (filter: -[mongo.statsd.*])
+[statsd] metric included: mongo.statsd.netIn (filter: +[mongo.statsd.net*])
To get the most out of Sysdig Monitor, you may want to customize the way in which container data is prioritized and reported. Use this page to understand the default behavior and sorting rules, and to implement custom behavior when and where you need it. This can help reduce agent and backend load by not monitoring unnecessary containers; if you encounter backend limits for containers, it also lets you filter so that the important containers are always reported.
By default, a Sysdig agent will collect metrics from all containers it detects in an environment. When reporting to the Monitor interface, it uses default sorting behavior to prioritize what container information to display first.
Out of the box, it chooses the containers with the highest:
CPU
Memory
File IO
Net IO
and allocates approximately 1/4 of the total limit to each stat type.
As of agent version 0.86, it is possible to set a use_container_filter parameter in the agent config file, tag/label specific containers, and set include/exclude rules to push those containers to the top of the reporting hierarchy.
This is an effective sorting tool when:
You can manually mark each container with an include or exclude tag, AND
The number of includes is small (say, less than 100)
In this case, the containers that explicitly match the include rules will take top priority.
In some enterprises, the number of containers is too high to tag with simple filtering rules, and/or the include_all group is too large to ensure that the most-desired containers are consistently reported. As of Sysdig agent version 0.91, you can append another parameter to the agent config file: smart_container_reporting.
This is an effective sorting tool when:
The number of containers is large and you can’t or won’t mark each one with include/exclude tags, AND
There are certain containers you would like to always prioritize
This helps ensure that even when there are thousands of containers in an environment, the most-desired containers are consistently reported.
Container filtering and smart container reporting affect the monitoring of all the processes/metrics within a container, including StatsD, JMX, app-checks, and built-in metrics.
Prometheus metrics are attached to processes, rather than containers, and are therefore handled differently.
The container limit is set in dragent.yaml under containers:limit:
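For example, a minimal sketch of that entry (the value shown is illustrative, not a recommendation):
containers:
  limit: 200   # illustrative value; adjust to your environment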
The sysdig_aggregated parameter is automatically activated when smart container reporting is enabled, to capture the most-desired metrics from the containers that were excluded by smart filtering and report them under a single entity. It appears like any other container in the Sysdig Monitor UI, with the name “sysdig_aggregated”.
Sysdig_aggregated can report on a wide array of metrics; see Sysdig_aggregated Container Metrics. However, because this is not a regular container, certain limitations apply:
container_id and container_image do not exist.
The aggregated container cannot be segmented by certain metrics that are excluded, such as process.
Some default dashboards associated with the aggregated container may have some empty graphs.
By default, the filtering feature is turned off. It can be enabled by adding the following line to the agent configuration:
use_container_filter: true
When enabled, the agent will follow include/exclude filtering rules based on:
container image
container name
container label
Kubernetes annotation or label
The default behavior in dragent.default.yaml excludes based on a container label (com.sysdig.report) and/or a Kubernetes pod annotation (sysdig.com/report).
The condition parameters are described in the following table:
Pattern name | Description | Example
---|---|---
container.image | Matches if the process is running inside a container running the specified image | container.image: "gcr.io*"
container.name | Matches if the process is running inside a container with the specified name | container.name: my-app
container.label.* | Matches if the process is running in a container that has a Label matching the given value | container.label.com.sysdig.report: true
kubernetes.<object>.annotation.* / kubernetes.<object>.label.* | Matches if the process is attached to a Kubernetes object (Pod, Namespace, etc.) that is marked with the Annotation/Label matching the given value. | kubernetes.pod.annotation.sysdig.com/report: true
all | Matches all. Use as last rule to determine default behavior. | all
Once enabled (when use_container_filter: true is set), the agent will follow the filtering rules from the container_filter section.
Each rule is an include or exclude rule which can contain one or more conditions.
The first matching rule in the list determines whether the container is included or excluded.
The conditions consist of a key name and a value. If the given key for a container matches the value, the rule is matched.
If a rule contains multiple conditions, they all need to match for the rule to be considered a match.
The dragent.default.yaml file contains the following default configuration for container filters:
use_container_filter: false

container_filter:
  - include:
      container.label.com.sysdig.report: true
  - exclude:
      container.label.com.sysdig.report: false
  - include:
      kubernetes.pod.annotation.sysdig.com/report: true
  - exclude:
      kubernetes.pod.annotation.sysdig.com/report: false
  - include:
      all
Note that it excludes via a container.label and by a kubernetes.pod.annotation.
The examples on this page show how to edit the dragent.yaml file directly. Convert the examples to Docker or Helm commands, if applicable for your situation.
To enable container filtering using the default configuration in dragent.default.yaml (above), follow the steps below.
To set up, decide which containers should be excluded from automatic monitoring.
Apply the container label com.sysdig.report and/or the Kubernetes pod annotation sysdig.com/report to the designated containers.
Add the following line to dragent.yaml to turn on the default functionality:
use_container_filter: true
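For example, a pod to be excluded could be annotated like this (a minimal sketch; the pod name and image are hypothetical, and per the default rules above a value of false excludes while true includes):
apiVersion: v1
kind: Pod
metadata:
  name: my-app                    # hypothetical pod name
  annotations:
    sysdig.com/report: "false"    # exclude this pod's containers from monitoring
spec:
  containers:
    - name: my-app
      image: my-app:latest        # hypothetical image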
You can also edit dragent.yaml to apply your own container filtering rules.
To set up, decide which containers should be excluded from automatic monitoring.
Note the image, name, label, or Kubernetes pod information as appropriate, and build your rule set accordingly.
For example:
use_container_filter: true

container_filter:
  - include:
      container.name: my-app
  - include:
      container.label.com.sysdig.report: true
  - exclude:
      kubernetes.namespace.name: kube-system
      container.image: "gcr.io*"
  - include:
      all
The above example shows a container_filter with three include rules and one exclude rule.
If the container name is “my-app”, it will be included.
Likewise, it will be included if the container has a label with the key “com.sysdig.report” and the value “true”.
If neither of those rules matches, and the container is part of a Kubernetes hierarchy within the “kube-system” namespace and the container image starts with “gcr.io”, it will be excluded.
The last rule includes all, so any containers not matching an earlier rule will be monitored and metrics for them will be sent to the backend.
As of Sysdig agent version 0.91, you can add another parameter to the config file: smart_container_reporting: true
This enables several new prioritization checks:
container_filter (you would enable and set include/exclude rules, as described above)
container age
high stats
legacy patterns
The sort is modified with the following rules in priority order:
User-specified containers come before others
Containers reported previously should be reported before those which have never been reported
Containers with higher usage by each of the 4 default stats should come before those with lower usage
Set up any simple container filtering rules you need, following either Option 1 or Option 2, above.
Edit the agent configuration:
smart_container_reporting: true
This turns on both smart_container_reporting and sysdig_aggregated. The changes will be visible in the Sysdig Monitor UI.
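A minimal sketch combining the parameters described above (the filter rules are samples reusing values from earlier examples):
use_container_filter: true
container_filter:
  - include:
      container.label.com.sysdig.report: true
  - exclude:
      kubernetes.namespace.name: kube-system
  - include:
      all
smart_container_reporting: true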
See also Sysdig_aggregated Container Metrics.
When the log level is set to DEBUG, the following messages may be found in the logs:
message | meaning |
---|---|
container <id>, no filter configured | container filtering is not enabled |
container <id>, include in report | container is included |
container <id>, exclude in report | container is excluded |
Not reporting thread <thread-id> in container <id> | Process thread is excluded |
See also: Optional: Change the Agent Log Level.
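For reference, a minimal sketch of raising the agent log level so these DEBUG messages appear (this assumes the log section's file_priority setting described in Change the Agent Log Level):
log:
  file_priority: debug   # assumption: key as documented in "Change the Agent Log Level"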
Sysdig_aggregated containers can report on the following metrics:
- tcounters
  - other
    - time_ns
    - time_percentage
    - count
  - io_file
    - time_ns_in
    - time_ns_out
    - time_ns_other
    - time_percentage_in
    - time_percentage_out
    - time_percentage_other
    - count_in
    - count_out
    - count_other
    - bytes_in
    - bytes_out
    - bytes_other
  - io_net
    - time_ns_in
    - time_ns_out
    - time_ns_other
    - time_percentage_in
    - time_percentage_out
    - time_percentage_other
    - count_in
    - count_out
    - count_other
    - bytes_in
    - bytes_out
    - bytes_other
  - processing
    - time_ns
    - time_percentage
    - count
- reqcounters
  - other
    - time_ns
    - time_percentage
    - count
  - io_file
    - time_ns_in
    - time_ns_out
    - time_ns_other
    - time_percentage_in
    - time_percentage_out
    - time_percentage_other
    - count_in
    - count_out
    - count_other
    - bytes_in
    - bytes_out
    - bytes_other
  - io_net
    - time_ns_in
    - time_ns_out
    - time_ns_other
    - time_percentage_in
    - time_percentage_out
    - time_percentage_other
    - count_in
    - count_out
    - count_other
    - bytes_in
    - bytes_out
    - bytes_other
  - processing
    - time_ns
    - time_percentage
    - count
- max_transaction_counters
  - time_ns_in
  - time_ns_out
  - count_in
  - count_out
- resource_counters
  - connection_queue_usage_pct
  - fd_usage_pct
  - cpu_pct
  - resident_memory_usage_kb
  - swap_memory_usage_kb
  - major_pagefaults
  - minor_pagefaults
  - fd_count
  - cpu_shares
  - memory_limit_kb
  - swap_limit_kb
  - count_processes
  - proc_start_count
  - threads_count
- syscall_errors
  - count
  - count_file
  - count_file_opened
  - count_net
- protos
  - http
    - server_totals
      - ncalls
      - time_tot
      - time_max
      - bytes_in
      - bytes_out
      - nerrors
    - client_totals
      - ncalls
      - time_tot
      - time_max
      - bytes_in
      - bytes_out
      - nerrors
  - mysql
    - server_totals
      - ncalls
      - time_tot
      - time_max
      - bytes_in
      - bytes_out
      - nerrors
    - client_totals
      - ncalls
      - time_tot
      - time_max
      - bytes_in
      - bytes_out
      - nerrors
  - postgres
    - server_totals
      - ncalls
      - time_tot
      - time_max
      - bytes_in
      - bytes_out
      - nerrors
    - client_totals
      - ncalls
      - time_tot
      - time_max
      - bytes_in
      - bytes_out
      - nerrors
  - mongodb
    - server_totals
      - ncalls
      - time_tot
      - time_max
      - bytes_in
      - bytes_out
      - nerrors
    - client_totals
      - ncalls
      - time_tot
      - time_max
      - bytes_in
      - bytes_out
      - nerrors
- names
- transaction_counters
  - time_ns_in
  - time_ns_out
  - count_in
  - count_out
In addition to filtering data by container, it is also possible to filter independently by process. Broadly speaking, this refinement helps ensure that relevant data is reported while noise is reduced. More specifically, use cases for process filtering may include:
Wanting to alert reliably whenever a given process goes down. The total number of processes can exceed the reporting limit; when that happens, some processes are not reported. In this case, an unreported process could be misinterpreted as being “down.” Specify a filter for 30-40 processes to guarantee that they will always be reported.
Wanting to limit the number of noisy but inessential processes being reported, for example: sed, awk, grep, and similar tools that may be used infrequently.
Wanting to prioritize workload-specific processes, perhaps from integrated applications such as NGINX, Supervisord or PHP-FPM.
Note that you can report on processes and containers independently; the including/excluding of one does not affect the including/excluding of the other.
This feature requires the following Sysdig component versions:
Sysdig agent version 0.91 or higher
For on-premises installations: version 3.2.0.2540 or higher
By default, processes are reported according to internal criteria such as resource usage (CPU/memory/file and net IO) and container count.
If you choose to enable process filtering, processes in the include list will be given preference over other internal criteria.
Processes are filtered based on a standard priority filter description already used in Sysdig yaml files. It consists of include and exclude statements which are matched in order, with evaluation stopping at the first matched statement. Statements are considered matched if EACH of the conditions in the statement is met.
Edit dragent.yaml per the following patterns to implement the filtering you need.
The process: condition parameters and rules are described below.
Name | Value | Description
---|---|---
app_checks_always_send: | true/false | Legacy config that causes the agent to emit any process with an app check. With process filtering, this translates to an extra "include" clause at the head of the process filter which matches a process with any app check, thereby overriding any exclusions. Still subject to limit.
flush_filter: | | Definition of the process filter to be used if flush_filter_enabled == true. Defaults to -include all.
flush_filter_enabled: | true/false | Defaults to false (default process reporting behavior). Set to true to use the rest of the process filtering options.
limit: | N (chosen number) | Defines the approximate limit of processes to emit to the backend, within 10 processes or so. Default is 250 processes.
top_n_per_container: | N (chosen number) | Defines how many of the top processes per resource category per emitted container to report after included processes. Still subject to limit. Defaults to 1.
top_n_per_host: | N (chosen number) | Defines how many of the top processes per resource category per host are reported before included processes. Still subject to limit. Defaults to 1.
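For orientation, a minimal sketch showing how these settings sit together under process: in dragent.yaml (the values are illustrative, not recommendations):
process:
  limit: 250                  # approximate number of processes reported
  top_n_per_host: 1           # top processes per resource category per host
  top_n_per_container: 1      # top processes per resource category per emitted container
  flush_filter_enabled: true  # use the flush_filter rules below
  flush_filter:
    - include:
        process.name: java
    - include:
        all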
The process: Condition Parameters and Rules

Condition Parameters | Rules
---|---
container.image: my_container_image | Validates whether the container image associated with the process is a wildcard match of the provided image name
container.name: my_container_name | Validates whether the container name associated with the process is a wildcard match of the provided name
container.label.XYZ: value | Validates whether the label XYZ of the container associated with the process is a wildcard match of the provided value
process.name: my_process_name | Validates whether the name of the process is a wildcard match of the provided value
process.cmdline: value | Checks whether the executable name of a process contains the specified value, or any argument to the process is a wildcard match of the provided value
appcheck.match: value | Checks whether the process has any app check which is a wildcard match of the given value
all | Matches all processes, but does not whitelist or blacklist them. If no filter is provided, the default is -include all. However, if a filter is provided and no match is made otherwise, then all unmatched processes will be blacklisted. In most cases, the definition of a process filter should end with - include: all.
Block all processes from a given container. No processes from some_container_name will be reported.
process:
  flush_filter_enabled: true
  flush_filter:
    - exclude:
        container.name: some_container_name
    - include:
        all
Send all processes from a given container at high priority.
process:
  flush_filter_enabled: true
  flush_filter:
    - include:
        container.name: some_container_name
    - include:
        all
Send all processes that contain “java” in the name at high priority.
process:
  flush_filter_enabled: true
  flush_filter:
    - include:
        process.name: java
    - include:
        all
Send processes containing “java” from a given container at high priority.
process:
  flush_filter_enabled: true
  flush_filter:
    - include:
        container.name: some_container_name
        process.name: java
    - include:
        all
Send all processes that contain “java” in the name and that are not in container some_container_name.
process:
  flush_filter_enabled: true
  flush_filter:
    - exclude:
        container.name: some_container_name
    - include:
        process.name: java
    - include:
        all
Send all processes containing “java” in the name. If a process does not contain “java” in the name and if the container within which the process runs is named some_container_name, then exclude it.
Note that each include/exclude rule is handled sequentially and hierarchically so that even if the container is excluded, it can still report “java” processes.
process:
  flush_filter_enabled: true
  flush_filter:
    - include:
        process.name: java
    - exclude:
        container.name: some_container_name
    - include:
        all
Send Java processes from one container and SQL processes from another at high priority.
process:
  flush_filter_enabled: true
  flush_filter:
    - include:
        container.name: java_container_name
        process.name: java
    - include:
        container.name: sql_container_name
        process.name: sql
    - include:
        all
Only send processes running in a container with a given label.
process:
  flush_filter_enabled: true
  flush_filter:
    - include:
        container.label.report_processes_from_this_container_example_label: true
    - exclude:
        all
The Sysdig agent does not automatically discover and collect metrics from external file systems, such as NFS, by default. To enable collecting these metrics, add the following entry to the dragent.yaml file:
remotefs: true
In addition to the remote file systems, the following mount types are also excluded because they cause high load.
mounts_filter:
- exclude: "*|autofs|*"
- exclude: "*|proc|*"
- exclude: "*|cgroup|*"
- exclude: "*|subfs|*"
- exclude: "*|debugfs|*"
- exclude: "*|devpts|*"
- exclude: "*|fusectl|*"
- exclude: "*|mqueue|*"
- exclude: "*|rpc_pipefs|*"
- exclude: "*|sysfs|*"
- exclude: "*|devfs|*"
- exclude: "*|devtmpfs|*"
- exclude: "*|kernfs|*"
- exclude: "*|ignore|*"
- exclude: "*|rootfs|*"
- exclude: "*|none|*"
- exclude: "*|tmpfs|*"
- exclude: "*|pstore|*"
- exclude: "*|hugetlbfs|*"
- exclude: "*|*|/etc/resolv.conf"
- exclude: "*|*|/etc/hostname"
- exclude: "*|*|/etc/hosts"
- exclude: "*|*|/var/lib/rkt/pods/*"
- exclude: "overlay|*|/opt/stage2/*"
- exclude: "/dev/mapper/cl-root*|*|/opt/stage2/*"
- exclude: "*|*|/dev/termination-log*"
- include: "*|*|/var/lib/docker"
- exclude: "*|*|/var/lib/docker/*"
- exclude: "*|*|/var/lib/kubelet/pods/*"
- exclude: "*|*|/run/secrets"
- exclude: "*|*|/run/containerd/*"
- include: "*|*|*"
To include a mount type:
Open the dragent.yaml file.
Remove the corresponding line from the exclude list in the mounts_filter.
Add the file mount to the include list under mounts_filter.
The format of a mounts_filter entry is:
mounts_filter:
  - exclude: "device|filesystem|mount_directory"
  - include: "pattern1|pattern2|pattern3"
For example:
mounts_filter:
  - include: "*|autofs|*"

mounts_filter:
  - include: "overlay|*|/opt/stage2/*"
  - include: "/dev/mapper/cl-root*|*|/opt/stage2/*"
Save the configuration changes and restart the agent.
Sometimes, security requirements dictate that capture functionality should NOT be triggered at all (for example, PCI compliance for payment information).
To disable Captures altogether:
Access the agent configuration file, using one of the options listed. This example accesses dragent.yaml directly.
Set the parameter:
sysdig_capture_enabled: false
Restart the agent, using the command:
service dragent restart
See Captures for more information on the feature.