Event Forwarding

Sysdig supports sending different types of security data to third-party SIEM (security information and event management) platforms and logging tools, such as Splunk, Elastic Stack, IBM QRadar, ArcSight, and LogDNA. Use Event Forwarding to perform these integrations so that you can view security events and correlate Sysdig findings with the tool you already use for analysis.

Review the Types of Secure Integrations table for more context. The Event Forwarding column lists the various options and their levels of support.

You must be logged in to Sysdig Secure as Administrator to access the event forwarding options.

Supported Event Forwarding Data Sources

At this time, Sysdig Secure can forward the following types of data:

  • Policy events (runtime and legacy formats)
  • Activity audit entries (commands, network connections, file accesses, kube exec)
  • Benchmark results
  • Host scanning reports
  • Sysdig platform audit events

If Sysdig Monitor is installed, Monitor events are also supported.

JSON Formats Used per Data Source

This section is informational; in most cases, there is no need to change the default format.

Policy Event Payloads

Two formats are supported: the new runtime policy events payload and the legacy payload. See also this Release Note.

New Runtime Policy Events Payload

{
    "id": "164ace360cc3cfbc26ec22d61b439500",
    "type": "policy",
    "timestamp": 1606322948648718268,
    "timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
    "originator": "policy",
    "category": "runtime",
    "source": "syscall",
    "name": "Notable Filesystem Changes",
    "description": "Identified notable filesystem activity that might change sensitive/important files. This differs from Suspicious Filesystem Changes in that it looks more broadly at filesystem activity, and might have more false positives as a result.",
    "severity": 0,
    "agentId": 13530,
    "containerId": "",
    "machineId": "08:00:27:54:f3:9d",
    "actions": [
        {
          "type": "POLICY_ACTION_CAPTURE",
          "successful": true,
          "token": "abffffdd-fba8-42c7-b922-85364b00eeeb",
          "afterEventNs": 5000000000,
          "beforeEventNs": 5000000000
        }
    ],
    "content": {
        "policyId": 544,
        "baselineId": "",
        "ruleName": "Write below etc",
        "ruleType": "RULE_TYPE_FALCO",
        "ruleTags": [
            "NIST_800-190",
            "NIST_800-53",
            "ISO",
            "NIST_800-53_CA-9",
            "NIST_800-53_SC-4",
            "NIST",
            "ISO_27001",
            "MITRE_T1552_unsecured_credentials",
            "MITRE_T1552.001_credentials_in_files"
        ],
        "output": "File below /etc opened for writing (user=root command=touch /etc/ard parent=bash pcmdline=bash file=/etc/ard program=touch gparent=su ggparent=sudo gggparent=bash container_id=host image=<NA>)",
        "fields": {
            "container.id": "host",
            "container.image.repository": "<NA>",
            "falco.rule": "Write below etc",
            "fd.directory": "/etc/pam.d",
            "fd.name": "/etc/ard",
            "group.gid": "8589935592",
            "group.name": "sysdig",
            "proc.aname[2]": "su",
            "proc.aname[3]": "sudo",
            "proc.aname[4]": "bash",
            "proc.cmdline": "touch /etc/ard",
            "proc.name": "touch",
            "proc.pcmdline": "bash",
            "proc.pname": "bash",
            "user.name": "root"
        },
        "falsePositive": false,
        "matchedOnDefault": false,
        "policyVersion": 2,
        "policyOrigin": "Sysdig"
    },
    "labels": {
        "host.hostName": "ardbox",
        "process.name": "touch /etc/ard"
    }
}
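
Consumers on the receiving side typically key off a handful of these fields. The sketch below is illustrative only (the function name and field selection are assumptions, not a Sysdig API): it summarizes a forwarded runtime policy event, converting the nanosecond `timestamp` into an RFC 3339 time that matches `timestampRFC3339Nano`.

```python
from datetime import datetime, timezone

def summarize_policy_event(event):
    """One-line triage summary of a forwarded runtime policy event."""
    # "timestamp" is nanoseconds since the Unix epoch
    ts = datetime.fromtimestamp(event["timestamp"] / 1e9, tz=timezone.utc)
    rule = event.get("content", {}).get("ruleName", "unknown rule")
    return f'{ts.isoformat(timespec="seconds")} [{rule}] {event.get("name", "")}'

event = {
    "timestamp": 1606322948648718268,
    "name": "Notable Filesystem Changes",
    "content": {"ruleName": "Write below etc"},
}
print(summarize_policy_event(event))
# 2020-11-25T16:49:08+00:00 [Write below etc] Notable Filesystem Changes
```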

Legacy Secure Policy Event Payload

{
    "id": "164ace360cc3cfbc26ec22d61b439500",
    "containerId": "",
    "name": "Notable Filesystem Changes",
    "description": "Identified notable filesystem activity that might change sensitive/important files. This differs from Suspicious Filesystem Changes in that it looks more broadly at filesystem activity, and might have more false positives as a result.",
    "severity": 0,
    "policyId": 544,
    "actionResults": [
        {
            "type": "POLICY_ACTION_CAPTURE",
            "successful": true,
            "token": "15c6b9cc-59f9-4573-82bb-a1dbab2c4737",
            "beforeEventNs": 5000000000,
            "afterEventNs": 5000000000
        }
    ],
    "output": "File below /etc opened for writing (user=root command=touch /etc/ard parent=bash pcmdline=bash file=/etc/ard program=touch gparent=su ggparent=sudo gggparent=bash container_id=host image=<NA>)",
    "ruleType": "RULE_TYPE_FALCO",
    "matchedOnDefault": false,
    "fields": [
        {
            "key": "container.image.repository",
            "value": "<NA>"
        },
        {
            "key": "proc.aname[3]",
            "value": "sudo"
        },
        {
            "key": "proc.aname[4]",
            "value": "bash"
        },
        {
            "key": "proc.cmdline",
            "value": "touch /etc/ard"
        },
        {
            "key": "proc.pname",
            "value": "bash"
        },
        {
            "key": "falco.rule",
            "value": "Write below etc"
        },
        {
            "key": "proc.name",
            "value": "touch"
        },
        {
            "key": "fd.name",
            "value": "/etc/ard"
        },
        {
            "key": "proc.aname[2]",
            "value": "su"
        },
        {
            "key": "proc.pcmdline",
            "value": "bash"
        },
        {
            "key": "container.id",
            "value": "host"
        },
        {
            "key": "user.name",
            "value": "root"
        }
    ],
    "eventLabels": [
        {
            "key": "container.image.repo",
            "value": "alpine"
        },
        {
            "key": "container.image.tag",
            "value": "latest"
        },
        {
            "key": "container.name",
            "value": "large-label-container-7"
        },
        {
            "key": "host.hostName",
            "value": "ardbox"
        },
        {
            "key": "process.name",
            "value": "touch /etc/ard"
        }
    ],
    "falsePositive": false,
    "baselineId": "",
    "policyVersion": 2,
    "origin": "Sysdig",
    "timestamp": 1606322948648718,
    "timestampNs": 1606322948648718268,
    "timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
    "hostMac": "08:00:27:54:f3:9d",
    "isAggregated": false
}
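
Note that the legacy payload encodes `fields` as a list of key/value objects rather than the flat object used by the new format. A small helper (hypothetical, not part of any Sysdig SDK) can normalize the legacy shape for lookup:

```python
def fields_to_dict(fields):
    """Flatten the legacy [{"key": ..., "value": ...}, ...] list into a dict."""
    return {f["key"]: f["value"] for f in fields}

legacy_fields = [
    {"key": "falco.rule", "value": "Write below etc"},
    {"key": "fd.name", "value": "/etc/ard"},
    {"key": "user.name", "value": "root"},
]
flat = fields_to_dict(legacy_fields)
print(flat["falco.rule"])
# Write below etc
```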

Activity Audit Forwarding Payloads

Each of the activity audit types has its own JSON format.

Command (cmd) Payload

{
    "id": "164806c17885b5615ba513135ea13d79",
    "agentId": 32212,
    "cmdline": "calico-node -felix-ready -bird-ready",
    "comm": "calico-node",
    "pcomm": "apt-get",
    "containerId": "a407fb17332b",
    "count": 1,
    "customerId": 1,
    "cwd": "/",
    "hostname": "qa-k8smetrics",
    "loginShellDistance": 0,
    "loginShellId": 0,
    "pid": 29278,
    "ppid": 29275,
    "rxTimestamp": 1606322949537513500,
    "timestamp": 1606322948648718268,
    "timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
    "tty": 34816,
    "type": "command",
    "uid": 0,
    "labels": {
        "aws.accountId": "059797578166",
        "aws.instanceId": "i-053b1f0509fdbc15a",
        "aws.region": "us-east-1",
        "container.image.digest": "sha256:26c68657ccce2cb0a31b330cb0be2b5e108d467f641c62e13ab40cbec258c68d",
        "container.image.id": "d2e4e1f51132",
        "container.label.io.kubernetes.pod.namespace": "default",
        "container.name": "bash",
        "host.hostName": "ip-172-20-46-221",
        "host.mac": "12:9f:a1:c9:76:87",
        "kubernetes.node.name": "ip-172-20-46-221.ec2.internal",
        "kubernetes.pod.name": "bash"
    }
}

Network (net) Payload

{
    "id": "164806f43b4d7e8c6708f40cdbb47838",
    "agentId": 32212,
    "clientIpv4": 2886795285,
    "clientPort": 60720,
    "containerId": "da3abd373c7a",
    "customerId": 1,
    "direction": "out",
    "hostname": "qa-k8smetrics",
    "l4protocol": 6,
    "pid": 2452,
    "processName": "kubectl",
    "rxTimestamp": 0,
    "serverIpv4": 174063617,
    "serverPort": 443,
    "timestamp": 1606322948648718268,
    "timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
    "type": "connection"
    "tty": 34816,
    "labels": {
        "aws.accountId": "059797578166",
        "aws.instanceId": "i-053b1f0509fdbc15a",
        "aws.region": "us-east-1",
        "container.image.digest": "sha256:26c68657ccce2cb0a31b330cb0be2b5e108d467f641c62e13ab40cbec258c68d",
        "container.image.id": "d2e4e1f51132",
        "host.hostName": "ip-172-20-46-221",
        "host.mac": "12:9f:a1:c9:76:87",
        "kubernetes.cluster.name": "k8s-onprem",
        "kubernetes.namespace.name": "default",
        "kubernetes.node.name": "ip-172-20-46-221.ec2.internal",
        "kubernetes.pod.name": "bash"
    }
}

File (file) Payload

{
    "id": "164806c161a5dd221c4ee79d6b5dd1ce",
    "agentId": 32212,
    "containerId": "a407fb17332b",
    "customerId": 1,
    "directory": "/var/lib/dpkg/updates/",
    "filename": "tmp.i",
    "hostname": "qa-k8smetrics",
    "permissions": "w",
    "pid": 414661,
    "comm": "dpkg",
    "timestamp": 1606322948648718268,
    "timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
    "type": "fileaccess",
    "tty": 34817,
    "metrics": [
        "default",
        "",
        "k8s-onprem",
        "bash",
        "",
        "ip-172-20-46-221",
        "12:9f:a1:c9:76:87"
    ],
    "labels": {
        "aws.accountId": "059797578166",
        "aws.instanceId": "i-053b1f0509fdbc15a",
        "aws.region": "us-east-1",
        "container.image.digest": "sha256:26c68657ccce2cb0a31b330cb0be2b5e108d467f641c62e13ab40cbec258c68d",
        "container.image.id": "d2e4e1f51132",
        "container.image.repo": "docker.io/library/ubuntu",
        "container.name": "bash",
        "host.hostName": "ip-172-20-46-221",
        "host.mac": "12:9f:a1:c9:76:87",
        "kubernetes.cluster.name": "k8s-onprem",
        "kubernetes.namespace.name": "default",
        "kubernetes.node.name": "ip-172-20-46-221.ec2.internal",
        "kubernetes.pod.name": "bash"
    }
}

Kubernetes (kube exec) Payload

{
    "id": "164806f4c47ad9101117d87f8b574ecf",
    "agentId": 32212,
    "args": {
        "command": "bash",
        "container": "nginx"
    },
    "auditId": "c474d1de-c764-445a-8142-a0142505868e",
    "containerId": "397be1762fba",
    "hostname": "qa-k8smetrics",
    "name": "nginx-76f9cf7469-k5kf7",
    "namespace": "nginx",
    "resource": "pods",
    "sourceAddresses": [
        "172.17.0.21"
    ],
    "stages": {
        "started": 1605540915526159000,
        "completed": 1605540915660084000
    },
    "subResource": "exec",
    "timestamp": 1606322948648718268,
    "timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
    "type": "kubernetes",
    "user": {
        "username": "system:serviceaccount:default:default-kubectl-trigger",
        "groups": [
            "system:serviceaccounts",
            "system:serviceaccounts:default",
            "system:authenticated"
        ]
    },
    "userAgent": "kubectl/v1.16.2 (linux/amd64) kubernetes/c97fe50",
    "labels": {
        "agent.tag.cluster": "k8s-onprem",
        "agent.tag.sysdig_secure.enabled": "true",
        "container.image.repo": "docker.io/library/nginx",
        "container.image.tag": "1.21.6",
        "container.label.io.kubernetes.container.name": "nginx",
        "container.label.io.kubernetes.pod.name": "nginx-76f9cf7469-k5kf7",
        "container.label.io.kubernetes.pod.namespace": "nginx",
        "container.name": "nginx",
        "host.hostName": "qa-k8smetrics",
        "host.mac": "12:09:c7:7d:8b:25",
        "kubernetes.cluster.name": "demo-env-prom",
        "kubernetes.deployment.name": "nginx-deployment",
        "kubernetes.namespace.name": "nginx",
        "kubernetes.pod.name": "nginx-76f9cf7469-k5kf7",
        "kubernetes.replicaSet.name": "nginx-deployment-5677bff5b7"
    }
}
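
Because every activity audit entry carries a `type` discriminator (`command`, `connection`, `fileaccess`, or `kubernetes`), a consumer can dispatch on it. The sketch below is illustrative; the summary strings and function name are assumptions, not part of the payload contract:

```python
def describe_audit_entry(entry):
    """Dispatch an activity-audit entry on its "type" discriminator."""
    kind = entry.get("type")
    if kind == "command":
        return f"command: {entry['cmdline']} (pid {entry['pid']})"
    if kind == "connection":
        return f"net {entry['direction']}: {entry['processName']} -> port {entry['serverPort']}"
    if kind == "fileaccess":
        return f"file: {entry['directory']}{entry['filename']} ({entry['permissions']})"
    if kind == "kubernetes":
        return f"kube {entry['subResource']}: {entry['namespace']}/{entry['name']}"
    return f"unrecognized type: {kind}"

print(describe_audit_entry(
    {"type": "command", "cmdline": "calico-node -felix-ready", "pid": 29278}))
# command: calico-node -felix-ready (pid 29278)
```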

Benchmark Result Payloads

To forward benchmark events, you must have Benchmarks v2 installed and configured, using the Node Analyzer.

On every benchmark run, a Benchmark Control payload is emitted for each control on each host, and a Benchmark Run payload containing a summary of the results is emitted for each host.

Benchmark Control Payload

{
    "id": "16ee684c65c356616381cbcbfed06eb6",
    "type": "benchmark",
    "timestamp": 1606322948648718268,
    "timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
    "originator": "benchmarks",
    "category": "runtime",
    "source": "host",
    "name": "Kubernetes Benchmark Control Reported",
    "description": "Kubernetes benchmark kube_bench_cis-1.6.0 control 4.1.8 completed.",
    "severity": 7,
    "agentId": 0,
    "containerId": "",
    "machineId": "0a:e2:ce:65:f5:b7",
    "content": {
        "taskId": "9",
        "runId": "535de4fb-3fac-4716-b5c6-9c906226ed01",
        "source": "host",
        "schema": "kube_bench_cis-1.6.0",
        "subType": "control",
        "control": {
            "id": "4.1.8",
            "title": "Ensure that the client certificate authorities file ownership is set to root:root (Manual)",
            "description": "The certificate authorities file controls the authorities used to validate API requests. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`.",
            "rationale": "The certificate authorities file controls the authorities used to validate API requests. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`.",
            "remediation": "Run the following command to modify the ownership of the --client-ca-file.\nchown root:root <filename>\n",
            "auditCommand": "CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')\nif test -z $CAFILE; then CAFILE=/etc/kubernetes/pki/ca.crt; fi\nif test -e $CAFILE; then stat -c %U:%G $CAFILE; fi\n",
            "auditOutput": "root:root",
            "expectedOutput": "'root:root' is equal to 'root:root'",
            "familyName": "Worker Node Configuration Files",
            "level": "Level 1",
            "type": "manual",
            "result": "Pass",
            "resourceType": "Hosts",
            "resourceCount": 0
        }
    },
    "labels": {
        "aws.accountId": "845151661675",
        "aws.instanceId": "i-0cafe61565a04c866",
        "aws.region": "eu-west-1",
        "host.hostName": "ip-172-20-57-8",
        "host.mac": "0a:e2:ce:65:f5:b7",
        "kubernetes.cluster.name": "demo-env-prom",
        "kubernetes.node.name": "ip-172-20-57-8.eu-west-1.compute.internal"
    }
}

Benchmark Run Payload

{
    "id": "16ee684c65c356617457f59f07b11210",
    "type": "benchmark",
    "timestamp": 1606322948648718268,
    "timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
    "originator": "benchmarks",
    "category": "runtime",
    "source": "host",
    "name": "Kubernetes Benchmark Run Passed (with warnings)",
    "description": "Kubernetes benchmark kube_bench_cis-1.6.0 completed.",
    "severity": 4,
    "agentId": 0,
    "containerId": "",
    "machineId": "0a:28:16:38:93:39",
    "content": {
        "taskId": "9",
        "runId": "535de4fb-3fac-4716-b5c6-9c906226ed01",
        "source": "host",
        "schema": "kube_bench_cis-1.6.0",
        "subType": "run",
        "run": {
            "passCount": 20,
            "failCount": 0,
            "warnCount": 27
        }
    },
    "labels": {
        "aws.accountId": "845151661675",
        "aws.instanceId": "i-00280f61718cc25ba",
        "aws.region": "eu-west-1",
        "host.hostName": "ip-172-20-40-177",
        "host.mac": "0a:28:16:38:93:39",
        "kubernetes.cluster.name": "demo-env-prom",
        "kubernetes.node.name": "ip-172-20-40-177.eu-west-1.compute.internal"
    }
}
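
Judging from the sample above (named "Passed (with warnings)" with `failCount` 0 and `warnCount` 27), the run summary can be classified from its counters. This mapping is inferred from the example events, not taken from a documented specification:

```python
def run_status(run):
    """Classify a benchmark run summary from its counters.

    Mapping inferred from the sample payloads: any failure fails the run,
    warnings alone yield "Passed (with warnings)".
    """
    if run["failCount"] > 0:
        return "Failed"
    if run["warnCount"] > 0:
        return "Passed (with warnings)"
    return "Passed"

print(run_status({"passCount": 20, "failCount": 0, "warnCount": 27}))
# Passed (with warnings)
```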

Host Scanning Payload

Incremental Report

This is the “vuln diff” report; it contains the list of added, removed, or updated vulnerabilities that the host presents compared to the previous scan.

[
  {
    "id": "167fddc1197bcc776d72f0f299e83530",
    "type": "hostscanning",
    "timestamp": 1621258212302,
    "originator": "hostscanning",
    "category": "hostscanning_incremental_report",
    "source": "hostscanning",
    "name": "Vulnerability updates - Host dev-vm",
    "description": "",
    "severity": 4,
    "agentId": 0,
    "containerId": "",
    "machineId": "00:0c:29:e5:9e:51",
    "content": {
      "hostname": "dev-vm",
      "mac": "00:0c:29:e5:9e:51",
      "reportType": "incremental",
      "added": [
        {
          "cve": "CVE-2020-27170",
          "fixAvailable": "5.4.0-70.78",
          "packageName": "linux-headers-5.4.0-67",
          "packageType": "dpkg",
          "packageVersion": "5.4.0-67.75",
          "severity": "High",
          "url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2020-27170",
          "vulnerablePackage": "linux-headers-5.4.0-67:5.4.0-67.75"
        },
        {
          "cve": "CVE-2019-9515",
          "fixAvailable": "None",
          "packageName": "libgrpc6",
          "packageType": "dpkg",
          "packageVersion": "1.16.1-1ubuntu5",
          "severity": "Medium",
          "url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2019-9515",
          "vulnerablePackage": "libgrpc6:1.16.1-1ubuntu5"
        }
      ],
      "updated": [
        {
          "cve": "CVE-2018-17977",
          "fixAvailable": "None",
          "packageName": "linux-modules-5.4.0-72-generic",
          "packageType": "dpkg",
          "packageVersion": "5.4.0-72.80",
          "severity": "Medium",
          "url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2018-17977",
          "vulnerablePackage": "linux-modules-5.4.0-72-generic:5.4.0-72.80"
        },
        {
          "cve": "CVE-2021-3348",
          "fixAvailable": "5.4.0-71.79",
          "packageName": "linux-modules-extra-5.4.0-67-generic",
          "packageType": "dpkg",
          "packageVersion": "5.4.0-67.75",
          "severity": "Medium",
          "url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2021-3348",
          "vulnerablePackage": "linux-modules-extra-5.4.0-67-generic:5.4.0-67.75"
        },
        {
          "cve": "CVE-2021-29265",
          "fixAvailable": "5.4.0-73.82",
          "packageName": "linux-headers-5.4.0-67-generic",
          "packageType": "dpkg",
          "packageVersion": "5.4.0-67.75",
          "severity": "Medium",
          "url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2021-29265",
          "vulnerablePackage": "linux-headers-5.4.0-67-generic:5.4.0-67.75"
        },
        {
          "cve": "CVE-2021-29921",
          "fixAvailable": "None",
          "packageName": "python3.8-dev",
          "packageType": "dpkg",
          "packageVersion": "3.8.5-1~20.04.2",
          "severity": "Medium",
          "url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2021-29921",
          "vulnerablePackage": "python3.8-dev:3.8.5-1~20.04.2"
        }
      ],
      "removed": [
        {
          "cve": "CVE-2021-26932",
          "fixAvailable": "None",
          "packageName": "linux-modules-5.4.0-67-generic",
          "packageType": "dpkg",
          "packageVersion": "5.4.0-67.75",
          "severity": "Medium",
          "url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2021-26932",
          "vulnerablePackage": "linux-modules-5.4.0-67-generic:5.4.0-67.75"
        },
        {
          "cve": "CVE-2020-26541",
          "fixAvailable": "None",
          "packageName": "linux-modules-extra-5.4.0-67-generic",
          "packageType": "dpkg",
          "packageVersion": "5.4.0-67.75",
          "severity": "Medium",
          "url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2020-26541",
          "vulnerablePackage": "linux-modules-extra-5.4.0-67-generic:5.4.0-67.75"
        },
        {
          "cve": "CVE-2014-4607",
          "fixAvailable": "2.04-1ubuntu26.8",
          "packageName": "grub-pc",
          "packageType": "dpkg",
          "packageVersion": "2.04-1ubuntu26.7",
          "severity": "Medium",
          "url": "http://people.ubuntu.com/~ubuntu-security/cve/CVE-2014-4607",
          "vulnerablePackage": "grub-pc:2.04-1ubuntu26.7"
        }
      ]
    },
    "labels": {
      "host.hostName": "dev-vm",
      "cloudProvider.account.id": "",
      "cloudProvider.host.name": "",
      "cloudProvider.region": "",
      "host.hostName": "ip-172-20-40-177",
      "host.id": "d82e5bde1d992bedd10a640bdb2f052493ff4b3e03f5e96d1077bf208f32ea96",
      "host.mac": "00:0c:29:e5:9e:51",
      "host.os.name": "ubuntu",
      "host.os.version": "20.04"
      "kubernetes.cluster.name": "",
      "kubernetes.node.name": ""
    }
  }
]
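
A receiver of the incremental report might start by tallying the diff by change type and severity. The helper below is a hypothetical sketch over the `content` object shown above:

```python
from collections import Counter

def diff_summary(content):
    """Tally an incremental host-scanning report by change type and severity."""
    return {
        change: Counter(v["severity"] for v in content.get(change, []))
        for change in ("added", "updated", "removed")
    }

report_content = {
    "added": [{"severity": "High"}, {"severity": "Medium"}],
    "updated": [{"severity": "Medium"}],
    "removed": [],
}
print(diff_summary(report_content))
```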

Full Report

The full report contains all the vulnerabilities found during the first host scan.

[
  {
    "id": "1680c8462f368eaf38d2f269d9de1637",
    "type": "hostscanning",
    "timestamp": 1621516069618,
    "originator": "hostscanning",
    "category": "hostscanning_full_report",
    "source": "hostscanning",
    "name": "Host ip-172-31-94-81 scanned",
    "description": "",
    "severity": 4,
    "agentId": 0,
    "containerId": "",
    "machineId": "16:1f:b4:f5:02:03",
    "content": {
      "hostname": "ip-172-31-94-81",
      "mac": "16:1f:b4:f5:02:03",
      "reportType": "full",
      "added": [
        {
          "cve": "CVE-2015-0207",
          "fixAvailable": "None",
          "packageName": "libssl1.1",
          "packageType": "dpkg",
          "packageVersion": "1.1.0l-1~deb9u3",
          "severity": "Negligible",
          "url": "https://security-tracker.debian.org/tracker/CVE-2015-0207",
          "vulnerablePackage": "libssl1.1:1.1.0l-1~deb9u3"
        },
        {
          "cve": "CVE-2016-2088",
          "fixAvailable": "None",
          "packageName": "libdns162",
          "packageType": "dpkg",
          "packageVersion": "1:9.10.3.dfsg.P4-12.3+deb9u8",
          "severity": "Negligible",
          "url": "https://security-tracker.debian.org/tracker/CVE-2016-2088",
          "vulnerablePackage": "libdns162:1:9.10.3.dfsg.P4-12.3+deb9u8"
        },
        {
          "cve": "CVE-2017-5123",
          "fixAvailable": "None",
          "packageName": "linux-headers-4.9.0-15-amd64",
          "packageType": "dpkg",
          "packageVersion": "4.9.258-1",
          "severity": "Negligible",
          "url": "https://security-tracker.debian.org/tracker/CVE-2017-5123",
          "vulnerablePackage": "linux-headers-4.9.0-15-amd64:4.9.258-1"
        },
        {
          "cve": "CVE-2014-2739",
          "fixAvailable": "None",
          "packageName": "linux-headers-4.9.0-15-common",
          "packageType": "dpkg",
          "packageVersion": "4.9.258-1",
          "severity": "Negligible",
          "url": "https://security-tracker.debian.org/tracker/CVE-2014-2739",
          "vulnerablePackage": "linux-headers-4.9.0-15-common:4.9.258-1"
        },
        {
          "cve": "CVE-2014-9781",
          "fixAvailable": "None",
          "packageName": "linux-kbuild-4.9",
          "packageType": "dpkg",
          "packageVersion": "4.9.258-1",
          "severity": "Negligible",
          "url": "https://security-tracker.debian.org/tracker/CVE-2014-9781",
          "vulnerablePackage": "linux-kbuild-4.9:4.9.258-1"
        },
        {
          "cve": "CVE-2015-8705",
          "fixAvailable": "None",
          "packageName": "libisc-export160",
          "packageType": "dpkg",
          "packageVersion": "1:9.10.3.dfsg.P4-12.3+deb9u8",
          "severity": "Negligible",
          "url": "https://security-tracker.debian.org/tracker/CVE-2015-8705",
          "vulnerablePackage": "libisc-export160:1:9.10.3.dfsg.P4-12.3+deb9u8"
        }
      ]
    },
    "labels": {
      "agent.tag.distribution": "Debian",
      "agent.tag.fqdn": "ec2-3-231-219-145.compute-1.amazonaws.com",
      "agent.tag.test-type": "qa-hs",
      "agent.tag.version": "9.13",
      "host.hostName": "ip-172-31-94-81",
      "host.id": "cbd8fc14e9116a33770453e0755cbd1e72e4790e16876327607c50ce9de25a4b",
      "host.mac": "16:1f:b4:f5:02:03",
      "host.os.name": "debian",
      "host.os.version": "9.13"
      "kubernetes.cluster.name": "",
      "kubernetes.node.name": ""
    }
  }
]

Sysdig Platform Audit Payload

{
    "id": "16f43920a0d70f005f136173fcec3375",
    "type": "audittrail",
    "timestamp": 1606322948648718268,
    "timestampRFC3339Nano": "2020-11-25T16:49:08.648718268Z",
    "originator": "ingestion",
    "category": "",
    "source": "auditTrail",
    "name": "",
    "description": "",
    "severity": 0,
    "agentId": 0,
    "containerId": "",
    "machineId": "",
    "content": {
        "timestampNs": 1654009775452000000,
        "customerId": 1,
        "userId": 454926,
        "teamId": 46902,
        "requestMethod": "GET",
        "requestUri": "/api/integrations/discovery/",
        "userOriginIP": "187.188.243.122",
        "queryString": "cluster=demo-env-prom&namespace=sysdig-agent",
        "responseStatusCode": 200,
        "entityType": "integration",
        "entityPayload": ""
    },
    "labels": {
        "entityType": "integration"
    }
}
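
Platform audit entries mirror API requests, so a common downstream filter is "state-changing calls only". The helper below is a minimal sketch with an assumed name, not part of any Sysdig tooling:

```python
def is_write_action(audit_event):
    """True when a platform-audit entry records a state-changing API call."""
    # Read-only HTTP methods are excluded; everything else modifies state.
    return audit_event["content"]["requestMethod"] not in ("GET", "HEAD")

print(is_write_action({"content": {"requestMethod": "GET"}}))
# False
```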

Delete an Event Forwarding Integration

To delete an existing integration:

  1. From the Settings module of the Sysdig Secure UI, navigate to the Events Forwarding tab.

  2. Click the More Options (three dots) icon.

  3. Click the Delete Integration button.

  4. Click the Yes, delete button to confirm the change.

1 - Forwarding to Splunk

Prerequisites

Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle Splunk event forwarding.

Configure Splunk Event Forwarding

To forward event data to Splunk:

  1. Log in to Sysdig Secure as admin.

  2. From the Settings module, navigate to the Events Forwarding tab.

  3. Select Splunk from the drop-down menu.

  4. Configure the required options:

    Integration Name: Define an integration name.

    URL: Define the URL of the Splunk service. This is the HTTP Event Collector that forwards the events to a Splunk deployment. Be sure to use the format scheme://host:port.

    Token: This is the token that Sysdig uses to authenticate the connection to the HTTP Event Collector. This token is created when you create the Splunk Event Collector.

    Certificate: If you have configured the Certificates Management tool, you can select one of your uploaded certificates here.

    Optional: Configure additional Splunk parameters (Index, Source, Source Type) as desired. For more information on these parameters, refer to the Splunk documentation.

    Index: The index where events are stored. Specify the index if you selected one while configuring the HTTP Event Collector.

    Source Type: Identifies the data structure of the event. If left empty, each data type is assigned its own default source type; see Appendix: Data Categories Mapped to Source Types for details. For more information, see Source Type.

    Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.

    Select whether or not you want to allow insecure connections (i.e. invalid or self-signed certificate on the receiving side).

    Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.

  5. Click the Save button to save the integration.

Here is an example of how policy events forwarded from Sysdig Secure are displayed in Splunk:

Appendix: Data Categories Mapped to Source Types

Sysdig Data Type          Splunk Source Type
Monitor Events            SysdigMonitor
Policy Events (Legacy)    SysdigPolicy
Sysdig Platform Audit     SysdigSecureEvents
Benchmark Events          SysdigSecureEvents
Secure events compliance  SysdigSecureEvents
Host Vulnerabilities      SysdigSecureEvents
Runtime Policy Events     SysdigSecureEvents
Activity Audit            SysdigActivityAudit
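
Sysdig builds the HTTP Event Collector request for you, but when validating a new HEC token it can help to see the envelope shape Splunk expects: a JSON object with an `event` field plus optional `sourcetype` and `index` metadata. The helper below is an illustrative sketch, not Sysdig code:

```python
import json

def hec_envelope(event, sourcetype, index=None):
    """Wrap an event in a Splunk HTTP Event Collector envelope."""
    body = {"event": event, "sourcetype": sourcetype}
    if index:
        body["index"] = index  # only sent when an index is configured
    return json.dumps(body)

# The resulting string is what gets POSTed to
# https://<splunk-host>:8088/services/collector/event
# with an "Authorization: Splunk <token>" header.
print(hec_envelope({"id": "164ace36..."}, "SysdigSecureEvents", "main"))
```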

2 - Forwarding to Syslog

Syslog stands for System Logging Protocol. It is a standard chiefly used by network devices to send events and logs in a particular format to a centralized system for storage and analysis. A Syslog event includes the severity level, host IP, timestamp, diagnostic information, and so on.

Sysdig Event Forwarding allows you to send events gathered by Sysdig Secure to a Syslog server.

Prerequisites

Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.

Configure Syslog Event Forwarding

To forward event data to a Syslog Server:

  1. Log in to Sysdig Secure as admin. From the Settings module, navigate to the Events Forwarding tab.

  2. Click the Add Integration button.

  3. Select Syslog from the drop-down menu.

  4. Configure the required options:

    Integration Name: Define an integration name.

    Address: Specify the Syslog server where the events are forwarded. Enter a domain name or IP address. If a domain name resolves to several IP addresses, the first resolved address is used.

    Port: Specify the port number.

    Protocol: Choose the protocol depending on the server you are sending the logs to:

    • RFC 3164: The older version of the protocol; default port and transport is 514/UDP.

    • RFC 5424: The current version of the protocol; default port and transport is 514/UDP.

    • RFC 5425 (TLS): An extension to RFC 5424 that uses an encrypted channel; default port and transport is 6514/TCP. Select this option if you want to use a certificate uploaded via Sysdig’s Certificates Management tool.

    UDP/TCP: Define the transport layer protocol, UDP or TCP. Use TCP for security incidents, as it is far more reliable than UDP at handling network congestion and preventing packet loss.

    • NOTE: RFC 5425 (TLS) only supports TCP.

    Certificate: (Optional) Select a certificate you’ve uploaded via Sysdig’s Certificates Management tool. Note that the RFC 5425 (TLS) protocol is required for you to see this field.

    Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.

    Allow insecure connections: Toggle on if you want to allow insecure connections (i.e. invalid or self-signed certificate on the receiving side).

    Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.

  5. Click the Save button to save the integration.
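
For reference, a receiver configured for RFC 5424 sees each forwarded event framed in a syslog header (`<PRI>1 TIMESTAMP HOST APP PROCID MSGID SD MSG`, where PRI = facility × 8 + severity). The formatter below is a simplified illustration of that framing; the facility value and host/app names are made up:

```python
from datetime import datetime, timezone

def rfc5424_line(facility, severity, host, app, msg, ts):
    """Format a minimal RFC 5424 frame: <PRI>1 TIMESTAMP HOST APP - - - MSG."""
    pri = facility * 8 + severity  # PRI encodes facility and severity together
    return f"<{pri}>1 {ts.isoformat()} {host} {app} - - - {msg}"

line = rfc5424_line(16, 5, "sysdig", "secure-events", '{"id": "..."}',
                    datetime(2020, 11, 25, 16, 49, 8, tzinfo=timezone.utc))
print(line)
# <133>1 2020-11-25T16:49:08+00:00 sysdig secure-events - - - {"id": "..."}
```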

3 - Forwarding to IBM Cloud Pak for Multicloud Management

Prerequisite: A grafeas-service-admin-id API key in IBM Cloud Pak for Multicloud Management

To forward event data to IBM Cloud Pak for Multicloud Management:

  1. Log in to Sysdig Secure as admin.

  2. From the Settings module, navigate to the Events Forwarding tab.

  3. Click the Add Integration button.

  4. Select IBM MCM from the drop-down menu.

  5. Configure the required options:

    Integration Name: Define an integration name.

    URL: The URL of your MCM API endpoint. This should be the same URL that you use to connect to the IBM MCM Cloud Pak console. Be sure to use the format scheme://host:port.

    Grafeas API Key: You need to create a Grafeas API key that Sysdig will use to authorize and authenticate.

    Account ID: (Optional) You can leave it blank to use the default value of id-mycluster-account. If you want to use a different account name, provide it here.

    Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.

    Select whether or not you want to allow insecure connections (i.e. invalid or self-signed certificate on the receiving side).

    Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.

  6. Click the Save button to save the integration.

Here is an example of how events forwarded from Sysdig Secure are displayed on the IBM Multicloud Management console:

4 - Forwarding to IBM QRadar

Prerequisites

Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.

Configure Event Forwarding Integration with IBM QRadar

To forward event data to IBM QRadar:

  1. Log in to Sysdig Secure as admin.

  2. From the Settings module, navigate to the Events Forwarding tab.

  3. Click the Add Integration button.

  4. Select IBM QRadar from the drop-down menu.

  5. Configure the required options:

    Integration Name: Define an integration name.

    Address: Specify the DNS address of the QRadar installation endpoint.

    Port: The port to which data is sent. The transport protocol is fixed to TCP; 514/TCP is the default.

    Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.

    Allow insecure connections: Toggle on if you want to allow insecure connections (i.e. invalid or self-signed certificate on the receiving side).

    Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.

  6. Click the Save button to save the integration.

See also: Installing Extensions from IBM’s Knowledge Center.

5 - Forwarding to Kafka Topic

Kafka is a distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol. It can be deployed on bare-metal hardware, virtual machines, or containers in on-premise as well as cloud environments.

Events are organized and durably stored in topics. Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.

Prerequisites

Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.

Configure Event Forwarding Integration with a Kafka Topic

To forward secure data to Kafka:

  1. Log in to Sysdig Secure as admin. From the Settings module, navigate to the Events Forwarding tab.

  2. Click the Add Integration button.

  3. Select Kafka topic from the drop-down menu.

  4. Configure the required options:

    Integration Name: Define an integration name.

    Brokers: Kafka server endpoints in “hostname:port” format (without a protocol scheme). A Kafka cluster may provide several brokers; you can list several using a comma-separated list.

    Topic: The Kafka topic where you want to store the forwarded data.

    Partitioner/Balancer: Algorithm that the client uses to multiplex data between the multiple Brokers. For compatibility with the Java client, Murmur2 is used as the default partitioner. Supported algorithms are:

    • Murmur2

    • Round robin

    • Least bytes

    • Hash

    • CRC32

    Compression: Compression standard used for the data. Supported algorithms are:

    • LZ4

    • Snappy

    • Gzip

    • Standard

    Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.

    Select whether or not you want to allow insecure connections (i.e. invalid or self-signed certificate on the receiving side).

    Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.

  5. Click the Save button to save the integration.
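The “hostname:port” broker format above can be sanity-checked before saving the integration. This is an illustrative helper (`parse_brokers` is not part of any Sysdig tooling):

```python
def parse_brokers(brokers: str) -> list:
    """Parse a comma-separated 'hostname:port' broker list (no protocol scheme)."""
    endpoints = []
    for entry in brokers.split(","):
        host, _, port = entry.strip().rpartition(":")
        if not host or not port.isdigit():
            raise ValueError(f"expected 'hostname:port', got {entry!r}")
        endpoints.append((host, int(port)))
    return endpoints

print(parse_brokers("kafka-0.example.com:9092, kafka-1.example.com:9092"))
# [('kafka-0.example.com', 9092), ('kafka-1.example.com', 9092)]
```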

6 - Forwarding to Amazon SQS

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

Prerequisites

Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.

Configure Event Forwarding Integration with Amazon SQS

  1. Log in to Sysdig Secure as admin.

  2. From the Settings module, navigate to the Events Forwarding tab.

  3. Click the Add Integration button.

  4. Select Amazon SQS from the drop-down menu.

  5. Configure the required options:

    • Integration Name: Define an integration name.
    • Access Key and Access Secret: Enter your AWS access key and secret.
    • Token: Enter the AWS session token, if your credentials require one.
    • Region: Enter the AWS region where you created your Amazon SQS queue.
    • Delay (Optional): Enter a value between 0 and 900 seconds by which message delivery should be delayed.
    • Metadata (Optional): Set up to 10 key-value headers with which the messages should be tagged. Entries can be string values.
    • Queue: Enter the name of your Amazon SQS queue.
    • Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
    • Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
  6. Click the Save button to save the integration.
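The optional metadata headers map onto the standard SQS message-attribute structure, which caps attributes at 10 per message. A sketch of that shape (the helper name is illustrative):

```python
def to_message_attributes(metadata: dict) -> dict:
    """Map string key/value metadata onto the SQS MessageAttributes structure."""
    if len(metadata) > 10:
        raise ValueError("SQS allows at most 10 message attributes per message")
    return {key: {"DataType": "String", "StringValue": value}
            for key, value in metadata.items()}

attrs = to_message_attributes({"team": "security", "env": "prod"})
```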

7 - Forwarding to Google Chronicle

Google Chronicle is a cloud service, built as a specialized layer on top of core Google infrastructure, designed for enterprises to privately retain, analyze, and search the massive amounts of security and network telemetry they generate. Chronicle normalizes, indexes, correlates, and analyzes the data to provide instant analysis and context on risky activity.

Prerequisites

Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.

Configure Event Forwarding Integration with Google Chronicle

To forward event data to Chronicle:

  1. Log in to Sysdig Secure as admin.

  2. From the Settings module, navigate to the Events Forwarding tab.

  3. Click the Add Integration button.

  4. Select Chronicle from the drop-down menu.

  5. Configure the required options:

    • Integration Name: Define an integration name.
    • API Key: Enter the API key used to authenticate with your Chronicle instance.
    • Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.
    • Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
  6. Click the Save button to save the integration.

8 - Forwarding to Google PubSub

Google Pub/Sub allows services to communicate asynchronously and is used for streaming analytics and data integration pipelines to ingest and distribute data. It is equally effective as messaging-oriented middleware for service integration or as a queue to parallelize tasks. See Common Use Cases for more background detail.

Prerequisites

Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.

NOTE: The permissions for the service account must be either Editor or Admin. Publisher is not sufficient.

Configure Event Forwarding Integration with Pub/Sub

  1. Log in to Sysdig Secure as admin.

  2. From the Settings module, navigate to the Events Forwarding tab.

  3. Click the Add Integration button.

  4. Select Google Pub/Sub from the drop-down menu.

  5. Configure the required options:

  • Integration Name: Define an integration name.

  • Project: Enter the Cloud Console project name you created in Google Pub/Sub.

  • Topic: Enter the Topic Name you created.

  • JSON Credentials: Enter the Service Account credentials you created.

  • Attributes: If you have chosen to embed custom attributes as metadata in Pub/Sub messages, enter them here.

  • Ordering Key: If you chose to have subscribers receive messages in order, enter the ordering key information you set up.

  • Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.

  • Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.

  6. Click the Save button to save the integration.
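On the subscriber side, a forwarded event arrives as UTF-8 JSON in the message body, with any configured attributes alongside it. A minimal decoding sketch (the payload fields shown are illustrative):

```python
import json

def decode_event(data: bytes, attributes: dict):
    """Decode a Pub/Sub message body carrying a forwarded event as UTF-8 JSON."""
    return json.loads(data.decode("utf-8")), dict(attributes)

event, attrs = decode_event(
    b'{"name": "Notable Filesystem Changes", "severity": 0}',
    {"source": "sysdig"})
```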

9 - Forwarding to Google Security Command Center

Google Security Command Center (SCC) is a centralized vulnerability and threat reporting service that helps you strengthen your security posture and provides asset inventory and discovery.

Supported data

At this time, only GCP Audit Log events can be forwarded to this integration.

Prerequisites

  1. Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.

  2. Enable the integration from the GCP console: select Enable APIs and Services and enable the following APIs:

    • Security Command Center API
    • Identity and Access Management (IAM) API
  3. Service Account: A service account with the right permissions is required. The following example illustrates how to create it from the terminal. The values PROJECT_ID and ORG_ID have to be provided. SERVICE_ACCOUNT refers to the desired name for the account. KEY_LOCATION refers to the desired name for the JSON output file that will need to be uploaded into the Sysdig UI in the next step.

      export SERVICE_ACCOUNT=scc-servaccount
      export PROJECT_ID=elevated-web-872901
      export KEY_LOCATION=scckey.json
      export ORG_ID=494436833222
    
      gcloud iam service-accounts create $SERVICE_ACCOUNT  \
        --display-name "Service Account for USER"  \
        --project $PROJECT_ID
    
      gcloud iam service-accounts keys create $KEY_LOCATION  \
        --iam-account $SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com
    
      gcloud beta organizations add-iam-policy-binding $ORG_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com" \
        --role='roles/securitycenter.admin'
    

Configure Event Forwarding Integration with Google SCC

This action can be performed only by an Administrator

To forward event data to Google SCC:

  1. Log in to Sysdig Secure as admin.

  2. From the Settings module, navigate to the Events Forwarding tab.

  3. Click the Add Integration button.

  4. Select Google SCC from the drop-down menu.

  5. Configure the required options:

    • Integration Name: Define an integration name.
    • Organization: Set the ID of your GCP organization.
    • JSON credentials: Upload the JSON credentials that you previously generated from a service account or user.
    • Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. Note that since only GCP Audit Log events can be forwarded, only Runtime Policy events are shown.
    • Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
  6. Click the Save button to save the integration.

10 - Forwarding to Sentinel

Microsoft Sentinel (formerly Azure Sentinel) is a security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution built on Azure services. See Microsoft’s Sentinel documentation for more detail.

Prerequisites

Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.

To successfully integrate Sentinel with Sysdig’s event forwarding, you must have access to a configured Log Analytics Workspace. Go there to retrieve the workspace ID and secret you will need for the integration:

  • Open your Log Analytics Workspace.
  • Navigate to Agents management and select Linux servers.
  • Copy the workspace id and primary key.
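Sysdig performs the request signing itself; for reference, the workspace ID and primary key are the ingredients of the SharedKey authorization scheme used by the Log Analytics HTTP Data Collector API. A sketch of how such a signature is built (all values below are illustrative):

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id: str, primary_key: str,
                    body: bytes, rfc1123_date: str) -> str:
    """Build the SharedKey authorization header for the Log Analytics
    HTTP Data Collector API (the workspace primary key is base64-encoded)."""
    string_to_sign = (f"POST\n{len(body)}\napplication/json\n"
                      f"x-ms-date:{rfc1123_date}\n/api/logs").encode("utf-8")
    key = base64.b64decode(primary_key)
    digest = hmac.new(key, string_to_sign, hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

sig = build_signature("00000000-0000-0000-0000-000000000000",
                      base64.b64encode(b"example-primary-key").decode(),
                      b'{"event": "test"}',
                      "Mon, 25 Nov 2020 16:49:08 GMT")
```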

Configure Event Forwarding Integration with Sentinel

  1. Log in to Sysdig Secure as admin.

  2. From the Settings module, navigate to the Events Forwarding tab.

  3. Click the Add Integration button.

  4. Select Microsoft Sentinel from the drop-down menu.

  5. Configure the required options:

  • Integration Name: Define an integration name.
  • Workspace ID: Enter the workspace Id you copied from the Log Analytics Workspace.
  • Secret: Enter the Primary key you copied from the Log Analytics Workspace.
  • Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded to Sentinel. The available list depends on the Sysdig features and products you have enabled.
  • Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.
  6. Click the Save button to save the integration.

11 - Forwarding to Elasticsearch

Elasticsearch is a distributed, RESTful search and analytics engine at the heart of the Elastic Stack. Sysdig provides event forwarding to Elasticsearch for versions greater than or equal to:

  • Elasticsearch 7
  • OpenSearch 1.2

For more information, see How to Ingest Data Into Elasticsearch Service

Prerequisites

Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.

You must have an instance of Elasticsearch running and permissions to access it.

Configure Event Forwarding Integration with Elasticsearch

  1. Log in to Sysdig Secure as admin.

  2. From the Settings module, navigate to the Events Forwarding tab.

  3. Click the Add Integration button.

  4. Select Elasticsearch from the drop-down menu.

  5. Configure the required options:

  • Integration Name: Define an integration name.

  • Endpoint: Enter the specific Elasticsearch instance where the data will be saved. For ES Cloud and ES Cloud Enterprise, the endpoint can be found under the Deployments page:

  • Index Name: Name of the index under which the data will be stored. See also: https://www.elastic.co/blog/what-is-an-elasticsearch-index

Data streams are currently not supported. Make sure to configure your Elasticsearch index template with the “datastream” option set to off. That way, data will be stored in indices.

  • Authentication: Basic authentication is the most common format (username:password). The given user must have write privileges in Elasticsearch; you can query the available users.

  • Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.

  • Allow insecure connections: Used to skip certificate validations when using HTTPS

  • Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.

  6. Click the Save button to save the integration.

Timestamp Mapping

To handle timestamps directly in Elasticsearch, you might want to map them to the appropriate field type. Timestamps have nanosecond resolution in Sysdig and they are available both in epoch timestamp and in RFC 3339 format.

The best approach is to use the date_nanos field type and define an explicit mapping in your Elasticsearch instance.

You will need to perform a PUT /<index>/_mapping API call, with the index you are storing data into, using the following payload:

{
 "properties": {
    "timestampRFC3339Nano": {
      "type": "date_nanos",
      "format": "strict_date_optional_time_nanos"
    }
  }
}

Alternatively, you can define the mapping through the Kibana interface, if you use it.
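Both representations encode the same instant; a forwarded epoch timestamp in nanoseconds converts to the RFC 3339 form like this (a plain stdlib sketch):

```python
from datetime import datetime, timezone

def epoch_ns_to_rfc3339nano(ts_ns: int) -> str:
    """Render a nanosecond epoch timestamp in RFC 3339 form; datetime only
    carries microseconds, so the nanosecond fraction is formatted manually."""
    secs, nanos = divmod(ts_ns, 1_000_000_000)
    base = datetime.fromtimestamp(secs, tz=timezone.utc)
    return base.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

print(epoch_ns_to_rfc3339nano(1606322948648718268))
# 2020-11-25T16:49:08.648718268Z
```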

12 - Forwarding to Webhook

Webhooks are “user-defined HTTP callbacks.” They are usually triggered by some event. When that event occurs, the source site makes an HTTP request to the URL configured for the webhook. Users can configure them to cause events on one site to invoke behavior on another.

Sysdig Secure leverages webhooks to support integrations that are not covered by any other particular integration/protocol present in the Event Forwarder list.

Prerequisites

Event forwards originate from region-specific IPs. For the full list of outbound IPs by region, see SaaS Regions and IP Ranges. Update your firewall and allow inbound requests from these IP addresses to enable Sysdig to handle event forwarding.

Configure Event Forwarding to a Webhook

To forward secure data to a Webhook:

  1. Log in to Sysdig Secure as admin. From the Settings module, navigate to the Events Forwarding tab.

  2. Click the Add Integration button.

  3. Select Webhook from the drop-down menu.

  4. Configure the required options:

    Integration Name: Define an integration name.

    Endpoint: Webhook endpoint, following the scheme://hostname:port format (e.g. https://hostname:port).

    Authentication: Four different methods are supported:

    • Basic authentication: If you select this method, you must fill the Secret field with the desired user:password pair. No whitespace; the colon character is the separator.

    • Bearer token: If you select this method, you must fill the Secret field with the token provided by the receiving application.

    • Signature header: If you select this method, you must fill the Secret field with the cryptographic key provided by the software on the other end.

    • Certificate: Select this option if you want to use a certificate uploaded via Sysdig’s Certificates Management tool.

      • The Certificate field will then appear; select the appropriate cert from the drop-down menu.

    Secret: Authorization / Authentication data. This field depends on the Authentication method selected above.

    Custom Headers: Any number of custom headers defined by the user to accommodate additional parameters required on the receiving end.

    To avoid interfering with the regular webhook protocol and expected headers, the following headers cannot be set using this form.

    Data to Send: Select from the drop-down the types of Sysdig data that should be forwarded. The available list depends on the Sysdig features and products you have enabled.

    Due to the heavy connection establishment overhead imposed by the HTTP protocol, the Secure policy events are grouped by time proximity into batches and sent together in a single request as a JSON array. In other words, every HTTP request will contain a JSON array containing one or more policy runtime events.

    Select whether or not you want to allow insecure connections (i.e. invalid or self-signed certificate on the receiving side).

    Toggle the enable switch as necessary. Remember that you will need to “Test Integration” with the button below before enabling the integration.

  5. Click the Save button to save the integration.
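On the receiving end, each request body is therefore a JSON array of one or more events. If you chose the Signature header method, the receiver verifies the raw body against the shared key before unpacking; HMAC-SHA256 below is an assumed scheme for illustration — use whatever algorithm your receiving software expects:

```python
import hashlib
import hmac
import json

def handle_batch(body: bytes, signature: str, secret: str) -> list:
    """Verify a signature header (HMAC-SHA256 assumed), then unpack the batch."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise ValueError("signature mismatch")
    return json.loads(body)  # every request carries a JSON array of events

body = b'[{"name": "Notable Filesystem Changes"}, {"name": "Terminal shell in container"}]'
sig = hmac.new(b"shared-secret", body, hashlib.sha256).hexdigest()
events = handle_batch(body, sig, "shared-secret")
```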

13 - Event Enrichment with Agent Labels

The agent includes the following labels by default when event labels are enabled.

Enable labels

event_labels:
  enabled: true/false

Default labels

event_labels:
  include:
    - process.name
    - host.hostName
    - agent.tag
    - container.name
    - kubernetes.cluster.name
    - kubernetes.namespace.name
    - kubernetes.deployment.name
    - kubernetes.pod.name
    - kubernetes.node.name

Adding Custom Labels

Event labeling supports both including and excluding labels beyond the defaults.

event_labels:
  exclude:
    - custom.label.to.exclude

event_labels:
  include:
    - custom.label.to.include

Example of an enriched event being sent to Splunk:

{
  "baselineId": null,
  "containerId": "e4d32e56d9d2",
  "description": "A shell was used as the entrypoint/exec point into a container with an attached terminal.",
  "eventLabels": [
    { "key": "kubernetes.node.name", "value": "ip-172-31-72-246" },
    { "key": "container.name", "value": "k8s_elasticsearch_sysdigcloud-elasticsearch-0_sysdigcloud_c824e1f8-aa1f-11e9-aff4-027768606bae_0" },
    { "key": "kubernetes.cluster.name", "value": "SysdigBackend" },
    { "key": "kubernetes.pod.name", "value": "sysdigcloud-elasticsearch-0" },
    { "key": "kubernetes.namespace.name", "value": "sysdigcloud" },
    { "key": "agent.tag.timezone", "value": "UTC" },
    { "key": "agent.tag.location", "value": "europe" },
    { "key": "process.name", "value": "bash" },
    { "key": "host.hostName", "value": "ip-172-31-72-246" }
  ],
  "falsePositive": false,
  "fields": [ … ],
  "hostMac": "02:77:68:60:6b:ae",
  "id": 702701271278202880,
  "isAggregated": false,
  "matchedOnDefault": false,
  "name": "Terminal shell in container",
  "output": "A shell was spawned in a container with an attached terminal (user=root k8s_elasticsearch_sysdigcloud-elasticsearch-0_sysdigcloud_c824e1f8-aa1f-11e9-aff4-027768606bae_0 (id=e4d32e56d9d2) shell=bash parent=docker-runc cmdline=bash terminal=34816)",
  "policyId": 18,
  "ruleSubtype": null,
  "ruleType": "RULE_TYPE_FALCO",
  "severity": 5,
  "timestamp": 1564065391633554,
  "version": 1
}