Kubernetes Audit Logging

    Kubernetes log integration enables Sysdig Secure to use Kubernetes audit log data for Falco Rules and Activity Audit. Examples are provided for the distributions and platforms listed below.

    The integration allows auditing of:

    • Creation and destruction of pods, services, deployments, daemon sets, etc.

    • Creating/updating/removing config maps or secrets

    • Attempts to subscribe to changes to any endpoint

    Review the Types of Secure Integrations table for more context. The Audit Logging (Kubernetes) column lists the various options and their levels of support.

    To enable this feature in Sysdig Secure SaaS, install the Sysdig Admission Controller and set features.k8sAuditDetections to true. After installation, create Kubernetes audit policies; results can then be viewed in the UI.
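
    For illustration, a minimal Helm-based install sketch follows. The chart location and value names other than features.k8sAuditDetections are assumptions; verify them against the current Sysdig Admission Controller documentation.

      helm repo add sysdig https://charts.sysdig.com
      helm repo update

      # Placeholder values: substitute your own Secure API token and cluster name.
      helm install admission-controller sysdig/admission-controller \
        --create-namespace -n sysdig-admission-controller \
        --set sysdig.secureAPIToken=<YOUR_SECURE_API_TOKEN> \
        --set clusterName=<YOUR_CLUSTER_NAME> \
        --set features.k8sAuditDetections=true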

    View Results in the UI

    Policies will need to be created to use the new Falco rules for Kubernetes audit logging. For information on creating policies, refer to the Policies documentation.

    View Audit Logging Rules

    The Kubernetes audit logging rules can be viewed in the Sysdig Policies Rules Editor, found in the Policies module. To view the audit rules:

    1. From the Policies module, navigate to the Rules Editor tab.

    2. Open the drop-down menu for the default rules and select k8s_audit_rules.yaml.

    View Audit Events

    Kubernetes audit events will now be routed to the Sysdig agent daemon set within the cluster.

    Once the policies are created, the audit events can be observed in the Sysdig Secure Events module.

    LEGACY INSTALLATION INSTRUCTIONS

    These methods of enabling Kubernetes audit logging on Sysdig Secure SaaS have been replaced by simply installing the Sysdig Admission Controller. See also the release note of July 27, 2021.

    If your cluster already has Kubernetes audit logging enabled, there’s no need to change to the Admission Controller method.

    Prerequisites

    Install Sysdig Agent and Apply the Agent Service

    These instructions assume that the Sysdig agent has already been deployed to the Kubernetes cluster. See Agent Installation for details. When the agent(s) are installed, have the Sysdig agent service account, secret, configmap, and daemonset information on hand.

    • If the sysdig-agent-service.yaml was not explicitly deployed during agent installation, you need to apply it now:

      kubectl apply -f https://raw.githubusercontent.com/draios/sysdig-cloud-scripts/master/agent_deploy/kubernetes/sysdig-agent-service.yaml -n sysdig-agent
      

      Note: It is also assumed that the agent has been deployed in the sysdig-agent namespace; if it’s not, you might need to adjust the commands.

    • If your agent version is less than 11.2.0: You must add a variable, k8s_audit_server_url, to your agent configmap and set it to 0.0.0.0.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: sysdig-agent
      data:
        dragent.yaml: |
          configmap: true
          ...
          security:
            k8s_audit_server_url: 0.0.0.0
      

      For agent versions 11.2.0+, this step is already configured and no action is needed.
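
      For the pre-11.2.0 case, one convenient way to apply the change is to edit the live ConfigMap and restart the agent pods. This is a hedged sketch assuming the default sysdig-agent resource names and namespace:

      # Open the ConfigMap in an editor and add k8s_audit_server_url under security:
      kubectl edit configmap sysdig-agent -n sysdig-agent

      # Restart the agent pods so they pick up the change
      kubectl rollout restart daemonset sysdig-agent -n sysdig-agent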

    Choose Enablement Steps

    Sysdig has tested Kubernetes audit log integration on a variety of platforms and distributions. Each requires different steps, as detailed in the sections below.

    The routing of Kubernetes audit events has changed rapidly between Kubernetes versions. For more information, review the Kubernetes documentation.

    Routing is accomplished via either:

    • Webhook backend: Kubernetes version >= 1.11, or

    • Dynamic backend with Audit Sink: Kubernetes version >= 1.13 and <1.19 (deprecated since 1.19)

    The table below summarizes the tested options:

    | Distro/Platform | Version                                              | Uses Webhook | Uses Dynamic | Uses Other     |
    |-----------------|------------------------------------------------------|--------------|--------------|----------------|
    | OpenShift       | 3.11                                                 | X            |              |                |
    | OpenShift       | 4.2, 4.3                                             |              | X            |                |
    | OpenShift       | 4.4+ (not yet supported)                             |              |              |                |
    | MiniShift       | 3.11                                                 | X            |              |                |
    | Kops            | 1.15, 1.18, 1.20                                     | X            |              |                |
    | GKE (Google)    | 1.13                                                 |              |              | X (bridge)     |
    | EKS (Amazon)    | eks.5 / Kubernetes 1.14 on AWS Cloud or AWS Outposts |              |              | X (CloudWatch) |
    | AKS (Azure)     | 1.15+                                                |              |              | X (bridge)     |
    | RKE (Rancher)   | RKE v1.0.0 / Kubernetes 1.13+                        | X            |              |                |
    | IKS (IBM)       | 1.15                                                 | X            |              |                |
    | Minikube        | 1.11+                                                | X            |              |                |

    Enable Kubernetes Audit Logging

    These instructions assume that the Kubernetes cluster has NO audit configuration or logging in place. The steps add configuration only to route audit log messages to the Sysdig agent.

    There is a beta script automating many of these steps, which is suitable for proof-of-concept/non-production environments. In any case, we recommend reading the step-by-step instructions carefully before continuing.

    OpenShift 3.11

    OpenShift 3.11 only supports webhook backends (described as “Advanced Audit” in the OpenShift documentation).

    Follow the steps below on the Kubernetes API master node:

    1. Copy the provided audit-policy.yaml file to the Kubernetes API master node in the /etc/origin/master directory.

      (The file will be picked up by OpenShift services running in containers because this directory is mounted into the Kube API server container at /etc/origin/master.)

    2. Create a Webhook Configuration File and copy it to the Kubernetes API master node, in the /etc/origin/master directory.

    3. Modify the master configuration by adding the following to your /etc/origin/master/master-config.yaml file, replacing any existing auditConfig: entry.

      auditConfig:
        enabled: true
        maximumFileSizeMegabytes: 10
        maximumRetainedFiles: 1
        auditFilePath: "/etc/origin/master/k8s_audit_events.log"
        logFormat: json
        webHookMode: "batch"
        webHookKubeConfig: /etc/origin/master/webhook-config.yaml
        policyFile: /etc/origin/master/audit-policy.yaml
      

      One way to do this is to use oc ex config patch.

      Assuming the above content were in a file audit-patch.yaml and you had copied /etc/origin/master/master-config.yaml to /tmp/master-config.yaml.original, you could run:

      oc ex config patch /tmp/master-config.yaml.original -p "$(cat audit-patch.yaml)" > /etc/origin/master/master-config.yaml
      
    4. Restart the API server by running the following:

      # sudo /usr/local/bin/master-restart api
      # sudo /usr/local/bin/master-restart controllers
      

      Once restarted, the server will route Kubernetes audit events to the Sysdig agent service.
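
      As an optional sanity check, you can confirm that audit events are also being written to the auditFilePath configured above:

      sudo tail -f /etc/origin/master/k8s_audit_events.log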

    MiniShift 3.11

    Like OpenShift 3.11, Minishift 3.11 supports webhook backends, but the way Minishift launches the Kubernetes API server is different. Therefore, the command line arguments are somewhat different than in the instructions above.

    1. Copy the provided audit-policy.yaml file to the Minishift VM into the directory /var/lib/minishift/base/kube-apiserver/.

      (The file will be picked up by Minishift services running in containers because this directory is mounted into the kube API server container at /etc/origin/master.)

    2. Create a Webhook Configuration File and copy it to the Minishift VM into the directory /var/lib/minishift/base/kube-apiserver/.

    3. Modify the master configuration by adding the following to /var/lib/minishift/base/kube-apiserver/master-config.yaml on the Minishift VM, merging/updating as required. 

      Note: master-config.yaml also exists in other directories, such as /var/lib/minishift/base/openshift-apiserver and /var/lib/minishift/base/openshift-controller-manager/.

      You should modify the one in kube-apiserver:

      kubernetesMasterConfig:
        apiServerArguments:
          audit-log-maxbackup:
          - "1"
          audit-log-maxsize:
          - "10"
          audit-log-path:
          - /etc/origin/master/k8s_audit_events.log
          audit-policy-file:
          - /etc/origin/master/audit-policy.yaml
          audit-webhook-batch-max-wait:
          - 5s
          audit-webhook-config-file:
          - /etc/origin/master/webhook-config.yaml
          audit-webhook-mode:
          - batch
      
    4. Restart the API server by running the following:

      (For minishift)
      # minishift openshift restart
      

      Once restarted, the server will route Kubernetes audit events to the Sysdig agent service.

    OpenShift 4.2, 4.3

    By default, OpenShift 4.2/4.3 enables Kubernetes API server audit logs and makes them available on each master node at the path /var/log/kube-apiserver/audit.log. However, the API server is not configured by default with the ability to create dynamic backends.

    You must first enable the creation of dynamic backends by changing the API server configuration. You then create audit sinks to route audit events to the Sysdig agent.

    1. Run the following to update the API server configuration:

      oc patch kubeapiserver cluster --type=merge -p '{"spec":{"unsupportedConfigOverrides":{"apiServerArguments":{"audit-dynamic-configuration":["true"],"feature-gates":["DynamicAuditing=true"],"runtime-config":["auditregistration.k8s.io/v1alpha1=true"]}}}}'
      
    2. Wait for the API server to restart with the updated configuration.

    3. Create a Dynamic Audit Sink.

      Once the dynamic audit sink is created, it will route Kubernetes audit events to the Sysdig agent service.

    Kops

    You will modify the cluster configuration using kops set, update the configuration using kops update, and then perform a rolling update using kops rolling-update.

    1. Create a Webhook Configuration File and save it locally.

    2. Get the current cluster configuration and save it to a file:

      kops get cluster <your cluster name> -o yaml > cluster-current.yaml
      
    3. To ensure that webhook-config.yaml is available on each master node at /var/lib/k8s_audit, and that the kube-apiserver process runs with the required arguments to enable the webhook backend, edit cluster.yaml to add/modify the fileAssets and kubeAPIServer sections as follows:

      apiVersion: kops.k8s.io/v1alpha2
      kind: Cluster
      spec:
        ...
        fileAssets:
          - name: webhook-config
            path: /var/lib/k8s_audit/webhook-config.yaml
            roles: [Master]
            content: |
                      <contents of webhook-config.yaml go here>
          - name: audit-policy
            path: /var/lib/k8s_audit/audit-policy.yaml
            roles: [Master]
            content: |
                      <contents of audit-policy.yaml go here>
        ...
        kubeAPIServer:
          auditLogPath: /var/lib/k8s_audit/audit.log
          auditLogMaxBackups: 1
          auditLogMaxSize: 10
          auditWebhookBatchMaxWait: 5s
          auditPolicyFile: /var/lib/k8s_audit/audit-policy.yaml
          auditWebhookConfigFile: /var/lib/k8s_audit/webhook-config.yaml
        ...
      

      A simple way to do this is with yq, using a script like the following (note: the yq m merge command is legacy yq v2/v3 syntax; it was removed in yq v4):

      cat <<EOF > merge.yaml
      spec:
        fileAssets:
          - name: webhook-config
            path: /var/lib/k8s_audit/webhook-config.yaml
            roles: [Master]
            content: |
      $(cat webhook-config.yaml | sed -e 's/^/        /')
          - name: audit-policy
            path: /var/lib/k8s_audit/audit-policy.yaml
            roles: [Master]
            content: |
      $(cat audit-policy.yaml | sed -e 's/^/        /')
        kubeAPIServer:
          auditLogPath: /var/lib/k8s_audit/audit.log
          auditLogMaxBackups: 1
          auditLogMaxSize: 10
          auditWebhookBatchMaxWait: 5s
          auditPolicyFile: /var/lib/k8s_audit/audit-policy.yaml
          auditWebhookConfigFile: /var/lib/k8s_audit/webhook-config.yaml
      EOF
      
      yq m -a cluster-current.yaml merge.yaml > cluster.yaml
      
    4. Configure Kops with the new cluster configuration:

      kops replace -f cluster.yaml
      
    5. Update the cluster configuration to prepare changes to the cluster:

      kops update cluster <your cluster name> --yes
      
    6. Perform a rolling update to redeploy the master nodes with the new files and API server configuration:

      kops rolling-update cluster --yes
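
      Once the rolling update completes, kops validate cluster is a convenient, optional way to confirm the masters came back healthy:

      kops validate cluster <your cluster name>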
      

    GKE (Google)

    These instructions assume you have already created a cluster and configured the gcloud and kubectl command-line programs to interact with the cluster. Note the known limitations, below.

    GKE already provides Kubernetes audit logs, but the logs are exposed using Stackdriver and are in a different format than the native format used by Kubernetes.

    To simplify things, we have written a bridge program that reads audit logs from Stackdriver, reformats them to match the Kubernetes-native format, and sends them to a configurable webhook endpoint, in this case the Sysdig agent service.

    1. Create a Google Cloud (not Kubernetes) service account and key that has the ability to read logs:

      $ gcloud iam service-accounts create swb-logs-reader --description "Service account used by stackdriver-webhook-bridge" --display-name "stackdriver-webhook-bridge logs reader"
      $ gcloud projects add-iam-policy-binding <your gce project id> --member serviceAccount:swb-logs-reader@<your gce project id>.iam.gserviceaccount.com --role 'roles/logging.viewer'
      $ gcloud iam service-accounts keys create $PWD/swb-logs-reader-key.json --iam-account swb-logs-reader@<your gce project id>.iam.gserviceaccount.com
      
    2. Create a Kubernetes secret containing the service account keys:

      kubectl create secret generic stackdriver-webhook-bridge --from-file=key.json=$PWD/swb-logs-reader-key.json -n sysdig-agent
      
    3. Deploy the bridge program to your cluster using the provided stackdriver-webhook-bridge.yaml file:

      kubectl apply -f stackdriver-webhook-bridge.yaml -n sysdig-agent
      

    The bridge program routes audit events to the domain name sysdig-agent.sysdig-agent.svc.cluster.local, which corresponds to the sysdig-agent service you created either when deploying the agent or as a prerequisite step.
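
    To verify that the bridge is running, check its pod in the sysdig-agent namespace (the exact pod name will vary; this is a generic check):

      kubectl get pods -n sysdig-agent
      # note the stackdriver-webhook-bridge pod name, then follow its logs:
      kubectl logs <stackdriver-webhook-bridge-pod-name> -n sysdig-agent -f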

    GKE Limitations

    GKE uses a Kubernetes audit policy that emits a more limited set of information than the one recommended by Sysdig. As a result, there are several limitations when retrieving Kubernetes audit information for the Events feed and Activity Audit features in Sysdig Secure.

    Request Object

    In particular, audit events for config maps in GKE generally do not contain a requestObject field that contains the object being created/modified.

    Pod Exec Does Not Include Command/Container

    For many Kubernetes distributions, an audit event representing a pod exec includes the command and specific container as arguments to the requestURI. For example:

    "requestURI":"/api/v1/namespaces/default/pods/nginx-deployment-7998647bdf-phvq7/exec?command=bash&container=nginx1&container=nginx1&stdin=true&stdout=true&tty=true"

    In GKE, the audit event is missing those request parameters.

    Implications for the Event Feed

    If the rule condition trigger includes a field that is not available in the Kubernetes audit log provided by GKE, the rule will not trigger.

    As a result, the following rule from k8s_audit_rules.yaml will not trigger: Create/Modify Configmap With Private Credentials. (The contents of configmaps are not included in audit logs, so they cannot be examined for sensitive information.)

    This also limits the information that can be displayed in the outputs of rules. For example, the command=%ka.uri.param[command] output variable in the Attach/Exec Pod rule will always return N/A.

    Implications for Activity Audit
    • kubectl exec elements will not be scoped to the cluster name; they will only be visible when scoping to the entire infrastructure

    • A kubectl exec item in Activity Audit will not display command or container information

    • Drilling down into a kubectl exec will not provide the container activity as there is no information that allows Sysdig to correlate the kubectl exec action with an individual container.

    EKS (Amazon)

    These instructions were verified with eks.5 on Kubernetes v1.14 for both AWS public cloud and AWS Outposts.

    Amazon EKS does not provide webhooks for audit logs, but it allows audit logs to be forwarded to CloudWatch. To access CloudWatch logs from the Sysdig agent, proceed as follows:

    1. Enable CloudWatch audit logs for your EKS cluster (see the eksctl sketch after this list).

    2. Allow access to CloudWatch from the worker nodes.

    3. Add a new deployment that polls CloudWatch and forwards events to the Sysdig agent.
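
    As a hedged illustration of step 1, audit logging can typically be enabled with eksctl (or through the EKS console); verify the flags against your eksctl version:

      eksctl utils update-cluster-logging --cluster <your cluster name> \
        --region <your region> --enable-types audit --approve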

    An example configuration that can be implemented through the AWS console is available, along with the code and image for an example audit log forwarder. (In a production system, this would be implemented as infrastructure-as-code scripts.)

    Please note that CloudWatch is an additional AWS paid offering. In addition, with this solution, all the pods running on the worker nodes will be allowed to read CloudWatch logs through AWS APIs.

    AKS (Azure)

    Requirements

    The installation script (below) has the following command-line tool requirements:

    • Azure CLI, already logged into your account

    • envsubst (shipped with the gettext package)

    • kubectl

    • curl, tr, grep

    Installation

    Execute the following script:

    curl -s https://raw.githubusercontent.com/sysdiglabs/aks-audit-log/master/install-aks-audit-log.sh | bash -s -- -g YOUR_RESOURCE_GROUP_NAME -c YOUR_AKS_CLUSTER_NAME
    

    Some resources will be created in the same resource group as your cluster:

    • Storage Account, to coordinate event consumers

    • Event Hubs, to receive audit log events

    • Diagnostic setting in the cluster, to send audit log to Event Hubs

    • Kubernetes deployment aks-audit-log-forwarder, to forward the log to Sysdig agent

    If everything worked as expected, you can verify that the audit logs are being forwarded by executing:

    kubectl get pods -n sysdig-agent
    # take note of the pod name for aks-audit-log-forwarder
    kubectl logs aks-audit-log-forwarder-XXXX -f
    

    For additional information, optional parameters, and architecture details, see the repository.

    To Uninstall

    Use the same parameters as for installation. The script will delete all created resources and configurations.

    curl -s https://raw.githubusercontent.com/sysdiglabs/aks-audit-log/master/uninstall-aks-audit-log.sh | bash -s -- -g YOUR_RESOURCE_GROUP_NAME -c YOUR_AKS_CLUSTER_NAME
    

    RKE (Rancher) with Kubernetes 1.13+

    These instructions were verified with RKE v1.0.0 and Kubernetes v1.16.3. It should work with versions as old as Kubernetes v1.13.

    Audit support is already enabled by default, but the audit policy must be updated to provide additional granularity. These instructions enable a webhook backend pointing to the agent’s service. Dynamic audit backends are not supported as there isn’t a way to enable the audit feature flag.

    1. On each Kubernetes API master node, create the directory /var/lib/k8s_audit.

    2. On each Kubernetes API master node, copy the provided audit-policy.yaml file into the directory /var/lib/k8s_audit. (This directory will be mounted into the API server, giving it access to the audit/webhook files.)

    3. Create a Webhook Configuration File and copy it to each Kubernetes API master node, into the directory /var/lib/k8s_audit.

    4. Modify your RKE cluster configuration cluster.yml to add extra_args and extra_binds sections to the kube-api section. Here’s an example:

      kube-api:
        ...
        extra_args:
          audit-policy-file: /var/lib/k8s_audit/audit-policy.yaml
          audit-webhook-config-file: /var/lib/k8s_audit/webhook-config.yaml
          audit-webhook-batch-max-wait: 5s
        extra_binds:
        - /var/lib/k8s_audit:/var/lib/k8s_audit
        ...
      

      This changes the command-line arguments for the API server to use an alternate audit policy and to use the webhook backend you created.

    5. Restart the RKE cluster via rke up.

    IKS (IBM)

    IKS supports routing Kubernetes audit events to a single configurable webhook backend URL. It does not support dynamic audit sinks and does not support the ability to change the audit policy that controls which Kubernetes audit events are sent.

    The instructions below were adapted from the IBM-provided documentation on how to integrate with Fluentd. It is expected that you are familiar with (or will review) the IKS tools for forwarding cluster and app logs described there.

    Limitation: The Kubernetes default audit policy generally does not include events at the Request or RequestResponse levels, meaning that any rules that look in detail at the objects being created/modified (e.g. rules using the ka.req.* and ka.resp.* fields) will not trigger. This includes the following rules:

    • Create Disallowed Pod

    • Create Privileged Pod

    • Create Sensitive Mount Pod

    • Create HostNetwork Pod

    • Pod Created in Kube Namespace

    • Create NodePort Service

    • Create/Modify Configmap With Private Credentials

    • Attach to cluster-admin Role

    • ClusterRole With Wildcard Created

    • ClusterRole With Write Privileges Created

    • ClusterRole With Pod Exec Created

    These instructions describe how to redirect from Fluentd to the Sysdig agent service.

    1. Set the webhook backend URL to the IP address of the sysdig-agent service (see the example command after these steps):

      http://$(kubectl get service sysdig-agent -o=jsonpath={.spec.clusterIP} -n sysdig-agent):7765/k8s_audit
      
    2. Verify that the webhook backend URL has been set:

      ibmcloud ks cluster master audit-webhook get --cluster <cluster_name_or_ID>
      
    3. Apply the webhook to your Kubernetes API server by refreshing the cluster master. It may take several minutes for the master to refresh.

      ibmcloud ks cluster master refresh --cluster <cluster_name_or_ID>
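
    For step 1, with the IBM Cloud CLI the webhook is typically registered with a command along these lines (flags may vary by CLI version; check ibmcloud ks cluster master audit-webhook set --help):

      ibmcloud ks cluster master audit-webhook set --cluster <cluster_name_or_ID> \
        --remote-server http://<sysdig_agent_cluster_IP>:7765/k8s_audit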
      

    Minikube 1.11+

    These instructions were verified using Minikube 1.19.0. Other Minikube versions should also work as long as they run Kubernetes version 1.11 or later. In all cases below, “the Minikube VM” refers to the VM created by Minikube. In cases where you’re using --vm-driver=none, this means the local machine.

    1. Create the directory /var/lib/k8s_audit on the master node. (On Minikube, it must be on the Minikube VM.)

    2. For Kubernetes 1.11 to 1.18: Copy the provided audit-policy.yaml file into the directory /var/lib/k8s_audit. (This directory will be mounted into the API server, giving it access to the audit/webhook files. On Minikube, it must be on the Minikube VM.)

      For Kubernetes 1.19: Use this audit-policy.yaml file instead.

    3. Create a Webhook Configuration File and copy it to each Kubernetes API master node, into the directory /var/lib/k8s_audit.

    4. Modify the Kubernetes API server manifest at /etc/kubernetes/manifests/kube-apiserver.yaml, adding the following command-line arguments:

      --audit-log-path=/var/lib/k8s_audit/k8s_audit_events.log
      --audit-policy-file=/var/lib/k8s_audit/audit-policy.yaml
      --audit-log-maxbackup=1
      --audit-log-maxsize=10
      --audit-webhook-config-file=/var/lib/k8s_audit/webhook-config.yaml
      --audit-webhook-batch-max-wait=5s
      

      Command-line arguments are provided in the container spec as arguments to the program /usr/local/bin/kube-apiserver. The relevant section of the manifest will look like this:

      spec:
        containers:
        - command:
          - kube-apiserver
          - --allow-privileged=true
          - --anonymous-auth=false
          - --audit-log-path=/var/lib/k8s_audit/audit.log
          - --audit-policy-file=/var/lib/k8s_audit/audit-policy.yaml
          - --audit-log-maxbackup=1
          - --audit-log-maxsize=10
          - --audit-webhook-config-file=/var/lib/k8s_audit/webhook-config.yaml
          - --audit-webhook-batch-max-wait=5s
          ...
      
    5. Modify the Kubernetes API server manifest at /etc/kubernetes/manifests/kube-apiserver.yaml to add a mount of /var/lib/k8s_audit into the kube-apiserver container. The relevant sections look like this:

      volumeMounts:
      - mountPath: /var/lib/k8s_audit/
        name: k8s-audit
        readOnly: true
        ...
      volumes:
      - name: k8s-audit
        hostPath:
          path: /var/lib/k8s_audit
          type: DirectoryOrCreate
        ...
      
    6. Modifying the manifest causes the Kubernetes API server to restart automatically. Once restarted, it will route Kubernetes audit events to the Sysdig agent’s service.

    Prepare Webhook or (Legacy) Dynamic Backend

    Most of the platform-specific instructions will use one of these methods.

    Create a Webhook Configuration File

    Sysdig provides a templated resource file that sends audit events to an IP associated with the Sysdig agent service, via port 7765.

    It is “templated” in that the actual IP is defined in an environment variable AGENT_SERVICE_CLUSTERIP, which can be plugged in using a program like envsubst.

    1. Download webhook-config.yaml.in.

    2. Run the following to fill in the template file with the ClusterIP IP address associated with the sysdig-agent service you created, either when installing the agent or in the prereq step:

      AGENT_SERVICE_CLUSTERIP=$(kubectl get service sysdig-agent -o=jsonpath={.spec.clusterIP} -n sysdig-agent) envsubst < webhook-config.yaml.in > webhook-config.yaml
      

      Note: Although service domain names like sysdig-agent.sysdig-agent.svc.cluster.local cannot be resolved from the Kubernetes API server (API servers typically run as pods but are not really part of the cluster), the ClusterIPs associated with those services are routable.
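
      For reference, the rendered webhook-config.yaml is a standard kubeconfig-format file pointing at the agent service. A minimal sketch follows (the actual Sysdig template may differ in details):

        apiVersion: v1
        kind: Config
        clusters:
        - name: sysdig-agent
          cluster:
            # filled in from AGENT_SERVICE_CLUSTERIP by envsubst in the step above
            server: http://<AGENT_SERVICE_CLUSTERIP>:7765/k8s_audit
        contexts:
        - name: default-context
          context:
            cluster: sysdig-agent
            user: ""
        current-context: default-context
        users: []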

    Using a webhook backend to route audit events is a feature available from Kubernetes v1.11+. See Kubernetes’ documentation for background info.

    Create a Dynamic Audit Sink

    When using dynamic audit sinks, you must create an AuditSink object that directs audit events to the Sysdig agent service.

    Sysdig provides a template file that can be used to create the sink.
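
    For reference, a minimal sketch of the rendered audit-sink.yaml, following the auditregistration.k8s.io/v1alpha1 schema (the actual Sysdig template may differ in details):

      apiVersion: auditregistration.k8s.io/v1alpha1
      kind: AuditSink
      metadata:
        name: sysdig-agent
      spec:
        policy:
          level: RequestResponse
          stages:
          - ResponseComplete
          - ResponseStarted
        webhook:
          throttle:
            qps: 10
            burst: 15
          clientConfig:
            # filled in from AGENT_SERVICE_CLUSTERIP by envsubst in step 2 below
            url: http://<AGENT_SERVICE_CLUSTERIP>:7765/k8s_audit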

    Dynamic audit webhooks using AuditSink API objects are available from Kubernetes v1.13 and were deprecated in 1.19. See Kubernetes’ documentation (https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#dynamic-backend) for background info.

    1. Download audit-sink.yaml.in.

    2. Run the following to fill in the template file with the ClusterIP IP address associated with the sysdig-agent service you created, either when installing the agent or in the prereq step:

      AGENT_SERVICE_CLUSTERIP=$(kubectl get service sysdig-agent -o=jsonpath={.spec.clusterIP} -n sysdig-agent) envsubst < audit-sink.yaml.in > audit-sink.yaml
      
    3. Apply the following:

      kubectl apply -f audit-sink.yaml -n sysdig-agent
      

    Test the Integration

    To test that Kubernetes audit events are being properly passed to the agent, you can do any of the following:

    • Enable the All K8s Object Modifications policy and create a deployment, service, configmap, or namespace to see if the events are recorded and forwarded.

    • Enable other policies, such as Suspicious K8s Activity, and test them.

    • You can use the falco-event-generator Docker image to generate activity that maps to many of the default rules/policies provided in Sysdig Secure. You can run the image via a command line like the following:

      docker run -v $HOME/.kube:/root/.kube -it falcosecurity/falco-event-generator k8s_audit
      

      This will create resources in a namespace falco-event-generator.

      See also: Using Falco within Sysdig Secure and the native Falco documentation for more information about this tool.

    (BETA) Script to Automate Configuration Changes

    As a convenience, Sysdig has created a script: enable-k8s-audit.sh, which performs the necessary steps for enabling audit log support for all Kubernetes distributions described above, except EKS.

    You can run it via: bash enable-k8s-audit.sh <distribution> where <distribution> is one of the following:

    • minishift-3.11

    • openshift-3.11

    • openshift-4.2, openshift-4.3

    • gke

    • iks

    • rke-1.13 (implies Kubernetes 1.13)

    • kops

    • minikube-1.13 (implies Kubernetes 1.13)

    • minikube-1.12 (implies Kubernetes 1.11/1.12)

    It should be run from the sysdig-cloud-scripts/k8s_audit_config directory.

    In some cases, it may prompt for the GCE project ID, IKS cluster name, etc.

    For Minikube, OpenShift 3.11, and Minishift 3.11, it will use ssh/scp to copy files to, and run scripts on, the API master node. Otherwise, it should be fully automated.