# [Beta] Network Security Policy Tool

By default, all pods within a Kubernetes cluster can communicate with each other without any restrictions. Kubernetes Network Policies help you isolate the microservice applications from each other, to limit the blast radius and improve the overall security posture.

With the Network Security Policy tool, you can author and fine-tune Kubernetes network policies within Sysdig Secure. Use it to generate a “least-privilege” policy to protect your workloads, incorporating both observed network traffic and additional user assessment. It doesn’t introduce any additional firewalls or inline connection proxies, leveraging the functionality that already exists in Kubernetes instead.
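The output of the tool is a standard Kubernetes NetworkPolicy object that the cluster's own CNI plugin enforces. A minimal sketch of what a generated least-privilege policy can look like (the names `example-app`, `example-client`, and `example-ns` are illustrative placeholders, not output from the tool):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-app-policy      # illustrative name
  namespace: example-ns         # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: example-app          # selects the pods this policy protects
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: example-client   # only this workload may connect in
      ports:
        - protocol: TCP
          port: 8080
```

Because the policy is plain Kubernetes YAML, you can apply it with your usual deployment tooling and version-control it alongside the application.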

## Benefits

This tool provides deep insight into microservice communications, saves time and effort by automatically describing network policies based on observed traffic, and guides the user to author appropriate policies.

More specifically, it delivers:

• Out-of-the-box visibility into network traffic between applications and services, with a visual topology map to help identify communications.

• A baseline network policy that you can directly refine and modify to match your desired declarative state.

• Automated Kubernetes network policy (KNP) generation based on the network communication baseline plus user-defined adjustments.

• Least-privilege: KNPs follow an allow-only model; any communication that is not explicitly allowed is forbidden.

• Enforcement delegated to the Kubernetes control plane, avoiding additional instrumentation or direct tampering with the host’s network configuration.
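The allow-only model falls out of Kubernetes NetworkPolicy semantics: as soon as any policy selects a pod, traffic that no allow rule matches is dropped. The strictest baseline is a default-deny policy; a sketch (namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: example-ns   # illustrative namespace
spec:
  podSelector: {}         # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # no ingress or egress rules are listed, so no traffic is allowed
```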

## Using the Network Security Policy Tool

To use the Network Security Policy tool, follow these basic steps:

1. Ensure your environment meets the Prerequisites.

2. Set up the scope (which entities over which time periods should be analyzed?).

3. Review the Ingress/Egress tables and edit the detected communications as desired.

4. Review everything visually in the Topology Map.

### Prerequisites

Sysdig agent version 10.7+

Supported Orchestrator Distributions and CNI Plugins:

• Vanilla Kubernetes (kops, kubeadm) using Calico

• OpenShift 4.x using OVS

• Amazon EKS using Calico

• Rancher Kubernetes using Calico

### Set the Scope

You first define the Kubernetes entity and timeframe for which you want to aggregate communications.

1. In the Sysdig Secure UI, select Policies > Network Security Policies from the left menu.

2. Choose Cluster and Namespace from the drop-down menus.

3. Select the type of Kubernetes entity for which you want to create a policy:

• Service

• Deployment

• DaemonSet

• StatefulSet

• Job

4. Select the timespan, i.e., how far back in time to aggregate the observed communications for the entity. The interface then displays the Ingress/Egress tables for that Kubernetes entity and timeframe.

### Manage Ingress and Egress

The ingress/egress tables detail the observed communications for the selected entity (pod owner) and time period.

Granular and global assignments: You can cherry-pick individual rows to include in or exclude from the policy, or establish general rules using the drop-down global rule options.

Choose Ingress or Egress to review and edit the detected communications:

1. Select the scope as described above.

2. Edit the permitted communications as desired, by either:

• Selecting/deselecting rows of allowed communication, or

• Choosing General Ingress/Egress Rules: Block All, Allow All Inside Namespace, or Allow All.

3. Repeat on the other table, then proceed to check the topology and/or generate the policy.
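Each row you leave checked corresponds, roughly, to one allow rule in the generated policy. For instance, an egress row recording traffic to a `backend` deployment on port 3306 would translate into something like the following fragment (the label and port are illustrative, not taken from the tool's output):

```yaml
egress:
  - to:
      - podSelector:
          matchLabels:
            app: backend     # illustrative label of the destination deployment's pods
    ports:
      - protocol: TCP
        port: 3306           # illustrative destination port
```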

### Use Topology Visualization

Use the Topology view to visually validate if this is the policy you want, or if something should be changed. The topology view is a high-level Kubernetes metadata view: pod owners, listening ports, services, and labels.

Communications that will not be allowed if you decide to apply this policy are color-coded red.

When you are satisfied with the rules and communication lines, click the Generated Policy tab; the policy file is generated instantly.

## Understanding How the Data Is Processed

Aggregation: Communications are aggregated using Kubernetes metadata to avoid having additional entries that are not relevant for the policy creation. For example, if pod A under deployment A communicates several times with pod B under deployment B, only one entry appears in the interface.

Unresolved IPs: For some communications, it may not be possible to resolve one of the endpoints to Kubernetes metadata. For example, if a microservice is communicating with an external web server, that external IP is not associated with any Kubernetes metadata in your cluster. The UI will still display these entities as "unresolved IPs." Unresolved IPs are excluded by default from the Kubernetes network policy, but can be added manually via the ingress/egress interface.
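When you add an unresolved IP manually, it appears in the policy as an `ipBlock` rule rather than a label selector, since there is no Kubernetes metadata to select on. A hedged sketch, using a placeholder address from the documentation range:

```yaml
ingress:
  - from:
      - ipBlock:
          cidr: 203.0.113.10/32   # placeholder unresolved IP, /32 for a single host
```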

Cluster CIDR: Unresolved IPs are listed and categorized as “internal” (inside the cluster), “external” (outside the cluster), or “unknown” (subnet information incomplete). For unknowns, Sysdig prompts with an error message to help you resolve them; see Error Message: Cluster subnet is incomplete. To avoid cluttering the UI, the tool retains a maximum of five unresolved IPs in each category (internal, external, and unknown).

## Sample Use Cases

In all cases, you begin by leaving the application running for at least 12 hours, to allow the agent to collect information.

### Case 1: Only Allow Specified Ingress/Egress Communications

As a developer, you want to create a Kubernetes network policy that only allows your service/deployment to establish ingress and egress network communications that you explicitly allow.

• Select the cluster namespace and deployment for your application.

You should see pre-computed ingress and egress tables. You know the application does not communicate with any external IP for ingress or egress, so you should not see any unresolved IPs. The topology map shows the same information.

• Change a rule: You decide one service your application is communicating with is obsolete. You uncheck that row in the egress table.

• Check the topology map. You will see the communication still exists, but is now drawn in red, meaning that it is forbidden using the current Kubernetes network policy (KNP).

• Check the generated policy code. Verify that it follows your plan:

• No ingress/egress raw IP

• No entry for the service you explicitly excluded

• Verify that your application can only communicate with the services that were marked in black in the topology and checked in the tables. Then generate and download the policy to apply it.

### Case 2: Allow Egress to Static Proxy IPs

As a developer, you know your application uses proxies with a static IP, and you want to configure a policy that allows your application to access them.

• See the proxy IPs in the egress section of the interface.

• Use the Allow Egress to IP mask option to create a manual rule that allows those particular IPs.

• Deselect all the other entries in the ingress and egress tables.

• Looking at the topology map, verify that only the communications to these external IPs are marked in black, while the communications with the other services/deployments are marked in red.
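With every other row deselected, the egress section of the resulting policy would reduce to `ipBlock` rules for the proxies alone. A sketch under assumed values (the subnet and port are placeholders, not real proxy settings):

```yaml
egress:
  - to:
      - ipBlock:
          cidr: 198.51.100.0/24   # placeholder proxy subnet (documentation range)
    ports:
      - protocol: TCP
        port: 3128                # illustrative proxy port
```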

### Case 3: Allow Communication Only Inside the Namespace

You know that your application should only communicate inside the namespace, both for ingress and for egress.

• Allow ingress inside the namespace using the general rules

• Allow egress inside the namespace using the general rules

• Generate the policy and confirm that everything inside the namespace is allowed without nominating a particular service/deployment; then apply it.
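In NetworkPolicy terms, "everything inside the namespace" corresponds to an empty `podSelector` in the `from`/`to` clauses, which matches every pod in the policy's own namespace. A sketch under illustrative names:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace    # illustrative name
  namespace: example-ns         # illustrative namespace
spec:
  podSelector: {}               # or a selector matching only your application's pods
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}       # any pod in the same namespace
  egress:
    - to:
        - podSelector: {}       # any pod in the same namespace
```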

### Case 4: Allow Egress Only to a Different Namespace

Your application deployment A only communicates with applications in deployment B, which lives in a different namespace. You only need that egress traffic; there is no ingress traffic required for that communication.

• Verify that the ingress table is empty, both for Kubernetes entities and for raw IPs

• Verify that the only communication listed on the Egress table is communication with deployment B

• Your application cannot communicate with other entities inside A’s namespace

• The application can contact the cluster DNS server to resolve other entities
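A cross-namespace rule like this combines a `namespaceSelector` with a `podSelector` in the same `to` entry (both must match), plus a separate rule for DNS so name resolution keeps working. The labels below are illustrative assumptions:

```yaml
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            name: namespace-b     # illustrative label on deployment B's namespace
        podSelector:
          matchLabels:
            app: deployment-b     # illustrative label on deployment B's pods
  - ports:                        # no "to" clause: allow DNS lookups anywhere on port 53
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53
```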

### Case 5: Allow Access When a Deployment Has Been Relabeled

As a developer, you want to create a policy that only allows your service/deployment to establish ingress and egress network communications that you explicitly allow, and you need to make a change.

• After leaving the application running for a few hours, you realize you didn't tag all the namespaces involved in this policy

A message at the top of the view will state "you need to assign labels to this namespace".

• Confirm the situation in the different views:

• The generated policy should not have an entry for that communication

• The Topology map should show the connection with a red line

• Attach a label to the namespace that was missing it. After some minutes, a row shows the updated information.

• Whitelist the connection appropriately.
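Once the namespace carries a label, the generated policy can reference it through a `namespaceSelector`. A sketch of the resulting ingress fragment, assuming a hypothetical label:

```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            team: payments    # illustrative: the label you just attached to the namespace
```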

## Troubleshooting

Tips to resolve common error messages:

### Error message: Namespaces without labels

Problem: Namespaces must be labeled for the KNPs to define ingress/egress rules. If non-labeled namespaces are detected in the targeted communications, the "Namespaces without labels" error message is displayed in the UI.

Resolution: Simply assign a label to the relevant namespace and wait a few minutes for the system's auto-detection to catch up.
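Labels can be set declaratively in the Namespace manifest; the name and label below are illustrative, and any key/value pair works:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-ns       # illustrative namespace
  labels:
    name: example-ns     # a "name" label is a common convention
```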

### Error Message: Cluster subnet is incomplete

Problem: To categorize unresolved IPs as inside or outside the cluster, the agent must know which CIDR ranges belong to the cluster. By default, the agent tries to discover the ranges by examining the command line arguments of the kube-apiserver and kube-controller-manager processes.

If it cannot auto-discover the cluster subnets, the "cluster subnet is incomplete" error message is displayed in the UI.

Resolution: Specify the ranges explicitly by appending the following to the agent configmap:

```yaml
network_topology:
  cluster_cidr: <A.B.C.D/MASK>
  service_cidr: <E.F.G.H/MASK>
```
In rare cases, you may need to configure the agent to look for the CIDR ranges in processes other than the default kube-apiserver and kube-controller-manager. In that case, append the following to the agent configmap:

```yaml
network_topology:
  pod_prefix_for_cidr_retrieval: [<PROCESS_NAME>, <PROCESS_NAME>]
```