Secure Overview [BETA]

The Secure Overview page provides an entry point to Sysdig Secure and a bird's-eye view of your assets and their status.

secure_overview.png

Chart Highlights

The Overview page displays pass/fail results over time, to a maximum of 90 days.

Broken lines in the trend chart indicate periods for which no data was available.

Definitions

  • Build-time images: All the images that have been evaluated by Sysdig Secure.

  • Runtime images: All the images used by running containers in the past few hours.

  • Policy Events: The security events generated as a result of policies.

  • Benchmarks: Docker/Kubernetes CIS benchmark check results.

Scope

Panels can be scoped by Cluster or Namespace. The scope updates all panels that display runtime data, as well as the corresponding drill-down views.

The panels are affected in the following ways by the scope:

  • Build Time - Images Scanned and Build Time - CVEs Found by Severity (OS and Non-OS):

    Not impacted by this filter.

    When filtered by cluster, a small info icon appears on the build-time panels indicating that the results are independent of the cluster.

  • All other panels get filtered by cluster/namespace (filters both instant data and trend chart).

  • Benchmarks panel: cannot be filtered by namespace.

    When namespace is selected, it will still show the cluster’s data and a small info icon appears on the panel showing the results are independent of namespace.

  • Namespace: disabled when a non-Kubernetes cluster is selected.

  • Selecting "Non-k8s" as the cluster will show all results that are running outside the scope of a Kubernetes cluster.

Panel Details

The graphs display pass/fail results over time, to a maximum of 90 days. Note that if you have less data (e.g., two days), then only two days will be shown.

Build Time - Images Scanned

Shows the pass/fail status of all the images analyzed by Sysdig Secure.

Donut: shows past 24 hours

overview_btis.png
Table 5. Data Collection Details

  • Duration: Last 24 hours of data.

  • Process: Data is collected and aggregated every 6 hours.

    Example: Suppose the last computation happened at 10 AM and the counts were 6 pass, 2 fail. Two new images are added at 12 PM (status = pass). The panel count is updated at 4 PM to 8 pass, 2 fail.

  • Drill-Down: Reports page. Shows all the images that were added.

    In this example, if the user drills down at 10 AM, the Reports page shows 6 pass, 2 fail. At 12 PM, the Reports page shows 8 pass, 2 fail (which may not match the Overview data). At 4 PM, both the Reports and Overview pages show 8 pass, 2 fail.
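The 6-hour aggregation cadence in the example above can be sketched as follows (a minimal illustration; the `pass`/`fail` labels stand in for scan statuses and are not the product's API):

```python
from collections import Counter

# Scan results known at the 10 AM aggregation run: 6 pass, 2 fail.
known_at_10am = ["pass"] * 6 + ["fail"] * 2

# Two new images pass at 12 PM, but the Overview panel only refreshes
# at the next 6-hour aggregation run (4 PM), so they are not counted yet.
added_at_noon = ["pass", "pass"]

panel_10am = Counter(known_at_10am)
assert (panel_10am["pass"], panel_10am["fail"]) == (6, 2)

# At 4 PM, the next aggregation run picks up the noon images.
panel_4pm = Counter(known_at_10am + added_at_noon)
assert (panel_4pm["pass"], panel_4pm["fail"]) == (8, 2)
```

The Reports page, by contrast, reads live data, which is why its counts can briefly run ahead of the Overview panel between aggregation runs.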



Runtime - Images Scanned

Shows the pass/fail status of all runtime images scanned across clusters for the past 1 hour.

Donut: shows last 1-hour snapshot of data

overview_ris.png
Table 6. Data Collection Details

  • Duration: Last 1-hour snapshot.

  • Process: Shows the runtime images across clusters for the last 1 hour.

    Example: Suppose the last computation happened at 10 AM and the counts were 6 pass, 2 fail, 1 unscanned. Three new runtime images were added at 12 PM (2 fail, 1 unscanned). The panel count is updated at 4 PM and shows 6 pass, 4 fail, 2 unscanned.

  • Drill-Down: Runtime Scanning Image page.

    Note: Although the count usually matches between the Overview panel and the Runtime Scanning page, it may not always match. Reason: the Overview runtime panel aggregates the last hour of data as a sliding window (for example, 10:30 - 11:30), while the Runtime Scanning page shows the snapshot for the last completed clock hour (10:00 - 11:00).
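The window mismatch described in the note can be made concrete (a sketch with illustrative timestamps; the exact window boundaries are assumptions based on the example above):

```python
from datetime import datetime, timedelta

now = datetime(2023, 1, 1, 11, 30)  # an arbitrary "current" time

# Overview runtime panel: a sliding one-hour window ending now (10:30 - 11:30).
sliding_start = now - timedelta(hours=1)

# Runtime Scanning page: the last completed clock hour (10:00 - 11:00).
aligned_end = now.replace(minute=0, second=0, microsecond=0)
aligned_start = aligned_end - timedelta(hours=1)

# The two "last hour" windows overlap but are not identical,
# so the image counts they produce can differ.
assert sliding_start == datetime(2023, 1, 1, 10, 30)
assert aligned_start == datetime(2023, 1, 1, 10, 0)
assert aligned_end == datetime(2023, 1, 1, 11, 0)
```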



Runtime - Policy Events by Severity

Shows the events in Sysdig Secure over the past 24 hours, sorted by high, medium, low, and info severity levels.

Donut: shows past 24 hours of data

overview_rpes.png
Table 7. Data Collection Details

  • Duration: Last 24 hours of data.

  • Process: Data is collected and aggregated every 6 hours.

    Example: Suppose the last computation happened at 10 AM and the counts were 10 high, 4 medium, 7 low, 2 info. Four new events were triggered at 12 PM (2 high, 2 info). The panel count is updated at 4 PM and shows 12 high, 4 medium, 7 low, 4 info.

  • Drill-Down: Events page.

    Note: The Events page shows all events that were triggered. In this example, if the user drills down at 10 AM, the Events page shows 10 high, 4 medium, 7 low, 2 info for the last day. At 12 PM, the Events page shows 12 high, 4 medium, 7 low, 4 info (which may not match the Overview data). At 4 PM, both the Overview and Events pages show 12 high, 4 medium, 7 low, 4 info.



Build Time - CVEs Found by Severity (OS and non-OS)

Shows the Common Vulnerabilities and Exposures (CVEs) detected over the past 24 hours, sorted by critical, high, medium, and low severity levels.

Donut: shows last 24 hours of data

overview_bt_cves.png
Table 8. Data Collection Details

  • Duration: Past 24 hours of data.

  • Process: Data is collected and aggregated every 6 hours.

    Example: Suppose the last computation happened at 10 AM and the counts were 10 critical, 4 high, 7 medium, 2 low. Two new images with vulnerabilities were added at 12 PM (OS vulnerabilities: 2 high, 2 low; non-OS vulnerabilities: 3 critical, 1 high). The panel count is updated at 4 PM and shows 13 critical, 7 high, 7 medium, 4 low.

  • Drill-Down: No drill-down yet; to be added.



Runtime - CVEs Found by Severity (OS and non-OS)

Shows the Common Vulnerabilities and Exposures detected for runtime images across clusters for the last 1 hour.

Donut: shows 1-hour snapshot

Overview_rt_cves.png
Table 9. Data Collection Details

  • Duration: Last 1-hour snapshot.

  • Process: Shows CVEs for runtime images across clusters for the last 1 hour.

    Example: Suppose the last computation happened at 10 AM and the counts were 10 critical, 4 high, 7 medium, 2 low. Two new images with vulnerabilities were added at 12 PM (OS vulnerabilities: 2 high, 2 low; non-OS vulnerabilities: 3 critical, 1 high). The panel count is updated at 4 PM and shows 13 critical, 7 high, 7 medium, 4 low.

  • Drill-Down: No drill-down yet; to be added.



Runtime - Benchmark Tests Failed

Shows the average of failed benchmark results across hosts, sorted by test type.

Donut: shows the average of the latest failed benchmark results across hosts

overview_rt_bench.png
Table 10. Data Collection Details

  • Duration: The latest benchmark result per host for each test type.

  • Process: Average of failed benchmark results across hosts.

    Example: Suppose there are 3 Kubernetes hosts and the latest Kubernetes benchmark results on those hosts were h1 = 23 fail, h2 = 24 fail, h3 = 24 fail. The Overview page will show the Kubernetes benchmark result as 24 (the average across hosts, rounded).

  • Drill-Down: Benchmarks Results page.
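The averaging in the example works out as follows (a minimal sketch; the host names come from the example above):

```python
# Latest Kubernetes benchmark failure counts per host, from the example.
fails_per_host = {"h1": 23, "h2": 24, "h3": 24}

# The panel shows the average across hosts, rounded to the nearest integer:
# (23 + 24 + 24) / 3 = 23.67, which rounds to 24.
average = round(sum(fails_per_host.values()) / len(fails_per_host))
assert average == 24
```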