Host

sysdig_host_container_count

|Prometheus ID|sysdig_host_container_count|
|---|---|
|Legacy ID|container.count|
|Metric Type|gauge|
|Unit|number|
|Description|Count of the number of containers.|
|Additional Notes|This metric is perfect for dashboards and alerts. In particular, you can create alerts that notify you when you have too many (or too few) containers of a certain type in a certain group or node - try segmenting by container.image, .id or .name. See also: host.count.|
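
For the alerting use case described in the notes, a minimal PromQL-style sketch is shown below; the threshold and the `host_hostname` label name are illustrative assumptions, not values taken from this documentation:

```promql
# Sketch: fire when a host is running fewer than 3 containers.
# "host_hostname" is an assumed label name and 3 is an arbitrary
# threshold - adjust both to your environment.
sum by (host_hostname) (sysdig_host_container_count) < 3
```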

sysdig_host_container_start_count

|Prometheus ID|sysdig_host_container_start_count|
|---|---|
|Legacy ID|host.container.start.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_count

|Prometheus ID|sysdig_host_count|
|---|---|
|Legacy ID|host.count|
|Metric Type|gauge|
|Unit|number|
|Description|Count of the number of hosts.|
|Additional Notes|This metric is perfect for dashboards and alerts. In particular, you can create alerts that notify you when you have too many (or too few) machines of a certain type in a certain group - try segmenting by tag or hostname. See also: container.count.|

sysdig_host_cpu_cores_used

|Prometheus ID|sysdig_host_cpu_cores_used|
|---|---|
|Legacy ID|cpu.cores.used|
|Metric Type|gauge|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_cpu_cores_used_percent

|Prometheus ID|sysdig_host_cpu_cores_used_percent|
|---|---|
|Legacy ID|cpu.cores.used.percent|
|Metric Type|gauge|
|Unit|percent|
|Description| |
|Additional Notes| |

sysdig_host_cpu_idle_percent

|Prometheus ID|sysdig_host_cpu_idle_percent|
|---|---|
|Legacy ID|cpu.idle.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request.|
|Additional Notes|By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, the metric can also be segmented by using ‘Segment by’ in the UI.|
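
To make the default-average versus segmented behaviour concrete, here is a hedged PromQL-style sketch (the `host_hostname` label name is an assumption):

```promql
# Scope-wide average idle CPU (one series for the whole group).
avg(sysdig_host_cpu_idle_percent)

# The same metric broken out per host; "host_hostname" is an assumed
# label name - substitute whatever host label your environment exposes.
avg by (host_hostname) (sysdig_host_cpu_idle_percent)
```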

sysdig_host_cpu_iowait_percent

|Prometheus ID|sysdig_host_cpu_iowait_percent|
|---|---|
|Legacy ID|cpu.iowait.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request.|
|Additional Notes|By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, the metric can also be segmented by using ‘Segment by’ in the UI.|

sysdig_host_cpu_nice_percent

|Prometheus ID|sysdig_host_cpu_nice_percent|
|---|---|
|Legacy ID|cpu.nice.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Percentage of CPU utilization that occurred while executing at the user level with nice priority.|
|Additional Notes|By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, the metric can also be segmented by using ‘Segment by’ in the UI.|

sysdig_host_cpu_stolen_percent

|Prometheus ID|sysdig_host_cpu_stolen_percent|
|---|---|
|Legacy ID|cpu.stolen.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|CPU steal time is the percentage of time that a virtual machine’s CPU is in a state of involuntary wait because the physical CPU is shared among virtual machines. In calculating steal time, the operating system kernel detects when it has work available but cannot access the physical CPU to perform that work.|
|Additional Notes|If the percentage of steal time is consistently high, you may want to stop and restart the instance (since it will most likely start on different physical hardware) or upgrade to a virtual machine with more CPU power. Also see the metric ‘capacity total percent’ for how steal time directly impacts the number of server requests that could not be handled. On AWS EC2, steal time does not depend on the activity of other virtual machine neighbours; EC2 is simply making sure your instance is not using more CPU cycles than paid for.|

sysdig_host_cpu_system_percent

|Prometheus ID|sysdig_host_cpu_system_percent|
|---|---|
|Legacy ID|cpu.system.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Percentage of CPU utilization that occurred while executing at the system level (kernel).|
|Additional Notes|By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, the metric can also be segmented by using ‘Segment by’ in the UI.|

sysdig_host_cpu_used_percent

|Prometheus ID|sysdig_host_cpu_used_percent|
|---|---|
|Legacy ID|cpu.used.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|The CPU usage for each container is obtained from cgroups, and normalized by dividing by the number of cores to determine an overall percentage. For example, if the environment contains six cores on a host, and the container or processes are assigned two cores, Sysdig will report CPU usage of 2/6 * 100% = 33.33%. This metric is calculated differently for hosts and processes.|
|Additional Notes| |

sysdig_host_cpu_user_percent

|Prometheus ID|sysdig_host_cpu_user_percent|
|---|---|
|Legacy ID|cpu.user.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Percentage of CPU utilization that occurred while executing at the user level (application).|
|Additional Notes|By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, the metric can also be segmented by using ‘Segment by’ in the UI.|

sysdig_host_cpucore_idle_percent

|Prometheus ID|sysdig_host_cpucore_idle_percent|
|---|---|
|Legacy ID|cpucore.idle.percent|
|Metric Type|gauge|
|Unit|percent|
|Description| |
|Additional Notes| |

sysdig_host_cpucore_iowait_percent

|Prometheus ID|sysdig_host_cpucore_iowait_percent|
|---|---|
|Legacy ID|cpucore.iowait.percent|
|Metric Type|gauge|
|Unit|percent|
|Description| |
|Additional Notes| |

sysdig_host_cpucore_nice_percent

|Prometheus ID|sysdig_host_cpucore_nice_percent|
|---|---|
|Legacy ID|cpucore.nice.percent|
|Metric Type|gauge|
|Unit|percent|
|Description| |
|Additional Notes| |

sysdig_host_cpucore_stolen_percent

|Prometheus ID|sysdig_host_cpucore_stolen_percent|
|---|---|
|Legacy ID|cpucore.stolen.percent|
|Metric Type|gauge|
|Unit|percent|
|Description| |
|Additional Notes| |

sysdig_host_cpucore_system_percent

|Prometheus ID|sysdig_host_cpucore_system_percent|
|---|---|
|Legacy ID|cpucore.system.percent|
|Metric Type|gauge|
|Unit|percent|
|Description| |
|Additional Notes| |

sysdig_host_cpucore_used_percent

|Prometheus ID|sysdig_host_cpucore_used_percent|
|---|---|
|Legacy ID|cpucore.used.percent|
|Metric Type|gauge|
|Unit|percent|
|Description| |
|Additional Notes| |

sysdig_host_cpucore_user_percent

|Prometheus ID|sysdig_host_cpucore_user_percent|
|---|---|
|Legacy ID|cpucore.user.percent|
|Metric Type|gauge|
|Unit|percent|
|Description| |
|Additional Notes| |

sysdig_host_fd_used_percent

|Prometheus ID|sysdig_host_fd_used_percent|
|---|---|
|Legacy ID|fd.used.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Percentage of used file descriptors out of the maximum available.|
|Additional Notes|Usually, when a process reaches its FD limit it will stop operating properly and possibly crash. As a consequence, this is a metric you want to monitor carefully, or, better yet, use for alerts.|
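
Because file descriptor exhaustion usually breaks a process outright, a simple threshold alert is the typical use. A minimal sketch, with an illustrative threshold:

```promql
# Sketch: flag any host using more than 90% of its available file
# descriptors. The 90% threshold is an example, not a documented value.
sysdig_host_fd_used_percent > 90
```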

sysdig_host_file_error_open_count

|Prometheus ID|sysdig_host_file_error_open_count|
|---|---|
|Legacy ID|file.error.open.count|
|Metric Type|counter|
|Unit|number|
|Description|Number of errors in opening files.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_file_error_total_count

|Prometheus ID|sysdig_host_file_error_total_count|
|---|---|
|Legacy ID|file.error.total.count|
|Metric Type|counter|
|Unit|number|
|Description|Number of errors caused by file access.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_file_in_bytes

|Prometheus ID|sysdig_host_file_in_bytes|
|---|---|
|Legacy ID|file.bytes.in|
|Metric Type|counter|
|Unit|data|
|Description|Amount of bytes read from file.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_file_in_iops

|Prometheus ID|sysdig_host_file_in_iops|
|---|---|
|Legacy ID|file.iops.in|
|Metric Type|counter|
|Unit|number|
|Description|Number of file read operations per second.|
|Additional Notes|This is calculated by measuring the actual number of read and write requests made by a process. Therefore, it can differ from what other tools show, which is usually based on interpolating this value from the number of bytes read and written to the file system.|

sysdig_host_file_in_time

|Prometheus ID|sysdig_host_file_in_time|
|---|---|
|Legacy ID|file.time.in|
|Metric Type|counter|
|Unit|time|
|Description|Time spent in file reading.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_file_open_count

|Prometheus ID|sysdig_host_file_open_count|
|---|---|
|Legacy ID|file.open.count|
|Metric Type|counter|
|Unit|number|
|Description|Number of times the file has been opened.|
|Additional Notes| |

sysdig_host_file_out_bytes

|Prometheus ID|sysdig_host_file_out_bytes|
|---|---|
|Legacy ID|file.bytes.out|
|Metric Type|counter|
|Unit|data|
|Description|Amount of bytes written to file.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_file_out_iops

|Prometheus ID|sysdig_host_file_out_iops|
|---|---|
|Legacy ID|file.iops.out|
|Metric Type|counter|
|Unit|number|
|Description|Number of file write operations per second.|
|Additional Notes|This is calculated by measuring the actual number of read and write requests made by a process. Therefore, it can differ from what other tools show, which is usually based on interpolating this value from the number of bytes read and written to the file system.|

sysdig_host_file_out_time

|Prometheus ID|sysdig_host_file_out_time|
|---|---|
|Legacy ID|file.time.out|
|Metric Type|counter|
|Unit|time|
|Description|Time spent in file writing.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_file_total_bytes

|Prometheus ID|sysdig_host_file_total_bytes|
|---|---|
|Legacy ID|file.bytes.total|
|Metric Type|counter|
|Unit|data|
|Description|Amount of bytes read from and written to file.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_file_total_iops

|Prometheus ID|sysdig_host_file_total_iops|
|---|---|
|Legacy ID|file.iops.total|
|Metric Type|counter|
|Unit|number|
|Description|Number of read and write file operations per second.|
|Additional Notes|This is calculated by measuring the actual number of read and write requests made by a process. Therefore, it can differ from what other tools show, which is usually based on interpolating this value from the number of bytes read and written to the file system.|

sysdig_host_file_total_time

|Prometheus ID|sysdig_host_file_total_time|
|---|---|
|Legacy ID|file.time.total|
|Metric Type|counter|
|Unit|time|
|Description|Time spent in file I/O.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_fs_free_bytes

|Prometheus ID|sysdig_host_fs_free_bytes|
|---|---|
|Legacy ID|fs.bytes.free|
|Metric Type|gauge|
|Unit|data|
|Description|Filesystem available space.|
|Additional Notes| |

sysdig_host_fs_free_percent

|Prometheus ID|sysdig_host_fs_free_percent|
|---|---|
|Legacy ID|fs.free.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Percentage of filesystem free space.|
|Additional Notes| |

sysdig_host_fs_inodes_total_count

|Prometheus ID|sysdig_host_fs_inodes_total_count|
|---|---|
|Legacy ID|fs.inodes.total.count|
|Metric Type|gauge|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_fs_inodes_used_count

|Prometheus ID|sysdig_host_fs_inodes_used_count|
|---|---|
|Legacy ID|fs.inodes.used.count|
|Metric Type|gauge|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_fs_inodes_used_percent

|Prometheus ID|sysdig_host_fs_inodes_used_percent|
|---|---|
|Legacy ID|fs.inodes.used.percent|
|Metric Type|gauge|
|Unit|percent|
|Description| |
|Additional Notes| |

sysdig_host_fs_largest_used_percent

|Prometheus ID|sysdig_host_fs_largest_used_percent|
|---|---|
|Legacy ID|fs.largest.used.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Percentage of the largest filesystem in use.|
|Additional Notes| |

sysdig_host_fs_root_used_percent

|Prometheus ID|sysdig_host_fs_root_used_percent|
|---|---|
|Legacy ID|fs.root.used.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Percentage of the root filesystem in use.|
|Additional Notes| |

sysdig_host_fs_total_bytes

|Prometheus ID|sysdig_host_fs_total_bytes|
|---|---|
|Legacy ID|fs.bytes.total|
|Metric Type|gauge|
|Unit|data|
|Description|Filesystem size.|
|Additional Notes| |

sysdig_host_fs_used_bytes

|Prometheus ID|sysdig_host_fs_used_bytes|
|---|---|
|Legacy ID|fs.bytes.used|
|Metric Type|gauge|
|Unit|data|
|Description|Filesystem used space.|
|Additional Notes| |

sysdig_host_fs_used_percent

|Prometheus ID|sysdig_host_fs_used_percent|
|---|---|
|Legacy ID|fs.used.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Percentage of the sum of all filesystems in use.|
|Additional Notes| |

sysdig_host_info

|Prometheus ID|sysdig_host_info|
|---|---|
|Legacy ID|info|
|Metric Type|gauge|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_load_average_15m

|Prometheus ID|sysdig_host_load_average_15m|
|---|---|
|Legacy ID|load.average.15m|
|Metric Type|gauge|
|Unit|number|
|Description|The 15 minute system load average represents the average number of jobs in (1) the CPU run queue or (2) waiting for disk I/O, averaged over 15 minutes for all cores. The value should correspond to the third (and last) load average value displayed by the ‘uptime’ command.|
|Additional Notes| |

sysdig_host_load_average_1m

|Prometheus ID|sysdig_host_load_average_1m|
|---|---|
|Legacy ID|load.average.1m|
|Metric Type|gauge|
|Unit|number|
|Description|The 1 minute system load average represents the average number of jobs in (1) the CPU run queue or (2) waiting for disk I/O, averaged over 1 minute for all cores. The value should correspond to the first (of three) load average values displayed by the ‘uptime’ command.|
|Additional Notes| |

sysdig_host_load_average_5m

|Prometheus ID|sysdig_host_load_average_5m|
|---|---|
|Legacy ID|load.average.5m|
|Metric Type|gauge|
|Unit|number|
|Description|The 5 minute system load average represents the average number of jobs in (1) the CPU run queue or (2) waiting for disk I/O, averaged over 5 minutes for all cores. The value should correspond to the second (of three) load average values displayed by the ‘uptime’ command.|
|Additional Notes| |

sysdig_host_load_average_percpu_15m

|Prometheus ID|sysdig_host_load_average_percpu_15m|
|---|---|
|Legacy ID|load.average.percpu.15m|
|Metric Type|gauge|
|Unit|number|
|Description|The 15 minute system load average represents the average number of jobs in (1) the CPU run queue or (2) waiting for disk I/O, averaged over 15 minutes, divided by the number of system CPUs.|
|Additional Notes| |

sysdig_host_load_average_percpu_1m

|Prometheus ID|sysdig_host_load_average_percpu_1m|
|---|---|
|Legacy ID|load.average.percpu.1m|
|Metric Type|gauge|
|Unit|number|
|Description|The 1 minute system load average represents the average number of jobs in (1) the CPU run queue or (2) waiting for disk I/O, averaged over 1 minute, divided by the number of system CPUs.|
|Additional Notes| |

sysdig_host_load_average_percpu_5m

|Prometheus ID|sysdig_host_load_average_percpu_5m|
|---|---|
|Legacy ID|load.average.percpu.5m|
|Metric Type|gauge|
|Unit|number|
|Description|The 5 minute system load average represents the average number of jobs in (1) the CPU run queue or (2) waiting for disk I/O, averaged over 5 minutes, divided by the number of system CPUs.|
|Additional Notes| |

sysdig_host_memory_available_bytes

|Prometheus ID|sysdig_host_memory_available_bytes|
|---|---|
|Legacy ID|memory.bytes.available|
|Metric Type|gauge|
|Unit|data|
|Description|The available memory for a host is obtained from /proc/meminfo. For environments using Linux kernel version 3.12 and later, the available memory is obtained using the MemAvailable field in /proc/meminfo. For environments using earlier kernel versions, the formula is MemFree + Cached + Buffers.|
|Additional Notes| |

sysdig_host_memory_swap_available_bytes

|Prometheus ID|sysdig_host_memory_swap_available_bytes|
|---|---|
|Legacy ID|memory.swap.bytes.available|
|Metric Type|gauge|
|Unit|data|
|Description|Available amount of swap memory.|
|Additional Notes|Sum of free and cached swap memory. By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, the metric can also be segmented by using ‘Segment by’ in the UI.|

sysdig_host_memory_swap_total_bytes

|Prometheus ID|sysdig_host_memory_swap_total_bytes|
|---|---|
|Legacy ID|memory.swap.bytes.total|
|Metric Type|gauge|
|Unit|data|
|Description|Total amount of swap memory.|
|Additional Notes|By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, the metric can also be segmented by using ‘Segment by’ in the UI.|

sysdig_host_memory_swap_used_bytes

|Prometheus ID|sysdig_host_memory_swap_used_bytes|
|---|---|
|Legacy ID|memory.swap.bytes.used|
|Metric Type|gauge|
|Unit|data|
|Description|Used amount of swap memory.|
|Additional Notes|The amount of used swap memory is calculated by subtracting available from total swap memory. By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, the metric can also be segmented by using ‘Segment by’ in the UI.|

sysdig_host_memory_swap_used_percent

|Prometheus ID|sysdig_host_memory_swap_used_percent|
|---|---|
|Legacy ID|memory.swap.used.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|Used percent of swap memory.|
|Additional Notes|The percentage of used swap memory is calculated as the ratio of used to total swap memory, expressed as a percentage. By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, the metric can also be segmented by using ‘Segment by’ in the UI.|
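
The ratio described in the notes can also be reproduced directly from the byte-level swap metrics; a sketch, assuming the two series carry matching labels:

```promql
# Used swap as a percentage of total swap, derived from the byte metrics.
# This should track sysdig_host_memory_swap_used_percent.
100 * sysdig_host_memory_swap_used_bytes / sysdig_host_memory_swap_total_bytes
```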

sysdig_host_memory_total_bytes

|Prometheus ID|sysdig_host_memory_total_bytes|
|---|---|
|Legacy ID|memory.bytes.total|
|Metric Type|gauge|
|Unit|data|
|Description|The total memory of a host, in bytes. This value is obtained from /proc.|
|Additional Notes| |

sysdig_host_memory_used_bytes

|Prometheus ID|sysdig_host_memory_used_bytes|
|---|---|
|Legacy ID|memory.bytes.used|
|Metric Type|gauge|
|Unit|data|
|Description|The amount of physical memory currently in use.|
|Additional Notes|By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, the metric can also be segmented by using ‘Segment by’ in the UI.|

sysdig_host_memory_used_percent

|Prometheus ID|sysdig_host_memory_used_percent|
|---|---|
|Legacy ID|memory.used.percent|
|Metric Type|gauge|
|Unit|percent|
|Description|The percentage of physical memory in use.|
|Additional Notes|By default, this metric shows the average value for the selected scope. For instance, if you apply it to a group of machines, you will see the average value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_memory_virtual_bytes

|Prometheus ID|sysdig_host_memory_virtual_bytes|
|---|---|
|Legacy ID|memory.bytes.virtual|
|Metric Type|gauge|
|Unit|data|
|Description|The virtual memory size of the process, in bytes. This value is obtained from Sysdig events.|
|Additional Notes| |

sysdig_host_net_connection_in_count

|Prometheus ID|sysdig_host_net_connection_in_count|
|---|---|
|Legacy ID|net.connection.count.in|
|Metric Type|counter|
|Unit|number|
|Description|Number of currently established client (inbound) connections.|
|Additional Notes|This metric is especially useful when segmented by protocol, port or process.|

sysdig_host_net_connection_out_count

|Prometheus ID|sysdig_host_net_connection_out_count|
|---|---|
|Legacy ID|net.connection.count.out|
|Metric Type|counter|
|Unit|number|
|Description|Number of currently established server (outbound) connections.|
|Additional Notes|This metric is especially useful when segmented by protocol, port or process.|

sysdig_host_net_connection_total_count

|Prometheus ID|sysdig_host_net_connection_total_count|
|---|---|
|Legacy ID|net.connection.count.total|
|Metric Type|counter|
|Unit|number|
|Description|Number of currently established connections. This value may exceed the sum of the inbound and outbound metrics since it represents client and server inter-host connections as well as internal only connections.|
|Additional Notes|This metric is especially useful when segmented by protocol, port or process.|

sysdig_host_net_error_count

|Prometheus ID|sysdig_host_net_error_count|
|---|---|
|Legacy ID|net.error.count|
|Metric Type|counter|
|Unit|number|
|Description|Number of network errors.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_net_http_error_count

|Prometheus ID|sysdig_host_net_http_error_count|
|---|---|
|Legacy ID|net.http.error.count|
|Metric Type|counter|
|Unit|number|
|Description|Number of failed HTTP requests as counted from 4xx/5xx status codes.|
|Additional Notes| |

sysdig_host_net_http_request_count

|Prometheus ID|sysdig_host_net_http_request_count|
|---|---|
|Legacy ID|net.http.request.count|
|Metric Type|counter|
|Unit|number|
|Description|Count of HTTP requests.|
|Additional Notes| |

sysdig_host_net_http_request_time

|Prometheus ID|sysdig_host_net_http_request_time|
|---|---|
|Legacy ID|net.http.request.time|
|Metric Type|counter|
|Unit|time|
|Description|Average time for HTTP requests.|
|Additional Notes| |

sysdig_host_net_http_statuscode_error_count

|Prometheus ID|sysdig_host_net_http_statuscode_error_count|
|---|---|
|Legacy ID|net.http.statuscode.error.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_http_statuscode_request_count

|Prometheus ID|sysdig_host_net_http_statuscode_request_count|
|---|---|
|Legacy ID|net.http.statuscode.request.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_http_url_error_count

|Prometheus ID|sysdig_host_net_http_url_error_count|
|---|---|
|Legacy ID|net.http.url.error.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_http_url_request_count

|Prometheus ID|sysdig_host_net_http_url_request_count|
|---|---|
|Legacy ID|net.http.url.request.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_http_url_request_time

|Prometheus ID|sysdig_host_net_http_url_request_time|
|---|---|
|Legacy ID|net.http.url.request.time|
|Metric Type|counter|
|Unit|time|
|Description| |
|Additional Notes| |

sysdig_host_net_mongodb_collection_error_count

|Prometheus ID|sysdig_host_net_mongodb_collection_error_count|
|---|---|
|Legacy ID|net.mongodb.collection.error.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_mongodb_collection_request_count

|Prometheus ID|sysdig_host_net_mongodb_collection_request_count|
|---|---|
|Legacy ID|net.mongodb.collection.request.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_mongodb_collection_request_time

|Prometheus ID|sysdig_host_net_mongodb_collection_request_time|
|---|---|
|Legacy ID|net.mongodb.collection.request.time|
|Metric Type|counter|
|Unit|time|
|Description| |
|Additional Notes| |

sysdig_host_net_mongodb_error_count

|Prometheus ID|sysdig_host_net_mongodb_error_count|
|---|---|
|Legacy ID|net.mongodb.error.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_mongodb_operation_error_count

|Prometheus ID|sysdig_host_net_mongodb_operation_error_count|
|---|---|
|Legacy ID|net.mongodb.operation.error.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_mongodb_operation_request_count

|Prometheus ID|sysdig_host_net_mongodb_operation_request_count|
|---|---|
|Legacy ID|net.mongodb.operation.request.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_mongodb_operation_request_time

|Prometheus ID|sysdig_host_net_mongodb_operation_request_time|
|---|---|
|Legacy ID|net.mongodb.operation.request.time|
|Metric Type|counter|
|Unit|time|
|Description| |
|Additional Notes| |

sysdig_host_net_mongodb_request_count

|Prometheus ID|sysdig_host_net_mongodb_request_count|
|---|---|
|Legacy ID|net.mongodb.request.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_mongodb_request_time

|Prometheus ID|sysdig_host_net_mongodb_request_time|
|---|---|
|Legacy ID|net.mongodb.request.time|
|Metric Type|counter|
|Unit|time|
|Description| |
|Additional Notes| |

sysdig_host_net_in_bytes

|Prometheus ID|sysdig_host_net_in_bytes|
|---|---|
|Legacy ID|net.bytes.in|
|Metric Type|counter|
|Unit|data|
|Description|Inbound network bytes.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_net_out_bytes

|Prometheus ID|sysdig_host_net_out_bytes|
|---|---|
|Legacy ID|net.bytes.out|
|Metric Type|counter|
|Unit|data|
|Description|Outbound network bytes.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_net_request_count

|Prometheus ID|sysdig_host_net_request_count|
|---|---|
|Legacy ID|net.request.count|
|Metric Type|counter|
|Unit|number|
|Description|Total number of network requests. Note that this value may exceed the sum of inbound and outbound requests, because this count includes requests over internal connections.|
|Additional Notes| |

sysdig_host_net_request_in_count

|Prometheus ID|sysdig_host_net_request_in_count|
|---|---|
|Legacy ID|net.request.count.in|
|Metric Type|counter|
|Unit|number|
|Description|Number of inbound network requests.|
|Additional Notes| |

sysdig_host_net_request_in_time

|Prometheus ID|sysdig_host_net_request_in_time|
|---|---|
|Legacy ID|net.request.time.in|
|Metric Type|counter|
|Unit|time|
|Description|Average time to serve an inbound request.|
|Additional Notes| |

sysdig_host_net_request_out_count

|Prometheus ID|sysdig_host_net_request_out_count|
|---|---|
|Legacy ID|net.request.count.out|
|Metric Type|counter|
|Unit|number|
|Description|Number of outbound network requests.|
|Additional Notes| |

sysdig_host_net_request_out_time

|Prometheus ID|sysdig_host_net_request_out_time|
|---|---|
|Legacy ID|net.request.time.out|
|Metric Type|counter|
|Unit|time|
|Description|Average time spent waiting for an outbound request.|
|Additional Notes| |

sysdig_host_net_request_time

|Prometheus ID|sysdig_host_net_request_time|
|---|---|
|Legacy ID|net.request.time|
|Metric Type|counter|
|Unit|time|
|Description|Average time to serve a network request.|
|Additional Notes| |

sysdig_host_net_server_connection_in_count

|Prometheus ID|sysdig_host_net_server_connection_in_count|
|---|---|
|Legacy ID|net.server.connection.count.in|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_server_in_bytes

|Prometheus ID|sysdig_host_net_server_in_bytes|
|---|---|
|Legacy ID|net.server.bytes.in|
|Metric Type|counter|
|Unit|data|
|Description| |
|Additional Notes| |

sysdig_host_net_server_out_bytes

|Prometheus ID|sysdig_host_net_server_out_bytes|
|---|---|
|Legacy ID|net.server.bytes.out|
|Metric Type|counter|
|Unit|data|
|Description| |
|Additional Notes| |

sysdig_host_net_server_total_bytes

|Prometheus ID|sysdig_host_net_server_total_bytes|
|---|---|
|Legacy ID|net.server.bytes.total|
|Metric Type|counter|
|Unit|data|
|Description| |
|Additional Notes| |

sysdig_host_net_sql_error_count

|Prometheus ID|sysdig_host_net_sql_error_count|
|---|---|
|Legacy ID|net.sql.error.count|
|Metric Type|counter|
|Unit|number|
|Description|Number of failed SQL requests.|
|Additional Notes| |

sysdig_host_net_sql_query_error_count

|Prometheus ID|sysdig_host_net_sql_query_error_count|
|---|---|
|Legacy ID|net.sql.query.error.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_sql_query_request_count

|Prometheus ID|sysdig_host_net_sql_query_request_count|
|---|---|
|Legacy ID|net.sql.query.request.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_sql_query_request_time

|Prometheus ID|sysdig_host_net_sql_query_request_time|
|---|---|
|Legacy ID|net.sql.query.request.time|
|Metric Type|counter|
|Unit|time|
|Description| |
|Additional Notes| |

sysdig_host_net_sql_querytype_error_count

|Prometheus ID|sysdig_host_net_sql_querytype_error_count|
|---|---|
|Legacy ID|net.sql.querytype.error.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_sql_querytype_request_count

|Prometheus ID|sysdig_host_net_sql_querytype_request_count|
|---|---|
|Legacy ID|net.sql.querytype.request.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_sql_querytype_request_time

|Prometheus ID|sysdig_host_net_sql_querytype_request_time|
|---|---|
|Legacy ID|net.sql.querytype.request.time|
|Metric Type|counter|
|Unit|time|
|Description| |
|Additional Notes| |

sysdig_host_net_sql_request_count

|Prometheus ID|sysdig_host_net_sql_request_count|
|---|---|
|Legacy ID|net.sql.request.count|
|Metric Type|counter|
|Unit|number|
|Description|Number of SQL requests.|
|Additional Notes| |

sysdig_host_net_sql_request_time

|Prometheus ID|sysdig_host_net_sql_request_time|
|---|---|
|Legacy ID|net.sql.request.time|
|Metric Type|counter|
|Unit|time|
|Description|Average time to complete a SQL request.|
|Additional Notes| |

sysdig_host_net_sql_table_error_count

|Prometheus ID|sysdig_host_net_sql_table_error_count|
|---|---|
|Legacy ID|net.sql.table.error.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_sql_table_request_count

|Prometheus ID|sysdig_host_net_sql_table_request_count|
|---|---|
|Legacy ID|net.sql.table.request.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_net_sql_table_request_time

|Prometheus ID|sysdig_host_net_sql_table_request_time|
|---|---|
|Legacy ID|net.sql.table.request.time|
|Metric Type|counter|
|Unit|time|
|Description| |
|Additional Notes| |

sysdig_host_net_tcp_queue_len

|Prometheus ID|sysdig_host_net_tcp_queue_len|
|---|---|
|Legacy ID|net.tcp.queue.len|
|Metric Type|counter|
|Unit|number|
|Description|Length of the TCP request queue.|
|Additional Notes| |

sysdig_host_net_total_bytes

|Prometheus ID|sysdig_host_net_total_bytes|
|---|---|
|Legacy ID|net.bytes.total|
|Metric Type|counter|
|Unit|data|
|Description|Total network bytes, inbound and outbound.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|
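
Since this is a counter of bytes, a common use is to turn it into a throughput rate, either for the whole scope or per host. A sketch, with the label name assumed:

```promql
# Total network throughput (bytes/s) over the last 5 minutes for the scope.
sum(rate(sysdig_host_net_total_bytes[5m]))

# The same rate per host; "host_hostname" is an assumed label name.
sum by (host_hostname) (rate(sysdig_host_net_total_bytes[5m]))
```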

sysdig_host_proc_count

|Prometheus ID|sysdig_host_proc_count|
|---|---|
|Legacy ID|proc.count|
|Metric Type|counter|
|Unit|number|
|Description|Number of processes on host or container.|
|Additional Notes| |

sysdig_host_syscall_count

|Prometheus ID|sysdig_host_syscall_count|
|---|---|
|Legacy ID|syscall.count|
|Metric Type|gauge|
|Unit|number|
|Description|Total number of syscalls seen.|
|Additional Notes|Syscalls are resource intensive. This metric tracks how many have been made by a given process or container.|

sysdig_host_syscall_error_count

|Prometheus ID|sysdig_host_syscall_error_count|
|---|---|
|Legacy ID|host.error.count|
|Metric Type|counter|
|Unit|number|
|Description|Number of system call errors.|
|Additional Notes|By default, this metric shows the total value for the selected scope. For instance, if you apply it to a group of machines, you will see the total value for the whole group. However, you can easily segment the metric to see it by host, process, container, and so on. Just use ‘Segment by’ in the UI.|

sysdig_host_system_uptime

|Prometheus ID|sysdig_host_system_uptime|
|---|---|
|Legacy ID|system.uptime|
|Metric Type|gauge|
|Unit|time|
|Description|This metric is sent by the agent and represents the number of seconds since the host boot time. It is not available with container granularity.|
|Additional Notes| |

sysdig_host_thread_count

|Prometheus ID|sysdig_host_thread_count|
|---|---|
|Legacy ID|thread.count|
|Metric Type|counter|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_timeseries_count_appcheck

|Prometheus ID|sysdig_host_timeseries_count_appcheck|
|---|---|
|Legacy ID|metricCount.appCheck|
|Metric Type|gauge|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_timeseries_count_jmx

|Prometheus ID|sysdig_host_timeseries_count_jmx|
|---|---|
|Legacy ID|metricCount.jmx|
|Metric Type|gauge|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_timeseries_count_prometheus

|Prometheus ID|sysdig_host_timeseries_count_prometheus|
|---|---|
|Legacy ID|metricCount.prometheus|
|Metric Type|gauge|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_timeseries_count_statsd

|Prometheus ID|sysdig_host_timeseries_count_statsd|
|---|---|
|Legacy ID|metricCount.statsd|
|Metric Type|gauge|
|Unit|number|
|Description| |
|Additional Notes| |

sysdig_host_up

|Prometheus ID|sysdig_host_up|
|---|---|
|Legacy ID|uptime|
|Metric Type|gauge|
|Unit|number|
|Description|The percentage of time the selected entity was down during the visualized time sample. This can be used to determine if a machine (or a group of machines) went down.|
|Additional Notes| |


