PostgreSQL
This integration is enabled by default.
Versions supported: > v9.4
This integration uses a standalone exporter, available as a UBI-based or scratch-based image.
This integration has 34 metrics.
Timeseries generated: 250 series per instance + 25 series per database + 30 series per table
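The per-instance, per-database, and per-table figures above can be combined to estimate cardinality. As a sketch (the instance shape below, 4 databases with 10 tables each, is a made-up example):

```python
# Estimate the number of time series for a PostgreSQL instance, using the
# per-instance / per-database / per-table figures stated above.
SERIES_PER_INSTANCE = 250
SERIES_PER_DATABASE = 25
SERIES_PER_TABLE = 30

def estimated_series(databases: int, tables_per_db: int) -> int:
    """Rough total series count for one monitored instance."""
    return (SERIES_PER_INSTANCE
            + SERIES_PER_DATABASE * databases
            + SERIES_PER_TABLE * databases * tables_per_db)

# Hypothetical instance: 4 databases, 10 tables each.
print(estimated_series(4, 10))  # 250 + 100 + 1200 = 1550
```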
List of Alerts
Alert | Description | Format |
---|---|---|
[PostgreSQL] Instance Down | PostgreSQL instance is unavailable | Prometheus |
[PostgreSQL] Max Write Buffer Reached | Background writer stops because it reached the maximum write buffers | Prometheus |
[PostgreSQL] High WAL Files Archive Error Rate | High error rate in WAL files archiver | Prometheus |
[PostgreSQL] Low Available Connections | Low available network connections | Prometheus |
[PostgreSQL] High Response Time | High response time in at least one of the databases | Prometheus |
[PostgreSQL] Low Cache Hit Rate | Low cache hit rate | Prometheus |
[PostgreSQL] DeadLocks In Database | Deadlocks detected in database | Prometheus |
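The exact alert expressions are not listed here, but the "Low Cache Hit Rate" alert is typically based on the ratio commonly derived from the `pg_stat_database_blks_hit` and `pg_stat_database_blks_read` counters. A minimal sketch with made-up sample values:

```python
# Cache hit ratio as commonly computed from pg_stat_database counters.
# A "Low Cache Hit Rate" alert would fire when this ratio drops below a
# chosen threshold (the sample values and threshold here are illustrative).
def cache_hit_ratio(blks_hit: float, blks_read: float) -> float:
    total = blks_hit + blks_read
    return blks_hit / total if total else 1.0

ratio = cache_hit_ratio(blks_hit=9000, blks_read=1000)
print(ratio)               # 0.9
print(ratio < 0.95)        # True -> a 95% threshold alert would fire
```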
List of Dashboards
PostgreSQL Instance Health
The dashboard provides information on the status, error rate, and resource usage of a PostgreSQL instance.
PostgreSQL Database Details
The dashboard provides information on the connections, cache hit rate, error rate, latency, and traffic of a database in the PostgreSQL instance.
List of Metrics
Metric name |
---|
pg_database_size_bytes |
pg_locks_count |
pg_replication_lag_seconds |
pg_settings_max_connections |
pg_settings_superuser_reserved_connections |
pg_stat_activity_count |
pg_stat_activity_max_tx_duration |
pg_stat_archiver_archived_count |
pg_stat_archiver_failed_count |
pg_stat_bgwriter_buffers_alloc_total |
pg_stat_bgwriter_buffers_backend_total |
pg_stat_bgwriter_buffers_checkpoint_total |
pg_stat_bgwriter_buffers_clean_total |
pg_stat_bgwriter_checkpoint_sync_time_total |
pg_stat_bgwriter_checkpoint_write_time_total |
pg_stat_bgwriter_checkpoints_req_total |
pg_stat_bgwriter_checkpoints_timed_total |
pg_stat_bgwriter_maxwritten_clean_total |
pg_stat_database_blk_read_time |
pg_stat_database_blks_hit |
pg_stat_database_blks_read |
pg_stat_database_conflicts_confl_deadlock |
pg_stat_database_conflicts_confl_lock |
pg_stat_database_deadlocks |
pg_stat_database_numbackends |
pg_stat_database_temp_bytes |
pg_stat_database_tup_deleted |
pg_stat_database_tup_fetched |
pg_stat_database_tup_inserted |
pg_stat_database_tup_returned |
pg_stat_database_tup_updated |
pg_stat_database_xact_commit |
pg_stat_database_xact_rollback |
pg_up |
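Several of these metrics are meant to be combined. For example, the "Low Available Connections" alert is typically derived from `pg_settings_max_connections`, `pg_settings_superuser_reserved_connections`, and `pg_stat_activity_count`, along the lines of this sketch (sample values are made up):

```python
# Connections still available to regular clients, derived from the
# settings and activity metrics above. Illustrative values only.
def available_connections(max_conn: int, reserved: int, active: int) -> int:
    """max_connections minus superuser-reserved slots minus active backends."""
    return max_conn - reserved - active

print(available_connections(max_conn=100, reserved=3, active=90))  # 7
```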
Prerequisites
Create Credentials for the Exporter in the Database
If you want to use a non-admin user for the exporter, you will have to create the user, along with the views and permissions it needs to gather data from the tables.
The Postgres exporter documentation contains the following script that you can use in your database to create the exporter user:
Note: Before running the script, be sure to set the correct password for the user in the line:

```sql
ALTER USER postgres_exporter WITH PASSWORD 'password';
```
```sql
CREATE OR REPLACE FUNCTION __tmp_create_user() returns void as $$
BEGIN
  IF NOT EXISTS (
    SELECT -- SELECT list can stay empty for this
    FROM pg_catalog.pg_user
    WHERE usename = 'postgres_exporter') THEN
    CREATE USER postgres_exporter;
  END IF;
END;
$$ language plpgsql;

SELECT __tmp_create_user();
DROP FUNCTION __tmp_create_user();

ALTER USER postgres_exporter WITH PASSWORD 'password';
ALTER USER postgres_exporter SET SEARCH_PATH TO postgres_exporter,pg_catalog;

-- If deploying as non-superuser (for example in AWS RDS), uncomment the GRANT
-- line below and replace <MASTER_USER> with your root user.
-- GRANT postgres_exporter TO <MASTER_USER>;

CREATE SCHEMA IF NOT EXISTS postgres_exporter;
GRANT USAGE ON SCHEMA postgres_exporter TO postgres_exporter;
GRANT CONNECT ON DATABASE postgres TO postgres_exporter;

CREATE OR REPLACE FUNCTION get_pg_stat_activity() RETURNS SETOF pg_stat_activity AS
$$ SELECT * FROM pg_catalog.pg_stat_activity; $$
LANGUAGE sql
VOLATILE
SECURITY DEFINER;

CREATE OR REPLACE VIEW postgres_exporter.pg_stat_activity
AS
  SELECT * FROM get_pg_stat_activity();

GRANT SELECT ON postgres_exporter.pg_stat_activity TO postgres_exporter;

CREATE OR REPLACE FUNCTION get_pg_stat_replication() RETURNS SETOF pg_stat_replication AS
$$ SELECT * FROM pg_catalog.pg_stat_replication; $$
LANGUAGE sql
VOLATILE
SECURITY DEFINER;

CREATE OR REPLACE VIEW postgres_exporter.pg_stat_replication
AS
  SELECT * FROM get_pg_stat_replication();

GRANT SELECT ON postgres_exporter.pg_stat_replication TO postgres_exporter;
```
Create a Secret with the Credentials
To create the secret with the user name and password, keep the following in mind:
- It has to be in the same namespace where the exporter will be deployed.
- Use the same user name and password that you used to create the exporter user in the database in the previous step.
- You can change the name of the secret. If you do, you will need to select it in the next steps of the integration.
Without TLS certs
```shell
kubectl create -n Your-Application-Namespace secret generic postgresql-exporter \
  --from-literal=username=userName \
  --from-literal=password=password
```
With TLS certs
```shell
kubectl create -n Your-Application-Namespace secret generic postgresql-exporter \
  --from-literal=username=userName \
  --from-literal=password=password \
  --from-file=sslRootCert=/path/to/tls/cert
```
Installation
An automated wizard is available in Monitoring Integrations in Sysdig Monitor. Expert users can also install with the Helm chart: https://github.com/sysdiglabs/integrations-charts/tree/main/charts/postgresql-exporter
Agent Configuration
The default agent job for this integration is as follows:
```yaml
- job_name: postgres-default
  tls_config:
    insecure_skip_verify: true
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - action: keep
      source_labels: [__meta_kubernetes_pod_host_ip]
      regex: __HOSTIPS__
    - action: keep
      source_labels:
        - __meta_kubernetes_pod_annotation_promcat_sysdig_com_integration_type
      regex: "postgresql"
    - source_labels: [__meta_kubernetes_pod_phase]
      action: keep
      regex: Running
    - action: replace
      source_labels: [__meta_kubernetes_pod_annotation_promcat_sysdig_com_target_ns]
      target_label: kube_namespace_name
    - action: replace
      source_labels: [__meta_kubernetes_pod_annotation_promcat_sysdig_com_target_workload_type]
      target_label: kube_workload_type
    - action: replace
      source_labels: [__meta_kubernetes_pod_annotation_promcat_sysdig_com_target_workload_name]
      target_label: kube_workload_name
    - action: replace
      replacement: true
      target_label: sysdig_omit_source
    - action: replace
      source_labels: [__address__, __meta_kubernetes_pod_annotation_promcat_sysdig_com_port]
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    - action: replace
      source_labels: [__meta_kubernetes_pod_uid]
      target_label: sysdig_k8s_pod_uid
    - action: replace
      source_labels: [__meta_kubernetes_pod_container_name]
      target_label: sysdig_k8s_pod_container_name
  metric_relabel_configs:
    - source_labels: [__name__]
      regex: (pg_replication_lag_seconds|pg_database_size_bytes|pg_locks_count|pg_settings_max_connections|pg_settings_superuser_reserved_connections|pg_stat_activity_count|pg_stat_activity_max_tx_duration|pg_stat_archiver_archived_count|pg_stat_archiver_failed_count|pg_stat_bgwriter_buffers_alloc_total|pg_stat_bgwriter_buffers_backend_total|pg_stat_bgwriter_buffers_checkpoint_total|pg_stat_bgwriter_buffers_clean_total|pg_stat_bgwriter_checkpoint_sync_time_total|pg_stat_bgwriter_checkpoint_write_time_total|pg_stat_bgwriter_checkpoints_req_total|pg_stat_bgwriter_checkpoints_timed_total|pg_stat_bgwriter_maxwritten_clean_total|pg_stat_database_blk_read_time|pg_stat_database_blks_hit|pg_stat_database_blks_read|pg_stat_database_conflicts_confl_deadlock|pg_stat_database_conflicts_confl_lock|pg_stat_database_deadlocks|pg_stat_database_numbackends|pg_stat_database_temp_bytes|pg_stat_database_tup_deleted|pg_stat_database_tup_fetched|pg_stat_database_tup_inserted|pg_stat_database_tup_returned|pg_stat_database_tup_updated|pg_stat_database_xact_commit|pg_stat_database_xact_rollback|pg_up)
      action: keep
```
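The `__address__` relabel rule above joins the pod address and the `promcat.sysdig.com/port` annotation with `;`, then rewrites the scrape target to use the annotated port. Prometheus anchors the regex as a full match, so `([^:]+)(?::\d+)?;(\d+)` strips any existing port before appending the annotated one. A minimal sketch of that rewrite (the sample addresses are hypothetical):

```python
import re

# Same regex as the relabel rule; Prometheus fully anchors it against the
# ";"-joined source labels. "$1:$2" in Prometheus is "\1:\2" in Python.
RELABEL_RE = re.compile(r"([^:]+)(?::\d+)?;(\d+)")

def rewrite_address(address: str, annotated_port: str) -> str:
    """Combine pod address and annotated port like the relabel rule does."""
    joined = f"{address};{annotated_port}"
    m = RELABEL_RE.fullmatch(joined)
    return m.expand(r"\1:\2") if m else address

# With or without an existing port, the annotated port wins.
print(rewrite_address("10.42.0.7:8080", "9187"))  # 10.42.0.7:9187
print(rewrite_address("10.42.0.7", "9187"))       # 10.42.0.7:9187
```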