The module lifecycle stage: Preview.
The module has installation requirements.

How to check that the module is operational?

To do this, check the status conditions of the Postgres resource in the user namespace. All condition types should have the status True:

kubectl -n <users-ns> get postgres <cluster_name> -owide -w
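To check the conditions from a script, you can extract just the type/status pairs. A minimal sketch: the jsonpath expression follows the standard Kubernetes .status.conditions convention, and the sample output being checked below is illustrative:

```shell
# Requires cluster access; prints one "Type=Status" pair per line:
# kubectl -n <users-ns> get postgres <cluster_name> \
#   -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

# Checking such output for any condition that is not True:
conditions="Ready=True
Replicating=True"
bad=$(printf '%s\n' "$conditions" | grep -cv '=True$')
[ "$bad" -eq 0 ] && echo "all conditions are True"
```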

Which PostgreSQL versions are supported by the module?

See Supported PostgreSQL Versions.

How to connect to the database in a PG cluster?

To connect to the database, the following services are available in the namespace:

  - the rw service, d8ms-pg-<cluster_name>-rw, which always points to the primary instance and allows read/write operations;
  - the ro service, d8ms-pg-<cluster_name>-ro, which points to replica instances and allows read-only operations.

If a user is created with the storeCredsToSecret field specified, the connection string will be stored in a namespaced secret with the corresponding name, under a key of the form <db_name>-dsn:

  test-dsn: 'pgsql:host=d8ms-pg-test-rw;port=5432;dbname=test;user=test-ro;password=123'
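The stored DSN uses a PDO-style pgsql format, so individual connection parameters can be split out with standard shell tools. A minimal sketch: the secret name in the comment is a placeholder, and the DSN value is the example above:

```shell
# Fetch the DSN from the secret first (requires cluster access; names are placeholders):
# dsn=$(kubectl -n <users-ns> get secret <secret name> \
#   -o jsonpath='{.data.test-dsn}' | base64 -d)

# Example DSN taken from above:
dsn='pgsql:host=d8ms-pg-test-rw;port=5432;dbname=test;user=test-ro;password=123'

# Strip the "pgsql:" prefix and pull out individual fields:
params=${dsn#pgsql:}
host=$(printf '%s\n' "$params" | tr ';' '\n' | sed -n 's/^host=//p')
dbname=$(printf '%s\n' "$params" | tr ';' '\n' | sed -n 's/^dbname=//p')
echo "$host $dbname"
```

Parameters extracted this way can be passed straight to a client, e.g. psql -h "$host" -d "$dbname".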

Out of disk space / low disk space on one of the PG cluster pods

When one of the pods runs out of disk space to create a new WAL file, the Postgres process in that pod is forcibly stopped, and on pod startup the logs will show a message about insufficient space:

{"level":"info","ts":"2026-03-23T08:44:11.161864717Z","msg":"Checking for free disk space for WALs before starting PostgreSQL","logger":"instance-manager","logging_pod":"d8ms-pg-staging-postgres-4"}
{"level":"info","ts":"2026-03-23T08:44:11.174976009Z","msg":"Detected low-disk space condition, avoid starting the instance","logger":"instance-manager","logging_pod":"d8ms-pg-staging-postgres-4"}

and the pod will remain stuck in CrashLoopBackOff.

To fix this, increase the disk space available to the pods. The recommended procedure is as follows.

  1. Find the PVC that is low on space:

    kubectl get pvc -A -l cnpg.internal.managed.deckhouse.io/instanceName=<stuck pod name>
  2. Ensure allowVolumeExpansion is enabled in the StorageClass used to provision the PVC:

    kubectl get storageclass <storageclass name from PVC> -o jsonpath='{.allowVolumeExpansion}'
  3. Increase spec.instance.persistentVolumeClaim.size in the Postgres CR for the relevant PG cluster.

  4. Wait until the CAPACITY field on the PVC from step 1 matches the size specified in the Postgres resource.

  5. Restart the stuck pod (via kubectl delete pod).

The pod should start successfully and synchronize with the primary.
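The steps above can be sketched as a short shell sequence. The cluster commands are shown as comments because they need live cluster access; the merge-patch path is an assumption based on the spec.instance.persistentVolumeClaim.size field named in step 3, and all names and the size value are placeholders:

```shell
# Build the merge patch that raises the PVC size (size value is an example):
make_size_patch() {
  printf '{"spec":{"instance":{"persistentVolumeClaim":{"size":"%s"}}}}' "$1"
}
patch=$(make_size_patch "20Gi")
echo "$patch"

# Apply it and finish the procedure (requires cluster access; names are placeholders):
# kubectl -n <users-ns> patch postgres <cluster_name> --type merge -p "$patch"
# kubectl -n <users-ns> get pvc -w          # wait for CAPACITY to reach the new size
# kubectl -n <users-ns> delete pod <stuck pod name>
```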