The module is available only in Deckhouse Enterprise Edition.

The functionality of the module might significantly change. Compatibility with future versions is not guaranteed.

Commander internals

    Database       ┆                             Commander
                   ┆                       ┌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┐
                   ┆                       ┊ commander-agent ┊
                   ┆                       └╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┘
                   ┆     Integration API  ╱                          ┌─────────────┐
                   ┆           │        ╱                            │┌────────────┴┐
┌──────────────┐   ┆   ┌───────┴───────┴┐  ┌─────────────────────┐   ││┌────────────┴─┐
│   Postgres   ├───┆───┤   API Server   ├──┤   Cluster Manager   ├───┴││ dhctl server │
└──────────────┘   ┆   └───────┬────────┘  └─────────────────────┘    └│      ×N      │
                   ┆           │                                       └──────────────┘
                   ┆   ┌───────┴────────┐    ┌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┐
                   ┆   │    Web app     ├╌╌╌╌┊ deckhouse-admin ┊
                   ┆   └────────────────┘    └╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┘

Commander has two external dependencies that are required for its full operation:

  • PostgreSQL DBMS (required)
  • deckhouse-admin module (optional)

The API server is the central component. Data is stored in PostgreSQL. Options for installing Commander with a DBMS are listed in the section below.

The API server provides both external APIs — for the web application and for external integration — and internal APIs for working with clusters.

The web application uses the API to manage clusters and other Commander entities, while deckhouse-admin provides the administration interface for an individual cluster. The deckhouse-admin module must be enabled in the same cluster where Commander is running.

Asynchronous operations — tasks — are used to manage clusters. The cluster manager is a service that monitors tasks and executes them. Tasks can be cluster installation, cluster deletion, or cluster state reconciliation with the specified configuration.

The cluster manager is single-threaded, so cluster-processing throughput depends on the number of clusters and the number of cluster manager replicas. When a cluster is created via the API, the API server creates an installation task, and a free cluster manager instance picks the task up. The same happens for cluster update, deletion, and reconciliation operations.

The cluster manager uses a special component to manage clusters — dhctl server. In the target design, the cluster manager launches a dhctl server replica of exactly the required version for each DKP cluster individually. However, dhctl server is currently under active development, so there is currently a limit on the version of DKP that Commander can install. See the “Current limitations” section below.

In each cluster, Commander automatically installs the deckhouse-commander-agent module. This module in the application cluster is responsible for synchronizing Kubernetes resources, as well as sending telemetry to the Commander API server. Telemetry now includes basic metrics (CPU, memory, number of nodes, and total storage space), DKP version, Kubernetes version, and DKP components availability.
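As an illustration, a telemetry report from commander-agent might carry fields along these lines. This is a hypothetical sketch: the field names and structure below are assumptions for illustration, not the actual wire format.

```yaml
# Hypothetical sketch of agent telemetry; the real format is
# internal to Commander and may differ.
cluster: demo-cluster
deckhouseVersion: "1.59.8"    # DKP version
kubernetesVersion: "1.27.5"   # Kubernetes version
metrics:                      # basic metrics
  cpuCores: 24
  memoryGiB: 48
  nodeCount: 5
  storageGiB: 500
componentsAvailability:       # DKP components availability
  deckhouse: ok
  ingress: ok
```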

Commander also uses additional services that are not shown in the diagram — renderer and connector. The renderer is responsible for generating and validating cluster configurations, and the connector is responsible for the operation of the cluster administration interface.

Current limitations

DKP version in application clusters

For application clusters, we recommend using the Enterprise Edition (EE) of the Deckhouse Kubernetes Platform (DKP) on the Early Access update channel. The current DKP EE version on the Early Access channel can be found at

The Commander cluster manager service is built on top of the DKP installer (hereinafter referred to as “dhctl”). Commander always installs the fixed DKP version that corresponds to the bundled dhctl version. For example, if dhctl is version 1.59.8, the Deckhouse image in the application cluster after installation will be 1.59.8. The installed DKP version does not depend on the update channel selected in the cluster configuration. However, after installation, the DKP cluster will be updated according to the selected update channel.

To manage clusters, the dhctl version and the DKP version must match. At the moment, Commander uses a fixed version of dhctl. We publish Commander patch releases so that the installer bundled with Commander corresponds to the DKP EE version on the Early Access channel. For example, as soon as DKP v1.59.3 is released on EE/Early Access, we release a Commander patch release with dhctl v1.59.3.
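To see which DKP version is actually running in a cluster (and thus whether it matches the dhctl version bundled with Commander), you can inspect the image of the Deckhouse deployment. This is a generic kubectl query, not a Commander-specific command:

```shell
# Print the image of the Deckhouse deployment; its tag reflects
# the installed DKP version (e.g. ...:v1.59.8).
kubectl -n d8-system get deployment deckhouse \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```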

We are working to remove this limitation: the installer version will be selected according to the target DKP version. During installation, the dhctl version will correspond to the selected DKP update channel, while updating and deleting clusters will use the installer matching the cluster’s current DKP installation.

Requirements for resources

To start using Commander, we recommend creating a fault-tolerant management cluster that includes the following node groups (NodeGroup):

Node group    Number of nodes   CPU, cores   Memory, GB   Disk, GB
master        3                 4            8            50
system        2                 4            8            50
frontend      2                 4            8            50
commander     3                 8            12           50
  • Postgres in HA mode with two replicas requires 1 core and 1 GB of memory on two separate nodes.
  • The API server in HA mode with two replicas requires 1 core and 1 GB of memory on two separate nodes.
  • Service components used for rendering configurations and connecting to application clusters require 0.5 cores and 128 MB of memory per cluster.
  • The cluster manager and dhctl server together require resources proportional to the number of clusters they serve and the number of DKP versions served simultaneously.
  • Up to 2 cores per node can be occupied by DKP service components (for example: runtime-audit-engine, istio, cilium, log-shipper).
Number of clusters   CPU, cores   Memory, GB   Number of 8/8 nodes   Number of 8/12 nodes
10                   9            16           3 (=24/24)            2 (=16/24)
25                   10           19           3 (=24/24)            3 (=24/36)
100                  15           29           4 (=32/32)            4 (=32/48)


The deckhouse-commander module has an external dependency — a Postgres database. If you are using your own database, set the database parameters in ModuleConfig/deckhouse-commander. You can also use the operator-postgres module, in which case you need to enable it first and make sure that the CRDs from this module appear in the cluster; then you can enable the Commander module. Both options are described in more detail below.

If you are using your own Postgres installation

To enable Commander, create a ModuleConfig:

apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse-commander
spec:
  enabled: true
  version: 1
  settings:
    postgres:
      mode: External
      external:
        host: "..."     # Mandatory field
        port: "..."     # Mandatory field
        user: "..."     # Mandatory field
        password: "..." # Mandatory field
        db: "..."       # Mandatory field
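Apply the manifest and check that the module configuration was accepted. The filename below is arbitrary:

```shell
kubectl apply -f commander-mc.yaml
# Verify that the ModuleConfig exists and the module is enabled
kubectl get moduleconfig deckhouse-commander
```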

If you’re using the operator-postgres module

Step 1: Enabling operator-postgres

First, enable the operator-postgres module and wait for it to become active:

apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: operator-postgres
spec:
  enabled: true

Then, wait until the Deckhouse task queue becomes empty to make sure the module is enabled:

kubectl -n d8-system exec -t deploy/deckhouse -c deckhouse -- deckhouse-controller queue main

Step 2: Enable deckhouse-commander

Next, enable the deckhouse-commander module. Make sure to specify the StorageClass that the database from the operator-postgres module will use.

apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse-commander
spec:
  enabled: true
  version: 1
  settings:
    nodeSelector: commander
    postgres:
      mode: Internal
      internal:
        storage:
          class: your-storageclass-of-choice  # StorageClass is mandatory
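After applying the configuration, you can watch the module come up: wait for the Deckhouse queue to drain, then inspect the Commander pods. The namespace name below (d8-commander) is an assumption; check the actual namespace created by the module in your cluster:

```shell
# Wait until the Deckhouse task queue is empty
kubectl -n d8-system exec -t deploy/deckhouse -c deckhouse -- deckhouse-controller queue main
# Inspect Commander workloads (namespace name is an assumption)
kubectl get pods -n d8-commander
```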