The module is available only in Deckhouse Enterprise Edition.

The module is actively developed. It might significantly change in the future.

Deckhouse Commander is a web application that lets you create uniform clusters through a UI and manage their lifecycle.

Features

  • Creating, updating, and deleting clusters on major cloud platforms as well as on static resources
  • Unifying cluster configurations and keeping them up to date using cluster templates
  • Tracking changes and reconciling clusters to the desired configuration
  • Cluster operation via the embedded admin console
  • Catalogs of resource data used in clusters

Coming soon:

  • Integration API
  • Access control: users and permissions
  • Cross-cluster projects
  • Overview of cloud resources used by clusters

Installation

The deckhouse-commander module depends on a PostgreSQL database. You can use your own database instance or the one provided by the operator-postgres module. Both approaches are described in detail below.

Using your own Postgres instance

To enable Deckhouse Commander, apply the following ModuleConfig containing your database credentials:

apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse-commander
spec:
  enabled: true
  version: 1
  settings:
    postgres:
      mode: External
      external:
        host: "..."      #
        port: "..."      #
        user: "..."      # required fields
        password: "..."  #
        db: "..."        #

Using operator-postgres module

Step 1: enable operator-postgres module

First, enable the operator-postgres module and wait for it to initialize:

apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: operator-postgres
spec:
  enabled: true

To make sure the module has been enabled, wait for the Deckhouse main queue to become empty:

kubectl -n d8-system exec -t deploy/deckhouse -c deckhouse -- deckhouse-controller queue main

Step 2: enable deckhouse-commander module

Then enable the deckhouse-commander module:

apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse-commander
spec:
  enabled: true
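
As in the previous step, you can make sure the module has been picked up by waiting for the Deckhouse main queue to become empty:

kubectl -n d8-system exec -t deploy/deckhouse -c deckhouse -- deckhouse-controller queue main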

Note for Deckhouse v1.56

After installing Deckhouse 1.56, you might run into the following problem:

Warning: module name 'deckhouse-commander' is unknown for deckhouse
moduleconfig.deckhouse.io/deckhouse-commander created

The ModuleConfig will contain a message saying “unknown module name”:

$ kubectl get moduleconfigs.deckhouse.io deckhouse-commander
NAME                  STATE   VERSION   AGE   TYPE   STATUS
deckhouse-commander   N/A               14s   N/A    Ignored: unknown module name

To work around this, create a ModuleUpdatePolicy that fixes the issue for all modules in ModuleSource/deckhouse:

apiVersion: deckhouse.io/v1alpha1
kind: ModuleUpdatePolicy
metadata:
  name: deckhouse
spec:
  moduleReleaseSelector:
    labelSelector:
      matchLabels:
        source: deckhouse
  releaseChannel: Alpha  # pick your preferred release channel here
  update:
    mode: Auto
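
Once the policy is created, the module release should be picked up by Deckhouse; you can re-check the ModuleConfig status with:

kubectl get moduleconfigs.deckhouse.io deckhouse-commander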

Concepts

dhctl

To install the Deckhouse Kubernetes Platform manually, use the dhctl utility. It accepts three sets of data as input:

  1. The cluster and installation configuration in the form of a file, hereinafter referred to as config.yaml.
  2. The SSH connection configuration for the machine that will become the first master node. It is passed as dhctl command-line flags (it can also be specified as YAML). Hereinafter referred to as SSHConfig.
  3. An optional set of resources that need to be created at the last step of the installation, hereinafter referred to as resources.yaml.

What logical parts are contained in this data?

  1. config.yaml
    1. InitConfiguration — cluster setup configuration
    2. ModuleConfig resources — configuration of built-in modules: explicit enabling or disabling, as well as default settings override
    3. ClusterConfiguration — Kubernetes configuration: version, pod subnets, services, etc.
    4. Deployment parameters
      1. <Provider>ClusterConfiguration — parameters of cluster deployment in a cloud or via a virtualization API;
      2. or StaticClusterConfiguration if Deckhouse Kubernetes Platform is being installed on static resources;
  2. resources.yaml
    1. Arbitrary Kubernetes resources, including ModuleConfig for non-built-in modules in Deckhouse Kubernetes Platform
  3. SSHConfig
    1. User name, password and key to connect to an existing machine or the one that will be created during cluster creation
    2. IP address of the machine, if the cluster is deployed on static resources and the address of the future master node is known in advance
    3. The rest of the SSH parameters can be found in the dhctl command help; additional details are not essential for this documentation.
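
For reference, a minimal config.yaml for a static cluster might look roughly like the sketch below. The values are illustrative only; consult the Deckhouse Kubernetes Platform documentation for the exact set of required fields in your version.

apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Static
podSubnetCIDR: 10.111.0.0/16
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "Automatic"
clusterDomain: "cluster.local"
---
apiVersion: deckhouse.io/v1
kind: StaticClusterConfiguration
internalNetworkCIDRs:
- 192.168.0.0/24
---
apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  imagesRepo: registry.deckhouse.io/deckhouse/ee
  registryDockerCfg: "..."  # registry access credentials
---
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  settings:
    bundle: Default
    releaseChannel: Stable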

For manual cluster management using dhctl, the above configuration types are needed in different combinations for cluster installation, modification, and removal.

| Configuration type | Purpose | Installation | Modification | Removal |
|---|---|---|---|---|
| SSHConfig | SSH connection to the master node | ✓ | ✓ | ✓ |
| config.yaml: InitConfiguration | Installation configuration | ✓ | — | — |
| config.yaml: <Provider>ClusterConfiguration or StaticClusterConfiguration | Deployment configuration | ✓ | ✓ | — |
| config.yaml: ClusterConfiguration | Kubernetes cluster configuration | ✓ | ✓ | — |
| config.yaml: ModuleConfig | Deckhouse Kubernetes Platform configuration | ✓ | — | — |
| resources.yaml | Cluster resources | ✓ | — | — |

As you can see, all of the provided configuration is used to create a cluster: the dhctl bootstrap command takes all of it as input.

Changes that can be made to the cluster with the same tool relate either to the deployment parameters (for example, resources of permanent nodes created by Terraform) or to the Kubernetes parameters: only the connection settings and general cluster parameters can be modified with dhctl converge. It is not possible to apply changes to the platform configuration or to additional cluster resources this way.

Finally, to delete a cluster, it is enough to have access to it: the dhctl destroy operation uses only SSHConfig.
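
Roughly, the three operations look like this (a sketch only; the exact flag set differs between dhctl versions, see dhctl --help for your version):

# create a cluster from the full configuration
dhctl bootstrap \
  --ssh-user=<user> --ssh-host=<master-ip> --ssh-agent-private-keys=/path/to/id_rsa \
  --config=/path/to/config.yaml
# (resources can be passed as well; the exact flag depends on the dhctl version)

# bring the deployment and Kubernetes parameters back to the declared state
dhctl converge \
  --ssh-user=<user> --ssh-host=<master-ip> --ssh-agent-private-keys=/path/to/id_rsa

# delete the cluster; only SSH access is required
dhctl destroy \
  --ssh-user=<user> --ssh-host=<master-ip> --ssh-agent-private-keys=/path/to/id_rsa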

Commander

Commander uses the same set of configuration data as dhctl; however, it adds the ability to synchronize the complete desired configuration with the cluster. If we think of Commander as an enhanced version of dhctl, the table looks like this:

| Configuration type | Purpose | Installation | Modification | Deletion |
|---|---|---|---|---|
| SSHConfig | SSH connection to the master node | ✓ | ✓ | ✓ |
| config.yaml: InitConfiguration | Installation configuration | ✓ | — | — |
| config.yaml: <Provider>ClusterConfiguration or StaticClusterConfiguration | Deployment configuration | ✓ | ✓ | — |
| config.yaml: ClusterConfiguration | Kubernetes configuration | ✓ | ✓ | — |
| config.yaml: ModuleConfig | Deckhouse Kubernetes Platform configuration | ✓ | ✓ | — |
| resources.yaml | Cluster resources | ✓ | ✓ | — |

As you can see, Commander fully covers the Deckhouse Kubernetes Platform configuration for managing it after installation. Only InitConfiguration does not participate in modification, because this part of the configuration brings no new information to an existing cluster.

Commander is the source of truth for the cluster configuration. It monitors whether the cluster configuration matches the desired one and, if it detects a discrepancy, attempts to bring the cluster back to the desired configuration. We will refer to this process as “synchronization” below.

In Commander, it is possible to specify the initial configuration of cluster resources, which will be applied during the cluster installation but will not be synchronized later on. This is useful when you need to create recommended or initial resources, but want to give control over them to the cluster operators.

Commander divides the Deckhouse Kubernetes Platform configuration based on the principle of traceability. The user decides which parts of the configuration should be synchronized and which should be set only once, when the cluster is created. Here is how this configuration looks from Commander’s perspective:

| dhctl configuration type | Commander configuration type | Purpose | Installation | Synchronization | Deletion |
|---|---|---|---|---|---|
| SSHConfig | SSH parameters | SSH connection to the master node | ✓ | ✓ | ✓ |
| config.yaml | Deployment | Deployment configuration: <Provider>ClusterConfiguration or StaticClusterConfiguration | ✓ | ✓ | — |
| config.yaml | Kubernetes | Kubernetes configuration: ClusterConfiguration | ✓ | ✓ | — |
| config.yaml | Resources | Deckhouse Kubernetes Platform configuration (ModuleConfig) | ✓ | ✓ | — |
| resources.yaml | Resources | Cluster resources, including any ModuleConfig | ✓ | ✓ | — |
| resources.yaml | Initial resources | Cluster resources, including any ModuleConfig | ✓ | — | — |
| config.yaml | Installation | Installation configuration: InitConfiguration | ✓ | — | — |

Note that the last two rows describe configuration that will not be monitored or synchronized after the cluster is created.

Templates

The Idea

Commander is designed to manage typical clusters. Since all configuration types in Commander are represented as YAML, a cluster template is the required YAML configuration marked up with parameters, plus a schema describing the template’s input parameters. YAML is templated using the Go template syntax together with the Sprig function set. A custom field syntax is used to describe the input parameter schema.

| Commander configuration type | Type | Purpose |
|---|---|---|
| Input parameters | Schema | Schema of the template’s input parameters |
| Kubernetes | YAML template | Kubernetes configuration: ClusterConfiguration |
| Deployment | YAML template | Deployment configuration: <Provider>ClusterConfiguration or StaticClusterConfiguration |
| SSH parameters | YAML template | SSH connection to the master node |
| Resources | YAML template | Cluster resources, including any ModuleConfig |
| Initial resources | YAML template | Cluster resources, including any ModuleConfig |
| Installation | YAML template | Installation configuration: InitConfiguration |

The cluster configuration is created by substituting the input parameters into the configuration templates. The input parameters are validated against the schema defined for them.
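
As an illustration, a fragment of a Kubernetes configuration template could be marked up as follows (the input parameter names here are hypothetical):

apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Static
# the Kubernetes version comes from the template's input parameters
kubernetesVersion: {{ .kubernetesVersion | quote }}
# Sprig's "default" supplies a fallback when the parameter is left empty
podSubnetCIDR: {{ .podSubnetCIDR | default "10.111.0.0/16" }}
serviceSubnetCIDR: 10.222.0.0/16
clusterDomain: "cluster.local"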

Template versions

An important property of templates is that they evolve. It is not enough to create a fleet of clusters from templates: templates are improved and updated to keep up with new software versions and new requirements for cluster operation. An updated template allows you not only to create new clusters that meet current requirements, but also to update existing ones.

To let templates evolve, Commander provides a versioning mechanism. When a template is updated, a new version of it is created; the version can be accompanied by a comment. You can create a cluster from a template version and verify that it works. If a template version turns out to be unsuitable, it can be marked as unavailable; cluster administrators will then not be able to switch clusters to that version.

In Commander, each cluster is tied to a specific template version. Technically, however, the cluster can be moved to any other template and any available template version; the only restriction is that Commander will not allow saving an invalid configuration. When a cluster is moved to a new version or template, the input parameters must be updated so that an updated configuration is generated for the cluster. Commander will then detect that the target configuration does not match the last applied one and create a task to synchronize the cluster.

Complexity of the template

Creating and testing a template is an engineering task, whereas creating clusters from a template generally does not require a deep dive into technical details.

The input parameters of a template are presented to the user as a web form where the user enters or selects the values needed to create a cluster. The entire set of input parameters is defined by the template author: which parameters are available, which are mandatory, in what order they are filled in, what explanatory text accompanies them, and how they are formatted for ease of perception by the end user.

Only the template author determines how easy or difficult the template will be for the end user and which decisions the user needs to make to successfully create a cluster. The more capable the template, the more complex its templating code and its parameter form. Commander users decide for themselves how to balance the complexity of a template against the number of templates for different scenarios. Commander is flexible enough to support both a single template for all occasions and many templates for individual use cases.

Creating a template

You can add a template to Commander in two ways: by importing an existing one (for example, one created earlier in another Commander installation) or by creating a new one from scratch. Ultimately, the templated configuration must match the capabilities of dhctl and of the Deckhouse Kubernetes Platform version that will be installed using the template.

Documentation for the individual configuration types can be found in the Deckhouse Kubernetes Platform and dhctl documentation.

Special variables

There are several special variables in the cluster templates.

| Variable | Purpose |
|---|---|
| dc_sshPublicKey | The public part of the SSH key pair created for each cluster. Can be used for cloud-init of cloud clusters. |
| dc_sshPrivateKey | The private part of the SSH key pair created for each cluster. Can be used to access master nodes of cloud clusters. |
| dc_clusterUUID | UUID of the current cluster, generated for each cluster. Can be used to tag the cluster’s metrics and logs. |
| dc_domain | The domain on which Commander is hosted; common for the entire application. Example: commander.example.com |
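
These variables can be substituted into any templated section. For example, a purely illustrative resource that records the cluster UUID and the Commander domain might look like this (the ConfigMap itself is hypothetical and not required by Commander):

apiVersion: v1
kind: ConfigMap
metadata:
  # hypothetical resource shown only to demonstrate variable substitution
  name: commander-cluster-info
  namespace: kube-system
data:
  clusterUUID: {{ .dc_clusterUUID | quote }}
  commanderDomain: {{ .dc_domain | quote }}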

Required manifests

At the moment, Commander does not generate any configuration implicitly, so the template author needs to include several manifests in the template to get the full Commander experience. In the future, Commander will be improved to reduce the impact of these technical details on the user experience.

SSH parameters for a cloud cluster

For a cloud cluster, you can use the private key created by Commander if you do not provide a predefined key in the OS image. A user is also created in the virtual machine images; Commander connects to the newly created machine under this user to set it up as a master node.

apiVersion: dhctl.deckhouse.io/v1
kind: SSHConfig
# The name of the user for SSH is defined in the "Hosting" section of the OS image
sshUser: ubuntu
sshPort: 22
# The private key that will be used to connect to VMs via SSH
sshAgentPrivateKeys:
- key: |
    {{ .dc_sshPrivateKey | nindent 4 }}    
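
The matching public key can be passed to the provider configuration in the “Deployment” template. The fragment below is only a sketch for OpenStack; other providers expose a similar sshPublicKey field:

apiVersion: deckhouse.io/v1
kind: OpenStackClusterConfiguration
layout: Standard
sshPublicKey: {{ .dc_sshPublicKey | quote }}
# ... the rest of the provider configuration is omitted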

SSH and resources for a static cluster

Since the machines are created in advance, with an SSH server, user, and key already configured on them, this data must be provided in the cluster’s input parameters. Unlike the cloud configuration above, we use a parameter explicitly passed by the user rather than a built-in one. Some values can always be hard-coded in the template if parameterizing them is not considered worthwhile.

Pay attention to the SSHHost manifests: they declare the IP addresses that Commander has access to. In this example, the input parameter .masterHosts is assumed to be a list of IP addresses from which the SSH hosts in the configuration are generated. Since these are master nodes, either 1 or 3 of them should be specified.

apiVersion: dhctl.deckhouse.io/v1
kind: SSHConfig
# username and port for SSH configured on the machines
sshUser: {{ .sshUser }}
sshPort: {{ .sshPort }}
# private key used on machines is passed as an input parameter to the cluster
sshAgentPrivateKeys:
- key: |
    {{ .sshPrivateKey | nindent 4 }}    

{{- range $masterHost := .masterHosts }}
---
apiVersion: dhctl.deckhouse.io/v1
kind: SSHHost
host: {{ $masterHost.ext_ip }}
{{- end }}

Commander connects only to the first SSH host in the provided list; this host becomes a master node of the cluster. Once Deckhouse is installed on the first master node, it can add the remaining master nodes to the cluster on its own, provided they are specified in the template. For that, Deckhouse must be told that these machines exist, how to reach them, and that they should be added to the cluster: create a StaticInstance for each of the two remaining masters, define SSHCredentials for them, and explicitly declare the master node group with spec.staticInstances.count=2, so that the two static master nodes are not only known to Deckhouse but also requested as master nodes. It is advisable to define this part of the template in “Resources”. Below is the template code for this task:

---
apiVersion: deckhouse.io/v1alpha1
kind: SSHCredentials
metadata:
  name: commander-ssh-credentials
  labels:
    heritage: deckhouse-commander
spec:
  sshPort: {{ .sshPort }}
  user: {{ .sshUser }}
  privateSSHKey: {{ .sshPrivateKey | b64enc }}

{{- if gt (len .masterHosts) 1 }}
{{-   range $masterInstance := slice .masterHosts 1 }}
---
apiVersion: deckhouse.io/v1alpha1
kind: StaticInstance
metadata:
  labels:
    type: master
    heritage: deckhouse-commander
  name: {{ $masterInstance.hostname | quote }}
spec:
  address: {{ $masterInstance.ip | quote }}
  credentialsRef:
    apiVersion: deckhouse.io/v1alpha1
    kind: SSHCredentials
    name: commander-ssh-credentials
{{-   end }}
{{- end }}

{{- if gt (len .masterHosts) 1 }}
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: master
  labels:
    heritage: deckhouse-commander
spec:
  disruptions:
    approvalMode: Manual
  nodeTemplate:
    labels:
      node-role.kubernetes.io/control-plane: ""
      node-role.kubernetes.io/master: ""
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
  nodeType: Static
  staticInstances:
    count: 2
    labelSelector:
      matchLabels:
        type: master
{{- end }}

Resources: the deckhouse-commander-agent module

Commander synchronizes resources using the deckhouse-commander-agent module, which is installed on the target cluster. The commander-agent application requests the current list of resources for its cluster from Commander and updates them in the cluster where it is running. To configure the agent correctly, you need to add a manifest that enables the module to the cluster resources.

Pay attention to commanderUrl: the address must include the scheme, HTTP or HTTPS.
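
A minimal sketch of such a manifest in the “Resources” template is shown below. The exact value of commanderUrl, including the API path, depends on your Commander installation; the path here is only a placeholder:

apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse-commander-agent
spec:
  enabled: true
  version: 1
  settings:
    # the address must include the scheme (http:// or https://);
    # replace "..." with the agent API path of your Commander installation
    commanderUrl: "https://{{ .dc_domain }}/..."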