Deckhouse consists of the Deckhouse operator and modules. A module is a set of Helm charts, hooks, files, and assembly rules for the module components (Deckhouse components).

You can configure Deckhouse using the Deckhouse configuration described below.

Deckhouse configuration

The Deckhouse configuration is stored in the deckhouse ConfigMap in the d8-system namespace and may contain the following parameters (keys):

  • global — contains the global Deckhouse settings as a multi-line string in YAML format;
  • <moduleName> (where <moduleName> is the name of the Deckhouse module in camelCase) — contains the module settings as a multi-line string in YAML format;
  • <moduleName>Enabled (where <moduleName> is the name of the Deckhouse module in camelCase) — explicitly enables or disables the module.

Use the following command to view the deckhouse ConfigMap:

kubectl -n d8-system get cm/deckhouse -o yaml

Example of the deckhouse ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: deckhouse
  namespace: d8-system
data:
  global: |          # Note the vertical bar.
    # Section of the YAML file with global settings.
    modules:
      publicDomainTemplate: "%s.kube.company.my"
  # monitoring-ping related section of the YAML file.
  monitoringPing: |
    externalTargets:
    - host: 8.8.8.8
  # Disabling the dashboard module.
  dashboardEnabled: "false"

Pay attention to the following:

  • The | sign (the vertical bar) must be specified when passing settings, because the parameter being passed is a multi-line string, not an object.
  • The module name is in camelCase style.

Use the following command to edit the deckhouse ConfigMap:

kubectl -n d8-system edit cm/deckhouse
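
If you only need to change a single key, you can also patch the ConfigMap non-interactively instead of opening an editor. The command below is a sketch: the dashboardEnabled key is only an illustration, and any other parameter can be patched the same way.

kubectl -n d8-system patch cm/deckhouse --type merge -p '{"data":{"dashboardEnabled":"false"}}'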

Configuring the module

Deckhouse uses the addon-operator when working with modules. Please refer to its documentation to learn how Deckhouse works with modules, module hooks, and module parameters. We would appreciate it if you starred the project.

Deckhouse only works with enabled modules. Depending on the bundle used, modules can be enabled or disabled by default. See the Enabling and disabling the module section below to learn how to enable or disable a module explicitly.

You can configure the module using the parameter with the module name in camelCase in the Deckhouse configuration. The parameter value is a multi-line YAML string with the module settings.

Some modules can also be configured using custom resources. Use the search bar at the top of the page or select a module in the left menu to see a detailed description of its settings and the custom resources used.
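
For example, the ingress-nginx module is configured with IngressNginxController custom resources rather than through the ConfigMap. The manifest below is a minimal sketch based on the module documentation; the field values are illustrative.

apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: main
spec:
  ingressClass: nginx     # Ingress class served by this controller.
  inlet: LoadBalancer     # How traffic reaches the controller.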

Below is an example of the kube-dns module settings:

data:
  kubeDns: |
    stubZones:
    - upstreamNameservers:
      - 192.168.121.55
      - 10.2.7.80
      zone: directory.company.my
    upstreamNameservers:
    - 10.2.100.55
    - 10.2.200.55

Enabling and disabling the module

Depending on the bundle used, some modules may be enabled by default.

To enable or disable a module, add the <moduleName>Enabled parameter to the deckhouse ConfigMap, where <moduleName> is the name of the module in camelCase. It can take one of two values: "true" or "false" (the quotation marks are mandatory, since ConfigMap values must be strings).

Here is an example of enabling the user-authn module:

data:
  userAuthnEnabled: "true"
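
To check which modules Deckhouse currently considers enabled, you can query the deckhouse-controller running in the Deckhouse Pod. This command is a sketch based on the Deckhouse FAQ; the subcommand may differ between versions.

kubectl -n d8-system exec -it deploy/deckhouse -- deckhouse-controller module list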

Module bundles

Depending on the bundle used, modules may be enabled or disabled by default.

List of modules enabled by default, depending on the bundle:
Default
  • cert-manager
  • chrony
  • cilium-hubble
  • control-plane-manager
  • dashboard
  • deckhouse
  • deckhouse-web
  • descheduler
  • extended-monitoring
  • ingress-nginx
  • kube-dns
  • kube-proxy
  • local-path-provisioner
  • log-shipper
  • monitoring-custom
  • monitoring-deckhouse
  • monitoring-kubernetes-control-plane
  • monitoring-kubernetes
  • monitoring-ping
  • namespace-configurator
  • node-manager
  • pod-reloader
  • priority-class
  • prometheus
  • prometheus-metrics-adapter
  • secret-copier
  • smoke-mini
  • snapshot-controller
  • terraform-manager
  • upmeter
  • user-authn
  • user-authz
  • vertical-pod-autoscaler
Managed
  • cert-manager
  • dashboard
  • deckhouse
  • deckhouse-web
  • descheduler
  • extended-monitoring
  • ingress-nginx
  • local-path-provisioner
  • log-shipper
  • monitoring-custom
  • monitoring-deckhouse
  • monitoring-kubernetes
  • monitoring-ping
  • namespace-configurator
  • pod-reloader
  • prometheus
  • prometheus-metrics-adapter
  • secret-copier
  • snapshot-controller
  • upmeter
  • user-authz
  • vertical-pod-autoscaler
Minimal
  • deckhouse
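
The bundle itself is usually chosen when the cluster is installed. As a sketch, assuming the bundle field of the installer's InitConfiguration resource, it may look like this:

apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  bundle: Minimal          # One of: Default, Managed, Minimal.
  releaseChannel: Stable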

Managing placement of Deckhouse components

Advanced scheduling

If no nodeSelector/tolerations are explicitly specified in the module parameters, the following strategy is used for all modules:

  1. If the nodeSelector module parameter is not set, Deckhouse tries to determine the nodeSelector automatically: it looks for nodes with specific labels in the cluster (see the list below) and, if any are found, applies the corresponding nodeSelectors to the module resources.
  2. If the tolerations parameter is not set for the module, all the possible tolerations are automatically applied to the module's Pods (see the list below).
  3. You can set both parameters to false to disable their automatic calculation (see the example after this list).
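
For example, you can pin the placement of a module explicitly in the Deckhouse configuration. The snippet below is a sketch: the prometheus module and the label and toleration values are illustrative and follow the strategy described in this section. To disable the automatic calculation instead, set nodeSelector: false and tolerations: false.

data:
  prometheus: |
    nodeSelector:
      node-role.deckhouse.io/monitoring: ""
    tolerations:
    - key: dedicated.deckhouse.io
      operator: Equal
      value: monitoring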

You cannot set nodeSelector and tolerations for modules:

  • that involve running a DaemonSet on all cluster nodes (e.g., cni-flannel, monitoring-ping);
  • designed to run on master nodes (e.g., prometheus-metrics-adapter or some vertical-pod-autoscaler components).

Module features that depend on the module type

  • The monitoring-related modules (operator-prometheus, prometheus and vertical-pod-autoscaler):
    • Deckhouse examines nodes to determine a nodeSelector in the following order:
      • It checks if a node with the node-role.deckhouse.io/MODULE_NAME label is present in the cluster.
      • It checks if a node with the node-role.deckhouse.io/monitoring label is present in the cluster.
      • It checks if a node with the node-role.deckhouse.io/system label is present in the cluster.
    • Tolerations to add (note that tolerations are added all at once):
      • {"key":"dedicated.deckhouse.io","operator":"Equal","value":"MODULE_NAME"}

        E.g., {"key":"dedicated.deckhouse.io","operator":"Equal","value":"operator-prometheus"}.

      • {"key":"dedicated.deckhouse.io","operator":"Equal","value":"monitoring"}.
      • {"key":"dedicated.deckhouse.io","operator":"Equal","value":"system"}.
  • The frontend-related modules (ingress-nginx only):
    • Deckhouse examines nodes to determine a nodeSelector in the following order:
      • It checks if a node with the node-role.deckhouse.io/MODULE_NAME label is present in the cluster.
      • It checks if a node with the node-role.deckhouse.io/frontend label is present in the cluster.
    • Tolerations to add (note that tolerations are added all at once):
      • {"key":"dedicated.deckhouse.io","operator":"Equal","value":"MODULE_NAME"}.
      • {"key":"dedicated.deckhouse.io","operator":"Equal","value":"frontend"}.
  • Other modules:
    • Deckhouse examines nodes to determine a nodeSelector in the following order:
      • It checks if a node with the node-role.deckhouse.io/MODULE_NAME label is present in the cluster;

        E.g., node-role.deckhouse.io/cert-manager;

      • It checks if a node with the node-role.deckhouse.io/system label is present in the cluster.

    • Tolerations to add (note that tolerations are added all at once):
      • {"key":"dedicated.deckhouse.io","operator":"Equal","value":"MODULE_NAME"}

        E.g., {"key":"dedicated.deckhouse.io","operator":"Equal","value":"network-gateway"};

      • {"key":"dedicated.deckhouse.io","operator":"Equal","value":"system"}.