Deckhouse Kubernetes Platform on Deckhouse Virtualization Platform (DVP)

Before installation, ensure the following:

  • Cloud provider quotas are sufficient for cluster deployment.
  • The cloud-init package is installed on the VMs. After a VM starts, the cloud-init.service, cloud-config.service, and cloud-final.service services must be running.
  • The virtual machine template contains only one disk.
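
The cloud-init requirement above can be checked on a running VM. A minimal sketch (run on the virtual machine itself; `systemctl` must be available, i.e. a systemd-based distribution):

```shell
# Check that the cloud-init units listed above are active on this VM.
# Prints each unit name with its state; "unknown" if systemctl is missing.
for svc in cloud-init.service cloud-config.service cloud-final.service; do
  state=$(systemctl is-active "$svc" 2>/dev/null || true)
  printf '%-24s %s\n' "$svc" "${state:-unknown}"
done
```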

Additional requirements and notes

  • For ContainerdV2 on cluster nodes, the OS on the virtual machines must meet the following requirements:
    • Linux kernel version 5.8 or newer, except for the ranges 6.12.0–6.12.28 or 6.14.0–6.14.6 (these versions are affected by CVE-2025-37999 in EROFS);
    • CgroupsV2 support;
    • Systemd version 244 or newer;
    • erofs kernel module support.

    For more information, see the ClusterConfiguration resource.

  • Starting with version 1.74, Deckhouse includes a module integrity control mechanism (protection against module replacement and modification). It is enabled automatically when the OS on the nodes supports the erofs kernel module. Without erofs support, Deckhouse runs as before, but the mechanism stays off and an alert indicates that it is unavailable.
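
The kernel-version constraint above can be checked with a small script. A sketch in bash (the `kernel_ok` function name is illustrative, not part of Deckhouse tooling):

```shell
#!/usr/bin/env bash
# Returns success if the given kernel version satisfies the ContainerdV2
# requirements: >= 5.8, excluding 6.12.0-6.12.28 and 6.14.0-6.14.6
# (the ranges affected by CVE-2025-37999 in EROFS).
kernel_ok() {
  local ver=${1%%-*}              # strip distro suffixes like "-generic"
  local major minor patch extra
  IFS=. read -r major minor patch extra <<<"$ver"
  patch=${patch:-0}

  # Minimum supported version: 5.8
  if (( major < 5 )) || (( major == 5 && minor < 8 )); then
    return 1
  fi
  # Excluded range: 6.12.0-6.12.28
  if (( major == 6 && minor == 12 && patch <= 28 )); then
    return 1
  fi
  # Excluded range: 6.14.0-6.14.6
  if (( major == 6 && minor == 14 && patch <= 6 )); then
    return 1
  fi
  return 0
}

# Check the running kernel:
if kernel_ok "$(uname -r)"; then
  echo "kernel version OK for ContainerdV2"
else
  echo "kernel version NOT suitable for ContainerdV2"
fi
```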

To deploy Deckhouse Kubernetes Platform on DVP, perform the initial setup in the virtualization system. Create a user (ServiceAccount), assign permissions, and obtain a kubeconfig.

  1. Create a user (ServiceAccount and token) by running:

    d8 k create -f - <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sa-demo
      namespace: default
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: sa-demo-token
      namespace: default
      annotations:
        kubernetes.io/service-account.name: sa-demo
    type: kubernetes.io/service-account-token
    EOF
    
  2. Assign a role to the user by running:

    d8 k create -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: sa-demo-rb
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: sa-demo
        namespace: default
    roleRef:
      kind: ClusterRole
      name: d8:use:role:manager
      apiGroup: rbac.authorization.k8s.io
    EOF
    
  3. Enable kubeconfig issuance via API. Open the user-authn module settings (create a ModuleConfig resource named user-authn if it does not exist):

    d8 k edit mc user-authn
    
  4. Add the following section to the settings block and save:

    publishAPI:
      enabled: true
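
As a non-interactive alternative to editing the module settings, the ModuleConfig can be applied declaratively. A sketch of the full resource (the `apiVersion` and settings `version` shown are assumptions; verify them against the user-authn module documentation for your Deckhouse release):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: user-authn
spec:
  enabled: true
  settings:
    publishAPI:
      enabled: true
  version: 2   # settings schema version; an assumption, check your release
```

Apply it with `d8 k apply -f <file>`.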
    
  5. Generate a kubeconfig to be used in the cluster initial configuration file in the next step:

    cat <<EOF > kubeconfig
    apiVersion: v1
    clusters:
    - cluster:
        server: https://<KUBE-APISERVER-URL>   # Replace this with the actual API server address for the cluster.
      name: <CLUSTER-NAME>                     # Replace with the cluster name.
    contexts:
    - context:
        cluster: <CLUSTER-NAME>                # Replace with the cluster name.
        user: sa-demo
        namespace: default
      name: sa-demo-context
    current-context: sa-demo-context
    kind: Config
    preferences: {}
    users:
    - name: sa-demo
      user:
        token: $(d8 k get secret sa-demo-token -n default -o json | jq -rc .data.token | base64 -d)
    EOF
    

    Encode the generated kubeconfig file in Base64; the resulting single-line string is what goes into the cluster initial configuration file:

    base64 kubeconfig | tr -d '\n'
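
The encoding can be sanity-checked by round-tripping: decoding the single-line Base64 string must reproduce the original file byte for byte. Shown here on a stand-in file; replace `demo.cfg` with the kubeconfig file generated in the previous step:

```shell
# Create a stand-in file for illustration (use your real kubeconfig instead).
printf 'apiVersion: v1\nkind: Config\n' > demo.cfg

# Encode as a single line, then decode and compare with the original.
encoded=$(base64 demo.cfg | tr -d '\n')
printf '%s' "$encoded" | base64 -d | cmp -s - demo.cfg && echo "round-trip OK"
```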