## Switching DKP from CE to EE
A valid license key is required. If needed, you can request a temporary license.
This instruction assumes the use of the public container registry `registry.deckhouse.ru`.
To switch from Deckhouse Community Edition to Enterprise Edition, follow these steps (all commands should be executed on a master node, either as a user with a configured `kubectl` context or with superuser privileges):
1. Prepare variables with your license token:

   ```shell
   LICENSE_TOKEN=<PUT_YOUR_LICENSE_TOKEN_HERE>
   AUTH_STRING="$(echo -n license-token:${LICENSE_TOKEN} | base64)"
   ```
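The `AUTH_STRING` value is simply the Base64 encoding of the `license-token:<token>` pair, which containerd later presents as a Basic auth credential. A quick local sketch (using a made-up token, not a real license key) shows the round trip:

```shell
# Illustration only: "abc123" is a made-up token, not a real license key.
LICENSE_TOKEN=abc123
AUTH_STRING="$(echo -n license-token:${LICENSE_TOKEN} | base64)"

# Decoding restores the original user:password pair.
echo "$AUTH_STRING" | base64 -d
# -> license-token:abc123
```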
1. Create a NodeGroupConfiguration resource to enable transitional authorization to `registry.deckhouse.ru`:

   ```shell
   d8 k apply -f - <<EOF
   apiVersion: deckhouse.io/v1alpha1
   kind: NodeGroupConfiguration
   metadata:
     name: containerd-ee-config.sh
   spec:
     nodeGroups:
     - '*'
     bundles:
     - '*'
     weight: 30
     content: |
       _on_containerd_config_changed() {
         bb-flag-set containerd-need-restart
       }
       bb-event-on 'containerd-config-file-changed' '_on_containerd_config_changed'
       mkdir -p /etc/containerd/conf.d
       bb-sync-file /etc/containerd/conf.d/ee-registry.toml - containerd-config-file-changed << "EOF_TOML"
       [plugins]
         [plugins."io.containerd.grpc.v1.cri"]
           [plugins."io.containerd.grpc.v1.cri".registry.configs]
             [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.deckhouse.ru".auth]
               auth = "$AUTH_STRING"
       EOF_TOML
   EOF
   ```
   Wait until the `/etc/containerd/conf.d/ee-registry.toml` file appears on the nodes and Bashible synchronization is complete. You can monitor the sync status using the `UPTODATE` column (the number of `UPTODATE` nodes should match the total number of nodes in the group):

   ```shell
   d8 k get ng -o custom-columns=NAME:.metadata.name,NODES:.status.nodes,READY:.status.ready,UPTODATE:.status.upToDate -w
   ```

   Example output:

   ```console
   NAME     NODES   READY   UPTODATE
   master   1       1       1
   worker   2       2       2
   ```

   You should also see the message `Configuration is in sync, nothing to do.` in the bashible systemd service logs, for example:

   ```shell
   journalctl -u bashible -n 5
   ```

   Example output:

   ```console
   Aug 21 11:04:28 master-ce-to-ee-0 bashible.sh[53407]: Configuration is in sync, nothing to do.
   Aug 21 11:04:28 master-ce-to-ee-0 bashible.sh[53407]: Annotate node master-ce-to-ee-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
   Aug 21 11:04:29 master-ce-to-ee-0 bashible.sh[53407]: Successful annotate node master-ce-to-ee-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
   Aug 21 11:04:29 master-ce-to-ee-0 systemd[1]: bashible.service: Deactivated successfully.
   ```
1. Launch a temporary DKP EE pod to retrieve the latest image digests and module list:

   ```shell
   DECKHOUSE_VERSION=$(d8 k -n d8-system get deploy deckhouse -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "deckhouse") | .image' | awk -F: '{print $2}')
   d8 k run ee-image --image=registry.deckhouse.ru/deckhouse/ee/install:$DECKHOUSE_VERSION --command sleep -- infinity
   ```

   To verify which DKP version is currently deployed:

   ```shell
   d8 k get deckhousereleases | grep Deployed
   ```
1. Once the pod reaches the `Running` state, execute the following commands.

   Retrieve the value of `EE_REGISTRY_PACKAGE_PROXY`:

   ```shell
   EE_REGISTRY_PACKAGE_PROXY=$(d8 k exec ee-image -- cat deckhouse/candi/images_digests.json | jq -r ".registryPackagesProxy.registryPackagesProxy")
   ```

   Pull the Deckhouse EE image using the obtained digest:

   ```shell
   crictl pull registry.deckhouse.ru/deckhouse/ee@$EE_REGISTRY_PACKAGE_PROXY
   ```

   Example output:

   ```console
   Image is up to date for sha256:8127efa0f903a7194d6fb7b810839279b9934b200c2af5fc416660857bfb7832
   ```
1. Update the DKP registry access secret by running the following command:

   ```shell
   d8 k -n d8-system create secret generic deckhouse-registry \
     --from-literal=".dockerconfigjson"="{\"auths\": { \"registry.deckhouse.ru\": { \"username\": \"license-token\", \"password\": \"$LICENSE_TOKEN\", \"auth\": \"$AUTH_STRING\" }}}" \
     --from-literal="address"=registry.deckhouse.ru \
     --from-literal="path"=/deckhouse/ee \
     --from-literal="scheme"=https \
     --type=kubernetes.io/dockerconfigjson \
     --dry-run='client' \
     -o yaml | kubectl -n d8-system exec -i svc/deckhouse-leader -c deckhouse -- kubectl replace -f -
   ```
1. Apply the `webhook-handler` image:

   ```shell
   HANDLER=$(d8 k exec ee-image -- cat deckhouse/candi/images_digests.json | jq -r ".deckhouse.webhookHandler")
   d8 k --as=system:serviceaccount:d8-system:deckhouse -n d8-system set image deployment/webhook-handler handler=registry.deckhouse.ru/deckhouse/ee@$HANDLER
   ```
1. Apply the Deckhouse EE image:

   ```shell
   DECKHOUSE_KUBE_RBAC_PROXY=$(d8 k exec ee-image -- cat deckhouse/candi/images_digests.json | jq -r ".common.kubeRbacProxy")
   DECKHOUSE_INIT_CONTAINER=$(d8 k exec ee-image -- cat deckhouse/candi/images_digests.json | jq -r ".common.init")
   DECKHOUSE_VERSION=$(d8 k -n d8-system get deploy deckhouse -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "deckhouse") | .image' | awk -F: '{print $2}')
   d8 k --as=system:serviceaccount:d8-system:deckhouse -n d8-system set image deployment/deckhouse init-downloaded-modules=registry.deckhouse.ru/deckhouse/ee@$DECKHOUSE_INIT_CONTAINER kube-rbac-proxy=registry.deckhouse.ru/deckhouse/ee@$DECKHOUSE_KUBE_RBAC_PROXY deckhouse=registry.deckhouse.ru/deckhouse/ee:$DECKHOUSE_VERSION
   ```
1. Wait for the Deckhouse pod to reach the `Ready` status and for all tasks in the queue to complete. If you encounter the `ImagePullBackOff` error during this process, wait for the pod to restart automatically.

   Check the status of the DKP pod:

   ```shell
   d8 k -n d8-system get po -l app=deckhouse
   ```

   Check the DKP task queue:

   ```shell
   d8 platform queue list
   ```
1. Check if any pods are still using the CE registry address:

   ```shell
   d8 k get pods -A -o json | jq -r '.items[] | select(.spec.containers[] | select(.image | contains("deckhouse.ru/deckhouse/ce"))) | .metadata.namespace + "\t" + .metadata.name' | sort | uniq
   ```
1. Clean up temporary files, the NodeGroupConfiguration resource, and variables:

   ```shell
   d8 k delete ngc containerd-ee-config.sh
   d8 k delete pod ee-image
   d8 k apply -f - <<EOF
   apiVersion: deckhouse.io/v1alpha1
   kind: NodeGroupConfiguration
   metadata:
     name: del-temp-config.sh
   spec:
     nodeGroups:
     - '*'
     bundles:
     - '*'
     weight: 90
     content: |
       if [ -f /etc/containerd/conf.d/ee-registry.toml ]; then
         rm -f /etc/containerd/conf.d/ee-registry.toml
       fi
   EOF
   ```
   After Bashible synchronization (you can track it by the `UPTODATE` status of the NodeGroup), delete the temporary configuration resource:

   ```shell
   d8 k delete ngc del-temp-config.sh
   ```
## Switching DKP from EE to CE
This instruction assumes the use of the public container registry `registry.deckhouse.ru`.

Using registries other than `registry.deckhouse.io` and `registry.deckhouse.ru` is only available in commercial editions of Deckhouse Kubernetes Platform.
Cloud clusters on OpenStack and VMware vSphere are not supported in DKP CE.
To switch from Deckhouse Enterprise Edition to Community Edition, follow these steps (all commands should be executed on a master node, either as a user with a configured `kubectl` context or with superuser privileges):
1. To retrieve the current image digests and module list, create a temporary DKP CE pod using the following command:

   ```shell
   DECKHOUSE_VERSION=$(d8 k -n d8-system get deploy deckhouse -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "deckhouse") | .image' | awk -F: '{print $2}')
   d8 k run ce-image --image=registry.deckhouse.ru/deckhouse/ce/install:$DECKHOUSE_VERSION --command sleep -- infinity
   ```

   This will run the image of the latest installed DKP version in the cluster.

   To determine the currently installed version, use:

   ```shell
   d8 k get deckhousereleases | grep Deployed
   ```
1. Once the pod enters the `Running` state, execute the following commands.

   Retrieve the `CE_REGISTRY_PACKAGE_PROXY` value:

   ```shell
   CE_REGISTRY_PACKAGE_PROXY=$(d8 k exec ce-image -- cat deckhouse/candi/images_digests.json | jq -r ".registryPackagesProxy.registryPackagesProxy")
   ```

   Pull the DKP CE image using the obtained digest:

   ```shell
   crictl pull registry.deckhouse.ru/deckhouse/ce@$CE_REGISTRY_PACKAGE_PROXY
   ```

   Example output:

   ```console
   Image is up to date for sha256:8127efa0f903a7194d6fb7b810839279b9934b200c2af5fc416660857bfb7832
   ```

   Retrieve the list of `CE_MODULES`:

   ```shell
   CE_MODULES=$(d8 k exec ce-image -- ls -l deckhouse/modules/ | grep -oE "\d.*-\w*" | awk {'print $9'} | cut -c5-)
   ```

   Check the result:

   ```shell
   echo $CE_MODULES
   ```

   Example output:

   ```console
   common priority-class deckhouse external-module-manager registrypackages ...
   ```
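The `cut -c5-` at the end of the module-list pipeline strips the numeric weight prefix (for example `402-`) from each module directory name, keeping only the module name. A self-contained sketch with hypothetical directory names:

```shell
# Hypothetical module directory names; the real names come from deckhouse/modules/.
# cut -c5- keeps everything from character 5 onward, dropping the "NNN-" prefix.
printf '%s\n' 402-cert-manager 340-chrony | cut -c5-
# -> cert-manager
#    chrony
```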
   Retrieve the list of currently enabled embedded modules:

   ```shell
   USED_MODULES=$(d8 k get modules -o custom-columns=NAME:.metadata.name,SOURCE:.properties.source,STATE:.properties.state,ENABLED:.status.phase | grep Embedded | grep -E 'Enabled|Ready' | awk {'print $1'})
   ```

   Verify the result:

   ```shell
   echo $USED_MODULES
   ```

   Example output:

   ```console
   admission-policy-engine cert-manager chrony ...
   ```

   Determine which modules will be disabled after switching to CE:

   ```shell
   MODULES_WILL_DISABLE=$(echo $USED_MODULES | tr ' ' '\n' | grep -Fxv -f <(echo $CE_MODULES | tr ' ' '\n'))
   ```

   Verify the result:

   ```shell
   echo $MODULES_WILL_DISABLE
   ```

   Example output:

   ```console
   node-local-dns registry-packages-proxy
   ```
   If `registry-packages-proxy` appears in `$MODULES_WILL_DISABLE`, it must be manually re-enabled. Otherwise, the cluster will not be able to switch to DKP CE images. Instructions for re-enabling it are provided in Step 8.
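The `grep -Fxv -f` construction used above computes a set difference: lines of `USED_MODULES` that do not appear in `CE_MODULES` survive the filter. A self-contained sketch with made-up module lists (note that the `<(...)` process substitution requires bash):

```shell
#!/usr/bin/env bash
# Made-up lists for illustration; the real values come from the cluster.
USED="cert-manager chrony node-local-dns"
AVAILABLE="cert-manager chrony"

# -F: fixed strings, -x: whole-line match, -v: invert, -f: read patterns from file.
# Only lines of USED that are absent from AVAILABLE pass through.
echo $USED | tr ' ' '\n' | grep -Fxv -f <(echo $AVAILABLE | tr ' ' '\n')
# -> node-local-dns
```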
1. Make sure that the modules currently used in the cluster are supported in DKP CE.

   To display the list of modules that are not supported and will be disabled:

   ```shell
   echo $MODULES_WILL_DISABLE
   ```

   Review the list carefully and make sure that the functionality provided by these modules is not critical for your cluster, and that you are ready to disable them.

   To disable unsupported modules:

   ```shell
   echo $MODULES_WILL_DISABLE | tr ' ' '\n' | awk {'print "d8 platform module disable",$1'} | bash
   ```

   Example output:

   ```console
   Defaulted container "deckhouse" out of: deckhouse, kube-rbac-proxy, init-external-modules (init)
   Module node-local-dns disabled
   ```
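Because the pipeline above feeds generated commands straight into `bash`, it can be worth previewing what will actually run. Dropping the final `| bash` prints the commands instead of executing them (module names here are made up for illustration):

```shell
# Preview only: without "| bash" at the end, nothing is executed.
MODULES="node-local-dns registry-packages-proxy"
echo $MODULES | tr ' ' '\n' | awk '{print "d8 platform module disable", $1}'
# -> d8 platform module disable node-local-dns
#    d8 platform module disable registry-packages-proxy
```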
1. Update the DKP registry access secret by running the following command:

   ```shell
   d8 k -n d8-system create secret generic deckhouse-registry \
     --from-literal=".dockerconfigjson"="{\"auths\": { \"registry.deckhouse.ru\": {}}}" \
     --from-literal="address"=registry.deckhouse.ru \
     --from-literal="path"=/deckhouse/ce \
     --from-literal="scheme"=https \
     --type=kubernetes.io/dockerconfigjson \
     --dry-run='client' \
     -o yaml | kubectl -n d8-system exec -i svc/deckhouse-leader -c deckhouse -- kubectl replace -f -
   ```
1. Apply the `webhook-handler` image:

   ```shell
   HANDLER=$(d8 k exec ce-image -- cat deckhouse/candi/images_digests.json | jq -r ".deckhouse.webhookHandler")
   d8 k --as=system:serviceaccount:d8-system:deckhouse -n d8-system set image deployment/webhook-handler handler=registry.deckhouse.ru/deckhouse/ce@$HANDLER
   ```
1. Apply the DKP CE image:

   ```shell
   DECKHOUSE_KUBE_RBAC_PROXY=$(d8 k exec ce-image -- cat deckhouse/candi/images_digests.json | jq -r ".common.kubeRbacProxy")
   DECKHOUSE_INIT_CONTAINER=$(d8 k exec ce-image -- cat deckhouse/candi/images_digests.json | jq -r ".common.init")
   DECKHOUSE_VERSION=$(d8 k -n d8-system get deploy deckhouse -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "deckhouse") | .image' | awk -F: '{print $2}')
   d8 k --as=system:serviceaccount:d8-system:deckhouse -n d8-system set image deployment/deckhouse init-downloaded-modules=registry.deckhouse.ru/deckhouse/ce@$DECKHOUSE_INIT_CONTAINER kube-rbac-proxy=registry.deckhouse.ru/deckhouse/ce@$DECKHOUSE_KUBE_RBAC_PROXY deckhouse=registry.deckhouse.ru/deckhouse/ce:$DECKHOUSE_VERSION
   ```
1. Wait for the DKP pod to reach the `Ready` status and for all tasks in the queue to complete. If you encounter the `ImagePullBackOff` error during this process, wait for the pod to restart automatically.

   Check the status of the DKP pod:

   ```shell
   d8 k -n d8-system get po -l app=deckhouse
   ```

   Check the DKP task queue:

   ```shell
   d8 platform queue list
   ```
1. Check if any pods in the cluster are still using the DKP EE registry address:

   ```shell
   d8 k get pods -A -o json | jq -r '.items[] | select(.spec.containers[] | select(.image | contains("deckhouse.ru/deckhouse/ee"))) | .metadata.namespace + "\t" + .metadata.name' | sort | uniq
   ```

   If the `registry-packages-proxy` module was previously disabled, re-enable it:

   ```shell
   d8 platform module enable registry-packages-proxy
   ```
1. Delete the temporary DKP CE pod:

   ```shell
   d8 k delete pod ce-image
   ```
## Switching DKP from EE to SE
To perform the switch, you will need a valid license token.
This instruction uses the public container registry address `registry.deckhouse.ru`.
DKP SE does not support the cloud providers `dynamix`, `openstack`, `VCD`, and `vSphere`, as well as several modules.
The steps below describe how to switch a Deckhouse Enterprise Edition cluster to Standard Edition. All commands must be executed on the master node of the existing cluster.
1. Prepare environment variables with your license token:

   ```shell
   LICENSE_TOKEN=<PUT_YOUR_LICENSE_TOKEN_HERE>
   AUTH_STRING="$(echo -n license-token:${LICENSE_TOKEN} | base64)"
   ```
1. Create a NodeGroupConfiguration resource to enable transitional authorization to `registry.deckhouse.ru`:

   ```shell
   d8 k apply -f - <<EOF
   apiVersion: deckhouse.io/v1alpha1
   kind: NodeGroupConfiguration
   metadata:
     name: containerd-se-config.sh
   spec:
     nodeGroups:
     - '*'
     bundles:
     - '*'
     weight: 30
     content: |
       _on_containerd_config_changed() {
         bb-flag-set containerd-need-restart
       }
       bb-event-on 'containerd-config-file-changed' '_on_containerd_config_changed'
       mkdir -p /etc/containerd/conf.d
       bb-sync-file /etc/containerd/conf.d/se-registry.toml - containerd-config-file-changed << "EOF_TOML"
       [plugins]
         [plugins."io.containerd.grpc.v1.cri"]
           [plugins."io.containerd.grpc.v1.cri".registry.configs]
             [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.deckhouse.ru".auth]
               auth = "$AUTH_STRING"
       EOF_TOML
   EOF
   ```
   Wait until the `/etc/containerd/conf.d/se-registry.toml` file appears on the nodes and Bashible synchronization completes. You can track the synchronization status by checking the `UPTODATE` value (it should match the total number of nodes in each group):

   ```shell
   d8 k get ng -o custom-columns=NAME:.metadata.name,NODES:.status.nodes,READY:.status.ready,UPTODATE:.status.upToDate -w
   ```

   Example output:

   ```console
   NAME     NODES   READY   UPTODATE
   master   1       1       1
   worker   2       2       2
   ```

   You should also see the message `Configuration is in sync, nothing to do.` in the bashible systemd service logs:

   ```shell
   journalctl -u bashible -n 5
   ```

   Example output:

   ```console
   Aug 21 11:04:28 master-ee-to-se-0 bashible.sh[53407]: Configuration is in sync, nothing to do.
   Aug 21 11:04:28 master-ee-to-se-0 bashible.sh[53407]: Annotate node master-ee-to-se-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
   Aug 21 11:04:29 master-ee-to-se-0 bashible.sh[53407]: Successful annotate node master-ee-to-se-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
   Aug 21 11:04:29 master-ee-to-se-0 systemd[1]: bashible.service: Deactivated successfully.
   ```
1. Launch a temporary DKP SE pod to retrieve the latest image digests and module list:

   ```shell
   DECKHOUSE_VERSION=$(d8 k -n d8-system get deploy deckhouse -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "deckhouse") | .image' | awk -F: '{print $2}')
   d8 k run se-image --image=registry.deckhouse.ru/deckhouse/se/install:$DECKHOUSE_VERSION --command sleep -- infinity
   ```

   To check the currently installed DKP version:

   ```shell
   d8 k get deckhousereleases | grep Deployed
   ```
1. Once the pod reaches the `Running` state, execute the following steps.

   Retrieve the value of `SE_REGISTRY_PACKAGE_PROXY`:

   ```shell
   SE_REGISTRY_PACKAGE_PROXY=$(d8 k exec se-image -- cat deckhouse/candi/images_digests.json | jq -r ".registryPackagesProxy.registryPackagesProxy")
   ```

   Pull the DKP SE image manually:

   ```shell
   sudo /opt/deckhouse/bin/crictl pull registry.deckhouse.ru/deckhouse/se@$SE_REGISTRY_PACKAGE_PROXY
   ```

   Example output:

   ```console
   Image is up to date for sha256:7e9908d47580ed8a9de481f579299ccb7040d5c7fade4689cb1bff1be74a95de
   ```
   Retrieve the list of available modules in SE:

   ```shell
   SE_MODULES=$(d8 k exec se-image -- ls -l deckhouse/modules/ | grep -oE "\d.*-\w*" | awk {'print $9'} | cut -c5-)
   ```

   Check the result:

   ```shell
   echo $SE_MODULES
   ```

   Example output:

   ```console
   common priority-class deckhouse external-module-manager ...
   ```
   Retrieve the list of currently enabled embedded modules:

   ```shell
   USED_MODULES=$(d8 k get modules -o custom-columns=NAME:.metadata.name,SOURCE:.properties.source,STATE:.properties.state,ENABLED:.status.phase | grep Embedded | grep -E 'Enabled|Ready' | awk {'print $1'})
   ```

   Check the result:

   ```shell
   echo $USED_MODULES
   ```

   Example output:

   ```console
   admission-policy-engine cert-manager chrony ...
   ```
   Determine which modules must be disabled:

   ```shell
   MODULES_WILL_DISABLE=$(echo $USED_MODULES | tr ' ' '\n' | grep -Fxv -f <(echo $SE_MODULES | tr ' ' '\n'))
   ```
1. Make sure the modules currently used in the cluster are supported by the SE edition. To check which modules are not supported and will be disabled, run:

   ```shell
   echo $MODULES_WILL_DISABLE
   ```

   Review the list and make sure the functionality of these modules is not critical for your cluster before proceeding.

   Disable the unsupported modules:

   ```shell
   echo $MODULES_WILL_DISABLE | tr ' ' '\n' | awk {'print "d8 platform module disable",$1'} | bash
   ```

   Wait for the DKP pod to reach the `Ready` state.
1. Update the Deckhouse registry access secret:

   ```shell
   d8 k -n d8-system create secret generic deckhouse-registry \
     --from-literal=".dockerconfigjson"="{\"auths\": { \"registry.deckhouse.ru\": { \"username\": \"license-token\", \"password\": \"$LICENSE_TOKEN\", \"auth\": \"$AUTH_STRING\" }}}" \
     --from-literal="address"=registry.deckhouse.ru \
     --from-literal="path"=/deckhouse/se \
     --from-literal="scheme"=https \
     --type=kubernetes.io/dockerconfigjson \
     --dry-run=client \
     -o yaml | kubectl -n d8-system exec -i svc/deckhouse-leader -c deckhouse -- kubectl replace -f -
   ```
1. Apply the new `webhook-handler` image:

   ```shell
   HANDLER=$(d8 k exec se-image -- cat deckhouse/candi/images_digests.json | jq -r ".deckhouse.webhookHandler")
   d8 k --as=system:serviceaccount:d8-system:deckhouse -n d8-system set image deployment/webhook-handler handler=registry.deckhouse.ru/deckhouse/se@$HANDLER
   ```
1. Apply the DKP SE images:

   ```shell
   DECKHOUSE_KUBE_RBAC_PROXY=$(d8 k exec se-image -- cat deckhouse/candi/images_digests.json | jq -r ".common.kubeRbacProxy")
   DECKHOUSE_INIT_CONTAINER=$(d8 k exec se-image -- cat deckhouse/candi/images_digests.json | jq -r ".common.init")
   DECKHOUSE_VERSION=$(d8 k -n d8-system get deploy deckhouse -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "deckhouse") | .image' | awk -F: '{print $2}')
   d8 k --as=system:serviceaccount:d8-system:deckhouse -n d8-system set image deployment/deckhouse init-downloaded-modules=registry.deckhouse.ru/deckhouse/se@$DECKHOUSE_INIT_CONTAINER kube-rbac-proxy=registry.deckhouse.ru/deckhouse/se@$DECKHOUSE_KUBE_RBAC_PROXY deckhouse=registry.deckhouse.ru/deckhouse/se:$DECKHOUSE_VERSION
   ```

   You can check the currently installed DKP version with:

   ```shell
   d8 k get deckhousereleases | grep Deployed
   ```
1. Wait for the Deckhouse pod to reach the `Ready` status. If an `ImagePullBackOff` error occurs during the update, wait for the pod to restart automatically.

   To check the status of the DKP pod:

   ```shell
   d8 k -n d8-system get po -l app=deckhouse
   ```

   To check the status of the Deckhouse queue:

   ```shell
   d8 platform queue list
   ```
1. Make sure there are no running pods using the DKP EE registry address:

   ```shell
   d8 k get pods -A -o json | jq -r '.items[] | select(.status.phase=="Running" or .status.phase=="Pending" or .status.phase=="PodInitializing") | select(.spec.containers[] | select(.image | contains("deckhouse.ru/deckhouse/ee"))) | .metadata.namespace + "\t" + .metadata.name' | sort | uniq
   ```
1. Clean up temporary files, the NodeGroupConfiguration resource, and variables:

   ```shell
   d8 k delete ngc containerd-se-config.sh
   d8 k delete pod se-image
   d8 k apply -f - <<EOF
   apiVersion: deckhouse.io/v1alpha1
   kind: NodeGroupConfiguration
   metadata:
     name: del-temp-config.sh
   spec:
     nodeGroups:
     - '*'
     bundles:
     - '*'
     weight: 90
     content: |
       if [ -f /etc/containerd/conf.d/se-registry.toml ]; then
         rm -f /etc/containerd/conf.d/se-registry.toml
       fi
   EOF
   ```
   After Bashible synchronization completes (indicated by the `UPTODATE` status of the NodeGroup), delete the temporary NodeGroupConfiguration resource:

   ```shell
   d8 k delete ngc del-temp-config.sh
   ```
## Switching DKP from EE to CSE
This guide assumes the use of the public container registry address `registry-cse.deckhouse.ru`.
DKP CSE does not support cloud clusters and certain modules. See the edition comparison page for details on supported modules.
Migration to DKP CSE is only possible from DKP EE versions 1.58, 1.64, or 1.67.
The currently available DKP CSE versions are 1.58.2 for the 1.58 release, 1.64.1 for the 1.64 release, and 1.67.0 for the 1.67 release. Use these versions when setting the `DECKHOUSE_VERSION` variable in subsequent steps.
Migration is only supported between the same minor versions. For example, migrating from DKP EE 1.64 to DKP CSE 1.64 is allowed. Migrating from EE 1.58 to CSE 1.67 requires intermediate upgrades: first to EE 1.64, then to EE 1.67, and only then to CSE 1.67. Attempting to upgrade across multiple releases at once may render the cluster inoperable.
DKP CSE 1.58 and 1.64 support Kubernetes version 1.27. DKP CSE 1.67 supports Kubernetes versions 1.27 and 1.29.
A temporary disruption of cluster components may occur during the switch to DKP CSE.
To switch your Deckhouse Enterprise Edition cluster to Certified Security Edition, follow the steps below (all commands must be executed on a master node by a user with a configured `kubectl` context or with superuser privileges):
1. Configure the cluster to use the required Kubernetes version (see the note above regarding the available Kubernetes versions). To do this, run the following command:

   ```shell
   d8 platform edit cluster-configuration
   ```

   - Change the `kubernetesVersion` parameter to the desired value, for example, `"1.27"` (in quotes) for Kubernetes 1.27.
   - Save the changes. The cluster nodes will begin updating sequentially.
   - Wait for the update to complete. You can monitor the update progress using the `d8 k get no` command. The update is considered complete when the `VERSION` column for each node shows the updated version.
Prepare the license token variables and create a NodeGroupConfiguration resource to configure temporary authorization for access to
registry-cse.deckhouse.ru
:LICENSE_TOKEN=<PUT_YOUR_LICENSE_TOKEN_HERE> AUTH_STRING="$(echo -n license-token:${LICENSE_TOKEN} | base64 )" d8 k apply -f - <<EOF --- apiVersion: deckhouse.io/v1alpha1 kind: NodeGroupConfiguration metadata: name: containerd-cse-config.sh spec: nodeGroups: - '*' bundles: - '*' weight: 30 content: | _on_containerd_config_changed() { bb-flag-set containerd-need-restart } bb-event-on 'containerd-config-file-changed' '_on_containerd_config_changed' mkdir -p /etc/containerd/conf.d bb-sync-file /etc/containerd/conf.d/cse-registry.toml - containerd-config-file-changed << "EOF_TOML" [plugins] [plugins."io.containerd.grpc.v1.cri"] [plugins."io.containerd.grpc.v1.cri".registry] [plugins."io.containerd.grpc.v1.cri".registry.mirrors] [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry-cse.deckhouse.ru"] endpoint = ["https://registry-cse.deckhouse.ru"] [plugins."io.containerd.grpc.v1.cri".registry.configs] [plugins."io.containerd.grpc.v1.cri".registry.configs."registry-cse.deckhouse.ru".auth] auth = "$AUTH_STRING" EOF_TOML EOF
   Wait until synchronization is complete and the `/etc/containerd/conf.d/cse-registry.toml` file appears on the nodes.

   You can monitor the synchronization status using the `UPTODATE` value (the number of nodes in this status should match the total number of nodes (`NODES`) in the group):

   ```shell
   d8 k get ng -o custom-columns=NAME:.metadata.name,NODES:.status.nodes,READY:.status.ready,UPTODATE:.status.upToDate -w
   ```

   Example output:

   ```console
   NAME     NODES   READY   UPTODATE
   master   1       1       1
   worker   2       2       2
   ```

   In the systemd log of the `bashible` service, the `Configuration is in sync, nothing to do` message should appear, indicating successful synchronization:

   ```shell
   journalctl -u bashible -n 5
   ```

   Example output:

   ```console
   Aug 21 11:04:28 master-ee-to-cse-0 bashible.sh[53407]: Configuration is in sync, nothing to do.
   Aug 21 11:04:28 master-ee-to-cse-0 bashible.sh[53407]: Annotate node master-ee-to-cse-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
   Aug 21 11:04:29 master-ee-to-cse-0 bashible.sh[53407]: Successful annotate node master-ee-to-cse-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
   Aug 21 11:04:29 master-ee-to-cse-0 systemd[1]: bashible.service: Deactivated successfully.
   ```
1. Run the following commands to start a temporary DKP CSE pod and retrieve the current image digests and module list:

   ```shell
   DECKHOUSE_VERSION=v<DECKHOUSE_VERSION_CSE> # For example, DECKHOUSE_VERSION=v1.58.2
   d8 k run cse-image --image=registry-cse.deckhouse.ru/deckhouse/cse/install:$DECKHOUSE_VERSION --command sleep -- infinity
   ```

   Once the pod reaches the `Running` status, execute the following commands:

   ```shell
   CSE_SANDBOX_IMAGE=$(d8 k exec cse-image -- cat deckhouse/candi/images_digests.json | grep pause | grep -oE 'sha256:\w*')
   CSE_K8S_API_PROXY=$(d8 k exec cse-image -- cat deckhouse/candi/images_digests.json | grep kubernetesApiProxy | grep -oE 'sha256:\w*')
   CSE_MODULES=$(d8 k exec cse-image -- ls -l deckhouse/modules/ | awk {'print $9'} | grep -oP "\d.*-\w*" | cut -c5-)
   USED_MODULES=$(d8 k get modules -o custom-columns=NAME:.metadata.name,SOURCE:.properties.source,STATE:.properties.state,ENABLED:.status.phase | grep Embedded | grep -E 'Enabled|Ready' | awk {'print $1'})
   MODULES_WILL_DISABLE=$(echo $USED_MODULES | tr ' ' '\n' | grep -Fxv -f <(echo $CSE_MODULES | tr ' ' '\n'))
   CSE_DECKHOUSE_KUBE_RBAC_PROXY=$(d8 k exec cse-image -- cat deckhouse/candi/images_digests.json | jq -r ".common.kubeRbacProxy")
   ```

   An additional command is required only when switching to DKP CSE version 1.64:

   ```shell
   CSE_DECKHOUSE_INIT_CONTAINER=$(d8 k exec cse-image -- cat deckhouse/candi/images_digests.json | jq -r ".common.init")
   ```
1. Make sure that the modules currently used in the cluster are supported in DKP CSE. For example, the `cert-manager` module is not available in DKP CSE 1.58 and 1.64. Before disabling the `cert-manager` module, you must switch the HTTPS mode of certain components (such as `user-authn` or `prometheus`) to alternative modes, or change the global HTTPS mode parameter accordingly.

   To display the list of modules that are not supported in DKP CSE and will be disabled, run:

   ```shell
   echo $MODULES_WILL_DISABLE
   ```

   Review the list and make sure that the listed modules are not actively used in your cluster and that you are ready to disable them.

   Disable the modules not supported in DKP CSE:

   ```shell
   echo $MODULES_WILL_DISABLE | tr ' ' '\n' | awk {'print "d8 platform module disable",$1'} | bash
   ```
   The `earlyOOM` component is not supported in DKP CSE. Disable it using the `earlyOomEnabled` setting.

   Wait for the DKP pod to reach the `Ready` status and for all tasks in the queue to complete:

   ```shell
   d8 platform queue list
   ```

   Verify that the disabled modules are now in the `Disabled` state:

   ```shell
   d8 k get modules
   ```
1. Create a NodeGroupConfiguration resource:

   ```shell
   d8 k apply -f - <<EOF
   apiVersion: deckhouse.io/v1alpha1
   kind: NodeGroupConfiguration
   metadata:
     name: cse-set-sha-images.sh
   spec:
     nodeGroups:
     - '*'
     bundles:
     - '*'
     weight: 50
     content: |
       _on_containerd_config_changed() {
         bb-flag-set containerd-need-restart
       }
       bb-event-on 'containerd-config-file-changed' '_on_containerd_config_changed'
       bb-sync-file /etc/containerd/conf.d/cse-sandbox.toml - containerd-config-file-changed << "EOF_TOML"
       [plugins]
         [plugins."io.containerd.grpc.v1.cri"]
           sandbox_image = "registry-cse.deckhouse.ru/deckhouse/cse@$CSE_SANDBOX_IMAGE"
       EOF_TOML
       sed -i 's|image: .*|image: registry-cse.deckhouse.ru/deckhouse/cse@$CSE_K8S_API_PROXY|' /var/lib/bashible/bundle_steps/051_pull_and_configure_kubernetes_api_proxy.sh
       sed -i 's|crictl pull .*|crictl pull registry-cse.deckhouse.ru/deckhouse/cse@$CSE_K8S_API_PROXY|' /var/lib/bashible/bundle_steps/051_pull_and_configure_kubernetes_api_proxy.sh
   EOF
   ```
   Wait for `bashible` synchronization to complete on all nodes.

   You can track the synchronization status by checking the `UPTODATE` value (the number of nodes in this state should match the total number of nodes (`NODES`) in the group):

   ```shell
   d8 k get ng -o custom-columns=NAME:.metadata.name,NODES:.status.nodes,READY:.status.ready,UPTODATE:.status.upToDate -w
   ```

   The following message should appear in the `bashible` systemd service logs on the nodes, indicating that the configuration is fully synchronized:

   ```shell
   journalctl -u bashible -n 5
   ```

   Example output:

   ```console
   Aug 21 11:04:28 master-ee-to-cse-0 bashible.sh[53407]: Configuration is in sync, nothing to do.
   Aug 21 11:04:28 master-ee-to-cse-0 bashible.sh[53407]: Annotate node master-ee-to-cse-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
   Aug 21 11:04:29 master-ee-to-cse-0 bashible.sh[53407]: Successful annotate node master-ee-to-cse-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
   Aug 21 11:04:29 master-ee-to-cse-0 systemd[1]: bashible.service: Deactivated successfully.
   ```
1. Update the secret for accessing the DKP CSE registry:

   ```shell
   d8 k -n d8-system create secret generic deckhouse-registry \
     --from-literal=".dockerconfigjson"="{\"auths\": { \"registry-cse.deckhouse.ru\": { \"username\": \"license-token\", \"password\": \"$LICENSE_TOKEN\", \"auth\": \"$AUTH_STRING\" }}}" \
     --from-literal="address"=registry-cse.deckhouse.ru \
     --from-literal="path"=/deckhouse/cse \
     --from-literal="scheme"=https \
     --type=kubernetes.io/dockerconfigjson \
     --dry-run='client' \
     -o yaml | kubectl -n d8-system exec -i svc/deckhouse-leader -c deckhouse -- kubectl replace -f -
   ```
1. Update the DKP image to use the DKP CSE image.

   For DKP CSE version 1.58:

   ```shell
   d8 k -n d8-system set image deployment/deckhouse kube-rbac-proxy=registry-cse.deckhouse.ru/deckhouse/cse@$CSE_DECKHOUSE_KUBE_RBAC_PROXY deckhouse=registry-cse.deckhouse.ru/deckhouse/cse:$DECKHOUSE_VERSION
   ```

   For DKP CSE versions 1.64 and 1.67:

   ```shell
   d8 k -n d8-system set image deployment/deckhouse init-downloaded-modules=registry-cse.deckhouse.ru/deckhouse/cse@$CSE_DECKHOUSE_INIT_CONTAINER kube-rbac-proxy=registry-cse.deckhouse.ru/deckhouse/cse@$CSE_DECKHOUSE_KUBE_RBAC_PROXY deckhouse=registry-cse.deckhouse.ru/deckhouse/cse:$DECKHOUSE_VERSION
   ```
1. Wait for the DKP pod to reach the `Ready` status and for all tasks in the queue to complete. If the `ImagePullBackOff` error occurs, wait for the pod to restart automatically.

   Check the DKP pod status:

   ```shell
   d8 k -n d8-system get po -l app=deckhouse
   ```

   Check the DKP task queue:

   ```shell
   d8 platform queue list
   ```
1. Verify that no pods are using the EE registry image:

   ```shell
   d8 k get pods -A -o json | jq -r '.items[] | select(.spec.containers[] | select(.image | contains("deckhouse.ru/deckhouse/ee"))) | .metadata.namespace + "\t" + .metadata.name' | sort | uniq
   ```

   If the output contains pods from the `chrony` module, re-enable the module (it is disabled by default in DKP CSE):

   ```shell
   d8 platform module enable chrony
   ```
1. Clean up temporary files, the NodeGroupConfiguration resource, and temporary variables:

   ```shell
   rm /tmp/cse-deckhouse-registry.yaml
   d8 k delete ngc containerd-cse-config.sh cse-set-sha-images.sh
   d8 k delete pod cse-image
   d8 k apply -f - <<EOF
   apiVersion: deckhouse.io/v1alpha1
   kind: NodeGroupConfiguration
   metadata:
     name: del-temp-config.sh
   spec:
     nodeGroups:
     - '*'
     bundles:
     - '*'
     weight: 90
     content: |
       if [ -f /etc/containerd/conf.d/cse-registry.toml ]; then
         rm -f /etc/containerd/conf.d/cse-registry.toml
       fi
       if [ -f /etc/containerd/conf.d/cse-sandbox.toml ]; then
         rm -f /etc/containerd/conf.d/cse-sandbox.toml
       fi
   EOF
   ```
   After synchronization (track the status by the `UPTODATE` value of the NodeGroup), delete the cleanup configuration:

   ```shell
   d8 k delete ngc del-temp-config.sh
   ```
## Switching DKP to CE/BE/SE/SE+/EE
When using the `registry` module, switching between editions is only possible in `Unmanaged` mode. To switch to `Unmanaged` mode, follow the instruction.
- The functionality of this guide is validated for Deckhouse versions starting from `v1.70`. If your version is older, use the corresponding documentation.
- For commercial editions, you need a valid license key that supports the desired edition. If necessary, you can request a temporary key.
- The guide assumes the use of the public container registry address `registry.deckhouse.io`. If you are using a different container registry address, modify the commands accordingly or refer to the guide on switching Deckhouse to use a different registry.
- The Deckhouse CE/BE/SE/SE+ editions do not support the cloud providers `dynamix`, `openstack`, `VCD`, and `vSphere` (vSphere is supported in SE+), as well as a number of modules.
- All commands are executed on the master node of the existing cluster as the `root` user.
- Prepare variables for the license token and the new edition name:

  It is not necessary to fill the `NEW_EDITION` and `AUTH_STRING` variables when switching to the Deckhouse CE edition.

  The `NEW_EDITION` variable should match your desired Deckhouse edition. For example, to switch to:
  - CE, the variable should be `ce`;
  - BE, the variable should be `be`;
  - SE, the variable should be `se`;
  - SE+, the variable should be `se-plus`;
  - EE, the variable should be `ee`.

  ```shell
  NEW_EDITION=<PUT_YOUR_EDITION_HERE>
  LICENSE_TOKEN=<PUT_YOUR_LICENSE_TOKEN_HERE>
  AUTH_STRING="$(echo -n license-token:${LICENSE_TOKEN} | base64)"
  ```
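As a quick sanity check, `AUTH_STRING` is nothing more than the base64-encoded `license-token:<token>` pair used for registry basic auth. A minimal sketch with a hypothetical placeholder token (not a real license):

```shell
# Hypothetical token for illustration only:
LICENSE_TOKEN="example-token"
# Same construction as above; -n avoids encoding a trailing newline:
AUTH_STRING="$(echo -n "license-token:${LICENSE_TOKEN}" | base64)"
# Decoding restores the original user:token pair:
echo "$AUTH_STRING" | base64 -d   # prints: license-token:example-token
```

If the decoded value does not match `license-token:<your token>`, the variable was filled incorrectly.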
- Ensure that the Deckhouse queue is empty and error-free.
- Create a NodeGroupConfiguration resource for temporary authorization in `registry.deckhouse.io`:

  Skip this step if switching to Deckhouse CE.

  ```shell
  d8 k apply -f - <<EOF
  apiVersion: deckhouse.io/v1alpha1
  kind: NodeGroupConfiguration
  metadata:
    name: containerd-$NEW_EDITION-config.sh
  spec:
    nodeGroups:
    - '*'
    bundles:
    - '*'
    weight: 30
    content: |
      _on_containerd_config_changed() {
        bb-flag-set containerd-need-restart
      }
      bb-event-on 'containerd-config-file-changed' '_on_containerd_config_changed'
      mkdir -p /etc/containerd/conf.d
      bb-sync-file /etc/containerd/conf.d/$NEW_EDITION-registry.toml - containerd-config-file-changed << "EOF_TOML"
      [plugins]
        [plugins."io.containerd.grpc.v1.cri"]
          [plugins."io.containerd.grpc.v1.cri".registry.configs]
            [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.deckhouse.io".auth]
              auth = "$AUTH_STRING"
      EOF_TOML
  EOF
  ```
  Wait for the `/etc/containerd/conf.d/$NEW_EDITION-registry.toml` file to appear on the nodes and for bashible synchronization to complete. To track the synchronization status, check the `UPTODATE` value (the number of nodes in this status should match the total number of nodes (`NODES`) in the group):

  ```shell
  d8 k get ng -o custom-columns=NAME:.metadata.name,NODES:.status.nodes,READY:.status.ready,UPTODATE:.status.upToDate -w
  ```

  Example output:

  ```console
  NAME     NODES   READY   UPTODATE
  master   1       1       1
  worker   2       2       2
  ```

  Also, a message stating `Configuration is in sync, nothing to do` should appear in the bashible systemd service log, which you can check with:

  ```shell
  journalctl -u bashible -n 5
  ```

  Example output:

  ```console
  Aug 21 11:04:28 master-ee-to-se-0 bashible.sh[53407]: Configuration is in sync, nothing to do.
  Aug 21 11:04:28 master-ee-to-se-0 bashible.sh[53407]: Annotate node master-ee-to-se-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
  Aug 21 11:04:29 master-ee-to-se-0 bashible.sh[53407]: Successful annotate node master-ee-to-se-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
  Aug 21 11:04:29 master-ee-to-se-0 systemd[1]: bashible.service: Deactivated successfully.
  ```
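As a rough illustration (using sample rows rather than a live cluster), comparing the `NODES` and `UPTODATE` columns from the `d8 k get ng` output above can be sketched with awk; any NodeGroup whose counts differ is still syncing:

```shell
# Sample data rows in the NAME NODES READY UPTODATE column order shown above:
SAMPLE="master 1 1 1
worker 2 2 2"
# Print the names of groups where UPTODATE ($4) lags behind NODES ($2):
OUT_OF_SYNC=$(echo "$SAMPLE" | awk '$2 != $4 {print $1}')
[ -z "$OUT_OF_SYNC" ] && echo "all node groups are up to date"
```

With the sample above the check passes; a row such as `worker 2 2 1` would print `worker` instead.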
- Start a temporary pod for the new Deckhouse edition to obtain the current digests and the list of modules:

  ```shell
  DECKHOUSE_VERSION=$(d8 k -n d8-system get deploy deckhouse -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "deckhouse") | .image' | awk -F: '{print $2}')
  d8 k run $NEW_EDITION-image --image=registry.deckhouse.io/deckhouse/$NEW_EDITION/install:$DECKHOUSE_VERSION --command -- sleep infinity
  ```
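The `DECKHOUSE_VERSION` line above splits the image reference on `:` to get the tag. A minimal sketch of that idiom, with a hypothetical image reference:

```shell
# Hypothetical image reference (not taken from a real cluster):
IMAGE="registry.deckhouse.io/deckhouse/ee:v1.70.3"
# awk -F: splits on ':' so $2 is the tag portion:
TAG=$(echo "$IMAGE" | awk -F: '{print $2}')
echo "$TAG"   # v1.70.3
```

Note that this simple split assumes a tag-based reference; a digest-based reference (`...@sha256:...`) would need different handling.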
- Once the pod is in the `Running` state, execute the following commands:

  ```shell
  NEW_EDITION_MODULES=$(d8 k exec $NEW_EDITION-image -- ls -l deckhouse/modules/ | grep -oE "\d.*-\w*" | awk '{print $9}' | cut -c5-)
  USED_MODULES=$(d8 k get modules -o custom-columns=NAME:.metadata.name,SOURCE:.properties.source,STATE:.properties.state,ENABLED:.status.phase | grep Embedded | grep -E 'Enabled|Ready' | awk '{print $1}')
  MODULES_WILL_DISABLE=$(echo $USED_MODULES | tr ' ' '\n' | grep -Fxv -f <(echo $NEW_EDITION_MODULES | tr ' ' '\n'))
  ```
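`MODULES_WILL_DISABLE` is simply the set difference between the modules in use and the modules shipped in the new edition. A sketch with hypothetical module lists (the `<( )` process substitution requires bash); the generated command is printed instead of being piped to `bash`, so nothing is executed here:

```shell
# Hypothetical module lists for illustration:
USED_MODULES="chrony cilium-hubble node-local-dns"
NEW_EDITION_MODULES="chrony node-local-dns"
# grep -Fxv -f FILE keeps only stdin lines that match no whole line in FILE:
MODULES_WILL_DISABLE=$(echo $USED_MODULES | tr ' ' '\n' | grep -Fxv -f <(echo $NEW_EDITION_MODULES | tr ' ' '\n'))
echo "$MODULES_WILL_DISABLE"   # cilium-hubble
# The later disable step turns each leftover name into a command:
echo $MODULES_WILL_DISABLE | tr ' ' '\n' | awk '{print "d8 platform module disable",$1}'
```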
- Verify that the modules used in the cluster are supported in the desired edition. To see the list of modules that are not supported in the new edition and will be disabled:

  ```shell
  echo $MODULES_WILL_DISABLE
  ```

  Check the list to make sure the functionality of these modules is not in use in your cluster and that you are ready to disable them.

  Disable the modules that are not supported by the new edition:

  ```shell
  echo $MODULES_WILL_DISABLE | tr ' ' '\n' | awk '{print "d8 platform module disable",$1}' | bash
  ```

  Wait for the Deckhouse pod to reach the `Ready` state and for all tasks in the queue to complete.

- Execute the `deckhouse-controller helper change-registry` command from the Deckhouse pod with the new edition parameters.

  To switch to the BE/SE/SE+/EE editions:

  ```shell
  DOCKER_CONFIG_JSON=$(echo -n "{\"auths\": {\"registry.deckhouse.io\": {\"username\": \"license-token\", \"password\": \"${LICENSE_TOKEN}\", \"auth\": \"${AUTH_STRING}\"}}}" | base64 -w 0)
  d8 k --as system:sudouser -n d8-cloud-instance-manager patch secret deckhouse-registry --type merge --patch="{\"data\":{\".dockerconfigjson\":\"$DOCKER_CONFIG_JSON\"}}"
  d8 k -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- deckhouse-controller helper change-registry --user=license-token --password=$LICENSE_TOKEN --new-deckhouse-tag=$DECKHOUSE_VERSION registry.deckhouse.io/deckhouse/$NEW_EDITION
  ```

  To switch to the CE edition:

  ```shell
  d8 k -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- deckhouse-controller helper change-registry --new-deckhouse-tag=$DECKHOUSE_VERSION registry.deckhouse.io/deckhouse/ce
  ```
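The `DOCKER_CONFIG_JSON` value built above is a standard Docker `config.json` auth document, base64-encoded for the `.dockerconfigjson` field of the registry secret. A sketch with a hypothetical token showing what the secret actually receives:

```shell
# Hypothetical credentials for illustration only:
LICENSE_TOKEN="example-token"
AUTH_STRING="$(echo -n "license-token:${LICENSE_TOKEN}" | base64)"
# Same construction as above; base64 -w 0 disables line wrapping so the
# value is safe to embed in a JSON patch:
DOCKER_CONFIG_JSON=$(echo -n "{\"auths\": {\"registry.deckhouse.io\": {\"username\": \"license-token\", \"password\": \"${LICENSE_TOKEN}\", \"auth\": \"${AUTH_STRING}\"}}}" | base64 -w 0)
# Decoding shows the familiar Docker config structure:
echo "$DOCKER_CONFIG_JSON" | base64 -d
```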
- Check whether any pods with the old DKP edition image are left in the cluster, where `<YOUR-PREVIOUS-EDITION>` is your previous edition name:

  ```shell
  d8 k get pods -A -o json | jq -r '.items[] | select(.spec.containers[] | select(.image | contains("deckhouse.io/deckhouse/<YOUR-PREVIOUS-EDITION>"))) | .metadata.namespace + "\t" + .metadata.name' | sort | uniq
  ```
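The jq query above needs a live cluster; its selection logic can be sketched on plain text with awk. The sample rows and the `ee` previous edition here are hypothetical:

```shell
# Sample "namespace pod image" rows (hypothetical):
SAMPLE="d8-system deckhouse-abc registry.deckhouse.io/deckhouse/se:v1.70.3
d8-monitoring grafana-xyz registry.deckhouse.io/deckhouse/ee:v1.70.3"
# Keep rows whose image still points at the previous edition (ee here),
# printing namespace and pod name like the jq query does:
echo "$SAMPLE" | awk '$3 ~ /deckhouse\.io\/deckhouse\/ee/ {print $1 "\t" $2}' | sort | uniq
```

Only the second row would be reported, meaning that pod is still running an old-edition image.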
- Delete temporary files, the NodeGroupConfiguration resource, and variables:

  Skip this step if switching to Deckhouse CE.

  ```shell
  d8 k delete ngc containerd-$NEW_EDITION-config.sh
  d8 k delete pod $NEW_EDITION-image
  d8 k apply -f - <<EOF
  apiVersion: deckhouse.io/v1alpha1
  kind: NodeGroupConfiguration
  metadata:
    name: del-temp-config.sh
  spec:
    nodeGroups:
    - '*'
    bundles:
    - '*'
    weight: 90
    content: |
      if [ -f /etc/containerd/conf.d/$NEW_EDITION-registry.toml ]; then
        rm -f /etc/containerd/conf.d/$NEW_EDITION-registry.toml
      fi
  EOF
  ```
  After the bashible synchronization completes (the synchronization status on the nodes is shown by the `UPTODATE` value in the NodeGroup), delete the created NodeGroupConfiguration resource:

  ```shell
  d8 k delete ngc del-temp-config.sh
  ```