How do I find out all Deckhouse parameters?
Deckhouse is configured using global settings, module settings, and various custom resources. Read more in the documentation.
- Display global Deckhouse settings:
  kubectl get mc global -o yaml
- List the status of all modules (available for Deckhouse version 1.47+):
  kubectl get modules
- Display the settings of the user-authn module configuration:
  kubectl get moduleconfigs user-authn -o yaml
How do I find the documentation for the version installed?
The documentation for the Deckhouse version running in the cluster is available at documentation.<cluster_domain>, where <cluster_domain> is the DNS name that matches the template defined in the modules.publicDomainTemplate parameter.
Documentation is only available when the documentation module is enabled. It is enabled by default, except in the Minimal bundle.
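Assuming publicDomainTemplate has the usual %s-style form, the documentation address can be derived from it in the shell. A small sketch; the template value below is hypothetical, and the commented kubectl line shows how you would fetch the real one:

```shell
# On a real cluster, read the template with:
#   kubectl get mc global -o jsonpath='{.spec.settings.modules.publicDomainTemplate}'
# Suppose it returned "%s.example.com" (a hypothetical value):
template='%s.example.com'

# Substitute the "documentation" service name into the template:
printf "https://${template}\n" documentation
# → https://documentation.example.com
```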
Deckhouse update
How to find out in which mode the cluster is being updated?
You can view the cluster update mode in the configuration of the deckhouse module. To do this, run the following command:
kubectl get mc deckhouse -o yaml
Example of the output:
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  creationTimestamp: "2022-12-14T11:13:03Z"
  generation: 1
  name: deckhouse
  resourceVersion: "3258626079"
  uid: c64a2532-af0d-496b-b4b7-eafb5d9a56ee
spec:
  settings:
    releaseChannel: Stable
    update:
      windows:
        - days:
            - Mon
          from: "19:00"
          to: "20:00"
  version: 1
status:
  state: Enabled
  status: ""
  type: Embedded
  version: "1"
There are three possible update modes:
- Automatic + update windows are not set. The cluster will be updated after the new version appears on the corresponding release channel.
- Automatic + update windows are set. The cluster will be updated in the nearest available window after the new version appears on the release channel.
- Manual. Manual action is required to apply the update.
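The three modes above reduce to a small decision rule. A minimal sketch; the function name and its inputs are illustrative, not part of Deckhouse:

```shell
# Given the update.mode value and whether update windows are configured,
# name the effective update behavior (illustrative helper, not a Deckhouse tool).
effective_update_mode() {
  local mode="$1" windows_configured="$2"
  if [ "$mode" = "Manual" ]; then
    echo "Manual: manual action is required to apply the update"
  elif [ "$windows_configured" = "yes" ]; then
    echo "Automatic: the update is applied in the nearest update window"
  else
    echo "Automatic: the update is applied as soon as it appears on the channel"
  fi
}

effective_update_mode "Auto" "yes"
# → Automatic: the update is applied in the nearest update window
```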
How do I set the desired release channel?
Change (set) the releaseChannel parameter in the deckhouse module configuration to automatically switch to another release channel. This activates the mechanism of automatic stabilization of the release channel.
Here is an example of the deckhouse module configuration with the Stable release channel:
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  settings:
    releaseChannel: Stable
How do I disable automatic updates?
To completely disable the Deckhouse update mechanism, remove the releaseChannel parameter from the deckhouse module configuration.
In this case, Deckhouse does not check for updates and does not apply patch releases.
Disabling automatic updates is highly discouraged! It blocks updates to patch releases that may contain fixes for critical vulnerabilities and bugs.
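Instead of editing the ModuleConfig interactively, the parameter can be removed with a JSON patch. A sketch; it assumes releaseChannel is currently present at spec.settings.releaseChannel (the patch fails otherwise):

```shell
# JSON Patch that deletes the releaseChannel parameter from the module config.
PATCH='[{"op":"remove","path":"/spec/settings/releaseChannel"}]'
echo "$PATCH"

# Apply it to the deckhouse ModuleConfig (requires cluster access):
#   kubectl patch mc deckhouse --type=json -p "$PATCH"
```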
How do I apply an update without waiting for the update window, canary-release settings, or manual update mode?
To apply an update immediately, set the release.deckhouse.io/apply-now: "true" annotation on the DeckhouseRelease resource.
Caution! In this case, the update windows, canary-release settings, and manual cluster update mode will be ignored. The update will be applied immediately after the annotation is set.
An example of a command to set the annotation to skip the update windows for version v1.56.2:
kubectl annotate deckhousereleases v1.56.2 release.deckhouse.io/apply-now="true"
An example of a resource with the update window skipping annotation in place:
apiVersion: deckhouse.io/v1alpha1
kind: DeckhouseRelease
metadata:
  annotations:
    release.deckhouse.io/apply-now: "true"
...
How to understand what changes the update contains and how it will affect the cluster?
You can find all the information about Deckhouse versions in the list of Deckhouse releases.
Summary information about important changes, component version updates, and which components in the cluster will be restarted during the update process can be found in the description of the zero patch version of the release. For example, v1.46.0 for the v1.46 Deckhouse release.
A detailed list of changes can be found in the Changelog, which is referenced in each release.
How do I understand that the cluster is being updated?
During the update:
- The DeckhouseUpdating alert is displayed.
- The deckhouse Pod is not in the Ready status. If the Pod does not reach the Ready status for a long time, this may indicate problems in the operation of Deckhouse, and diagnosis is necessary.
How do I know that the update was successful?
If the DeckhouseUpdating alert is resolved, the update is complete.
You can also check the status of Deckhouse releases by running the following command:
kubectl get deckhouserelease
Example output:
NAME      PHASE        TRANSITIONTIME   MESSAGE
v1.46.8   Superseded   13d
v1.46.9   Superseded   11d
v1.47.0   Superseded   4h12m
v1.47.1   Deployed     4h12m
The Deployed status of the corresponding version indicates that the switch to that version was performed (but this does not mean that it finished successfully).
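To extract just the currently Deployed version from such output, the PHASE column can be filtered with awk. A sketch over the sample output above; on a live cluster you would pipe `kubectl get deckhouserelease` into awk instead of printf:

```shell
# Sample lines in the same shape as `kubectl get deckhouserelease` output:
printf '%s\n' \
  'v1.46.8   Superseded   13d' \
  'v1.46.9   Superseded   11d' \
  'v1.47.0   Superseded   4h12m' \
  'v1.47.1   Deployed     4h12m' |
  awk '$2 == "Deployed" {print $1}'
# → v1.47.1
```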
Check the status of the Deckhouse Pod:
kubectl -n d8-system get pods -l app=deckhouse
Example output:
NAME                         READY   STATUS    RESTARTS   AGE
deckhouse-7844b47bcd-qtbx9   1/1     Running   0          1d
- If the status of the Pod is Running and 1/1 is indicated in the READY column, the update was completed successfully.
- If the status of the Pod is Running and 0/1 is indicated in the READY column, the update is not over yet. If this goes on for more than 20-30 minutes, it may indicate problems in the operation of Deckhouse, and diagnosis is necessary.
- If the status of the Pod is not Running, it may indicate problems in the operation of Deckhouse, and diagnosis is necessary.
Possible options for action if something went wrong:
- Check Deckhouse logs using the following command:
  kubectl -n d8-system logs -f -l app=deckhouse | jq -Rr 'fromjson? | .msg'
- Collect debugging information and contact technical support.
- Ask for help from the community.
How do I know that a new version is available for the cluster?
As soon as a new version of Deckhouse appears on the release channel installed in the cluster:
- The DeckhouseReleaseIsWaitingManualApproval alert fires, if the cluster uses manual update mode (the update.mode parameter is set to Manual).
- A new DeckhouseRelease custom resource appears. Use the kubectl get deckhousereleases command to view the list of releases. If the DeckhouseRelease is in the Pending state, the specified version has not yet been installed. Possible reasons why a DeckhouseRelease may be in Pending:
  - Manual update mode is set (the update.mode parameter is set to Manual).
  - Automatic update mode is set, and update windows are configured, but their interval has not yet come.
  - Automatic update mode is set, update windows are not configured, but the installation of the version has been postponed for a random time by the mechanism that reduces the load on the container image registry. There will be a corresponding message in the status.message field of the DeckhouseRelease resource.
  - The update.notification.minimalNotificationTime parameter is set, and the specified time has not passed yet.
How do I get information about the upcoming update in advance?
You can get information in advance about updating minor versions of Deckhouse on the release channel in the following ways:
- Configure manual update mode. In this case, when a new version appears on the release channel, the DeckhouseReleaseIsWaitingManualApproval alert will be displayed and a new DeckhouseRelease custom resource will appear in the cluster.
- Configure automatic update mode and specify the minimum time in the minimalNotificationTime parameter for which the update will be postponed. In this case, when a new version appears on the release channel, a new DeckhouseRelease custom resource will appear in the cluster. If you specify a URL in the update.notification.webhook parameter, the webhook will be called additionally.
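Both notification options can be combined in the deckhouse ModuleConfig. A sketch; the webhook URL and the 8h delay are example values:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  settings:
    releaseChannel: Stable
    update:
      notification:
        # Postpone the update for at least 8 hours after the release
        # is discovered (example value).
        minimalNotificationTime: 8h
        # Hypothetical endpoint called when a new release appears.
        webhook: https://release-webhook.example.com
```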
How do I find out which version of Deckhouse is on which release channel?
Information about which version of Deckhouse is on which release channel can be obtained at https://releases.deckhouse.io.
How does automatic Deckhouse update work?
Every minute, Deckhouse checks whether a new release has appeared on the release channel specified by the releaseChannel parameter.
When a new release appears on the release channel, Deckhouse downloads it and creates the DeckhouseRelease custom resource.
After the DeckhouseRelease custom resource is created in the cluster, Deckhouse updates the deckhouse Deployment and sets the image tag to the release tag, according to the selected update mode and update windows (automatic at any time by default).
To get the list and status of all releases, use the following command:
kubectl get deckhousereleases
Starting from DKP 1.70, patch releases (e.g., an update from version 1.70.1 to version 1.70.2) are installed taking the update windows into account. Prior to DKP 1.70, patch version updates ignored update window settings and were applied as soon as they became available.
What happens when the release channel changes?
- When switching to a more stable release channel (e.g., from Alpha to EarlyAccess), Deckhouse downloads release data from the release channel (the EarlyAccess release channel in the example) and compares it with the existing DeckhouseRelease resources:
  - Deckhouse deletes later releases (by semver) that have not yet been applied (with the Pending status).
  - If the latest releases have already been Deployed, Deckhouse will hold the current release until a later release appears on the release channel (on the EarlyAccess release channel in the example).
- When switching to a less stable release channel (e.g., from EarlyAccess to Alpha), the following actions take place:
  - Deckhouse downloads release data from the release channel (the Alpha release channel in the example) and compares it with the existing DeckhouseRelease resources.
  - Then Deckhouse performs the update according to the update parameters.
What do I do if Deckhouse fails to retrieve updates from the release channel?
- Make sure that the desired release channel is configured.
- Make sure that the DNS name of the Deckhouse container registry is resolved correctly.
- Retrieve and compare the IP addresses of the Deckhouse container registry (registry.deckhouse.io) on one of the nodes and in the Deckhouse pod. They should match.
  To retrieve the IP address of the Deckhouse container registry on a node, run the following command:
  getent ahosts registry.deckhouse.io
  Example output:
  46.4.145.194    STREAM registry.deckhouse.io
  46.4.145.194    DGRAM
  46.4.145.194    RAW
  To retrieve the IP address of the Deckhouse container registry in a pod, run the following command:
  kubectl -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- getent ahosts registry.deckhouse.io
  Example output:
  46.4.145.194    STREAM registry.deckhouse.io
  46.4.145.194    DGRAM registry.deckhouse.io
  If the retrieved IP addresses do not match, inspect the DNS settings on the host. Specifically, check the list of domains in the search parameter of the /etc/resolv.conf file (it affects name resolution in the Deckhouse pod). If the search parameter of the /etc/resolv.conf file includes a domain where wildcard record resolution is configured, it may result in incorrect resolution of the IP address of the Deckhouse container registry (see the following example).
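A quick way to inspect the suspect setting is to look at the search line of /etc/resolv.conf. The sketch below runs against a sample file so the output is predictable; on a real node you would grep /etc/resolv.conf itself:

```shell
# Sample resolv.conf content (hypothetical search domains):
cat > /tmp/resolv.conf.sample <<'EOF'
search company.my cloud.company.my
nameserver 10.0.0.2
EOF

# Any domain listed here that serves wildcard DNS records can hijack
# resolution of registry.deckhouse.io inside the pod:
grep '^search' /tmp/resolv.conf.sample
# → search company.my cloud.company.my
```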
How to check the job queue in Deckhouse?
To view the status of all Deckhouse job queues, run the following command:
kubectl -n d8-system exec -it svc/deckhouse-leader -c deckhouse -- deckhouse-controller queue list
Example of the output (queues are empty):
Summary:
- 'main' queue: empty.
- 88 other queues (0 active, 88 empty): 0 tasks.
- no tasks to handle.
To view the status of the main Deckhouse task queue, run the following command:
kubectl -n d8-system exec -it svc/deckhouse-leader -c deckhouse -- deckhouse-controller queue main
Example of the output (38 tasks in the main queue):
Queue 'main': length 38, status: 'run first task'
Example of the output (the main queue is empty):
Queue 'main': length 0, status: 'waiting for task 0s'
Air-gapped environment; working via proxy and third-party registry
How do I configure Deckhouse to use a third-party registry?
This feature is available in the following editions: BE, SE, SE+, EE.
Deckhouse only supports Bearer authentication for container registries.
Tested and guaranteed to work with the following container registries: Nexus, Harbor, Artifactory, Docker Registry, Quay.
Deckhouse can be configured to work with a third-party registry (e.g., a proxy registry inside private environments).
Define the following parameters in the InitConfiguration resource:
- imagesRepo: <PROXY_REGISTRY>/<DECKHOUSE_REPO_PATH>/ee — the path to the Deckhouse EE image in the third-party registry, for example imagesRepo: registry.deckhouse.io/deckhouse/ee;
- registryDockerCfg: <BASE64> — Base64-encoded auth credentials of the third-party registry.
Use the following registryDockerCfg if anonymous access to Deckhouse images is allowed in the third-party registry:
{"auths": { "<PROXY_REGISTRY>": {}}}
registryDockerCfg must be Base64-encoded.
Use the following registryDockerCfg if authentication is required to access Deckhouse images in the third-party registry:
{"auths": { "<PROXY_REGISTRY>": {"username":"<PROXY_USERNAME>","password":"<PROXY_PASSWORD>","auth":"<AUTH_BASE64>"}}}
- <PROXY_USERNAME> — auth username for <PROXY_REGISTRY>.
- <PROXY_PASSWORD> — auth password for <PROXY_REGISTRY>.
- <PROXY_REGISTRY> — registry address: <HOSTNAME>[:PORT].
- <AUTH_BASE64> — Base64-encoded <PROXY_USERNAME>:<PROXY_PASSWORD> auth string.
registryDockerCfg must be Base64-encoded.
You can use the following script to generate registryDockerCfg:
declare MYUSER='<PROXY_USERNAME>'
declare MYPASSWORD='<PROXY_PASSWORD>'
declare MYREGISTRY='<PROXY_REGISTRY>'
MYAUTH=$(echo -n "$MYUSER:$MYPASSWORD" | base64 -w0)
MYRESULTSTRING=$(echo -n "{\"auths\":{\"$MYREGISTRY\":{\"username\":\"$MYUSER\",\"password\":\"$MYPASSWORD\",\"auth\":\"$MYAUTH\"}}}" | base64 -w0)
echo "$MYRESULTSTRING"
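To sanity-check the generated value, decode it back: the result must be the auths JSON you started from. A sketch with placeholder credentials (not real ones):

```shell
# Placeholder credentials for illustration only:
MYUSER='proxy-user'
MYPASSWORD='proxy-pass'
MYREGISTRY='registry.example.com'

MYAUTH=$(echo -n "$MYUSER:$MYPASSWORD" | base64 -w0)
MYRESULTSTRING=$(echo -n "{\"auths\":{\"$MYREGISTRY\":{\"username\":\"$MYUSER\",\"password\":\"$MYPASSWORD\",\"auth\":\"$MYAUTH\"}}}" | base64 -w0)

# Decoding must reproduce the original auths JSON:
echo "$MYRESULTSTRING" | base64 -d
```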
The InitConfiguration resource provides two more parameters for non-standard third-party registry configurations:
- registryCA — root CA certificate to validate the third-party registry's HTTPS certificate (if self-signed certificates are used);
- registryScheme — registry scheme (HTTP or HTTPS). The default value is HTTPS.
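Putting the registry parameters together, an InitConfiguration fragment might look as follows. A sketch: the registry address is a placeholder, and registryCA/registryScheme are only needed for non-standard setups:

```yaml
apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  # Path to Deckhouse EE images in the third-party registry (placeholder).
  imagesRepo: registry.example.com/deckhouse/ee
  # Base64-encoded auth credentials (see the script above).
  registryDockerCfg: <BASE64>
  # Only needed for a registry with a self-signed certificate:
  registryCA: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  # HTTP or HTTPS; HTTPS is the default.
  registryScheme: HTTPS
```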
Tips for configuring Nexus
When interacting with a docker repository located in Nexus (e.g., when executing docker pull or docker push commands), you must specify the address in the <NEXUS_URL>:<REPOSITORY_PORT>/<PATH> format.
Using the URL value from the Nexus repository options is not acceptable.
The following requirements must be met if the Nexus repository manager is used:
- A docker proxy repository must be pre-created (Administration -> Repository -> Repositories):
  - The Maximum metadata age parameter is set to 0 for the repository.
- Access control is configured as follows:
  - A Nexus role is created (Administration -> Security -> Roles) with the following permissions:
    - nx-repository-view-docker-<repository>-browse
    - nx-repository-view-docker-<repository>-read
  - A user (Administration -> Security -> Users) with the Nexus role is created.
Configuration:
- Create a docker proxy repository (Administration -> Repository -> Repositories) pointing to the Deckhouse registry. Fill in the fields on the Create page as follows:
  - Name must contain the name of the repository you created earlier, e.g., d8-proxy.
  - Repository Connectors / HTTP or Repository Connectors / HTTPS must contain a dedicated port for the created repository, e.g., 8123 or other.
  - Remote storage must be set to https://registry.deckhouse.io/.
  - You can disable Auto blocking enabled and Not found cache enabled for debugging purposes; otherwise, they must be enabled.
  - Maximum Metadata Age must be set to 0.
  - Authentication must be enabled if you plan to use a commercial edition of Deckhouse Kubernetes Platform, and the related fields must be set as follows:
    - Authentication Type must be set to Username.
    - Username must be set to license-token.
    - Password must contain your Deckhouse Kubernetes Platform license key.
- Configure Nexus access control to allow Nexus access to the created repository:
  - Create a Nexus role (Administration -> Security -> Roles) with the nx-repository-view-docker-<repository>-browse and nx-repository-view-docker-<repository>-read permissions.
  - Create a user with the role above granted.
Thus, Deckhouse images will be available at https://<NEXUS_HOST>:<REPOSITORY_PORT>/deckhouse/ee:<d8s-version>.
Tips for configuring Harbor
Use the Harbor Proxy Cache feature.
- Create a Registry (Administration -> Registries -> New Endpoint):
  - Provider: Docker Registry.
  - Name — specify any of your choice.
  - Endpoint URL: https://registry.deckhouse.io.
  - Specify the Access ID and Access Secret (the Deckhouse Kubernetes Platform license key).
- Create a new Project (Projects -> New Project):
  - Project Name will be used in the URL. You can choose any name, for example, d8s.
  - Access Level: Public.
  - Proxy Cache — enable and choose the Registry created in the previous step.
Thus, Deckhouse images will be available at https://your-harbor.com/d8s/deckhouse/ee:{d8s-version}.
Manually uploading Deckhouse Kubernetes Platform, vulnerability scanner DB, and Deckhouse modules to a private registry
The d8 mirror command group is not available for Community Edition (CE) and Basic Edition (BE).
Check releases.deckhouse.io for the current status of the release channels.
- Pull Deckhouse images using the d8 mirror pull command.
By default, d8 mirror pulls only the latest available patch versions for every actual Deckhouse release, the latest enterprise security scanner databases (if your edition supports them), and the current set of officially supplied modules. For example, for Deckhouse 1.59, only version 1.59.12 will be pulled, since this is sufficient for updating Deckhouse from 1.58 to 1.59.
Run the following command (specify the edition code and the license key) to download actual images:
d8 mirror pull \
  --source='registry.deckhouse.io/deckhouse/<EDITION>' \
  --license='<LICENSE_KEY>' /home/user/d8-bundle
where:
- <EDITION> — the edition code of the Deckhouse Kubernetes Platform (for example, ee, se, se-plus).
- <LICENSE_KEY> — Deckhouse Kubernetes Platform license key.
- /home/user/d8-bundle — the directory to store the resulting bundle into. It will be created if not present.
If the loading of images is interrupted, rerunning the command will resume the loading if no more than a day has passed since it stopped.
You can also use the following command options:
- --no-pull-resume — to forcefully start the download from the beginning;
- --no-platform — to skip downloading the Deckhouse Kubernetes Platform package (platform.tar);
- --no-modules — to skip downloading module packages (module-*.tar);
- --no-security-db — to skip downloading security scanner databases (security.tar);
- --since-version=X.Y — to download all versions of Deckhouse starting from the specified minor version. This parameter will be ignored if a version higher than the version on the Rock Solid update channel is specified. This parameter cannot be used simultaneously with the --deckhouse-tag parameter;
- --deckhouse-tag — to download only a specific build of Deckhouse (without considering update channels). This parameter cannot be used simultaneously with the --since-version parameter;
- --include-module / -i=name[@Major.Minor] — to download only a specific whitelist of modules (and optionally their minimal versions). Specify multiple times to whitelist more modules. These flags are ignored if used with --no-modules;
- --exclude-module / -e=name — to skip downloading a specific blacklisted set of modules. Specify multiple times to blacklist more modules. Ignored if --no-modules or --include-module are used;
- --modules-path-suffix — to change the suffix of the module repository path in the main Deckhouse repository. By default, the suffix is /modules (for example, with this default the full path to the repository with modules will look like registry.deckhouse.io/deckhouse/EDITION/modules);
- --gost-digest — to calculate the checksums of the bundle in the format of GOST R 34.11-2012 (Streebog). The checksum for each package will be displayed and written to a file with the .tar.gostsum extension in the folder with the package;
- --source — to specify the address of the Deckhouse source registry;
  - To authenticate in the official Deckhouse image registry, use a license key and the --license parameter;
  - To authenticate in a third-party registry, use the --source-login and --source-password parameters;
- --images-bundle-chunk-size=N — to specify the maximum file size (in GB) to split the image archive into. As a result, instead of a single file archive, a set of .chunk files will be created (e.g., d8.tar.NNNN.chunk). To upload images from such a set of files, specify the file name without the .NNNN.chunk suffix in the d8 mirror push command (e.g., d8.tar for files like d8.tar.NNNN.chunk);
- --tmp-dir — path to a temporary directory to use for image pulling and pushing. All processing is done in this directory, so make sure there is enough free disk space to accommodate the entire bundle you are downloading. By default, the .tmp subdirectory under the bundle directory is used.
Additional configuration options for the d8 mirror family of commands are available as environment variables:
- HTTP_PROXY / HTTPS_PROXY — URL of the proxy server for HTTP(S) requests to hosts that are not listed in the $NO_PROXY variable;
- NO_PROXY — comma-separated list of hosts to exclude from proxying. Supported value formats include IP addresses (1.2.3.4), CIDR notations (1.2.3.4/8), domains, and the asterisk character (*). The IP addresses and domain names can also include a literal port number (1.2.3.4:80). A domain name matches that name and all its subdomains. A domain name with a leading . matches subdomains only. For example, foo.com matches foo.com and bar.foo.com; .y.com matches x.y.com but does not match y.com. A single asterisk * indicates that no proxying should be done;
- SSL_CERT_FILE — path to the SSL certificate. If the variable is set, system certificates are not used;
- SSL_CERT_DIR — colon-separated list of directories to search for SSL certificate files. If set, system certificates are not used. See more…;
- MIRROR_BYPASS_ACCESS_CHECKS — set to 1 to skip validation of registry credentials.
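For example, a pull through a corporate proxy that must be bypassed for an internal registry host could be set up like this. All hostnames and credentials below are hypothetical:

```shell
# Proxy for outbound HTTPS requests (hypothetical address):
export HTTPS_PROXY='http://user:password@proxy.company.my:3128'
# Do not proxy requests to the internal registry host:
export NO_PROXY='corp.company.com:5000'

# With the variables exported, run the pull as usual
# (requires the d8 tool and a license key):
#   d8 mirror pull \
#     --license='<LICENSE_KEY>' \
#     --source='registry.deckhouse.io/deckhouse/ee' \
#     /home/user/d8-bundle

printf '%s\n%s\n' "$HTTPS_PROXY" "$NO_PROXY"
```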
Example of a command to download all versions of Deckhouse EE starting from version 1.59 (provide the license key):
d8 mirror pull \
  --license='<LICENSE_KEY>' \
  --source='registry.deckhouse.io/deckhouse/ee' \
  --since-version=1.59 /home/user/d8-bundle
Example of a command to download versions of Deckhouse SE for every release channel available:
d8 mirror pull \
  --license='<LICENSE_KEY>' \
  --source='registry.deckhouse.io/deckhouse/se' \
  /home/user/d8-bundle
Example of a command to download all versions of Deckhouse hosted on a third-party registry:
d8 mirror pull \
  --source='corp.company.com:5000/sys/deckhouse' \
  --source-login='<USER>' --source-password='<PASSWORD>' /home/user/d8-bundle
Example of a command to download the latest vulnerability scanner databases (if available for your Deckhouse edition):
d8 mirror pull \
  --license='<LICENSE_KEY>' \
  --source='registry.deckhouse.io/deckhouse/ee' \
  --no-platform --no-modules /home/user/d8-bundle
Example of a command to download all Deckhouse modules available in the registry:
d8 mirror pull \
  --license='<LICENSE_KEY>' \
  --source='registry.deckhouse.io/deckhouse/ee' \
  --no-platform --no-security-db /home/user/d8-bundle
Example of a command to download the stronghold and secrets-store-integration Deckhouse modules:
d8 mirror pull \
  --license='<LICENSE_KEY>' \
  --source='registry.deckhouse.io/deckhouse/ee' \
  --no-platform --no-security-db \
  --include-module stronghold \
  --include-module secrets-store-integration \
  /home/user/d8-bundle
- Upload the bundle with the pulled Deckhouse images to a host with access to the air-gapped registry and install the Deckhouse CLI tool onto it.
- Push the images to the air-gapped registry using the d8 mirror push command.
The d8 mirror push command uploads images from all packages present in the given directory to the repository. If you need to upload only some specific packages, you can either run the command for each required package, passing the direct path to the tar package instead of the directory, or remove the .tar extension from unnecessary packages or move them outside the directory.
Example of a command for pushing images from the /mnt/MEDIA/d8-images directory (specify authorization data if necessary):
d8 mirror push /mnt/MEDIA/d8-images 'corp.company.com:5000/sys/deckhouse' \
  --registry-login='<USER>' --registry-password='<PASSWORD>'
Before pushing images, make sure that the path for loading into the registry exists (/sys/deckhouse in the example above) and that the account being used has write permissions. Harbor users, please note that you will not be able to upload images to the project root; use a dedicated repository in the project to host Deckhouse images instead.
- Once pushing images to the air-gapped private registry is complete, you are ready to install Deckhouse from it. Refer to the Getting started guide.
When launching the installer, use the repository where the Deckhouse images have previously been loaded instead of the official Deckhouse registry. For example, the address for launching the installer will look like corp.company.com:5000/sys/deckhouse/install:stable instead of registry.deckhouse.io/deckhouse/ee/install:stable.
During installation, add your registry address and authorization data to the InitConfiguration resource (the imagesRepo and registryDockerCfg parameters; you might refer to step 3 of the Getting started guide as well).
How do I switch a running Deckhouse cluster to use a third-party registry?
Using a registry other than registry.deckhouse.io is only available in a commercial edition of Deckhouse Kubernetes Platform.
To switch the Deckhouse cluster to using a third-party registry, follow these steps:
- Run deckhouse-controller helper change-registry inside the Deckhouse Pod with the new registry settings.
Example:
kubectl -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- deckhouse-controller helper change-registry --user MY-USER --password MY-PASSWORD registry.example.com/deckhouse/ee
If the registry uses a self-signed certificate, put the root CA certificate that validates the registry's HTTPS certificate into the /tmp/ca.crt file in the Deckhouse Pod and add the --ca-file /tmp/ca.crt option to the command, or put the content of the CA into a variable as follows:
CA_CONTENT=$(cat <<EOF
-----BEGIN CERTIFICATE-----
CERTIFICATE
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
CERTIFICATE
-----END CERTIFICATE-----
EOF
)
kubectl -n d8-system exec svc/deckhouse-leader -c deckhouse -- bash -c "echo '$CA_CONTENT' > /tmp/ca.crt && deckhouse-controller helper change-registry --ca-file /tmp/ca.crt --user MY-USER --password MY-PASSWORD registry.example.com/deckhouse/ee"
To view the list of available keys of the deckhouse-controller helper change-registry command, run the following command:
kubectl -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- deckhouse-controller helper change-registry --help
Example output:
usage: deckhouse-controller helper change-registry [<flags>] <new-registry>

Change registry for deckhouse images.

Flags:
  --help                 Show context-sensitive help (also try --help-long and --help-man).
  --user=USER            User with pull access to registry.
  --password=PASSWORD    Password/token for registry user.
  --ca-file=CA-FILE      Path to registry CA.
  --scheme=SCHEME        Used scheme while connecting to registry, http or https.
  --dry-run              Don't change deckhouse resources, only print them.
  --new-deckhouse-tag=NEW-DECKHOUSE-TAG
                         New tag that will be used for deckhouse deployment image (by default
                         current tag from deckhouse deployment will be used).

Args:
  <new-registry>  Registry that will be used for deckhouse images (example:
                  registry.deckhouse.io/deckhouse/ce). By default, https will be used; if you
                  need http, provide the '--scheme' flag with the http value.
- Wait for the Deckhouse Pod to become Ready. Restart the Deckhouse Pod if it is in the ImagePullBackOff state.
- Wait for bashible to apply the new settings on the master node. The bashible log on the master node (journalctl -u bashible) should contain the message Configuration is in sync, nothing to do.
- If you want to disable Deckhouse automatic updates, remove the releaseChannel parameter from the deckhouse module configuration.
- Check if there are Pods with the original registry in the cluster (if there are, restart them):
kubectl get pods -A -o json | jq -r '.items[] | select(.spec.containers[] | select(.image | startswith("registry.deckhouse"))) | .metadata.namespace + "\t" + .metadata.name' | sort | uniq
How to bootstrap a cluster and run Deckhouse without using release channels?
This method should only be used if there are no release channel images in your air-gapped registry.
If you want to install Deckhouse with automatic updates disabled:
- Use the installer image tag of the corresponding version. For example, if you want to install the v1.44.3 release, use the your.private.registry.com/deckhouse/install:v1.44.3 image.
- Specify the corresponding version number in the deckhouse.devBranch parameter in the InitConfiguration resource.
  Do not specify the deckhouse.releaseChannel parameter in the InitConfiguration resource.
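A minimal InitConfiguration sketch for such an installation; the registry address is a placeholder, and devBranch pins the exact version:

```yaml
apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  imagesRepo: your.private.registry.com/deckhouse
  registryDockerCfg: <BASE64>
  # Pin the exact Deckhouse version; do NOT set releaseChannel here.
  devBranch: v1.44.3
```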
If you want to disable automatic updates for an already installed Deckhouse (including patch release updates), remove the releaseChannel parameter from the deckhouse module configuration.
Using a proxy server
This feature is available in the following editions: BE, SE, SE+, EE.
Use the proxy parameter of the ClusterConfiguration resource to configure proxy usage.
An example:
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Cloud
cloud:
  provider: OpenStack
  prefix: main
podSubnetCIDR: 10.111.0.0/16
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "Automatic"
cri: "Containerd"
clusterDomain: "cluster.local"
proxy:
  httpProxy: "http://user:password@proxy.company.my:3128"
  httpsProxy: "https://user:password@proxy.company.my:8443"
Autoloading proxy variables for users at CLI
Since DKP v1.67, the /etc/profile.d/d8-system-proxy.sh file, which sets proxy variables for users, is no longer configurable. To autoload proxy variables for users at the CLI, use the NodeGroupConfiguration resource:
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: profile-proxy.sh
spec:
  bundles:
    - '*'
  nodeGroups:
    - '*'
  weight: 99
  content: |
    {{- if .proxy }}
    {{- if .proxy.httpProxy }}
    export HTTP_PROXY={{ .proxy.httpProxy | quote }}
    export http_proxy=${HTTP_PROXY}
    {{- end }}
    {{- if .proxy.httpsProxy }}
    export HTTPS_PROXY={{ .proxy.httpsProxy | quote }}
    export https_proxy=${HTTPS_PROXY}
    {{- end }}
    {{- if .proxy.noProxy }}
    export NO_PROXY={{ .proxy.noProxy | join "," | quote }}
    export no_proxy=${NO_PROXY}
    {{- end }}
    bb-sync-file /etc/profile.d/profile-proxy.sh - << EOF
    export HTTP_PROXY=${HTTP_PROXY}
    export http_proxy=${HTTP_PROXY}
    export HTTPS_PROXY=${HTTPS_PROXY}
    export https_proxy=${HTTPS_PROXY}
    export NO_PROXY=${NO_PROXY}
    export no_proxy=${NO_PROXY}
    EOF
    {{- else }}
    rm -rf /etc/profile.d/profile-proxy.sh
    {{- end }}
Changing the configuration
To apply node configuration changes, run dhctl converge using the Deckhouse installer. This command synchronizes the state of the nodes with the specified configuration.
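The exact converge invocation depends on how the cluster was bootstrapped; a typical run from the installer container might look like this (the SSH user, key path, and master address are placeholders for your environment):

```shell
dhctl converge \
  --ssh-user=<username> \
  --ssh-agent-private-keys=/tmp/.ssh/<ssh_private_key> \
  --ssh-host=<master_ip>
```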
How do I change the configuration of a cluster?
The general cluster parameters are stored in the ClusterConfiguration structure.
To change the general cluster parameters, run the command:
kubectl -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- deckhouse-controller edit cluster-configuration
After saving the changes, Deckhouse will bring the cluster to the state described by the updated configuration. Depending on the size of the cluster, this may take some time.
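To view the current configuration without opening an editor, you can read it from the secret where it is stored (assuming the standard d8-cluster-configuration secret in kube-system; verify the name in your cluster):

```shell
kubectl -n kube-system get secret d8-cluster-configuration \
  -o jsonpath='{.data.cluster-configuration\.yaml}' | base64 -d
```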
How do I change the configuration of a cloud provider in a cluster?
Cloud provider settings of a cloud or hybrid cluster are stored in the <PROVIDER_NAME>ClusterConfiguration structure, where <PROVIDER_NAME> is the name/code of the cloud provider. E.g., for an OpenStack provider, the structure will be called OpenStackClusterConfiguration.
Regardless of the cloud provider used, its settings can be changed using the following command:
kubectl -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- deckhouse-controller edit provider-cluster-configuration
How do I change the configuration of a static cluster?
Settings of a static cluster are stored in the StaticClusterConfiguration structure.
To change the settings of a static cluster, run the command:
kubectl -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- deckhouse-controller edit static-cluster-configuration
How to switch Deckhouse edition to CE/BE/SE/SE+/EE?
- The functionality of this guide is validated for Deckhouse versions starting from v1.70. If your version is older, use the corresponding documentation.
- For commercial editions, you need a valid license key that supports the desired edition. If necessary, you can request a temporary key.
- The guide assumes the use of the public container registry address registry.deckhouse.io. If you are using a different container registry address, modify the commands accordingly or refer to the guide on switching Deckhouse to use a different registry.
- The Deckhouse CE/BE/SE/SE+ editions do not support the cloud providers dynamix, openstack, VCD, and vSphere (vSphere is supported in SE+), as well as a number of modules. A detailed comparison is available in the documentation.
- All commands are executed on the master node of the existing cluster as the root user.
-
Prepare variables for the license token and new edition name:

It is not necessary to fill the LICENSE_TOKEN and AUTH_STRING variables when switching to the Deckhouse CE edition. The NEW_EDITION variable should match your desired Deckhouse edition. For example, to switch to:
- CE, the variable should be ce;
- BE, the variable should be be;
- SE, the variable should be se;
- SE+, the variable should be se-plus;
- EE, the variable should be ee.

NEW_EDITION=<PUT_YOUR_EDITION_HERE>
LICENSE_TOKEN=<PUT_YOUR_LICENSE_TOKEN_HERE>
AUTH_STRING="$(echo -n license-token:${LICENSE_TOKEN} | base64 )"
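For illustration, with a hypothetical token mytoken, the resulting AUTH_STRING is simply the base64 encoding of the string license-token:<token>:

```shell
# Hypothetical token, used only to show how AUTH_STRING is built.
LICENSE_TOKEN=mytoken
AUTH_STRING="$(echo -n license-token:${LICENSE_TOKEN} | base64)"
echo "$AUTH_STRING"   # bGljZW5zZS10b2tlbjpteXRva2Vu
```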
-
Ensure the Deckhouse queue is empty and error-free.
-
Create a NodeGroupConfiguration resource for temporary authorization in registry.deckhouse.io:

Skip this step if switching to Deckhouse CE.
kubectl apply -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: containerd-$NEW_EDITION-config.sh
spec:
  nodeGroups:
    - '*'
  bundles:
    - '*'
  weight: 30
  content: |
    _on_containerd_config_changed() {
      bb-flag-set containerd-need-restart
    }
    bb-event-on 'containerd-config-file-changed' '_on_containerd_config_changed'
    mkdir -p /etc/containerd/conf.d
    bb-sync-file /etc/containerd/conf.d/$NEW_EDITION-registry.toml - containerd-config-file-changed << "EOF_TOML"
    [plugins]
      [plugins."io.containerd.grpc.v1.cri"]
        [plugins."io.containerd.grpc.v1.cri".registry.configs]
          [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.deckhouse.io".auth]
            auth = "$AUTH_STRING"
    EOF_TOML
EOF
Wait for the /etc/containerd/conf.d/$NEW_EDITION-registry.toml file to appear on the nodes and for bashible synchronization to complete. To track the synchronization status, check the UPTODATE value (the number of nodes in this status should match the total number of nodes (NODES) in the group):

kubectl get ng -o custom-columns=NAME:.metadata.name,NODES:.status.nodes,READY:.status.ready,UPTODATE:.status.upToDate -w
Example output:
NAME     NODES   READY   UPTODATE
master   1       1       1
worker   2       2       2
Also, a message stating Configuration is in sync, nothing to do should appear in the bashible systemd service log. Check it with the following command:

journalctl -u bashible -n 5
Example output:
Aug 21 11:04:28 master-ee-to-se-0 bashible.sh[53407]: Configuration is in sync, nothing to do.
Aug 21 11:04:28 master-ee-to-se-0 bashible.sh[53407]: Annotate node master-ee-to-se-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
Aug 21 11:04:29 master-ee-to-se-0 bashible.sh[53407]: Successful annotate node master-ee-to-se-0 with annotation node.deckhouse.io/configuration-checksum=9cbe6db6c91574b8b732108a654c99423733b20f04848d0b4e1e2dadb231206a
Aug 21 11:04:29 master-ee-to-se-0 systemd[1]: bashible.service: Deactivated successfully.
-
Start a temporary pod for the new Deckhouse edition to obtain current digests and a list of modules:
DECKHOUSE_VERSION=$(kubectl -n d8-system get deploy deckhouse -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "deckhouse") | .image' | awk -F: '{print $2}')

kubectl run $NEW_EDITION-image --image=registry.deckhouse.io/deckhouse/$NEW_EDITION/install:$DECKHOUSE_VERSION --command sleep --infinity
-
Once the pod is in the Running state, execute the following commands:

NEW_EDITION_MODULES=$(kubectl exec $NEW_EDITION-image -- ls -l deckhouse/modules/ | grep -oE "\d.*-\w*" | awk {'print $9'} | cut -c5-)

USED_MODULES=$(kubectl get modules -o custom-columns=NAME:.metadata.name,SOURCE:.properties.source,STATE:.properties.state,ENABLED:.status.phase | grep Embedded | grep -E 'Enabled|Ready' | awk {'print $1'})

MODULES_WILL_DISABLE=$(echo $USED_MODULES | tr ' ' '\n' | grep -Fxv -f <(echo $NEW_EDITION_MODULES | tr ' ' '\n'))
-
Verify that the modules used in the cluster are supported in the desired edition. To see the list of modules that are not supported in the new edition and will be disabled:
echo $MODULES_WILL_DISABLE
Check the list to ensure the functionality of these modules is not in use in your cluster and you are ready to disable them.
Disable the modules not supported by the new edition:
echo $MODULES_WILL_DISABLE | tr ' ' '\n' | awk {'print "d8 platform module disable",$1'} | bash
Wait for the Deckhouse pod to reach the Ready state and ensure all tasks in the queue are completed.
-
Execute the deckhouse-controller helper change-registry command from the Deckhouse pod with the new edition parameters.

To switch to BE/SE/SE+/EE editions:
kubectl -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- deckhouse-controller helper change-registry --user=license-token --password=$LICENSE_TOKEN --new-deckhouse-tag=$DECKHOUSE_VERSION registry.deckhouse.io/deckhouse/$NEW_EDITION
To switch to CE edition:
kubectl -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- deckhouse-controller helper change-registry --new-deckhouse-tag=$DECKHOUSE_VERSION registry.deckhouse.io/deckhouse/ce
-
Check if there are any pods with the old Deckhouse edition address left in the cluster, where <YOUR-PREVIOUS-EDITION> is your previous edition name:

kubectl get pods -A -o json | jq -r '.items[] | select(.spec.containers[] | select(.image | contains("deckhouse.io/deckhouse/<YOUR-PREVIOUS-EDITION>"))) | .metadata.namespace + "\t" + .metadata.name' | sort | uniq
-
Delete temporary files, the NodeGroupConfiguration resource, and variables:

Skip this step if switching to Deckhouse CE.
kubectl delete ngc containerd-$NEW_EDITION-config.sh
kubectl delete pod $NEW_EDITION-image
kubectl apply -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: del-temp-config.sh
spec:
  nodeGroups:
    - '*'
  bundles:
    - '*'
  weight: 90
  content: |
    if [ -f /etc/containerd/conf.d/$NEW_EDITION-registry.toml ]; then
      rm -f /etc/containerd/conf.d/$NEW_EDITION-registry.toml
    fi
EOF
After the bashible synchronization completes (the synchronization status on the nodes is shown by the UPTODATE value in NodeGroup), delete the created NodeGroupConfiguration resource:

kubectl delete ngc del-temp-config.sh
How do I get access to Deckhouse controller in multimaster cluster?
In clusters with multiple master nodes, Deckhouse runs in high-availability mode (in several instances). To access the active Deckhouse controller, you can use the following command (the deckhouse-controller queue list command is used here as an example):
kubectl -n d8-system exec -it svc/deckhouse-leader -c deckhouse -- deckhouse-controller queue list
How do I upgrade the Kubernetes version in a cluster?
To upgrade the Kubernetes version in a cluster, change the kubernetesVersion parameter in the ClusterConfiguration structure by following these steps:
-
Run the command:
kubectl -n d8-system exec -ti svc/deckhouse-leader -c deckhouse -- deckhouse-controller edit cluster-configuration
- Change the kubernetesVersion field.
- Save the changes. Cluster nodes will start updating sequentially.
- Wait for the update to finish. You can track the progress of the update using the kubectl get no command. The update is completed when the new version appears in the command’s output for each cluster node in the VERSION column.
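For example, after editing, the relevant fragment of the ClusterConfiguration might look like this (the version value is illustrative; use a version supported by your Deckhouse release):

```yaml
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
# ...other parameters unchanged...
kubernetesVersion: "1.29"
```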
How do I run Deckhouse on a particular node?
Set the nodeSelector parameter of the deckhouse module and avoid setting tolerations. The necessary values will be assigned to the tolerations parameter automatically.
Use only nodes with the CloudStatic or Static type to run Deckhouse. Also, avoid using a NodeGroup containing only one node to run Deckhouse.
Here is an example of the module configuration:
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  settings:
    nodeSelector:
      node-role.deckhouse.io/deckhouse: ""
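For the nodeSelector to match, the target nodes need the node-role.deckhouse.io/deckhouse label. One way to assign it is via the nodeTemplate of the corresponding NodeGroup. A sketch, where the NodeGroup name and type are assumptions for illustration:

```yaml
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: system        # hypothetical NodeGroup name
spec:
  nodeType: Static    # assumed; CloudStatic also fits the note above
  nodeTemplate:
    labels:
      node-role.deckhouse.io/deckhouse: ""
```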
How do I force IPv6 to be disabled on Deckhouse cluster nodes?
Internal communication between Deckhouse cluster components is performed via the IPv4 protocol. However, at the operating-system level of the cluster nodes, IPv6 is usually active by default. This leads to automatic assignment of IPv6 addresses to all network interfaces, including Pod interfaces. The result is unwanted network traffic, such as redundant AAAA DNS queries, which can affect performance and make debugging network communications more difficult.
To correctly disable IPv6 at the node level in a Deckhouse-managed cluster, it is sufficient to set the necessary parameters via the NodeGroupConfiguration resource:
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: disable-ipv6.sh
spec:
  nodeGroups:
    - '*'
  bundles:
    - '*'
  weight: 50
  content: |
    GRUB_FILE_PATH="/etc/default/grub"
    if ! grep -q "ipv6.disable" "$GRUB_FILE_PATH"; then
      sed -E -e 's/^(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*)"/\1 ipv6.disable=1"/' -i "$GRUB_FILE_PATH"
      update-grub
      bb-flag-set reboot
    fi
After applying the resource, the GRUB settings will be updated and the cluster nodes will begin a sequential reboot to apply the changes.
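After a node reboots, you can confirm that IPv6 was disabled at boot by checking the kernel command line. The check below runs against a sample cmdline string for illustration; on a real node, read /proc/cmdline instead:

```shell
# Sample kernel command line; on a node use: cmdline="$(cat /proc/cmdline)"
cmdline='BOOT_IMAGE=/vmlinuz-5.15.0 root=/dev/sda1 ro ipv6.disable=1'

if echo "$cmdline" | grep -q 'ipv6\.disable=1'; then
  echo "IPv6 disabled at boot"
else
  echo "IPv6 still enabled"
fi
```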