To automate actions on group nodes, use the NodeGroupConfiguration resource. It allows you to run bash scripts on nodes using the Bash Booster command set, with the Go Template templating engine applied to the script content. This is useful for automating operations such as:
- Installing and configuring additional OS packages.
- Updating the OS kernel to a specific version.
- Modifying OS parameters.
- Collecting information on the node and performing similar tasks.
The NodeGroupConfiguration resource also lets you define the execution priority of scripts and limit their execution to specific node groups or OS types.
The script code is specified in the content field of the resource. When a script is created on a node, the content passes through the Go Template templating engine, which adds a layer of logic to script generation. A dynamic context with a set of variables is available in the template.
Available template variables include:
- .cloudProvider (for node groups with the CloudEphemeral or CloudPermanent nodeType) — an array of cloud provider data.
- .cri — the container runtime interface in use (since Deckhouse version 1.49, only Containerd is used).
- .kubernetesVersion — the version of Kubernetes in use.
- .nodeUsers — an array of data about users added to the node using the NodeUser resource.
- .nodeGroup — an array of NodeGroup data.
Example of using variables in the template engine:
{{- range .nodeUsers }}
echo 'Tuning environment for user {{ .name }}'
# Some code for tuning user environment
{{- end }}
Example of using Bash Booster commands:
bb-event-on 'bb-package-installed' 'post-install'
post-install() {
  bb-log-info "Setting reboot flag due to kernel update"
  bb-flag-set reboot
}
Monitoring script execution
You can view the script execution log on a node in the bashible service log using the following command:
journalctl -u bashible.service
The scripts are located in the /var/lib/bashible/bundle_steps/ directory on the node.
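For example, you can list the generated step scripts (their file names are prefixed with the script priority, as in 032_configure_containerd.sh mentioned below) and follow the bashible log in real time:
ls -1 /var/lib/bashible/bundle_steps/
journalctl -fu bashible.service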
Script re-execution mechanism
The bashible service decides whether to re-run the scripts by comparing the unified checksum of all files, stored on the node at /var/lib/bashible/configuration_checksum, with the checksum stored in the configuration-checksums secret in the d8-cloud-instance-manager namespace of the Kubernetes cluster.
You can check the checksum stored in the cluster with the following command:
d8 k -n d8-cloud-instance-manager get secret configuration-checksums -o yaml
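To compare it with the checksum stored locally on a node, you can also print the contents of the checksum file. A minimal sketch, assuming jq is available on the machine where d8 is run (the values in the secret are base64-encoded):
# On the node
cat /var/lib/bashible/configuration_checksum
# In the cluster (decode all values of the secret)
d8 k -n d8-cloud-instance-manager get secret configuration-checksums -o json | jq '.data | map_values(@base64d)'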
The checksum comparison is performed by the service every minute.
The checksum in the cluster is updated every 4 hours, thereby re-triggering the execution of scripts on all nodes.
To manually trigger bashible execution on a node, you can delete the checksum file using the following command:
rm /var/lib/bashible/configuration_checksum
Script writing specifics
When writing scripts, it’s important to consider the following features of their usage in Deckhouse:
- Scripts in DKP are executed every 4 hours or based on external triggers. Therefore, it’s important to write scripts so that they first check whether any changes are actually needed, to avoid repeated or unnecessary actions on each run (see the idempotency sketch after this list).
- There are predefined scripts that perform various actions, including service installation and configuration. It’s important to consider this when assigning priority to custom scripts. For example, if a custom script restarts a service, it must run after the script that installs that service. Otherwise, the custom script won’t be able to run during the initial provisioning of the node (since the service won’t be installed yet).
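For the first point, a minimal idempotency sketch (the sysctl parameter and values here are purely illustrative):
# Change a kernel parameter only if the current value differs from the desired one
desired="1"
current="$(sysctl -n net.ipv4.ip_forward)"
if [ "${current}" != "${desired}" ]; then
  bb-log-info "Updating net.ipv4.ip_forward: ${current} -> ${desired}"
  sysctl -w net.ipv4.ip_forward="${desired}"
fi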
Useful specifics of certain scripts:
- 032_configure_containerd.sh: merges all containerd service configuration files located in /etc/containerd/conf.d/*.toml and restarts the service. Note that the /etc/containerd/conf.d/ directory is not created automatically; any configuration files in it should be created by scripts with a priority lower than 32.
How to use containerd with Nvidia GPU support
You need to create a separate NodeGroup for GPU nodes:
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: gpu
spec:
  chaos:
    mode: Disabled
  disruptions:
    approvalMode: Automatic
  nodeType: CloudStatic
Next, create a NodeGroupConfiguration resource for the gpu NodeGroup to configure containerd:
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: containerd-additional-config.sh
spec:
  bundles:
    - '*'
  content: |
    # Copyright 2023 Flant JSC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    mkdir -p /etc/containerd/conf.d
    bb-sync-file /etc/containerd/conf.d/nvidia_gpu.toml - << "EOF"
    [plugins]
      [plugins."io.containerd.grpc.v1.cri"]
        [plugins."io.containerd.grpc.v1.cri".containerd]
          default_runtime_name = "nvidia"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_root = ""
              runtime_type = "io.containerd.runc.v2"
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
                BinaryName = "/usr/bin/nvidia-container-runtime"
                SystemdCgroup = false
    EOF
  nodeGroups:
    - gpu
  weight: 31
Add a NodeGroupConfiguration resource to install Nvidia drivers for the gpu NodeGroup.
Ubuntu
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: install-cuda.sh
spec:
  bundles:
    - ubuntu-lts
  content: |
    # Copyright 2023 Flant JSC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    if [ ! -f "/etc/apt/sources.list.d/nvidia-container-toolkit.list" ]; then
      distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
      curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
      curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    fi
    bb-apt-install nvidia-container-toolkit nvidia-driver-535-server
    nvidia-ctk config --set nvidia-container-runtime.log-level=error --in-place
  nodeGroups:
    - gpu
  weight: 30
CentOS
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: install-cuda.sh
spec:
  bundles:
    - centos
  content: |
    # Copyright 2023 Flant JSC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    if [ ! -f "/etc/yum.repos.d/nvidia-container-toolkit.repo" ]; then
      distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
    fi
    bb-dnf-install nvidia-container-toolkit nvidia-driver
    nvidia-ctk config --set nvidia-container-runtime.log-level=error --in-place
  nodeGroups:
    - gpu
  weight: 30
After the configurations are applied, bootstrap and reboot the nodes to apply the settings and install the drivers.
Verifying successful installation
Create the following Job in your cluster:
apiVersion: batch/v1
kind: Job
metadata:
  name: nvidia-cuda-test
  namespace: default
spec:
  completions: 1
  template:
    spec:
      restartPolicy: Never
      nodeSelector:
        node.deckhouse.io/group: gpu
      containers:
        - name: nvidia-cuda-test
          image: nvidia/cuda:11.6.2-base-ubuntu20.04
          imagePullPolicy: "IfNotPresent"
          command:
            - nvidia-smi
Check the logs with the following command:
d8 k logs job/nvidia-cuda-test
Example output:
Tue Jan 24 11:36:18 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.13 Driver Version: 525.60.13 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:8B:00.0 Off | 0 |
| N/A 45C P0 25W / 70W | 0MiB / 15360MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Next, create a Job in your cluster that runs a CUDA vector addition sample:
apiVersion: batch/v1
kind: Job
metadata:
  name: gpu-operator-test
  namespace: default
spec:
  completions: 1
  template:
    spec:
      restartPolicy: Never
      nodeSelector:
        node.deckhouse.io/group: gpu
      containers:
        - name: gpu-operator-test
          image: nvidia/samples:vectoradd-cuda10.2
          imagePullPolicy: "IfNotPresent"
Check the logs with the following command:
d8 k logs job/gpu-operator-test
Example output:
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
How to deploy a custom containerd configuration file
Adding custom settings will trigger a restart of the containerd service.
bashible on the nodes merges the DKP containerd configuration with configurations from /etc/containerd/conf.d/*.toml.
You can override the parameter values defined in the /etc/containerd/deckhouse.toml file. However, you are responsible for ensuring the correct operation of such changes. It is recommended not to modify the configuration on control plane (master) nodes (NodeGroup master).
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: containerd-option-config.sh
spec:
  bundles:
    - '*'
  content: |
    # Copyright 2024 Flant JSC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    mkdir -p /etc/containerd/conf.d
    bb-sync-file /etc/containerd/conf.d/additional_option.toml - << EOF
    oom_score = 500
    [metrics]
      address = "127.0.0.1"
      grpc_histogram = true
    EOF
  nodeGroups:
    - "worker"
  weight: 31
How to add authentication for an additional registry
Deploy a NodeGroupConfiguration script:
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: containerd-additional-config.sh
spec:
  bundles:
    - '*'
  content: |
    # Copyright 2023 Flant JSC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    REGISTRY_URL=private.registry.example
    mkdir -p /etc/containerd/conf.d
    bb-sync-file /etc/containerd/conf.d/additional_registry.toml - << EOF
    [plugins]
      [plugins."io.containerd.grpc.v1.cri"]
        [plugins."io.containerd.grpc.v1.cri".registry]
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
            [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
              endpoint = ["https://registry-1.docker.io"]
            [plugins."io.containerd.grpc.v1.cri".registry.mirrors."${REGISTRY_URL}"]
              endpoint = ["https://${REGISTRY_URL}"]
          [plugins."io.containerd.grpc.v1.cri".registry.configs]
            [plugins."io.containerd.grpc.v1.cri".registry.configs."${REGISTRY_URL}".auth]
              auth = "AAAABBBCCCDDD=="
    EOF
  nodeGroups:
    - "*"
  weight: 31
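The auth value in this example is a placeholder. For containerd it is usually a base64-encoded username:password pair, which you can generate from your own registry credentials, for example:
echo -n "username:password" | base64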
Configuring a certificate for an additional registry
In addition to containerd, the certificate can also be added to the operating system trust store (see the sketch after the example below).
Example of a NodeGroupConfiguration for configuring a certificate for an additional registry:
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: configure-cert-containerd.sh
spec:
  bundles:
    - '*'
  content: |-
    # Copyright 2024 Flant JSC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    REGISTRY_URL=private.registry.example
    CERT_FILE_NAME=${REGISTRY_URL}
    CERTS_FOLDER="/var/lib/containerd/certs/"
    CERT_CONTENT=$(cat <<"EOF"
    -----BEGIN CERTIFICATE-----
    MIIDSjCCAjKgAwIBAgIRAJ4RR/WDuAym7M11JA8W7D0wDQYJKoZIhvcNAQELBQAw
    ...
    -----END CERTIFICATE-----
    EOF
    )
    CONFIG_CONTENT=$(cat <<EOF
    [plugins]
      [plugins."io.containerd.grpc.v1.cri".registry.configs."${REGISTRY_URL}".tls]
        ca_file = "${CERTS_FOLDER}/${CERT_FILE_NAME}.crt"
    EOF
    )
    mkdir -p ${CERTS_FOLDER}
    mkdir -p /etc/containerd/conf.d
    # bb-tmp-file - Create temp file function. More information: http://www.bashbooster.net/#tmp
    CERT_TMP_FILE="$( bb-tmp-file )"
    echo -e "${CERT_CONTENT}" > "${CERT_TMP_FILE}"
    CONFIG_TMP_FILE="$( bb-tmp-file )"
    echo -e "${CONFIG_CONTENT}" > "${CONFIG_TMP_FILE}"
    # bb-sync-file - File synchronization function. More information: http://www.bashbooster.net/#sync
    ## "${CERTS_FOLDER}/${CERT_FILE_NAME}.crt" - Destination file
    ## ${CERT_TMP_FILE} - Source file
    bb-sync-file \
      "${CERTS_FOLDER}/${CERT_FILE_NAME}.crt" \
      ${CERT_TMP_FILE}
    bb-sync-file \
      "/etc/containerd/conf.d/${REGISTRY_URL}.toml" \
      ${CONFIG_TMP_FILE}
  nodeGroups:
    - '*'
  weight: 31
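If the certificate also needs to be added to the operating system (as noted above), this can be done from the same script. A minimal sketch for Debian/Ubuntu-based nodes, reusing the CERT_CONTENT and CERT_FILE_NAME variables from the example above; the trust store path and update command differ on other distributions:
mkdir -p /usr/local/share/ca-certificates
echo -e "${CERT_CONTENT}" > "/usr/local/share/ca-certificates/${CERT_FILE_NAME}.crt"
update-ca-certificates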
How to automatically add custom labels to a node
- On the node, create the /var/lib/node_labels directory.
- In this directory, create one or more files containing the necessary labels. There can be any number of files, and they may also be placed in subdirectories.
- Add the necessary labels to the files in the key=value format. For example: example-label=test.
- Save the files.
When adding a node to the cluster, the labels specified in the files will be automatically affixed to the node.
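For example, the steps above can be performed on a node as follows (the file name custom_labels is arbitrary):
mkdir -p /var/lib/node_labels
echo "example-label=test" > /var/lib/node_labels/custom_labels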
Note that it is not possible to add labels used in DKP in this way. This method will only work with custom labels that do not overlap with those reserved for Deckhouse.