Below are several examples of NodeGroup descriptions, as well as examples of installing the cert-manager plugin for kubectl and setting a sysctl parameter.
Examples of the NodeGroup configuration

Cloud nodes
yaml
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: test
spec:
  nodeType: CloudEphemeral
  cloudInstances:
    zones:
      - eu-west-1a
      - eu-west-1b
    minPerZone: 1
    maxPerZone: 2
    classReference:
      kind: AWSInstanceClass
      name: test
  nodeTemplate:
    labels:
      tier: test
Static nodes

For VMs on hypervisors or physical servers, use static nodes by specifying nodeType: Static in the NodeGroup.

An example:
yaml
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: Static
Nodes are added to such a group manually using pre-made scripts.

You can also add static nodes using the Cluster API Provider Static.
System nodes
yaml
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: system
spec:
  nodeTemplate:
    labels:
      node-role.deckhouse.io/system: ""
    taints:
      - effect: NoExecute
        key: dedicated.deckhouse.io
        value: system
  nodeType: Static
Adding a static node to a cluster

A static node can be added manually or using the Cluster API Provider Static.

Manually
Follow the steps below to add a new static node (e.g., a VM or bare-metal server) to the cluster:

- For CloudStatic nodes in supported cloud providers, follow the steps described in the corresponding provider documentation.
- Use an existing NodeGroup custom resource or create a new one (see the example of a NodeGroup named worker). The nodeType parameter for static nodes in the NodeGroup resource must be Static or CloudStatic.
- Get the Base64-encoded script code to add and configure the node.
Here is how you can get the Base64-encoded script code to add a node to the worker NodeGroup:
shell
NODE_GROUP=worker
kubectl -n d8-cloud-instance-manager get secret manual-bootstrap-for-${NODE_GROUP} -o json | jq '.data."bootstrap.sh"' -r
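If it is more convenient, the same secret can be decoded into a ready-to-copy script file on the machine where kubectl runs (a sketch, not part of the official procedure; the output file name is arbitrary):

shell
NODE_GROUP=worker
# Decode the bootstrap script locally instead of pasting the Base64 string on the node.
kubectl -n d8-cloud-instance-manager get secret manual-bootstrap-for-${NODE_GROUP} -o json \
  | jq '.data."bootstrap.sh"' -r | base64 -d > bootstrap-${NODE_GROUP}.sh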
- Pre-configure the new node according to the specifics of your environment. For example:
  - add the necessary mount points to the /etc/fstab file (NFS, Ceph, etc.);
  - install the necessary packages;
  - configure network connectivity between the new node and the other nodes of the cluster.
- Connect to the new node over SSH and run the following command, inserting the Base64 string you got in step 3:
shell
echo <BASE64-SCRIPT-CODE> | base64 -d | bash
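After the script finishes, the node should register in the cluster within a few minutes. A quick sanity check from a machine with kubectl access (a sketch, not part of the official procedure):

shell
# The new node should appear in the node list and be counted in the worker NodeGroup.
kubectl get nodes
kubectl get nodegroup worker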
Using the Cluster API Provider Static

A brief example of adding a static node to a cluster using Cluster API Provider Static (CAPS):

- Prepare the necessary resources.
- Allocate a server (or a virtual machine), configure networking, etc. If required, install specific OS packages and add the mount points that will be needed on the node.

- Create a user (caps in the example below) with the ability to run sudo by executing the following commands on the server:
shell
useradd -m -s /bin/bash caps
usermod -aG sudo caps
- Allow the user to run sudo commands without having to enter a password. To do so, add the following line to the sudo configuration on the server (edit the /etc/sudoers file, run the sudo visudo command, or use another method):
text
caps ALL=(ALL) NOPASSWD: ALL
- Generate a pair of SSH keys with an empty passphrase on the server:
shell
ssh-keygen -t rsa -f caps-id -C "" -N ""
The public and private keys of the caps user will be stored in the caps-id.pub and caps-id files in the current directory on the server.
- Add the generated public key to the /home/caps/.ssh/authorized_keys file of the caps user by executing the following commands in the directory with the keys on the server:
shell
mkdir -p /home/caps/.ssh
cat caps-id.pub >> /home/caps/.ssh/authorized_keys
chmod 700 /home/caps/.ssh
chmod 600 /home/caps/.ssh/authorized_keys
chown -R caps:caps /home/caps/
In Astra Linux operating systems, when the Parsec mandatory integrity control module is used, configure the maximum integrity level for the caps user:

shell
pdpl-user -i 63 caps

- Create the SSHCredentials resource in the cluster.

In the directory with the user keys on the server, run the following command to get the private key encoded in Base64:

shell
base64 -w0 caps-id
On any computer with kubectl configured to manage the cluster, create an environment variable with the value of the Base64-encoded private key you got in the previous step:

shell
CAPS_PRIVATE_KEY_BASE64=<BASE64-ENCODED-PRIVATE-KEY>
Run the following command to create the SSHCredentials resource in the cluster (from this point on, use kubectl configured to manage the cluster):

shell
kubectl create -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: SSHCredentials
metadata:
  name: credentials
spec:
  user: caps
  privateSSHKey: "${CAPS_PRIVATE_KEY_BASE64}"
EOF
- Create a StaticInstance resource in the cluster, specifying the IP address of the static node server:

shell
kubectl create -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: StaticInstance
metadata:
  name: static-worker-1
  labels:
    role: worker
spec:
  # Specify the IP address of the static node server.
  address: "<SERVER-IP>"
  credentialsRef:
    kind: SSHCredentials
    name: credentials
EOF
The labelSelector field in the NodeGroup resource is immutable. To update the labelSelector, you need to create a new NodeGroup and move the static nodes into it by changing their labels.
- Create a NodeGroup resource in the cluster:
shell
kubectl create -f - <<EOF
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: Static
  staticInstances:
    count: 1
    labelSelector:
      matchLabels:
        role: worker
EOF
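Once the NodeGroup exists, CAPS connects to the server over SSH and bootstraps it as a cluster node. A way to watch the progress (a sketch, not part of the official procedure; resource names as created above):

shell
# The StaticInstance should eventually be attached to the worker NodeGroup,
# and a new Node object should appear.
kubectl get staticinstance static-worker-1
kubectl get nodegroup worker
kubectl get nodes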
Using Cluster API Provider Static for multiple node groups
This example shows how you can use filters in the StaticInstance label selector to group static nodes and use them in different NodeGroups. Here, two node groups (front and worker) are used for different tasks. Each group includes nodes with different characteristics: the front group has two servers and the worker group has one.
- Prepare the required resources (3 servers or virtual machines) and create the SSHCredentials resource in the same way as in steps 1 and 2 of the example above.
- Create two NodeGroup resources in the cluster (from this point on, use kubectl configured to manage the cluster):
The labelSelector field in the NodeGroup resource is immutable. To update the labelSelector, you need to create a new NodeGroup and move the static nodes into it by changing their labels.
shell
kubectl create -f - <<EOF
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: front
spec:
  nodeType: Static
  staticInstances:
    count: 2
    labelSelector:
      matchLabels:
        role: front
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: Static
  staticInstances:
    count: 1
    labelSelector:
      matchLabels:
        role: worker
EOF
- Create StaticInstance resources in the cluster, specifying the valid IP addresses of the servers:
shell
kubectl create -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: StaticInstance
metadata:
  name: static-front-1
  labels:
    role: front
spec:
  address: "<SERVER-IP1>"
  credentialsRef:
    kind: SSHCredentials
    name: credentials
---
apiVersion: deckhouse.io/v1alpha1
kind: StaticInstance
metadata:
  name: static-front-2
  labels:
    role: front
spec:
  address: "<SERVER-IP2>"
  credentialsRef:
    kind: SSHCredentials
    name: credentials
---
apiVersion: deckhouse.io/v1alpha1
kind: StaticInstance
metadata:
  name: static-worker-1
  labels:
    role: worker
spec:
  address: "<SERVER-IP3>"
  credentialsRef:
    kind: SSHCredentials
    name: credentials
EOF
Cluster API Provider Static: Moving Instances Between Node Groups
During the process of transferring instances between node groups, the instance will be cleaned and re-bootstrapped, and the Node object will be recreated.
This section describes the process of moving static instances between different node groups (NodeGroup) using the Cluster API Provider Static (CAPS). The process involves modifying the NodeGroup configuration and updating the labels of the corresponding StaticInstance.
Initial Configuration

Assume that there is already a NodeGroup named worker in the cluster, configured to manage one static instance with the label role: worker.
NodeGroup worker:
yaml
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: Static
  staticInstances:
    count: 1
    labelSelector:
      matchLabels:
        role: worker
StaticInstance static-worker-1:
yaml
apiVersion: deckhouse.io/v1alpha1
kind: StaticInstance
metadata:
  name: static-worker-1
  labels:
    role: worker
spec:
  address: "192.168.1.100"
  credentialsRef:
    kind: SSHCredentials
    name: credentials
Steps to Move an Instance Between Node Groups

1. Create a New NodeGroup for the Target Node Group

Create a new NodeGroup resource, for example, named front, which will manage a static instance with the label role: front.
shell
kubectl create -f - <<EOF
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: front
spec:
  nodeType: Static
  staticInstances:
    count: 1
    labelSelector:
      matchLabels:
        role: front
EOF
2. Update the Label on the StaticInstance

Change the role label of the existing StaticInstance from worker to front. This will allow the new NodeGroup front to manage this instance.
shell
kubectl label staticinstance static-worker-1 role=front --overwrite
3. Decrease the Number of Static Instances in the Original NodeGroup

Update the NodeGroup resource worker by reducing the count parameter from 1 to 0.
shell
kubectl patch nodegroup worker -p '{"spec": {"staticInstances": {"count": 0}}}' --type=merge
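After the count is reduced, CAPS should release the instance from the worker group, clean it up, and bootstrap it again under the front group. A way to observe the result (a sketch, not part of the official procedure; the StaticInstance keeps its name static-worker-1):

shell
# The instance should end up managed by the front NodeGroup,
# and the Node object is recreated during the move.
kubectl get staticinstances
kubectl get nodegroups front worker
kubectl get nodes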
An example of the NodeUser configuration
yaml
apiVersion: deckhouse.io/v1
kind: NodeUser
metadata:
  name: testuser
spec:
  uid: 1100
  sshPublicKeys:
    - "<SSH_PUBLIC_KEY>"
  passwordHash: <PASSWORD_HASH>
  isSudoer: true
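The passwordHash field expects a password hash in the format used in /etc/shadow. One way to generate such a value (a sketch; the -6 flag producing a SHA-512 crypt hash requires OpenSSL 1.1.1 or newer):

shell
# Prints a crypt(3)-style hash that can be used as the passwordHash value.
openssl passwd -6 'MySecretPassword'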
An example of the NodeGroupConfiguration configuration

Installing the cert-manager plugin for kubectl on master nodes
yaml
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: add-cert-manager-plugin.sh
spec:
  weight: 100
  bundles:
    - "*"
  nodeGroups:
    - "master"
  content: |
    if [ -x /usr/local/bin/kubectl-cert_manager ]; then
      exit 0
    fi
    curl -L https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/kubectl-cert_manager-linux-amd64.tar.gz -o - | tar -zxvf - kubectl-cert_manager
    mv kubectl-cert_manager /usr/local/bin
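After bashible applies the configuration, the plugin can be used on the master nodes like any other kubectl plugin, for example (a sketch; the subcommands shown are provided by the cert-manager plugin itself):

shell
kubectl cert-manager help
kubectl cert-manager check api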
Tuning sysctl parameters
yaml
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: sysctl-tune.sh
spec:
  weight: 100
  bundles:
    - "*"
  nodeGroups:
    - "*"
  content: |
    sysctl -w vm.max_map_count=262144
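Note that sysctl -w changes the value of the running kernel only; it is not written to /etc/sysctl.d. A quick way to check the effective value on a node (a sketch):

shell
sysctl vm.max_map_count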
Adding a root certificate to the host

The example is given for Ubuntu OS. The method of adding certificates to the store may differ depending on the OS. Change the bundles and content parameters to adapt the script to a different OS.

To use the certificate in containerd (including pulling containers from a private registry), the containerd service must be restarted after the certificate is added.
yaml
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: add-custom-ca.sh
spec:
  weight: 31
  nodeGroups:
    - '*'
  bundles:
    - 'ubuntu-lts'
  content: |-
    CERT_FILE_NAME=example_ca
    CERTS_FOLDER="/usr/local/share/ca-certificates"
    CERT_CONTENT=$(cat <<EOF
    -----BEGIN CERTIFICATE-----
    MIIDSjCCAjKgAwIBAgIRAJ4RR/WDuAym7M11JA8W7D0wDQYJKoZIhvcNAQELBQAw
    JTEjMCEGA1UEAxMabmV4dXMuNTEuMjUwLjQxLjIuc3NsaXAuaW8wHhcNMjQwODAx
    MTAzMjA4WhcNMjQxMDMwMTAzMjA4WjAlMSMwIQYDVQQDExpuZXh1cy41MS4yNTAu
    NDEuMi5zc2xpcC5pbzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL1p
    WLPr2c4SZX/i4IS59Ly1USPjRE21G4pMYewUjkSXnYv7hUkHvbNL/P9dmGBm2Jsl
    WFlRZbzCv7+5/J+9mPVL2TdTbWuAcTUyaG5GZ/1w64AmAWxqGMFx4eyD1zo9eSmN
    G2jis8VofL9dWDfUYhRzJ90qKxgK6k7tfhL0pv7IHDbqf28fCEnkvxsA98lGkq3H
    fUfvHV6Oi8pcyPZ/c8ayIf4+JOnf7oW/TgWqI7x6R1CkdzwepJ8oU7PGc0ySUWaP
    G5bH3ofBavL0bNEsyScz4TFCJ9b4aO5GFAOmgjFMMUi9qXDH72sBSrgi08Dxmimg
    Hfs198SZr3br5GTJoAkCAwEAAaN1MHMwDgYDVR0PAQH/BAQDAgWgMAwGA1UdEwEB
    /wQCMAAwUwYDVR0RBEwwSoIPbmV4dXMuc3ZjLmxvY2FsghpuZXh1cy41MS4yNTAu
    NDEuMi5zc2xpcC5pb4IbZG9ja2VyLjUxLjI1MC40MS4yLnNzbGlwLmlvMA0GCSqG
    SIb3DQEBCwUAA4IBAQBvTjTTXWeWtfaUDrcp1YW1pKgZ7lTb27f3QCxukXpbC+wL
    dcb4EP/vDf+UqCogKl6rCEA0i23Dtn85KAE9PQZFfI5hLulptdOgUhO3Udluoy36
    D4WvUoCfgPgx12FrdanQBBja+oDsT1QeOpKwQJuwjpZcGfB2YZqhO0UcJpC8kxtU
    by3uoxJoveHPRlbM2+ACPBPlHu/yH7st24sr1CodJHNt6P8ugIBAZxi3/Hq0wj4K
    aaQzdGXeFckWaxIny7F1M3cIWEXWzhAFnoTgrwlklf7N7VWHPIvlIh1EYASsVYKn
    iATq8C7qhUOGsknDh3QSpOJeJmpcBwln11/9BGRP
    -----END CERTIFICATE-----
    EOF
    )

    # bb-event - Creating subscription for event function. More information: http://www.bashbooster.net/#event
    #   ca-file-updated - Event name
    #   update-certs - The function name that the event will call
    bb-event-on "ca-file-updated" "update-certs"
    update-certs() {  # Function with commands for adding a certificate to the store
      update-ca-certificates
    }

    # bb-tmp-file - Creating temp file function. More information: http://www.bashbooster.net/#tmp
    CERT_TMP_FILE="$( bb-tmp-file )"
    echo -e "${CERT_CONTENT}" > "${CERT_TMP_FILE}"

    # bb-sync-file - File synchronization function. More information: http://www.bashbooster.net/#sync
    #   "${CERTS_FOLDER}/${CERT_FILE_NAME}.crt" - Destination file
    #   ${CERT_TMP_FILE} - Source file
    #   ca-file-updated - Name of event that will be called if the file changes.
    bb-sync-file \
      "${CERTS_FOLDER}/${CERT_FILE_NAME}.crt" \
      ${CERT_TMP_FILE} \
      ca-file-updated
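A way to check on a node that the certificate has been installed and picked up by the system store (a sketch; the file name follows CERT_FILE_NAME from the script above):

shell
# Show the subject and expiry of the installed certificate.
openssl x509 -in /usr/local/share/ca-certificates/example_ca.crt -noout -subject -enddate
# update-ca-certificates should have created a corresponding entry in /etc/ssl/certs.
ls /etc/ssl/certs | grep -i example_ca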
Adding a certificate to the OS and containerd

The example is given for Ubuntu OS. The method of adding certificates to the store may differ depending on the OS. Change the bundles parameter to adapt the script to a different OS.

The example of NodeGroupConfiguration uses functions of the script 032_configure_containerd.sh.
yaml
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: add-custom-ca-containerd.sh
spec:
  weight: 31
  nodeGroups:
    - '*'
  bundles:
    - 'ubuntu-lts'
  content: |-
    REGISTRY_URL=private.registry.example
    CERT_FILE_NAME=${REGISTRY_URL}
    CERTS_FOLDER="/usr/local/share/ca-certificates"
    CERT_CONTENT=$(cat <<EOF
    -----BEGIN CERTIFICATE-----
    MIIDSjCCAjKgAwIBAgIRAJ4RR/WDuAym7M11JA8W7D0wDQYJKoZIhvcNAQELBQAw
    JTEjMCEGA1UEAxMabmV4dXMuNTEuMjUwLjQxLjIuc3NsaXAuaW8wHhcNMjQwODAx
    MTAzMjA4WhcNMjQxMDMwMTAzMjA4WjAlMSMwIQYDVQQDExpuZXh1cy41MS4yNTAu
    NDEuMi5zc2xpcC5pbzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL1p
    WLPr2c4SZX/i4IS59Ly1USPjRE21G4pMYewUjkSXnYv7hUkHvbNL/P9dmGBm2Jsl
    WFlRZbzCv7+5/J+9mPVL2TdTbWuAcTUyaG5GZ/1w64AmAWxqGMFx4eyD1zo9eSmN
    G2jis8VofL9dWDfUYhRzJ90qKxgK6k7tfhL0pv7IHDbqf28fCEnkvxsA98lGkq3H
    fUfvHV6Oi8pcyPZ/c8ayIf4+JOnf7oW/TgWqI7x6R1CkdzwepJ8oU7PGc0ySUWaP
    G5bH3ofBavL0bNEsyScz4TFCJ9b4aO5GFAOmgjFMMUi9qXDH72sBSrgi08Dxmimg
    Hfs198SZr3br5GTJoAkCAwEAAaN1MHMwDgYDVR0PAQH/BAQDAgWgMAwGA1UdEwEB
    /wQCMAAwUwYDVR0RBEwwSoIPbmV4dXMuc3ZjLmxvY2FsghpuZXh1cy41MS4yNTAu
    NDEuMi5zc2xpcC5pb4IbZG9ja2VyLjUxLjI1MC40MS4yLnNzbGlwLmlvMA0GCSqG
    SIb3DQEBCwUAA4IBAQBvTjTTXWeWtfaUDrcp1YW1pKgZ7lTb27f3QCxukXpbC+wL
    dcb4EP/vDf+UqCogKl6rCEA0i23Dtn85KAE9PQZFfI5hLulptdOgUhO3Udluoy36
    D4WvUoCfgPgx12FrdanQBBja+oDsT1QeOpKwQJuwjpZcGfB2YZqhO0UcJpC8kxtU
    by3uoxJoveHPRlbM2+ACPBPlHu/yH7st24sr1CodJHNt6P8ugIBAZxi3/Hq0wj4K
    aaQzdGXeFckWaxIny7F1M3cIWEXWzhAFnoTgrwlklf7N7VWHPIvlIh1EYASsVYKn
    iATq8C7qhUOGsknDh3QSpOJeJmpcBwln11/9BGRP
    -----END CERTIFICATE-----
    EOF
    )
    CONFIG_CONTENT=$(cat <<EOF
    [plugins]
      [plugins."io.containerd.grpc.v1.cri".registry.configs."${REGISTRY_URL}".tls]
        ca_file = "${CERTS_FOLDER}/${CERT_FILE_NAME}.crt"
    EOF
    )

    mkdir -p /etc/containerd/conf.d

    # bb-tmp-file - Create temp file function. More information: http://www.bashbooster.net/#tmp
    CERT_TMP_FILE="$( bb-tmp-file )"
    echo -e "${CERT_CONTENT}" > "${CERT_TMP_FILE}"

    CONFIG_TMP_FILE="$( bb-tmp-file )"
    echo -e "${CONFIG_CONTENT}" > "${CONFIG_TMP_FILE}"

    # bb-event - Creating subscription for event function. More information: http://www.bashbooster.net/#event
    #   ca-file-updated - Event name
    #   update-certs - The function name that the event will call
    bb-event-on "ca-file-updated" "update-certs"
    update-certs() {  # Function with commands for adding a certificate to the store
      update-ca-certificates  # Restarting the containerd service is not required as this is done automatically in the script 032_configure_containerd.sh
    }

    # bb-sync-file - File synchronization function. More information: http://www.bashbooster.net/#sync
    #   "${CERTS_FOLDER}/${CERT_FILE_NAME}.crt" - Destination file
    #   ${CERT_TMP_FILE} - Source file
    #   ca-file-updated - Name of event that will be called if the file changes.
    bb-sync-file \
      "${CERTS_FOLDER}/${CERT_FILE_NAME}.crt" \
      ${CERT_TMP_FILE} \
      ca-file-updated
    bb-sync-file \
      "/etc/containerd/conf.d/${REGISTRY_URL}.toml" \
      ${CONFIG_TMP_FILE}
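After bashible applies the configuration on a node, the drop-in config should be in place and containerd should trust the registry. A quick check on the node (a sketch; the registry host follows REGISTRY_URL from the example, and credentials may be needed if the registry requires authentication):

shell
# The generated containerd drop-in config for the private registry.
cat /etc/containerd/conf.d/private.registry.example.toml
# Pulling an image over TLS should now succeed without certificate errors.
crictl pull private.registry.example/<image>:<tag>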